In the land of dynamic DNS

In the previous post I talked about my home network and services, and I pointed out how I ended up writing some code while trying to work around the lack of support for IPv6-only hosts in Let’s Encrypt. I did post about this on Google+, but that’s not really a reliable place to keep it for future reference, so it’s time to write it down here.

In that post I referred to the fact that, up until around April this year, Let’s Encrypt did not support IPv6-only hosts, and since I only have DS-Lite connectivity at home, that is exactly what I needed. To be precise, it’s the HTTP-based challenge (http-01) that was not supported over IPv6, but the ACME protocol (which Let’s Encrypt designed and implements) supports another validation method, dns-01, at least as a draft.

Since this is a DNS-based challenge, no IPv4 or IPv6 addresses are involved at all. Unfortunately, the original client, now called certbot, does not support this type of validation, among other things because it’s bloody complicated. On the bright side, lego (an alternative client written in Go) does support it, and ships code for a number of DNS providers.
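For reference, the dns-01 value itself is simple to compute from the ACME spec: the CA hands you a token, you join it with your account key’s JWK thumbprint, and you publish the base64url-encoded SHA-256 digest of that string as a TXT record on `_acme-challenge.<domain>`. A minimal sketch (the token and thumbprint below are made up):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # ACME uses base64url encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    # The key authorization is "<token>.<JWK thumbprint of the account key>";
    # the TXT record at _acme-challenge.<domain> holds the base64url
    # encoding of its SHA-256 digest.
    key_authorization = "{}.{}".format(token, account_key_thumbprint)
    return b64url(hashlib.sha256(key_authorization.encode("ascii")).digest())

# Made-up token and thumbprint, only to show the shape of the result:
record = dns01_txt_value("some-token-from-the-ca", "some-account-thumbprint")
print(record)  # a 43-character base64url string
```

lego does all of this for you, of course; the point is just that the whole exchange happens over DNS, which is why neither IPv4 nor IPv6 reachability of the host matters.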

Unfortunately, Afraid.org, the dynamic host provider I started using for my home network, is not supported. The main reason is that its API does not allow creating TXT or CNAME records, which are needed for dns-01 validation. I did contact the Afraid.org owner hoping an undocumented API might exist, but I got no answer back.

Gandi, on the other hand, is supported, and is my main DNS provider, so I started looking in that direction. Unlike my previous provider (OVH), Gandi does not appear to provide any support for delegating to a dynamic host system. So instead I looked for options around it, and found that Gandi provides an API (which, after all, is what lego itself uses).

I ended up writing two DNS updating tools, one for Gandi and one for Afraid.org, if nothing else because they are very similar. The Afraid.org one is what I started with; at the time I thought they didn’t have an endpoint to update IPv6 hosts, since the default endpoint was v4-only. I got clearance to publish the code, and it is now on GitHub. It can work as a framework for any other dynamic host provider, if you feel like writing one: it provides some basic helper methods to figure out the current IPv4 or IPv6 address assigned to an interface, which makes no sense behind NAT, but does make sense with DS-Lite.
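The address detection can be approximated with the usual connected-UDP-socket trick; this is not the actual code from the repository, just a sketch of the idea (the probe addresses here are Google’s public resolvers, but any routable address would do):

```python
import socket
from typing import Optional

def outbound_address(family: int = socket.AF_INET6) -> Optional[str]:
    # Connecting a UDP socket sends no packets, but makes the kernel
    # choose the source address it would route from; behind DS-Lite the
    # IPv6 one is the publishable address, while the IPv4 one is only
    # the CGN-side private address.
    probe = "2001:4860:4860::8888" if family == socket.AF_INET6 else ""
    try:
        with socket.socket(family, socket.SOCK_DGRAM) as sock:
            sock.connect((probe, 53))
            return sock.getsockname()[0]
    except OSError:
        return None  # no route for this address family
```

A dynamic DNS updater then just compares this value with what the hostname currently resolves to, and pushes an update when they differ.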

But once I got it all up and running, I realized something that should have been obvious from the start: Gandi’s API is not great for this use case at all. With the Afraid.org and OVH protocols, there is a per-host, usually randomly generated token: you deploy it to the host you want to keep up to date, and that’s it; nothing else can be done with that token. It’s a one-way update of the host.

Gandi’s API is designed to be an all-around provisioning API, so it allows executing any operation whatsoever with your token. That includes registering or dropping domains, and dropping or reconfiguring a whole zone. It’s a super-user access token, and it sidesteps the two-factor authentication that you can set up on your Gandi account. If you lose track of this API key, it’s game over.

So at the end of the day, I decided not to use this at all. But since I had already written the tools, I thought it would be a good idea to leave them to the world. It was also a nice way for me to start writing some public Go code.

It’s 2014, why are domains so hard?

Last year I moved to Ireland, and as of last December I closed the VAT ID that made me a self-employed consultant. The reason it took me a while to close the ID was that I still had a few contracts going on, which I needed to let expire first. One of the side effects of closing the ID is that I had (and have) to deal with all the accounts where said ID was used, including the OVH account where almost all of my domains, and one of my servers, were registered — notably, xine-project.org has been registered forever at a different provider, Register4less of UserFriendly fame.

For most accounts, removing a VAT ID is easy: you update your billing information and tell them that you no longer have a VAT ID. In some rare cases, you have to forgo the account, so you just change the email address to a different one, and register a new account. In the case of OVH, things are more interesting. Especially so if you want to actually be honest.

In this case, I wanted to move my domains out of my account, in Italy, with a VAT ID, to an account in Ireland, without said ID — one of the reasons is that the address used for registration is visible in most whois output, and while I don’t care for my address being public, the address is now my mother’s only, and it bothered me having it visible there. That was a mistake (from one point of view, and a godsend from another).

The first problem is that you cannot change the country, or remove the VAT ID, of an existing OVH account, which meant I needed to transfer the domains and the server to a different account. There was (and possibly is, I don’t want to go looking for it) some documentation on how to transfer resources across contacts (i.e. OVH accounts), but when I tried to follow it, the website gave me just a big “Error” page; unfortunately “Error” was the whole content of the page.

Contacted for help, OVH Italia suggested using their MoM management software to handle the transition. I tried, and the results were just as bad, but at least it errored out with an explanation: the transfer couldn’t cross countries. I then contacted OVH Ireland as well as OVH Italia again. The former, after a long discussion in which they suggested doing exactly what I had already done, “assured me” that the procedure works correctly, only for OVH Italia to apologize a couple of days later: a month earlier they had indeed changed the procedures because of contractual differences between countries. OVH Ireland suggested the “external transfer”, going through the standard transfer procedure for domains, but it turns out their system fails when you try that, as the domain is already in their database; so they suggest the “internal transfer” instead (which, as I said, does not work).

Since a couple of my domains were going to expire soon, this past weekend I decided to start moving them out of OVH, given that the two branches couldn’t decide how to handle my case. The result is that I started loading the domains onto Gandi; among the reasons, the VideoLAN people and one of my colleagues know them pretty well and warmly suggested them. This proved trickier than expected, but it also proved one thing: not all French companies are equal!

I started by moving my .eu, .es and .info domains (I own, among others, automake.info, which redirects to my Autotools Mythbuster; the reason is that if you type the name of an info manual on G+, it actually brings you there! I was planning to make them point to copies of the respective info manuals, but I’ve not studied the GFDL enough yet to know whether I can). The .info domains are still in limbo right now, as OVH has a five-day timeout before you can transfer out, and the .es domains were transferred almost immediately (the Spanish NIC is extremely efficient in that regard: they basically just send you an email to confirm you want to change registrar, and if you accept, that’s it!), but the .eu domains were a bit of a pain.

Turns out that EURid wants a full address to assign the domain to, including a post code; unfortunately Ireland has no post codes yet, and even the usual ways to represent my area of Dublin (4, 0004, D4, etc.) failed; even the “classical” 123456 that many Irish use failed too. After complaining on Twitter, a very dedicated Gandi employee, Emerick, checked it out and found that the valid value, according to EURid (but not to Gandi’s own frontend app, ironically), is “Dublin 4”. He fixed that for me on their backend, and the .eu registration went through; this blog is now proudly served by Gandi, which makes it completely IPv6 compatible.

But the trial was not over yet. One of the reasons I wanted to move to Gandi now was that Register4Less was requiring me to sort-of transfer the domain from Tucows (through which they resold it before) to their new dedicated registrar, to keep the privacy options on. The reason was that Tucows started charging more, and they would have had to charge me the extra if I wanted to keep it. On the other hand, they offered to transfer it, extend the expiration another year, and keep the privacy option on. I did not like the option because I had just renewed the domain the previous November for a bunch of years, so I did not want to extend it even further; and if I had to, I would at that point try to reduce the number of services I need to keep my eyes on. Besides, unlike Register4Less and OVH, Gandi supports auto-renewal of domains, which is a good thing.

Unfortunately, ICANN (or whoever else manages the .org registry) decided that “Dublin 4” is not a valid postal code, so I had to drop it from the account again to be able to transfer xine-project.org. Fun, isn’t it? Interestingly, both the .org and .it administrators handle the lack of a post code properly: the former as N/A and the latter as the direct translation N.D. Gandi has been warned; they will probably handle it sometime soon. In the meantime it seems the .eu domains are not available to Irish residents, unless they fake an address somewhere else.

And the cherries on top, now that I’m migrating everything to Gandi? Their frontend webapp is much better at handling multiple identically-configured domains, to begin with. And as they have already shown, their support is terrific, especially when compared to the excuse-making of their French competitor. But most importantly, have you read the story of @N from a couple of weeks ago? How an attacker got hold of a GoDaddy account and caused trouble for the owner of the @N Twitter account? Well, it turns out the Gandi people are much more security conscious than GoDaddy (okay, that was easy): not only do they provide an option to disable the “reset password by email” feature, they also provide 2FA through HOTP, which means it’s compatible with Google Authenticator (as well as a bunch of other software).
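For the curious, HOTP (RFC 4226) is a tiny algorithm, which is part of why so many authenticator apps support it; a sketch in Python, using the RFC’s own test secret:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian 8-byte counter, then
    # "dynamic truncation" picks 4 bytes based on the low nibble of
    # the last MAC byte, masked to 31 bits and reduced mod 10^digits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First value for the RFC 4226 appendix test secret:
print(hotp(b"12345678901234567890", 0))  # → 755224
```

TOTP, the time-based variant most apps actually use day to day, is the same computation with the counter derived from the current Unix time.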

End of story? I’m perfectly happy to finally have a good provider for my domains, one that is safe and secure and that I can trust. And I owe Emerick a drink next time I stop by Paris! Thanks Gandi, thanks Emerick!

A story of bad suggestions

You might have noticed that my blog was down for a little while today. That happened because I was trying to get Google Webmaster Tools to work again, as I’ve been spending some more time lately cleaning up my web presence — I’ll soon post more news about Autotools Mythbuster and the direction it’s going to take.

How did that cause my blog to go down, though? Well, the new default for GWT’s validation of site ownership is to use DNS TXT records, instead of the old header on the homepage or a file at the root. Unfortunately, it doesn’t work as well.

First, it actually tries to be smart by checking which DNS servers the domain is delegated to, which meant it showed me instructions on how to log in to my OVH account (great). On the other hand, it told me to create the new TXT record without setting a subdomain; too bad that it will not accept a validation on flameeyes.eu for blog.flameeyes.eu.

The other problem is that the moment I added a TXT record for blog.flameeyes.eu, resolution of the host no longer followed the CNAME, which made the host unreachable altogether. I’ve not checked the DNS documentation to learn whether this is a bug in OVH or whether the GWT suggestion is completely broken. In either case it was a bad suggestion.
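My guess, for what it’s worth, is the long-standing DNS rule that a CNAME must be the only record at its name (RFC 1034); a hypothetical zone fragment shows the conflict GWT’s suggestion sets up:

```
; Illustrative fragment, not the actual flameeyes.eu zone.
; A CNAME must be the only record at its name, so these two
; records cannot legally coexist:
blog  IN  CNAME  some-hosting-target.example.net.
blog  IN  TXT    "google-site-verification=..."
```

Under that reading, the verification token would have to live on a different name, or the blog host would need plain A/AAAA records instead of a CNAME.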

Also, if you cannot reach posts and always end up on the homepage, please flush your cache: I made a mess while fixing the redirects meant to repair links all over the Internet. It should all be fine now, and links should all work, even those that were mangled beforehand due to non-ASCII-compatible URLs.

Finally, I’ve updated the few posts where a YouTube video was linked; they now use the iframe-based embed strategy, which means they are visible via HTML5, without Adobe Flash. No issues should arise from that.

A story of a Registry, an advertiser, and an unused domain

This is a post that relates to one of my day jobs, and has nothing to do with Free Software, yet it is technical. If you’re not interested in posts unrelated to Free Software, feel free to skip this altogether. If you still care about technical matters, read on!

Do you remember that customer of mine who almost never pays me on time, for whom I work basically free of charge, and who still gives me huge headaches from time to time with requests that make little to no sense? Okay, you probably remember by now, or you simply don’t care.

Two years or so ago, that customer called me up one morning asking me to register a couple of second-level domains in as many TLDs as I thought made sense, so that they could set up a new web front-end to the business. Said project still hasn’t delivered, mostly because the original estimate I sent the customer was considered unreasonably expensive and said to take “too much time”; as if they hadn’t spent about the same already, and my nine-month estimate sounds positively short when you compare it with the over-two-year gestation the project is lingering in. At any rate, this is of no importance to what I want to focus on here.

Since that day, one set of domains was left to expire, as it wasn’t as catchy as it sounded at first, and only the second set was kept registered. I have been paid for the registration, of course, while the domains have been left parked for the time being (no, they decided not to forward them to the main domain of the business, where the address, email and phone number are).

The other day I was trying to find a way to recover a bit more money out of this customer and, incidentally, this blog, so I decided to register for AdSense again, this time with my VAT ID, as I have to declare any profits coming from that venue. One of AdSense’s nice features allows you to “monetize” (gosh, how much I hate that word!) parked domains. Since these are by all means parked domains, I gave it a chance.

Four domains are parked this way: .net, .com, .eu and .it. All are registered with OVH – which incidentally has fixed its IPv6 troubles – and up to now all pointed to a blackhole redirect. How do you assign a parked domain to Google’s AdSense service? Well, it’s actually easy: you just point the domain’s nameservers to the four provided by Google, and you’re set. At least on three out of the four TLDs I had to deal with.

After setting it up on Friday, as of Monday Google still wouldn’t verify the .it domain; OVH was showing the task alternately as “processing” or “completed” depending on whether I looked at the NS settings (they knew they had a request to change them) or at the task’s status page (as will be apparent in a moment, it was indeed closed). So I called them; one reason I like OVH is that I can get somebody on the phone who will at least listen to me.

What happens? Well, it looks like Registro.it – formerly NIC-IT, the Italian Registration Authority – is once again quite strict in what it accepts. It was just two years ago that they stopped requiring you to fax an agreement to be able to register a .it domain, and as of last year you still had to do the same when transferring a domain. Luckily they stopped requiring both, and this year I was able to transfer a domain in a matter of a week or so. But what about this time?

Well, it turns out that the NIC validates the new nameservers when you want to change them, to make sure the new servers list the domain and configure it properly. This is common procedure, and both the OVH staff and I were aware of it. What we weren’t aware of (the OVH staffers had no clue either; they had to call NIC-IT to find out what the trouble was, as they hadn’t been properly informed) is the method they use to check: a dig ANY query.

Okay, it’s nothing surprising actually; dig ANY is the standard way to check for a domain’s zone at a name server… but it turns out that ns1.googleghs.com and its siblings – the nameservers you need to point a domain to for use with AdSense – do not answer such queries, making them invalid in the eyes of NIC-IT. Ain’t that lovely? The OVH staffer I spoke with said they’ll inform NIC-IT about the situation, but they don’t count on them changing their ways, and… I actually have to say that I can’t blame them: I don’t see why Google’s nameservers should ignore ANY queries.

For my part, I told them that I would try to open a support request with Google to see if they intend to rectify the situation. The issue is that, for all the time I spent trying, I can’t seem to find a place to open a ticket that the Google AdSense staff would actually read. I tried tweeting at their account, but it doesn’t seem to have achieved much.

Luckily there is an alternative when you can’t simply point the domain at Google’s DNS, and that is to create a custom zone, which is what I’ve done now. It’s not much of a problem, but it’s still bothersome that one of Google’s most prominent services is incompatible with a first-world Registration Authority such as NIC-IT.

Oh well.

On IPv6 and Dynamic Hosts (and PowerDNS Express in particular)

So yesterday I wrote about my tests on bypassing a hostile NAT, which left me with a publicly-accessible dynamic IPv6 address. This helps me a whole lot, but it’s almost unusable for more than a couple of days, as I cannot know the IP address (unless, that is, I mail it to myself each time it changes). Having left aside the idea of using Mobile IPv6 to get a stable address for the box because of its complexity, I came back to my original, possibly easy, option: dynamic hostnames.

Dynamic hostnames are a very old technology to work around the issue of dynamic IPs (which were much more common years ago), so it seemed obvious to me that this solution was the easiest to implement: I get a stable address (in the form of a hostname) for the router, then I can reach the remaining hosts through an SSH jump (or some kind of limited-scope IPv6 routing).

Unfortunately the first obvious choice (DynDNS) is a failure: it does not support IPv6 for dynamic hostnames as far as I can see, which makes it useless for my aim here. The second option for me was the OVH system for DynHOSTs: it’s a service I pay for, so I was expecting it to have the needed features; unfortunately they also don’t allow IPv6 for their hosts. There used to be a service that supported this kind of feature, called DNS6, but that seems to be dead now. Hurricane Electric is planning on supporting dynamic hosts at some point, but right now there is no support for it.

Then I started looking at some more complex solutions, both paid-for and custom. One of my first ideas was UltraDNS, but no public pricing seems to be available, and that is not something I’m very fond of; plus it’s based in the United States, which gives me trouble with taxes and payments; a European solution would have been better for my needs.

After discarding this solution as well, I started down the most complex road for this kind of situation, or at least that’s what I thought at the time: writing my own dynamic DNS system. Luckily, a job I took earlier this year gave me a bit of expertise with PowerDNS (the software), so I only had to write a CGI application in Ruby to modify a PostgreSQL database on this very server, and serve the zone from there. I started looking into the pieces of the puzzle for what concerned the CGI script, and found a number of other problems, mostly related to SSL and certificates (but that’s, again, something for another post), and then I looked at PowerDNS itself, starting from the latest version available on the site.

When I looked at the homepage (which is the one I linked earlier), I noted two more interesting things: the first is that the developers offer a paid-for custom DNS service, the second that the company is in the Netherlands, so it’s within the European Union, which is good for my taxes. Also, the price for a single domain (which is what I’d be needing at first) is low enough to be acceptable ($2 per domain, $0.50 for the transaction, less than €1.90 total). Besides the usual user-friendly interface to set DNS records, their service has the one thing that was important to me: API access based on the SOAP protocol (with a WSDL description), which allows updating records via scripts.

While on paper the service is great – and cheap too! – there are way too many shortcomings in their approach:

  • Authentication, shmauthentication: the SOAP interface is only available over plain HTTP, and there is no proper authentication; it’s all left to the single API key that is provided per user. This means that if you deploy this on a non-trusted network (and, well, do you really trust the rest of the Internet?) and somebody is able to get your API key, not only can they mess with the records you’re updating at a given time, they can mess with any of your zones and domains.
  • API keys and WSDL: besides being the one and only authentication mechanism, the API key is not passed as part of the SOAP request; instead it is passed as a query parameter of the POST request, as part of the URL. Unfortunately, the WSDL returned by the interface is not fixed up to include the API key. As their documentation only speaks about PHP and VB.NET, I assume those two libraries ignore the endpoint URL provided in the WSDL response (as soap:address); Ruby, on the other hand, respects what the response tells it to use, and since that endpoint does not include the API key, every request comes back with an “Invalid User” message.
  • No advanced editing: while the PowerDNS Express user interface for assisted editing of zones is one of the best I’ve seen among various DNS services, including the ability to automatically add Google Mail records (which would have been nice given that I actually use them on a number of domains already), there is no “advanced” mode that would allow you to either edit the zone manually, or at least add any kind of record in a free-form way; the SOAP interface doesn’t allow you to add all kinds of records either, which is a bad thing. It gets even worse when you add the fact that SSHFP is not among the record types you can use, so you cannot set the fingerprint of an SSH server; that was actually a half-decent idea to provide some extra safety that the lack of real authentication didn’t give me.
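The WSDL problem at least has a client-side workaround: fix up the endpoint URL before issuing requests, re-attaching the key yourself. A sketch in Python (the `apikey` parameter name and the URLs here are assumptions for illustration, not PowerDNS Express documentation):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def with_api_key(endpoint: str, api_key: str) -> str:
    # Re-attach the API key as a query parameter, since the endpoint
    # advertised in the WSDL's soap:address comes back without it.
    scheme, netloc, path, query, fragment = urlsplit(endpoint)
    params = dict(parse_qsl(query))
    params["apikey"] = api_key  # parameter name is an assumption
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))

print(with_api_key("http://www.example.com/soap.asmx", "SECRET"))
# → http://www.example.com/soap.asmx?apikey=SECRET
```

With a Ruby SOAP client the equivalent move would be to override the endpoint it extracted from the WSDL with this corrected URL.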

I’m seriously disappointed by the bad quality of the service PowerDNS Express provides, even though their software (pdns) seems pretty good and their basic interface is one of the best, as I said. As it is, it’s definitely not an option.

Luckily, Robin (robbat2) saved me from writing and deploying my own custom solution, so I’ll be working on deploying his suggestion in the next few days, taking most definitely less time than writing all the code myself would have. Recent BIND versions support Dynamic DNS updates via the nsupdate tool; this means that if I set up a standard BIND instance on my server (which might be harder to configure, but requires less than half the dependencies of pdns, most importantly not needing Boost, and does not require a database behind it), I can simply use that (which provides a strong authentication system and a complete authorization system) to provide dynamic host names, exactly as I intended originally.
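As a sketch of what the update side could look like (all the names, the key file and the address here are made up), nsupdate reads a few simple commands, authenticated with a TSIG key passed via `-k`, which BIND’s allow-update policy then authorizes:

```
server ns.example.net
zone dyn.example.net
update delete home.dyn.example.net AAAA
update add home.dyn.example.net 300 AAAA 2001:db8::1
send
```

Feed that to `nsupdate -k /etc/dyn/tsig.key` from the host behind DS-Lite, with the AAAA value set to the currently assigned address, and the hostname stays current.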

For now, I’ll link the two reference pages that Robin gave me; if I encounter problems implementing it that way, I might post again about it:

Amazon EC2: stable hostnames, for free, really

In my current line of work, dealing with Amazon’s “Elastic” Compute Cloud, I had to cope with the problem of getting a stable hostname for a “master” instance that the “slaves” could refer to. I was appalled that Amazon does not provide a way to assign a “role” or “class name” to instances, so as to have them recognized from within their network.

When multiple pieces of documentation showing how to achieve that were pointed out to me, I realised the problem wasn’t that Amazon had failed to think it through, but rather that they are most likely trying to get more money out of users with this need. The common suggestion for identifying a particular instance between executions is to use their “Elastic IP” feature (which – like many other things they call “Elastic” – is not really elastic at all).

The idea behind this solution is that by assigning a public IPv4 address (and keep in mind this is a precious resource nowadays) to the instance, you can pick up its reverse DNS (for instance, will reverse-resolve to ec2-184-73-246-248.compute-1.amazonaws.com), and use that in your intra-EC2 requests: when using EC2’s own nameservers, ec2-184-73-246-248.compute-1.amazonaws.com will resolve to the (dynamic) internal IP address of the instance the Elastic IP was assigned to.

All fine and dandy, and this can be set up cleanly with Rudy as well. The problem here is… the IP is not free. Oh yes, the documentation will try to tell you that you won’t have to pay for it… as long as you use it. But it’s definitely not free, especially during the development phase of a project. As long as the address is assigned to an instance, you won’t pay for the Elastic IP service (but you will pay for the running instance); if you don’t have an assigned instance, you’ll end up paying $2.4 per day for the IP address to be reserved.

This is definitely one of the cases where Amazon’s limitation seem totally artificial to get you more money, as much money as they can get from you. D’oh!

What is my solution to this problem? It’s very simple: net-misc/ddclient. The IP address you’re looking for is the one assigned, inside the instance, to the eth0 interface. Just tell ddclient to fetch that address and use it with a classic DynDNS (or similar) hostname service. It doesn’t matter that you’re feeding them a local IP address (in a private range); you won’t be able to use the same hostname for external and internal dealings, but it’ll definitely cost you less.
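As a sketch, a ddclient configuration along these lines should do (the server, credentials and hostname are placeholders); the key setting is `use=if`, which reads the address straight off the interface instead of asking an external discovery service:

```
# read the address currently assigned to eth0 inside the instance
use=if, if=eth0
protocol=dyndns2
server=members.dyndns.org
login=my-login
password=my-password
ec2-master.example.com
```

Run ddclient as a daemon (or from cron) inside the instance, and the hostname follows the internal address across instance restarts.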

Web pages and DNS requests

Since Google DNS was announced, I noted that people repeated the Google message:

Complex pages often require multiple DNS lookups before they start loading, so your computer may be performing hundreds of lookups a day.

Is this really true? Well, let’s try to break this up into more verifiable key phrases, starting from the end of the message.

“Your computer may be performing hundreds of lookups a day” is definitely true in general. On the other hand I’d like to note a few things here. Most decent operating systems, and that includes Linux, FreeBSD, OSX and I think Windows as well, tend to cache the DNS results. In Linux and FreeBSD the piece of software that does that is called nscd (Name Service Cache Daemon). If it’s not running, then multiple requests do tend to be quite taxing (I found that out by myself when I forgot to start it in the tinderbox after moving to containers), so set it to auto-start if you didn’t already.

Now it is true that the type of caching nscd does is quite limited, so it’s definitely not perfect. But there is probably another problem there, and that’s well summarised by Paul Vixie’s rant about DNS abuse, where he explains how DNS has been abused, among other things, to provide policy results rather than absolute truths. Expressing policies also means that the responses cannot really be cached for long periods of time, and that’s a problem for all caching systems: a low time-to-live (TTL) means you have to make at least one full resolution every time it expires.

And this is probably the main reason why DNS caches, whoever provides them, tend to get more interesting: while they have to make a full recursive resolution every time the TTL expires, it’s likely that somebody else has requested the name already, so common entries (like Google’s hosts) tend to always be fresh. It still doesn’t solve one problem though: you need multi-level caches, just like a CPU has, since making a (cached) request to an external server is always going to take more time than getting the answer out of a local cache. To solve the problem myself, I usually have nscd running locally as well as pdnsd running on my router to cache requests (yes, that’s why most home routers don’t hand your ISP’s DNS to the individual boxes but try to be the main DNS themselves, sometimes failing badly because of bad caches).
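The caching behaviour described here boils down to a few lines; this is not how nscd or pdnsd are actually implemented, just the general idea that an answer is reusable only until its TTL runs out:

```python
import time

class TTLCache:
    """A TTL-respecting cache in miniature: entries are served only
    while the record's time-to-live has not expired."""

    def __init__(self):
        self._entries = {}

    def put(self, name, answer, ttl):
        # remember when this answer stops being valid
        self._entries[name] = (answer, time.monotonic() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None  # never resolved: a full lookup is needed
        answer, expires = entry
        if time.monotonic() >= expires:
            del self._entries[name]
            return None  # TTL ran out: a full lookup is needed again
        return answer
```

With policy-driven answers forcing TTLs down to seconds, the `expires` check fails almost every time, which is exactly why low TTLs defeat every layer of this scheme.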

But what I really have a problem with is the other half of the message: “Complex pages often require multiple DNS lookups before they start loading”. This is also true of course, but has anybody thought about why that is? Well, I sincerely think this is also Google’s own fault!

Indeed even my own blog pages have resources coming from a number of different hosts: Google (for the custom search), PayPal (for the donation button which I restored a couple of weeks ago), Amazon (for the rare associate links) and Creative Commons for the logo at the end of the page; single post pages also have Gravatar images. While it’s probably impossible to just avoid using external resources in a web page, it doesn’t help that a lot of services, including Amazon’s associates website, Google’s Analytics and AdSense have external hostnames: a web page with Google custom search, AdSense and Analytics will have to make three resolutions, rather than one.
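The lookup count is easy to estimate, since it is just the number of distinct hostnames referenced by a page’s resources. A small sketch, fed a made-up page in the spirit of the example above:

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit

class HostCollector(HTMLParser):
    """Collect the hostnames a page pulls resources from; each
    distinct host is one DNS lookup before the page finishes loading."""

    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlsplit(value).hostname
                if host:  # relative URLs have no hostname: no new lookup
                    self.hosts.add(host)

# A made-up page in the spirit of this blog's own resources:
page = """
<img src="https://www.gravatar.com/avatar/xyz">
<script src="https://www.google-analytics.com/ga.js"></script>
<a href="https://www.paypal.com/donate"><img src="/local.png"></a>
"""
collector = HostCollector()
collector.feed(page)
print(sorted(collector.hosts))
# → ['www.google-analytics.com', 'www.gravatar.com', 'www.paypal.com']
```

Three external hosts, three resolutions; serving the same widgets from one hostname would cut that to one.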

And this made me think about some self-cleansing as well. My personal reason for having my website and my blog on separate hostnames is mostly that, at the time I first set them up, Typo was hard to set up properly in a sub-directory. I wonder if I should now consolidate the two onto just the flameeyes.eu domain, with a single /blog/ subdirectory for Typo.

But I also noticed other people creating different subdomains for “static” content, which end up requiring further resolutions for no good reason at all. I think most people don’t realise that (I also hadn’t thought much about it before), and I wonder if Google shouldn’t rather start teaching people the right way to do things, instead of coming up with some strangely boring, badly hyped, privacy-concerning project like their DNS service.

Avoiding captive redirects on Libero/Wind/Infostrada

New chapter of my router project; if you don’t care to follow it, you probably don’t want to read this at all.

Libero – or Infostrada, or Wind, however the heck they want to call it today – is my provider. Like other providers in Italy, who have probably noticed their users switching to OpenDNS instead of the standard DNS they provide, they started applying “captive redirects” to failed lookups: when you mistype a URL or try to access a hostname that does not exist, they redirect you to their own servers, with their own “search engine” (nowadays just a Google frontend!).

This breaks quite a few assumptions, including the fact that .local domains won’t resolve in the standard DNS servers, which in turn makes nss-mdns almost unusable.

Up to a couple of months ago, Libero only applied this to the primary nameserver, and if you swapped the primary and secondary servers around, you sidestepped the issue (that was the procedure actually advertised by the Libero staff, on the public page linked from within the search results). Unfortunately this had other side effects; for instance, the time needed for record updates more than doubled, which was quite annoying with dynamic DNS and with newly-created domains.

Luckily, pdnsd supports rejecting particular IPs returned in responses, to filter out the fake records created for captive redirects, and the example configuration file itself shows how to use that with OpenDNS to avoid falling into their redirected Google host (quite evil of them, in my opinion). And in particular, at the time, there was only one host used for the captive redirect, so the rule was quite simple.

Fast forward to today: the rules have changed. First of all, it seems Libero now uses redirects on both servers (or the secondary fails so often that the primary always ends up answering), and most importantly they increased the number of IPs the redirects respond from. After counting four different IPs, I decided to go with something more drastic, and ended up blacklisting the whole /24 network they belong to (which is assigned, in RIPE, to Tiscali France… which is quite strange). I’m not sure if I ended up blacklisting more than I should have; for now it blacklists just enough for me to keep browsing the net without adverse effects that I can see, and it no longer stops me from enjoying .local domains… or Firefox’s auto-search with Google when a hostname does not exist.

For those interested, the configuration section is this one:

server {
    label = "libero";
    ip =,;
    reject =,;
}

The first IP (a single host) is the one that was used earlier; I keep it on the blacklist just to be on the safe side.