The geeks’ wet dreams

As a follow-up to my previous rant about FOSDEM, I thought I would talk about what I call the "geeks' wet dream": IPv6 and public IPv4.

During the whole Twitter storm I had regarding IPv6 and FOSDEM, I said out loud that I think users should not care about IPv6 to begin with, and that IPv4 is not a legacy protocol but a current one. I stand by my description here: you can live your life on the Internet without access to IPv6, but you can't do that without access to IPv4; at the very least you need a proxy or NAT64 to reach a wide swath of the network.

Having IPv6 everywhere is, for geeks, something important. But at the same time, it’s not really something consumers care about, because of what I just said. Be it for my mother, my brother-in-law or my doctor, having IPv6 access does not give them any new feature they would otherwise not have access to. So while I can also wish for IPv6 to be more readily available, I don’t really have any better excuse for it than making my personal geek life easier by allowing me to SSH to my hosts directly, or being able to access my Transmission UI from somewhere else.

Yes, there is a theoretical advantage in speed and performance, because NAT is not involved, but to be honest that is not what most people care about: a connection of a few tens of Mbit is plenty fast even when doing double-NAT, and even Ultra HD 4K HDR streaming is well served by it. You could argue that an even lower-latency network may enable more technologies, but not only does that sound like a bit of a stretch to me, it also misses the point once again.

I already linked to Todd's post, and I don't need to repeat it here, but if it's about the technologies that can be enabled, it should be the service providers who care. Which, by the way, is what is happening already: the IPv6 forerunners are indeed the big players who are effectively looking for a way to enable better technology. But at the same time, a number of other plans were executed so that better performance could be gained without gating it on the usage of a new protocol that, as we said, really brings nothing to the table.

Indeed, if you look at protocols such as QUIC or HTTP/2, you can notice how they reduce the number of ports that need to be opened, and that has a lot to do with the double-NAT scenario that is more and more common in homes. Right now I'm technically writing under three layers of NAT: the carrier-grade NAT used by my ISP for deploying DS-Lite, the NAT issued by my provider-supplied router, and the last NAT, which is set up by my own router running LEDE. I don't even have working IPv6 right now for various reasons, and you know what? The bottleneck is not the NATs but rather the WiFi.

As I said before, I'm not doing a full Todd and thinking that ignoring IPv6 is a good idea, or that we should not spend any time fixing things that break with it. I just think that we should get a sense of proportion and figure out what the relative importance of IPv6 is in this world. As I said in the other post, there are plenty of newcomers who do not need to be told to screw themselves if their systems don't support IPv6.

And honestly, the most likely scenario to test for is a dual-stack network in which some of the applications or services don't work correctly because they misunderstand the system. Like OVH did. So I would have kept the default network dual-stack, and provided a single-stack, NAT64 network as a "perk" for those who actually do care to test and improve apps and services, and who possibly also have a clue not to go and ping a years-old bug that was fixed but not fully released.

But there are more reasons why I call these dreams. A lot of the advocacy for IPv6 appears to be grounded in the idea that geeks want something, and that it has to be good for everybody even when they don't know about it: IPv6, publicly-routed IPv4, static addresses and unfiltered network access. But that is not always the case.

Indeed, if you look even just at IPv6 addressing, and in particular at how stateless addressing works, you can see that there have been at least three different choices at different times for generating addresses:

  • Modified EUI-64 was the original addressing option, and for a while the only one supported; it uses the MAC address of the network card that receives the IPv6 address, and that is quite the privacy risk, as it means you can extract a unique identifier of a given machine and identify every single request coming from it, even as it moves across different IPv6 prefixes.
  • RFC4941 privacy extensions were introduced to address that point. These are usually enabled nowadays, but they are not stable: Wikipedia correctly calls them temporary addresses, and they are usually fine for connecting to external services. They make passive detection of the same machine across networks impossible; in fact, they make it impossible to track a given machine even within the same network, because the address changes over time. This is good on one side, but it means that you do need session cookies to keep login sessions active, as you can't (and shouldn't) rely on the persistence of an IPv6 address. It also still allows active detection, at least of the presence of a given host within a network, as it does not by default disable the EUI-64 address; it just does not use it to connect to services.
  • RFC7217 adds another alternative for address selection: it provides an address that is stable within a network, making it possible to keep long-running connections alive, while at the same time ensuring that simple active probing does not give up the presence of a known machine on the network. For more details, refer to Lubomir Rintel's post, as he goes into more depth than I do on the topic.
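
The contrast between these schemes can be sketched in a few lines. What follows is a simplified illustration of the RFC7217 idea, not the exact algorithm from the RFC (which hashes more inputs, including a duplicate-address-detection counter): the interface identifier is derived from a hash over the prefix, the interface name, and a per-host secret, so it stays stable within one network while remaining unlinkable across networks.

```python
import hashlib

def stable_privacy_iid(prefix: str, ifname: str, secret: bytes) -> str:
    """RFC7217-style interface identifier: stable per network, unlinkable across networks."""
    digest = hashlib.sha256(prefix.encode() + ifname.encode() + secret).digest()[:8]
    groups = ["%02x%02x" % (digest[i], digest[i + 1]) for i in range(0, 8, 2)]
    return ":".join(groups)

secret = b"per-host secret"
home = stable_privacy_iid("2001:db8:1::/64", "eth0", secret)
work = stable_privacy_iid("2001:db8:2::/64", "eth0", secret)
assert home == stable_privacy_iid("2001:db8:1::/64", "eth0", secret)  # stable at home
assert home != work  # but a passive observer cannot link the two networks
```

The result is the best of both worlds for the average user: long-lived connections keep working on a given network, but moving to another prefix yields an address that cannot be correlated with the previous one.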

Those of you quickest on the uptake will probably notice the common thread among these problems: it all started by designing the addressing on the assumption that the most important option is a stable, predictable address. That makes perfect sense for servers, and for the geeks who want to run their own home server. But for the average person, these are all additional risks that do not provide any desired feature!

The same reasoning extends to static IPv4 addresses. If your connection always comes from the same IPv4 address, it does not matter how private your browser may be: your connections will be very easy to track across servers, passively even. And what is the remaining advantage of a static IP address? Certainly not authentication, as in 2017 you can't claim ignorance of source address spoofing.

And by the way, this is the same reason why providers started issuing dynamic IPv6 prefixes: you don't want a household (let alone a single person) to be tied to the same address forever, otherwise passive tracking becomes trivial. And yes, this is a pain for the geek in me, but it makes sense.

Static, publicly-routable IP addresses make accessing services running at home much easier, but at the same time they put you at risk. We have all been making fun of the "Internet of Things", but at the same time it appears everybody wants to be able to make their own devices accessible from the outside, somehow. Even when that is likely the most obvious way for external attackers to get into one's unprotected internal network.

There are of course ways around this that do not require such a publicly routable address, and they are usually more secure. On the other hand, they are not quite a panacea: they effectively require a central call-back server to exist and be reachable, and usually that is tied to a single company, with custom protocols. As far as I know, no open-source call-back system of this kind exists, and that still surprises me.

Conclusions? IPv6, just like static and publicly routable IP addresses, is an interesting tool that is very useful to technologists, and it is true that if you consider the original intentions behind the Internet these are pretty basic necessities. But if you think that the world, and thus the requirements and relative importance of features, has not changed, then I'm afraid you may be out of touch.

Personal Infrastructure Services Security and Reliability

I started drafting this post just before I left Ireland for ENIGMA 2017. While at the conference I realized how important it is to write about this, because it is too damn easy to forget about it altogether.

How secure and reliable are our personal infrastructure services, such as our ISPs? My educated guess is, not much.

The start of this story I have already told: my card got cloned and I had to get it replaced. Among the various services where I needed to replace it, there were providers in both Italy and Ireland: Wind and Vodafone in Italy, 3 IE in Ireland. As to why I use an Irish credit card in Italy: SEPA Direct Debit does not actually work, so my Italian services cannot debit my Irish account directly, as I would like, but they can charge (nearly) any Visa or MasterCard credit card.

Changing the card on Wind Italy was trivial, except that when (three weeks later) I went to switch back to the original Tesco card, Chrome 56 reported the site as Not Secure, because the login page is served over a non-secure connection by default (which means it can be hijacked by a MITM attack). I bookmarked the HTTPS copy (which loads non-encrypted resources, making it still unsafe) and will keep using that for the near future.

Vodafone Italy proved more interesting in many ways. The main problem was that I could not actually set up payment with the temporary card I intended to use (Ulster Bank Gold): the website would just error out on me with a backend error message. After pestering Vodafone Italy over Twitter, I found out that the problem lies in the BIN of the credit card: the Tesco Bank one is whitelisted in their backend, but the Ulster Bank one is not. But that is not all: all the pages of the "Do it yourself" area issue mixed-content requests, making them not completely secure. This, unfortunately, is not uncommon.

What was uncommon, and scary, was that while I was trying to force them into accepting the card, I got to the point where Chrome would not auto-fill the form because the page was not secure. Huh? It turned out that, unlike news outlets, Vodafone decided that their website, with payment information, invoices, and call details, does not need to be hardened against MITM, and instead allows stripping HTTPS just fine: non-secure cookies and all.

In particular, the left-side navigation link to "Payment methods" used an explicit http:// link, and the further "Edit payment method" link is a relative link, so it would bring up the form on a non-encrypted page. I brought it up on Twitter (together with the problems changing the credit card on file), and they appear to have fixed that particular problem.

But almost a month later, when I went to replace the card with the new Tesco replacement, I managed to find something else with a similar problem: when going through the "flow" to change the way I receive my bill (I wanted the PDF attached), the completion stage redirected me to an HTTP page. And from there, even though the iframes are then loaded over HTTPS, the security is lost.

Of course there are two other problems: the login pane is rendered over HTTP, which means that Chrome 56 and the latest Firefox consider it not secure, and since the downgrade from HTTPS to HTTP does not log me out, the cookies are not secure either, making it possible for an attacker to steal them without much difficulty. Particularly as the site does not seem to send any HTTP headers to harden the connection (as a scan with Mozilla Observatory confirms).
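
You don't even need a full Observatory scan to spot the missing hardening headers; a few lines of Python make a decent first pass. This is only a sketch: the list below covers the most common headers, not everything Observatory checks, and any dict-like object with a `.get()` method (such as the `headers` attribute of a `urllib` response) can be passed in.

```python
# The most common hardening headers; deliberately not an exhaustive list.
HARDENING_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_hardening_headers(headers) -> list:
    """Return the hardening headers absent from an HTTP response."""
    return [h for h in HARDENING_HEADERS if headers.get(h) is None]

# Example against a live site (requires network access):
# import urllib.request
# with urllib.request.urlopen("https://example.com/") as resp:
#     print(missing_hardening_headers(resp.headers))
```

A site that sends Strict-Transport-Security, in particular, would have prevented the HTTPS-stripping scenario above, since the browser would refuse to downgrade.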

Okay, so these two Italian providers have horrible security, but at least I have to say that they mostly worked fine while I was changing the credit cards, despite the very cryptic error that Vodafone decided to give me because my card was foreign. Let's now look at two other (related) providers: Three Ireland and Three UK. Ironically enough, between me having to replace the card and writing this post, Wind Italy completed its merger with Three Italy.

Both Three websites are actually fairly secure, as they have a SAML flow on a separate host for login, and then a separate host again for account management. Even so, they also get a bad grade on Mozilla Observatory.

What is more interesting about these two websites is their reliability, or lack thereof. For almost a month now, the Three Ireland website has not allowed me to check my connected payment cards, or change them, which means automatic top-up does not work and I have to top up manually. Whenever I try to get to the "Payment Cards" page, it starts loading and then decides to redirect me back to the homepage of the self-service area. It also appears to use a redirection method that is not compatible with some Chrome policy, as a complicated warning message shows up on the console when that happens.

Three UK is slightly better, but not by much. All of this frustrating experience happened just before I left for my trip to the USA for ENIGMA 2017. As I wrote previously, I generally use 3 UK roaming there. To use the roaming I need to enable an add-on (after topping up the prepaid account, of course), but the add-ons page kept throwing errors, and the documentation suggested calling the wrong number to enable the add-ons on the phone. They did give me the right one over Twitter, though.

Without going into more examples of failures from phone providers, the question for me is: why is it that all we hear about security and reliability comes from either big companies like Google and Facebook, or startups like Uber and Airbnb, but never from ISPs?

While ISPs stopped being the default provider of email for most people years and years ago, they are still the one conduit we need to connect to the rest of the Internet. And when they screw up, they screw up big. Why is it that they are not driving the reliability efforts?

Another obvious question is whether the open source movement could actually improve the reliability of ISPs by building more tools for management and accounting, just as it used to be useful to ISPs by building mail and news servers. Unfortunately, that would require admitting that sometimes you need to be able to restrict the "freedom" of your users, and that's not something the open source movement has ever been able to accept.

Running services on Virgin Media (Ireland)

Update 2020-09-16: lots of customers of Virgin Media Ireland appear to be finding this blog post trying to get IPv6 Prefix Delegation working. Unfortunately, the instructions are now completely out of date and provided only for a historical perspective. I haven’t lived in Ireland for three years now, so I don’t know what the current best practices are – if you do, please let the other readers know in the comments! – but the last setup I had for this involved using RA bridging, which is not the easiest thing to set up. Unfortunately I can’t even document how I did that, since movers stole my router when I moved.

Update: just a week after I wrote this down (and barely after I managed to post this), Virgin Media turned off IPv6-PD on the Hub 3.0. I’m now following up with them and currently without working IPv6 (and no public IPv4) at home, which sucks.

I have not spoken much about my network setup since I moved to Dublin, mostly because there isn't much to speak of, although admittedly there are a few interesting things that are quite different from before.

The major one is that my provider (Virgin Media Ireland, formerly known as UPC Ireland) supports native IPv6 connectivity through DS-Lite. For those who are not experts of IPv6 deployments, this means that the network has native IPv6 but loses the public IPv4 addressing: the modem/router gets instead a network-local IPv4 address (usually in the RFC1918 or RFC6598 ranges), and one or more IPv6 prefix delegations from which it provides connectivity to the local network.

This means you lose the ability to port-forward a public IPv4 address to a local host, which many P2P users would be unhappy about, as well as having to deal with one more level of NAT (and that almost always involves rate limiting by the provider on the number of ports that can be opened simultaneously). On the other hand, it gives you direct, native access to the IPv6 network without taking away (outbound) access to the legacy IPv4 network, in a much more user-friendly way than useless IPv6-only networks that rely on NAT64. But it also brings a few other challenges with it.

Myself, I actually asked to be opted into the DS-Lite trial when it was still not mandatory. The reason is that I don't really use P2P much (although a couple of times it was simpler to find a "pirate" copy of a DVD I already own, rather than trying to rip it, now that I effectively have no DVD reader), and so I have very few reasons to need a public IPv4 address. On the other hand, I do have a number of backend-only servers that are only configured over the IPv6 network, so having native access to it is preferable. At the same time, I do sometimes need to SSH into a local box, or reach Transmission or similar software over HTTP, from outside.

Anyway, back to my home network: I have a Buffalo router running OpenWRT behind the so-called Virgin Media Hub (which is also the cable modem; and no, it's not more convenient to just get a modem-mode device, because this is EuroDOCSIS, which is different from the US version, and Virgin Media does not support it.) And yes, this means that IPv4 is actually triple-NATted! The Buffalo is configured to get an IPv6 prefix delegation from the Hub, and uses that for the local network, as well as doing the IPv4 NAT.

Note: for this to work, your Hub needs to have DHCPv6 enabled, which may or may not be the case by default (mine had it enabled, but then a "factory restore" disabled it!) To enable it, go to the Hub admin page, log in, and under Advanced, DHCP, make sure that IPv6 is set to Stateful. That's it.

There are two main problems that need to be solved to provide external access to a webapp running on a local server: dynamic addressing and firewalls. These two issues are more intertwined than I would like, making it difficult to explain the solution step by step, so let me first present the problems.

On the machine that needs to serve the web app, the first problem to solve is making sure that it gets at least one stable IPv6 address that can be reached from the outside. This used to be very simple: except for IPv6 privacy extensions, the IPv6 address was stable, calculated from the prefix and the hardware (MAC) address. Unfortunately this is not the case now; RFC7217 provides stable privacy addressing, and NetworkManager implements it. In a relatively normal situation these addresses are by all means stable, and you could use them just fine. Except there is a second dynamic element at hand, at least with my provider: the prefix is not stable, neither the one assigned to the Hub, nor the one then delegated to the Buffalo router. Which means the network address the device gets is closer to random than stable.

While this first part is relatively easy to fix by using a service that lets you dynamically update a host name (and indeed this is part of my setup too), it does not solve the next problem: opening the firewall to let connections in. Firewalls are particularly important on IPv6 networks, where every device would otherwise be connected and visible to the public. Unfortunately, unless you connect directly to the Hub, there is no way to tell it to only allow traffic to a given device, no matter what the assigned prefix is. So I started by disabling the IPv6 firewall on the Hub (since no device beside the OpenWRT is connected to it directly), and rely exclusively on the OpenWRT-provided firewall. That is the first level passed. There is one more.

Since the prefix that the OpenWRT receives as delegation keeps changing, it's not possible to just state in the firewall config the IPv6 address you want to allow access to, as it'll change every time the prefix changes, even without privacy addressing enabled. But there is a solution: when using stable, non-privacy addresses, the suffix of the address is stable, and you can bet that someone has already added support to ip6tables for matching against a suffix. Unfortunately the OpenWRT UI does not let you set this up, but you can do it from the config file itself.

On the target host, which I’m assuming is using NetworkManager (because if not, you can just let it use the default address and not have to do anything), you have to set this one property:

# nmcli connection show
[take note of the UUID shown in the list]
# nmcli connection modify ${uuid} ipv6.addr-gen-mode eui64

This re-enables EUI-64 based addressing for IPv6, which is derived from the MAC address of the card. The address will change (and will require reconfiguration in OpenWRT, too) if you change the network card or its MAC address. But it does the job for me.

From the OpenWRT UI, as I said, there is no way to set the right rule. But you can configure it just fine in the firewall configuration file, /etc/config/firewall:

config rule
        option enabled '1'
        option target 'ACCEPT'
        option name 'My service'
        option family 'ipv6'
        option src 'wan'
        option dest 'lan'
        option dest_ip '::0123:45ff:fe67:89AB/::ffff:ffff:ffff:ffff'

You have to replace ::0123:45ff:fe67:89AB with the correct EUI-64 suffix, which involves splicing ff:fe into the middle of the MAC address and flipping one bit. I never remember how to calculate it, so I just copy-paste it from the machine as needed. This should give you a way to punch through all the firewalls and get remote access.
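
If, like me, you never remember the splicing and bit-flipping, it is easy to script instead of copy-pasting. A small sketch (the MAC address below is just an example):

```python
def eui64_suffix(mac: str) -> str:
    """Modified EUI-64 suffix: flip the universal/local bit and splice in ff:fe."""
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                             # flip the U/L bit in the first octet
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe between OUI and NIC halves
    groups = ["%02x%02x" % (iid[i], iid[i + 1]) for i in range(0, 8, 2)]
    return "::" + ":".join(groups)

print(eui64_suffix("01:23:45:67:89:ab"))  # → ::0323:45ff:fe67:89ab
```

The result can go straight into the dest_ip option of the firewall rule, paired with the ::ffff:ffff:ffff:ffff suffix mask.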

What remains to be solved at this point is having a stable way to contact the service. This is usually easy, as dynamic DNS hosts have existed for over twenty years by now; indeed, that is how the now-notorious Dyn (at the receiving end of one of the biggest DDoS attacks just a few days ago) built up its fame. Unfortunately, they appear to have vastly dropped the ball when it comes to dynamic DNS hosting, as I couldn't convince them (at least at the time) to let me update a host with only IPv6. This might be more a problem with the clients than with the service, but the result is the same. So, as I noted earlier, I ended up using a different service, although it took me a while to find the right way to update a v6-only host: the default curl command you find documented is actually for IPv4 hosts.
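
For what it's worth, once you have a service that accepts IPv6 updates, the update itself is trivial to script. The endpoint, parameter names, and token below are entirely hypothetical; adjust them to whatever API your dynamic DNS provider documents. The address-discovery trick relies on the fact that connect() on a UDP socket only performs a route lookup, without sending any packets.

```python
import socket
import urllib.parse

def global_ipv6() -> str:
    """Find this machine's outgoing global IPv6 address (no packets are sent)."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        s.connect(("2001:db8::1", 53))  # any global-scope destination address works
        return s.getsockname()[0]
    finally:
        s.close()

def build_update_url(hostname: str, token: str, addr: str) -> str:
    """Build the update request; endpoint and parameter names are hypothetical."""
    query = urllib.parse.urlencode(
        {"hostname": hostname, "token": token, "myip": addr})
    return "https://dyn.example.com/update?" + query

# e.g.: urllib.request.urlopen(build_update_url("home.example.net", "TOKEN", global_ipv6()))
```

The important detail is passing the address explicitly in the request, rather than letting the service infer it from the connection's source address, which is exactly what breaks for v6-only hosts with v4-only clients.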

Oh, and there was one last remaining problem with all this, at least when I started looking into fixing it up: at the time, Let's Encrypt did not support IPv6-only hosts when it came to validating domains with HTTP requests, so I spent a few weeks fighting and writing tools, trying to find a decent way to have a hostname that is both dynamic and allows for DNS-based domain control validation for ACME. I will write about that separately, since it takes us on a tangent that has nothing to do with the actual Virgin Media side of things.

What do you mean it’s not IPv6-compatible?

For those who wonder where I disappeared to: I've had a bit of a health scare, which is unfortunately common for me during summertime. This time the problem seems to be tied to anxiety, and should pass once most of the stress related to taxes and work project deadlines lets up.

Earlier this month we had World IPv6 Day, and things seem to have moved quite a bit since then. Even the Gentoo Infra team had a bit of work to do to get ready for the test day and set it up, and if you follow Apple's software updates, you probably know that they "improved IPv6 reliability" with their latest release.

I'm very interested in IPv6 myself, and I'd very much like to rely more on it; unfortunately, as it happens, my hosting provider still hasn't provided me with IPv6 addresses, nor does it seem likely to happen soon. On the other hand, I've deployed it at home, even moving away from 6to4, which was my original hope to avoid tunnels (Hurricane Electric is much more reliable, and faster). While I can't remember an IPv6 address by heart, I can set up proper, visible hostnames for my boxes, so that I can compare logs and not be forced to use NATted addresses all the time.

Now, given that IPv6 is fully deployed on my home network, if a website is set up to use IPv6, then it'll be reached over IPv6. That could be a bit of a slow-down, considering that I use a tunnel to get to the IPv6 network, but generally it seems to behave just as well, possibly because my home network is slow enough by itself. Of course, the website needs to be IPv6-compatible, and not just "IPv6 ready".

What happened is that a number of websites enabled IPv6 during World IPv6 Day, and when they saw that enough users were able to access them just fine, they kept the extra addresses on; why do the work twice to turn it off? But that kind of testing is sometimes just not good enough. While the main obstacle to IPv6 support is listening for and responding to IPv6-bound requests, there is code that deals with remote and local host addresses in most applications, including web applications. Validating addresses, applying ACLs, and all these things require knowledge of the addresses involved, and many times they expect dotted-quad IPv4 addresses.
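
To make the failure mode concrete, here is a sketch of the kind of validation that breaks: a dotted-quad check silently rejects every IPv6 client, while Python's ipaddress module (or its equivalent in any language) handles both families. The function names are mine, purely for illustration.

```python
import ipaddress
import re

def is_valid_client_naive(addr: str) -> bool:
    """The kind of check that breaks under IPv6: only dotted-quad ever passes."""
    return re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", addr) is not None

def is_valid_client(addr: str) -> bool:
    """Protocol-agnostic validation: accept any well-formed IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(addr)
        return True
    except ValueError:
        return False

print(is_valid_client_naive("2001:db8::1"))  # → False: the v6 client gets rejected
print(is_valid_client("2001:db8::1"))        # → True
```

When such a check guards a logging step or a payment handoff, the v6 user doesn't see a friendly error: they see an HTTP 500, exactly as described below.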

I'm still fighting with one such real-world scenario. Most of my domains are registered through the French provider OVH, which also started providing IPv6 access to their website after World IPv6 Day. All the management services work just fine (even though, last I checked, they didn't provide a dynamic AAAA record, which is why I had to search for complex alternative approaches that, actually, I'm still keeping up with), as do the pages detailing their products and services. But when I had to renew one of my domains, the process stopped when I was supposed to be redirected to pay (by credit card), with an internal server error (HTTP 500).

After waiting over the weekend (and then some, given I was swamped with work), I decided to call to see if it was a known issue. It wasn't; the system was supposedly working fine, and they suggested I try a different computer. After testing with Firefox on Windows (no go), I tried the infamous iPad and... it worked. That raised a quick suspicion about the connection protocol, and bingo: everything works fine over IPv4, but fails badly over v6.

This is a very plain example of how just listening for v6 connections is not enough: you need to ensure that the interactions between the pieces of software still work. In this instance, I can think of two possible reasons why it doesn't work correctly with IPv6:

  • the system logs all the payment requests separately, and to do so, it assumes the remote host address to be a dotted-quad;
  • since the page redirects to their processing bank's site, it probably notifies the bank of the incoming request to avoid cross-site forgery, and again assumes a dotted-quad address.

Whatever the problem, right now they fail to process payments properly, and when I reported it they shut me down twice: first on the phone ("oh, it's not our problem, it's the bank's!"), then by mail ("everything should be fine now!"; no, it isn't). And still they keep publishing AAAA records for their website.

If even a Europe-wide ISP fails this badly at implementing IPv6 on their website (for a piece of infrastructure as critical as payment processing!), I'm afraid we have very little hope of IPv6 getting deployed worldwide painlessly.

I learn from my mistakes: no more black-box routers

After a series of (un)fortunate happenings at home, I decided, back in November, to move the phone and ADSL subscription for my house from the previous owner (my mother) to myself as a business entity (since the main use of it is, anyway, the Internet connection I use for work).

My previous provider was Wind/Infostrada; while the service wasn't perfect, it was mostly good enough, but I ended up switching providers anyway. Why? Because their accounting system didn't allow them to transfer my phone number from the previous (personal) account to a new (business) account: they could move it to business only by changing the number, and I really wasn't keen on losing the main phone number that has served us for over twenty years over this.

A quick round of calls to other providers revealed two things. First, the ex-monopolist (Telecom Italia), with their near-mob business practices, refuses to move a personal account from a different provider to a business account with them: they suggested I move to them first, then change the subscription from personal to business, but also noted that "it would definitely cost a lot" (and if they consider the thievery of their mobile subscriptions a good offer, I'm not sure I want to know what "cost a lot" means to them). Second, half the other providers don't provide ADSL2+-modulated lines; while I cannot reach the promised 20Mbit because of the distance from the nearest exchange, ADSL2+ is at least generally more noise-resilient than standard ADSL modulation pushed to 8Mbit, the default for non-2+ lines in Italy.

Tiscali, on the other hand, had a decent offer: it cost me just as much as I paid before, they provided the wireless router without surcharge, and the line is 20Mbit ADSL2+. The phone line was to be switched to VoIP, but that wasn't much of a problem, I thought, as I already need a UPS to power the cordless phone, and I keep it running for the two laptops anyway. And they were ready to move my phone number from private to business, changing the account holder and provider all in one move. Cool, I thought.

The first problem came when Wind/Infostrada decided to cut my line seven days earlier than it was supposed to. After a few days of bickering with Tiscali people on the phone (not a toll-free call either, since I had to call from my cellphone), they finally told me that they knew of that particular business practice coming from Wind (and from another provider, FastWeb), and that they could only offer to handle the reimbursement for me. Okay, not their fault, but still not a nice thing to have seven days of downtime because of it.

The line worked fine up to yesterday: I connect at a variable speed between 4Mbit and 6Mbit, as usual (although I have a mostly-constant 900kbit upstream, which is one thing I was very much looking for), with a static public IP address. One problem, though: they don't allow me to be directly connected to the Internet. Since their router is also the VoIP client (it provides two RJ11 connectors for phones; only one is active, the other is for the two-line configuration), it has to talk with their servers, so you cannot just ask for a DMZ host. The router's configuration pages let you set up a bunch of port redirections, even multiple ports at once, but there is a long list of "reserved ports", so you have to enter your redirections around the ports used by the VoIP connection (this sounded fishy: why couldn't they assign an internal IP to the router and use that to talk with their VoIP servers?).

Yesterday, the shit hit the fan: while I was updating the PostgreSQL installation on vanguard (the vserver hosting my blog), I lost the connection entirely. Thankfully I have a backup line through both the neighbour's network and the cellphone, so I'm not entirely cut off, but it was definitely trouble even just for the moment it happened. Okay, no hurry: wait the usual 20 minutes to see if it's a temporary problem. But no, it's not that.

The router doesn’t seem to find the ADSL connection at all for a while, then it finds it, connects, tries to authenticate… sometimes it works, sometimes it doesn’t. When it works, the connection stays established for about 40 seconds, then goes away again. The funnier part? Each time, the wireless network is also restarted entirely, so it’s a software reboot of the router every time. And that’s not all: while I was still waiting for Wind to give up the line to Tiscali, I used the router simply as an access point, without an Internet connection but with wireless turned on, and it worked great. Now, if it cannot establish an Internet connection, it disables wireless entirely after a few minutes.

What do you say? A firmware update? Yes, that’s my guess as well. They can do remote updates, as they need that to change the connection parameters; most likely something went awry with the update, or something about my use of the router is no longer well supported. The result is the same for me though: I’ve lost connectivity on my business line!

Once I called tech support (again, not a toll-free call) they told me that technicians would be verifying the line, and that the ETA was between 24 hours and 7 calendar days. What?! A whole week to check a line? A business line? When the problem is most likely the router itself? Terrific, I’d say. So I asked if they could give me the SIP parameters for the VoIP line (I have a VoIP-capable phone at home because I have my own, ISP-independent office number over VoIP), and they “obviously” told me that it’s not possible, as the parameters are hardcoded into the router’s firmware and not available even to their own users.

Oh shit.

So the result is that I’m now waiting until Monday to call another service provider (Telecom Italia) and move my connection to them; even if the connection problems were solved by Monday or Tuesday, there are way too many things that don’t work out with their setup:

  • the (static, public) IP they provided me is still listed in quite a few different spam blacklists, as my provider is considered “unresponsive”; I have no idea what Telecom Italia’s standing is, but at least my chances are better with them;
  • the VoIP line was quite clear by itself, but having no voicemail, no notification of incoming calls while on a call, and no way to configure my own phone to handle the line is a bit too much of a compromise;
  • further, even though I can redirect most ports (I don’t care about the reserved ones), the redirection only works for the TCP and UDP protocols; I have no way to get IPv6 that way, as my Hurricane Electric tunnel (6in4, carried over IP protocol 41 rather than TCP or UDP) doesn’t get through;
  • and the two notes above are combined with the fact that they force me onto hardware that I have no control over, and that they instead seem to have control of (as the behaviour changed, and the only way to do that is to reflash it remotely).

So in the end, given that, according to their website, Telecom will let me use my own hardware for connectivity, and will provide me with a real phone line, I’m going to be more than happy to switch. I guess it helps that a friend of mine works for them, so I’d know who to “yell” at if something goes wrong.

Avoiding captive redirects on Libero/Wind/Infostrada

A new chapter of my router project; if you don’t care to follow it, you probably don’t want to read this at all.

Libero — or Infostrada, or Wind, whatever you want to call it today — is my provider. Like other providers in Italy, who have probably noticed their users switching to OpenDNS instead of the standard DNS servers they provide, they started applying “captive redirects” to failed lookups: when you mistype a URL, or you try to access a hostname that does not exist, they redirect you to their own servers, with their own “search engine” (nowadays just a Google frontend!).

This breaks quite a few assumptions, including the fact that .local domains won’t resolve through the standard DNS servers, which in turn makes nss-mdns almost unusable.
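To see why, consider what happens when unicast DNS is consulted before multicast DNS in the lookup order (a common nsswitch setup): the mDNS fallback only gets a chance if the upstream servers return NXDOMAIN. A toy model of that fallback, with made-up names and addresses:

```python
# Toy model of a "dns before mdns" lookup order, showing how a captive
# resolver shadows .local names.  All names and addresses are made up.

def lookup(name, unicast_dns, mdns_hosts):
    answer = unicast_dns(name)   # the ISP's servers are asked first
    if answer is not None:
        return answer            # mDNS is never consulted
    return mdns_hosts.get(name)  # only reached on NXDOMAIN

mdns_hosts = {"printer.local": "169.254.0.7"}

# An honest resolver returns NXDOMAIN (None) for unknown names,
# so the query falls through to multicast DNS:
honest = lambda name: None
print(lookup("printer.local", honest, mdns_hosts))   # → 169.254.0.7

# A captive resolver answers *every* name with its redirect host,
# so the real mDNS record is never seen:
captive = lambda name: "192.0.2.80"
print(lookup("printer.local", captive, mdns_hosts))  # → 192.0.2.80
```

With the captive redirect in place, every .local query gets a bogus answer from the ISP instead of falling through to multicast, which is the breakage described above.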

Up to a couple of months ago, Libero only provided this service on the primary nameserver, so if you swapped the primary and secondary servers you sidestepped the issue (that was the actual procedure advertised by the Libero staff, on the public page linked from within the search results). Unfortunately this had other side effects; for instance, the time needed for record updates more than doubled, which was quite annoying with dynamic DNS and with newly-created domains.

Luckily, pdnsd supports rejecting particular IPs in the returned results, to sidestep the fake records created for captive redirects, and the example configuration file itself shows how to use that with OpenDNS to avoid falling into their redirected Google host (quite evil of them, in my opinion). And in particular, at the time there was only one host used for the captive redirect, so the rule was quite simple.

Fast forward to today, and the rules have changed. First of all, it seems like Libero now uses redirects on both servers (or the secondary fails so often that it always responds from the primary), and most importantly they increased the number of IPs the redirects respond from. After counting four different IPs I decided to go with something more drastic, and ended up blacklisting the whole /24 network they belong to (which is assigned, in RIPE, to Tiscali France… which is quite strange). I’m not sure if I ended up blacklisting more than I should have; for now it blacklists just enough for me to keep browsing the net without any adverse effects that I can see, and it no longer stops me from enjoying .local domains… or Firefox’s auto-search with Google when a hostname does not exist.

For those interested, the configuration section is this one:

server {
    label = "libero";
    ip = ,;
    reject = ,;
}

The first IP (a single host) is the one that was used earlier; I keep it on the blacklist just to be on the safe side.
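The same reject test can be sanity-checked in a few lines of Python; the 192.0.2.0/24 network below is just a stand-in (the RFC 5737 documentation range), not the actual block assigned to Tiscali France:

```python
# Check whether a DNS answer falls inside a rejected /24, the same kind
# of test pdnsd's "reject" option applies.  192.0.2.0/24 is a placeholder
# network, not the real redirect block.
import ipaddress

REJECT_NETS = [ipaddress.ip_network("192.0.2.0/24")]

def is_captive_redirect(addr):
    """True if 'addr' lies inside one of the rejected networks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in REJECT_NETS)

print(is_captive_redirect("192.0.2.33"))    # → True  (fake record, drop it)
print(is_captive_redirect("198.51.100.1"))  # → False (honest answer, keep it)
```

Rejecting by network rather than by individual address is what makes the rule robust against the ISP adding more redirect hosts, at the cost of possibly catching innocent neighbours in the same /24.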

Free time? Where can I get some?

In the last two weeks I didn’t have much time to write, and in the past week I really didn’t have time. Till today.

In theory, today I should have been offline so that my ISP could switch my connection from a 2.5Mbit ADSL to a 20Mbit ADSL2. Unfortunately they made a mess of it, and instead of disconnecting the network connection while it was being switched, they disconnected my phone line.

Luckily my “office” number is a VoIP number, which I use through a Siemens S450IP cordless that actually kept working. My VoIP provider is also quite cheap when it comes to calls, even though they cost more to Italy than to the UK (“uh?”, don’t ask…), so I’m not isolated.

If all goes well, tomorrow I’ll be really offline, and then I’ll be back up with ADSL2+. In the worst case, I’ll be offline for a few months while I fight with my ISP, and I’ll connect through the UMTS phone in the meantime.

Also, it’s the second week in a row that I can spare some time for electrical chore. Last week I modified an halogen lamp (500W!) so that it now has an E27 screw, which in turn allows me to use a standard fluorescent lightbulb, removed the dimmer and replaced it with a foot-controlled switch. Quite a nice thing as we can get to use again the lamp without consuming so much. It’s actually the second lamp I cable back in less than a month, the one by my bed is a very old glass and ceramic lamp, quite nice, but had a very bad cable, I didn’t like touching the plug because it wasn’t properly insulated, so I bought some new cable, a new switch, and cabled it back…

This week, instead, I was able to put back one of the three lights I had removed on the stairs and in the corridor because the cable was incorrectly connected. I decided to split the two on the stairs (which were wired together before), so that the one upstairs is only switched on and off right outside my room, while the one in the middle is connected where it was before downstairs, and to a new, external, plastic box upstairs. This way I avoid mixing the two phases like they did before, I don’t have to run cable through the whole house, and I can still turn both of them on and off from downstairs and upstairs.

I’ll also avoid adding a switch for the third light outside my room, as it’s basically only used when coming outside either my home office or my mother’s room, which are on the far end of the corridor (which is quite small anyway).

I think this is nice because it means I need fewer cables. I was also able to run a new satellite cable into my mother’s room, so I don’t need to have it running around the whole room (I just recently found a way to pass underneath the floor to get to my room).

Unfortunately, I don’t have everything I need to complete the wiring: I need a couple more sockets and a few more switches, which I’ll hopefully get next week. Piece by piece, I’m managing to overhaul the whole electrical system.

Oh well, tomorrow I’ll probably be doing some woodwork, maybe I’ll get some photos of what I’m going to do…

I’m glad to be back

So here I am, with just a quick post to tell you I’m not dead. Unfortunately the Italian telco, Telecom Italia, took 26 days to find out that I had no connection because another company (Sielte) had detached my cable while repairing the cable of the house in front of mine…

So no summer break for me, no vacation, and no connection for a month. I’ll explain what happened in detail in the next few hours, but right now I have a month’s worth of stuff to handle :(

Amarok, Kopete, PulseAudio and other bumps will follow, too.

The long build has started

So, today I had some issues with my connection, I’m hating my ISP even more, and I’m considering moving to something else… unfortunately the only ISP I can choose while continuing to pay the same monthly amount is Tiscali, and I have mixed feelings toward them. If somebody lives in the area of Mestre, Venice, and uses Tiscali, I’d be interested to hear how the service is here.

Apart from that, today the KDE 3.5.4 prerelease process started, so I’ve generated the new updated ebuilds and I’m now building kdelibs for the update… I “just” have to build KDE on two boxes (Enterprise and Defiant, Linux and FreeBSD), two times (once with the pre-release and once with the final release). Sigh.

I hope I won’t have to do the monolithic ebuilds as well, but that isn’t certain yet.

At least this time I’m not working on short (negative) notice :) Thanks a lot to Caleb and the KDE sysadmins for adding me so quickly to the packagers’ list, so that I could see the pre-release myself :)

Maybe I’ll be able to get some sleep while KDE builds..

ISP problems are lame

The title is a quote from lfranchi in the #amarok channel ;) When I saw him say that, I thought it would make a good blog title, so here it is.

It’s now three days that my ISP is having problems with routing. I cannot access Userfriendly, I cannot connect to Azzurra through their own server, and every time I try a cvs up of gentoo-x86 module, it stops still counting what has to update. The result is that I cannot have an update cvs checkout to work with, sigh.

The situation is even more ridiculous when it comes to the ISP’s customer care, who do not even ask me which damn gateway I’m using and simply tell me to “try with another computer”… I have three computers, I tried with another router, I asked a friend to try from another connection, and they still think the problem is on my side. And I pay them; sometimes I wonder why…

So if I’m not as quick to fix issues as usual, it’s not that I don’t want to; it’s just that I have to use a stupid ISP that does not seem willing to give its users good service. If anybody from that particular ISP’s staff is reading this (which I doubt, if only because they seem more and more oriented toward dropping useful features in favour of features that bring them more money from Windows users), I will be happy to explain to them what they could do to improve their support with almost no waste of time.

In other news, I’m happy to read that Poettering ported PulseAudio to Avahi; the next ebuild will be quite a bit simpler to handle.