SITREP — Munin

You might be wondering where I disappeared to, given that I haven’t written in over a week. Knowing me, it could have been something bad, but luckily the situation is not that negative: I’m actually back in the States for a visit, since I missed this year’s H-1B lottery.

Now, while I can’t arrange a permanent presence here, I’ve started working on a few tasks, including using Munin to figure out the load on our current system. Thankfully, now that Munin is developed on Git, it’s dead easy to backport fixes, and to send new ones upstream. Indeed, the 2.0.2-r2 version that is in Gentoo is a little more stable and usable than the upstream release thanks to that. The one thing I haven’t yet been able to work on as much as I’d like is support for IPv6 nodes.

In particular, if you add a node to the Munin master using a hostname as its address, and the hostname resolves to both an A and an AAAA record, version 2 will try the IPv6 address and that will time out, because the node by default only listens on IPv4 (on 0.0.0.0 — for whatever reason, the default config has an open listener but authorisation for localhost only, which usually is not what you want). For IPv6 support on the node, you need the new Net::Server, for which an ebuild is present but not (yet) in the tree.
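In the meantime, the workaround on the master is to pin such a node to its IPv4 address explicitly. A minimal sketch of the two files involved (the hostnames and addresses here are made up):

    # /etc/munin/munin.conf on the master: force the IPv4 address
    [web01.example.com]
        address 192.0.2.10
        use_node_name yes

    # /etc/munin/munin-node.conf on the node: allow the master, not just localhost
    allow ^192\.0\.2\.1$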

Now, in this new version I wired in proper support for the Java-based plugin, which is basically just a way to connect to remote applications and monitor them over JMX. The problem with this plugin is that it’s designed to monitor Tomcat and only Tomcat, and is not really wildcarded: you can choose between a number of predefined plugin loggers, but they do not let you choose a custom value. This makes it hard for me to use, since I want to pull some custom data out of a Java app that exposes JMX by default. So I guess I’ll soon be spending time working in Java, whooooo.
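For what it’s worth, reading a custom attribute over JMX is not much code by itself; here is a minimal sketch in JRuby, where the service URL, the MBean name and the attribute are made-up placeholders for whatever the application actually exposes:

    require 'java'

    java_import 'javax.management.remote.JMXConnectorFactory'
    java_import 'javax.management.remote.JMXServiceURL'
    java_import 'javax.management.ObjectName'

    # hypothetical JMX endpoint and MBean; adjust to the target application
    url  = JMXServiceURL.new('service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi')
    conn = JMXConnectorFactory.connect(url).getMBeanServerConnection

    name = ObjectName.new('com.example:type=Stats')
    puts conn.getAttribute(name, 'RequestCount')

Something of this shape is what I would expect a properly wildcarded plugin to do under the hood.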

Still on the subject of Munin: you will probably soon see a new revision bump with an added syslog USE flag, which brings in a Log::Syslog dependency and sets up the configuration files to use syslog — I really dislike having log files scattered around, especially when metalog is there for exactly that.
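Since munin-node is built on Net::Server, pointing its logging at syslog should amount to a couple of lines in its configuration; a sketch of what I have in mind, assuming Net::Server’s Sys::Syslog support (the facility is chosen arbitrarily):

    # /etc/munin/munin-node.conf
    log_file        Sys::Syslog
    syslog_facility daemon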

I guess this is all for what concerns Munin, for now.

More on the SuperMicro iKVM

In my previous post I narrated my adventure trying to get a memtest running on Excelsior — in the end I was able to run it through a modified JNLP, and one more open port. It was suggested that I look into one particular Java application from SuperMicro that does not require the JNLP at all, and instead installs as a client, giving access to more features such as SOL and generic IPMI control…

Unfortunately it seems like the installer (InstallShield for Linux, written in Java — what the heck?) is extremely picky as to which JRE it finds and which one it wants, so I haven’t even been able to test it. But at least I worked out some details.

I basically listened in with Wireshark on what’s going on with the JNLP; the viewer uses four ports between the browser and Java: 80, 443, 5900 and 623. The first two are the HTTP/HTTPS interfaces (the JNLP downloads the JARs over HTTPS even though they are available over HTTP as well); the third is the default VNC/RFB port, while the fourth is the one I haven’t understood yet. It carries some kind of USB-over-IP protocol, and seems to send raw USB data over the wire, judging from the USBC/USBS signatures in the trace; but at the same time it doesn’t seem to be used at runtime, as I see no traffic on that port after I connect the ISO file.

The RFB protocol in use seems to be the standard one with TightVNC extensions for authentication — I guess they actually used TightVNC’s code for it. The problem with the authentication is that, for whatever reason, it’s not a plain user/password auth. Instead it uses some hash or unique identifier, which changes each time I connect to the web interface. I’m not sure whether it’s a hash or a nonce-based authentication — it’s definitely not an OTP, as I can start multiple instances of the javaws applet without having to re-download the JNLP — but it’s used both as user and as password.

Edit: actually, I had a hunch while looking into it, and I confirmed that what it uses is the same SID that is saved as a cookie after I log into the web interface. Now if only I could get the iKVM viewer to work on my system, so I could see how that one connects…

The USB-over-IP protocol is interesting as well; it doesn’t seem to use a standardised port, and Wireshark has no clue as to what’s going on there. As I said, I can see the USBC and USBS literals within the traffic, as well as the data of the ISO and some other not-well-explained things — I’ll have to work more on that, possibly with smaller files, and without the RFB session in the trace.
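Capturing just that conversation should make the next trace much easier to read; with tshark, something along these lines (interface and address made up):

    tshark -i eth0 -f 'host 192.0.2.100 and tcp port 623' -w ikvm-usb.pcap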

Does anybody else have clues about this kind of protocol? From what I can tell, the firmware for my board’s IPMI contains a copy of Linux (this is what nmap says as well), but I see no released sources for it, nor an offer for them in the zip file I downloaded. I wonder if I should just mail SuperMicro and ask them about it.

Free Software Integralism — Helping or causing trouble?

So today we have two main news items that seem to interest Free Software advocates and, to a lesser extent, developers:

  • Apple officially decided to shun Java as well as Flash on their own operating system, discontinuing development of their own JRE for the former, and declaring that the latter won’t be available as part of the base operating system install in the next release (10.7 Lion);
  • the Free Software Foundation published the criteria under which hardware can be endorsed by the FSF itself.

I’ll start with the second point, first pointing directly to the post by Wouter Verhelst of Debian fame; as he points out, the FSF is basically asking hardware manufacturers to forget that, up to now, Linux has really been a niche market, and to push only the FSF’s way. “My way or the highway”.

Now, this could make sense given the ideals embedded in the FSF, although it definitely shows a lack of marketing sense (I’ll wait and see whether any serious hardware manufacturer goes down this path; I bet… none will). What should really upset users and developers alike is the reason given: “[other badges] would give an appearance of legitimacy to those products”… what? Are you now arguing that proprietary software is not legitimate at all? Well, let’s give a bit of thought to the meaning of the word, from Wiktionary:

  1. In accordance with the law or established legal forms and requirements; lawful.
  2. Conforming to known principles, or established or accepted rules or standards; valid.
  3. Authentic, real, genuine.

If you can’t call that integralism, then I don’t even think I can argue with you on anything.

Now on to the other issue: Apple has decided that neither Flash nor Java suits their needs; in particular they are working on a “Mac App Store” that is designed, obviously, to put the spotlight (sorry for the pun) on the applications they find most compliant with their requirements. Is that such a bad thing? I’m pretty sure the Free Software Foundation is doing the same by striving for “100% Free GNU/Linux distributions”. We’re talking about different requirements for acceptance; does one side really have to be totally wrong, unethical, unlawful, rapist and homicidal? (Yes, I’m forcefully exaggerating here, bear with me.)

What I find very upsetting is that, instead of simply shedding light on the fact that the requirements of the Mac App Store might be in Apple’s interest rather than the users’, Free Software advocates seem to be siding with Adobe and Oracle… wha? Are we talking about the same Adobe whose Flash software we all so much despise, the same Oracle that’s blamed for killing OpenSolaris?

But it seems that somehow Apple is a bigger problem than Adobe and Oracle; why’s that? Well, because somehow they are eating into the numbers that, if better handled at many levels, starting from the FSF itself, would have been there for Linux (GNU or not, at this point): the hacky geeks, the tech-savvy average person who’s tired of Windows, the low-budget small office that doesn’t want to spend half its money on Windows licenses…

In-fighting, religious fights for “purity”, elitism: all these things have put Linux and FLOSS in a very bad light for many people. And rather than trying to solve that, we’re blaming the one actor that at the time looked much less palatable, but became instead the most common choice: Apple.

On the other hand, I’d like to point out that, when properly managed, Free Software can become much better than proprietaryware: Amarok (at its 1.4 apex) took iTunes head on and became something better – although right now it feels like iTunes has caught up, and Amarok hasn’t been cleaned up just yet.

JRuby in Gentoo, a bumpy road

So, as you might remember, one of the reasons for my restructuring of the Ruby eclasses was the chance of getting “proper” support for JRuby in Gentoo. By proper I mean that packages are declared to specifically support JRuby, and at the same time get tested so that they actually work with it. For this to work out properly, it also requires the testsuites of the packages to be written in such a way that they run fine under JRuby. But I already ranted about that, so I won’t go into those details again now.

Instead of writing about the Ruby-NG side of the fence, I’d like to talk about the (to me, much more unknown) Java side. Now, I know next to nothing about Java packaging. I know my way around the Java language, sort of, but not much about the packaging side of it: ant is mostly black magic to me, and the whole mess of JAR files, manifests and dependencies is a huge question mark in my view.

So the first problem is that, while JRuby has been in the tree for a while, there was until now almost no way to make sure it worked as intended. This resulted in pretty erratic behaviour (like the FFI-based libraries not working at all), with the root causes hidden in sub-projects as well (in this case, jffi was being installed without the JNI library it needs). This is also often tied to the way JRuby is developed: it’s modularised, but some of the sub-projects are only used by JRuby itself, so without JRuby working it’s almost impossible to tell whether the rest works as intended either.

There is also the problem of packaging what was not available before; for instance, I found out that JRuby now needs a Java implementation of a YAML parser, yecht, written by JRuby developer Ola Bini, which is bundled by JRuby itself but which we should have been packaging ourselves. Thanks to Fabiano “elbryan”, I got an ebuild for yecht itself… the problem is that the version shipped with JRuby also contains the classes needed for the JRuby bindings, which aren’t available when building the JAR standalone. D’uh!

Other problems are really tied to the standalone projects: as I said above, we had some trouble with JFFI and the fact that we didn’t install the JNI library (an ELF shared object) that it needs to work. Now we do, but… every time we start jruby, whether we’re going to use FFI or not, we’ve got to call into java-config (which is a bash call into a Python script) to ask it where the jffi library is, so that we can set a property… it doesn’t sound right! And the same applies to dev-java/jna (which seems to provide similar features, by the way). The problem with both is that they really should have a way to tell at build time where their native libraries are to be found, and then leave the property as a way to override it… unfortunately it seems there is no way to do that. I filed a feature request for JFFI, and we’ll see how it turns out.
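To give an idea, the invocation the launcher has to build up is roughly shaped like this; jffi.boot.library.path is the property JFFI checks for its native library, while the paths here are made up:

    java -Djffi.boot.library.path=/usr/lib64/jffi \
         -jar /usr/share/jruby/lib/jruby.jar script.rb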

Unfortunately, the problems don’t stop here. Another one concerns the OpenSSL-compatibility extension provided by dev-ruby/jruby-openssl; the idea is to provide the same interface as Ruby’s OpenSSL extension, but implemented with Java libraries instead. Since the standard JRE does not provide all the needed interfaces, this extension uses BouncyCastle to do the work, and that does not work fine here at all. Not only does the extension bundle a copy of bcprov and bcmail (even in the very git repository!), which by the way don’t seem to work right for me, maybe because they are the JDK 1.4 versions while we use at least Java 5, if not 6 altogether; more importantly, our already-present bcprov/bcmail packages fail badly. The issue here is that bcprov is loaded as a JCE provider; JCE providers need to be signed, and obviously our rebuilt JAR file is not signed. This is a huge problem that the Java team should most likely look into, and something I really cannot tackle myself.
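The difference is quick to see by checking the signature on the two JARs; a sketch, with made-up paths:

    # the JAR bundled by upstream carries a signature...
    jarsigner -verify jruby-openssl/lib/bcprov.jar
    # ...while the one we rebuild from source does not, so the JCE refuses it
    jarsigner -verify /usr/share/bcprov/lib/bcprov.jar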

On the bright side, though, tonight I’m going to commit a new JRuby ebuild with a much simplified launcher script (and no more symlinks around!). This basically cuts all the conditionals for Cygwin, as well as the dynamic discovery of the JAVA_HOME path, and replaces them with assumptions about the Gentoo setup (JAVA_HOME is filled in during the install phase) and calls into java-config. Hopefully, this should both reduce the possibility of mistakes and cut the time needed to process the script. Unfortunately, the script as it stands uses bash, which we all know is far from the fastest shell out there; porting it to pure sh is definitely not possible — although one could argue that Java’s own startup is going to take up more time than bash does anyway.
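To make it concrete, the launcher shrinks down to something with this shape; this is a sketch rather than the actual committed script, and the VM path stands in for whatever the ebuild substitutes at install time:

    #!/bin/bash
    # JAVA_HOME is baked in at install time instead of discovered at runtime
    JAVA_HOME="/etc/java-config-2/current-system-vm"
    exec "${JAVA_HOME}/bin/java" \
        -Djffi.boot.library.path="/usr/lib64/jffi" \
        -jar /usr/share/jruby/lib/jruby.jar "$@"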

Oh well, we’ll see how it goes from here.

The destiny of Ruby-Elf

After my quick last post, I decided to look more into what I could do to save the year of work I put into Ruby-Elf. After fighting a bit to understand how the new String class from Ruby 1.9 works, I got the testsuite to pass on Ruby 1.9, and cowstats to report the correct data again.

Unfortunately, this broke Ruby 1.8 support; as far as I can see, that is because IO#readpartial does not work that well on 1.8, even for on-disk files; the same happens with JRuby.

After getting Ruby 1.9 to work, the obvious next task was to make cowstats work with multiple threads. The final idea is to have a -j parameter akin to make’s, but for now I only wanted to create one thread per file to scan. In theory, given native threading, all the threads would execute at once, scheduled by the system scheduler, saturating the 8 cores, with 800% CPU time as the theoretical maximum.
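The change itself is simple enough; a minimal sketch of the idea, where file_list is the list of paths to scan and scan_file stands in for whatever cowstats does per file (both hypothetical):

    require 'thread'

    results = {}
    mutex   = Mutex.new

    # one thread per file to scan
    threads = file_list.map do |path|
      Thread.new(path) do |file|
        stats = scan_file(file)                    # hypothetical per-file scan
        mutex.synchronize { results[file] = stats }
      end
    end
    threads.each(&:join)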

Unfortunately, reality soon kicked in: the ruby19 process limited itself to 100%, which means a single core out of eight, which also means no parallel scan. A quick glance through the sources shows that while YARV (the Ruby 1.9 VM) lists three possible methods to achieve multithreading, only the second one is currently implemented. The first method is the old one, green threading, which basically means simulated threads: the code never executes in parallel, but uses an event-loop-like construct to switch execution between different “threads”. The second method makes use of a giant lock, which in this case is called the Giant VM Lock (GVL) and is called the GIL (Global Interpreter Lock) in Python: the threads are scheduled by the operating system, which allows for fairer scheduling among execution threads, but still allows just one thread per VM to execute at a time. The third method is the one I was hoping for, and allows multiple threads to execute at the same time on different cores of the same virtual machine; instead of a single lock on the whole VM, finer-grained locks around the code protect just the resources each thread needs.
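The effect of the GVL is easy to demonstrate: a CPU-bound workload takes roughly the same wall-clock time whether it is threaded or not. A quick sketch:

    require 'benchmark'

    work = proc { 5_000_000.times { Math.sqrt(42) } }

    serial = Benchmark.realtime { 2.times { work.call } }
    threaded = Benchmark.realtime do
      [Thread.new(&work), Thread.new(&work)].each(&:join)
    end

    # under the GVL the two figures come out about equal; with truly
    # parallel threads the second would be close to half the first
    puts 'serial: %.2fs threaded: %.2fs' % [serial, threaded]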

I also checked this out on JRuby, to compare; unfortunately the JRuby in Portage cannot handle the code as I changed it to work with Ruby 1.9, so I have been unable to actually benchmark a working run of cowstats with it; but I could see the CPU usage of JRuby spike at 250%, which means it is at least able to execute the threads quite independently — which proves that Ruby code can be parallelised up to that point just fine.

So what is the fuss about Ruby 1.9’s new native threading support, if multiple threads cannot be executed in parallel? Well, it still allows a single process to spawn multiple VMs and execute parallel threads on them, isolated from one another. Which happens to be useful for Ruby on Rails web applications. If you think about it, the extra complexity added to deal with binary files also addresses problems that come up in environments where multiple encodings are often used, which is: web applications. Similarly, the JRuby approach, which is very fast once the JVM is loaded, works fine for applications that start up once and then keep processing for a long time, which again fits web applications and little else.

I’m afraid that what we’re going to see in the near and not-so-near future is Ruby losing general-purpose support and focusing more and more on the web application side of the fence. Which is sad, since I really cannot think of anything else I would like to rewrite my tools in, beside, maybe, C# (if it could be compiled to ELF — I should try Vala for that). I feel like my favourite general-purpose language is slipping away, and I should stop worrying and working on it.

The end of Ruby-Elf?

One thing that has always bothered me about Ruby-Elf and its tools (cowstats, the linking-collision script and the rest) is that they don’t really make good use of the 8-way system I have as my main workstation. This is not really good, considering it means the cowstats run after each emerge on my main system is stuck on a single core, and I don’t even want to try it on the tinderbox as a whole. It also means I cannot replace scanelf with a similar script in Ruby (neither is parallelised, but the C-based scanelf is obviously faster).

To address this problem, I considered moving to JRuby as the interpreter; it already uses native threading with the 1.8 syntax, and it would have been decently good for getting cowstats multithreaded. The problem is that its startup time is considerable, which wasn’t very good to begin with. So I decided to bite the bullet and try Ruby 1.9 to see how it performed.

Beside some slight syntax changes, I immediately started having problems with Ruby 1.9 and Ruby-Elf. The testsuite is still written using Test::Unit, because RSpec didn’t suit my needs at all, and for that reason I prepared an ebuild (masked for now; remember, I hate overlays unless really necessary — I’ll go deeper into that issue in the next weeks, hopefully) for the improved test-unit extension (not using gem, as usual). It should work with Ruby 1.8 too, although I found some failures in the test framework’s own testsuite with both 1.9 and 1.8.

The next problem was with the readbytes.rb interface I was using, which is gone in 1.9 on the grounds that the interface is already implemented in the IO class. Unfortunately, the only interface that comes close is IO#readpartial, but it’s not actually the same thing and has quite a different meaning in my opinion — but again, let’s not get anal about that; it can be fixed quite easily.
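Indeed, the old readbytes behaviour (read exactly N bytes or raise) can be rebuilt on top of readpartial in a few lines; a sketch of the shim I have in mind:

    # read exactly `length` bytes from io, or raise if the data is truncated
    def read_exactly(io, length)
      data = ''.force_encoding(Encoding::BINARY)
      data << io.readpartial(length - data.bytesize) until data.bytesize == length
      data
    rescue EOFError
      raise IOError, "truncated data: wanted #{length}, got #{data.bytesize} bytes"
    end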

What became a big problem are the changes to the String class, which is now encoding-aware and expects each element in it to be a character. While this is tremendously good, since String then works more like a string than a character array (as in the underlying C language), there is no parallel ByteArray class for handling sequences of bytes, nor a binary file interface in IO. This is a very big deal, because the whole of Ruby-Elf does little more than binary file parsing.
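The workaround I can see is to keep everything in ASCII-8BIT (the binary encoding) and deal in bytes explicitly; a minimal sketch of what reading the ELF identification header looks like under 1.9:

    File.open('/bin/ls', 'rb') do |f|   # 'b' mode yields ASCII-8BIT strings
      ident = f.read(16)                # e_ident is 16 bytes
      magic, ei_class, ei_data = ident.unpack('a4CC')
      raise 'not an ELF file' unless magic == "\x7FELF"
      puts "class: #{ei_class == 1 ? 'ELF32' : 'ELF64'}"
      puts "data:  #{ei_data  == 1 ? 'little endian' : 'big endian'}"
    end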

Now, to be honest, I didn’t spend too much time running through the changelogs of Ruby 1.9 to identify all the changes; but the changes seem quite huge by themselves, and I could get simpler binary file handling even in PHP than what I’ve seen in the two hours I spent trying to beat Ruby 1.9 into submission. So I’m afraid Ruby-Elf is going to stagnate, and I’ll end up looking at a different language to implement the whole thing in.

Luca suggested (as usual) that I look at Python, but I don’t really like Python much myself; while the forced indentation may help make the code more readable, take a look at Brian’s (ferringb) code and you will never again say that Perl is the only SSL-based language (sorry Brian, but you know I find your code quite encrypted). I’m sincerely considering the idea of moving to C#, given that the whole Mono runtime adds less startup overhead than Java/JRuby would.

Or I could go for the alternative route and just try to write it in Objective C.

On the road to Free Java – a story

After my post about the long road to Free Java, I tried asking everybody who might have a clue about it, and found the root cause of the problem.

Basically, when IcedTea6 is built, it has to bootstrap itself: it first builds itself with the JDK you provide (gcj-jdk) and then rebuilds itself with the just-built icedtea6. To rebuild itself, it sets the JAVA_HOME variable during the build, hoping for ant to pick it up. But by choice of the Gentoo Java team, the JAVA_HOME variable is neither supported nor respected, so the override fails, and it tries to build itself with the previous compiler, the wrong one.

How can this work for anybody then, like Andrew said? Well, the trick is in the keywords you use. On stable systems, ant-core-1.7.0-r3 from the Java overlay is picked up, which contains a hack from Andrew (no, you cannot call it “the proper way”, since it does not fix the comments; if your idea of a hack does not encompass making a change and leaving the contradicting comments in place, then I start to worry…) to allow respecting JAVA_HOME. If you are on an unstable system, you’re going to get ant-core-1.7.1 from the main tree; that version does not have the hack, and thus will fail to build IcedTea6. I’m not sure where David Philippi has seen ant-core-1.7.1-r1 from java-overlay, since that overlay still has the old version.

So I decided that, even if it does not conform to my usual QA strictness, I wanted to try out IcedTea6. The reason is that I’m addicted to Yahoo Games, and I haven’t yet found any free software package that supports playing Canasta online, for instance… and I was tired of using the laptop for that, since I have Yamato here all the time. I then disabled my --as-needed compiler (the build system fails when it comes to properly ordering the linking lines), installed the hacked ant-core, and merged icedtea6.

This time 1.3.1-r1 finally merged and I could try it out, good! about:plugins in Firefox shows me that it’s picked up, but… once I get to the Yahoo Games page, it does not really work: the “table” window opens, but then it does not load the applet; it times out and tries to reload, does so a few times, and then Yahoo tells me to disable popup blockers.

I tried a couple more applets along the same lines, but they still failed quite badly, crashing a couple of times. Yeah, we’re on the road to a Free Java, but we’re certainly not there yet.

On the other hand, if somebody knows how to debug problems like the ones I described above, I’d be glad to provide more information to the IcedTea/OpenJDK developers so that they get resolved, and we can finally have a working nsplugin on AMD64.

The road to a Free Java is still very long

If you’ve been reading my blog for a while, you know I was really enthusiastic about Sun opening the JDK so that we could have a Free Java. This should have solved both the Java trap and the Java crap (the false sense of portability among systems), and I was really looking forward to it.

When Sun finally made the OpenJDK sources available, the Gentoo Java team was already on the field, and we were the first distribution packaging it, albeit in an experimental overlay that most users wouldn’t want to try. I also tried to do my best to improve the situation, submitting build system fixes to Sun itself to get it to build on my terms, which means system libraries, --as-needed support and so on. The hospital set me back, so I couldn’t continue my work towards something working, and in the end I gave up. Too bad.

After I came home, I discovered that the IcedTea idea seemed to work fine, and the project was really getting something done. Cool! But it wasn’t ready for prime time yet, so I decided to wait; I tried getting back on track last summer, but the hospital set me back again, so I decided not to stick around too much while being out of the loop.

But since I stopped using Konqueror (with the rest of KDE) and moved to Firefox, I’ve been missing Java functionality, since I’m on AMD64 and I don’t intend to use the 32-bit Firefox builds. So I decided to check out IcedTea6, based on OpenJDK 6 (that is, the codebase of the 1.6 series of Sun’s JDK, which should be much more stable). IcedTea6 has actually got releases out: 1.2, 1.3, and now 1.3.1. Even though Andrew seems to be optimistic, this is not working just yet.

First problem: while OpenJDK only bootstrapped with the Sun JDK 1.7 betas, IcedTea6 only bootstraps with IcedTea itself or another GNU Classpath-based compiler, like gcj or cacao. Okay, so I merged gcj-jdk and used that one; IcedTea6 fails when I force --as-needed through the compiler specs. Since the build system is far too much of a mess for me, I didn’t want to try fixing that just yet; I just wanted it to work, so I disabled --as-needed and restarted the build. With gcj-jdk it fails to build because of a problem with source/target settings. Okay, I can still use cacao.

The first problem with cacao is that gnu-classpath does not build with the nsplugin USE flag enabled, since it does not recognise xulrunner-1.9; there is a bug open for that, but no solution just yet. I disable that one and cacao builds, although IcedTea6 then fails later on with an internal compiler error. Yuppie!

And this is not even the end of the odyssey; I want to get FreeMind to work on Yamato, since I’ve tried it out and it really seems cool, but I can get it to work only on OS X for now: to build some of its dependencies on Gentoo I need a 1.4 JDK, but on AMD64 there is no Sun JDK 1.4, no Blackdown, just Kaffe… and Kaffe 1.1.7 at that (the latest is 1.1.9), which is not considered anything useful or usable (and indeed it fails to build the dependency here).

I think the road is still very long, and very tricky. And I need to get a Java minion to help me find out what the heck the problem is!

Between Mono and Java

Some time ago I expressed my feelings about C#; to sum them up, I think it’s a nice language by itself. It’s near enough to C to be understandable by most developers who have ever worked with C or C++, and it’s much saner than C++ in my opinion.

But I haven’t said much about Mono, even though I’ve been running GNOME for a while now and of course I’ve been using F-spot and, as Tante suggested, gnome-do.

I’ve been thinking about writing something about this since he also posted about Mono, but I think today is the best day of all, as there has been some interesting news in Java land.

While I do see that Mono has improved hugely since I last tried it (for Beagle), I still have some reservations about Mono/.NET when compared with Java.

The reason for this is not that I think Mono cannot improve, or that Java is technically superior; it’s more that I’m glad Sun finally addressed the Java crap. OpenJDK was a very good step forward, as it opened most of the important parts of the source code for others. And it became even more interesting in the last few days.

First, Sun accepted the FreeBSD port of their JDK into OpenJDK (which is a very good thing for the Gentoo/FreeBSD project!), and then a Darwin port was merged into OpenJDK as well. Lovely: Sun is taking the right steps to come out of the crap.

In particular, the porters project is something I would have liked to get involved in, had it not been for last year’s health disaster.

In general, I think Java now has a much better chance than C# and Mono of becoming the true high-level multiplatform language and environment. This is because the main implementation is open, rather than having one (or more) open implementations trying to track down the first and main implementation.

But I’d be seriously interested in a C# compiler that didn’t need the Mono runtime, kinda like Vala.

The long way for Java on Gentoo/FreeBSD

Even though the Gentoo/FreeBSD team has had support for Java for a while now, thanks to Timothy who prepared the first ebuilds for the Diablo JRE and JDK, actual support for Java wasn’t an easy task.

The first problem is that the Diablo software only covers version 1.5 of Sun’s Java implementation (Java 5), so the old Generation 1 ebuilds simply didn’t work; just recently the Gentoo Java team (with a huge effort) ported most of the ebuilds in the tree to Generation 2, moving away those that weren’t ported already.

Another problem is actually getting a few packages fixed, as it’s even more common for Java than for other classes of packages that the find or cp commands get their parameters in the wrong order. This says nothing about the skills of the Java team (which is a really good and skilled team); it’s just that for Java there is almost no make install command to run, so the ebuilds have to do their magic by hand.
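The classic case is find, which wants its search paths before the expression; a made-up example of the kind of one-line fix involved:

    # broken: paths must precede the expression
    find -name '*.jar' "${S}/lib"
    # fixed
    find "${S}/lib" -name '*.jar'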

Then there is the «little» problem that Sun’s Java 1.5 is quite old, and the Diablo project hasn’t released anything in quite a few months, so the only hope of getting a working modern Java on FreeBSD is OpenJDK (if Sun decides to actually apply my patches, or at least review them).

Nevertheless, even with these problems we were able to provide working Tomcat and NetBeans on Gentoo/FreeBSD, as well as Azureus, JRisk and a few other programs, with all their dependencies. Yesterday I started working on getting the new NetBeans 6.0 ebuild from the java-experimental overlay to work, but this proved to be a wicked job.

Beside the huge number of new dependencies, most of which come from the main tree but a few from overlays too, the new ebuild has a dependency on dev-java/proguard, which in turn depends on dev-java/sun-j2me-bin; as the name suggests, that is a binary package — this time not a Java binary, but an ELF Linux binary.

I think I read that J2ME was being open-sourced by Sun a while before OpenJDK, but I haven’t been able to find the sources to try getting a from-source version of it working, which would solve the problem; so for now, we’re stuck.

In the meantime, for what concerns OpenJDK, I have quite a few patches here that are not applied in the java-experimental overlay either. Beside one to fix the executable stack of the ASM source files (Sun fixes this after the build, I suppose), there’s a warning fix for GCC 4.2, a patch to link libstdc++ dynamically rather than statically, one to use the system copy of libXinerama rather than the internal one, one to link libfontconfig directly rather than via dlopen(), and in particular a big one that fixes the locale problem by setting the language from the value of LC_MESSAGES rather than LC_CTYPE; this way, when running a Java application with LC_CTYPE set to Italian and LC_MESSAGES set to English, the interface doesn’t come up in Italian.
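In other words, with the patch applied, an invocation like the following (locale values made up) gets you an English interface while keeping Italian character handling:

    LC_CTYPE=it_IT.UTF-8 LC_MESSAGES=en_US.UTF-8 java -jar application.jar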

And keeping on the Sun topic, I’m currently running a Solaris Express installation on VMware Server, waiting for VirtualBox to fix their PCnet emulation (no, it’s not Solaris failing to support the emulated hardware; it’s the emulated hardware that is not good enough).