Impressions of the Path64 compiler

So I noticed today that an ebuild for the Path64 compiler hit Portage; being the ELF nerd that I am, I was interested in it on the technical level more than for the optimizations (especially since I’m never happy to hear that anything is “the most sophisticated”; claims like that just tend to bother me).

Before starting to test the compiler, I have to say that the ebuilds themselves gave me a bit of trouble: the pre-built binary one (dev-lang/ekopath) changes its installation path at each update, which breaks Makefiles and other scripts where you use the full path to the compiler (which has to be the case if you wish to target the binary toolchain rather than the custom-built one), while the custom-built one (dev-lang/path64) does not check the validity of the dynamic linker name when trying to gather it from GCC, and breaks with my customized specs for forced --as-needed, as they change the command line used to call collect2. Both problems are now reported in Bugzilla and I hope they’ll be solved soon.

What is my baseline test? Well, let’s start with something simple: Ruby-ELF has a number of tests implemented for multiple compilers, in particular GCC, SunStudio and ICC on Linux/AMD64; adding a new compiler just requires rebuilding some object files, and then adding a few lines of code to the testsuite to check them. There are always a few attributes that need to be adapted, such as the ELF entry points, but that’s beside the point right now: compilers are expected to show small variations in their behaviour, otherwise it wouldn’t make sense to have multiple compilers at all.

This test alone made me feel like I was playing with an alpha version of a compiler rather than something already targeted at production use, which is how it seems to be sold to the public. Given that the test files I use are very small and simplistic, I wasn’t expecting any differences at all, besides the most obvious ones. For instance, I already know that ICC appends a .0 suffix to all the local (unit-static) symbols, and that SunCC uses common symbols rather than BSS symbols for external TLS variables; but all in all, the three are very similar. Path64, it turns out, has more semantic differences than the others.
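To give an idea of the kind of differences I mean, here is a minimal sketch of my own (not one of the actual Ruby-ELF test files, and the names are made up) covering the two cases above: a unit-static variable, whose local symbol ICC would emit with the .0 suffix, and an uninitialised external TLS variable, which SunCC would report as a common symbol rather than a BSS one:

/* unit-static variable: emitted as a local symbol; per the note above,
   ICC would name it static_counter.0 rather than static_counter */
static int static_counter;

int bump(void) {
        return ++static_counter;
}

/* external, uninitialised TLS variable: per the note above, SunCC emits
   this as a common symbol, while the other compilers use a BSS symbol */
__thread int tls_counter;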

First issue: on a very simple, hello-world type executable, where only one symbol – printf() – is used, all the other compilers manage to link against libc.so.6 alone, which provides that symbol. Path64 instead adds a dependency on libgcc.so, or rather its own variation of it. This in turn adds a dependency on libm.so, which makes it two extra objects to be loaded for a simple executable (yes, it might sound impossible not to have the math library loaded anyway, but there are cases where that actually happens). This is extra nasty because linking to that library also means emitting the “weak symbols” used for C++ language support.
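For reference, this is the kind of test case I’m talking about (a sketch of mine, not the literal file from the testsuite); the only external symbol it references is printf(), so the only library it should need at runtime is libc.so.6:

#include <stdio.h>

int main(void) {
        printf("Hello, world!\n");
        return 0;
}

Checking the DT_NEEDED entries of the resulting executable (for instance with readelf -d, or with pax-utils’ scanelf -n) makes the extra libgcc and libm dependencies added by Path64 immediately visible.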

It’s not extremely difficult to work around, though: just add -Wl,--as-needed to the command line to make the linker skip over libgcc.so, since it is really unused. This is what GCC does in its specs files, by the way: it enables as-needed linking, lists its support library, then disables it again, so that the original semantics are restored.

There is one peculiarity to the PathScale compiler: on static executables, it sets the OS ABI field of the ELF file to the code for Linux. Neither GCC nor ICC does so (I’m not sure about SunStudio, as I was unable to produce a static executable with it last time). There’s nothing wrong with this; I have actually often wondered why compilers never did that.

Next up, the trouble starts for the compiler: one of the tests is designed to make sure that Ruby-ELF can provide the correct nm-style description code for the symbols in the object files. This is the most compiler-specific test of the whole suite: both of the notes I wrote above about ICC and SunStudio come from it. In this area, though, Path64 is not so much inconsistent as it seems to be buggy.

The first difference is that the other three compilers emit, in the relocatable object file, an absolute symbol with the name of the source translation unit. Path64 doesn’t, but that isn’t much of a problem: the symbol is probably helpful during debugging but not for real usage of the object, so it would just be a matter of rewiring the test. Where the problems arise is with the .data.rel.ro section and copy-on-write, which is one of my pet peeves.

The test source file contains combinations of static, exported, and external variables and constants; since the unit is compiled as PIC, it also contains both constants that require relocations and constants that don’t:

char external_variable[] = "foo";
static char static_variable[] tc_used = "foo";

const char external_constant[] = "foo";
static const char static_constant[] tc_used = "foo";

const char *relocated_external_variable = "foo";
const char *const relocated_external_constant = "foo";

static const char *relocated_static_variable tc_used = "foo";
static const char *const relocated_static_constant tc_used = "foo";

All three of the compilers supported up to now behave correctly here: they emit the non-relocated constants in the .rodata section, keeping only the relocated ones (i.e., the pointers) in the .data.rel.ro section, which is subject to copy-on-write. Path64, unfortunately, does not seem to get this right, which is what the test tripped over.
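For reference, this is roughly the placement I’d expect from GCC when building the unit above with -fPIC (my own summary, not the literal expectations encoded in the testsuite):

external_constant, static_constant → .rodata (the string data itself needs no relocation)
relocated_external_constant, relocated_static_constant → .data.rel.ro (the pointer value has to be filled in at load time, so the page can only be made read-only after relocation)
external_variable, static_variable, relocated_external_variable, relocated_static_variable → .data (they are writable anyway)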

Finally, for those who keep score, the optimization I noted as missed back in April is missed by Path64 as well, just like GCC and ICC. Only clang, so far, has been able to actually make the best out of that code.

I guess I’ll have some reports to file with PathScale, and I’ll keep an eye on this compiler. On the other hand, please don’t ask for it to be tested in any tinderbox for now. Before I can even consider that, it’ll need to improve a bit further… and I’ll need a more powerful machine to use for tinderboxing.
