Yesterday I released a new version of unpaper, which is now in Portage, even though its dependencies are not exactly straightforward after making it use libav. But when I packaged it, I realized that the tests were failing — even though I had been careful to run the tests all the time while making changes, to make sure not to break the algorithms which (as you may remember) I have not designed or written; I don’t really have enough math to figure out what’s going on with them. I was able to simplify a few things, but I needed Luca’s help for the most part.
It turned out that the problem only happened when building with -O2 -march=native, so I decided to restrict the tests and look into it again in the morning. Indeed, on Excelsior, using -march=native would cause the tests to fail, but on my laptop (where I had been running the tests after every single commit), they would not. Why? Furthermore, Luca was also reporting test failures on his laptop with OSX and clang, but I had not tested there to begin with.
A quick inspection of one of the failing tests’ outputs with vbindiff showed that the diffs were quite minimal: one bit off at some non-obvious interval. It smelled like a very small change. After I complained on G+, Måns pushed me in the right direction: some instruction set that differs between the two machines.
My laptop uses the core-avx-i arch, while the server uses bdver1. They have different levels of SSE4 support – AMD having their own SSE4a implementation – and different extensions. I should probably have paid more attention here and noticed that the Bulldozer has FMA4 instructions, but I did not; it’ll become important later.
I decided to start disabling extensions in alphabetical order, mostly expecting the problem to be in AMD’s implementation of some instructions pending some microcode update. When I disabled AVX, the problem went away — AVX has essentially a new encoding of instructions, so enabling AVX causes all the instructions otherwise present in SSE to be re-encoded, and is a dependency for FMA4 instructions to be usable.
The problem was reducing the code enough to be able to figure out whether it was a bug in the code, in the compiler, in the CPU, or just in the assumptions. Given that unpaper is over five thousand lines of code and comments, I needed to reduce it a lot. Luckily, there are ways around it.
The first step is to find out in which part of the code the problem appears. Luckily, unpaper is designed as a bunch of functions that run one after the other. I started disabling filters and masks, and was able to limit the problem to the deskewing code — which is where most of the problems had happened before.
But even the deskewing code is a lot — and it depends on at least some part of the general processing being run, including loading the file and converting it to an AVFrame structure. I decided to try to reduce the code to a standalone unit calling into the full deskewing code. But when I copied it over and looked at how much code was involved, between the skew detection and the actual rotation, it was still a lot. I decided to start looking with gdb to figure out which of the two halves was misbehaving.
The interface between the two halves is well-defined: the first returns the detected skew, and the latter takes the rotation to apply (the negative of what the first returned) and the image to apply it to. It’s easy. A quick look through gdb at the call to rotate() in both a working and a failing setup told me that the returned value from the first half matched perfectly; this was great, because it meant that the surface to inspect was heavily reduced.
Since I did not want to have to test all the code to load the file from disk and decode it into a RAW representation, I looked into the gdb manual and found the dump command, which allows you to dump part of the process’s memory into a file. I dumped the AVFrame::data content and decided to use that as an input. At first I decided to just compile it into the binary (you only need to use xxd -i to generate C code that declares the whole binary file as a byte array), but it turns out that GCC is not designed to efficiently compile a 17MB binary blob passed in as a byte array. I then opted to just open the raw binary file and fread() it into the AVFrame object.
My original plan involved using creduce to find the minimal set of code needed to trigger the problem, but it was tricky, especially when trying to match a complete file output to the md5. I decided to proceed with the reduction manually, starting from all the conditionals for pixel formats that were not exercised… and then I realized that I could split the code in two again. Indeed, while the main interface is only rotate(), there were two logical parts of the code in use: one translating the coordinates before and after the rotation, and the interpolation code that would read the old pixels and write the new ones. This latter part also depended on all the code to set a pixel in place starting from its components.
By writing out the calls to the interpolation function, I was able to restrict the issue to the coordinate translation code rather than the interpolation one, which made things much better: the reduced test case went down to a handful of lines:
void rotate(const float radians, AVFrame *source, AVFrame *target) {
    const int w = source->width;
    const int h = source->height;

    // create 2D rotation matrix
    const float sinval = sinf(radians);
    const float cosval = cosf(radians);
    const float midX = w / 2.0f;
    const float midY = h / 2.0f;

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            const float srcX = midX + (x - midX) * cosval + (y - midY) * sinval;
            const float srcY = midY + (y - midY) * cosval - (x - midX) * sinval;
            externalCall(srcX, srcY);
        }
    }
}
Here externalCall is a simple function to extrapolate the values; the only thing it does is print them to the standard error stream. In this version there are still references to the input and output AVFrame objects, but as you can notice there is no usage of them, which means the testcase is now self-contained and does not require any input or output file.
Much better, but still too much code to go through. The inner loop over x was simple to remove: just hardwire it to zero, and the compiler would still reproduce the problem. But if I hardwired y to zero as well, the compiler would trigger constant propagation and just pre-calculate the right value, whether or not AVX was in use.
At this point I was able to run creduce; I only needed to check that the first line of the output matched the “incorrect” version, and no input was required (the radians value was fixed). Unfortunately, it turns out that using creduce with loops is not a great idea, because it is entirely possible for it to reduce away the y++ statement or the y < h exit comparison, and then you’re in trouble. Indeed, it got stuck in infinite loops on my code multiple times.
But it did help a little to simplify the calculation. And with, again, a lot of help from Måns on making sure that the sinf()/cosf() functions would not return different values – they don’t; they are actually collapsed by the compiler into a single call to sincosf(), so you don’t have to write ugly code to leverage it! – I brought the code down to
extern void externCall(float);
extern float sinrotation();
extern float cosrotation();

static const float midX = 850.5f;
static const float midY = 1753.5f;

void main() {
    const float srcX = midX * cosrotation() - midY * sinrotation();
    externCall(srcX);
}
No external libraries, not even libm. The external functions are in a separate source file, and besides providing fixed values for sine and cosine, the externCall() function only calls printf() with the provided value. Oh, if you’re curious: the radians parameter became 0.6f, because 0, 1 and 0.5 would not trigger the behaviour, but 0.6, the truncated version of the actual parameter coming from the test file, would.
Checking the generated assembly code for the function then pointed out the problem, at least to Måns, who actually knows Intel assembly. Here follows a diff of the code above, built with -march=bdver1 and with -march=bdver1 -mno-fma4 — because it turns out the instruction causing the problem is not an AVX one but an FMA4 one; more on that after the diff.
movq -8(%rbp), %rax
xorq %fs:40, %rax
jne .L6
- vmovss -20(%rbp), %xmm2
- vmulss .LC1(%rip), %xmm0, %xmm0
- vmulss .LC0(%rip), %xmm2, %xmm1
+ vmulss .LC1(%rip), %xmm0, %xmm0
+ vmovss -20(%rbp), %xmm1
+ vfmsubss %xmm0, .LC0(%rip), %xmm1, %xmm0
leave
.cfi_remember_state
.cfi_def_cfa 7, 8
- vsubss %xmm0, %xmm1, %xmm0
jmp externCall@PLT
.L6:
.cfi_restore_state
It’s interesting that the compiler changes the order of the instructions as well as the constants — for this diff I have manually swapped .LC0 and .LC1 on one side, as they would otherwise just end up with different names due to the instruction ordering.
As you can see, the FMA4 version has one instruction less: vfmsubss replaces both one of the vmulss instructions and the vsubss instruction. vfmsubss is an FMA4 instruction that performs a Fused Multiply and Subtract operation — and midX * cosrotation() - midY * sinrotation() indeed has a multiply and a subtract!
Originally, since I was disabling the whole AVX instruction set, all the vmulss instructions would end up replaced by mulss, which is the SSE version of the same instruction. But when I realized that the missing correspondence was vfmsubss and googled for it, it was obvious that FMA4 was the culprit, not the whole of AVX.
Great, but how does that explain the failure on Luca’s laptop? He’s not so crazy as to use an AMD laptop — nobody would be! Well, it turns out that Intel also has its own Fused Multiply-Add instruction set, just with three operands rather than four, starting from Haswell CPUs, which include… Luca’s laptop. A quick check on my NUC, which also has a Haswell CPU, confirmed that the problem exists for the core-avx2 architecture as well, even though the code diff is slightly less obvious:
movq -24(%rbp), %rax
xorq %fs:40, %rax
jne .L6
- vmulss .LC1(%rip), %xmm0, %xmm0
- vmovd %ebx, %xmm2
- vmulss .LC0(%rip), %xmm2, %xmm1
+ vmulss .LC1(%rip), %xmm0, %xmm0
+ vmovd %ebx, %xmm1
+ vfmsub132ss .LC0(%rip), %xmm0, %xmm1
addq $24, %rsp
+ vmovaps %xmm1, %xmm0
popq %rbx
- vsubss %xmm0, %xmm1, %xmm0
popq %rbp
.cfi_remember_state
.cfi_def_cfa 7, 8
Once again, I swapped .LC0 and .LC1 afterwards for consistency.
The main difference here is that the fused multiply-subtract instruction is vfmsub132ss, and a vmovaps is involved as well. If I read the documentation correctly, this is because the result is stored in %xmm1 but needs to be moved to %xmm0 to be passed to the external function. I’m not enough of an expert to tell whether gcc is doing extra work here.
So why is this instruction causing problems? Well, Måns knew, and pointed out that the result is now more precise, and thus I should not work around it. Wikipedia, as linked before, also points out why this happens:
A fused multiply–add is a floating-point multiply–add operation performed in one step, with a single rounding. That is, where an unfused multiply–add would compute the product b×c, round it to N significant bits, add the result to a, and round back to N significant bits, a fused multiply–add would compute the entire sum a+b×c to its full precision before rounding the final result down to N significant bits.
Unfortunately this does mean that we can’t have bit-exactness of images across CPUs that implement fused operations. Which means my current test harness is not good enough, as it compares the MD5 of the output with the golden output from the original test. My probable next move is to use cmp to count how many bytes differ from the “golden” output (the version built without these optimisations), and if the number is low, say less than 1‰, accept it as valid. It’s probably not ideal and could lead to further variation in the output, but it might be a good start.
Optimally, as I said a long time ago, I would like to use a tool like pdiff to tell whether there are actual changes in the pixels, and to identify things like a 1-pixel translation in any direction, which would be harmless… but until I can figure something out, it’ll be an imperfect testsuite anyway.
A huge thanks to Måns for the immense help, without him I wouldn’t have figured it out so quickly.
Fun fact regarding the Intel FMA thing above that surprised me: GCC 4.9.1 does not seem to generate fused multiply-subtract instructions for either architecture for the reduced testcase. But it does produce them when building the full unpaper.
Some time ago I wrote an SSIM-based diff filter, if you want to try it.
Using Måns for that is like employing a robot to hammer a single nail: a huge waste of resources for such a simple task. And I’m sorry to remind you of rather trivial things, but it’s normal for floating-point code to produce slightly different results on different platforms (like the slightly non-IEEE-compliant floating point operations on Power, or 80-bit x87). If you want it bit-exact, then employ rounding.
The problem is that the code was tweaked to resist those particular issues, and indeed it is fine for it to become slightly different up to a point. Even fast-math works fine for unpaper, which is why this threw me off so much. Also, Måns is the one who suggested the approach of testing the file itself, so it felt slightly appropriate once he ventured into the post 😅 But most of all, I’m still wondering how I can test this… Luca, patches are welcome
Reminds me of the LAME test – they encoded an MP3 with the freshly compiled encoder and compared it against the reference. If the difference was too large, you were using GCC 2.96 😉