
Why does NaN - NaN == 0.0 with the Intel C++ Compiler?

It is well-known that NaNs propagate in arithmetic, but I couldn't find any demonstrations, so I wrote a small test:

#include <limits>
#include <cstdio>

int main(int argc, char* argv[]) {
    float qNaN = std::numeric_limits<float>::quiet_NaN();

    float neg = -qNaN;

    float sub1 = 6.0f - qNaN;
    float sub2 = qNaN - 6.0f;
    float sub3 = qNaN - qNaN;

    float add1 = 6.0f + qNaN;
    float add2 = qNaN + qNaN;

    float div1 = 6.0f / qNaN;
    float div2 = qNaN / 6.0f;
    float div3 = qNaN / qNaN;

    float mul1 = 6.0f * qNaN;
    float mul2 = qNaN * qNaN;

    printf(
        "neg: %f\nsub: %f %f %f\nadd: %f %f\ndiv: %f %f %f\nmul: %f %f\n",
        neg, sub1,sub2,sub3, add1,add2, div1,div2,div3, mul1,mul2
    );

    return 0;
}

The example (running live here) produces basically what I would expect (the negative is a little weird, but it kind of makes sense):

neg: -nan
sub: nan nan nan
add: nan nan
div: nan nan nan
mul: nan nan

MSVC 2015 produces something similar. However, Intel C++ 15 produces:

neg: -nan(ind)
sub: nan nan 0.000000
add: nan nan
div: nan nan nan
mul: nan nan

Specifically, qNaN - qNaN == 0.0.

This... can't be right, right? What do the relevant standards (ISO C, ISO C++, IEEE 754) say about this, and why is there a difference in behavior between the compilers?

JavaScript and Python (NumPy) do not have this behavior: NaN - NaN is NaN. Perl and Scala also behave similarly.
Maybe you enabled unsafe math optimizations (the equivalent of -ffast-math on gcc)?
@n.m.: Not true. Annex F, which is optional but normative when supported, and necessary to have floating point behavior specified at all, essentially incorporates IEEE 754 into C.
If you want to ask about the IEEE 754 standard, mention it somewhere in the question.
I was sure this question was about JavaScript from the title.

Petr Abdulin

The default floating-point model in the Intel C++ compiler is /fp:fast, which handles NaNs unsafely (and which, for example, also makes NaN == NaN evaluate to true). Try specifying /fp:strict or /fp:precise and see if that helps.
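
A quick way to see the difference (my own sketch, not part of the answer above; the exact folding is up to the compiler, so treat the fast-math results as "may happen" rather than guaranteed):

#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    float qNaN = std::numeric_limits<float>::quiet_NaN();

    // Under /fp:fast (or GCC/Clang -ffast-math) the optimizer may assume
    // NaNs never occur, so this self-comparison can fold to "true".
    printf("qNaN == qNaN: %d\n", qNaN == qNaN);

    // The same assumption lets x - x fold straight to 0.0f.
    printf("qNaN - qNaN:  %f\n", qNaN - qNaN);

    // std::isnan is the clearer spelling of the check, but fast-math modes
    // are free to fold it away too; compile with /fp:precise or /fp:strict
    // (or without -ffast-math) if you need the IEEE answer.
    printf("isnan(qNaN):  %d\n", std::isnan(qNaN));

    return 0;
}

With /fp:precise or /fp:strict (or plain -O2 on GCC/Clang) the first line prints 0, the second prints nan, and the last prints 1, as IEEE 754 requires.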


I was just trying this myself. Indeed, specifying either precise or strict fixes the problem.
I'd like to endorse Intel's decision to default to /fp:fast: if you want something safe, you are better off avoiding NaNs turning up in the first place, and you generally shouldn't use == with floating-point numbers. Relying on the weird semantics that IEEE 754 assigns to NaN is asking for trouble.
@leftaroundabout: What do you find weird about NaN, aside from the IMHO horrible decision to have NaN!=NaN return true?
NaNs have important uses - they can detect exceptional situations without requiring tests after every calculation. Not every floating-point developer needs them but don't dismiss them.
@supercat Out of curiosity, do you agree with the decision to have NaN==NaN return false?
Community

This... can't be right, right? My question: what do the relevant standards (ISO C, ISO C++, IEEE 754) say about this?

Petr Abdulin already answered why the compiler gives a 0.0 answer.

Here is what IEEE-754:2008 says:

(6.2 Operations with NaNs) "[...] For an operation with quiet NaN inputs, other than maximum and minimum operations, if a floating-point result is to be delivered the result shall be a quiet NaN which should be one of the input NaNs."

So the only valid result for the subtraction of two quiet NaN operands is a quiet NaN; any other result is not valid.

The C Standard says:

(C11, F.9.2 Expression transformations p1) "[...] x − x → 0.0", with the attached footnote: "The expressions x − x and 0.0 are not equivalent if x is a NaN or infinite". In other words, the standard itself notes that this rewrite does not preserve the value when x may be a NaN, so an implementation conforming to Annex F cannot apply it in that case.

(here NaN denotes a quiet NaN, as per F.2.1p1: "This specification does not define the behavior of signaling NaNs. It generally uses the term NaN to denote quiet NaNs")
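
To make the connection concrete, here is a minimal sketch (my own illustration, not part of the quoted standards) of the expression that transformation targets. Under Annex F / IEEE semantics, sub must return a quiet NaN for a NaN input; a compiler allowed to assume x is never NaN can instead fold the body to a constant 0.0f:

#include <cstdio>
#include <limits>

// Candidate for the "x - x -> 0.0" rewrite; the rewrite is only
// value-preserving when x can be neither NaN nor infinite.
float sub(float x) {
    return x - x;
}

int main() {
    float qNaN = std::numeric_limits<float>::quiet_NaN();
    printf("%f\n", sub(qNaN));  // IEEE: nan; fast-math: may print 0.000000
    return 0;
}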


zwol

Since I see an answer impugning the standards compliance of Intel's compiler, and no one else has mentioned this, I will point out that both GCC and Clang have a mode in which they do something quite similar. Their default behavior is IEEE-compliant —

$ g++ -O2 test.cc && ./a.out 
neg: -nan
sub: nan nan nan
add: nan nan
div: nan nan nan
mul: nan nan

$ clang++ -O2 test.cc && ./a.out 
neg: -nan
sub: -nan nan nan
add: nan nan
div: nan nan nan
mul: nan nan

— but if you ask for speed at the expense of correctness, you get what you ask for —

$ g++ -O2 -ffast-math test.cc && ./a.out 
neg: -nan
sub: nan nan 0.000000
add: nan nan
div: nan nan 1.000000
mul: nan nan

$ clang++ -O2 -ffast-math test.cc && ./a.out 
neg: -nan
sub: -nan nan 0.000000
add: nan nan
div: nan nan nan
mul: nan nan

I think it is entirely fair to criticize ICC's choice of default, but I would not read the entire Unix wars back into that decision.
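
For what it's worth, -ffast-math is an umbrella flag; as far as I can tell, the sub-option that licenses these foldings is -ffinite-math-only, which lets the optimizer assume NaNs and infinities never appear. Passing just that flag should be enough to reproduce the changed sub line, though I have not verified this across compiler versions:

$ g++ -O2 -ffinite-math-only test.cc && ./a.out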


Notice that with -ffast-math, gcc no longer conforms to ISO 9899:2011 with respect to floating-point arithmetic.
@FUZxxl Yes, the point is that both compilers have a noncompliant floating-point mode, it's just that icc defaults to that mode and gcc doesn't.
Just to throw fuel on the fire, I really like Intel's choice to enable fast math by default. The whole point of using floats is to get high throughput.