
Signed/unsigned comparisons

I'm trying to understand why the following code doesn't issue a warning at the indicated place.

//from limits.h
#define UINT_MAX 0xffffffff /* maximum unsigned int value */
#define INT_MAX  2147483647 /* maximum (signed) int value */
            /* = 0x7fffffff */

int a = INT_MAX;
//_int64 a = INT_MAX; // makes all warnings go away
unsigned int b = UINT_MAX;
bool c = false;

if(a < b) // warning C4018: '<' : signed/unsigned mismatch
    c = true;
if(a > b) // warning C4018: '>' : signed/unsigned mismatch
    c = true;
if(a <= b) // warning C4018: '<=' : signed/unsigned mismatch
    c = true;
if(a >= b) // warning C4018: '>=' : signed/unsigned mismatch
    c = true;
if(a == b) // no warning <--- warning expected here
    c = true;
if(((unsigned int)a) == b) // no warning (as expected)
    c = true;
if(a == ((int)b)) // no warning (as expected)
    c = true;

I thought it was to do with background promotion, but the last two seem to say otherwise.

To my mind, the first == comparison is just as much a signed/unsigned mismatch as the others?

gcc 4.4.2 prints the warning when invoked with '-Wall'.
This is speculation, but maybe it's optimizing out all the comparisons since it knows the answers at compile time.
Ah! Re. bobah's comment: I turned on all warnings and the missing warning now appears. In my opinion it should have appeared at the same warning level setting as the other comparisons.
@bobah: I really hate that gcc 4.4.2 prints that warning (with no way to tell it to only print it for inequality), since all ways of silencing it make things worse. Default promotion reliably converts -1 or ~0 to the highest possible value of any unsigned type, but if you silence the warning by casting yourself, you have to know the exact type. So if you change the type (say, extend it to unsigned long long), your comparisons with bare -1 will still work (but give the warning), while your comparisons with -1u or (unsigned)-1 will both fail miserably.
I don't know why you need a warning, or why compilers just can't make it work: -1 is negative, so it is less than any unsigned number. Simples.

Erik

When comparing signed with unsigned, the compiler converts the signed value to unsigned. For equality, this doesn't matter, -1 == (unsigned) -1. For other comparisons it matters, e.g. the following is true: -1 > 2U.

EDIT: References:

5/9: (Expressions)

Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:

If either operand is of type long double, the other shall be converted to long double.

Otherwise, if either operand is double, the other shall be converted to double.

Otherwise, if either operand is float, the other shall be converted to float.

Otherwise, the integral promotions (4.5) shall be performed on both operands.54)

Then, if either operand is unsigned long the other shall be converted to unsigned long.

Otherwise, if one operand is a long int and the other unsigned int, then if a long int can represent all the values of an unsigned int, the unsigned int shall be converted to a long int; otherwise both operands shall be converted to unsigned long int.

Otherwise, if either operand is long, the other shall be converted to long.

Otherwise, if either operand is unsigned, the other shall be converted to unsigned.

4.7/2: (Integral conversions)

If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two’s complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). ]

EDIT2: MSVC warning levels

What is warned about on the different warning levels of MSVC is, of course, choices made by the developers. As I see it, their choices in relation to signed/unsigned equality vs greater/less comparisons make sense, this is entirely subjective of course:

-1 == -1 means the same as -1 == (unsigned) -1 - I find that an intuitive result.

-1 < 2 does not mean the same as -1 < (unsigned) 2 - This is less intuitive at first glance, and IMO deserves an "earlier" warning.


How can you convert signed to unsigned? What is the unsigned version of the signed value -1? (signed -1 = 1111, whereas unsigned 15 = 1111, bitwise they may be equal, but they are not logically equal.) I understand that if you force this conversion it will work, but why would the compiler do that? It's illogical. Moreover, as I commented above, when I turned up the warnings the missing == warning appeared, which seems to back up what I say?
As 4.7/2 says, signed to unsigned means no change in bit pattern for two's complement. As to why the compiler does this, it's required by the C++ standard. I believe the reasoning behind VS's warnings at different levels is the chance of an expression being unintended - and I'd agree with them that equality comparison of signed/unsigned is "less likely" to be a problem than inequality comparisons. This is subjective of course - these are choices made by the VC compiler developers.
Ok, I think I almost get it. The way I read that, is the compiler is (conceptually) doing: 'if(((unsigned _int64)0x7fffffff) == ((unsigned _int64)0xffffffff))', because _int64 is the smallest type that can represent both 0x7fffffff and 0xffffffff in unsigned terms?
Actually comparing to (unsigned)-1 or -1u is often worse than comparing to -1. That's because (unsigned __int64)-1 == -1, but (unsigned __int64)-1 != (unsigned)-1. So if the compiler gives a warning, you attempt to silence it by cast to unsigned or using -1u and if the value actually happens to be 64-bit or you happen to change it to one later, you'll break your code! And remember that size_t is unsigned, 64-bit on 64-bit platforms only and using -1 for invalid value is very common with it.
Maybe compilers shouldn't do that then. When comparing signed and unsigned, just check whether the signed value is negative; if so, it is guaranteed to be less than the unsigned one regardless.
Nawaz

Why signed/unsigned warnings are important, and why programmers must pay heed to them, is demonstrated by the following example.

Guess the output of this code?

#include <iostream>

int main() {
        int i = -1;
        unsigned int j = 1;
        if ( i < j ) 
            std::cout << " i is less than j";
        else
            std::cout << " i is greater than j";

        return 0;
}

Output:

i is greater than j

Surprised? Online Demo : http://www.ideone.com/5iCxY

Bottom line: in a comparison, if one operand is unsigned, then the other operand is implicitly converted to unsigned if its type is signed!


He's right! It's dumb, but he's right. This is a major gotcha that I never came across before. Why doesn't it convert the unsigned to a (larger) signed value?! If you do "if ( i < ((int)j) )" it works as you would expect. Although "if ( i < ((_int64)j) )" would make more sense (assuming, which you can't, that _int64 is twice the size of int).
@Peter "Why doesn't it convert the unsigned to a (larger) signed value?" The answer is simple: there may not be a larger signed value. On a 32 bit machine, in the days before long long, both int and long were 32 bits, and there wasn't anything bigger. When comparing signed and unsigned, the earliest C++ compilers converted both to signed. For reasons I forget, the C standards committee changed this. Your best solution is to avoid unsigned as much as possible.
@JamesKanze: I suspect it also has something to do with the fact that signed overflow is undefined behaviour while unsigned overflow is not, and that therefore conversion of a negative signed value to unsigned is defined while conversion of a large unsigned value to a negative signed value is not.
@James The compiler could always generate assembly that would implement the more intuitive semantics of this comparison without casting to some larger type. In this particular example, it would suffice to first check whether i < 0: then i is smaller than j for sure. If i is not less than zero, then i can be safely converted to unsigned to compare it with j. Sure, comparisons between signed and unsigned would be slower, but their result would be more correct in some sense.
@Nawaz Your bottom line conclusion is incorrect, unfortunately: if the signed type can contain the unsigned type, the unsigned will be converted to the signed type and not the opposite. Just replace "int i" with "long i" in your example. It will print "i is less than j". Erik's answer explained that tricky rule exhaustively.
Yochai Timmer

The == operator just does a bitwise comparison (for example, by subtracting the operands and checking whether the result is 0).

The smaller/bigger than comparisons rely much more on the sign of the number.

4 bit Example:

1111 = 15 ? or -1 ?

so if you have 1111 < 0001 ... it's ambiguous...

but if you have 1111 == 1111 ... It's the same thing although you didn't mean it to be.


I understand this, but it does not answer my question. As you point out, 1111 != 1111 if the signs don't match. The compiler knows there is a mismatch from the types, so why doesn't it warn about it? (My point being that my code could contain many such mismatches that I'm not being warned of.)
It's the way it's designed. The equality test checks similarity. And it is similar. I agree with you that it shouldn't be this way. You could do a macro or something that overloads x==y to be !((x<y)||(x>y))
Hossein

In a system that represents values using two's complement (most modern processors) they are equal even in their binary form. This may be why the compiler doesn't complain about a == b.

And to me it's strange that the compiler doesn't warn you on a == ((int)b). I think it should give you an integer truncation warning or something.


The philosophy of C/C++ is that the compiler trusts the developer to know what he or she is doing when explicitly converting between types. Thus, no warning (at least by default - I believe there are compilers which generate warnings for this if the warning level is set higher than the default).
Tim Rae

The line of code in question does not generate a C4018 warning because Microsoft have used a different warning number (i.e. C4389) to handle that case, and C4389 is not enabled by default (i.e. at level 3).

From the Microsoft docs for C4389:

// C4389.cpp
// compile with: /W4
#pragma warning(default: 4389)

int main()
{
   int a = 9;
   unsigned int b = 10;
   if (a == b)   // C4389
      return 0;
   else
      return 0;
}

The other answers have explained quite well why Microsoft might have decided to make a special case out of the equality operator, but I find those answers are not super helpful without mentioning C4389 or how to enable it in Visual Studio.

I should also mention that if you are going to enable C4389, you might also consider enabling C4388. Unfortunately there is no official documentation for C4388 but it seems to pop up in expressions like the following:

int a = 9;
unsigned int b = 10;
bool equal = (a == b); // C4388

hdnn

Starting from C++20 we have special functions for correctly comparing signed and unsigned values: https://en.cppreference.com/w/cpp/utility/intcmp