In gdb -2147483648=2147483648 - gcc

In gdb,
(gdb) p -2147483648
$28 = 2147483648
(gdb) pt -2147483648
type = unsigned int
Since -2147483648 is within the range of type int, why is gdb treating it as an unsigned int?
(gdb) pt -2147483647-1
type = int
(gdb) p -2147483647-1
$27 = -2147483648

I suspect that gdb applies the unary negation operator only after determining the type of the integer literal:
In case 1, gdb parses 2147483648, which does not fit in int and so becomes unsigned int; the negation is applied afterwards.
In case 2, 2147483647 is a valid int and stays int when the negation and subtraction are subsequently applied.

gdb appears to be following a set of rules for determining the type of a decimal integer literal that are inconsistent with the rules given by the C standard.
I'll assume your system has 32-bit int and long int types, using two's complement and no padding bits (that's a common choice for 32-bit systems, and it's consistent with what you're seeing). Then the ranges of int and unsigned int are:
int: -2147483648 .. +2147483647
unsigned int: 0 .. 4294967295
and the ranges of long int and unsigned long int are the same.
2147483647 is within the range of type int, so that's its type.
Since the value of 2147483648 is outside the range of type int, apparently gdb is choosing to treat it as an unsigned int. And -2147483648 is not an integer literal, it's an expression consisting of a unary - operator applied to the constant 2147483648. Since gdb treats 2147483648 as an unsigned int, it also treats -2147483648 as an unsigned int, and the unary - operator for unsigned types wraps around, yielding 2147483648.
As for -2147483647-1, that's an expression all of whose operands are of type int, and there's no overflow.
In all versions of ISO C, though, an unsuffixed decimal literal can never be of type unsigned int. In C90, its type is the first of:
int
long int
unsigned long int
that can represent its value. Under C99 rules (and later), the type of a decimal integer constant is the first of:
int
long int
long long int
that can represent its value.
I don't know whether there's a way to tell gdb to use C rules for integer literals.
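For reference, here is a minimal C sketch of the standard's rules (assuming a 32-bit int and a C11 compiler, since it uses _Generic). Note that the wrap-around gdb shows for the decimal constant can only be reproduced in ISO C with a hexadecimal constant such as 0x80000000, which may take type unsigned int; this is also why limits.h typically defines INT_MIN as (-2147483647 - 1).

#include <stdio.h>

/* C11 _Generic reports the type that a constant actually gets. */
#define TYPE_NAME(x) _Generic((x),           \
    int: "int",                               \
    unsigned int: "unsigned int",             \
    long int: "long int",                     \
    unsigned long int: "unsigned long int",   \
    long long int: "long long int",           \
    default: "other")

int main(void)
{
    /* 2147483648 does not fit in a 32-bit int, so under C99 rules it becomes
       long int (or long long int if long is also 32 bits), never unsigned int. */
    printf("2147483648      has type %s\n", TYPE_NAME(2147483648));
    printf("-2147483647 - 1 has type %s\n", TYPE_NAME(-2147483647 - 1));

    /* A hexadecimal constant may become unsigned int, which reproduces the
       wrap-around gdb shows for the decimal constant. */
    printf("-0x80000000     has type %s\n", TYPE_NAME(-0x80000000));
    printf("-0x80000000 == %u\n", -0x80000000);  /* 2147483648 with 32-bit int */
    return 0;
}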

Related

Bit operation makes signed integer become unsigned

Computers use two's complement to store integers. For a signed int32, 0xFFFFFFFF represents -1. Based on this, it is easy to write C code that initializes a signed integer to -1:
int a = 0xffffffff;
printf("%d\n", a);
Obviously, the result is -1.
However, in Go, the same logic prints something different.
c := int(0xffffffff)
fmt.Printf("%d\n", c)
The code snippet prints 4294967295, the maximum number an uint32 type can hold. Even if I cast c explicitly in fmt.Printf("%d\n", int(c)), the result is still the same.
The same problem happens when certain bit operations are applied to a signed integer, making the signed value appear unsigned.
So, what happens to Go in such a situation?
The problem here is that the size of int is not fixed; it is platform dependent. It may be 32 or 64 bits. In the latter case, assigning 0xffffffff to it is equivalent to assigning 4294967295 to it, which is what you see printed.
Now if you convert that value to int32 (which is 32-bit), you'll get your -1:
a := int(0xffffffff)
fmt.Printf("%d\n", a)
b := int32(a)
fmt.Printf("%d\n", b)
This will output (try it on the Go Playground):
4294967295
-1
Also note that in Go it is not possible to assign 0xffffffff directly to a variable of type int32, because the value would overflow; nor is it valid to create a typed constant having an illegal value, such as int32(0xffffffff). Spec: Constants:
The values of typed constants must always be accurately representable by values of the constant type.
So this gives a compile-time error:
var c int32 = 0xffffffff // constant 4294967295 overflows int32
But you may simply do:
var c int32 = -1
You may also do:
var c = ^int32(0) // -1

Does the GCC compiler consider a hex integer constant to be unsigned int by default?

#include <stdio.h>

int main()
{
    int ret = -1071;
    if (ret == 0xfffffbd1)
    {
        printf("HAHAHA");
    }
    return 0;
}
Why does the GCC compiler treat the constant 0xfffffbd1 as an unsigned int in the condition ret == 0xfffffbd1?
The C standard says that [t]he type of an integer constant is the first of the corresponding list in which its value can be represented (paragraph 6.4.4.1/5 in C99), and for hexadecimal constants without a suffix, this list is:
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
Assuming a 32-bit int type, 0xfffffbd1 is larger than INT_MAX but less than UINT_MAX, so the type of the constant is unsigned int.
Assuming a 32-bit int type:
The first time:
I defined a signed int ret = -1071 and evaluated the expression ret == 0xfffffbd1; 0xfffffbd1 has the same 32-bit pattern as -1071, and the result of the expression is TRUE.
The second time:
I changed ret to type long long int, kept ret = -1071, and evaluated ret == 0xfffffbd1; now the result is FALSE. If I change the right-hand value to 0xfffffffffffffbd1, the expression returns TRUE again.
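A small C sketch (assuming a 32-bit int and a 64-bit long long) reproduces both experiments; the results follow from the usual arithmetic conversions, not from any sign-bit rule:

#include <stdio.h>

int main(void)
{
    int ret = -1071;
    /* 0xfffffbd1 does not fit in int but fits in unsigned int, so its type is
       unsigned int. ret is converted to unsigned int (wrapping to 4294966225),
       which equals 0xfffffbd1, so this prints "true". */
    printf("%s\n", ret == 0xfffffbd1 ? "true" : "false");

    long long ret2 = -1071;
    /* The unsigned int constant 0xfffffbd1 (4294966225) is converted to
       long long, keeping its value; -1071 != 4294966225, so this prints "false". */
    printf("%s\n", ret2 == 0xfffffbd1 ? "true" : "false");

    /* 0xfffffffffffffbd1 fits only in unsigned long long; ret2 is converted to
       unsigned long long, wrapping to the same bit pattern, so this prints "true". */
    printf("%s\n", ret2 == 0xfffffffffffffbd1 ? "true" : "false");
    return 0;
}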

Go - Perform unsigned shift operation

Is there any way to perform an unsigned shift (namely, an unsigned right shift) operation in Go? Something like this in Java:
0xFF >>> 3
The only thing I could find on this matter is this post but I'm not sure what I have to do.
Thanks in advance.
The Go Programming Language Specification
Numeric types
A numeric type represents sets of integer or floating-point values.
The predeclared architecture-independent numeric types include:
uint8 the set of all unsigned 8-bit integers (0 to 255)
uint16 the set of all unsigned 16-bit integers (0 to 65535)
uint32 the set of all unsigned 32-bit integers (0 to 4294967295)
uint64 the set of all unsigned 64-bit integers (0 to 18446744073709551615)
int8 the set of all signed 8-bit integers (-128 to 127)
int16 the set of all signed 16-bit integers (-32768 to 32767)
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
byte alias for uint8
rune alias for int32
The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic.
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted bits of a pointer value
Conversions are required when different numeric types are mixed in an
expression or assignment.
Arithmetic operators
<< left shift integer << unsigned integer
>> right shift integer >> unsigned integer
The shift operators shift the left operand by the shift count
specified by the right operand. They implement arithmetic shifts if
the left operand is a signed integer and logical shifts if it is an
unsigned integer. There is no upper limit on the shift count. Shifts
behave as if the left operand is shifted n times by 1 for a shift
count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the
same as x/2 but truncated towards negative infinity.
In Go, a right shift of an unsigned integer is already a logical (unsigned) shift; Go has both signed and unsigned integer types, so there is no separate >>> operator.
It depends on what type the value 0xFF is. Assume it's one of the unsigned integer types, for example, uint.
package main

import "fmt"

func main() {
    n := uint(0xFF)
    fmt.Printf("%X\n", n)
    n = n >> 3
    fmt.Printf("%X\n", n)
}
Output:
FF
1F
Assume it's one of the signed integer types, for example, int.
package main

import "fmt"

func main() {
    n := int(0xFF)
    fmt.Printf("%X\n", n)
    n = int(uint(n) >> 3)
    fmt.Printf("%X\n", n)
}
Output:
FF
1F

Character range in Java

I've read in a book:
..characters are just 16-bit unsigned integers under the hood. That means you can assign a number literal, assuming it will fit into the unsigned 16-bit range (65535 or less).
It gives me the impression that I can assign integers to characters as long as it's within the 16-bit range.
But how come I can do this:
char c = (char) 80000; //80000 is beyond 65535.
I'm aware the cast did the magic. But what exactly happened behind the scenes?
Looks like it's using the int value mod 65536. The following code:
int i = 97 + 65536;
char c = (char)i;
System.out.println(c);
System.out.println(i % 65536);
char d = 'a';
int n = (int)d;
System.out.println(n);
Prints out 'a' and then '97' twice ('a' is character 97 in ASCII).

How to type cast a literal in C

I have a small sample function:
#define VALUE 0
int test(unsigned char x) {
    if (x >= VALUE)
        return 0;
    else
        return 1;
}
My compiler warns me that the comparison (x>=VALUE) is true in all cases, which is right, because x is an unsigned character and VALUE is defined with the value 0. So I changed my code to:
if ( ((signed int) x ) >= ((signed int) VALUE ))
But the warning comes again. I tested it with three GCC versions (all versions > 4.0, sometimes you have to enable -Wextra).
With this change I have an explicit cast, so it should be a signed int comparison. Why does it still claim that the comparison is always true?
Even with the cast, the comparison is still true in all cases. The compiler still determines that (signed int)0 has the value 0, and that (signed int)x is never negative: every value an unsigned char can hold fits in a signed int, so the conversion never changes the value.
So the compiler continues warning because it continues to eliminate the else case altogether.
Edit: To silence the warning, write your code as
#define VALUE 0
int test(unsigned char x) {
#if VALUE==0
    return 1;
#else
    return x >= VALUE;
#endif
}
x is an unsigned char, meaning its value is between 0 and 255 inclusive. Since an int is wider than a char, casting unsigned char to signed int still retains the char's original value. Since this value is always >= 0, your if condition is always true.
All the values of an unsigned char fit perfectly in your int, so even with the cast you will never get a negative value. The cast you would need is to signed char; however, in that case you should declare x as signed in the function signature. There is no point telling callers you need an unsigned value when in fact you need a signed one.
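As a rough sketch of that suggestion (the function name test_signed is just for illustration), declaring the parameter as signed char makes the comparison meaningful:

#include <stdio.h>

#define VALUE 0

/* With a signed char parameter, inputs with the high bit set arrive as
   negative values and can fail the x >= VALUE test. */
int test_signed(signed char x)
{
    if (x >= VALUE)
        return 0;
    else
        return 1;
}

int main(void)
{
    printf("%d\n", test_signed(5));   /* prints 0 */
    printf("%d\n", test_signed(-5));  /* prints 1 */
    return 0;
}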
The #define of VALUE to 0 means that your function is reduced to this:
int test(unsigned char x) {
    if (x >= 0)
        return 0;
    else
        return 1;
}
Since x is always passed in as an unsigned char, then it will always have a value between 0 and 255 inclusive, regardless of whether you cast x or 0 to a signed int in the if statement. The compiler therefore warns you that x will always be greater than or equal to 0, and that the else clause can never be reached.
