Overflow of unsigned integers - go

The Go spec says this about unsigned integer overflow:
For unsigned integer values, the operations +, -, *, and << are
computed modulo 2^n, where n is the bit width of the unsigned integer's
type. Loosely speaking, these unsigned integer operations discard high
bits upon overflow, and programs may rely on "wrap around".
I tried to test this, but I get what seems to be an inconsistent result - http://play.golang.org/p/sJxtSHbigT:
package main

import "fmt"

func main() {
	fmt.Println("test")
	var num uint32 = 1 << 35
}
This gives an error:
prog.go:7: constant 34359738368 overflows uint32
[process exited with non-zero status]
But according to the spec there should be no error; instead I should see 0.

The specification you quote refers specifically to the results of "the operations +, -, *, and <<". You're initializing a variable from a constant expression, not looking at the result of one of those operations at run time.
You also can't use those over-sized values as input to those operations. The compiler won't wrap any values for you; wrapping is purely the run-time behaviour of those operations.
package main

import "fmt"

func main() {
	var num uint32 = 1 + 1<<35
	fmt.Printf("num = %v\n", num)
}
prog.go:6: constant 34359738369 overflows uint32
[process exited with non-zero status]
Here's an interesting example.
var num uint32 = (1 << 31) + (1 << 31)
fmt.Printf("num = %v\n", num)
prog.go:6: constant 4294967296 overflows uint32
[process exited with non-zero status]
In this case, the compiler attempts to evaluate (1 << 31) + (1 << 31) at compile-time, producing the constant value 4294967296, which is too large to fit.
var num uint32 = (1 << 31)
num += (1 << 31)
fmt.Printf("num = %v\n", num)
num = 0
In this case, the addition is performed at run-time, and the value wraps around as you'd expect.

That's because 1 << 35 is an untyped constant expression (it involves only numeric constants). It doesn't become a uint32 until you assign it. Go prohibits assigning a constant expression to a variable that it would overflow, since code like that is almost certainly unintentional.

Related

Bit operation makes signed integer become unsigned

Computers use two's complement to store integers. For a signed int32, 0xFFFFFFFF represents -1. Following this logic, it is easy to write C code that initializes a signed integer to -1:
int a = 0xffffffff;
printf("%d\n", a);
Obviously, the result is -1.
However, in Go, the same logic dumps differently.
a := int(0xffffffff)
fmt.Printf("%d\n", a)
The code snippet prints 4294967295, the maximum number a uint32 type can hold. Even if I cast a explicitly in fmt.Printf("%d\n", int(a)), the result is still the same.
The same problem happens when bit operations are applied to signed integers: the signed value behaves as if it were unsigned.
So, what happens to Go in such a situation?
The problem here is that the size of int is not fixed; it is platform dependent. It may be 32 or 64 bits. In the latter case, assigning 0xffffffff to it is equivalent to assigning 4294967295 to it, which is what you see printed.
Now if you convert that value to int32 (which is 32-bit), you'll get your -1:
a := int(0xffffffff)
fmt.Printf("%d\n", a)
b := int32(a)
fmt.Printf("%d\n", b)
This will output (try it on the Go Playground):
4294967295
-1
Also note that in Go it is not possible to assign 0xffffffff directly to a variable of type int32, because the value would overflow; nor is it valid to create a typed constant with an illegal value, such as int32(0xffffffff). Spec: Constants:
The values of typed constants must always be accurately representable by values of the constant type.
So this gives a compile-time error:
var c int32 = 0xffffffff // constant 4294967295 overflows int32
But you may simply do:
var c int32 = -1
You may also do:
var c = ^int32(0) // -1

golang: why does int64 overflow when operating on literals directly, but not when a value is assigned to a variable beforehand? [duplicate]

This question already has an answer here: Does Go compiler's evaluation differ for constant expression and other expression (1 answer). Closed 4 years ago.
func main() {
	var a = math.MaxInt64
	fmt.Println(a + 1)             // -9223372036854775808
	fmt.Println(math.MaxInt64 + 1) // constant 9223372036854775808 overflows int
}
why the two ways perform differently?
In the second example math.MaxInt64 + 1 is a constant expression and is computed at compile time. The spec says:
Constant expressions are always evaluated exactly; intermediate values and the constants themselves may require precision significantly larger than supported by any predeclared type in the language.
However when the value of the expression is passed to fmt.Println it has to be converted into a real predeclared type, in this case an int, which is represented as a signed 64 bit integer, which is incapable of representing the constant.
A constant may be given a type explicitly by a constant declaration or conversion, or implicitly when used in a variable declaration or an assignment or as an operand in an expression. It is an error if the constant value cannot be represented as a value of the respective type.
In the first example a + 1 is not a constant expression, rather it's normal arithmetic because a was declared to be a variable and so the constant expression math.MaxInt64 is converted to an int. It's the same as:
var a int = math.MaxInt64
Normal arithmetic is allowed to overflow:
For signed integers, the operations +, -, *, /, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow.
With minor modifications you can make the examples the same:
func main() {
	const a = math.MaxInt64
	fmt.Println(a + 1)             // constant 9223372036854775808 overflows int
	fmt.Println(math.MaxInt64 + 1) // constant 9223372036854775808 overflows int
}
It shows an error because the untyped constant 1 defaults to type int, so math.MaxInt64 + 1 is a constant expression that must fit in an int and doesn't. Instead, declare a variable of type int64 and add it to math.MaxInt64, like this:
package main

import (
	"fmt"
	"math"
)

func main() {
	var a int64 = 1
	fmt.Println(math.MaxInt64 + a) // -9223372036854775808
	fmt.Printf("%T\n", 1)          // int
	// fmt.Println(math.MaxInt64 + 1) // constant 9223372036854775808 overflows int
}

Go - Perform unsigned shift operation

Is there any way to perform an unsigned shift (namely, an unsigned right shift) operation in Go? Something like this in Java:
0xFF >>> 3
The only thing I could find on this matter is this post, but I'm not sure what I have to do.
Thanks in advance.
The Go Programming Language Specification
Numeric types
A numeric type represents sets of integer or floating-point values.
The predeclared architecture-independent numeric types include:
uint8 the set of all unsigned 8-bit integers (0 to 255)
uint16 the set of all unsigned 16-bit integers (0 to 65535)
uint32 the set of all unsigned 32-bit integers (0 to 4294967295)
uint64 the set of all unsigned 64-bit integers (0 to 18446744073709551615)
int8 the set of all signed 8-bit integers (-128 to 127)
int16 the set of all signed 16-bit integers (-32768 to 32767)
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
byte alias for uint8
rune alias for int32
The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic.
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted bits of a pointer value
Conversions are required when different numeric types are mixed in an
expression or assignment.
Arithmetic operators
<< left shift integer << unsigned integer
>> right shift integer >> unsigned integer
The shift operators shift the left operand by the shift count
specified by the right operand. They implement arithmetic shifts if
the left operand is a signed integer and logical shifts if it is an
unsigned integer. There is no upper limit on the shift count. Shifts
behave as if the left operand is shifted n times by 1 for a shift
count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the
same as x/2 but truncated towards negative infinity.
In Go, whether a shift is arithmetic or logical is decided by the type of the left operand, and Go has both signed and unsigned integer types.
So it depends on the type of the value 0xFF. Assume it's one of the unsigned integer types, for example, uint.
package main

import "fmt"

func main() {
	n := uint(0xFF)
	fmt.Printf("%X\n", n)
	n = n >> 3
	fmt.Printf("%X\n", n)
}
Output:
FF
1F
Assume it's one of the signed integer types, for example, int.
package main

import "fmt"

func main() {
	n := int(0xFF)
	fmt.Printf("%X\n", n)
	n = int(uint(n) >> 3)
	fmt.Printf("%X\n", n)
}
Output:
FF
1F

How would you set and clear a single bit in Go?

In Golang, how do you set and clear individual bits of an integer? For example, functions that behave like this:
clearBit(129, 7) // returns 1
setBit(1, 7) // returns 129
Here's a function to set a bit. First, shift the number 1 left by the specified number of places (so it becomes 0010, 0100, etc.). Then OR it with the original input. This leaves the other bits unaffected and always sets the target bit to 1.
// Sets the bit at pos in the integer n.
func setBit(n int, pos uint) int {
	n |= (1 << pos)
	return n
}
Here's a function to clear a bit. First shift the number 1 left by the specified number of places (so it becomes 0010, 0100, etc.). Then flip every bit in the mask with the ^ operator (so 0010 becomes 1101). Then use a bitwise AND: bit positions that are 1 in the mask are left unchanged, while the one position that is 0 is cleared.
// Clears the bit at pos in n.
func clearBit(n int, pos uint) int {
	mask := ^(1 << pos)
	n &= mask
	return n
}
Finally, here's a function to check whether a bit is set. Shift the number 1 left by the specified number of places (so it becomes 0010, 0100, etc.) and then AND it with the target number. If the result is greater than 0 (it will be 1, 2, 4, 8, etc.), then the bit is set. (Comparing with != 0 instead of > 0 is safer if pos could be the sign bit.)
func hasBit(n int, pos uint) bool {
	val := n & (1 << pos)
	return (val > 0)
}
There is also a compact notation to clear a bit. The operator for that is &^, called "AND NOT" (bit clear).
Using this operator the clearBit function can be written like this:
// Clears the bit at pos in n.
func clearBit(n int, pos uint) int {
	n &^= (1 << pos)
	return n
}
Or like this:
// Clears the bit at pos in n.
func clearBit(n int, pos uint) int {
	return n &^ (1 << pos)
}

How to type cast a literal in C

I have a small sample function:
#define VALUE 0

int test(unsigned char x) {
	if (x >= VALUE)
		return 0;
	else
		return 1;
}
My compiler warns me that the comparison (x>=VALUE) is true in all cases, which is right, because x is an unsigned char and VALUE is defined as 0. So I changed my code to:
if ( ((signed int) x ) >= ((signed int) VALUE ))
But the warning comes again. I tested it with three GCC versions (all versions > 4.0, sometimes you have to enable -Wextra).
In the changed version I have an explicit cast, so it should be a signed int comparison. Why does it still claim that the comparison is always true?
Even with the cast, the comparison is still true in all cases. The compiler still determines that (signed int)0 has the value 0, and still determines that (signed int)x is non-negative, since every value of unsigned char fits in a signed int. (In general, converting an unsigned value to a signed type is implementation-defined when the value is out of range for the signed type, but that cannot happen here.)
So the compiler continues warning because it continues to eliminate the else case altogether.
Edit: To silence the warning, write your code as
#define VALUE 0

int test(unsigned char x) {
#if VALUE == 0
	return 1;
#else
	return x >= VALUE;
#endif
}
x is an unsigned char, meaning it is between 0 and 255 inclusive. Since an int is bigger than a char, casting unsigned char to signed int retains the char's original value. Since this value is always >= 0, your if is always true.
All the values of an unsigned char fit perfectly in your int, so even with the cast you will never get a negative value. The cast you would need is to signed char - but in that case you should declare x as signed in the function signature. There is no point lying to the callers that you need an unsigned value when in fact you need a signed one.
The #define of VALUE to 0 means that your function is reduced to this:
int test(unsigned char x) {
	if (x >= 0)
		return 0;
	else
		return 1;
}
Since x is always passed in as an unsigned char, then it will always have a value between 0 and 255 inclusive, regardless of whether you cast x or 0 to a signed int in the if statement. The compiler therefore warns you that x will always be greater than or equal to 0, and that the else clause can never be reached.
