Is there any way to perform an unsigned shift operation (namely, an unsigned right shift) in Go? Something like this in Java:
0xFF >>> 3
The only thing I could find on this matter is this post, but I'm not sure what I have to do.
Thanks in advance.
The Go Programming Language Specification
Numeric types
A numeric type represents sets of integer or floating-point values.
The predeclared architecture-independent numeric types include:
uint8 the set of all unsigned 8-bit integers (0 to 255)
uint16 the set of all unsigned 16-bit integers (0 to 65535)
uint32 the set of all unsigned 32-bit integers (0 to 4294967295)
uint64 the set of all unsigned 64-bit integers (0 to 18446744073709551615)
int8 the set of all signed 8-bit integers (-128 to 127)
int16 the set of all signed 16-bit integers (-32768 to 32767)
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
byte alias for uint8
rune alias for int32
The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic.
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted bits of a pointer value
Conversions are required when different numeric types are mixed in an
expression or assignment.
Arithmetic operators
<< left shift integer << unsigned integer
>> right shift integer >> unsigned integer
The shift operators shift the left operand by the shift count
specified by the right operand. They implement arithmetic shifts if
the left operand is a signed integer and logical shifts if it is an
unsigned integer. There is no upper limit on the shift count. Shifts
behave as if the left operand is shifted n times by 1 for a shift
count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the
same as x/2 but truncated towards negative infinity.
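To make the arithmetic-versus-logical distinction in the passage above concrete, here is a minimal sketch (not part of the spec or the original answer) that shifts the same bit pattern once as a signed and once as an unsigned value:

package main

import "fmt"

func main() {
    s := int8(-8) // bit pattern 1111_1000
    u := uint8(s) // same bits reinterpreted as unsigned: 0xF8 (248)

    fmt.Println(s >> 1) // -4: arithmetic shift copies the sign bit in
    fmt.Println(u >> 1) // 124: logical shift fills with zero
}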
In Go, whether a shift is arithmetic or logical follows from the type of the left operand: Go has both signed and unsigned integer types, and a right shift of an unsigned integer is a logical (unsigned) shift.
It depends on what type the value 0xFF is. Assume it's one of the unsigned integer types, for example, uint.
package main

import "fmt"

func main() {
    n := uint(0xFF)
    fmt.Printf("%X\n", n)
    n = n >> 3
    fmt.Printf("%X\n", n)
}
Output:
FF
1F
Assume it's one of the signed integer types, for example, int.
package main

import "fmt"

func main() {
    n := int(0xFF)
    fmt.Printf("%X\n", n)
    n = int(uint(n) >> 3) // convert to unsigned, shift logically, convert back
    fmt.Printf("%X\n", n)
}
Output:
FF
1F
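If you need Java's >>> as a reusable operation on signed values, you can wrap the convert-shift-convert idiom from the example above in a helper; a minimal sketch (ushr32 is just an illustrative name):

package main

import "fmt"

// ushr32 mimics Java's >>> for int32: reinterpret the bits as
// unsigned, shift logically, then reinterpret back as signed.
func ushr32(x int32, n uint) int32 {
    return int32(uint32(x) >> n)
}

func main() {
    fmt.Printf("%X\n", ushr32(-1, 28)) // F, matching -1 >>> 28 in Java
}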
Related
Computers use two's complement to store integers. Say, for a signed int32, 0xFFFFFFFF represents -1. Following this reasoning, it is not hard to write C code that initializes a signed integer to -1:
int a = 0xffffffff;
printf("%d\n", a);
Obviously, the result is -1.
However, in Go, the same logic plays out differently.
a := int(0xffffffff)
fmt.Printf("%d\n", c)
The code snippet prints 4294967295, the maximum number an uint32 type can hold. Even if I cast c explicitly in fmt.Printf("%d\n", int(c)), the result is still the same.
The same problem happens when some bit operations are imposed on signed integer as well, make signed become unsigned.
So, what happens to Go in such a situation?
The problem here is that the size of int is not fixed; it is platform-dependent and may be 32 or 64 bits. In the latter case, assigning 0xffffffff to it is equivalent to assigning 4294967295 to it, which is what you see printed.
Now if you convert that value to int32 (which is 32-bit), you'll get your -1:
a := int(0xffffffff)
fmt.Printf("%d\n", a)
b := int32(a)
fmt.Printf("%d\n", b)
This will output (try it on the Go Playground):
4294967295
-1
Also note that in Go it is not possible to assign 0xffffffff directly to a variable of type int32, because the value would overflow; nor is it valid to create a typed constant having an illegal value, such as int32(0xffffffff). Spec: Constants:
The values of typed constants must always be accurately representable by values of the constant type.
So this gives a compile-time error:
var c int32 = 0xffffffff // constant 4294967295 overflows int32
But you may simply do:
var c int32 = -1
You may also do:
var c = ^int32(0) // -1
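Both forms yield the same value; a quick runnable check (the variable names here are just for illustration):

package main

import "fmt"

func main() {
    var c int32 = -1
    var d = ^int32(0) // bitwise NOT of 0: all 32 bits set, i.e. -1

    fmt.Println(c, d)             // -1 -1
    fmt.Printf("%X\n", uint32(c)) // FFFFFFFF: the underlying bit pattern
}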
The Go spec says this about unsigned integer overflow:
For unsigned integer values, the operations +, -, *, and << are
computed modulo 2^n, where n is the bit width of the unsigned
integer's type. Loosely speaking, these unsigned integer operations
discard high bits upon overflow, and programs may rely on "wrap around".
I tried to test this, but got a result that seems inconsistent with it - http://play.golang.org/p/sJxtSHbigT:
package main

import "fmt"

func main() {
    fmt.Println("test")
    var num uint32 = 1 << 35
}
This gives an error:
prog.go:7: constant 34359738368 overflows uint32
[process exited with non-zero status]
But according to the spec there should be no error; rather, I should see 0.
The specification you quote refers specifically to the results of "the operations +, -, *, and <<". You're trying to define a constant, not looking at the result of one of those operations.
You also can't use those over-sized values for the input of those operations. The compiler won't wrap any values for you; that's just the runtime behaviour of those operations.
package main

import "fmt"

func main() {
    var num uint32 = 1 + 1 << 35
    fmt.Printf("num = %v\n", num)
}
prog.go:6: constant 34359738369 overflows uint32
[process exited with non-zero status]
Here's an interesting example.
var num uint32 = (1 << 31) + (1 << 31)
fmt.Printf("num = %v\n", num)
prog.go:6: constant 4294967296 overflows uint32
[process exited with non-zero status]
In this case, the compiler attempts to evaluate (1 << 31) + (1 << 31) at compile time, producing the constant value 4294967296, which is too large to fit in a uint32.
var num uint32 = (1 << 31)
num += (1 << 31)
fmt.Printf("num = %v\n", num)
Output:
num = 0
In this case, the addition is performed at run-time, and the value wraps around as you'd expect.
That's because 1 << 35 is an untyped constant expression (it involves only numeric constants). It doesn't become a uint32 until you assign it. Go prohibits assigning to a variable a constant expression that would overflow it, as that is almost certainly unintentional.
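One way to observe the run-time wraparound the spec promises is to keep the operands non-constant, so the shift cannot be folded at compile time; a minimal sketch:

package main

import "fmt"

func main() {
    var s uint = 35 // non-constant shift count
    var one uint32 = 1

    // Both operands are variables, so this is a run-time uint32
    // operation; shifting past the type's width simply yields 0.
    fmt.Println(one << s) // 0
}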
In gdb,
(gdb) p -2147483648
$28 = 2147483648
(gdb) pt -2147483648
type = unsigned int
Since -2147483648 is within the range of type int, why is gdb treating it as an unsigned int?
(gdb) pt -2147483647-1
type = int
(gdb) p -2147483647-1
$27 = -2147483648
I suspect that gdb applies the unary negation operator after determining the type of the literal:
In case 1, gdb parses 2147483648, which overflows the int type and becomes unsigned int; the negation is then applied.
In case 2, 2147483647 is a valid int and stays int when the negation and subtraction are subsequently applied.
gdb appears to be following a set of rules for determining the type of a decimal integer literal that are inconsistent with the rules given by the C standard.
I'll assume your system has a 32-bit int and long int types, using 2's-complement and no padding bits (that's a common choice for 32-bit systems, and it's consistent with what you're seeing). Then the ranges of int and unsigned int are:
int: -2147483648 .. +2147483647
unsigned int: 0 .. 4294967295
and the ranges of long int and unsigned long int are the same.
2147483647 is within the range of type int, so that's its type.
Since the value of 2147483648 is outside the range of type int, apparently gdb is choosing to treat it as an unsigned int. And -2147483648 is not an integer literal, it's an expression consisting of a unary - operator applied to the constant 2147483648. Since gdb treats 2147483648 as an unsigned int, it also treats -2147483648 as an unsigned int, and the unary - operator for unsigned types wraps around, yielding 2147483648.
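To spell that wraparound out: in 32-bit unsigned arithmetic the negation is computed modulo 2^32, so -2147483648 becomes 4294967296 - 2147483648 = 2147483648.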
As for -2147483647-1, that's an expression all of whose operands are of type int, and there's no overflow.
In all versions of ISO C, though, an unsuffixed decimal literal can never be of type unsigned int. In C90, its type is the first of:
int
long int
unsigned long int
that can represent its value. Under C99 rules (and later), the type of a decimal integer constant is the first of:
int
long int
long long int
that can represent its value.
I don't know whether there's a way to tell gdb to use C rules for integer literals.
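For contrast with the Go questions above: Go sidesteps this particular ambiguity, because constant expressions are evaluated exactly, at arbitrary precision, before any type is applied. A minimal sketch:

package main

import "fmt"

func main() {
    // -2147483648 is evaluated as an exact untyped constant, so it is
    // representable as int32 even though +2147483648 is not.
    var a int32 = -2147483648
    fmt.Println(a) // -2147483648

    // var b int32 = 2147483648 // would not compile: overflows int32
}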
I've read in a book:
..characters are just 16-bit unsigned integers under the hood. That means you can assign a number literal, assuming it will fit into the unsigned 16-bit range (65535 or less).
It gives me the impression that I can assign integers to characters as long as it's within the 16-bit range.
But how come I can do this:
char c = (char) 80000; //80000 is beyond 65535.
I'm aware the cast did the magic. But what exactly happened behind the scenes?
Looks like it's using the int value mod 65536. The following code:
int i = 97 + 65536;
char c = (char)i;
System.out.println(c);
System.out.println(i % 65536);
char d = 'a';
int n = (int)d;
System.out.println(n);
Prints out 'a' and then 97 twice ('a' is character 97 in ASCII).
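The same mod-65536 truncation can be reproduced in Go with a conversion to uint16; a small sketch (the operand must be a variable, since a constant conversion such as uint16(80000) would be rejected at compile time, as discussed in the earlier questions):

package main

import "fmt"

func main() {
    i := 80000
    c := uint16(i) // non-constant conversion keeps only the low 16 bits

    fmt.Println(c)             // 14464
    fmt.Println(80000 % 65536) // 14464
}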
From the XDrawString man page it seems that it accepts signed 32-bit x and y coordinates:
int XDrawString(Display *display, Drawable d, GC gc,
                int x, int y, char *string, int length);
Note how both x and y are int (i.e., a 32-bit signed integer on gcc/linux 2.6-i386, at least).
The problem is that when I pass y = 32767 (2^15 - 1) the string is drawn in the correct location, but for anything above this value the string is not drawn.
I suspect that internally 32 bit integers are not used but instead 16 bit signed integers for the coordinates.
Given that the man pages seem to indicate that the function accepts 32-bit integers, is there some compile option that needs to be turned on to allow the use of the wider integers? Or is this a limitation of Xlib?
The X11 protocol does specify 16 bits.
Have a look at the definition for xPolyTextReq in <X11/Xproto.h>
typedef struct {
    CARD8 reqType;
    CARD8 pad;
    CARD16 length B16;
    Drawable drawable B32;
    GContext gc B32;
    INT16 x B16, y B16; /* items (xTextElt) start after struct */
} xPolyTextReq;