Golang shift operator conversion

I can't understand how, in Go, 1<<s returns 0 when var s uint = 33,
but 1<<33 returns 8589934592.
How does a shift operator conversion end up with a value of 0?
I'm reading the language specification and am stuck on this section:
https://golang.org/ref/spec#Operators
Specifically this paragraph from docs:
"The right operand in a shift expression must have unsigned integer
type or be an untyped constant representable by a value of type uint.
If the left operand of a non-constant shift expression is an untyped
constant, it is first implicitly converted to the type it would assume
if the shift expression were replaced by its left operand alone."
Some examples from the official Go docs:
var s uint = 33
var i = 1<<s // 1 has type int
var j int32 = 1<<s // 1 has type int32; j == 0
var k = uint64(1<<s) // 1 has type uint64; k == 1<<33
Update:
Another very related question, with an example:
package main

import (
    "fmt"
)

func main() {
    v := int16(4336)
    fmt.Println(int8(v))
}
This program prints -16.
How does the number 4336 become -16 when converting from int16 to int8?

If you have this:
var s uint = 33
fmt.Println(1 << s)
Then the quoted part applies:
If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.
Because s is not a constant (it's a variable), 1 << s is a non-constant shift expression. And the left operand 1 is an untyped constant (e.g. int(1) would be a typed constant), so it is converted to the type it would get if the expression were simply 1 instead of 1 << s:
fmt.Println(1)
In the above, the untyped constant 1 would be converted to int, because that is its default type. The default type of constants is given in Spec: Constants:
An untyped constant has a default type which is the type to which the constant is implicitly converted in contexts where a typed value is required, for instance, in a short variable declaration such as i := 0 where there is no explicit type. The default type of an untyped constant is bool, rune, int, float64, complex128 or string respectively, depending on whether it is a boolean, rune, integer, floating-point, complex, or string constant.
And the result of the above is architecture-dependent. If int is 32 bits, the result is 0. If int is 64 bits, it is 8589934592 (shifting a 1 bit left 33 times moves it out of a 32-bit int, but not out of a 64-bit one).
On the Go Playground, the size of int is 32 bits (4 bytes). See this example:
fmt.Println("int size:", unsafe.Sizeof(int(0)))
var s uint = 33
fmt.Println(1 << s)
fmt.Println(int32(1) << s)
fmt.Println(int64(1) << s)
The above outputs (try it on the Go Playground):
int size: 4
0
0
8589934592
If I run the above app on my 64-bit computer, the output is:
int size: 8
8589934592
0
8589934592
Also see The Go Blog: Constants for how constants work in Go.
Note that if you write 1 << 33, that is not the same thing: it is not a non-constant shift expression, which is what your quote applies to ("the left operand of a non-constant shift expression"). 1<<33 is a constant shift expression, evaluated in "constant space", and the result would be converted to int, which it does not fit into when int is 32 bits wide, hence the compile-time error. It works with variables, because variables can overflow. Constants do not overflow:
Numeric constants represent exact values of arbitrary precision and do not overflow.
See How does Go perform arithmetic on constants?
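To see both behaviors side by side, here is a minimal runnable sketch (the overflowing constant declaration is left commented out so the file compiles; the results shown assume a 32-bit int):

package main

import "fmt"

func main() {
    // Constant shift: evaluated at arbitrary precision, then the result
    // must fit the target type. On a 32-bit platform the next line would
    // fail to compile with "constant 8589934592 overflows int":
    // var i int = 1 << 33

    // Giving the constant a 64-bit type makes it fit on any platform:
    var v int64 = 1 << 33
    fmt.Println(v) // 8589934592

    // Non-constant shift: the typed value silently overflows at runtime.
    var s uint = 33
    var w int32 = 1 << s
    fmt.Println(w) // 0 (the 1 bit was shifted out of the 32-bit value)
}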
Update:
Answering your addition: converting from int16 to int8 simply keeps the lowest 8 bits. Integers are represented using two's complement, where the highest bit is 1 if the number is negative.
This is detailed in Spec: Conversions:
When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v := uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.
So when you convert an int16 value to int8, if the source number has a 1 in bit position 7 (the 8th bit), the result will be negative, even if the source wasn't negative. Similarly, if the source has a 0 at bit position 7, the result will be positive, even if the source is negative. Take 4336 = 0x10F0: its lowest byte is 0xF0, whose top bit is set, so as an int8 it reads as 240 - 256 = -16.
See this example:
for _, v := range []int16{4336, -129, 8079} {
    fmt.Printf("Source    : %v\n", v)
    fmt.Printf("Source hex: %4x\n", uint16(v))
    fmt.Printf("Result hex: %4x\n", uint8(int8(v)))
    fmt.Printf("Result    : %4v\n", int8(v))
    fmt.Println()
}
Output (try it on the Go Playground):
Source    : 4336
Source hex: 10f0
Result hex:   f0
Result    :  -16

Source    : -129
Source hex: ff7f
Result hex:   7f
Result    :  127

Source    : 8079
Source hex: 1f8f
Result hex:   8f
Result    : -113
See related questions:
When casting an int64 to uint64, is the sign retained?
Format printing the 64bit integer -1 as hexadecimal deviates between golang and C

You're building and running the program in 32-bit mode (the Go Playground?). There, int is 32 bits wide and behaves the same as int32.
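If you want to check which mode you are running in, strconv.IntSize reports the width of int in bits; a quick sketch:

package main

import (
    "fmt"
    "strconv"
)

func main() {
    fmt.Println("int is", strconv.IntSize, "bits wide") // 32 or 64
}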

Related

Odd behavior of Golang type conversion [duplicate]

Why does the code below fail to compile?
package main

import (
    "fmt"
    "unsafe"
)

var x int = 1

const (
    ONE     int = 1
    MIN_INT int = ONE << (unsafe.Sizeof(x)*8 - 1)
)

func main() {
    fmt.Println(MIN_INT)
}
I get an error:
main.go:12: constant 2147483648 overflows int
The statement is correct: 2147483648 does overflow int (on a 32-bit architecture). But the shift operation should result in a negative value, i.e. -2147483648.
The same code works if I change the constants into variables, and I get the expected output.
package main

import (
    "fmt"
    "unsafe"
)

var x int = 1

var (
    ONE     int = 1
    MIN_INT int = ONE << (unsafe.Sizeof(x)*8 - 1)
)

func main() {
    fmt.Println(MIN_INT)
}
There is a difference in evaluation between constant and non-constant expressions that arises from constants being exact:
Numeric constants represent exact values of arbitrary precision and do not overflow.
Typed constant expressions cannot overflow; if the result cannot be represented by its type, it's a compile-time error (this can be detected at compile-time).
The same thing does not apply to non-constant expressions, as overflow can't be detected at compile time (it could only be detected at runtime). Operations on variables can overflow.
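For example, a minimal sketch of a runtime overflow that would be rejected as a constant expression:

package main

import (
    "fmt"
    "math"
)

func main() {
    // const c int32 = math.MaxInt32 + 1 // would not compile: overflows int32
    var i int32 = math.MaxInt32 // 2147483647
    i++                         // overflowing a variable is not an error...
    fmt.Println(i)              // ...it wraps around: -2147483648
}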
In your first example ONE is a typed constant with type int. This constant expression:
ONE << (unsafe.Sizeof(x)*8 - 1)
Is a constant shift expression, and the following applies (Spec: Constant expressions):
If the left operand of a constant shift expression is an untyped constant, the result is an integer constant; otherwise it is a constant of the same type as the left operand, which must be of integer type.
So the result of the shift expression must fit into an int because this is a constant expression; but since it doesn't, it's a compile-time error.
In your second example ONE is not a constant, it's a variable of type int. So the shift expression here may, and will, overflow, resulting in the expected negative value.
Notes:
Should you change ONE in the second example to a constant instead of a variable, you'd get the same error (as the expression in the initializer would be a constant expression). Should you change ONE to a variable in the first example, it wouldn't compile, as variables cannot be used in constant expressions (the initializer of a constant must itself be a constant expression).
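A sketch of both failure modes described in the note, with each offending declaration commented out so the file still compiles (error messages are approximate):

package main

import (
    "fmt"
    "unsafe"
)

var x int = 1

// Constant ONE in a constant initializer: the initializer is a constant
// expression, so on a 32-bit platform this fails to compile:
// const ONE int = 1
// const MIN_INT int = ONE << (unsafe.Sizeof(x)*8 - 1) // constant 2147483648 overflows int

// Variable ONE in a constant initializer: fails to compile everywhere,
// because a constant cannot be initialized from a variable:
// var ONE int = 1
// const MIN_INT int = ONE << (unsafe.Sizeof(x)*8 - 1) // not a constant expression

func main() {
    // The all-variable version overflows (wraps) at runtime instead:
    one := 1
    fmt.Println(one << (unsafe.Sizeof(x)*8 - 1)) // -2147483648 on 32-bit
}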
Constant expressions to find min-max values
You may use the following solution which yields the max and min values of uint and int types:
const (
    MaxUint = ^uint(0)
    MinUint = 0
    MaxInt  = int(MaxUint >> 1)
    MinInt  = -MaxInt - 1
)

func main() {
    fmt.Printf("uint: %d..%d\n", MinUint, MaxUint)
    fmt.Printf("int:  %d..%d\n", MinInt, MaxInt)
}
Output (try it on the Go Playground):
uint: 0..4294967295
int: -2147483648..2147483647
The logic behind it lies in the Spec: Constant expressions:
The mask used by the unary bitwise complement operator ^ matches the rule for non-constants: the mask is all 1s for unsigned constants and -1 for signed and untyped constants.
So the typed constant expression ^uint(0) is of type uint and is the max value of uint: it has all its bits set to 1. Given that integers are represented using two's complement, shifting this to the right by 1 gives the max int value, and the min int value is then -MaxInt - 1 (the extra -1 accounts for the 0 value).
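The same derivation on a fixed-width 8-bit type, where the bit patterns are easy to see (a minimal sketch):

package main

import "fmt"

func main() {
    maxUint8 := ^uint8(0)          // 11111111 = 255
    maxInt8 := int8(maxUint8 >> 1) // 01111111 = 127
    minInt8 := -maxInt8 - 1        // 10000000 = -128
    fmt.Println(maxUint8, maxInt8, minInt8) // 255 127 -128
}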
Reasoning for the different behavior
Why is there no overflow for constant expressions and overflow for non-constant expressions?
The latter is easy: in most other programming languages there is overflow, so this behavior is consistent with other languages, and it has its benefits.
The real question is the first: why isn't overflow allowed for constant expressions?
Constants in Go are more than values of typed variables: they represent exact values of arbitrary precision. Staying with the word exact: if you have a value that you want to assign to a typed constant, allowing overflow and assigning a completely different value doesn't really live up to exact.
Moreover, this type checking and the disallowing of overflow can catch mistakes like this one:
type Char byte
var c1 Char = 'a' // OK
var c2 Char = '世' // Compile-time error: constant 19990 overflows Char
What happens here? c1 Char = 'a' works because 'a' is a rune constant (rune is an alias for int32) with numeric value 97, which fits into byte's valid range (0..255).
But c2 Char = '世' results in a compile-time error because the rune '世' has numeric value 19990, which doesn't fit into a byte. If overflow were allowed, your code would compile and assign the numeric value 22 ('\x16', i.e. 19990 modulo 256) to c2, but obviously this wasn't your intent. By disallowing overflow this mistake is easily caught, and at compile time.
To verify the results:
var c1 Char = 'a'
fmt.Printf("%d %q %c\n", c1, c1, c1)
// var c2 Char = '世' // Compile-time error: constant 19990 overflows Char
r := '世'
var c2 Char = Char(r)
fmt.Printf("%d %q %c\n", c2, c2, c2)
Output (try it on the Go Playground):
97 'a' a
22 '\x16'
To read more about constants and their philosophy, read the blog post: The Go Blog: Constants
And a couple more questions (with answers) that are related and/or interesting:
Golang: on-purpose int overflow
How does Go perform arithmetic on constants?
Find address of constant in go
Why do these two float64s have different values?
How to change a float64 number to uint64 in a right way?
Writing powers of 10 as constants compactly

Different behavior of len() with const or non-const value

The code below
const s = "golang.go"
var a byte = 1 << len(s) / 128
The result of a is 4. However, after changing const s to var s as follows:
var s = "golang.go"
var a byte = 1 << len(s) / 128
The result of a is now 0.
Here are some more test cases:
const s = "golang.go"
var a byte = 1 << len(s) / 128 // the result of a is 4
var b byte = 1 << len(s[:]) / 128 // the result of b is 0
var ss = "golang.go"
var aa byte = 1 << len(ss) / 128 // the result of aa is 0
var bb byte = 1 << len(ss[:]) / 128 // the result of bb is 0
It is weird that b is 0 when evaluating the length of s[:].
I tried to understand it from the Go spec:
The expression len(s) is constant if s is a string constant. The expressions len(s) and cap(s) are constants if the type of s is an array or pointer to an array and the expression s does not contain channel receives or (non-constant) function calls
But I failed. Could someone explain it more clearly to me?
The difference is that when s is constant, the expression is interpreted and executed as a constant expression, using untyped integer type and resulting in int type. When s is a variable, the expression is interpreted and executed as a non-constant expression, using byte type.
Spec: Operators:
The right operand in a shift expression must have integer type or be an untyped constant representable by a value of type uint. If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.
The quoted part applies when s is a variable. The expression is a non-constant shift expression (1 << len(s)) because s is a variable (so len(s) is non-constant), and the left operand is an untyped constant (1). So 1 is converted to a type it would assume if the shift expression were replaced by its left operand alone:
var a byte = 1 << len(s) / 128
is replaced by
var a byte = 1 / 128
In this variable declaration, the byte type will be used because that is the type of the variable a. So back to the original: byte(1) shifted left by 9 is 0, and dividing that by 128 is also 0. (This also explains b and bb: even when s is a constant, s[:] is a non-constant expression, so len(s[:]) is not a constant and the same non-constant rule applies.)
And when s is constant, int will be used because Spec: Constant expressions:
If the left operand of a constant shift expression is an untyped constant, the result is an integer constant; otherwise it is a constant of the same type as the left operand, which must be of integer type.
Here 1 will not be converted to byte; 1 << len(s), i.e. 1 << 9, is 512, which divided by 128 is 4.
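A minimal program showing the two cases next to each other (assuming a Go version that accepts signed shift counts, i.e. Go 1.13 or later):

package main

import "fmt"

const cs = "golang.go" // len(cs) is a constant
var vs = "golang.go"   // len(vs) is not a constant

func main() {
    var a byte = 1 << len(cs) / 128 // constant expression: 512/128 = 4
    var b byte = 1 << len(vs) / 128 // byte(1)<<9 == 0, then 0/128 == 0
    fmt.Println(a, b)               // 4 0
}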
Constants in Go behave differently than you might expect: they are arbitrary-precision and untyped.
With const consts = "golang.go", the expression 1 << len(consts) / 128 is a constant expression, evaluated with arbitrary precision, resulting in the untyped integer 4, which can be assigned to a byte, resulting in a == 4.
With var vars = "golang.go", the expression 1 << len(vars) / 128 is no longer a constant expression and has to be evaluated as some typed integer. How this happens is defined in https://go.dev/ref/spec#Operators:
The right operand in a shift expression must have integer type or be an untyped constant representable by a value of type uint. If the left operand of a non-constant shift expression is an untyped constant, it is first implicitly converted to the type it would assume if the shift expression were replaced by its left operand alone.
The second sentence applies to your problem. The 1 is converted to "the type it would assume" (read: will assume). Spelled out, this is byte(1) << len(vars), which is 0.
https://go.dev/blog/constants

What's happening with this method?

type IntSet struct {
    words []uint64
}

func (s *IntSet) Has(x int) bool {
    word, bit := x/64, uint(x%64)
    return word < len(s.words) && s.words[word]&(1<<bit) != 0
}
Let's go through what I think is going on:
A new type is declared called IntSet. Underneath its new type declaration it is a uint64 slice.
A method is created called Has(). It can only receive IntSet types; after playing around with ints she returns a bool.
Before she can play she needs two ints. She stores these babies on the stack.
Lost for words
This method's purpose is to report whether the set contains the non-negative value x. Here is the Go test:
func TestExample1(t *testing.T) {
    //!+main
    var x, y IntSet
    fmt.Println(x.Has(9), x.Has(123)) // "true false"
    //!-main
    // Output:
    // true false
}
Looking for some guidance understanding what this method is doing inside, and why the programmer did it in such a complicated way (I feel like I am missing something).
The return statement:
return word < len(s.words) && s.words[word]&(1<<bit) != 0
Is the order of operations this?
return (word < len(s.words)) && ((s.words[word] & (1 << bit)) != 0)
And what are the [word] and & doing within:
s.words[word]&(1<<bit) != 0
edit: I am beginning to see that:
s.words[word]&(1<<bit) != 0
is just indexing into a slice, but I don't understand the &.
As I read the code, I scribbled some notes:
package main

import "fmt"

// A set of bits
type IntSet struct {
    // bits are grouped into 64-bit words
    words []uint64
}

// x is the index for a bit
func (s *IntSet) Has(x int) bool {
    // the word index for the bit
    word := x / 64
    // the bit index within a word for the bit
    bit := uint(x % 64)
    if word < 0 || word >= len(s.words) {
        // error: word index out of range
        return false
    }
    // the bit set within the word
    mask := uint64(1 << bit)
    // true if the bit in the word is set
    return s.words[word]&mask != 0
}

func main() {
    nBits := 2*64 + 42
    // round up to whole words
    nWords := (nBits + (64 - 1)) / 64
    bits := IntSet{words: make([]uint64, nWords)}
    // bit 127 = 1*64 + 63
    bits.words[1] = 1 << 63
    fmt.Printf("%b\n", bits.words)
    for i := 0; i < nWords*64; i++ {
        has := bits.Has(i)
        if has {
            fmt.Println(i, has)
        }
    }
    has := bits.Has(127)
    fmt.Println(has)
}
Playground: https://play.golang.org/p/rxquNZ_23w1
Output:
[0 1000000000000000000000000000000000000000000000000000000000000000 0]
127 true
true
The Go Programming Language Specification
Arithmetic operators
& bitwise AND integers
peterSO's answer is spot on - read it. But I figured this might also help you understand.
Imagine I want to store some random numbers in the range 1 - 8. After I store these numbers I will be asked if the number n (also in the range of 1 - 8) appears in the numbers I recorded earlier. How would we store the numbers?
One, probably obvious, way would be to store them in a slice or maybe a map. Maybe we would choose a map since lookups will be constant time. So we create our map
seen := map[uint8]struct{}{}
Our code might look something like this
type IntSet struct {
    seen map[uint8]struct{} // note: the map must be initialized (e.g. with make) before use
}

func (i *IntSet) AddValue(v uint8) {
    i.seen[v] = struct{}{}
}

func (i *IntSet) Has(v uint8) bool {
    _, ok := i.seen[v]
    return ok
}
For each number we store we take up (at least) 1 byte (8 bits) of memory. If we were to store all 8 numbers we would be using 64 bits / 8 bytes.
However, as the name implies, this is an int Set. We don't care about duplicates, we only care about membership (which Has provides for us).
But there is another way we could store these numbers, and we could do it all within a single byte. Since a byte provides 8 bits, we can use these 8 bits as markers for values we have seen. The initial value (in binary notation) would be
00000000 == uint8(0)
If we did an AddValue(3) we could set the 3rd bit and end up with

00000100 == uint8(4)
     ^
     |______ 3rd bit

If we then called AddValue(8) we would have

10000100 == uint8(132)
^    ^
|    |______ 3rd bit
|___________ 8th bit
So after adding 3 and 8 to our IntSet, the internally stored integer value is 132. But how do we take 132 and figure out whether a particular bit is set? Easy: we use bitwise operators.
The & operator is a logical AND. It will return the value of the bits common between the numbers on each side of the operator. For example
  10001100    01110111    11111111
& 01110100  & 01110000  & 00000001
  --------    --------    --------
  00000100    01110000    00000001
So to find out if n is in our set we simply do
our_set_value & (1 << (value_we_are_looking_for - 1))
which if we were searching for 4 would yield
  10000100
& 00001000
  --------
  00000000 <-- so 4 is not present
or if we were searching for 8
  10000100
& 10000000
  --------
  10000000 <-- so 8 is present
You may have noticed I subtracted 1 from our value_we_are_looking_for. This is because I am fitting the values 1-8 into our 8-bit number. If we only wanted to store seven numbers, we could skip the very first bit and assume our counting starts at bit #2; then we wouldn't have to subtract 1, like the code you posted does.
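Here is a compilable sketch of that single-byte set; ByteSet, Add and Has are hypothetical names used only for illustration:

package main

import "fmt"

// ByteSet tracks membership of the numbers 1-8 in a single byte.
type ByteSet uint8

// Add records v (1-8) by setting bit v-1.
func (s *ByteSet) Add(v uint8) { *s |= 1 << (v - 1) }

// Has reports whether v (1-8) was added.
func (s ByteSet) Has(v uint8) bool { return s&(1<<(v-1)) != 0 }

func main() {
    var s ByteSet
    s.Add(3)
    s.Add(8)
    fmt.Printf("%08b\n", s)         // 10000100
    fmt.Println(s.Has(4), s.Has(8)) // false true
}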
Assuming you understand all of that, here's where things get interesting. So far we have been storing our values in a uint8 (so we could only have 8 values, or 7 if we omit the first bit). But there are larger types with more bits, like uint64. Instead of 8 values, we can store 64 values! But what happens if the range of values we want to track exceeds 1-64? What if we want to store 65? This is where the slice of words in the original code comes in.
Since the code posted skips the first bit, from now on I will do so as well.
We can use the first uint64 to store the numbers 1-63. When we want to store the numbers 64-127 we need a new uint64. So our slice would look something like
[ uint64_of_1-63, uint64_of_64-127, uint64_of_128-191, etc. ]
Now, to answer the question about whether a number is in our set, we first need to find the uint64 whose range contains our number. If we were searching for 110 we would want to use the uint64 at index 1 (uint64_of_64-127), because 110 falls in that range.
To find the index of the word we need to look at, we take the whole number value of n / 64. In the case of 110 we would get 1, which is exactly what we want.
Now we need to examine the specific bit of that number. The bit that needs to be checked would be the remainder when dividing 110 by 64, or 46. So if the 46th bit of the word at index 1 is set, then we have seen 110 before.
This is how it might look in code
type IntSet struct {
    words []uint64
}

func (s *IntSet) Has(x int) bool {
    word, bit := x/64, uint(x%64)
    return word < len(s.words) && s.words[word]&(1<<bit) != 0
}

func (s *IntSet) AddValue(x int) {
    word := x / 64
    bit := x % 64
    if word < len(s.words) {
        s.words[word] |= (1 << uint64(bit))
    }
}
And here is some code to test it
func main() {
    rangeUpper := 1000
    bits := IntSet{words: make([]uint64, (rangeUpper/64)+1)}
    bits.AddValue(127)
    bits.AddValue(8)
    bits.AddValue(63)
    bits.AddValue(64)
    bits.AddValue(998)
    fmt.Printf("%b\n", bits.words)
    for i := 0; i < rangeUpper; i++ {
        if ok := bits.Has(i); ok {
            fmt.Printf("Found %d\n", i)
        }
    }
}
OUTPUT
Found 8
Found 63
Found 64
Found 127
Found 998
Playground of above
Note
The |= is another bitwise operator, OR. It combines the two values, keeping a 1 anywhere either value has a 1:

  10000000    00000001    00000001
| 01000000  | 10000000  | 00000001
  --------    --------    --------
  11000000    10000001    00000001 <-- important that we can set
                                       the same bit multiple times
Using this method we can reduce the cost of storing membership for the numbers 1-65535 from roughly 131KB (two bytes per number in a map) to just 8KB (one bit per number). This type of bit manipulation for set membership is very common in implementations of Bloom Filters.
An IntSet represents a set of integers. The presence in the set of any integer in a contiguous range can be recorded by setting a single bit in the IntSet. Likewise, checking whether a specific integer is in the IntSet can be done by checking whether the bit corresponding to that integer is set.
So the code is finding the specific uint64 in the Intset corresponding to the integer:
word := x/64
and then the specific bit in that uint64:
bit := uint(x%64)
and then checking first that the integer being tested is in the range supported by the IntSet:
word < len(s.words)
and then whether the specific bit corresponding to the specific integer is set:
&& s.words[word]&(1<<bit) != 0
This part:
s.words[word]
pulls out the specific uint64 of the IntSet that tracks whether the integer in question is in the set.
&
is a bitwise AND.
(1<<bit)
means take a 1, shift it to the bit position representing the specific integer being tested.
Performing the bitwise AND between the word in question and the bit-shifted 1 returns 0 if the bit corresponding to the integer is not set, and a non-zero value if it is set (meaning the integer in question is a member of the IntSet).
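A concrete trace of Has(127) as a standalone sketch, assuming a set in which only 127 was added:

package main

import "fmt"

func main() {
    x := 127
    word, bit := x/64, uint(x%64) // word == 1, bit == 63
    words := []uint64{0, 1 << 63} // only bit 127 of the set is set
    fmt.Println(word < len(words) && words[word]&(1<<bit) != 0) // true
}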

go - encoding unsigned 16 bit float in binary

In Go, how can I encode a float into a byte array as a 16 bit unsigned float with 11 explicit bits of mantissa and 5 bits of explicit exponent?
There doesn't seem to be a clean way to do it. The only thing I can think of is encoding it as in Convert byte array "[]uint8" to float64 in GoLang and manually truncating the bits.
Is there a "go" way to do this?
Here's the exact definition:
A 16 bit unsigned float with 11 explicit bits of mantissa and 5 bits of explicit exponent
The bit format is loosely modeled after IEEE 754. For example, 1 microsecond is represented as 0x1, which has an exponent of zero, presented in the 5 high-order bits, and a mantissa of 1, presented in the 11 low-order bits. When the explicit exponent is greater than zero, an implicit high-order 12th bit of 1 is assumed in the mantissa. For example, a floating-point value of 0x800 has an explicit exponent of 1, as well as an explicit mantissa of 0, but then has an effective mantissa of 4096 (the 12th bit is assumed to be 1). Additionally, the actual exponent is one less than the explicit exponent, and the value represents 4096 microseconds. Any values larger than the representable range are clamped to 0xFFFF.
I am not sure whether I understand the encoding correctly (see my comment on the original question), but here is a function which may do what you want:
func EncodeFloat(seconds float64) uint16 {
    us := math.Floor(1e6*seconds + 0.5)
    if us < 0 {
        panic("cannot encode negative value")
    } else if us > (1<<30)*4095+0.5 {
        return 0xffff
    }
    usInt := uint64(us)
    expBits := uint16(0)
    if usInt >= 2048 {
        exp := uint16(1)
        for usInt >= 4096 {
            exp++
            usInt >>= 1
        }
        usInt -= 2048
        expBits = exp << 11
    }
    return expBits | uint16(usInt)
}
(code is at http://play.golang.org/p/G599VOBMcL )
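For completeness, here is a hypothetical inverse; it is a sketch that mirrors the EncodeFloat above (and therefore that reading of the format), not necessarily the quoted definition:

package main

import "fmt"

// DecodeFloat converts the 16-bit format back to seconds.
func DecodeFloat(bits uint16) float64 {
    mantissa := uint64(bits & 0x7FF) // low 11 bits
    exp := bits >> 11                // high 5 bits: the explicit exponent
    us := mantissa
    if exp > 0 {
        // restore the implicit 12th bit, apply the actual exponent (exp-1)
        us = (mantissa | 0x800) << (exp - 1)
    }
    return float64(us) / 1e6 // microseconds back to seconds
}

func main() {
    fmt.Println(DecodeFloat(0x1))    // 1e-06 (1 microsecond)
    fmt.Println(DecodeFloat(0x800))  // 0.002048
    fmt.Println(DecodeFloat(0xffff)) // the largest representable value
}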

Convert uint64 to int64 without loss of information

The problem with the following code:
var x uint64 = 18446744073709551615
var y int64 = int64(x)
is that y is -1. Without loss of information, is the only way to convert between these two number types to use an encoder and decoder?
buff bytes.Buffer
Encoder(buff).encode(x)
Decoder(buff).decode(y)
Note, I am not attempting a straight numeric conversion in your typical case. I am more concerned with maintaining the statistical properties of a random number generator.
Your conversion does not lose any information; all the bits are untouched. It is just that:
uint64(18446744073709551615) = 0xFFFFFFFFFFFFFFFF
int64(-1) = 0xFFFFFFFFFFFFFFFF
Try:
var x uint64 = 18446744073709551615 - 3
and you will have y = -4.
For instance: playground
var x uint64 = 18446744073709551615 - 3
var y int64 = int64(x)
fmt.Printf("%b\n", x)
fmt.Printf("%b or %d\n", y, y)
Output:
1111111111111111111111111111111111111111111111111111111111111100
-100 or -4
Seeing -1 would be consistent with a process running in 32-bit mode.
See for instance the Go 1.1 release notes (which made int 64 bits wide on 64-bit platforms):
x := ^uint32(0) // x is 0xffffffff
i := int(x) // i is -1 on 32-bit systems, 0xffffffff on 64-bit
fmt.Println(i)
Using fmt.Printf("%b\n", y) can help to see what is going on (see ANisus' answer).
As it turned out, the OP wheaties confirmed (in the comments) that it was run initially in 32-bit mode (hence this answer), but then realized that 18446744073709551615 is 0xffffffffffffffff (-1) anyway: see ANisus' answer.
The types uint64 and int64 can both represent 2^64 discrete integer values.
The difference between the two is that uint64 holds only non-negative integers (0 through 2^64-1), whereas int64 holds both negative and positive integers, using 1 bit to hold the sign (-2^63 through 2^63-1).
As others have said, if your generator is producing 0xffffffffffffffff, uint64 will represent this as the raw integer (18,446,744,073,709,551,615) whereas int64 will interpret the two's complement value and return -1.
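If the concern is keeping the generator's bit patterns intact, a sketch of a lossless round trip:

package main

import "fmt"

func main() {
    var x uint64 = 18446744073709551615 // 0xFFFFFFFFFFFFFFFF
    y := int64(x)                       // reinterprets the same 64 bits
    z := uint64(y)                      // converting back restores x exactly
    fmt.Println(y)      // -1
    fmt.Println(z == x) // true
}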
