I am trying to write logic that converts a positive int32 value to the corresponding negative one, i.e., abs(negativeInt32) == positiveInt32.
I have tried both of the following:
First:
fmt.Printf("%v\n", int32(^uint32(int32(2) -1)))
This results in an error: prog.go:8: constant 4294967294 overflows int32
Second:
var b int32 = 2
fmt.Printf("%v\n", int32(^uint32(int32(b)-1)))
This results in -2.
How can these produce different results? I thought they were equivalent.
EDIT
Replaced uint32 with int32 in the first example.
ANSWERED
For those who come across this problem: I have answered the question myself. :)
The two results are different because the first value is converted to an unsigned 32-bit integer (a uint32).
This occurs here: uint32(^uint32(int32(2)-1))
Or more simply: uint32(-2)
An int32 can store any integer between -2147483648 and 2147483647.
That's a total of 4294967296 different integer values (2^32... i.e. 32 bits).
An unsigned int32 can store the same number of distinct integer values, but drops the sign (+/-). In other words, an unsigned int32 can store any value from 0 to 4294967295.
But what happens when we typecast a signed int32 (with a value of -2) to an unsigned int32, which cannot possibly store the value of -2?
Well, as you have found, we get the value 4294967294. In a number system where the value one less than 0 wraps around to 4294967295, 4294967294 is exactly the result of 0 - 2.
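A minimal sketch of this wraparound (the conversion goes through a variable so that it happens at run time rather than on a constant):

package main

import "fmt"

func main() {
    var v int32 = -2
    fmt.Println(uint32(v)) // 4294967294, i.e. 2^32 - 2
}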
Hello, you can simply negate the variable:
var z int32 = 5
a := -z
Eventually, I learned why we cannot do
fmt.Printf("%v\n", int32(^uint32(int32(2)-1)))
at compile time: ^uint32(int32(2)-1) is treated as a constant of type uint32, and its value is 4294967294. This exceeds 2147483647, the maximum value of int32, so when you run go build on the source file a compile error is shown.
The right answer should be:
fmt.Printf("%v\n", ^(int32(2) - 1))
i.e., we first compute 2 - 1 = 1 as an int32 and then take its bitwise complement, which in two's complement representation is the value -2.
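A complete, runnable version of this idiom (a sketch; ^(x-1) is the standard two's-complement identity for -x):

package main

import "fmt"

func main() {
    var x int32 = 2
    neg := ^(x - 1) // bitwise complement of x-1 equals -x in two's complement
    fmt.Println(neg) // -2
}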
However, according to the "An exercise: The largest unsigned int" section of the Go blog's post on constants, this is legal at run time. So the code
var b int32 = 2
fmt.Printf("%v\n", int32(^uint32(int32(b)-1)))
is alright.
And finally, it all comes down to how constants work in Go. :)
Related
I'm writing some inner-loop type input parsing code, and need to compare the length of a buffer to a uint32. The ideal solution would be fast, concise, dumb (easy to see the correctness of), and work for all possible inputs including those in which an attacker maliciously manipulates the values. Integer overflow is a big deal in this context if it can be exploited to crash the program. This is what I have so far:
// Safely check if len(buf) <= size for all possible values of each
func sizeok(buf []MyType, size uint32) bool {
    var n int = len(buf)
    return n == int(uint32(n)) && uint32(n) <= size
}
That's a pain, and can't be abstracted over other slice types.
My questions: First, is this actually correct? (You can never be too careful defending against exploitable integer overflows.) Second, is there a simpler way to do it with a single comparison? Maybe uint(len(buf)) <= uint(size) if that could be guaranteed to work securely on all platforms and inputs? Or uint64(len(buf)) <= uint64(size) if that won't generate suboptimal code on 32-bit platforms?
The Go Programming Language Specification
Length and capacity
The built-in functions len and cap take arguments of various types and
return a result of type int. The implementation guarantees that the
result always fits into an int.
Call        Argument type    Result

len(s)      string type      string length in bytes
            [n]T, *[n]T      array length (== n)
            []T              slice length
            map[K]T          map length (number of defined keys)
            chan T           number of elements queued in channel buffer
Numeric types
uint32 all unsigned 32-bit integers (0 to 4294967295)
int64 all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
len(buf) has type int, which is 32 or 64 bits depending on the implementation. Since int64 can represent every possible int and uint32 value, converting both sides to int64 is safe for any Go type that the built-in len function accepts:
// Safely check if len(buf) <= size for all possible values of each
var size uint32
if int64(len(buf)) <= int64(size) {
    // handle len(buf) <= size
}
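To address the "can't be abstracted over other slice types" complaint, the same comparison can be wrapped in a type-parameterized helper (a sketch assuming Go 1.18+ generics; sizeOK is a hypothetical name):

package main

import "fmt"

// sizeOK reports whether len(buf) <= size. int64 can represent every
// possible int and uint32 value, so neither side can overflow.
func sizeOK[T any](buf []T, size uint32) bool {
    return int64(len(buf)) <= int64(size)
}

func main() {
    buf := make([]byte, 10)
    fmt.Println(sizeOK(buf, 9))  // false
    fmt.Println(sizeOK(buf, 10)) // true
}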
I have an int64 variable containing a negative number, and I wish to subtract it from a uint64 variable containing a positive number:
var endTime uint64
now := time.Now().Unix()
endTime = uint64(now)
var interval int64
interval = -3600
endTime = endTime + uint64(interval)
The above code appears to work, but I wonder if I can rely on this. Being new to Go, I am surprised that after casting a negative number to uint64 it remains negative -- I had planned to subtract the now-positive value (after casting) to get what I wanted.
Converting a signed number to an unsigned type will not keep it negative; it can't, as the valid range of unsigned types doesn't include negative numbers. If you print uint64(interval), you will certainly see a positive number printed.
What you experience is deterministic and you can rely on it (but that doesn't mean you should). It is the result of Go (and most other programming languages) storing signed integer types in the two's complement representation.
What this means is that for negative numbers, using n bits, the value -x (where x is positive) is stored as the binary representation of the positive value 2^n - x. This has the advantage that numbers can be added bitwise, and the result will be correct regardless of whether they are negative or positive.
So when you have a negative signed number, it is stored in memory as if you had subtracted its absolute value from 0. Which means that if you convert a negative signed value to unsigned and add it to an unsigned value, the result will be correct, because overflow happens in a useful way.
Converting a value of type int64 to uint64 does not change the memory layout, only the type. Whatever 8 bytes the int64 had, the converted uint64 will have those same 8 bytes. And as mentioned above, the bit pattern stored in those 8 bytes is identical to the bit pattern of the value 0 - abs(x). So the result of the conversion is the number you would get by subtracting abs(x) from 0 in the unsigned world. Yes, this won't be negative (the type is unsigned); instead it will be a "big" number, counting down from the max value of the uint64 type. But if you add a number y bigger than abs(x) to this "big" number, overflow happens, and the result will be y - abs(x).
See this simple example demonstrating what's happening (try it on the Go Playground):
a := uint8(100)
b := int8(-10)
fmt.Println(uint8(b)) // Prints 226 which is: 0 - 10 = 256 - 10
a = a + uint8(b)
fmt.Println(a) // Prints 90 which is: 100 + 226 = 326 = 90
// after overflow: 326 - 256 = 90
As mentioned above, you should not rely on this, as this may cause confusion. If you intend to work with negative numbers, then use signed types.
And if you work with a code base that already uses uint64 values, then do a subtraction instead of addition, using uint64 values:
interval := uint64(3600)
endTime -= interval
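Note that uint64 subtraction wraps around as well, so if the interval could ever exceed endTime you would want a guard first; a minimal sketch (the values here are arbitrary):

package main

import "fmt"

func main() {
    var endTime uint64 = 1000
    var interval uint64 = 3600

    // Subtracting a larger uint64 would wrap to a huge value, so guard.
    if interval <= endTime {
        endTime -= interval
    } else {
        endTime = 0 // or treat the underflow as an error
    }
    fmt.Println(endTime) // 0, not a huge wrapped value
}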
Also note that if you have time.Time values, you should take advantage of its Time.Add() method:
func (t Time) Add(d Duration) Time
You may specify a time.Duration to add to the time, which may be negative if you want to go back in time, like this:
t := time.Now()
t = t.Add(-3600 * time.Second)
time.Duration is also more expressive: in the value specified above, we can see at a glance that it is in seconds.
Bitwise manipulation and Go newbie here :D I am reading some data from a sensor with Go, and I get it as 2 bytes, e.g. 0xFFFE. It is easy to cast it to uint16, since in Go we can just do uint16(0xFFFE), but what I need is to convert it to a signed integer, because the sensor in fact returns values in the range -32768 to 32767. I thought "Maybe Go will be this nice, and if I do int16(0xFFFE) it will understand what I want?", but no. I ended up using the following solution (translated from some Python code I found on the web):
x := 0xFFFE
if (x & (1 << 15)) != 0 {
x = x - (1<<16)
}
It seems to work, but A) I am not entirely sure why, and B) it looks a bit ugly compared to what I imagined should be a trivial solution for casting uint16 to int16. Could anyone give me a hand and clarify why this is the only way to do it? Or is there another possible way?
But what you want works, "Go is nice":
ui := uint16(0xFFFE)
fmt.Println(ui)
i := int16(ui)
fmt.Println(i)
Output (try it on the Go Playground):
65534
-2
int16(0xFFFE) doesn't work because 0xFFFE is an untyped integer constant which cannot be represented by a value of type int16; that's why the compiler complains. But you can certainly convert any non-constant uint16 value to int16.
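If the sensor data arrives as a raw two-byte slice, the standard encoding/binary package can decode the uint16 before the conversion (a sketch assuming big-endian byte order, which your sensor may or may not use):

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    raw := []byte{0xFF, 0xFE} // hypothetical sensor reading
    u := binary.BigEndian.Uint16(raw)
    fmt.Println(int16(u)) // -2
}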
See possible duplicates:
Golang: on-purpose int overflow
Does go compiler's evaluation differ for constant expression and other expression
Go's builtin len() function returns a signed int. Why wasn't a uint used instead?
Is it ever possible for len() to return something negative?
As far as I can tell, the answer is no:
Arrays: "The number of elements is called the length and is never negative."
Slices: "At any time the following relationship holds: 0 <= len(s) <= cap(s)"
Maps "The number of map elements is called its length". (I couldn't find anything in the spec that explicitly restricts this to a nonnegative value, but it's difficult for me to understand how there could be fewer than 0 elements in a map)
Strings "A string value is a (possibly empty) sequence of bytes.... The length of a string s (its size in bytes) can be discovered using the built-in function len()" (Again, hard to see how a sequence could have a negative number of bytes)
Channels "number of elements queued in channel buffer (ditto)
len() (and cap()) return int because that is what is used to index slices and arrays (not uint). So the question is more "Why does Go use signed integers to index slices/arrays when there are no negative indices?".
The answer is simple: it is common to compute an index, and such computations underflow much too easily if done in unsigned integers. Innocent-looking code like i := a - b + 7 might yield i == 4294967291 for innocent values of a and b like 6 and 18. Such an index will probably overflow your slice. Lots of index calculations happen around 0 and are tricky to get right using unsigned integers; these bugs hide behind mathematically sensible and sound formulas. This is neither safe nor convenient.
This is a tradeoff based on experience: Underflow tends to happen often for index calculations done with unsigned ints while overflow is much less common if signed integers are used for index calculations.
Additionally: There is basically zero benefit from using unsigned integers in these cases.
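A minimal sketch of such an underflow (values chosen to match the numbers above):

package main

import "fmt"

func main() {
    var a, b uint32 = 6, 18
    i := a - b + 7 // 6 - 18 + 7 = -5, which wraps modulo 2^32
    fmt.Println(i) // 4294967291
}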
There is a proposal in progress "issue 31795 Go 2: change len, cap to
return untyped int if result is constant"
It might be included in Go 1.14 (Q1 2020).
we should be able to do it for len and cap without problems - and indeed
there aren't any in the stdlib, as type-checking it via a modified type
checker shows
See CL 179184 as a PoC: this is still experimental.
As noted below by peterSO, this has been closed.
Robert Griesemer explains:
As you noted, the problem with making len always untyped is the size of the
result. For booleans (and also strings) the size is known, no matter what
kind of boolean (or string).
Russ Cox added:
I am not sure the costs here are worth the benefit. Today there is a simple
rule: len(x) has type int. Changing the type to depend on what x is
will interact in non-orthogonal ways with various code changes. For example,
under the proposed semantics, this code compiles:
const x string = "hello"
func f(uintptr)
...
f(len(x))
but suppose then someone comes along and wants to be able to modify x for
testing or something like that, so they s/const/var/. That's usually fairly
safe, but now the f(len(x)) call fails to type-check, and it will be
mysterious why it ever worked.
This change seems like it might add more rough edges than it removes.
Length and capacity
The built-in functions len and cap take arguments of various types and
return a result of type int. The implementation guarantees that the
result always fits into an int.
Go is a strongly typed language, so if len() returned uint then instead of:
i := 0 // int
if len(a) == i {
}
you would have to write:
if len(a) == uint(i) {
}
or:
if int(len(a)) == i {
}
Also See:
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted
bits of a pointer value
Also, there is compatibility with C to consider (cgo): C.size_t, and the size of an array in C being of type int.
From the spec:
The length is part of the array's type; it must evaluate to a non-negative constant representable by a value of type int. The length of array a can be discovered using the built-in function len. The elements can be addressed by integer indices 0 through len(a)-1. Array types are always one-dimensional but may be composed to form multi-dimensional types.
I realize it's maybe a little circular to say the spec dictates X because the spec dictates Y, but since the length can't exceed the maximum value of an int, it's just as impossible for len to return a uint-exclusive value as for it to return a negative value.
I'm using the murmur2 hash function, which returns a uint64.
I then want to store it in PostgreSQL, which only supports BIGINT (signed 64 bits).
As I'm not interested in the number itself, but just its binary value (I use it as an id for detecting uniqueness, and with a set of only ~1000 values a 64-bit hash is enough for me), I would like to convert it to int64 by "just" changing the type.
How does one do that in a way that pleases the compiler?
You can simply use a type conversion:
i := uint64(0xffffffffffffffff)
i2 := int64(i)
fmt.Println(i, i2)
Output:
18446744073709551615 -1
Converting uint64 to int64 always succeeds: it doesn't change the memory representation, just the type. What may confuse you is trying to convert an untyped integer constant value to int64:
i3 := int64(0xffffffffffffffff) // Compile time error!
This is a compile time error as the constant value 0xffffffffffffffff (which is represented with arbitrary precision) does not fit into int64 because the max value that fits into int64 is 0x7fffffffffffffff:
constant 18446744073709551615 overflows int64
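A round-trip sketch (the literal below stands in for a murmur2 hash value; the bits survive both conversions unchanged):

package main

import "fmt"

func main() {
    var h uint64 = 0xFFFFFFFFFFFFFFFF // stand-in for a murmur2 result

    stored := int64(h)     // the value written to the BIGINT column
    back := uint64(stored) // the value recovered after reading it back

    fmt.Println(stored, back == h) // -1 true
}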