Why do these two golang integer conversion functions give different results? - go

I wrote a function to convert a byte slice to an integer.
The function I created is actually a loop-based implementation of
what Rob Pike published here:
http://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
Here is Rob's code:
i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);
My first implementation (toInt2 in the playground) doesn't work as
I expected because it appears to initialize the int value as a uint.
This seems really strange, but it must be platform-specific, because
the Go playground reports a different result than my machine (a Mac).
Can anyone explain why these functions behave differently on my mac?
Here's the link to the playground with the code: http://play.golang.org/p/FObvS3W4UD
Here's the code from the playground (for convenience):
/*
Output on my machine:
amd64 darwin go1.3 input: [255 255 255 255]
-1
4294967295
Output on the go playground:
amd64p32 nacl go1.3 input: [255 255 255 255]
-1
-1
*/
package main

import (
    "fmt"
    "runtime"
)

func main() {
    input := []byte{255, 255, 255, 255}
    fmt.Println(runtime.GOARCH, runtime.GOOS, runtime.Version(), "input:", input)
    fmt.Println(toInt(input))
    fmt.Println(toInt2(input))
}

func toInt(bytes []byte) int {
    var value int32 = 0 // initialized with int32
    for i, b := range bytes {
        value |= int32(b) << uint(i*8)
    }
    return int(value) // converted to int
}

func toInt2(bytes []byte) int {
    var value int = 0 // initialized with plain old int
    for i, b := range bytes {
        value |= int(b) << uint(i*8)
    }
    return value
}

This is an educated guess, but the int type can be 64 or 32 bits depending on the platform. On your system and mine it's 64 bits; since the playground runs on nacl, it's 32 bits.
If you change the second function to use uint all around, it will work fine.
From the spec:
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted bits of a pointer value

int is allowed to be 32 or 64 bits, depending on platform/implementation. When it is 64 bits, it can represent 4294967295 (that is, 2^32 - 1) as a positive signed integer, which is what happens on your machine. When it is 32 bits (the playground), it overflows as you expect.
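To make the result platform-independent, one option (a sketch along the lines of the first answer's suggestion, not code from the question) is to accumulate into a fixed-width unsigned type:

```go
package main

import "fmt"

// toUint32 accumulates into a fixed-width uint32, so the result does not
// depend on whether int is 32 or 64 bits on the target platform.
func toUint32(bytes []byte) uint32 {
    var value uint32
    for i, b := range bytes {
        value |= uint32(b) << uint(i*8)
    }
    return value
}

func main() {
    fmt.Println(toUint32([]byte{255, 255, 255, 255})) // 4294967295 everywhere
}
```

The caller can then decide whether to reinterpret the bits as int32 or widen to int64.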

Related

No panic when converting int to uint?

I'm confused about the following type conversion. I would expect both uint conversions to panic.
a := -1
_ = uint(a) // why no panic?
_ = uint(-1) // panics: constant -1 overflows uint
Why doesn't it panic in line 2?
https://play.golang.org/p/jcfDL8km2C
As mentioned in issue 6923:
T(c) where T is a type and c is a constant means to treat c as having type T rather than one of the default types.
It gives an error if c can not be represented in T, except that for float and complex constants we quietly round to T as long as the value is not too large.
Here:
const x uint = -1
var x uint = -1
This doesn't work because -1 cannot be (implicitly) converted to a uint.
_ = uint(a) // why no panic?
Because a is not an untyped constant, but a typed variable (int). See Playground and "what's wrong with Golang constant overflows uint64":
package main

import "fmt"

func main() {
    a := -1
    _ = uint(a) // why no panic?
    var b uint
    b = uint(a)
    fmt.Println(b)
    // _ = uint(-1) // compile error: constant -1 overflows uint
}
Result: 4294967295 (on a 32-bit system) or 18446744073709551615 (on a 64-bit system), as commented by starriet.
These are the specific rules for the conversion of non-constant numeric values:
When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended.
It is then truncated to fit in the result type's size.
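A small example (mine, not from the answer) that walks through those two rules: sign extension for signed sources, then truncation to the destination size:

```go
package main

import "fmt"

func main() {
    var a int8 = -1 // bit pattern: 11111111

    // Signed source: sign-extended, so the value is preserved.
    fmt.Println(int16(a)) // -1

    // Same sign extension, but the 16-bit pattern 0xFFFF is then
    // reinterpreted as unsigned; no run-time panic occurs.
    fmt.Println(uint16(a)) // 65535

    // Truncation: converting to a narrower type discards high bits.
    var big int32 = 0x12345 // low byte is 0x45
    fmt.Println(int8(big)) // 69
}
```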

Two's complement and fmt.Printf

So computers use Two's complement to internally represent signed integers. I.e., -5 is represented as ^5 + 1 = "1111 1011".
However, trying to print the binary representation, e.g. the following code:
var i int8 = -5
fmt.Printf("%b", i)
Outputs -101. Not quite what I'd expect. Is the formatting different or is it not using Two's complement after all?
Interestingly, converting to an unsigned int results in the "correct" bit pattern:
var u uint8 = uint8(i)
fmt.Printf("%b", u)
Output is 11111011 - exactly the 2s complement of -5.
So it seems to me the value really is using Two's complement internally, but the formatting is printing the unsigned 5 and prepending a -.
Can somebody clarify this?
I believe the answer lies in how the fmt package formats binary numbers, rather than in the internal representation.
If you take a look at fmt.integer, one of the very first things the function does is convert a negative signed integer to a positive one:
negative := signedness == signed && a < 0
if negative {
    a = -a
}
There's then logic to append a - in front of the string that's output here.
In other words, -101 really is a - prepended to 5 in binary.
Note: fmt.integer is called from pp.fmtInt64 in print.go, itself called from pp.printArg in the same file.
Here is a method without using unsafe:
package main

import (
    "fmt"
    "math/bits"
)

func unsigned8(x uint8) []byte {
    b := make([]byte, 8)
    for i := range b {
        if bits.LeadingZeros8(x) == 0 {
            b[i] = 1
        }
        x = bits.RotateLeft8(x, 1)
    }
    return b
}

func signed8(x int8) []byte {
    return unsigned8(uint8(x))
}

func main() {
    b := signed8(-5)
    fmt.Println(b) // [1 1 1 1 1 0 1 1]
}
In this case you could also use [8]byte, but the above is better if you have
a positive integer, and want to trim the leading zeros.
https://golang.org/pkg/math/bits#RotateLeft
Unsafe pointers can also be used to reinterpret a negative number's bits for binary formatting (though a plain uint8 conversion, as above, achieves the same result):
package main

import (
    "fmt"
    "strconv"
    "unsafe"
)

func bInt8(n int8) string {
    return strconv.FormatUint(uint64(*(*uint8)(unsafe.Pointer(&n))), 2)
}

func main() {
    fmt.Println(bInt8(-5))
}
Output
11111011

Displayed size of Go string variable seems unreal

Please see the example: http://play.golang.org/p/6d4uX15EOQ
package main
import (
"fmt"
"reflect"
"unsafe"
)
func main() {
c := "foofoofoofoofoofofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo"
fmt.Printf("c: %T, %d\n", c, unsafe.Sizeof(c))
fmt.Printf("c: %T, %d\n", c, reflect.TypeOf(c).Size())
}
Output:
c: string, 8 //8 bytes?!
c: string, 8
It seems like such a large string cannot have such a small size! What's going on?
Package unsafe
import "unsafe"
func Sizeof
func Sizeof(v ArbitraryType) uintptr
Sizeof returns the size in bytes occupied by the value v. The size is
that of the "top level" of the value only. For instance, if v is a
slice, it returns the size of the slice descriptor, not the size of
the memory referenced by the slice.
The Go Programming Language Specification
Length and capacity
len(s) string type string length in bytes
You are looking at the "top level", the string descriptor, a pointer to and the length of the underlying string value. Use the len function for the length, in bytes, of the underlying string value.
Conceptually and practically, the string descriptor is a struct containing a pointer and a length, whose sizes (32 or 64 bits) are implementation-dependent. For example,
package main

import (
    "fmt"
    "unsafe"
)

type stringDescriptor struct {
    str *byte
    len int
}

func main() {
    fmt.Println("string descriptor size in bytes:", unsafe.Sizeof(stringDescriptor{}))
}
Output (64 bit):
string descriptor size in bytes: 16
Output (32 bit):
string descriptor size in bytes: 8
A string is essentially a pointer to the data and an int for the length; so it's 8 bytes on 32-bit systems and 16 bytes on 64-bit systems.
Both unsafe.Sizeof and reflect.TypeOf(foo).Size() show the size of the string header (two words, IIRC). If you want to get the length of a string, use len(foo).
Playground: http://play.golang.org/p/hRw-EIVIQg.

What is the difference between int and int64 in Go?

I have a string containing an integer (which has been read from a file).
I'm trying to convert the string to an int using strconv.ParseInt(). ParseInt requires that I provide a bitsize (bit sizes 0, 8, 16, 32, and 64 correspond to int, int8, int16, int32, and int64).
The integer read from the file is small (i.e. it should fit in a normal int). If I pass a bitsize of 0, however, I get a result of type int64 (presumably because I'm running on a 64-bit OS).
Why is this happening? How do I just get a normal int? (If someone has a quick primer on when and why I should use the different int types, that would be awesome!)
Edit: I can convert the int64 to a normal int using int([i64_var]). But I still don't understand why ParseInt() is giving me an int64 when I'm requesting a bitsize of 0.
func ParseInt(s string, base int, bitSize int) (i int64, err error)
ParseInt always returns int64.
bitSize defines the range of values.
If the value corresponding to s cannot be represented by a signed integer of the given size, err.Err = ErrRange.
http://golang.org/pkg/strconv/#ParseInt
type int int
int is a signed integer type that is at least 32 bits in size. It is a distinct type, however, and not an alias for, say, int32.
http://golang.org/pkg/builtin/#int
So int could be bigger than 32 bits in the future, or on some systems, like int in C.
I guess on some systems int64 might be faster than int32, because the system works natively with 64-bit integers.
Here is an example of an error when bitSize is 8:
http://play.golang.org/p/_osjMqL6Nj
package main

import (
    "fmt"
    "strconv"
)

func main() {
    i, err := strconv.ParseInt("123456", 10, 8)
    fmt.Println(i, err)
}
Package strconv
func ParseInt
func ParseInt(s string, base int, bitSize int) (i int64, err error)
ParseInt interprets a string s in the given base (2 to 36) and returns
the corresponding value i. If base == 0, the base is implied by the
string's prefix: base 16 for "0x", base 8 for "0", and base 10
otherwise.
The bitSize argument specifies the integer type that the result must
fit into. Bit sizes 0, 8, 16, 32, and 64 correspond to int, int8,
int16, int32, and int64.
The errors that ParseInt returns have concrete type *NumError and
include err.Num = s. If s is empty or contains invalid digits, err.Err
= ErrSyntax; if the value corresponding to s cannot be represented by a signed integer of the given size, err.Err = ErrRange.
ParseInt always returns an int64 value. Depending on bitSize, this value will fit into int, int8, int16, int32, or int64. If the value cannot be represented by a signed integer of the size given by bitSize, then err.Err = ErrRange.
The Go Programming Language Specification
Numeric types
The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic.
int8 the set of all signed 8-bit integers (-128 to 127)
int16 the set of all signed 16-bit integers (-32768 to 32767)
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
int is either 32 or 64 bits, depending on the implementation. Usually it's 32 bits for 32-bit compilers and 64 bits for 64-bit compilers.
To find out the size of an int or uint, use strconv.IntSize.
Package strconv
Constants
const IntSize = intSize
IntSize is the size in bits of an int or uint value.
For example,
package main

import (
    "fmt"
    "runtime"
    "strconv"
)

func main() {
    fmt.Println(runtime.Compiler, runtime.GOARCH, runtime.GOOS)
    fmt.Println(strconv.IntSize)
}
Output:
gc amd64 linux
64
strconv.ParseInt and friends return 64-bit versions to keep the API clean and simple.
Otherwise separate versions of the function would be needed for each possible return type, or it would have to return interface{} and force a type assertion on the caller. Neither is ideal.
int64 is chosen because it can hold any supported integer size up to and including 64 bits. The bitSize you pass into the function ensures that the value is properly clamped to the corresponding range, so you can simply do a type conversion on the returned value to turn it into whatever integer type you require.
As for the difference between int and int64, it is architecture-dependent: int is either 32 or 64 bits wide, depending on the architecture you are compiling for. Note that int is a distinct type, not an alias for int32 or int64.
For the discerning eye: The returned value is a signed integer. There is a separate strconv.ParseUint function for unsigned integers, which returns uint64 and follows the same reasoning as explained above.
For your purposes, strconv.Atoi() would be more convenient I think.
The other answers have been pretty exhaustive about explaining the int type, but I think a link to the Go language specification is merited here: http://golang.org/ref/spec#Numeric_types
An int is the default signed integer type in Go: it takes 32 bits (4 bytes) on a 32-bit machine and 64 bits (8 bytes) on a 64-bit machine.
Reference- The way to go by Ivo Balbaert
In Go, each named type is a separate data type that cannot be used interchangeably with its base type. For example:
type CustomInt64 int64
In the above declaration, CustomInt64 and the built-in int64 are two separate data types and cannot be used interchangeably.
The same is true of int, int32, and int64: all of these are separate data types that can't be used interchangeably. int32 is a 32-bit integer type, int64 is 64 bits, and the size of the generic int type is platform-dependent: 32 bits wide on a 32-bit system and 64 bits wide on a 64-bit system. So be careful and specific when using size-generic types like int and uint; relying on a particular width may cause a problem somewhere in the code and crash the application on a different platform.
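The distinctness is easy to demonstrate (a sketch): assigning an int64 to a CustomInt64 requires an explicit conversion, even though they share the same underlying representation:

```go
package main

import "fmt"

type CustomInt64 int64

func main() {
    var i int64 = 10
    var c CustomInt64

    // c = i // compile error: cannot use i (type int64) as type CustomInt64
    c = CustomInt64(i) // explicit conversion is required
    fmt.Println(c) // 10
}
```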

Convert an integer to a float number

How do I convert an integer value to float64 type?
I tried
float(integer_value)
But this does not work, and I can't find any package that does this on golang.org.
How do I get float64 values from integer values?
There is no float type. Looks like you want float64. You could also use float32 if you only need a single-precision floating point value.
package main

import "fmt"

func main() {
    i := 5
    f := float64(i)
    fmt.Printf("f is %f\n", f)
}
Just for the sake of completeness, here is a link to the golang documentation which describes all types. In your case it is numeric types:
uint8 the set of all unsigned 8-bit integers (0 to 255)
uint16 the set of all unsigned 16-bit integers (0 to 65535)
uint32 the set of all unsigned 32-bit integers (0 to 4294967295)
uint64 the set of all unsigned 64-bit integers (0 to 18446744073709551615)
int8 the set of all signed 8-bit integers (-128 to 127)
int16 the set of all signed 16-bit integers (-32768 to 32767)
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
float32 the set of all IEEE-754 32-bit floating-point numbers
float64 the set of all IEEE-754 64-bit floating-point numbers
complex64 the set of all complex numbers with float32 real and imaginary parts
complex128 the set of all complex numbers with float64 real and imaginary parts
byte alias for uint8
rune alias for int32
Which means that you need to use float64(integer_value).
Just do this:
package main

import "fmt"

func main() {
    a := 70
    afloat := float64(a)
    fmt.Printf("type of a is %T\n", a)           // int
    fmt.Printf("type of afloat is %T\n", afloat) // float64
}
intutils.ToFloat32
// ToFloat32 converts an int to a float32
func ToFloat32(in int) float32 {
    return float32(in)
}

// ToFloat64 converts an int to a float64
func ToFloat64(in int) float64 {
    return float64(in)
}
Proper parentheses placement is key:
package main

import "fmt"

func main() {
    var payload uint32
    var fpayload float32

    payload = 1320

    // works: convert first, then divide in floating point
    fpayload = float32(payload) / 100.0
    fmt.Printf("%T = %d, %T = %f\n", payload, payload, fpayload, fpayload)

    // doesn't work: the division happens first, in integer arithmetic
    fpayload = float32(payload / 100.0)
    fmt.Printf("%T = %d, %T = %f\n", payload, payload, fpayload, fpayload)
}
results:
uint32 = 1320, float32 = 13.200000
uint32 = 1320, float32 = 13.000000
The Go Playground
Type conversions T(x), where T is the desired result type, are quite simple in Go.
In my program, I scan an integer i from user input, perform a type conversion on it, and store the result in the variable f. The output prints the float64 equivalent of the int input. The float32 type is also available in Go.
Code:
package main

import "fmt"

func main() {
    var i int
    fmt.Println("Enter an Integer input: ")
    fmt.Scanf("%d", &i)
    f := float64(i)
    fmt.Printf("The float64 representation of %d is %f\n", i, f)
}
Output:
>>> Enter an Integer input:
>>> 232332
>>> The float64 representation of 232332 is 232332.000000