What is the difference between int and int64 in Go?

I have a string containing an integer (which has been read from a file).
I'm trying to convert the string to an int using strconv.ParseInt(). ParseInt requires that I provide a bitsize (bit sizes 0, 8, 16, 32, and 64 correspond to int, int8, int16, int32, and int64).
The integer read from the file is small (i.e. it should fit in a normal int). If I pass a bitsize of 0, however, I get a result of type int64 (presumably because I'm running on a 64-bit OS).
Why is this happening? How do I just get a normal int? (If someone has a quick primer on when and why I should use the different int types, that would be awesome!)
Edit: I can convert the int64 to a normal int using int([i64_var]). But I still don't understand why ParseInt() is giving me an int64 when I'm requesting a bitsize of 0.

func ParseInt(s string, base int, bitSize int) (i int64, err error)
ParseInt always returns int64.
bitSize defines the range of values.
If the value corresponding to s cannot be represented by a signed integer of the given size, err.Err = ErrRange.
http://golang.org/pkg/strconv/#ParseInt
type int int
int is a signed integer type that is at least 32 bits in size. It is a distinct type, however, and not an alias for, say, int32.
http://golang.org/pkg/builtin/#int
So int could be bigger than 32 bits in the future, or on some systems, like int in C.
I guess that on some systems int64 might be faster than int32, because those systems work only with 64-bit integers.
Here is an example of an error when bitSize is 8:
http://play.golang.org/p/_osjMqL6Nj
package main

import (
    "fmt"
    "strconv"
)

func main() {
    i, err := strconv.ParseInt("123456", 10, 8)
    fmt.Println(i, err)
}

Package strconv
func ParseInt
func ParseInt(s string, base int, bitSize int) (i int64, err error)
ParseInt interprets a string s in the given base (2 to 36) and returns
the corresponding value i. If base == 0, the base is implied by the
string's prefix: base 16 for "0x", base 8 for "0", and base 10
otherwise.
The bitSize argument specifies the integer type that the result must
fit into. Bit sizes 0, 8, 16, 32, and 64 correspond to int, int8,
int16, int32, and int64.
The errors that ParseInt returns have concrete type *NumError and
include err.Num = s. If s is empty or contains invalid digits, err.Err
= ErrSyntax; if the value corresponding to s cannot be represented by a signed integer of the given size, err.Err = ErrRange.
ParseInt always returns an int64 value. Depending on bitSize, this value will fit into int, int8, int16, int32, or int64. If the value cannot be represented by a signed integer of the size given by bitSize, then err.Err = ErrRange.
The Go Programming Language Specification
Numeric types
The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic.
int8 the set of all signed 8-bit integers (-128 to 127)
int16 the set of all signed 16-bit integers (-32768 to 32767)
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
int is either 32 or 64 bits, depending on the implementation. Usually it's 32 bits for 32-bit compilers and 64 bits for 64-bit compilers.
To find out the size of an int or uint, use strconv.IntSize.
Package strconv
Constants
const IntSize = intSize
IntSize is the size in bits of an int or uint value.
For example,
package main

import (
    "fmt"
    "runtime"
    "strconv"
)

func main() {
    fmt.Println(runtime.Compiler, runtime.GOARCH, runtime.GOOS)
    fmt.Println(strconv.IntSize)
}
Output:
gc amd64 linux
64

strconv.ParseInt and friends return 64-bit versions to keep the API clean and simple.
Otherwise one would have to create separate versions for each possible return type, or return interface{}, which would then have to go through a type assertion. Neither of those options is ideal.
int64 is chosen because it can hold any integer size up to, and including, the supported 64 bits. The bit size you pass into the function ensures that the value is properly clamped to the correct range. So you can simply do a type conversion on the returned value to turn it into whatever integer type you require, as in the sketch below.
As for the difference between int and int64, this is architecture-dependent. int is a distinct type whose size is either 32 or 64 bits, depending on the architecture you are compiling for.
For the discerning eye: The returned value is a signed integer. There is a separate strconv.ParseUint function for unsigned integers, which returns uint64 and follows the same reasoning as explained above.
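To make that conversion concrete, here is a minimal sketch of the pattern in question (the input string "42" is just a placeholder): parse with a bitSize of 0, check the error, then convert the returned int64 to int.

package main

import (
    "fmt"
    "strconv"
)

func main() {
    i64, err := strconv.ParseInt("42", 10, 0) // bitSize 0: the value must fit in an int
    if err != nil {
        fmt.Println(err)
        return
    }
    i := int(i64) // ParseInt always returns int64; convert it yourself
    fmt.Println(i)
}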

For your purposes, strconv.Atoi() would be more convenient, I think.
The other answers have been pretty exhaustive about explaining the int type, but I think a link to the Go language specification is merited here: http://golang.org/ref/spec#Numeric_types
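For example, a minimal sketch of the Atoi route (the input string is made up here); it parses in base 10 and returns a plain int directly, so no conversion is needed.

package main

import (
    "fmt"
    "strconv"
)

func main() {
    i, err := strconv.Atoi("123456")
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(i) // i already has type int
}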

An int is the default signed integer type in Go: it takes 32 bits (4 bytes) on a 32-bit machine and 64 bits (8 bytes) on a 64-bit machine.
Reference- The way to go by Ivo Balbaert

In Go, each named type is a separate data type which cannot be used interchangeably with its base type. For example,
type CustomInt64 int64
In the above declaration, CustomInt64 and built-in int64 are two separate data types and cannot be used interchangeably.
The same is the case with int, int32, and int64: all of these are separate data types that can't be used interchangeably. int32 is a 32-bit integer type, int64 is 64 bits, and the size of the generic int type is platform dependent: it is 32 bits wide on a 32-bit system and 64 bits wide on a 64-bit system. So we must be careful and specific when relying on generic types like int and uint; an implicit size assumption may cause a problem somewhere in the code and crash the application on a different platform.
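As a minimal sketch of this point (the variable names are made up for illustration), mixing CustomInt64 and int64 without an explicit conversion does not compile:

package main

import "fmt"

type CustomInt64 int64

func main() {
    var c CustomInt64 = 5
    var i int64 = 10

    // i = c // compile error: cannot use c (type CustomInt64) as type int64 in assignment
    i = int64(c) // an explicit conversion is required
    fmt.Println(i, c)
}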

Related

Printing type of the numeric constant causes overflow

I am new to Go and currently following A Tour of Go.
I am currently at page Numeric Constants. Down below is a trimmed down version of the code that runs on that page:
package main

import "fmt"

const Big = 1 << 100

func needFloat(x float64) float64 {
    return x * 0.1
}

func main() {
    fmt.Println(needFloat(Big))
    // fmt.Printf("Type of Big %T", Big)
}
This code compiles successfully with the output 1.2676506002282295e+29.
The following code however will not compile and give an error:
package main

import "fmt"

const Big = 1 << 100

func needFloat(x float64) float64 {
    return x * 0.1
}

func main() {
    fmt.Println(needFloat(Big))
    fmt.Printf("Type of Big %T", Big)
}
Output:
./prog.go:9:13: constant 1267650600228229401496703205376 overflows int
Why do you think this happened? I hope you will kindly explain.
The constant Big is an untyped constant. An untyped constant can be arbitrarily large and does not have to fit into any predefined type's limits; it is converted to a concrete type only in the context where it is used.
The function needFloat takes a float64 argument, so at that call site Big is converted to a float64 and used that way.
When you pass it to Printf, there is no type to infer from, so the constant is converted to its default type, which for an integer constant is int (a floating-point constant would default to float64 instead). 1 << 100 does not fit in an int, hence the overflow error. Pass it as float64(Big), and it works.
I guess the reason is that Big gets converted to float64 right before being passed to needFloat, but gets converted to its default type int before the Printf. As a proof, the following compiles correctly:
package main

import "fmt"

const Big = 1 << 100

func main() {
    fmt.Printf("Type of Big %T", float64(Big))
}
Hope this helps.
The untyped constant n must be converted to a type before it can be assigned to the interface{} parameter in the call to fmt.Println.
fmt.Println(a ...interface{})
When the type can't be inferred from the context, an untyped constant is converted to a bool, int, float64, complex128, string or rune depending on the format of the constant.
In this case the constant is an integer, but n is larger than the maximum value of an int.
However, n can be represented as a float64.
const n = 9876543210 * 9876543210
fmt.Println(float64(n))
For exact representation of big numbers, the math/big package implements arbitrary-precision arithmetic. It supports signed integers, rational numbers and floating-point numbers.
This is taken from https://yourbasic.org/golang/gotcha-constant-overflows-int/.
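For instance, a minimal sketch using math/big to hold the exact value of 1 << 100, which no built-in integer type can represent:

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // Shift a big.Int left by 100 bits to get the exact value of 1 << 100.
    big1 := new(big.Int).Lsh(big.NewInt(1), 100)
    fmt.Println(big1) // 1267650600228229401496703205376
}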

Confusion about converting `uint8` to `int8`

I want to convert a uint8 to an int8, so I write the constant 0xfc and try to use int8(0xfc) to convert it. However, the code raises an error:
package main

import (
    "fmt"
)

func main() {
    a := int8(0xfc) // compile error: constant 252 overflows int8
    b := a
    fmt.Println(b)
}
But if I defer the type conversion until after the assignment, the code works:
package main

import (
    "fmt"
)

func main() {
    a := 0xfc
    b := int8(a) // ok
    fmt.Println(b)
}
My question:
Is there any difference between these two codes?
Why does the first one raise a compile error?
see: https://golang.org/ref/spec#Constant_expressions
The values of typed constants must always be accurately representable by values of the constant type. The following constant expressions are illegal:
uint(-1) // -1 cannot be represented as a uint
int(3.14) // 3.14 cannot be represented as an int
int64(Huge) // 1267650600228229401496703205376 cannot be represented as an int64
Four * 300 // operand 300 cannot be represented as an int8 (type of Four)
Four * 100 // product 400 cannot be represented as an int8 (type of Four)
see:
https://blog.golang.org/constants
not all integer values can fit in all integer types. There are two problems that might arise: the value might be too large, or it might be a negative value being assigned to an unsigned integer type. For instance, int8 has range -128 through 127, so constants outside of that range can never be assigned to a variable of type int8:
var i8 int8 = 128 // Error: too large.
Similarly, uint8, also known as byte, has range 0 through 255, so a large or negative constant cannot be assigned to a uint8:
var u8 uint8 = -1 // Error: negative value.
This type-checking can catch mistakes like this one:
type Char byte
var c Char = '世' // Error: '世' has value 0x4e16, too large.
If the compiler complains about your use of a constant, it's likely a real bug like this.
What I actually need is to convert a byte to an int32 when parsing a binary file. I may encounter the byte 0xfc, and I need to convert it to an int8 before converting it to an int32 so that the sign is taken into account.
Yes, this is the way to go:
var b byte = 0xff
i32 := int32(int8(b))
fmt.Println(i32) // -1
Is there any difference between these two codes?
The first example uses a constant expression. The second uses plain expressions. Constant expressions are evaluated at compile time with different rules from plain expressions.
Why does the first one raise a compile error?
The int8(0xfc) is a typed constant expression. Values of typed constants must always be accurately representable by values of the constant type. The compiler reports an error because the value 252 cannot be represented by a value of type int8.
Based on comments on other answers, I see that the goal is to get an int32 from a byte with sign extension. Given a byte variable b, use the expression int32(int8(b)) to get the int32 value with sign extension.
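A minimal sketch contrasting the two conversions (0xfc is the byte from the question): converting the byte directly zero-extends and loses the sign, while going through int8 first sign-extends.

package main

import "fmt"

func main() {
    var b byte = 0xfc

    fmt.Println(int32(b))       // 252: zero-extended, the sign is lost
    fmt.Println(int32(int8(b))) // -4: reinterpreted as int8 first, then sign-extended
}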

No panic when converting int to uint?

I'm confused about the following type conversion. I would expect both uint conversions to panic.
a := -1
_ = uint(a) // why no panic?
_ = uint(-1) // panics: constant -1 overflows uint
Why doesn't it panic in line 2?
https://play.golang.org/p/jcfDL8km2C
As mentioned in issue 6923:
T(c) where T is a type and c is a constant means to treat c as having type T rather than one of the default types.
It gives an error if c can not be represented in T, except that for float and complex constants we quietly round to T as long as the value is not too large.
Here:
const x uint = -1
var x uint = -1
Neither of these works, because the constant -1 cannot be (implicitly) converted to a uint.
_ = uint(a) // why no panic?
Because a is not an untyped constant, but a typed variable (int). See Playground and "what's wrong with Golang constant overflows uint64":
package main

import "fmt"

func main() {
    a := -1
    _ = uint(a) // why no panic?
    var b uint
    b = uint(a)
    fmt.Println(b)
    // _ = uint(-1) // panics: main.go:7: constant -1 overflows uint
}
Result: 4294967295 (on a 32-bit system) or 18446744073709551615 (on a 64-bit system), as commented by starriet.
These are the specific rules for the conversion of non-constant numeric values:
When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended.
It is then truncated to fit in the result type's size.
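A minimal sketch of both rules (the values are arbitrary): converting a negative int to uint keeps the bit pattern, and converting a wider integer type to a narrower one truncates.

package main

import "fmt"

func main() {
    a := -1
    fmt.Println(uint(a)) // 18446744073709551615 on a 64-bit system: all bits set

    var x int32 = 300
    fmt.Println(int8(x)) // 44: truncated to the low 8 bits (300 mod 256)
}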

Displayed size of Go string variable seems unreal

Please see the example: http://play.golang.org/p/6d4uX15EOQ
package main

import (
    "fmt"
    "reflect"
    "unsafe"
)

func main() {
    c := "foofoofoofoofoofofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo"
    fmt.Printf("c: %T, %d\n", c, unsafe.Sizeof(c))
    fmt.Printf("c: %T, %d\n", c, reflect.TypeOf(c).Size())
}
Output:
c: string, 8 //8 bytes?!
c: string, 8
It seems like such a large string cannot have such a small size! What's going wrong?
Package unsafe
import "unsafe"
func Sizeof
func Sizeof(v ArbitraryType) uintptr
Sizeof returns the size in bytes occupied by the value v. The size is
that of the "top level" of the value only. For instance, if v is a
slice, it returns the size of the slice descriptor, not the size of
the memory referenced by the slice.
The Go Programming Language Specification
Length and capacity
len(s) string type string length in bytes
You are looking at the "top level", the string descriptor, a pointer to and the length of the underlying string value. Use the len function for the length, in bytes, of the underlying string value.
Conceptually and practically, the string descriptor is a struct containing a pointer and a length, whose sizes (32 or 64 bits) are implementation dependent. For example,
package main

import (
    "fmt"
    "unsafe"
)

type stringDescriptor struct {
    str *byte
    len int
}

func main() {
    fmt.Println("string descriptor size in bytes:", unsafe.Sizeof(stringDescriptor{}))
}
Output (64 bit):
string descriptor size in bytes: 16
Output (32 bit):
string descriptor size in bytes: 8
A string is essentially a pointer to the data and an int for the length; so it's 8 bytes on 32-bit systems and 16 bytes on 64-bit systems.
Both unsafe.Sizeof and reflect.TypeOf(foo).Size() show the size of the string header (two words, IIRC). If you want to get the length of a string, use len(foo).
Playground: http://play.golang.org/p/hRw-EIVIQg.
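A minimal sketch of the difference (the string literal is arbitrary): unsafe.Sizeof reports the fixed size of the string header, while len reports the number of bytes of string data.

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    c := "foofoofoo"
    fmt.Println(unsafe.Sizeof(c)) // 16 on a 64-bit system: the string header only
    fmt.Println(len(c))           // 9: the bytes of the underlying string data
}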

Why do these two golang integer conversion functions give different results?

I wrote a function to convert a byte slice to an integer.
The function I created is actually a loop-based implementation of
what Rob Pike published here:
http://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
Here is Rob's code:
i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);
My first implementation (toInt2 in the playground) doesn't work as I expected because it appears to initialize the int value as a uint. This seems really strange, but it must be platform specific, because the Go playground reports a different result than my machine (a Mac).
Can anyone explain why these functions behave differently on my mac?
Here's the link to the playground with the code: http://play.golang.org/p/FObvS3W4UD
Here's the code from the playground (for convenience):
/*
Output on my machine:
amd64 darwin go1.3 input: [255 255 255 255]
-1
4294967295
Output on the go playground:
amd64p32 nacl go1.3 input: [255 255 255 255]
-1
-1
*/
package main

import (
    "fmt"
    "runtime"
)

func main() {
    input := []byte{255, 255, 255, 255}
    fmt.Println(runtime.GOARCH, runtime.GOOS, runtime.Version(), "input:", input)
    fmt.Println(toInt(input))
    fmt.Println(toInt2(input))
}

func toInt(bytes []byte) int {
    var value int32 = 0 // initialized with int32
    for i, b := range bytes {
        value |= int32(b) << uint(i*8)
    }
    return int(value) // converted to int
}

func toInt2(bytes []byte) int {
    var value int = 0 // initialized with plain old int
    for i, b := range bytes {
        value |= int(b) << uint(i*8)
    }
    return value
}
This is an educated guess, but the int type can be 64-bit or 32-bit depending on the platform. On my system and yours it's 64-bit; since the playground is running on nacl (amd64p32), it's 32-bit.
If you change the 2nd function to use uint all around, it will work fine.
From the spec:
uint either 32 or 64 bits
int same size as uint
uintptr an unsigned integer large enough to store the uninterpreted bits of a pointer value
int is allowed to be 32 or 64 bits, depending on the platform/implementation. When it is 64 bits, it is capable of representing 4294967295 (2^32 - 1) as a signed positive integer, which is what happens on your machine. When it is 32 bits (the playground), the result wraps around to -1, as you expect.
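If you want a result that does not depend on the size of int, one option (a sketch, not part of the original answers) is to decode into a fixed-size type, for example with encoding/binary:

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    input := []byte{255, 255, 255, 255}
    // Decode the 4 bytes as an unsigned little-endian 32-bit value,
    // then reinterpret the bits as a signed int32.
    i := int32(binary.LittleEndian.Uint32(input))
    fmt.Println(i) // -1 on every platform
}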
