Golang overflows int64

I am trying to use this code, but it gives me an error: constant 100000000000000000000000 overflows int64
How can I fix that?
// Initialise big numbers with small numbers
count, one := big.NewInt(100000000000000000000000), big.NewInt(1)

For example, like this:
count, one := new(big.Int), big.NewInt(1)
count.SetString("100000000000000000000000", 10)
Link:
http://play.golang.org/p/eEXooVOs9Z

It won't work because, under the hood, big.NewInt takes an int64. The number you want to store would need more than 64 bits, so it cannot be written as an int64 literal.
However, because of how math/big works, if you want to add two large numbers that each fit in an int64, you can do that, even if the sum is greater than MaxInt64. Here is an example I just wrote up (http://play.golang.org/p/Jv52bMLP_B):
func main() {
	count := big.NewInt(0)
	count.Add(count, big.NewInt(5000000000000000000))
	count.Add(count, big.NewInt(5000000000000000000))
	// 9223372036854775807 is the max value representable by int64: 2^63 - 1
	fmt.Printf("count: %v\nmax int64: 9223372036854775807\n", count)
}
Which results in:
count: 10000000000000000000
max int64: 9223372036854775807
Now, if you're still curious about how NewInt works under the hood, here is the function you're using, taken from Go's documentation:
// NewInt allocates and returns a new Int set to x.
func NewInt(x int64) *Int {
return new(Int).SetInt64(x)
}
Sources:
https://golang.org/pkg/math/big/#NewInt
https://golang.org/src/math/big/int.go?s=1058:1083#L51

Related

How to convert a SHA-3 hash to a big integer in golang

I generated a hash value using SHA-3 and I need to convert it to a big.Int value. Is it possible? Or is there a method to get the integer value of the hash?
The following code throws the error cannot convert type hash.Hash to type int64:
package main

import (
	"fmt"
	"math/big"

	"golang.org/x/crypto/sha3"
)

func main() {
	chall := "hello word"
	b := []byte(chall)
	h := sha3.New224()
	h.Write(b)
	d := h.Sum(nil)
	val := big.NewInt(int64(h)) // error: cannot convert type hash.Hash to type int64
	fmt.Println(val, d)
}
TL;DR:
A hash.Hash value cannot be converted to int64, and the digest produced by sha3.New224() would not fit in 64 bits anyway: it is 224 bits wide.
There are many hash types, of differing sizes. The Go standard library defines a very generic interface to cover all types of hashes: https://golang.org/pkg/hash/#Hash
type Hash interface {
	io.Writer
	Sum(b []byte) []byte
	Reset()
	Size() int
	BlockSize() int
}
Having said that, some Go hash implementations optionally implement extra interfaces, such as hash.Hash64:
type Hash64 interface {
	Hash
	Sum64() uint64
}
others may implement encoding.BinaryMarshaler:
type BinaryMarshaler interface {
	MarshalBinary() (data []byte, err error)
}
which can be used to preserve a hash's state.
sha3.New224() does not implement either of the above two interfaces, but the crc64 hash does.
To do a runtime check:
h64, ok := h.(hash.Hash64)
if ok {
	fmt.Printf("64-bit: %d\n", h64.Sum64())
}
Working example: https://play.golang.org/p/uLUfw0gMZka
(See Peter's comment for the simpler version of this.)
Interpreting a series of bytes as a big.Int is the same as interpreting a series of decimal digits as an arbitrarily large number. For example, to convert the digits 1234 into a "number", you'd do this:
Start with 0
Multiply by 10 = 0
Add 1 = 1
Multiply by 10 = 10
Add 2 = 12
Multiply by 10 = 120
Add 3 = 123
Multiply by 10 = 1230
Add 4 = 1234
The same applies to bytes. The "digits" are just base-256 rather than base-10:
val := big.NewInt(0)
for i := 0; i < h.Size(); i++ {
	val.Lsh(val, 8)
	val.Add(val, big.NewInt(int64(d[i])))
}
(Lsh is a left-shift. Left shifting by 8 bits is the same as multiplying by 256.)
Playground

Printing type of the numeric constant causes overflow

I am new to Go and currently following A Tour of Go.
I am currently at the Numeric Constants page. Below is a trimmed-down version of the code that runs on that page:
package main

import "fmt"

const Big = 1 << 100

func needFloat(x float64) float64 {
	return x * 0.1
}

func main() {
	fmt.Println(needFloat(Big))
	// fmt.Printf("Type of Big %T", Big)
}
This code compiles successfully, with the output 1.2676506002282295e+29.
The following code, however, will not compile and gives an error:
package main

import "fmt"

const Big = 1 << 100

func needFloat(x float64) float64 {
	return x * 0.1
}

func main() {
	fmt.Println(needFloat(Big))
	fmt.Printf("Type of Big %T", Big)
}
Output:
./prog.go:9:13: constant 1267650600228229401496703205376 overflows int
Why do you think this happened? I hope you will kindly explain.
The constant Big is an untyped constant. An untyped constant can be arbitrarily large and doesn't have to fit into any predefined type's limits; it is converted to a concrete type in the context where it is used.
The function needFloat takes a float64 argument, so in that call Big is converted to a float64 and used that way.
When you pass Big to Printf, however, no type can be inferred from the context, so the untyped integer constant gets its default type, int, and 2^100 overflows int. Pass it as float64(Big) and it should work.
I guess the reason is that Big gets converted right before being passed to needFloat, whereas it has to be converted to an int (the default type for an integer constant) before the Printf, which overflows. As proof, the following program compiles correctly:
package main

import "fmt"

const Big = 1 << 100

func main() {
	fmt.Printf("Type of Big %T", float64(Big))
}
Hope this helps.
The untyped constant n must be converted to a type before it can be assigned to the interface{} parameter in the call to fmt.Println.
fmt.Println(a ...interface{})
When the type can't be inferred from the context, an untyped constant is converted to a bool, int, float64, complex128, string or rune, depending on the form of the constant.
In this case the constant is an integer, but n is larger than the maximum value of an int.
However, n can be represented as a float64.
const n = 9876543210 * 9876543210
fmt.Println(float64(n))
For exact representation of big numbers, the math/big package implements arbitrary-precision arithmetic. It supports signed integers, rational numbers and floating-point numbers.
This is taken from https://yourbasic.org/golang/gotcha-constant-overflows-int/.

Adding/subtracting two numeric strings

I have two variables with big numbers set as strings:
var numA = "340282366920938463463374607431768211456"
var numB = "17014118346046923173168730371588410572"
I want to be able to add and subtract these kinds of large string numbers in Go.
I know I need to use math/big but I still can not for the life of me figure out how, so any example help will be greatly appreciated!
You may use big.NewInt() to create a new big.Int value initialized with an int64 value. It returns you a pointer (*big.Int). Alternatively you could simply use the builtin new() function to allocate a big.Int value which will be 0 like this: new(big.Int), or since big.Int is a struct type, a simple composite literal would also do: &big.Int{}.
Once you have a value, you may use Int.SetString() to parse and set a number given as string. You can pass the base of the string number, and it also returns you a bool value indicating if parsing succeeded.
Then you may use Int.Add() and Int.Sub() to calculate the sum and difference of 2 big.Int numbers. Note that Add() and Sub() write the result into the receiver whose method you call, so if you need the numbers (operands) unchanged, use another big.Int value to calculate and store the result.
See this example:
numA := "340282366920938463463374607431768211456"
numB := "17014118346046923173168730371588410572"

ba, bb := big.NewInt(0), big.NewInt(0)
if _, ok := ba.SetString(numA, 10); !ok {
	panic("invalid numA")
}
if _, ok := bb.SetString(numB, 10); !ok {
	panic("invalid numB")
}

sum := big.NewInt(0).Add(ba, bb)
fmt.Println("a + b =", sum)

diff := big.NewInt(0).Sub(ba, bb)
fmt.Println("a - b =", diff)
Output (try it on the Go Playground):
a + b = 357296485266985386636543337803356622028
a - b = 323268248574891540290205877060179800884

Is this strconv.ParseFloat() behavior expected or a bug? How can I get around it?

Run this code to see what I mean: Go Playground demo of issue
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// From https://golang.org/src/math/const.go
	var SmallestNonzeroFloat64AsString string = "4.940656458412465441765687928682213723651e-324"
	var SmallestNonzeroFloat64 float64
	var err error

	SmallestNonzeroFloat64, err = strconv.ParseFloat(SmallestNonzeroFloat64AsString, 64)
	if err != nil {
		panic(err)
	}

	fmt.Printf("SmallestNonzeroFloat64 = %g\n", SmallestNonzeroFloat64)
	fmt.Printf("SmallestNonzeroFloat64 = %s\n", strconv.FormatFloat(SmallestNonzeroFloat64, 'f', -1, 64))
}
SmallestNonzeroFloat64 is defined in math/const.go and I assumed it can be represented by a float64 variable.
But when it is parsed into a float64 with strconv.ParseFloat() and printed with strconv.FormatFloat(), the result is rounded: instead of 4.940656458412465441765687928682213723651e-324 I get 5e-324 (or its non-exponent equivalent, which you can see in the Go Playground results).
Is there a way to get back the 4.940656458412465441765687928682213723651e-324?
Or is it a bug?
This is not a bug.
You could ask Go to print more digits.
fmt.Printf("SmallestNonzeroFloat64 = %.40g\n", SmallestNonzeroFloat64)
// 4.940656458412465441765687928682213723651e-324
However, 5e-324 and 4.94…e-324 are in fact the same value, so Go is not wrong to print 5e-324. This value (2^-1074) is the smallest positive number representable by float64 (also known as double in other languages). Every larger float64 is a multiple of it, e.g. the next smallest number is 2 × 2^-1074 ≈ 1e-323, the one after that 3 × 2^-1074 ≈ 1.5e-323, etc.
In other words, nothing more precise than 5e-324 is representable in float64, so it makes no sense to print more digits after the "5". And 5e-324 is certainly more readable than 4.94…e-324.

Golang, math/big: what is the max value of *big.Int

What is the max value of *big.Int and max precision of *big.Rat?
Here are the structure definitions:
// A Word represents a single digit of a multi-precision unsigned integer.
type Word uintptr

type nat []Word

type Int struct {
	neg bool // sign
	abs nat  // absolute value of the integer
}

type Rat struct {
	// To make zero values for Rat work w/o initialization,
	// a zero value of b (len(b) == 0) acts like b == 1.
	// a.neg determines the sign of the Rat, b.neg is ignored.
	a, b Int
}
There is no explicit limit. The limit will be your memory or, theoretically, the max array size (2^31 or 2^63, depending on your platform).
If you have practical concerns, you might be interested in the tests made in http://golang.org/src/pkg/math/big/nat_test.go, for example the one where 10^100000 is benchmarked.
And you can easily run this kind of program:
package main

import (
	"fmt"
	"math/big"
)

func main() {
	verybig := big.NewInt(1)
	ten := big.NewInt(10)
	for i := 0; i < 100000; i++ {
		verybig.Mul(verybig, ten)
	}
	fmt.Println(verybig)
}
(if you want it to run fast enough for Go Playground, use a smaller exponent than 100000)
The problem won't be the maximum size, but the memory these numbers use and the time such computations take.
