I currently have the following code for my Fibonacci calculations. I'm trying to calculate large numbers, but it appears that once it gets to 100, the calculations are off. For fib(100), my code returns 3736710778780434371, but other sources tell me the correct value should be 354224848179261915075. Is there a problem in my code, or does it have to do with my computer hardware, or something else?
package main

import "fmt"

func fib(N uint) uint {
    var table []uint
    table = make([]uint, N+1)
    table[0] = 0
    table[1] = 1
    for i := uint(2); i <= N; i += 1 {
        table[i] = table[i-1] + table[i-2]
    }
    return table[N]
}

func main() {
    fmt.Println(fib(100))
}
You're hitting an integer overflow! You can only calculate using a uint up to the size of a uint; once you go beyond its bounds, it will (silently) wrap back round again.
In your case, it looks as though a uint is 64 bits long. (Its size depends on the platform you're running on.) That means that you will be able to store values up to 2^64 - 1. If you then add one more, it'll wrap back to 0, and won't return an error.
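To see the wrap-around in isolation, here's a minimal demo:

package main

import (
    "fmt"
    "math"
)

func main() {
    var x uint64 = math.MaxUint64 // 2^64 - 1, the largest uint64
    fmt.Println(x + 1)            // prints 0: the addition silently wraps
}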
If you convert the answer you're getting, and the right answer, into hex, then you'll see that this is the case. You're ending up with
33DB76A7C594BFC3
whereas the right answer is
1333DB76A7C594BFC3
Note that your answer is correct as far as it goes... it just doesn't go far enough. You've only got the lower 64 bits of the answer; you're missing the other 0x13 * 2^64.
To correct it, you'll need to use an arbitrary-size integer from the math/big package, instead of a uint.
Here is a version using big.Int that produces the correct answer:
package main

import (
    "fmt"
    "math/big"
)

func fib(N uint) *big.Int {
    // Same table-based approach, but with arbitrary-precision integers.
    // (Assumes N >= 1: table[1] would be out of range for N == 0.)
    table := make([]*big.Int, N+1)
    table[0] = new(big.Int).SetInt64(0)
    table[1] = new(big.Int).SetInt64(1)
    for i := uint(2); i <= N; i += 1 {
        // Add allocates the sum in a fresh big.Int.
        table[i] = new(big.Int).Add(table[i-1], table[i-2])
    }
    return table[N]
}

func main() {
    fmt.Println(fib(100))
}
Which produces
354224848179261915075
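As an aside, since only the previous two values are ever needed, here is a sketch that drops the table and keeps two rolling big.Int values instead:

package main

import (
    "fmt"
    "math/big"
)

func fib(N uint) *big.Int {
    a, b := big.NewInt(0), big.NewInt(1) // fib(0), fib(1)
    for i := uint(0); i < N; i++ {
        a.Add(a, b) // a now holds fib(i) + fib(i+1) = fib(i+2)
        a, b = b, a // slide the window forward one step
    }
    return a
}

func main() {
    fmt.Println(fib(100)) // 354224848179261915075
}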
Related
math.Floor in Go returns a float64, but I would like it to return an integer. How can I get the integer value after performing the floor operation? Can I just use int(x), int32(x), or int64(x)? I worry that the integer range might not match that of a float64 result, and therefore introduce inaccuracy into the operation.
You may just want to check beforehand whether the conversion will perform safely or an overflow will occur.
As John Weldon suggested,
package main

import (
    "fmt"
    "math"
)

func main() {
    var (
        a   int64
        f64 float64
    )
    // This number doesn't exist in the float64 world,
    // just a number to perform the test.
    f64 = math.Floor(9223372036854775808.5)
    // math.MaxInt64 (2^63-1) rounds up to 2^63 as a float64, so >= catches
    // every out-of-range value; math.MinInt64 (-2^63) is exactly
    // representable and is itself a valid int64, so use < rather than <=.
    if f64 >= math.MaxInt64 || f64 < math.MinInt64 {
        fmt.Println("f64 is out of int64 range.")
        return
    }
    a = int64(f64)
    fmt.Println(a)
}
I hope this will answer your question.
Also, I'd really like to know if any better solution is available. :)
You can compare the float64 value with math.MaxInt64 or math.MinInt64 before doing the conversion.
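For reuse, here is a minimal sketch of that check wrapped in a helper (the name safeFloorToInt64 is just for illustration; the NaN guard is an extra precaution, since converting NaN to an integer is not well defined):

package main

import (
    "errors"
    "fmt"
    "math"
)

// safeFloorToInt64 floors f and converts it to int64,
// reporting an error instead of silently overflowing.
func safeFloorToInt64(f float64) (int64, error) {
    v := math.Floor(f)
    if math.IsNaN(v) || v >= math.MaxInt64 || v < math.MinInt64 {
        return 0, errors.New("value out of int64 range")
    }
    return int64(v), nil
}

func main() {
    fmt.Println(safeFloorToInt64(3.7))         // 3 <nil>
    fmt.Println(safeFloorToInt64(math.Inf(1))) // 0 value out of int64 range
}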
So computers use Two's complement to internally represent signed integers. I.e., -5 is represented as ^5 + 1 = "1111 1011".
However, trying to print the binary representation, e.g. the following code:
var i int8 = -5
fmt.Printf("%b", i)
Outputs -101. Not quite what I'd expect. Is the formatting different or is it not using Two's complement after all?
Interestingly, converting to an unsigned int results in the "correct" bit pattern:
var u uint8 = uint8(i)
fmt.Printf("%b", u)
Output is 11111011 - exactly the two's complement of -5.
So it seems to me the value really is internally using two's complement, but the formatting is printing the magnitude 5 in binary and prepending a -.
Can somebody clarify this?
I believe the answer lies in how the fmt package formats binary numbers, rather than in the internal representation.
If you take a look at fmt.integer, one of the very first actions that the function does is to convert the negative signed integer to a positive one:
165 negative := signedness == signed && a < 0
166 if negative {
167 a = -a
168 }
There's then logic to prepend - to the front of the string that's output.
In other words, -101 really is a - prepended to 101, the binary representation of 5.
Note: fmt.integer is called from pp.fmtInt64 in print.go, itself called from pp.printArg in the same file.
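A quick demonstration of that formatting behavior:

package main

import "fmt"

func main() {
    var i int8 = -5
    fmt.Printf("%b\n", i)        // -101: the magnitude 5 in binary, with - prepended
    fmt.Printf("%b\n", uint8(i)) // 11111011: the underlying two's-complement bits
}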
Here is a method without using unsafe:
package main

import (
    "fmt"
    "math/bits"
)

func unsigned8(x uint8) []byte {
    b := make([]byte, 8)
    for i := range b {
        // If the top bit is set, record a 1; then rotate the next bit into place.
        if bits.LeadingZeros8(x) == 0 {
            b[i] = 1
        }
        x = bits.RotateLeft8(x, 1)
    }
    return b
}

func signed8(x int8) []byte {
    return unsigned8(uint8(x))
}

func main() {
    b := signed8(-5)
    fmt.Println(b) // [1 1 1 1 1 0 1 1]
}
In this case you could also return an [8]byte, but a slice is better if you have a positive integer and want to trim the leading zeros.
https://golang.org/pkg/math/bits#RotateLeft
Unsafe pointers can also be used to reinterpret a negative number's bits and print its two's-complement binary representation:
package main

import (
    "fmt"
    "strconv"
    "unsafe"
)

// bInt8 reinterprets the int8's byte as a uint8 and formats its bits.
func bInt8(n int8) string {
    return strconv.FormatUint(uint64(*(*uint8)(unsafe.Pointer(&n))), 2)
}

func main() {
    fmt.Println(bInt8(-5))
}
Output
11111011
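For what it's worth, the same result can be had without unsafe, since a plain integer conversion already preserves the bit pattern; a minimal sketch:

package main

import (
    "fmt"
    "strconv"
)

func bInt8(n int8) string {
    // uint8(n) keeps the two's-complement bits unchanged.
    return strconv.FormatUint(uint64(uint8(n)), 2)
}

func main() {
    fmt.Println(bInt8(-5)) // 11111011
}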
I'm a newbie in Golang; this should be an easy question for experienced Golang devs. I'm trying to do the same bit-reversal test from Spotify, to see how fast we can go in Golang :)
The usual bit-twiddling C solutions translate immediately to Go.
package main

import "fmt"

func BitReverse32(x uint32) uint32 {
    x = (x&0x55555555)<<1 | (x&0xAAAAAAAA)>>1
    x = (x&0x33333333)<<2 | (x&0xCCCCCCCC)>>2
    x = (x&0x0F0F0F0F)<<4 | (x&0xF0F0F0F0)>>4
    x = (x&0x00FF00FF)<<8 | (x&0xFF00FF00)>>8
    return (x&0x0000FFFF)<<16 | (x&0xFFFF0000)>>16
}

func main() {
    cases := []uint32{0x1, 0x100, 0x1000, 0x1000000, 0x10000000, 0x80000000, 0x89abcdef}
    for _, c := range cases {
        fmt.Printf("%08x -> %08x\n", c, BitReverse32(c))
    }
}
Note: since 2013, you now have a dedicated math/bits package with Go 1.9 (August 2017).
And it comes with a collection of Reverse() and ReverseBytes() functions: no need to implement one yourself anymore.
Plus, on most architectures, functions in this package are additionally recognized by the compiler and treated as intrinsics for additional performance.
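For instance, a minimal use of the stdlib version:

package main

import (
    "fmt"
    "math/bits"
)

func main() {
    var x uint32 = 0x89abcdef
    fmt.Printf("%08x -> %08x\n", x, bits.Reverse32(x)) // 89abcdef -> f7b3d591
}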
The most straightforward solution would be converting the bits into a number with strconv and then reversing the number by shifting the bits. I'm not sure how fast it would be, but it should work.
package main

import (
    "fmt"
    "strconv"
)

func main() {
    bits := "10100001"
    bitsNumber := 8
    number, _ := strconv.ParseUint(bits, 2, bitsNumber)
    rNumber := number - number // zero value with the same type as number
    for i := 0; i < bitsNumber; i++ {
        rNumber <<= 1
        rNumber |= number & 1
        number >>= 1
    }
    fmt.Printf("%s [%d]\n", strconv.FormatUint(rNumber, 2), rNumber)
}
http://play.golang.org/p/YLS5wkY-iv
What is the max value of *big.Int and max precision of *big.Rat?
Here are the structure definitions:

// A Word represents a single digit of a multi-precision unsigned integer.
type Word uintptr

type nat []Word

type Int struct {
    neg bool // sign
    abs nat  // absolute value of the integer
}

type Rat struct {
    // To make zero values for Rat work w/o initialization,
    // a zero value of b (len(b) == 0) acts like b == 1.
    // a.neg determines the sign of the Rat, b.neg is ignored.
    a, b Int
}
There is no explicit limit. The limit will be your memory or, theoretically, the max array size (2^31 or 2^63, depending on your platform).
If you have practical concerns, you might be interested in the tests made in http://golang.org/src/pkg/math/big/nat_test.go, for example the one where 10^100000 is benchmarked.
And you can easily run this kind of program:
package main

import (
    "fmt"
    "math/big"
)

func main() {
    verybig := big.NewInt(1)
    ten := big.NewInt(10)
    for i := 0; i < 100000; i++ {
        verybig.Mul(verybig, ten)
    }
    fmt.Println(verybig)
}
(if you want it to run fast enough for Go Playground, use a smaller exponent than 100000)
The problem won't be the max size but the memory used and the time such computations take.
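Equivalently, big.Int's own Exp method computes the same power directly; a short sketch:

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // 10^100000; pass nil for the modulus to get the plain power.
    verybig := new(big.Int).Exp(big.NewInt(10), big.NewInt(100000), nil)
    fmt.Println(verybig.BitLen()) // print the bit length rather than all the digits
}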
I am trying to generate random numbers (integers) in Go, to no avail. I found the rand package in crypto/rand, which seems to be what I want, but I can't tell from the documentation how to use it. This is what I'm trying right now:
b := []byte{}
something, err := rand.Read(b)
fmt.Printf("something = %v\n", something)
fmt.Printf("err = %v\n", err)
But unfortunately this always outputs:
something = 0
err = <nil>
Is there a way to fix this so that it actually generates random numbers? Alternatively, is there a way to set the upper bound on the random numbers this generates?
Depending on your use case, another option is the math/rand package. Don't do this if you're generating numbers that need to be completely unpredictable. It can be helpful if you need to get results that are reproducible, though -- just pass in the same seed you passed in the first time.
Here's the classic "seed the generator with the current time and generate a number" program:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    rand.Seed(time.Now().Unix())
    fmt.Println(rand.Int())
}
crypto/rand provides only a binary stream of random data, but you can read integers from it using encoding/binary:
package main

import (
    "crypto/rand"
    "encoding/binary"
    "fmt"
)

func main() {
    var n int32
    // binary.Read fills n from the crypto-quality random stream.
    if err := binary.Read(rand.Reader, binary.LittleEndian, &n); err != nil {
        panic(err)
    }
    fmt.Println(n)
}
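To answer the upper-bound part of the question: crypto/rand also has an Int function that returns a uniform random value in [0, max):

package main

import (
    "crypto/rand"
    "fmt"
    "math/big"
)

func main() {
    // A uniform random integer in [0, 100).
    n, err := rand.Int(rand.Reader, big.NewInt(100))
    if err != nil {
        panic(err)
    }
    fmt.Println(n)
}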
As of 1 April 2012, after the release of the stable version of the language, you can do the following:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    rand.Seed(time.Now().UnixNano()) // takes the current time in nanoseconds as the seed
    fmt.Println(rand.Intn(100))      // this gives you an int up to but not including 100
}
You can also develop your own random number generator, perhaps based upon a simple "desert island PRNG" such as a Linear Congruential Generator. Also, look up L'Ecuyer (1999), the Mersenne Twister, or Tausworthe generators...
https://en.wikipedia.org/wiki/Pseudorandom_number_generator
(Avoid RANDU; it was popular in the 1960s, but the random numbers it generates fall on 15 hyperplanes in 3-space.)
package pmPRNG

import "errors"

const (
    Mersenne31    = 2147483647         // = 2^31-1
    Mersenne31Inv = 1.0 / 2147483647.0 // = 4.656612875e-10
    // a = 16807
    a = 48271
)

// Each stream gets its own seed.
type PRNGStream struct {
    state int
}

func PRNGStreamNew(seed int) *PRNGStream {
    prng := &PRNGStream{}
    prng.SetSeed(seed)
    return prng
}

// SetSeed enforces a seed in [1, 2^31-1].
func (r *PRNGStream) SetSeed(seed int) error {
    var err error
    if seed < 1 || seed > Mersenne31 {
        err = errors.New("seed out of bounds")
    }
    if seed > Mersenne31 {
        seed = seed % Mersenne31
    }
    if seed < 1 {
        seed = 1
    }
    r.state = seed
    return err
}

// Dig is the Park-Miller "desert island generator":
// x[i] = (a*x[i-1]) mod m, with integer state in [1, 2^31-1].
func (r *PRNGStream) Dig() float32 {
    xprev := r.state                              // x[i-1]
    xnext := int((a * int64(xprev)) % Mersenne31) // x[i] = (a*x[i-1]) % m
    r.state = xnext                               // save x[i] for the next call
    return float32(xnext) * Mersenne31Inv         // convert U[i] to R[i] in (0, 1)
}

// Rand advances the state the same way and maps it to (0, 1).
func (r *PRNGStream) Rand() float32 {
    r.state = int((a * int64(r.state)) % Mersenne31)
    return float32(r.state) * Mersenne31Inv
}
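A minimal sketch of how a stream might be used, assuming the package above is importable under the hypothetical path pmPRNG:

package main

import (
    "fmt"

    "pmPRNG" // hypothetical import path for the package above
)

func main() {
    stream := pmPRNG.PRNGStreamNew(42)
    for i := 0; i < 3; i++ {
        fmt.Println(stream.Rand()) // reproducible floats in (0, 1)
    }
}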
A few relevant links:
https://en.wikipedia.org/wiki/Lehmer_random_number_generator
You might use this function to update your x[i+1], instead of the one above:
val = ((state * 1103515245) + 12345) & 0x7fffffff
(basically, different values of a, c, and m)
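For instance, that update rule as a tiny runnable Go sketch (names are illustrative):

package main

import "fmt"

// lcgNext advances a linear congruential generator with multiplier
// a = 1103515245 and increment c = 12345, masking the result to 31 bits.
func lcgNext(state uint32) uint32 {
    return (state*1103515245 + 12345) & 0x7fffffff
}

func main() {
    s := uint32(1)
    for i := 0; i < 3; i++ {
        s = lcgNext(s)
        fmt.Println(s)
    }
}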
https://www.redhat.com/en/blog/understanding-random-number-generators-and-their-limitations-linux
https://www.iro.umontreal.ca/~lecuyer/myftp/papers/handstat.pdf
https://www.math.utah.edu/~alfeld/Random/Random.html
https://learn.microsoft.com/en-us/archive/msdn-magazine/2016/august/test-run-lightweight-random-number-generation