I'm a newbie in Go, so this should be an easy question for experienced Go devs. I'm trying to do the same test from Spotify to see how fast we can go in Go :)
The usual bit-twiddling C solutions translate immediately to Go.
package main

import "fmt"

// BitReverse32 reverses the bits of x by swapping successively wider groups:
// single bits, then pairs, nibbles, bytes, and finally 16-bit halves.
func BitReverse32(x uint32) uint32 {
    x = (x&0x55555555)<<1 | (x&0xAAAAAAAA)>>1
    x = (x&0x33333333)<<2 | (x&0xCCCCCCCC)>>2
    x = (x&0x0F0F0F0F)<<4 | (x&0xF0F0F0F0)>>4
    x = (x&0x00FF00FF)<<8 | (x&0xFF00FF00)>>8
    return (x&0x0000FFFF)<<16 | (x&0xFFFF0000)>>16
}

func main() {
    cases := []uint32{0x1, 0x100, 0x1000, 0x1000000, 0x10000000, 0x80000000, 0x89abcdef}
    for _, c := range cases {
        fmt.Printf("%08x -> %08x\n", c, BitReverse32(c))
    }
}
Note: since this answer was written in 2013, Go 1.9 (August 2017) has added a dedicated math/bits package.
It comes with a collection of Reverse() and ReverseBytes() functions, so there is no need to implement one yourself anymore.
Plus, on most architectures, the functions in this package are recognized by the compiler and treated as intrinsics, for extra performance.
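For example, a minimal sketch using the standard library (bits.Reverse32 is available since Go 1.9):

package main

import (
    "fmt"
    "math/bits"
)

func main() {
    // bits.Reverse32 returns its argument with the bit order reversed.
    fmt.Printf("%08x -> %08x\n", uint32(0x89abcdef), bits.Reverse32(0x89abcdef))
}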
The most straightforward solution would be to convert the bits into a number with strconv and then reverse the number by shifting its bits. I'm not sure how fast it would be, but it should work.
package main

import (
    "fmt"
    "strconv"
)

func main() {
    bits := "10100001"
    bitsNumber := 8
    number, _ := strconv.ParseUint(bits, 2, bitsNumber)
    rNumber := number - number // zero value of the same type as number
    for i := 0; i < bitsNumber; i++ {
        rNumber <<= 1
        rNumber |= number & 1
        number >>= 1
    }
    fmt.Printf("%s [%d]\n", strconv.FormatUint(rNumber, 2), rNumber)
}
http://play.golang.org/p/YLS5wkY-iv
I am new to Go and fairly inexperienced with programming in general, so I hope I don't get downvoted again for asking stupid questions.
I am working my way through the project euler problems and at problem 25 "1000-digit Fibonacci number" I encountered what seems to be strange behavior. The following is the code I wrote that resulted in this behavior.
package main

import (
    "fmt"
    "math/big"
)

func main() {
    index := 2
    l := new(big.Int)
    pl := big.NewInt(1)
    i := big.NewInt(1)
    for {
        l = i
        i.Add(i, pl)
        pl = l
        index++
        if len(i.String()) == 1000 {
            break
        }
    }
    fmt.Println(i, "\nindex: ", index)
}
Naturally this did not generate the correct answer, and in the process of determining why, I realized I had inadvertently found a neat way to generate powers of 2. I made the following changes, and this did generate the correct result.
package main

import (
    "fmt"
    "math/big"
)

func main() {
    index := 2
    l := new(big.Int)
    pl := big.NewInt(1)
    i := big.NewInt(1)
    for {
        l.Set(i)
        i.Add(i, pl)
        pl.Set(l)
        index++
        if len(i.String()) == 1000 {
            break
        }
    }
    fmt.Println(i, "\nindex: ", index)
}
My question is: what is happening in the first example that causes each big.Int variable to be set to the value of i, and why did this not generate an error if it was not the correct way to assign a big.Int value? Is l = i, etc. a legitimate big.Int operation that is simply incorrect for this situation?
The lines
l = i
and
pl = l
aren't doing what you think they are.
l, pl, and i are pointers, and assigning them to each other copies the pointer value, not the big.Int value.
After executing l = i, l is now the same pointer value as i, pointing to the same big.Int. When you use l.Set(i), it sets l's big.Int value to i's big.Int value, but l and i still point to two separate values.
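A minimal sketch of the difference (values chosen just to illustrate the aliasing):

package main

import (
    "fmt"
    "math/big"
)

func main() {
    a := big.NewInt(1)
    b := a // b and a now point to the same big.Int
    b.Add(b, big.NewInt(1))
    fmt.Println(a, b) // 2 2: mutating through b changed a too

    c := big.NewInt(1)
    d := new(big.Int).Set(c) // d is a separate big.Int holding c's value
    d.Add(d, big.NewInt(1))
    fmt.Println(c, d) // 1 2: c is untouched
}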
I was wondering, how do you convert a base-10 number from one base to another without using strconv in Go?
Could you please give me some advice?
package main

import (
    "fmt"
    "math/big"
)

func main() {
    fmt.Println(big.NewInt(1000000000000).Text(62))
}
Use the math package and a log identity:

log_77(x) = log(x) / log(77)
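A minimal sketch of that identity in Go, assuming you want, say, the number of digits x would have in base 77:

package main

import (
    "fmt"
    "math"
)

func main() {
    x := 1000000.0
    // log_77(x) = log(x) / log(77)
    log77 := math.Log(x) / math.Log(77)
    fmt.Println(int(log77) + 1) // 4: the number of base-77 digits of x
}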
This is probably cheating, but I guess you could look at the implementation of strconv.FormatInt and build some of your own code using that as an example. That way you aren't using it directly; you've implemented it yourself.
You can use this function to convert any decimal number to any base with the character set of your choice.
import (
    "bytes"
    "strings"
)

// encode appends nb, written in the given base (character set), to buf.
func encode(nb uint64, buf *bytes.Buffer, base string) {
    l := uint64(len(base))
    if nb/l != 0 {
        encode(nb/l, buf, base) // emit higher-order digits first
    }
    buf.WriteByte(base[nb%l])
}

// decode parses enc as a number written in the given base.
func decode(enc, base string) uint64 {
    var nb uint64
    lbase := len(base)
    le := len(enc)
    for i := 0; i < le; i++ {
        mult := 1
        for j := 0; j < le-i-1; j++ {
            mult *= lbase // lbase raised to this digit's position
        }
        nb += uint64(strings.IndexByte(base, enc[i]) * mult)
    }
    return nb
}
You can use it like this:
// encoding
var buf bytes.Buffer
encode(100, &buf, "0123456789abcdef")
fmt.Println(buf.String())
// 64
// decoding
val := decode("64", "0123456789abcdef")
fmt.Println(val)
// 100
When printing out some values from a map of structs, I see certain float64 values in an alternative notation. The test passes, but how do you read this notation (4e-06)? Is this value indeed the same as "0.000004"?
package main

import (
    "fmt"
    "strconv"
    "testing"
)

func TestXxx(t *testing.T) {
    num := fmt.Sprintf("%f", float64(1.225788)-float64(1.225784)) // 0.000004
    f, _ := strconv.ParseFloat(num, 64)
    if f == 0.000004 {
        t.Log("Success")
    } else {
        t.Error("Not Equal", num)
    }
    if getFloat(f) == 0.000004 {
        t.Log("Success")
    } else {
        t.Error("Fail", getFloat(f))
    }
}

func getFloat(f float64) float64 {
    fmt.Println("My Float:", f) // 4e-06
    return f
}
The notation is called scientific notation, and it is a convenient way to print very small or very large numbers in a compact, short form.
It has the form

m × 10^n

(m times ten raised to the power of n).
In programming languages it is written / printed as men: the mantissa m, the letter e, then the exponent n.
See Spec: Floating-point literals.
In your number, 4e-06, m = 4 and n = -6, which means 4 × 10^-6, which equals 0.000004.
In order to print your floats in a regular way you can do something like this example:
package main

import (
    "fmt"
    "strconv"
)

func main() {
    a, _ := strconv.ParseFloat("0.000004", 64)
    b, _ := strconv.ParseFloat("0.0000000004", 64)
    c := "10.0004"
    cc, _ := strconv.ParseFloat(c, 64)
    fmt.Printf("%.6f\n", a)  // 6 digits after the point
    fmt.Printf("%.10f\n", b) // 10 digits after the point
    fmt.Printf("%.4f\n", cc) // 4 digits after the point
}
Output:
0.000004
0.0000000004
10.0004
It is the same number. You can use fmt.Printf("My Float: %.6f\n", f) if you don't like the scientific notation. (This format requests that 6 digits be printed after the decimal point.)
I currently have the following codes for my fibonacci calculations. I'm trying to calculate large numbers, but it appears once it gets to 100, the calculations are off. For fib(100), my code returns 3736710778780434371, but when I look at other sources, it tells me the correct calculation should be 354224848179261915075. Is there a problem in my code or does it have to do with my computer hardware or something else?
package main

import "fmt"

func fib(N uint) uint {
    table := make([]uint, N+1)
    table[0] = 0
    table[1] = 1
    for i := uint(2); i <= N; i++ {
        table[i] = table[i-1] + table[i-2]
    }
    return table[N]
}

func main() {
    fmt.Println(fib(100))
}
You're hitting an integer overflow! You can only calculate using a uint up to the size of a uint; once you go beyond its bounds, it will (silently) wrap back round again.
In your case, it looks as though a uint is 64 bits long. (Its size depends on the platform you're running on.) That means that you will be able to store values up to 2^64 - 1. If you then add one more, it'll wrap back to 0, and won't return an error.
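A quick way to see the wrap (a minimal sketch):

package main

import (
    "fmt"
    "math"
)

func main() {
    var x uint64 = math.MaxUint64 // 2^64 - 1
    fmt.Println(x + 1)            // 0: the addition silently wraps around
}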
If you convert the answer you're getting, and the right answer, into hex, then you'll see that this is the case. You're ending up with
33DB76A7C594BFC3
whereas the right answer is
1333DB76A7C594BFC3
Note that your answer is correct as far as it goes... it just doesn't go far enough. You've only got the lower 64 bits of the answer; you're missing the other 0x13 × 2^64 (that is, 19 × 2^64).
To correct it, you'll need to use an arbitrary-size integer from the math/big package instead of a uint.
Here is a version using big.Int which produces the correct answer (playground)
package main

import (
    "fmt"
    "math/big"
)

func fib(N uint) *big.Int {
    table := make([]*big.Int, N+1)
    table[0] = new(big.Int).SetInt64(0)
    table[1] = new(big.Int).SetInt64(1)
    for i := uint(2); i <= N; i++ {
        table[i] = new(big.Int).Add(table[i-1], table[i-2])
    }
    return table[N]
}

func main() {
    fmt.Println(fib(100))
}
Which produces
354224848179261915075
I am trying to generate random numbers (integers) in Go, to no avail. I found the rand package in crypto/rand, which seems to be what I want, but I can't tell from the documentation how to use it. This is what I'm trying right now:
b := []byte{}
something, err := rand.Read(b)
fmt.Printf("something = %v\n", something)
fmt.Printf("err = %v\n", err)
But unfortunately this always outputs:
something = 0
err = <nil>
Is there a way to fix this so that it actually generates random numbers? Alternatively, is there a way to set the upper bound on the random numbers this generates?
Depending on your use case, another option is the math/rand package. Don't do this if you're generating numbers that need to be completely unpredictable. It can be helpful if you need to get results that are reproducible, though -- just pass in the same seed you passed in the first time.
Here's the classic "seed the generator with the current time and generate a number" program:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    rand.Seed(time.Now().Unix())
    fmt.Println(rand.Int())
}
crypto/rand provides only a binary stream of random data, but you can read integers from it using encoding/binary. (Incidentally, this is also why your snippet prints something = 0: rand.Read fills at most len(b) bytes, and the slice you pass has length zero.)
package main

import (
    "crypto/rand"
    "encoding/binary"
)

func main() {
    var n int32
    binary.Read(rand.Reader, binary.LittleEndian, &n)
    println(n)
}
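crypto/rand also provides an Int function, which covers the upper-bound part of the question: it returns a uniform random value in [0, max). A minimal sketch:

package main

import (
    "crypto/rand"
    "fmt"
    "math/big"
)

func main() {
    // rand.Int returns a uniform, cryptographically random value in [0, max).
    n, err := rand.Int(rand.Reader, big.NewInt(100))
    if err != nil {
        panic(err)
    }
    fmt.Println(n)
}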
As of 1 April 2012, after the release of the stable version of the language (Go 1), you can do the following:
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    rand.Seed(time.Now().UnixNano()) // seed with the current time in nanoseconds
    fmt.Println(rand.Intn(100))      // an int up to but not including 100
}
You can also develop your own random number generator, perhaps based upon a simple "desert island PRNG", a Linear Congruential Generator. Also, look up L'Ecuyer (1999), Mersenne Twister, or Tausworthe generator...
https://en.wikipedia.org/wiki/Pseudorandom_number_generator
(Avoid RANDU: it was popular in the 1960s, but the random numbers it generates fall on 15 hyperplanes in 3-space.)
package pmPRNG

import "errors"

const (
    Mersenne31    = 2147483647         // 2^31 - 1
    Mersenne31Inv = 1.0 / 2147483647.0 // 4.656612875e-10
    // a = 16807 is the classic Park-Miller multiplier; 48271 is the improved one
    a = 48271
)

// PRNGStream is one stream of the generator; each stream gets its own seed.
type PRNGStream struct {
    state int
}

func PRNGStreamNew(seed int) *PRNGStream {
    prng := &PRNGStream{}
    prng.SetSeed(seed)
    return prng
}

// SetSeed enforces an integer seed in [1, 2^31-1].
func (r *PRNGStream) SetSeed(seed int) error {
    var err error
    if seed < 1 || seed > Mersenne31 {
        err = errors.New("seed OOB")
    }
    if seed > Mersenne31 {
        seed = seed % Mersenne31
    }
    if seed < 1 {
        seed = 1
    }
    r.state = seed
    return err
}

// Dig is the Park-Miller "desert island generator": it advances
// x[i] = (a*x[i-1]) % m and returns the result scaled into (0, 1).
func (r *PRNGStream) Dig() float32 {
    xprev := r.state                  // x[i-1]
    xnext := (a * xprev) % Mersenne31 // x[i] = (a*x[i-1]) % m
    r.state = xnext                   // x[i-1] = x[i]
    return float32(xnext) * Mersenne31Inv // convert Ui to Ri
}

// Rand is the same update with an explicit 64-bit intermediate,
// so the multiplication cannot overflow on 32-bit platforms.
func (r *PRNGStream) Rand() float32 {
    r.state = int(uint64(r.state) * a % Mersenne31)
    return float32(r.state) * Mersenne31Inv
}
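Usage might look like this (a minimal sketch, assuming the package above):

prng := pmPRNG.PRNGStreamNew(42)
fmt.Println(prng.Dig()) // a float32 in (0, 1)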
A few relevant links:
https://en.wikipedia.org/wiki/Lehmer_random_number_generator
You might use this update for x[i+1] instead of the one above:

val = ((state * 1103515245) + 12345) & 0x7fffffff

(basically, different values of a, c, m)
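In Go, that update could be sketched as a hypothetical helper (the constants are the classic glibc-style LCG parameters, not part of any standard package):

// lcgNext computes x[i+1] = (a*x[i] + c) mod 2^31
// with a = 1103515245 and c = 12345.
func lcgNext(state uint32) uint32 {
    return (state*1103515245 + 12345) & 0x7fffffff
}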
https://www.redhat.com/en/blog/understanding-random-number-generators-and-their-limitations-linux
https://www.iro.umontreal.ca/~lecuyer/myftp/papers/handstat.pdf
https://www.math.utah.edu/~alfeld/Random/Random.html
https://learn.microsoft.com/en-us/archive/msdn-magazine/2016/august/test-run-lightweight-random-number-generation