How can I generate a random 64-bit unsigned integer in Go?
First I need to call
rand.Seed(0)
and then I need a function that returns a uint64 with the following signature:
func random(min, max uint64) uint64 {
}
The function above should return a random 64-bit unsigned integer in the
range [min, max] (min and max included).
I'm not sure why you are being downvoted. I think you are worried about the case where max - min is greater than MaxInt64, in which case rand.Int63n would fail, as you have remarked. I would handle that case separately.
const maxInt64 uint64 = 1 << 63 - 1
func random(min, max uint64) uint64 {
    return randomHelper(max - min) + min
}

func randomHelper(n uint64) uint64 {
    if n < maxInt64 {
        return uint64(rand.Int63n(int64(n+1)))
    }
    x := rand.Uint64()
    for x > n {
        x = rand.Uint64()
    }
    return x
}
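A quick usage sketch, assuming the two functions above plus the fmt, math and math/rand imports (the sample bounds are my own):
rand.Seed(0)                           // as requested in the question; a fixed seed gives a repeatable sequence
fmt.Println(random(10, 20))            // always in [10, 20]
fmt.Println(random(0, math.MaxUint64)) // covers the full uint64 range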
I keep getting the error "cannot use a (type int) as type float64 in argument to math.Pow, cannot use x (type int) as type float64 in argument to math.Pow,
invalid operation: math.Pow(a, x) % n (mismatched types float64 and int)"
func pPrime(n int) bool {
    var nm1 int = n - 1
    var x int = nm1/2
    a := 1;
    for a < n {
        if (math.Pow(a, x)) % n == nm1 {
            return true
        }
    }
    return false
}
func powInt(x, y int) int {
    return int(math.Pow(float64(x), float64(y)))
}
In case you need to reuse it and keep it a little cleaner.
If your inputs are int and the output is always expected to be int, then you're dealing with machine-sized integers (64-bit on most modern platforms). It's more efficient to write your own function to handle this multiplication rather than using math.Pow, which, as mentioned in the other answers, operates on and returns float64 values.
Here's a benchmark comparison for 15^15 (a value well beyond the 32-bit range, but one that still fits in an int64):
// IntPow calculates n to the mth power. Since the result is an int, it is assumed that m is a positive power
func IntPow(n, m int) int {
    if m == 0 {
        return 1
    }
    result := n
    for i := 2; i <= m; i++ {
        result *= n
    }
    return result
}
// MathPow calculates n to the mth power with the math.Pow() function
func MathPow(n, m int) int {
    return int(math.Pow(float64(n), float64(m)))
}
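The benchmark functions aren't shown in the answer; presumably they look roughly like this (my reconstruction, placed in a _test.go file of the pow package):

package pow

import "testing"

var sink int // assign the results so the compiler can't optimize the calls away

func BenchmarkIntPow15(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sink = IntPow(15, 15)
    }
}

func BenchmarkMathPow15(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sink = MathPow(15, 15)
    }
}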
The result:
go test -cpu=1 -bench=.
goos: darwin
goarch: amd64
pkg: pow
BenchmarkIntPow15 195415786 6.06 ns/op
BenchmarkMathPow15 40776524 27.8 ns/op
I believe the best solution is to write your own function similar to IntPow(n, m int) shown above. My benchmarks show that it runs more than 4x faster on a single CPU core compared to using math.Pow.
Since nobody has mentioned it: an efficient (logarithmic-time) way to compute Pow(x, n) for integers x and n, if you want to implement it yourself, is as follows:
// Assumption: n >= 0
func PowInts(x, n int) int {
    if n == 0 {
        return 1
    }
    if n == 1 {
        return x
    }
    y := PowInts(x, n/2)
    if n % 2 == 0 {
        return y*y
    }
    return x*y*y
}
If you want the exact exponentiation of integers, use (*big.Int).Exp. You're likely to overflow int64 pretty quickly with powers larger than two.
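For instance, a minimal sketch with math/big (the sample base and exponent are my own):

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // 2^100 overflows both int64 and uint64; big.Int computes it exactly.
    // The nil third argument means "no modulus".
    result := new(big.Int).Exp(big.NewInt(2), big.NewInt(100), nil)
    fmt.Println(result) // 1267650600228229401496703205376
}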
I'm drawing bar charts and I've come across a tricky problem: how to programmatically set the max value for the y axis label depending on the max value of a given series. So if you had a bar with a value of 7, you might want the y axis to go up to 10.
My approach is not ideal but works like this:
Get a number to round, like 829
Count the number of digits (3)
Use a loop to convert to a string of 0s ("000")
Add a 1 to the start of the string then convert to a float (1000)
Find the difference (1000 - 829 = 171)
Get the first digit of the difference (1) and then add that to the first digit of the float, with the remaining set to zero ("900"), then convert to a number (900)
This means that 725 will see a y axis max label number of 800, and 829 of 900
My code works, but I feel like it's a piece of crap with a hacky approach
I have to code for big numbers. For example, if the float I want to find the max value for is >10000 then take the first two digits, and add 1000 to it. If >100,000 add 10,000
How can I improve here? I'm a little stuck, is my idea of converting to strings even right?!
Full code here:
package main

import (
    "fmt"
    "strconv"
)

func main() {
    myFloat := 899175.0
    x := getMaxYAxisValueForChart(myFloat)
    fmt.Println("The number to find the maximum value for is: ", myFloat)
    fmt.Println("This should be the max value for the y axis: ", x)
}

func getMaxYAxisValueForChart(float float64) (YAxisMaximum float64) {
    // Convert to string with no decimals
    floatAsString := fmt.Sprintf("%.f", float)
    // Get length of the string float
    floatAsStringLength := len(floatAsString)
    // For each digit in the string, make a zero-string
    stringPowerTen := "0"
    for i := 1; i < floatAsStringLength; i++ {
        stringPowerTen += "0"
    }
    // Add a 1 to the 0 string to get the difference from the float
    stringPowerTenWithOne := "1" + stringPowerTen
    // Convert the number string to a float
    convertStringPowerTenToFloat := ConvertStringsToFloat(stringPowerTenWithOne)
    // Get the difference from the denominator from the numerator
    difference := convertStringPowerTenToFloat - float
    // We want to isolate the first digit to check how far the float is (100 is far from 1000) and then correct if so
    floatAsStringDifference := fmt.Sprintf("%.f", difference)
    runes := []rune(floatAsStringDifference)
    floatAsStringDifferenceFirstDigit := string(runes[0])
    // For the denominator we want to take away the difference that is rounded to the nearest ten, hundred etc
    runes = []rune(stringPowerTen)
    differenceLastDigitsAsString := ""
    if difference < 10 {
        differenceLastDigitsAsString = "1"
    } else if difference < 30 && difference < 100 {
        differenceLastDigitsAsString = "0"
    } else {
        differenceLastDigitsAsString = floatAsStringDifferenceFirstDigit + string(runes[1:])
    }
    // Convert the number difference string from total to a float
    convertDifferenceStringPowerTenToFloat := ConvertStringsToFloat(differenceLastDigitsAsString)
    YAxisMaximum = convertStringPowerTenToFloat - convertDifferenceStringPowerTenToFloat
    // If float is less than 10,000
    if float < 10000 && (YAxisMaximum-float >= 500) {
        YAxisMaximum = YAxisMaximum - 500
    }
    if float < 10000 && (YAxisMaximum-float < 500) {
        YAxisMaximum = YAxisMaximum
    }
    // If number bigger than 10,000 then get the nearest 1,000
    if float > 10000 {
        runes = []rune(floatAsString)
        floatAsString = string(runes[0:2])
        runes = []rune(stringPowerTen)
        stringPowerTen = string(runes[2:])
        runes = []rune(stringPowerTenWithOne)
        stringPowerTenWithOne = string(runes[0:(len(stringPowerTenWithOne) - 2)])
        YAxisMaximum = ConvertStringsToFloat(floatAsString+stringPowerTen) + ConvertStringsToFloat(stringPowerTenWithOne)
    }
    if float > 10000 {
        runes = []rune(floatAsString)
        floatAsString = string(runes[0:2])
        runes = []rune(stringPowerTen)
        stringPowerTen = string(runes[:])
        runes = []rune(stringPowerTenWithOne)
        stringPowerTenWithOne = string(runes[0:(len(stringPowerTenWithOne))])
        YAxisMaximum = ConvertStringsToFloat(floatAsString+stringPowerTen) + ConvertStringsToFloat(stringPowerTenWithOne)
    }
    return YAxisMaximum
}

func ConvertStringsToFloat(stringToConvert string) (floatOutput float64) {
    floatOutput, Error := strconv.ParseFloat(stringToConvert, 64)
    if Error != nil {
        fmt.Println(Error)
    }
    return floatOutput
}
Here is the solution based on Matt Timmermans's answer, converted to work in Go:
func testing(float float64) (YAxisMaximum float64) {
    place := 1.0
    for float >= place*10.0 {
        place *= 10.0
    }
    return math.Ceil(float/place) * place
}
Wow, that's a pretty complicated procedure you have. This is how I would do it if the numbers aren't enormous. I don't know Go, so I'm going to guess at how to write it in that language:
func getMaxYAxisValueForChart(float float64) {
    place := 1.0;
    while float >= place*10.0 {
        place *= 10.0;
    }
    return math.Ceil(float/place) * place;
}
You can get the power-of-ten magnitude of a number using math.Log10:
magnitude := math.Pow(10, float64(int(math.Log10(value))))
Use that to divide the number down, calculate ceiling and then scale it back up.
No strings, no while loops.
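In Go, that approach could look roughly like this (the function name is mine, and it assumes value is positive):

package main

import (
    "fmt"
    "math"
)

// roundUpToMagnitude rounds value up to the next multiple of its own power of ten.
func roundUpToMagnitude(value float64) float64 {
    magnitude := math.Pow(10, float64(int(math.Log10(value))))
    return math.Ceil(value/magnitude) * magnitude
}

func main() {
    fmt.Println(roundUpToMagnitude(829))    // 900
    fmt.Println(roundUpToMagnitude(899175)) // 900000
}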
Take the length of the string and calculate 10 to the power of that length.
Or, better, take the log base 10, get the integer part, add 1, and then raise 10 to that power :)
package main

import (
    "fmt"
    "math"
)

// func PowerScale(x int) int64 {
//     return int64(math.Pow(10, float64(len(fmt.Sprintf("%d", x)))))
// }

func PowerScale(x int) int64 {
    return int64(math.Pow(10, float64(int(math.Log10(float64(x))+1))))
}

func main() {
    fmt.Println(PowerScale(829))
    fmt.Println(PowerScale(7))
}
Since 829 is an int, or can be converted to one, here is a pure integer solution:
func getMaxYAxisValueForChart(n int64) int64 {
    base := int64(10)
    for n > base*10 {
        base = 10 * base
    }
    // Round n up to the next multiple of base (e.g. 829 -> 900, 7 -> 10).
    return n + (base-n%base)%base
}
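A quick check with the values from the question (these calls are my own, assuming the function above is in scope):
fmt.Println(getMaxYAxisValueForChart(7))   // 10
fmt.Println(getMaxYAxisValueForChart(725)) // 800
fmt.Println(getMaxYAxisValueForChart(829)) // 900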
I am running into an issue which seems to be related to the int32 vs int data type. My program returns different values in different environments.
For example, on the Go Playground I notice the value returned is -4 (which is the expected value). But the same code on LeetCode with the same input returns 4294967292, yet when I print the result there, I get -4 (see the output added later).
I tried casting with int32(res), but that didn't help. I also didn't find anything directly related in a textbook. Please help me understand why this differs between the Go Playground and LeetCode.
https://play.golang.org/p/qXMd9frlhbe
package main

import (
    "fmt"
)

func main() {
    fmt.Printf("%v", singleNumber([]int{-2,-2,1,1,-3,1,-3,-3,-4,-2}))
}

func singleNumber(nums []int) int {
    sum := make([]int, 32)
    for _, v := range nums {
        for i := 0; i < 32; i++ {
            if sum[i] != 0 {
                sum[i] += 1 & (v >> uint32(i))
            } else {
                sum[i] = 1 & (v >> uint32(i))
            }
        }
    }
    res := 0
    for k, v := range sum {
        if (v%3) != 0 {
            res |= (v%3) << uint32(k)
        }
    }
    fmt.Printf("res %+v\n", res)
    return res
}
The same code on LeetCode gives this output:
Input:
[-2,-2,1,1,-3,1,-3,-3,-4,-2]
Output:
4294967292
Expected:
-4
Stdout:
res -4
The textbook you are looking for is
The Go Programming Language Specification
Numeric types
A numeric type represents sets of integer or floating-point values.
The predeclared architecture-independent numeric types are:
uint32 set of all unsigned 32-bit integers (0 to 4294967295)
uint64 set of all unsigned 64-bit integers (0 to 18446744073709551615)
int32 set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
Check the size of type int. On the Go Playground it's 4 bytes or 32 bits.
package main

import (
    "fmt"
    "runtime"
    "unsafe"
)

func main() {
    fmt.Println("arch", runtime.GOARCH)
    fmt.Println("int", unsafe.Sizeof(int(0)))
}
Playground: https://play.golang.org/p/2A6ODvhb1Dx
Output (Playground):
arch amd64p32
int 4
Run the program in your (LeetCode) environment. It's likely 8 bytes or 64 bits.
For example, in my environment,
Output (Local):
arch amd64
int 8
Here are some fixes to your code,
package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Println(runtime.GOARCH)
    fmt.Printf("%v\n", singleNumber([]int{-2, -2, 1, 1, -3, 1, -3, -3, -4, -2}))
}

func singleNumber(nums []int) int {
    sum := make([]int, 64)
    for _, v := range nums {
        for i := range sum {
            sum[i] += 1 & (v >> uint(i))
        }
    }
    res := 0
    for k, v := range sum {
        if (v % 3) != 0 {
            res |= (v % 3) << uint(k)
        }
    }
    fmt.Printf("res %+v\n", res)
    return res
}
Playground: https://play.golang.org/p/kaoSuesu2Oj
Output (Playground):
amd64p32
res -4
-4
Output (Local):
amd64
res -4
-4
Here is my Go code: http://play.golang.org/p/CDUagFZ-rk
package main

import "fmt"

func main() {
    var max int = 0
    for i := 0; i < 1000000; i++ {
        var len int = GetCollatzSeqLen(i)
        if len > max {
            max = len
        }
    }
    fmt.Println(max)
}

func GetCollatzSeqLen(n int) int {
    var len int = 1
    for n > 1 {
        len++
        if n%2 == 0 {
            n = n / 2
        } else {
            n = 3*n + 1
        }
    }
    return len
}
On my local machine, when I run the program, I get 525 as the output. When I run it on the Go Playground, the output is 476.
I am wondering what's different.
It's because of the implementation-specific size of int, 32 or 64 bits. Use int64 for consistent results. For example,
package main

import "fmt"

func main() {
    var max int64 = 0
    for i := int64(0); i < 1000000; i++ {
        var len int64 = GetCollatzSeqLen(i)
        if len > max {
            max = len
        }
    }
    fmt.Println(max)
}

func GetCollatzSeqLen(n int64) int64 {
    var len int64 = 1
    for n > 1 {
        len++
        if n%2 == 0 {
            n = n / 2
        } else {
            n = 3*n + 1
        }
    }
    return len
}
Output:
525
Playground: http://play.golang.org/p/0Cdic16edP
The Go Programming Language Specification
Numeric types
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic.
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
To see the implementation-specific size of int, run this program.
package main

import (
    "fmt"
    "runtime"
    "strconv"
)

func main() {
    fmt.Println(
        "For "+runtime.GOARCH+" the implementation-specific size of int is",
        strconv.IntSize, "bits.",
    )
}
Output:
For amd64 the implementation-specific size of int is 64 bits.
On Go Playground: http://play.golang.org/p/7O6dEdgDNd
For amd64p32 the implementation-specific size of int is 32 bits.
In the following code, I iterate over a string rune by rune, but I'll actually need an int to perform some checksum calculation. Do I really need to encode the rune into a []byte, then convert it to a string and then use Atoi to get an int out of the rune? Is this the idiomatic way to do it?
// The string `s` only contains digits.
var factor int
for i, c := range s[:12] {
    if i % 2 == 0 {
        factor = 1
    } else {
        factor = 3
    }
    buf := make([]byte, 1)
    _ = utf8.EncodeRune(buf, c)
    value, _ := strconv.Atoi(string(buf))
    sum += value * factor
}
On the playground: http://play.golang.org/p/noWDYjn5rJ
The problem is simpler than it looks. You convert a rune value to an int value with int(r). But your code implies you want the integer value out of the ASCII (or UTF-8) representation of the digit, which you can trivially get with r - '0' as a rune, or int(r - '0') as an int. Be aware that out-of-range runes will corrupt that logic.
For example, sum += (int(c) - '0') * factor,
package main

import (
    "fmt"
    "strconv"
    "unicode/utf8"
)

func main() {
    s := "9780486653556"
    var factor, sum1, sum2 int
    for i, c := range s[:12] {
        if i%2 == 0 {
            factor = 1
        } else {
            factor = 3
        }
        buf := make([]byte, 1)
        _ = utf8.EncodeRune(buf, c)
        value, _ := strconv.Atoi(string(buf))
        sum1 += value * factor
        sum2 += (int(c) - '0') * factor
    }
    fmt.Println(sum1, sum2)
}
Output:
124 124
Why don't you just use string(rune)?
s := "12345678910"
var factor, sum int
for i, x := range s {
    if i%2 == 0 {
        factor = 1
    } else {
        factor = 3
    }
    xstr := string(x) // x is the rune converted to a string
    xint, _ := strconv.Atoi(xstr)
    sum += xint * factor
}
fmt.Println(sum)
val, _ := strconv.Atoi(string(v))
where v is a rune. More concise, but the same idea as above.