I'm using Golang to program an Arduino Uno with TinyGo. I am trying to map two value ranges.
One is an encoder with a range of 0-1000 and the other is TinyGo's ADC range of 0-65535. I am reading the ADC value and need to convert it to the encoder's range of 0-1000.
I have tried several things, but the basic issue I'm running into is data types. The formula below, for example, evaluates to 0:
var encoderValue uint16 = 35000
x := float64(1000/65535) * float64(encoderValue)
1000/65535 is an integer division and results in 0. Converting the result to float64 afterwards doesn't help; it will still be 0.0.
Use floating point constant(s):
var encoderValue uint16 = 35000
x := float64(1000.0/65535) * float64(encoderValue)
fmt.Println(x)
This will output (try it on the Go Playground):
534.0657663843748
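If many readings need converting, the mapping can be wrapped in a small helper. A minimal sketch (the mapRange name and the hard-coded bounds are illustrative, not part of the TinyGo API):

package main

import "fmt"

// mapRange linearly maps v from [0, inMax] to [0, outMax].
// Illustrative helper for the ADC-to-encoder conversion above.
func mapRange(v uint16, inMax, outMax float64) float64 {
	return outMax / inMax * float64(v)
}

func main() {
	var adcValue uint16 = 35000
	fmt.Println(mapRange(adcValue, 65535, 1000)) // ≈ 534.066
}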
My code:
step := 10.0
precision := int(math.Log10(1/step))
fmt.Println(precision)
I want precision == -1 but got 0...
Float to integer conversion truncates toward zero, so if your float number is e.g. 0.99 (or -0.99), converting it to an integer will be 0, not 1 (or -1). Here math.Log10(1/step) is not exactly -1 but a value slightly greater than -1, so truncation yields 0.
If you want to round to an integer, you may simply use math.Round() (which returns float64 so you still need to manually convert to int, but the result will be what you expect):
step := 10.0
precision := int(math.Log10(1 / step))
fmt.Println(precision)
precision = int(math.Round(math.Log10(1 / step)))
fmt.Println(precision)
This will output (try it on the Go Playground):
0
-1
If you want to round to a specific fraction (and not to integer), see Golang Round to Nearest 0.05.
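For reference, the usual trick there is to scale by the step, round, then scale back. A minimal sketch, assuming math and fmt are imported (the 0.05 step is just an example):

step := 0.05
x := 7.73
fmt.Println(math.Round(x/step) * step) // 7.75 (possibly with a tiny floating-point error)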
I've written the following code to create a random number between 0.0 and 10.0.
const minRand = 0
const maxRand = 10
v := minRand + rand.Float64()*(maxRand-minRand)
However, I would like to set the granularity to 0.05, so not every digit should be allowed as the least significant decimal; only 0 and 5 should be allowed, e.g.:
the value 7.73 is NOT VALID,
the values 7.7 and 7.75 ARE VALID.
How can I produce such numbers in Go?
You can divide the range by the granularity, get a pseudo-random integer in that range, then multiply by the granularity to scale the result back.
const minRand = 8
const maxRand = 10
v := float64(rand.Intn((maxRand-minRand)/0.05))*0.05 + minRand
fmt.Printf("%.2f\n", v)
This will print:
8.05
8.35
8.35
8.95
8.05
9.90
....
If you don't want to get the same sequence every time, seed the generator, e.g. with rand.Seed(time.Now().UTC().UnixNano()).
From the docs
Seed uses the provided seed value to initialize the default Source to a deterministic state. If Seed is not called, the generator behaves as if seeded by Seed(1). Seed values that have the same remainder when divided by 2^31-1 generate the same pseudo-random sequence. Seed, unlike the Rand.Seed method, is safe for concurrent use.
With lower bounds
const minRand = 0
const maxRand = 10
const stepRand = 0.05
v := float64(rand.Intn((maxRand-minRand)/stepRand))*stepRand + minRand
fmt.Printf("%.2f\n", v)
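For a runnable sketch putting the pieces together (the seeding call is only needed before Go 1.20; since Go 1.20 the global generator is seeded automatically):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	rand.Seed(time.Now().UnixNano()) // optional on Go 1.20+

	const minRand = 0
	const maxRand = 10
	const stepRand = 0.05

	// Pick a random number of steps, then scale back up by the step size.
	for i := 0; i < 5; i++ {
		v := float64(rand.Intn((maxRand-minRand)/stepRand))*stepRand + minRand
		fmt.Printf("%.2f\n", v)
	}
}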
I am trying to implement socks5 proxy server.
Most things are clear according to the RFC, but I'm stuck on interpreting the client's port and writing my own port number as bytes.
I made a function that takes an int and returns 2 bytes. The function first converts the number to its binary representation, then literally splits the bits as a string and converts the halves back to bytes. However, this seems wrong, because if the rightmost bits are 0 they are lost.
Here is the function:
func getBytesOfInt(i int) []byte {
	binary := fmt.Sprintf("%b", i)
	if i < 255 {
		return []byte{byte(i)}
	}
	first := binary[:8]
	last := binary[9:]
	fmt.Println(binary, first, last)
	i1, _ := strconv.ParseInt(first, 2, 64)
	i2, _ := strconv.ParseInt(last, 2, 64)
	return []byte{byte(i1), byte(i2)}
}
Can you please explain how I'm supposed to split the number into 2 bytes and, most importantly, how to convert it back to an integer?
Currently, if you give 1024 to this function, it returns []byte{0x80, 0x0}, which is 128 in decimal, and as you can see the rightmost bits are lost; only a useless 0 is left in the second byte.
Your code has multiple problems. First, :8 and 9: skip an element ([8]); the second slice should start at index 8, see: https://play.golang.org/p/yuhh4ZeJFNL
Also, you should interpret the second byte as the low byte of the int and the first as the high byte, rather than literally cutting the binary string. For example, 4 should be encoded as [0x0, 0x4] instead of [0x4, 0x0], which would decode to 1024.
If you want to keep using strconv you should use:
n := len(binary)
first := binary[:n-8]
last := binary[n-8:]
However, it is very inefficient.
I would suggest b[0], b[1] = byte(i>>8), byte(i&0xff) to encode, and i = int(b[0])<<8 + int(b[1]) to decode.
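Alternatively, since the SOCKS5 port field is just a 2-byte unsigned integer in network (big-endian) byte order, the standard library's encoding/binary package handles both directions. A minimal sketch:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	port := 1024

	// Encode: high byte first, as network byte order requires.
	b := make([]byte, 2)
	binary.BigEndian.PutUint16(b, uint16(port))
	fmt.Printf("% x\n", b) // 04 00

	// Decode back to an integer.
	fmt.Println(int(binary.BigEndian.Uint16(b))) // 1024
}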
I want to convert a float64 number, let's say it's 1.003, to 1003 (integer type). My implementation simply multiplies the float64 by 1000 and converts it to int.
package main
import "fmt"
func main() {
	var f float64 = 1.003
	fmt.Println(int(f * 1000))
}
But when I run that code, I get 1002, not 1003, because 1.003 cannot be stored exactly in a float64 and ends up as 1.002999… in the variable. What is the correct approach to this kind of operation in Golang?
Go spec: Conversions:
Conversions between numeric types
When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).
So basically when you convert a floating-point number to an integer, only the integer part is kept.
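To illustrate the truncation toward zero (assuming fmt is imported):

fmt.Println(int(1.99))  // 1
fmt.Println(int(-1.99)) // -1, truncation is toward zero, not toward negative infinity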
If you just want to avoid errors arising from the value being represented with a finite number of bits, just add 0.5 to the number before converting it to int. No external libraries or function calls (not even from the standard library) are required.
Since float -> int conversion is not rounding but keeping the integer part, this will give you the desired result. Taking into consideration both the possible smaller and greater representation:
1002.9999 + 0.5 = 1003.4999; integer part: 1003
1003.0001 + 0.5 = 1003.5001; integer part: 1003
So simply just write:
var f float64 = 1.003
fmt.Println(int(f * 1000 + 0.5))
To wrap this into a function:
func toint(f float64) int {
	return int(f + 0.5)
}
// Using it:
fmt.Println(toint(f * 1000))
Try them on the Go Playground.
Note:
Be careful when you apply this in case of negative numbers! For example if you have a value of -1.003, then you probably want the result to be -1003. But if you add 0.5 to it:
-1002.9999 + 0.5 = -1002.4999; integer part: -1002
-1003.0001 + 0.5 = -1002.5001; integer part: -1002
So if you have negative numbers, you have to either:
subtract 0.5 instead of adding it
or add 0.5 but subtract 1 from the result
Incorporating this into our helper function:
func toint(f float64) int {
	if f < 0 {
		return int(f - 0.5)
	}
	return int(f + 0.5)
}
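Note that since Go 1.10 the standard library also has math.Round(), which rounds half away from zero and so handles negative values too; the helper above could then be written as (assuming math is imported):

func toint(f float64) int {
	return int(math.Round(f))
}

// Using it:
fmt.Println(toint(1.003 * 1000))  // 1003
fmt.Println(toint(-1.003 * 1000)) // -1003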
As Will mentions, this comes down to how floats are represented on various platforms. Essentially you need to round the float rather than letting the default truncating behavior happen. Before Go 1.10 there was no standard library function for this (math.Round has since been added), probably because there's a lot of possible behavior and it's trivial to implement.
If you knew you'd always have errors of the sort described, where you're slightly below (1299.999999) the value desired (1300.00000) you could use the math library's Ceil function:
f := 1.29999
n := math.Ceil(f*1000)
But what if you have different kinds of floating-point error and want more general rounding behavior? Use the math library's Modf function to separate your floating-point value at the decimal point:
f := 1.29999
f1, f2 := math.Modf(f * 1000)
n := int(f1) // n = 1299
if f2 > .5 {
	n++
}
fmt.Println(n)
You can run a slightly more generalized version of this code in the playground yourself.
This is likely a problem with floating point in general in most programming languages, though some have different implementations than others. I won't go into the intricacies here, but most languages usually have a "decimal" approach, either in the standard library or as a third-party library, to get finer precision.
For instance, I've found the inf.v0 package largely useful. Underlying the library is a Dec struct that holds the exponent and the integer value. Therefore, it's able to hold 1.003 as 1003 * 10^-3. See below for an example:
package main

import (
	"fmt"

	"gopkg.in/inf.v0"
)

func main() {
	// represents 1003 * 10^-3
	someDec := inf.NewDec(1003, 3)

	// multiply someDec by 1000 * 10^0,
	// which translates to 1003 * 10^-3 * 1000 * 10^0
	someDec.Mul(someDec, inf.NewDec(1000, 0))

	// inf.RoundHalfUp rounds half up in the 0th scale, e.g. 0.5 rounds to 1
	value, ok := someDec.Round(someDec, 0, inf.RoundHalfUp).Unscaled()
	fmt.Println(value, ok)
}
Hope this helps!
The problem with the following code:
var x uint64 = 18446744073709551615
var y int64 = int64(x)
is that y is -1. Without loss of information, is the only way to convert between these two number types to use an encoder and decoder?
buff bytes.Buffer
Encoder(buff).encode(x)
Decoder(buff).decode(y)
Note, I am not attempting a straight numeric conversion in your typical case. I am more concerned with maintaining the statistical properties of a random number generator.
Your conversion does not lose any information; all the bits are untouched. It is just that:
uint64(18446744073709551615) = 0xFFFFFFFFFFFFFFFF
int64(-1) = 0xFFFFFFFFFFFFFFFF
Try:
var x uint64 = 18446744073709551615 - 3
and you will have y = -4.
For instance: playground
var x uint64 = 18446744073709551615 - 3
var y int64 = int64(x)
fmt.Printf("%b\n", x)
fmt.Printf("%b or %d\n", y, y)
Output:
1111111111111111111111111111111111111111111111111111111111111100
-100 or -4
Seeing -1 would be consistent with the process running as 32-bit.
See for instance the Go 1.1 release notes (which made int and uint 64 bits wide on 64-bit platforms):
x := ^uint32(0) // x is 0xffffffff
i := int(x) // i is -1 on 32-bit systems, 0xffffffff on 64-bit
fmt.Println(i)
Using fmt.Printf("%b\n", y) can help to see what is going on (see ANisus' answer)
As it turned out, the OP wheaties confirmed (in the comments) that it was run initially as 32-bit (hence this answer), but then realized 18446744073709551615 is 0xffffffffffffffff (-1) anyway: see ANisus' answer.
The types uint64 and int64 can both represent 2^64 discrete integer values.
The difference between the two is that uint64 holds only non-negative integers (0 through 2^64-1), whereas int64 holds both negative and positive integers, using 1 bit to hold the sign (-2^63 through 2^63-1).
As others have said, if your generator is producing 0xffffffffffffffff, uint64 will represent this as the raw integer (18,446,744,073,709,551,615) whereas int64 will interpret the two's complement value and return -1.
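A small sketch that demonstrates the conversion is lossless, by converting back and comparing (rand.Uint64 is just used here to get an arbitrary bit pattern):

package main

import (
	"fmt"
	"math/rand"
)

func main() {
	x := rand.Uint64() // an arbitrary 64-bit pattern, possibly > math.MaxInt64
	y := int64(x)      // reinterprets the same 64 bits as a signed value
	back := uint64(y)  // converting back yields the original value

	fmt.Printf("%016x\n%016x\n", x, back) // identical bit patterns
	fmt.Println(x == back)                // true
}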