How to format floating point numbers into a string using Go

Using Go, I'm trying to find the "best" way to format a floating point number into a string. I've looked for examples, but I cannot find anything that specifically answers my question. The number of decimal places may vary but will be known (e.g. 2, 4, or zero).
An example of what I want to achieve is below.
Based on the example below should I use fmt.Sprintf() or strconv.FormatFloat() or something else?
And, what is the normal usage of each and differences between each?
I also don't understand the significance of the last argument (32 or 64) in the following, which currently has 32:
strconv.FormatFloat(float64(fResult), 'f', 2, 32)
Example:
package main

import (
	"fmt"
	"strconv"
)

func main() {
	var (
		fAmt1 float32 = 999.99
		fAmt2 float32 = 222.22
	)
	var fResult float32 = float32(int32(fAmt1*100)+int32(fAmt2*100)) / 100
	var sResult1 string = fmt.Sprintf("%.2f", fResult)
	println("Sprintf value = " + sResult1)
	var sResult2 string = strconv.FormatFloat(float64(fResult), 'f', 2, 32)
	println("FormatFloat value = " + sResult2)
}

Both fmt.Sprintf and strconv.FormatFloat use the same string formatting routine under the covers, so should give the same results.
If the precision that the number should be formatted to is variable, then it is probably easier to use FormatFloat, since it avoids the need to construct a format string as you would with Sprintf. If it never changes, then you could use either.
The last argument to FormatFloat controls how values are rounded. From the documentation:
It rounds the result assuming that the original was obtained from a floating-point value of bitSize bits (32 for float32, 64 for float64)
So if you are working with float32 values as in your sample code, then passing 32 is correct.
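For illustration, a minimal sketch showing both calls with a precision that is only known at runtime (the variable prec is mine, not from the question):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	var fResult float32 = 1222.21
	prec := 2 // number of decimal places, known but variable

	// Sprintf needs the precision spliced into the verb; "%.*f" reads it
	// from the argument list.
	s1 := fmt.Sprintf("%.*f", prec, fResult)

	// FormatFloat takes the precision as a plain int argument, and the
	// final 32 says the value originated as a float32.
	s2 := strconv.FormatFloat(float64(fResult), 'f', prec, 32)

	fmt.Println(s1) // 1222.21
	fmt.Println(s2) // 1222.21
}

Both print the same string, which matches the point above that they share the same formatting routine.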

With Go 1.12 (February 2019) and the project cespare/ryu, you will have a faster alternative to strconv:
Ryu is a Go implementation of Ryu, a fast algorithm for converting floating-point numbers to strings.
It is a fairly direct Go translation of Ulf Adams's C library.
The strconv.FormatFloat latency is bimodal because of an infrequently-taken slow path that is orders of magnitude more expensive (issue 15672).
The Ryu algorithm requires several lookup tables.
Ulf Adams's C library implements a size optimization (RYU_OPTIMIZE_SIZE) which greatly reduces the size of the float64 tables in exchange for a little more CPU cost.
For a small fraction of inputs, Ryu gives a different value than strconv does for the last digit.
This is due to a bug in strconv: issue 29491.
Go 1.12 might or might not include that new implementation directly in strconv, but if it does not, you can use this project for faster conversion.
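A minimal usage sketch, assuming the FormatFloat64 helper described in the project README, which is documented as formatting in shortest 'e' style like strconv.FormatFloat(f, 'e', -1, 64):

package main

import (
	"fmt"

	"github.com/cespare/ryu"
)

func main() {
	// Assumption: ryu.FormatFloat64 behaves like
	// strconv.FormatFloat(f, 'e', -1, 64), just faster.
	fmt.Println(ryu.FormatFloat64(0.1)) // "1e-01" (assumed output style)
}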

Related

Why use the + sign in Println?

What's the difference between:
var x float64 = 3.141592
fmt.Println("the value is" + x)
and
var x float64 = 3.141592
fmt.Println("the value is", x)
What does the + mean?
Why is the first one wrong and the second correct?
fmt.Println is a variadic function whose parameters are empty interfaces. Any type satisfies the empty interface, including strings and floats, which is why the second example works.
The first example, however, involves the binary operator +. As https://golang.org/ref/spec#Operators says, binary operators require operands of identical types. This means you can't "add" a float to a string without first explicitly converting it to a string.
In general, this is a decision the Go designers made. If you read the design tenets of Go, I think you'll find this aligns well. But for the purposes of your question, it's sufficient to say that's how it was made to work.
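For completeness, a small sketch of both working forms:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	var x float64 = 3.141592

	// Pass the float as a separate argument; Println accepts any type.
	fmt.Println("the value is", x)

	// Or convert it to a string first, then concatenate with +.
	fmt.Println("the value is " + strconv.FormatFloat(x, 'f', -1, 64))
}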

How do I format a currency with commas and 2 decimal places?

I am trying to format some numbers as a currency, with commas and 2 decimal places. I've found "github.com/dustin/go-humanize" for the commas but it doesn't allow for specifying the number of decimal places. fmt.Sprintf will do the currency and decimal formatting but not the commas.
for _, fl := range []float64{123456.789, 123456.0, 123456.0100} {
	log.Println(humanize.Commaf(fl))
}
Results:
123,456.789
123,456
123,456.01
I am expecting:
$123,456.79
$123,456.00
$123,456.01
That is what humanize.FormatFloat() does:
// FormatFloat produces a formatted number as string based on the following user-specified criteria:
// * thousands separator
// * decimal separator
// * decimal precision
In your case:
FormatFloat("$#,###.##", afloat)
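A runnable sketch of that call (assuming the github.com/dustin/go-humanize import path; the expected outputs follow the format string and are not verified here):

package main

import (
	"fmt"

	"github.com/dustin/go-humanize"
)

func main() {
	for _, fl := range []float64{123456.789, 123456.0, 123456.0100} {
		// "$#,###.##" requests a dollar sign, comma thousands
		// separator, and two decimal places.
		fmt.Println(humanize.FormatFloat("$#,###.##", fl))
	}
	// Expected: $123,456.79, $123,456.00, $123,456.01
}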
That being said, as commented by LenW, float (in Go, float64) is not a good fit for currency.
See floating-point-gui.de.
Using a package like go-inf/inf (previously go/dec, used for instance in this currency implementation) is better.
See Dec.go:
// A Dec represents a signed arbitrary-precision decimal.
// It is a combination of a sign, an arbitrary-precision integer coefficient
// value, and a signed fixed-precision exponent value.
// The sign and the coefficient value are handled together as a signed value
// and referred to as the unscaled value.
That type Dec does include a Format() method.
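A minimal sketch of building and adding Dec values, assuming the gopkg.in/inf.v0 import path and its NewDec(unscaled, scale) constructor (the value is unscaled × 10^-scale):

package main

import (
	"fmt"

	inf "gopkg.in/inf.v0"
)

func main() {
	a := inf.NewDec(99999, 2) // 99999 × 10^-2 = 999.99
	b := inf.NewDec(22222, 2) // 222.22
	sum := new(inf.Dec).Add(a, b)
	fmt.Println(sum.String()) // 1222.21, with no float rounding error
}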
Since July 2015, you now have leekchan/accounting from Kyoung-chan Lee (leekchan) with the same advice:
Please do not use float64 to count money. Floats can have errors when you perform operations on them.
Using big.Rat (< Go 1.5) or big.Float (>= Go 1.5) is highly recommended. (accounting supports float64, but it is just for convenience.)
fmt.Println(ac.FormatMoneyBigFloat(big.NewFloat(123456789.213123))) // "$123,456,789.21"
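The ac value in that line is an accounting.Accounting formatter; a minimal setup sketch, with field names taken from the project README:

package main

import (
	"fmt"
	"math/big"

	"github.com/leekchan/accounting"
)

func main() {
	ac := accounting.Accounting{Symbol: "$", Precision: 2}
	fmt.Println(ac.FormatMoneyBigFloat(big.NewFloat(123456789.213123))) // "$123,456,789.21"
}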
There is a good blog post about why you should never use floats to represent currency here: http://engineering.shopspring.com/2015/03/03/decimal/
From their examples, you can write:

d := New(-12345, -3)
println(d.String())

which will give you:

-12.345

For comparison, plain float formatting rounds for you:

fmt.Printf("%.2f", 12.3456) // output is 12.35
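As a sketch, here is the first question's sum done with that decimal package (assuming the github.com/shopspring/decimal import path):

package main

import (
	"fmt"

	"github.com/shopspring/decimal"
)

func main() {
	a := decimal.NewFromFloat(999.99)
	b := decimal.NewFromFloat(222.22)

	// Decimal addition is exact; StringFixed rounds to a fixed number
	// of decimal places.
	fmt.Println(a.Add(b).StringFixed(2)) // 1222.21
}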

golang: what is the right way to use math.Max on two uint values?

This is what I do; it is extremely ugly.
What is the right way to use math.Max for two uints?
vs.curView.Viewnum = uint(math.Max(float64(args.Viewnum+1), float64(vs.curView.Viewnum)))
The main reason math.Max exists is to ensure some of the special cases of IEEE floating point are handled correctly (positive and negative infinity, NaN and signed zeroes).
These issues are not relevant for simple integers, so you may as well just use the obvious implementation. Something like:
if args.Viewnum+1 > vs.curView.Viewnum {
	vs.curView.Viewnum = args.Viewnum + 1
}
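If you want this as a reusable helper, here is a minimal sketch for plain uints (the name maxUint is mine; note that since Go 1.21 the built-in max covers this case):

// maxUint returns the larger of two uints.
func maxUint(a, b uint) uint {
	if a > b {
		return a
	}
	return b
}

vs.curView.Viewnum = maxUint(args.Viewnum+1, vs.curView.Viewnum)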
Though the question is old, maybe this package will save someone time and effort. It can be obtained via go get and imported by URL, as usual.
Usage:
import (
	"fmt"
	"<Full URL>/go-imath/ix" // Functions for the int type
)
...
fmt.Println(ix.Max(100, 152))                 // Output: 152
fmt.Println(ix.Maxs(234, 55, 180))            // Output: 234
fmt.Println(ix.MaxSlice([]int{2, 29, 8, -1})) // Output: 29

Re-map a number from one range to another

Is there any equivalent in go for the Arduino map function?
map(value, fromLow, fromHigh, toLow, toHigh)
Description
Re-maps a number from one range to another. That is, a value of fromLow would get mapped to toLow, a value of fromHigh to toHigh, values in-between to values in-between, etc.
If not, how would I implement this in go?
Is there any equivalent in go for the Arduino map function?
The standard library, or more specifically the math package, does not offer such a function, no.
If not, how would I implement this in go?
By taking the original code and translating it to Go. C and Go are syntactically very similar, so this task is easy. The manual page for map that you linked gives you the code, and a translation to Go is, as already mentioned, trivial.
Original from the page you linked:
For the mathematically inclined, here's the whole function
long map(long x, long in_min, long in_max, long out_min, long out_max)
{
	return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
You would translate that to something like
func Map(x, in_min, in_max, out_min, out_max int64) int64 {
	return (x-in_min)*(out_max-out_min)/(in_max-in_min) + out_min
}
Here is an example on the go playground.
Note that map is not a valid function name in Go, since map is already a reserved keyword used for defining map types, similar to the []T syntax.
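If you need fractional results, a float64 variant avoids the truncation of the integer version above (a sketch; the name MapRange is mine):

package main

import "fmt"

// MapRange linearly re-maps x from [inMin, inMax] to [outMin, outMax].
func MapRange(x, inMin, inMax, outMin, outMax float64) float64 {
	return (x-inMin)*(outMax-outMin)/(inMax-inMin) + outMin
}

func main() {
	fmt.Println(MapRange(512, 0, 1023, 0, 255)) // ≈ 127.62
}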

Crash when casting the result of arc4random() to Int

I've written a simple Bag class. A Bag is filled with a fixed ratio of Temperature enums. It allows you to grab one at random and automatically refills itself when empty. It looks like this:
class Bag {
	var items = Temperature[]()
	init() {
		refill()
	}
	func grab() -> Temperature {
		if items.isEmpty {
			refill()
		}
		var i = Int(arc4random()) % items.count
		return items.removeAtIndex(i)
	}
	func refill() {
		items.append(.Normal)
		items.append(.Hot)
		items.append(.Hot)
		items.append(.Cold)
		items.append(.Cold)
	}
}
The Temperature enum looks like this:
enum Temperature: Int {
	case Normal, Hot, Cold
}
My GameScene:SKScene has a constant instance property bag:Bag. (I've tried with a variable as well.) When I need a new temperature I call bag.grab(), once in didMoveToView and when appropriate in touchesEnded.
Randomly this call crashes on the if items.isEmpty line in Bag.grab(). The error is EXC_BAD_INSTRUCTION. Checking the debugger shows items is size=1 and [0] = (AppName.Temperature) <invalid> (0x10).
Edit Looks like I don't understand the debugger info. Even valid arrays show size=1 and unrelated values for [0] =. So no help there.
I can't get it to crash isolated in a Playground. It's probably something obvious but I'm stumped.
The arc4random function returns a UInt32. If you get a value higher than Int.max, the Int(...) conversion will crash.
Using
Int(arc4random_uniform(UInt32(items.count)))
should be a better solution.
(Blame the strange crash messages in the Alpha version...)
I found that the best way to solve this is by using rand() instead of arc4random().
The code, in your case, could be:
var i = Int(rand()) % items.count
This method will generate a random Int value between the given minimum and maximum
func randomInt(min: Int, max: Int) -> Int {
	return min + Int(arc4random_uniform(UInt32(max - min + 1)))
}
The crash that you were experiencing is due to the fact that Swift detected an integer overflow at runtime.
Since Int != UInt32, you have to type-cast the input argument of arc4random_uniform before you can compute the random number.
Swift doesn't allow casting from one integer type to another if the result of the cast doesn't fit. E.g. the following code will work fine:
let x = 32
let y = UInt8(x)
Why? Because 32 is a possible value for an int of type UInt8. But the following code will fail:
let x = 332
let y = UInt8(x)
That's because you cannot assign 332 to an unsigned 8 bit int type, it can only take values 0 to 255 and nothing else.
When you do casts in C, the int is simply truncated, which may be unexpected or undesired, as the programmer may not be aware that truncation can take place. Swift handles things a bit differently here: it allows such casts as long as no truncation takes place, but if there is truncation, you get a runtime exception. If you think truncation is okay, then you must do the truncation yourself to let Swift know that this is intended behavior; otherwise Swift must assume it is accidental.
This is even documented (documentation of UnsignedInteger):
Convert from Swift's widest unsigned integer type, trapping on overflow.
And what you see is the "overflow trapping", which is poorly done as, of course, one could have made that trap actually explain what's going on.
Assuming that items never has more than 2^32 elements (a bit more than 4 billion), the following code is safe:
var i = Int(arc4random() % UInt32(items.count))
If it can have more than 2^32 elements, you get another problem anyway as then you need a different random number function that produces random numbers beyond 2^32.
This crash is only possible on 32-bit systems. Int changes between 32-bits (Int32) and 64-bits (Int64) depending on the device architecture (see the docs).
UInt32's max is 2^32 − 1. Int64's max is 2^63 − 1, so Int64 can easily handle UInt32.max. However, Int32's max is 2^31 − 1, which means UInt32 can handle numbers greater than Int32 can, and trying to create an Int32 from a number greater than 2^31-1 will create an overflow.
I confirmed this by trying to compile the line Int(UInt32.max). On the simulators and newer devices, this compiles just fine. But I connected my old iPod Touch (32-bit device) and got this compiler error:
Integer overflows when converted from UInt32 to Int
Xcode won't even compile this line for 32-bit devices, which is likely the crash that is happening at runtime. Many of the other answers in this post are good solutions, so I won't add or copy those. I just felt that this question was missing a detailed explanation of what was going on.
This will automatically create a random Int for you:
var i = random() % items.count
i is of Int type, so no conversion necessary!
You can use
Int(rand())
To prevent getting the same sequence of random numbers each time the app starts, you can seed the generator with srand():
srand(UInt32(NSDate().timeIntervalSinceReferenceDate))
let randomNumber: Int = Int(rand()) % items.count
