I get input string "400.00" and need to display 400.00.
Expected float_value :400.00
I used to_f to do the task, since to_i will return just 400. The code is:
x="400.00"
float_value=x.to_f
But in this case, I am getting the output as 400.0 which is not acceptable for my case.
current float_value :400.0
Both are equal and have no difference, but it's not for any calculation purpose; it's for some other display purpose.
Use sprintf for formatting:
sprintf("%.2f", 400)
#=> "400.00"
I'm trying to find the decimal value from a percentage that a user inputs.
For example, if a user inputs "15", I will need to do a calculation of 0.15 * number.
I've tried using .to_f, but it returns 15.0:
15.to_f
#=> 15.0
I also tried to add 0. to the beginning of the percentage, but it just returns 0:
15.to_s.rjust(4, "0.").to_i
#=> 0
Divide by 100.0
The easiest way to do what you're trying to do is to divide your input value by a Float (keeping in mind the inherent inaccuracy of floating point values). For example:
percentage = 15
percentage / 100.0
#=> 0.15
One benefit of this approach (among others) is that it can handle fractional percentages just as easily. Consider:
percentage = 15.6
percentage / 100.0
#=> 0.156
If floating point precision isn't sufficient for your use case, then you should consider using Rational or BigDecimal numbers instead of a Float. Your mileage will very much depend on your semantic intent and accuracy requirements.
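For illustration (a quick sketch of those alternatives, not part of the original answer):
require 'bigdecimal'

Rational(15, 100)       #=> (3/20)
BigDecimal("15") / 100  #=> 0.15e0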
Caveats
Make sure you have ahold of a valid Integer in the first place. While others might steer you towards String#to_i, a more robust validation is to use Kernel#Integer so that an exception will be raised if the value can't be coerced into a valid Integer. For example:
print "Enter integer: "
percentage = Integer gets
If you enter 15\n then:
percentage.class
#=> Integer
If you enter something that can't be coerced to an Integer, like foo\n, then:
ArgumentError (invalid value for Integer(): "foo\n")
Using String#to_i is much more permissive, and can return 0 when you aren't expecting it, such as when called on nil, an empty string, or alphanumeric values that don't start with an integer. It has other interesting edge cases as well, so it's not always the best option for validating input.
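A few of those edge cases, as a sketch of standard String#to_i and NilClass#to_i behaviour:
nil.to_i      #=> 0
"".to_i       #=> 0
"foo".to_i    #=> 0
"12abc".to_i  #=> 12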
I'm trying to find the amount from a percentage that a user inputs
If you retrieve the input via gets, you typically convert it to a numeric value first, e.g.
percentage = gets.to_i
#=> 15
Ruby is not aware that this 15 is a percentage. And since there's no Percentage class, you have to convert it into one of the existing numeric classes.
15% is equal to the fraction 15/100, the ratio 15:100, or the decimal number 0.15.
If you want the number as a (maybe inexact) Float, you can divide it by 100 via fdiv:
15.fdiv(100)
#=> 0.15
If you prefer a Rational, you can use quo (though it might also return an Integer):
15.quo(100)
#=> (3/20)
Or maybe BigDecimal for an arbitrary-precision decimal number:
require 'bigdecimal'
BigDecimal(15) / 100
#=> 0.15e0
BigDecimal also accepts strings, so you could pass the input without prior conversion:
input = gets
BigDecimal(input) / 100
#=> 0.15e0
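Whichever numeric class you pick, applying the percentage afterwards looks much the same; in this sketch, price is just an illustrative variable, not something from the question:
price = 200
price * 15.fdiv(100)            #=> 30.0
price * 15.quo(100)             #=> (30/1)
price * (BigDecimal(15) / 100)  #=> 0.3e2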
I need to make a number out of the string. To do that I use the well-known maneuver, looking something like this:
Float(string_var) rescue nil
This works nicely; however, I do have a tiny little problem. If a string is "2.50", the variable I get is 2.5. Is it even possible to create a Float object with an 'unnecessary' 0 digit at the end? Can I literally translate "2.50" into 2.50?
In short, the answer is no, given the question, as any Float, when examined, will use Float's to_s function, eliciting an answer without trailing zeroes.
Float will always give you a numeric value that can be interpreted any way you wish, though. In your example, you will get a float value (given a string that is a parsable float). What you are asking then, is how to display that value with trailing zeroes. To do that, you will be turning the float value back into a string.
The easiest way to accomplish that is to use the format string given by one of the other respondents, namely:
string_var = "2.50"
float_value = Float(string_var) rescue nil # 2.5
with_trailing_zeroes = "%0.2f" % float_value # '2.50'
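The same formatting can also be written with sprintf, which is equivalent to the String#% call above:
sprintf("%0.2f", float_value) # '2.50'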
I'm trying to check if a variable I have is equal to NaN in my Ruby on Rails application.
I saw this answer, but it's not really useful because in my code I want to return 0 if the variable is NaN and the value otherwise:
return (average.nan? ? 0 : average.round(1))
The problem is that if the number is not a NaN I get this error:
NoMethodError: undefined method `nan?' for 10:Fixnum
I can't check if the number is a Float instance because it is in both cases (probably, I'm calculating an average).
What can I do?
Is it strange only to me that a method to check if a variable is equal to NaN is available only on NaN objects?
The quickest way is to use this:
under_the_test.to_f.nan? # gives you true/false e.g.:
123.to_f.nan? # => false
(123/0.0).to_f.nan? #=> true
Also note that only Floats have the #nan? method defined on them; that's the reason why I'm using #to_f to convert the result to a Float first.
Tip: if you have an integer calculation that can potentially divide by zero, this will not work:
(123/0).to_f.nan?
because both 123 and 0 are Integers and that will raise ZeroDivisionError. To overcome that issue, the Float::NAN constant can be useful, for example like this:
return Float::NAN if divisor == 0
return x / divisor
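Put together for the case in the question, this might look like the following sketch (average stands in for whatever value you computed):
average = 10
average.to_f.nan? ? 0 : average.round(1)  #=> 10

average = 0.0 / 0.0
average.to_f.nan? ? 0 : average.round(1)  #=> 0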
I found this answer while duckducking for something that is neither NaN nor Infinity (e.g., a finite number). Hence I'll add my discovery here for next googlers.
And as always in Ruby, the answer was just to type my expectation while searching the Float documentation, and find finite?
n = 1/0.0 #=> Infinity
n.nan? #=> false
n.finite? #=> false
The best way to avoid this kind of problem is to rely on the fact that a NaN isn't even equal to itself:
a = 0.0/0.0
a != a
#=> true
This is likely not going to be an issue with any other type.
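As a sketch of that idea (the helper name nan_like? is only for illustration):
def nan_like?(value)
  value != value
end

nan_like?(0.0 / 0.0)  #=> true
nan_like?(10)         #=> false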
I am trying to understand how range.cover? works and following seems confusing -
("as".."at").cover?("ass") # true and ("as".."at").cover?("ate") # false
This example in isolation is not confusing, as it appears to be evaluated dictionary-style, where ass comes before at, followed by ate.
("1".."z").cover?(":") # true
This truth seems to be based on ASCII values rather than dictionary style, because in a dictionary I'd expect all special characters to precede even digits and the confusion starts here. If what I think is true then how does cover? decide which comparison method to employ i.e. use ASCII values or dictionary based approach.
And how does a range work with arrays? For example:
([1]..[10]).cover?([9,11,335]) #=> true
This example I expected to be false. But on the face of it, it looks like when dealing with arrays, the boundary values as well as cover?'s argument are converted to strings and a simple dictionary-style comparison yields true. Is that a correct interpretation?
What kind of objects is Range equipped to handle? I know it can take numbers (except complex ones) and handle strings, and it is able to mystically work with arrays, while boolean, nil, and hash values among others cause it to raise ArgumentError: bad value for range.
Why does ([1]..[10]).cover?([9,11,335]) return true
Let's take a look at the source. In Ruby 1.9.3 we can see the following definition.
static VALUE
range_cover(VALUE range, VALUE val)
{
    VALUE beg, end;
    beg = RANGE_BEG(range);
    end = RANGE_END(range);
    if (r_le(beg, val)) {
        if (EXCL(range)) {
            if (r_lt(val, end))
                return Qtrue;
        }
        else {
            if (r_le(val, end))
                return Qtrue;
        }
    }
    return Qfalse;
}
If the beginning of the range isn't less than or equal to the given value, cover? returns false. Here "less than or equal to" is determined in terms of the r_le function, which uses the <=> operator for comparison. Let's see how it behaves in the case of arrays:
[1] <=> [9,11,335] # => -1
So apparently [1] is indeed less than [9,11,335]. As a result we go into the body of the first if. Inside, we check whether the range excludes its end and do a second comparison, once again using the <=> operator.
[10] <=> [9,11,335] # => 1
Therefore [10] is greater than [9,11,335]. The method returns true.
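For context (not part of the original answer), Array#<=> compares element by element and the first differing pair decides the result, which is why the lengths of the arrays above barely matter:
[1]  <=> [9, 11, 335]  #=> -1  (1 < 9; the remaining elements are never looked at)
[10] <=> [9, 11, 335]  #=> 1   (10 > 9)
[9]  <=> [9, 11, 335]  #=> -1  (first elements tie, and [9] is a shorter prefix)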
Why do you see ArgumentError: bad value for range
The function responsible for raising this error is range_failed. It's called only when range_check returns a nil. When does it happen? When the beginning and the end of the range are uncomparable (yes, once again in terms of our dear friend, the <=> operator).
true <=> false # => nil
true and false are uncomparable. The range cannot be created and the ArgumentError is raised.
On a closing note, Range#cover?'s dependence on <=> is in fact expected and documented behaviour. See RubySpec's specification of cover?.
I'm initializing an instance of a class that tests the equality of two formulas.
The formula's calculated values are in fact equal:
RubyChem::Chemical.new("SOOS").fw
=> 96.0
RubyChem::Chemical.new("OSSO").fw
=> 96.0
When I created a new class to check the equality of these two instances, I was a bit surprised by the results:
x = RubyChem::BalanceChem.new("SOOS","OSSO")
x.balanced
=> false
y = RubyChem::BalanceChem.new("SOOS","SOOS")
y.balanced
=> true
The RubyChem::BalanceChem initialize method is here:
def initialize(formula1, formula2)
  @balanced = RubyChem::Chemical.new(formula1).fw == RubyChem::Chemical.new(formula2).fw
end
Why doesn't Ruby fetch the fw values for formula1 and formula2 and check the equality of those values? What is the order of operations in Ruby, and what is Ruby doing? I can see I lack an understanding of this issue. How can I make this work? Thank you in advance.
Ruby 1.8 has a bug when converting floats to strings. Sometimes the resulting string is not a good representation of the float. Here is an example with 0.56:
0.5600000000000005.to_s == 0.56.to_s #=> true
# should have returned false, since:
0.5600000000000005 == 0.56 #=> false
This explains why two apparently identical results are not actually identical.
You probably want to compare within a certain margin of error, do some rounding before comparing, or use exact types like BigDecimal or Rational.
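For instance, a sketch of the rounding approach (the precision to round to depends on your data):
a = 0.5600000000000005
b = 0.56
a == b                      #=> false
a.round(10) == b.round(10)  #=> true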
You probably do not want to check floating point numbers for equality. Instead, you should compare deltas.
Try this in irb:
x = 1.000001
y = 1.0
x == y
(x-y).abs < 0.00001
So, you find a delta like 0.00001 that you feel would handle any variation in floating point arithmetic, and use it that way. You should never use == on floats.
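As a minimal sketch (the method name approximately_equal? is only for illustration, not from any library):
def approximately_equal?(a, b, delta = 0.00001)
  (a - b).abs < delta
end

approximately_equal?(0.1 + 0.2, 0.3)  #=> true
(0.1 + 0.2) == 0.3                    #=> false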
This is likely another problem caused by floating point precision.
I can assure you that those values are calculated before the equality is evaluated.
See Ruby's operator precedence.