I'm trying to check whether a variable I have is equal to NaN in my Ruby on Rails application.
I saw this answer, but it's not really useful because in my code I want to return 0 if the variable is NaN and the value otherwise:
return (average.nan ? 0 : average.round(1))
The problem is that if the number is not a NaN I get this error:
NoMethodError: undefined method `nan?' for 10:Fixnum
I can't check whether the number is a Float instance, because it is one in both cases (probably because I'm calculating an average).
What can I do?
Is it strange only to me that a method to check whether a variable is equal to NaN is available only on NaN objects?
The quickest way is to use this:
under_the_test.to_f.nan? # gives you true/false e.g.:
123.to_f.nan? # => false
(123/0.0).to_f.nan? #=> true
Also note that only Floats have the #nan? method defined on them; that's why I'm using #to_f to convert the result to a Float first.
Tip: if you have an integer calculation that can potentially divide by zero, this will not work:
(123/0).to_f.nan?
Both 123 and 0 are integers, so this raises ZeroDivisionError before #nan? is ever reached. To overcome that issue, the Float::NAN constant can be useful, for example like this:
return Float::NAN if divisor == 0
return x / divisor
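Putting those two tips together, a rough sketch of what the original average example could look like (the method name safe_average and its arguments are just my own illustration):

def safe_average(sum, count)
  # Fall back to NaN when the division is impossible, then map NaN to 0.
  average = count.zero? ? Float::NAN : sum.to_f / count
  average.nan? ? 0 : average.round(1)
end

safe_average(31, 3) # => 10.3
safe_average(0, 0)  # => 0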
I found this answer while duckducking for something that is neither NaN nor Infinity (i.e., a finite number), so I'll add my discovery here for the next googlers.
And as always in Ruby, the answer was just to search the Float documentation for what I expected, and there it was: finite?
n = 1/0.0 #=> Infinity
n.nan? #=> false
n.finite? #=> false
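So if the goal is "give me the number, or 0 when the result is unusable", a small guard built on #finite? could look roughly like this (safe_round is just a name I made up, and I'm assuming the value is already a Float):

def safe_round(x)
  # finite? is false for both NaN and Infinity.
  x.finite? ? x.round(1) : 0
end

safe_round(10.26)     # => 10.3
safe_round(1 / 0.0)   # => 0 (Infinity)
safe_round(0.0 / 0.0) # => 0 (NaN)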
The best way to avoid this kind of problem is to rely on the fact that a NaN isn't even equal to itself:
a = 0.0/0.0
a != a
# => true
This is likely not going to be an issue with any other type.
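Applied to the original question, that self-inequality trick would look something like this (average here is just a stand-in value):

average = 0.0 / 0.0                         # NaN
(average != average) ? 0 : average.round(1) # => 0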
Related
I am looking for a concise way to deal with the following situation: given a variable (in practice, an instance variable in a class, though I don't think this matters here) which is known to be either nil or to hold some Integer. If it is an Integer, the variable should be incremented; if it is nil, it should be initialized with 1.
These are the obvious solutions, taking @counter as the variable to deal with:
# Separate the cases into two statements
@counter ||= 0
@counter += 1
or
# Separate the cases into one conditional
@counter = @counter ? (@counter + 1) : 1
I don't like these solutions because they require repeating the name of the variable. The following attempt failed:
# Does not work
(@counter ||= 0) += 1
This can't be done, because the result of the assignment operators is not an lvalue, though the actual error message is a bit obscure. In this case, you get the error _unexpected tOP_ASGN, expecting end_.
Is there a good idiom to code my problem, or do I have to stick with one of my clumsy solutions?
The question is clear:
A variable is known to hold nil or an integer. If nil the variable is to be set equal to 1, else it is to be set equal to its value plus 1.
What is the best way to implement this in Ruby?
First, two points.
The question states, "If it is nil, it should be initialized with 1.". This contradicts the statement that the variable is known to be nil or an integer, meaning that it has already been initialized, or more accurately, defined. In the case of an instance variable, this distinction is irrelevant as Ruby initializes undefined instance variables to nil when they are referenced as rvalues. It's an important distinction for local variables, however, as an exception is raised when an undefined local variable is referenced as an rvalue.
The comments largely address situations where the variable holds an object other than nil or an integer. They are therefore irrelevant. If the OP wishes to broaden the question to allow the variable to hold objects other than nil or an integer (an array or hash, for example), a separate question should be asked.
What criteria should be used in deciding what code is best? Of the various possibilities that have been mentioned, I do not see important differences in efficiency. Assuming that to be the case, or that relative efficiency is not important in the application, we are left with readability (and by extension, maintainability) as the sole criterion. If x equals nil or an integer, or is an undefined instance variable, perhaps the clearest code is the following:
x = 0 if x.nil?
x += 1
or
x = x.nil? ? 1 : x+1
Ever-so-slightly less readable:
x = (x || 0) + 1
and one step behind that:
x = x.to_i + 1
which requires the reader to know that nil.to_i #=> 0.
The OP may regard these solutions as "clumsy", but I think they are all beautiful.
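For completeness, here is how a couple of those idioms behave when starting from nil (an IRB-style sketch):

x = nil
x = (x || 0) + 1 # => 1
x = (x || 0) + 1 # => 2

y = nil
y = y.to_i + 1   # => 1, since nil.to_i #=> 0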
Can an expression be written that references x only once? I can't think of a way, and one has not been suggested in the comments, so if there is a way (which I doubt) it probably would not meet the test of readability.
Consider now the case where the local variable x may not have been defined. In that case we might write:
x = (defined?(x) ? (x || 0) : 0) + 1
defined? is a Ruby keyword.
I am attempting to create a method which returns true or false when a number is given as its argument, to detect whether the number is part of the Fibonacci sequence. However, I have never encountered a number like this before:
2.073668380220713e+21
Forgive my ignorance, but is there a way to deal with this type of value in Ruby to make it work in the method below?
def is_fibonacci?(num)
  high_square = Math.sqrt((5 * num) * num + 4)
  low_square = Math.sqrt((5 * num) * num - 4)
  # If high_square or low_square are perfect squares, return true, else false.
  high_square == high_square.round || low_square == low_square.round ? true : false
end
puts is_fibonacci?(927372692193078999171) # Trying to return false, but returns true. The sqrt of this number is 2.073668380220713e+21.
puts is_fibonacci?(987) # Returns true as expected.
I believe that because it's such a large number, it's being displayed in scientific notation by Ruby and isn't able to work in your is_fibonacci? method with the basic Math library.
You might want to look into using the BigMath library for Ruby http://ruby-doc.org/stdlib-1.9.3/libdoc/bigdecimal/rdoc/BigMath.html
Edit: As Niel pointed out, it's a Ruby float and has therefore lost precision. BigMath should still do the trick for you.
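If you would rather avoid floating point altogether, here is a sketch that keeps everything in integer arithmetic instead of using BigMath; it relies on Integer.sqrt (Ruby 2.5+), and perfect_square? is just a helper name I picked:

def perfect_square?(n)
  # Integer.sqrt works on arbitrarily large Integers with no precision loss.
  root = Integer.sqrt(n)
  root * root == n
end

def is_fibonacci?(num)
  perfect_square?(5 * num * num + 4) || perfect_square?(5 * num * num - 4)
end

is_fibonacci?(927372692193078999171) # => false
is_fibonacci?(987)                   # => true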
Why on earth does comparing a float value of 1.0 to an integer value of 1 return true?
puts '1.0'.to_i
puts '1.0'.to_i == 1.0 #so 1 == 1.0 is true?
puts 1.0 == 1 #wtf?
Does Ruby only read the first part of the floating point value and then short circuit? Would someone be able to explain, with a link to some documentation, please? I have flipped through the API docs but I don't even know what to look for in this case...
== compares the value. The value of 1.0 is equal to 1 in math, so it's not very surprising. To compare the type as well as the value, you can use eql?:
1.0 == 1
#=> true
1.0.eql? 1
#=> false
In Ruby, == is a method. That means to understand it you need to look at the specific class calling it.
1 == 1.0
The caller is 1, a Fixnum. So you need to look at Fixnum#==.
1.0 == 1
The caller is 1.0, a Float. So you need to look at Float#==.
A surprising result of this is that == is not necessarily symmetric: a == b and b == a could call completely different methods and return completely different results. In this case though, both == methods end up calling the C function rb_integer_float_eq which converts both operands to the same data type before comparing them.
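A contrived sketch of that asymmetry, using two made-up classes whose == methods simply disagree:

class Apples
  def ==(other)
    true  # Apples considers itself equal to everything
  end
end

class Oranges
  def ==(other)
    false # Oranges considers itself equal to nothing
  end
end

a = Apples.new
o = Oranges.new
a == o # => true  (calls Apples#==)
o == a # => false (calls Oranges#==)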
Actually there already is a nice answer "What's the difference between equal?, eql?, ===, and ==?" about equality in Ruby, with references and stuff. There is a surprising amount of ways to compare for equality in Ruby, each with its own purpose.
Since the mathematical meaning is not enough for you, you can compare differently. Like, say, with eql?, which is heavily used in Hashes to determine whether two keys are the same. And it turns out that 1.0 and 1 are different keys! This is what I get in IRB with Ruby 2.1.2:
> 1.0.eql?(1.0)
=> true
> 1.eql?(1)
=> true
> 1.eql?(1.0)
=> false
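One place where this difference really matters is Hash lookup, which uses eql? and hash rather than ==; a quick sketch:

h = { 1 => "integer one", 1.0 => "float one" }
h[1]   # => "integer one"
h[1.0] # => "float one"
h.size # => 2, the two keys are distinct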
Ruby is coercing the operands of == (as needed) to the same type, then performing a comparison of their numeric values. This is normally what you want for numeric comparisons.
Most languages will automatically increase the precision of a variable's type in order to perform operations on it, such as comparing, adding, multiplying, etc. They usually do not decrease the precision, nor unnecessarily increase it. E.g. 1/2 = 0, but 1.0/2.0 = 0.5.
I am trying to understand how range.cover? works and following seems confusing -
("as".."at").cover?("ass") # true and ("as".."at").cover?("ate") # false
This example in isolation is not confusing, as it appears to be evaluated dictionary-style, where "ass" comes before "at", followed by "ate".
("1".."z").cover?(":") # true
This result seems to be based on ASCII values rather than dictionary order, because in a dictionary I'd expect all special characters to precede even digits, and that's where the confusion starts. If what I think is true, then how does cover? decide which comparison method to employ, i.e. ASCII values or a dictionary-based approach?
And how does Range work with arrays? For example:
([1]..[10]).cover?([9,11,335]) # true
This example I expected to be false. But on the face of it, it looks like when dealing with arrays, the boundary values as well as cover?'s argument are converted to strings and a simple dictionary-style comparison yields true. Is that a correct interpretation?
What kinds of objects is Range equipped to handle? I know it can take numbers (except complex ones) and handle strings, and it is mystically able to work with arrays, while booleans, nil and hash values, among others, cause it to raise ArgumentError: bad value for range.
Why does ([1]..[10]).cover?([9,11,335]) return true
Let's take a look at the source. In Ruby 1.9.3 we can see the following definition:
static VALUE
range_cover(VALUE range, VALUE val)
{
    VALUE beg, end;

    beg = RANGE_BEG(range);
    end = RANGE_END(range);
    if (r_le(beg, val)) {
        if (EXCL(range)) {
            if (r_lt(val, end))
                return Qtrue;
        }
        else {
            if (r_le(val, end))
                return Qtrue;
        }
    }
    return Qfalse;
}
If the beginning of the range isn't less than or equal to the given value, cover? returns false. Here "less than or equal to" is determined in terms of the r_le function, which uses the <=> operator for comparison. Let's see how it behaves in the case of arrays:
[1] <=> [9,11,335] # => -1
So apparently [1] is indeed less than [9,11,335]. As a result, we go into the body of the first if. Inside, we check whether the range excludes its end and do a second comparison, once again using the <=> operator.
[10] <=> [9,11,335] # => 1
Therefore [10] is greater than [9,11,335]. The method returns true.
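For reference, Array#<=> compares element by element (with the shorter array coming first when all compared elements are equal), which is exactly where those -1 and 1 come from:

[1]  <=> [9, 11, 335] # => -1 (1 < 9, the remaining elements are never looked at)
[10] <=> [9, 11, 335] # =>  1 (10 > 9)
[9]  <=> [9, 11, 335] # => -1 (equal so far, but [9] is shorter)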
Why do you see ArgumentError: bad value for range
The function responsible for raising this error is range_failed. It's called only when range_check returns nil. When does that happen? When the beginning and the end of the range are not comparable (yes, once again in terms of our dear friend, the <=> operator).
true <=> false # => nil
true and false are not comparable, so the range cannot be created and the ArgumentError is raised.
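A minimal IRB check of the same thing:

(true..false)
# ArgumentError: bad value for range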
On a closing note, Range#cover?'s dependence on <=> is in fact expected and documented behaviour. See RubySpec's specification of cover?.
I'm initializing an instance of a class that tests the equality of two formulas.
The formulas' calculated values are in fact equal:
RubyChem::Chemical.new("SOOS").fw
=> 96.0
RubyChem::Chemical.new("OSSO").fw
=> 96.0
When I created a new class to check the equality of these two instances, I was a bit surprised by the results:
x = RubyChem::BalanceChem.new("SOOS","OSSO")
x.balanced
=> false
y = RubyChem::BalanceChem.new("SOOS","SOOS")
y.balanced
=> true
the RubyChem::BalanceChem initialize method is here:
def initialize(formula1, formula2)
  @balanced = RubyChem::Chemical.new(formula1).fw == RubyChem::Chemical.new(formula2).fw
end
Why doesn't Ruby fetch the fw values for formula1 and formula2 and check the equality of those values? What is the order of operations in Ruby, and what is Ruby doing? I can see I lack an understanding of this issue. How can I make this work? Thank you in advance.
Ruby 1.8 has a bug when converting floats to strings: sometimes the resulting string is not a good representation of the float. Here is an example with 0.56:
0.5600000000000005.to_s == 0.56.to_s #=> true
# should have returned false, since:
0.5600000000000005 == 0.56 #=> false
This explains why two apparently identical results are not actually identical.
You probably want to compare within a certain margin of error, do some rounding before comparing, or use exact types like BigDecimal or Rational.
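For instance, rounding before the comparison (the number of decimal places here is an arbitrary choice) makes the two values above compare as equal:

fw1 = 0.5600000000000005
fw2 = 0.56

fw1 == fw2                   # => false
fw1.round(4) == fw2.round(4) # => true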
You probably do not want to check floating point numbers for equality. Instead, you should compare deltas.
Try this in irb:
x = 1.000001
y = 1.0
x == y
(x-y).abs < 0.00001
So, you find a delta like 0.00001 that you feel would handle any variation in floating point arithmetic, and use it that way. You should never use == on floats.
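If you do this in more than one place, it may be worth wrapping the delta check in a tiny helper; nearly_equal? and its default epsilon below are just my own choices:

def nearly_equal?(a, b, epsilon = 1e-9)
  # Two floats are treated as equal when their difference is below epsilon.
  (a - b).abs < epsilon
end

nearly_equal?(0.1 + 0.2, 0.3) # => true
(0.1 + 0.2) == 0.3            # => false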
This is likely another problem caused by floating point precision.
I can assure you that those values are calculated before the equality is evaluated.
See Ruby's operator precedence.