I spent some time on a fairly simple task about splitting an array, until I found that 2 == 5/2 and -3 == -5/2. To get -2 I have to move the minus sign outside the parentheses: -2 == -(5/2). Why does this happen?
As I understand it, the result rounds down to the smaller integer, but (-2.5).to_i == -2. Very curious.
# https://www.codewars.com/kata/swap-the-head-and-the-tail/train/ruby
# -5/2 != -(5/2)
def swap_head_tail a
  a[-(a.size/2)..-1] + a[a.size/2...-(a.size/2)] + a[0...a.size/2]
end
Why does this happen?
It's not quite clear what kind of answer you are looking for, other than "because that is how it is specified" (bold emphasis mine):
15.2.8.3.4 Integer#/
/(other)
Visibility: public
Behavior:
a) If other is an instance of the class Integer:
1) If the value of other is 0, raise a direct instance of the class ZeroDivisionError.
2) Otherwise, let n be the value of the receiver divided by the value of other. Return an instance of the class Integer whose value is the largest integer smaller than or equal to n.
NOTE The behavior is the same even if the receiver has a negative value. For example, -5 / 2 returns -3.
As you can see, the specification even contains your exact example.
It is also specified in the Ruby/Spec:
it "supports dividing negative numbers" do
  (-1 / 10).should == -1
end
Compare this with the specification for Float#to_i (bold emphasis mine):
15.2.9.3.14 Float#to_i
to_i
Visibility: public
Behavior: The method returns an instance of the class Integer whose value is the integer part of the receiver.
And in the Ruby/Spec:
it "returns self truncated to an Integer" do
  899.2.send(@method).should eql(899)
  -1.122256e-45.send(@method).should eql(0)
  5_213_451.9201.send(@method).should eql(5213451)
  1.233450999123389e+12.send(@method).should eql(1233450999123)
  -9223372036854775808.1.send(@method).should eql(-9223372036854775808)
  9223372036854775808.1.send(@method).should eql(9223372036854775808)
end
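In short, Integer#/ floors (rounds toward negative infinity) while Float#to_i truncates (rounds toward zero), and for negative operands those give different results. A quick IRB check makes the contrast concrete:

-5 / 2              #=> -3   (floored: -2.5 rounds down to -3)
-(5 / 2)            #=> -2   (5 / 2 is computed first, giving 2, then negated)
(-5).divmod(2)      #=> [-3, 1]   (floored quotient, remainder with the divisor's sign)

(-5).fdiv(2)        #=> -2.5
(-2.5).to_i         #=> -2   (truncated toward zero)
(-5).fdiv(2).to_i   #=> -2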
Related
I am looking for a concise way to deal with the following situation: Given a variable (in practice, an instance variable in a class, though I don't think this matters here), which is known to either be nil or hold some Integer. If it is an Integer, the variable should be incremented. If it is nil, it should be initialized with 1.
These are obvious solutions to this, taking @counter as the variable to deal with:
# Separate the cases into two statements
@counter ||= 0
@counter += 1
or
# Separate the cases into one conditional
@counter = @counter ? (@counter + 1) : 1
I don't like these solutions because they require repeating the name of the variable. The following attempt failed:
# Does not work
(@counter ||= 0) += 1
This can't be done, because the result of the assignment operators is not an lvalue, though the actual error message is a bit obscure. In this case, you get the error "unexpected tOP_ASGN, expecting end".
Is there a good idiom to code my problem, or do I have to stick with one of my clumsy solutions?
The question is clear:
A variable is known to hold nil or an integer. If nil the variable is to be set equal to 1, else it is to be set equal to its value plus 1.
What is the best way to implement this in Ruby?
First, two points.
The question states, "If it is nil, it should be initialized with 1.". This contradicts the statement that the variable is known to be nil or an integer, meaning that it has already been initialized, or more accurately, defined. In the case of an instance variable, this distinction is irrelevant as Ruby initializes undefined instance variables to nil when they are referenced as rvalues. It's an important distinction for local variables, however, as an exception is raised when an undefined local variable is referenced as an rvalue.
The comments largely address situations where the variable holds an object other than nil or an integer. They are therefore irrelevant. If the OP wishes to broaden the question to allow the variable to hold objects other than nil or an integer (an array or hash, for example), a separate question should be asked.
What criteria should be used in deciding what code is best? Of the various possibilities that have been mentioned, I do not see important differences in efficiency. Assuming that to be the case, or that relative efficiency is not important in the application, we are left with readability (and by extension, maintainability) as the sole criterion. If x equals nil or an integer, or is an undefined instance variable, perhaps the clearest code is the following:
x = 0 if x.nil?
x += 1
or
x = x.nil? ? 1 : x+1
Ever-so-slightly less readable:
x = (x || 0) + 1
and one step behind that:
x = x.to_i + 1
which requires the reader to know that nil.to_i #=> 0.
The OP may regard these solutions as "clumsy", but I think they are all beautiful.
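For instance, the last variant reads quite naturally when the counter is an instance variable. A minimal sketch, using a made-up PageViews class purely for illustration:

class PageViews
  def hit
    @count = @count.to_i + 1   # nil.to_i #=> 0, so the first hit yields 1
  end
end

views = PageViews.new
views.hit   #=> 1
views.hit   #=> 2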
Can an expression be written that references x but once? I can't think of a way and one has not been suggested in the comments, so if there is a way (doubtful, I believe) it probably would not meet the test for readability.
Consider now the case where the local variable x may not have been defined. In that case we might write:
x = (defined?(x) ? (x || 0) : 0) + 1
defined? is a Ruby keyword.
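To illustrate the distinction made above (with made-up names, at the top level of an IRB session): an undefined instance variable simply reads as nil, whereas an undefined local variable raises when referenced as an rvalue.

defined?(@some_ivar)   #=> nil
@some_ivar             #=> nil

defined?(some_local)   #=> nil
some_local             # NameError: undefined local variable or method `some_local'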
I am attempting to create a method which returns true or false when a number is given as its argument, to detect whether the number is in the Fibonacci sequence, but I have never encountered a number like this:
2.073668380220713e+21
Forgive my ignorance, but is there a way to deal with this type of value in Ruby to make it work in the method below?
def is_fibonacci?(num)
  high_square = Math.sqrt((5 * num) * num + 4)
  low_square = Math.sqrt((5 * num) * num - 4)
  # If high_square or low_square are perfect squares, return true, else false.
  high_square == high_square.round || low_square == low_square.round ? true : false
end
puts is_fibonacci?(927372692193078999171) # Trying to return false, but returns true. The sqrt of this number is 2.073668380220713e+21.
puts is_fibonacci?(987) # Returns true as expected.
I believe that because it's such a large number, Ruby represents it as a Float (displayed in scientific notation), so it doesn't work in your is_fibonacci? method with the basic Math library.
You might want to look into using the BigMath library for Ruby: http://ruby-doc.org/stdlib-1.9.3/libdoc/bigdecimal/rdoc/BigMath.html
Edit: As Niel pointed out, it's a Ruby Float and has therefore lost precision. BigMath should still do the trick for you.
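Another option is to avoid Float entirely and do the perfect-square test with exact integer arithmetic. A minimal sketch, assuming Ruby 2.5+ for Integer.sqrt (it uses the same 5n² ± 4 test as the original method):

def perfect_square?(n)
  return false if n < 0
  root = Integer.sqrt(n)   # exact integer square root, no precision loss
  root * root == n
end

def is_fibonacci?(num)
  perfect_square?(5 * num * num + 4) || perfect_square?(5 * num * num - 4)
end

is_fibonacci?(927372692193078999171)   #=> false
is_fibonacci?(987)                     #=> true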
Why on earth does comparing a float value of 1.0, to an integer value of 1, return true?
puts '1.0'.to_i
puts '1.0'.to_i == 1.0 #so 1 == 1.0 is true?
puts 1.0 == 1 #wtf?
Does Ruby only read the first part of the floating point value and then short circuit? Would someone be able to explain, with a link to some documentation please? I have flipped through the API but I don't even know what to look for in this case...
== compares the value; the value of 1.0 is equal to 1 mathematically, so it's not all that surprising. To compare the type as well as the value, you can use eql?:
1.0 == 1
#=> true
1.0.eql? 1
#=> false
In Ruby, == is a method. That means to understand it you need to look at the specific class calling it.
1 == 1.0
The caller is 1, a Fixnum. So you need to look at Fixnum#==.
1.0 == 1
The caller is 1.0, a Float. So you need to look at Float#==.
A surprising result of this is that == is not necessarily symmetric: a == b and b == a could call completely different methods and return completely different results. In this case though, both == methods end up calling the C function rb_integer_float_eq which converts both operands to the same data type before comparing them.
Actually there already is a nice answer "What's the difference between equal?, eql?, ===, and ==?" about equality in Ruby, with references and stuff. There is a surprising amount of ways to compare for equality in Ruby, each with its own purpose.
Since the mathematical meaning is not enough for you, you can compare differently. Like, say, eql?, which is heavily used in Hashes to determine whether two keys are the same. And it turns out that 1.0 and 1 are different keys! This is what I get in IRB on Ruby 2.1.2:
> 1.0.eql?(1.0)
=> true
> 1.eql?(1)
=> true
> 1.eql?(1.0)
=> false
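That difference is exactly what Hash relies on (keys are matched with eql? and hash), so 1 and 1.0 end up as two distinct keys:

h = {}
h[1]   = :integer_key
h[1.0] = :float_key
h        #=> {1=>:integer_key, 1.0=>:float_key}
h.size   #=> 2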
Ruby is coercing the operands of == (as needed) to the same type, then performing a comparison of their numeric values. This is normally what you want for numeric comparisons.
Most languages will automatically increase the precision of a variable's type in order to perform operations on them such as compare, add, multiply, etc. They "usually" do not decrease the precision, nor unnecessarily increase it. E.g. 1/2 = 0, but 1.0/2.0 = 0.5.
I am trying to make an RPN calculator. I have to implement my own .to_i and .to_f methods. I cannot use the send, eval, Float(str) or String(str) methods. The assignment is done, but I still want to know how to implement it.
The input: atof("255.25"), a String.
The output: 255.25, a Float.
Here is my code for atoi
ASCII_NUM_START = 48 # start of ascii code for 0
def ascii_to_i(int_as_str)
  array_ascii = int_as_str.bytes
  converted_arr = array_ascii.map { |ascii| ascii - ASCII_NUM_START }
  converted_arr.inject { |sum, n| sum * 10 + n }
end
def ascii_to_f(float_as_str)
???
end
I got it working by doing the following (and utilizing your ascii_to_i function).
ASCII_NUM_START = 48 # start of ascii code for 0
def ascii_to_i(int_as_str)
  array_ascii = int_as_str.bytes
  converted_arr = array_ascii.map { |ascii| ascii - ASCII_NUM_START }
  converted_arr.inject { |sum, n| sum * 10 + n }
end
def ascii_to_f(float_as_str)
  int_split = float_as_str.split(".")
  results = []
  int_split.each { |val| results << ascii_to_i(val) }
  results[0] + (results[1] / (10.0 ** int_split.last.length))
end
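For example, with both methods defined as above:

ascii_to_i("255")      #=> 255
ascii_to_f("255.25")   #=> 255.25

Note that this assumes the input always contains a decimal point; a whole-number string such as "255" would need a separate branch in ascii_to_f.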
I can see you have made a reasonable effort at ascii_to_i.
The code for ascii_to_f can be similar; in addition, you will need to divide the result by 10 raised to the power of the number of decimal places you have processed.
Probably the easiest adaptation is:
find the position of the . character (ASCII code 46) in the String, save that as a variable
remove the . character (ASCII code 46) from your array of bytes
calculate the Integer value from the array of bytes as before
divide by 10.0 (must be a Float) to the power of (the length of the remaining array minus the position you found the . in).
I am not giving code, because it is an assignment. See if you can figure out the correct syntax, looking at the documentation for the Array class for finding the position of a specific value, for deleting a specific value, and for getting the length of the array.
I am trying to understand how range.cover? works and following seems confusing -
("as".."at").cover?("ass") # true and ("as".."at").cover?("ate") # false
This example in isolation is not confusing as it appears to be evaluated dictionary style where ass comes before at followed by ate.
("1".."z").cover?(":") # true
This result seems to be based on ASCII values rather than dictionary order, because in a dictionary I'd expect all special characters to precede even digits, and that is where the confusion starts. If what I think is true, then how does cover? decide which comparison method to employ, i.e. ASCII values or a dictionary-based approach?
And how does range work with arrays. For example -
([1]..[10]).cover?([9,11,335]) # true
I expected this example to be false. But on the face of it, it looks like when dealing with arrays, the boundary values as well as cover?'s argument are converted to strings and a simple dictionary-style comparison yields true. Is that a correct interpretation?
What kind of objects is Range equipped to handle? I know it can take numbers (except complex ones) and handle strings, and it is able to mystically work with arrays, while boolean, nil and hash values, among others, cause it to raise ArgumentError: bad value for range.
Why does ([1]..[10]).cover?([9,11,335]) return true
Let's take a look at the source. In Ruby 1.9.3 we can see the following definition.
static VALUE
range_cover(VALUE range, VALUE val)
{
    VALUE beg, end;
    beg = RANGE_BEG(range);
    end = RANGE_END(range);
    if (r_le(beg, val)) {
        if (EXCL(range)) {
            if (r_lt(val, end))
                return Qtrue;
        }
        else {
            if (r_le(val, end))
                return Qtrue;
        }
    }
    return Qfalse;
}
If the beginning of the range isn't less than or equal to the given value, cover? returns false. Here "less than or equal to" is determined in terms of the r_le function, which uses the <=> operator for comparison. Let's see how it behaves in the case of arrays:
[1] <=> [9,11,335] # => -1
So apparently [1] is indeed less than [9,11,335]. As a result, we go into the body of the first if. Inside, we check whether the range excludes its end and do a second comparison, once again using the <=> operator:
[10] <=> [9,11,335] # => 1
Therefore [10] is greater than [9,11,335]; since the range is inclusive, r_le(val, end) holds and the method returns true.
Why do you see ArgumentError: bad value for range
The function responsible for raising this error is range_failed. It's called only when range_check returns nil. When does that happen? When the beginning and the end of the range are not comparable (yes, once again in terms of our dear friend, the <=> operator).
true <=> false # => nil
true and false are not comparable, so the range cannot be created and an ArgumentError is raised.
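For example, trying to construct such a range fails immediately:

(true..false)
# ArgumentError: bad value for range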
On a closing note, Range#cover?'s dependence on <=> is in fact expected and documented behaviour. See RubySpec's specification of cover?.
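For illustration, here is a rough Ruby rendering of the C logic above. It is not the actual implementation, just the same comparisons spelled out with <=> (and it assumes the operands are comparable, i.e. <=> does not return nil):

def cover_like(range, value)
  # r_le(beg, val): the beginning must be <= the value
  return false unless (range.begin <=> value) <= 0
  if range.exclude_end?
    (value <=> range.end) < 0    # r_lt(val, end)
  else
    (value <=> range.end) <= 0   # r_le(val, end)
  end
end

cover_like([1]..[10], [9, 11, 335])   #=> true
cover_like("1".."z", ":")             #=> true
cover_like("as".."at", "ate")         #=> false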