How do I implement an atof (ASCII to float) method in Ruby?

I am trying to make an RPN calculator. I have to implement my own .to_i and .to_f methods, and I cannot use the send, eval, Float(str) or String(str) methods. The assignment is done, but I still want to know how to implement it.
Input: atof("255.25"), where the argument is a String
Output: 255.25 as a Float
Here is my code for atoi:
ASCII_NUM_START = 48 # start of ASCII code for '0'

def ascii_to_i(int_as_str)
  array_ascii = int_as_str.bytes
  converted_arr = array_ascii.map { |ascii| ascii - ASCII_NUM_START }
  converted_arr.inject { |sum, n| sum * 10 + n }
end

def ascii_to_f(float_as_str)
  ???
end
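For reference, ascii_to_i builds the value digit by digit via the inject:
ascii_to_i("255")   # bytes [50, 53, 53] -> digits [2, 5, 5] -> ((2 * 10) + 5) * 10 + 5 = 255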

I got it working doing the following (and utilizing your ascii_to_i function).
ASCII_NUM_START = 48 # start of ASCII code for '0'

def ascii_to_i(int_as_str)
  array_ascii = int_as_str.bytes
  converted_arr = array_ascii.map { |ascii| ascii - ASCII_NUM_START }
  converted_arr.inject { |sum, n| sum * 10 + n }
end

def ascii_to_f(float_as_str)
  int_split = float_as_str.split(".")
  results = []
  int_split.each { |val| results << ascii_to_i(val) }
  results[0] + (results[1] / (10.0 ** int_split.last.length))
end

I can see you have made a reasonable effort at ascii_to_i.
The code for ascii_to_f can be similar; in addition you will need to divide the result by a power of ten based on the number of decimal places you have processed.
Probably the easiest adaptation is:
1. Find the position of the . character (ASCII code 46) in the String and save it in a variable.
2. Remove the . character (ASCII code 46) from your array of bytes.
3. Calculate the Integer value from the array of bytes as before.
4. Divide by 10.0 (must be a Float) raised to the power of (the length of the remaining array minus the position where you found the .).
I am not giving code, because it is an assignment. See if you can figure out the correct syntax, looking at documentation for the Array class for finding the position of a specific value, for deleting a specific value, and for getting length of the array.

Related

Ruby language curious integer arithmetic: (-5/2) != -(5/2)

I spent some time on a quite simple task about splitting an array, until I found that 2 == 5/2 but -3 == -5/2. To get -2 I need to pull the minus out of the parentheses: -2 == -(5/2). Why does this happen?
As I understand it, the result is rounded down to the next smaller integer, yet (-2.5).to_i == -2. Very curious.
# https://www.codewars.com/kata/swap-the-head-and-the-tail/train/ruby
# -5/2 != -(5/2)
def swap_head_tail(a)
  a[-(a.size / 2)..-1] + a[a.size / 2...-(a.size / 2)] + a[0...a.size / 2]
end
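For example, with a five-element array:
a = [1, 2, 3, 4, 5]
a.size / 2        # => 2
-5 / 2            # => -3  (floored, not truncated)
-(a.size / 2)     # => -2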
Why does this happen?
It's not quite clear what kind of answer you are looking for other than "because that is how it is specified" (bold emphasis mine):
15.2.8.3.4 Integer#/
/(other)
Visibility: public
Behavior:
a) If other is an instance of the class Integer:
1) If the value of other is 0, raise a direct instance of the class ZeroDivisionError.
2) Otherwise, let n be the value of the receiver divided by the value of other. Return an instance of the class Integer whose value is the largest integer smaller than or equal to n.
NOTE The behavior is the same even if the receiver has a negative value. For example, -5 / 2 returns -3.
As you can see, the specification even contains your exact example.
It is also specified in the Ruby/Spec:
it "supports dividing negative numbers" do
(-1 / 10).should == -1
end
Compare this with the specification for Float#to_i (bold emphasis mine):
15.2.9.3.14 Float#to_i
to_i
Visibility: public
Behavior: The method returns an instance of the class Integer whose value is the integer part of the receiver.
And in the Ruby/Spec:
it "returns self truncated to an Integer" do
899.2.send(#method).should eql(899)
-1.122256e-45.send(#method).should eql(0)
5_213_451.9201.send(#method).should eql(5213451)
1.233450999123389e+12.send(#method).should eql(1233450999123)
-9223372036854775808.1.send(#method).should eql(-9223372036854775808)
9223372036854775808.1.send(#method).should eql(9223372036854775808)
end
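Put side by side, the floor-versus-truncate difference is easy to verify in irb:
-5 / 2          # => -3   (Integer#/ floors toward negative infinity)
-5.fdiv(2)      # => -2.5
(-2.5).to_i     # => -2   (Float#to_i truncates toward zero)
(-2.5).floor    # => -3
-(5 / 2)        # => -2   (5 / 2 is evaluated first, then negated)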

How do I convert a spreadsheet "letternamed" column coordinate to an integer?

In spreadsheets I have cells named like "F14", "BE5" or "ALL1". I have the first part, the column coordinate, in a variable and I want to convert it to a 0-based integer column index.
How do I do it, preferably in an elegant way, in Ruby?
I can do it using a brute-force method: I can imagine looping through all the letters, converting them to ASCII and adding to a result, but I feel there should be something more elegant/straightforward.
Edit: To simplify, I am only talking about the column coordinate (the letters). So in the first case (F14) I have "F" as the input and expect the result to be 5; in the second case I have "BE" as input and expect to get 56; for "ALL" I want to get 999.
Not sure if this is any clearer than the code you already have, but it does have the advantage of handling an arbitrary number of letters:
class String
  def upcase_letters
    self.upcase.split(//)
  end
end

module Enumerable
  def reverse_with_index
    self.to_a.reverse.map.with_index.to_a
  end

  def sum
    self.reduce(0, :+)
  end
end

def indexFromColumnName(column_str)
  start = 'A'.ord - 1
  column_str.upcase_letters.map do |c|
    c.ord - start
  end.reverse_with_index.map do |value, digit_position|
    value * (26 ** digit_position)
  end.sum - 1
end
I've added some methods to String and Enumerable because I thought it made the code more readable, but you could inline these or define them elsewhere if you don't like that sort of thing.
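With the example values from the question:
indexFromColumnName("F")     # => 5
indexFromColumnName("BE")    # => 56
indexFromColumnName("ALL")   # => 999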
We can use modulo and the length of the input. The last character is used to calculate the exact "position", and the remaining characters count how many "laps" we did through the alphabet, e.g.
def column_to_integer(column_name)
  letters = /[A-Z]+/.match(column_name).to_s.split("")
  laps = (letters.length - 1) * 26
  position = ((letters.last.ord - 'A'.ord) % 26)
  laps + position
end
Using the decimal representation (ord) and the math tricks seems like a neat solution at first, but it has some pain points in the implementation: we have magic numbers (26) and constants ('A'.ord) all over. One solution is to give our code better knowledge of our domain, i.e. the alphabet. In that case, we can replace the modulo with the position of the last character in the alphabet (which is already sorted in a zero-based array), e.g.
ALPHABET = ('A'..'Z').to_a

def column_to_integer(column_name)
  letters = /[A-Z]+/.match(column_name).to_s.split("")
  laps = (letters.length - 1) * ALPHABET.size
  position = ALPHABET.index(letters.last)
  laps + position
end
The final result:
> column_to_integer('F5')
=> 5
> column_to_integer('AK14')
=> 36
HTH. Best!
I have found a particularly neat way to do this conversion:
def index_from_column_name(colname)
  s = colname.size
  (colname.to_i(36) - (36**s - 1).div(3.5)).to_s(36).to_i(26) + (26**s - 1) / 25 - 1
end
Explanation of why it works
(warning: spoiler ;) ahead). Basically we are doing this
(colname.to_i(36) - ('A' * colname.size).to_i(36)).to_s(36).to_i(26) + ('1' * colname.size).to_i(26) - 1
which in plain English means that we are interpreting colname as a base-26 number. Before we can do that, we need to interpret all A's as 1, B's as 2, etc. If only that were needed, it would be even simpler, namely
(colname.to_i(36) - ('9' * colname.size).to_i(36)).to_s(36).to_i(26) - 1
Unfortunately there are Z characters present, which would need to be interpreted as 10 (base 26), so we need a little trick: we shift every digit one more than needed and then add it back at the end (to every digit in the original colname).
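For example, tracing "BE" through the formula (s = 2):
"BE".to_i(36)           # => 410
(36**2 - 1).div(3.5)    # => 370  (same as "AA".to_i(36))
(410 - 370).to_s(36)    # => "14"
"14".to_i(26)           # => 30
(26**2 - 1) / 25        # => 27   (same as "11".to_i(26))
30 + 27 - 1             # => 56

index_from_column_name("BE")   # => 56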

String to BigNum and back again (in Ruby) to allow circular shift

As a personal challenge I'm trying to implement the SIMON block cipher in Ruby. I'm running into some issues finding the best way to work with the data. The full code related to this question is located at: https://github.com/Rami114/Personal/blob/master/Simon/Simon.rb
SIMON requires XOR, shift and circular shift operations, the last of which is forcing me to work with BigNums so I can perform the left circular shift with math rather than a more complex/slower double loop on byte arrays.
Is there a better way to convert a string to a BigNum and back again?
String -> BigNum (where N is 64 and pt is a string of plaintext)
pt = pt.chars.each_slice(N/8).map {|x| x.join.unpack('b*')[0].to_i(2)}.to_a
So I break the string into individual characters, slice them into groups of N/8 characters (N being the word size in SIMON) and unpack each group into a BigNum. That appears to work fine and I can convert it back.
Now my SIMON code is currently broken, but that's more the math I think/hope and not the code. The conversion back is (where ct is an array of bignums representing the ciphertext):
ct.map { |x| [x.to_s(2).rjust(128,'0')].pack('b*') }.join
I seem to have to right-justify pad the string, as bignums are of undefined width, so I have no leading 0s. Unfortunately pack requires a defined width to produce sensible output.
Is this a valid method of conversion? Is there a better way? I'm not sure on either count and hoping someone here can help out.
Edit: For @torimus, the circular shift implementation I'm using (from the link above):
def self.lcs(bytes, block_size, shift)
  ((bytes << shift) | (bytes >> (block_size - shift))) & ((1 << block_size) - 1)
end
If you would be equally happy with unpack('B*') with msb first binary numbers (which you could well be if all your processing is circular), then you could also use .unpack('Q>') instead of .unpack('B*')[0].to_i(2) for generating pt:
pt = "qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890!#"
# Your version (with 'B' == msb first) for comparison:
pt_nums = pt.chars.each_slice(N/8).map {|x| x.join.unpack('B*')[0].to_i(2)}.to_a
=> [8176115190769218921, 8030025283835160424, 7668342063789995618, 7957105551900562521,
6145530372635706438, 5136437062280042563, 6215616529169527604, 3834312847369707840]
# unpack to 64-bit unsigned integers directly
pt_nums = pt.unpack('Q>8')
=> [8176115190769218921, 8030025283835160424, 7668342063789995618, 7957105551900562521,
6145530372635706438, 5136437062280042563, 6215616529169527604, 3834312847369707840]
There are no native 128-bit pack/unpacks to return in the other direction, but you can use Fixnum to solve this too:
split128 = 1 << 64
ct = pt # Just to show round-trip
ct.map { |x| [ x / split128, x % split128 ].pack('Q>2') }.join
=> "\x00\x00\x00\x00\x00\x00\x00\x00qwertyui . . . " # truncated
This avoids a lot of the temporary stages in your code, but at the expense of using a different byte coding - I don't know enough about SIMON to comment on whether this is adaptable to your needs.
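Note that the 64-bit words round-trip directly as well, since pack accepts the same directive:
pt_nums.pack('Q>8') == pt   # => true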

Recursively counting the number of characters in a string. (Ruby)

I need to write a recursive function that utilizes just two string methods, .empty? and .chop.
No, I can't use .length (Can you tell it's homework yet?)
So far I'm stuck on writing the function itself. I passed it the string, but I am unsure how to recursively go through the characters with the .chop string method. Would I just have a counter? The syntax for this thing seems tricky to me.
def stringLength(string)
  if string.empty?
    return 0
  else
    # .....
  end
end
I wish I could put more down, but this is what I'm stuck at.
return 1 + stringLength(string.chop)
That's your missing line. Here is an example of how this will work:
stringLength("Hello") = 1 + stringLength("Hell")
stringLength("Hell") = 1 + stringLength("Hel")
stringLength("Hel") = 1 + stringLength("He")
stringLength("He") = 1 + stringLength("H")
stringLength("H") = 1 + stringLength("")
stringLength("") = 0

Calculating the size of an Array pack struct format in Ruby

In the case of e.g. 'ddddd', d is the native double format for the system, so I can't know exactly how big it will be.
In python I can do:
import struct
print struct.calcsize('ddddd')
Which will return 40.
How do I get this in Ruby?
I haven't found a built-in way to do this, but I've had success with this small function when I know I'm dealing with only numeric formats:
def calculate_size(format)
  # Only for numeric formats; String formats will raise a TypeError
  elements = 0
  format.each_char do |c|
    if c =~ /\d/
      elements += c.to_i - 1
    else
      elements += 1
    end
  end
  ([0] * elements).pack(format).length
end
This constructs an array of the proper number of zeros, calls pack() with your format, and returns the length (in bytes). Zeros work in this case because they're convertible to each of the numeric formats (integer, double, float, etc).
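For example, matching the Python result above (assuming a native double is 8 bytes on your system):
calculate_size('ddddd')   # => 40
calculate_size('d5')      # => 40  (single-digit count prefixes also work)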
I don't know of a shortcut but you can just pack one and ask how long it is:
length_of_five_packed_doubles = 5 * [1.0].pack('d').length
By the way, a Ruby array combined with the pack method appears to be functionally equivalent to Python's struct module. Ruby pretty much copied Perl's pack and put it as a method on the Array class.
