I'm trying to minimize procedural code in Ruby. I have the following function build_array. It procedurally builds up an array in a pair of nested each loops.
def build_array(max)
  array = []
  (1..max).each do |size|
    (0..size).each do |n|
      array << [n, size - n]
    end
  end
  array
end
The function works correctly and produces output like this:
> build_array 2
=> [[0, 1], [1, 0], [0, 2], [1, 1], [2, 0]]
> build_array 3
=> [[0, 1], [1, 0], [0, 2], [1, 1], [2, 0], [0, 3], [1, 2], [2, 1], [3, 0]]
Can this function be further optimized with a reduce somehow? The building up of a new object based on the result of a series of operations seems to fit that pattern, but I can't figure it out. The order of the subarrays is not important.
Thanks!
Can this function be further optimized with a reduce somehow?
I don't know about "optimized", but yes, you definitely can use reduce. And I didn't even read your code before I answered!
There is an interesting reason why I was able to give that answer without even looking at your question, and that is that reduce is a general iteration operator. Now, what do I mean by that?
If you think about what reduce does, on a deep level, here is how it works. A collection can either be empty or not empty. Those are the only two cases. And the reduce operator takes three arguments:
The collection being reduced,
What to do when the collection is not empty, and
What to do when the collection is empty.
Now, you might say "three arguments, I only see one", but that's because we are using Ruby:
The first argument is the hidden self argument, i.e. the collection that you are calling reduce on,
The second argument is the block, and
The third argument is the actual argument that Ruby's Enumerable#inject takes.
Now, we typically think of the third argument as, and call it, the "accumulator". But in some other languages, it is also called the "zero", and there you can more easily see the connection to processing an empty collection.
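A quick way to see that "zero" in action is to reduce an empty collection: the block never runs, and you simply get the initial value back. A minimal sketch:

[].reduce(0) { |acc, x| acc + x }         #=> 0
[].reduce([]) { |acc, x| acc << x * 2 }   #=> []
[1, 2, 3].reduce(0) { |acc, x| acc + x }  #=> 6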
But think about it: when you iterate over a collection, you generally have to deal with the fact that the collection may be empty. And you have to somehow produce the next step in case the collection is not empty. There is really nothing else you need to do when iterating.
But those two things are exactly what is being covered by the last two arguments of the reduce operator!
So, it turns out that reduce can do everything that iteration can do. Really, everything. Everything that can be done with for / in or each in Ruby, foreach in C#, for / in, for / of, and forEach in JavaScript, the enhanced for loop in Java, range-based for in C++, etc. can be done with reduce.
That means that every method that exists in Enumerable can be done with reduce.
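To make that concrete, here is a sketch of map and select written purely in terms of reduce (my_map and my_select are illustrative names, not real Enumerable methods):

def my_map(enum, &block)
  enum.reduce([]) { |acc, x| acc << block.call(x) }
end

def my_select(enum, &block)
  enum.reduce([]) { |acc, x| block.call(x) ? acc << x : acc }
end

my_map(1..3)    { |x| x * 10 }   #=> [10, 20, 30]
my_select(1..6) { |x| x.even? }  #=> [2, 4, 6]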
Another way to think about it, is that a collection is a program written in a language that has only two commands: ELEMENT(x) and STOP. And reduce is an interpreter for that programming language that allows you to customize and provide your own implementations of ELEMENT(x) and STOP.
That is why the answer to the question "can this iteration thing be done with reduce" is always "Yes", regardless of what the "thing" that you're doing actually is.
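If it helps, here is a hand-rolled sketch of that "interpreter": the initial value is the STOP handler and the block is the ELEMENT(x) handler (my_reduce is just an illustrative name):

def my_reduce(collection, zero)
  acc = zero                    # what to produce for the empty collection (STOP)
  collection.each do |element|
    acc = yield(acc, element)   # what to do for each ELEMENT(x)
  end
  acc
end

my_reduce([1, 2, 3], 0) { |sum, x| sum + x }  #=> 6
my_reduce([], 0)        { |sum, x| sum + x }  #=> 0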
So, back to your question, here is what a naive, blind, mechanical 1:1 translation of your code to reduce would look like:
def build_array(max)
  (1..max).reduce([]) do |acc, size|
    (0..size).reduce(acc) do |acc, n|
      acc << [n, size - n]
    end
  end
end
Whenever you have a code pattern of the form:
Create some object
Iterate over a collection and add to that object in each iteration
Return the object
That is exactly reduce: the object that you are creating is the accumulator, and your loop body is the reducing operation. You can see that in the mechanical translation above.
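In generic form, the pattern and its reduce equivalent look like this (collection and transform are placeholders, not real methods):

# Create, iterate, return...
result = []
collection.each { |x| result << transform(x) }
result

# ...is exactly a reduce: the object is the accumulator,
# and the loop body is the reducing operation.
collection.reduce([]) { |acc, x| acc << transform(x) }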
Note that in general reduce is considered to be part of functional programming, and we are actually mutating the accumulator here. It would be more "pure" to instead return a new accumulator in each iteration:
def build_array(max)
  (1..max).reduce([]) do |acc, size|
    (0..size).reduce(acc) do |acc, n|
      acc + [[n, size - n]]
    end
  end
end
Alternatively, Ruby also has an "impure" version of reduce called each_with_object that is specifically designed for mutating the accumulator. In particular, reduce uses the return value of the block as the accumulator object for the next iteration, whereas each_with_object always passes the same object and just ignores the return value of the block:
def build_array(max)
  (1..max).each_with_object([]) do |size, array|
    (0..size).each_with_object(array) do |n, array|
      array << [n, size - n]
    end
  end
end
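One small gotcha when switching between the two: each_with_object yields the element first and the accumulator second, i.e. the block parameters are reversed compared to reduce. For example:

[1, 2, 3].reduce([])           { |acc, x| acc << x * 2 }  #=> [2, 4, 6]
[1, 2, 3].each_with_object([]) { |x, acc| acc << x * 2 }  #=> [2, 4, 6]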
However, note that just because everything can be expressed as a reduce operation does not mean that everything should be expressed as a reduce operation. There are several more specific operations, and if possible, those should be used.
In particular, in our case we are actually mostly transforming values, and that's what map is for. Or, flat_map if you want to process a nested collection into a non-nested one:
def build_array(max)
  (1..max).flat_map do |size|
    (0..size).map do |n|
      [n, size - n]
    end
  end
end
However, the most elegant solution in my opinion is the recursive one:
def build_array(max)
  return [] if max.zero?
  build_array(max.pred) + (0..max).map do |n|
    [n, max - n]
  end
end
However, note that this solution might blow the stack for high values of max. That may or may not be a problem in your case, because the result array also grows very large very quickly, so at the point where you run out of stack space, you would pretty much also run out of RAM, even with a non-recursive solution. On my laptop with YARV, I was able to determine that it breaks somewhere between 10000 and 20000, at which point the array uses well over 1GB.
The solution for this is to use a lazy infinite stream that generates each element one-by-one as it is consumed:
build_array = Enumerator.new do |y|
  (1..).each do |size|
    (0..size).each do |n|
      y << [n, size - n]
    end
  end
end
loop do
  print build_array.next
end
# [0, 1][1, 0][0, 2][1, 1][2, 0][0, 3][1, 2][2, 1][3, 0][0, 4][1, 3][2, 2]
# [3, 1][4, 0][0, 5][1, 4][2, 3][3, 2][4, 1][5, 0][0, 6][1, 5][2, 4][3, 3]
# [4, 2][5, 1][6, 0][0, 7][1, 6][2, 5][3, 4][4, 3][5, 2][6, 1][7, 0][0, 8]
# [1, 7][2, 6][3, 5][4, 4][5, 3][6, 2][7, 1][8, 0][0, 9][1, 8][2, 7][3, 6]
# [4, 5][5, 4][6, 3][7, 2][8, 1][9, 0][0, 10][1, 9][2, 8][3, 7][4, 6][5, 5]
# [6, 4][7, 3][8, 2][9, 1][10, 0][0, 11][1, 10][2, 9][3, 8][4, 7][5, 6][6, 5]
# [7, 4][8, 3][9, 2][10, 1][11, 0][0, 12][1, 11][2, 10][3, 9][4, 8][5, 7]
# [6, 6][7, 5][8, 4][9, 3][10, 2][11, 1][12, 0][0, 13][1, 12][2, 11]
# [3, 10][4, 9][5, 8][6, 7][7, 6][8, 5][9, 4][10, 3][11, 2][12, 1][13, 0]
# [0, 14][1, 13][2, 12][3, 11][4, 10][5, 9][6, 8][7, 7][8, 6][9, 5][10, 4]
# [11, 3][12, 2][13, 1][14, 0][0, 15][1, 14][2, 13][3, 12][4, 11][5, 10]
# [6, 9][7, 8][8, 7][9, 6][10, 5][11, 4][12, 3][13, 2][14, 1][15, 0]
# [0, 16][1, 15][2, 14][3, 13][4, 12][5, 11][6, 10][7, 9][8, 8][9, 7][10, 6]
# [11, 5][12, 4][13, 3][14, 2][15, 1][16, 0][0, 17][1, 16][2, 15][3, 14]
# [4, 13][5, 12][6, 11][7, 10][8, 9][9, 8][10, 7][11, 6][12, 5][13, 4][14, 3]
# [15, 2][16, 1][17, 0][0, 18][1, 17][2, 16][3, 15][4, 14][5, 13][6, 12]
# [7, 11][8, 10][9, 9][10, 8] …
You can let this run forever; it will never use any more memory. On my laptop, the Ruby process never grows beyond 9MB, whereas it was using multiple GB previously.
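And since build_array is now an Enumerator, you can also just ask it for a finite prefix; it only generates as many pairs as you request. For example:

build_array.first(5)  #=> [[0, 1], [1, 0], [0, 2], [1, 1], [2, 0]]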
You can implement this with map instead of each. It's essentially the same code, but you don't need to shovel the pairs into an array, since map returns an array for you.
def build_array(max)
  (1..max).map { |s| (0..s).map { |n| [n, s - n] } }.flatten(1)
end
reduce (or its alias inject) would still require you to insert into an array, so I can't see how it would be better than your original method. I'd only use reduce if I wanted to return an object that consolidates the input array into a smaller collection.
I think this question mostly belongs on Code Review.
But let's try to help you:
Here is an example with Enumerable#inject and Array#new:
def new_build_array(max)
  (1..max).inject([]) do |array, size|
    array += Array.new(size + 1) { |e| [e, size - e] }
  end
end
> new_build_array 2
=> [[0, 1], [1, 0], [0, 2], [1, 1], [2, 0]]
> new_build_array 3
=> [[0, 1], [1, 0], [0, 2], [1, 1], [2, 0], [0, 3], [1, 2], [2, 1], [3, 0]]
This is just a thought exercise and I'd be interested in any opinions. Although if it works I can think of a few ways I'd use it.
Traditionally, if you wanted to perform a function on the results of a nested loop formed from arrays or ranges etc, you would write something like this:
def my_func(x, y)
  # Processing with x, y
end
iterable_one.each do |x|
  iterable_two.each do |y|
    my_func(x, y)
  end
end
However, what if I had to add another level of nesting? Yes, I could just add an additional level of looping. At this point, let's make my_func take a variable number of arguments.
def my_func(*inputs)
  # Processing with variable inputs
end
iterable_one.each do |x|
  iterable_two.each do |y|
    iterable_three.each do |z|
      my_func(x, y, z)
    end
  end
end
Now, assume I need to add another level of nesting. At this point, it's getting pretty gnarly.
My question, therefore is this: Is it possible to write something like the below?
[iterable_one, iterable_two, iterable_three].nested_each(my_func)
or perhaps
[iterable_one, iterable_two, iterable_three].nested_each { |args| my_func(args) }
Perhaps passing the arguments as actual arguments isn't feasible; could you instead pass my_func an array containing one combination of elements from the enumerables?
I'd be curious to know if this is possible, it's probably not that likely a scenario but after it occurred to me I wanted to know.
Array#product yields combinations of enumerables as if they were in nested loops. It takes multiple arguments. Demo:
a = [1,2,3]
b = %w(a b c)
c = [true, false]
all_enums = [a,b,c]
all_enums.shift.product(*all_enums) do |combi|
  p combi
end
#[1, "a", true]
#[1, "a", false]
#[1, "b", true]
#...
You can use product:
[1,4].product([5,6],[3,5]) #=> [[1, 5, 3], [1, 5, 5], [1, 6, 3], [1, 6, 5], [4, 5, 3], [4, 5, 5], [4, 6, 3], [4, 6, 5]]
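Applied to the question, that gives a flat replacement for the nested loops; the iterables and my_func here are the placeholders from the question:

# Note: unlike passing a block to product, this builds the full array of
# combinations first, then iterates over it.
iterable_one.product(iterable_two, iterable_three).each do |x, y, z|
  my_func(x, y, z)
end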
One of the things I commonly get hung up on in Ruby is recursion patterns. For example, suppose I have an array that may contain arrays as elements, nested to an unlimited depth. So, for example:
my_array = [1, [2, 3, [4, 5, [6, 7]]]]
I'd like to create a method which can flatten the array into [1, 2, 3, 4, 5, 6, 7].
I'm aware that .flatten would do the job, but this problem is meant as an example of recursion issues I regularly run into - and as such I'm trying to find a more reusable solution.
In short, I'm guessing there's a standard pattern for this sort of thing, but I can't come up with anything particularly elegant. Any ideas appreciated.
Recursion is a technique; it does not depend on the language. You write the algorithm with two kinds of cases in mind: the ones that call the function again (recursive cases) and the ones that stop it (base cases). For example, to do a recursive flatten in Ruby:
class Array
  def deep_flatten
    flat_map do |item|
      if item.is_a?(Array)
        item.deep_flatten
      else
        [item]
      end
    end
  end
end
[[[1]], [2, 3], [4, 5, [[6]], 7]].deep_flatten
#=> [1, 2, 3, 4, 5, 6, 7]
Does this help? Anyway, a useful pattern shown here is that when you are using recursion on arrays, you usually need flat_map (the functional alternative to each + concat/push).
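To see what is meant by that last remark, here are the two styles side by side; they build the same result:

# flat_map collects and concatenates in one step...
[[1], [2, 3]].flat_map { |a| a }   #=> [1, 2, 3]

# ...which is the functional equivalent of each + concat:
result = []
[[1], [2, 3]].each { |a| result.concat(a) }
result                             #=> [1, 2, 3]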
Well, if you know a bit of C, you just have to visit the docs and click the Ruby method to get the C source; it is all there:
http://www.ruby-doc.org/core-1.9.3/Array.html#method-i-flatten
And for this case, here is a Ruby implementation:
def flatten(values, level = -1)
  flat = []
  values.each do |value|
    if level != 0 && value.kind_of?(Array)
      flat.concat(flatten(value, level - 1))
    else
      flat << value
    end
  end
  flat
end
p flatten [1, [2, 3, [4, 5, [6, 7]]]]
#=> [1, 2, 3, 4, 5, 6, 7]
Here's an example of a flatten written in an accumulator-passing style, the style usually used for tail recursion.
class Array
  # Monkeypatching Array to override the built-in flatten
  def flatten(new_arr = [])
    self.each do |el|
      if el.is_a?(Array)
        el.flatten(new_arr)
      else
        new_arr << el
      end
    end
    new_arr
  end
end

p [1, [2, 3, [4, 5, [6, 7]]]].flatten
#=> [1, 2, 3, 4, 5, 6, 7]
Although it looks like Ruby doesn't always optimize tail calls: Does ruby perform tail call optimization?
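For what it's worth, YARV can be asked to compile with tail call optimization, though this is an implementation detail and only affects code compiled after the option is set (for example via eval or a later require). A sketch:

RubyVM::InstructionSequence.compile_option = {
  tailcall_optimization: true,
  trace_instruction:     false
}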
Let's say I have the following array:
arr = [[5, 1], [2, 7]]
and I want to find the minimum element, comparing the second element of the elements. The minimum element will be [5, 1] since 1 is less than 7. I can use the following code:
arr.min {|a,b| a[1] <=> b[1]}
For calculating the maximum, I can do the same:
arr.max {|a,b| a[1] <=> b[1]}
That gives [2, 7].
I use the same block all the time. I would like to have that block somewhere and provide it to the min/max function. I hoped something like:
blo = lambda {|a,b| a[1] <=> b[1]}
arr.min blo
would work, but it didn't. Any idea on how I can do this?
Use the & operator to turn a Proc object into a block.
arr.min &blo
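For example, with the array from the question:

blo = lambda { |a, b| a[1] <=> b[1] }

arr.min(&blo)  #=> [5, 1]
arr.max(&blo)  #=> [2, 7]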
@sepp2k's answer is the more general one, but in your specific case, I would just use
arr.min_by(&:last)
arr.max_by(&:last)
since that is much more obvious than all those curly braces and square brackets and array indices floating around.
If all that you need is minimum and maximum, you might use Enumerable#minmax method and calculate both at once:
min, max = arr.minmax {|a,b| a[1] <=> b[1]}
#=> [[5, 1], [2, 7]]
min
#=> [5, 1]
max
#=> [2, 7]
Edit: Hell, I just noticed there is also minmax_by, so you can combine it with the last method and have:
min, max = arr.minmax_by &:last
How about this?
>> arr = [[5, 4], [9, 5], [2, 7]]
=> [[5, 4], [9, 5], [2, 7]]
>> arr.sort!{|x,y| x[1]<=>y[1] }
=> [[5, 4], [9, 5], [2, 7]]
>> min,max=arr[0],arr[-1]
=> [[5, 4], [2, 7]]
A more general solution to problems like this is to avoid nested arrays entirely and use a class instead. You can then define the <=> operator for that class and include the Comparable mixin (http://ruby-doc.org/core/classes/Comparable.html), which gives you the <, <=, ==, >=, and > operators and the between? method.
This is just an example, in real life you would use classes that describe what they store:
class Duo
  include Comparable

  attr_reader :a, :b

  def initialize(a, b)
    @a = a
    @b = b
  end

  def <=>(rhs)
    @b <=> rhs.b
  end
end
If you have an array of Duo objects you can then use the min, max, and sort functions without having to pass a comparison block. So...
a = Duo.new(1, 10)
b = Duo.new(2, 5)
c = Duo.new(3, 1)

[a, b, c].sort
would return the array [c, b, a]
And
[a, b, c].max
would return a
This is much more the 'Ruby Way' than nested data-structures with logic that relies on positions in arrays. It takes slightly more work at the start, but you'll find it much better in the long run.
Ruby is a very object oriented programming language and provides very powerful tools for you to use. I thoroughly recommend reading a book like "The Ruby Programming Language" or "The Ruby Way" to get a proper overview of the power of the language.
I'd really like to handle this without monkey-patching but I haven't been able to find another option yet.
I have an array (in Ruby) that I need to sort by multiple conditions. I know how to use the sort method, and I've used the trick of comparing arrays of values to sort by multiple conditions. However, in this case I need the first condition to sort ascending and the second to sort descending. For example:
ordered_list = [[1, 2], [1, 1], [2, 1]]
Any suggestions?
Edit: Just realized I should mention that I can't easily compare the first and second values (I'm actually working with object attributes here). So for a simple example it's more like:
ordered_list = [[1, "b"], [1, "a"], [2, "a"]]
How about:
ordered_list = [[1, "b"], [1, "a"], [2, "a"]]
ordered_list.sort! do |a, b|
  [a[0], b[1]] <=> [b[0], a[1]]
end
I was having a nightmare of a time trying to figure out how to reverse sort a specific attribute but sort the other two normally. Just a note about the sorting for those who come along after this and are confused by the |a,b| block syntax: you cannot use the {|a,b| a.blah <=> b.blah} block style with sort_by! or sort_by; it must be used with sort! or sort. Also, as the other posters indicated, swap a and b across the comparison operator <=> to reverse the sort order. Like this:
To sort by blah and craw normally, but sort by bleu in reverse order, do this:
something.sort!{|a,b| [a.blah, b.bleu, a.craw] <=> [b.blah, a.bleu, b.craw]}
It is also possible to use the - sign with sort_by or sort_by! to do a reverse sort on numbers (as far as I am aware it only works on numbers, so don't try it with strings as it just errors and kills the page).
Assume a.craw is an integer. For example:
something.sort_by!{|a| [a.blah, -a.craw, a.bleu]}
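Applied to the simple numeric example from the question, negating the second element gives ascending-then-descending order (list is just a placeholder name):

list = [[1, 2], [1, 1], [2, 1]]
list.sort_by { |a, b| [a, -b] }  #=> [[1, 2], [1, 1], [2, 1]]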
I had this same basic problem, and solved it by adding this:
class Inverter
  attr_reader :o

  def initialize(o)
    @o = o
  end

  def <=>(other)
    -(@o <=> other.o)
  end
end
This is a wrapper that simply inverts the <=> function, which then allows you to do things like this:
your_objects.sort_by {|y| [y.prop1,Inverter.new(y.prop2)]}
Enumerable#multisort (defined below) is a generic solution that can be applied to subarrays of any length, not just pairs. Its arguments are booleans indicating whether the corresponding field should be sorted ascending (true) or descending (false); usage is shown below:
items = [
  [3, "Britney"],
  [1, "Corin"],
  [2, "Cody"],
  [5, "Adam"],
  [1, "Sally"],
  [2, "Zack"],
  [5, "Betty"]
]
module Enumerable
  def multisort(*args)
    sort do |a, b|
      i, res = -1, 0
      res = a[i] <=> b[i] until !res.zero? or (i += 1) == a.size
      args[i] == false ? -res : res
    end
  end
end
items.multisort(true, false)
# => [[1, "Sally"], [1, "Corin"], [2, "Zack"], [2, "Cody"], [3, "Britney"], [5, "Betty"], [5, "Adam"]]
items.multisort(false, true)
# => [[5, "Adam"], [5, "Betty"], [3, "Britney"], [2, "Cody"], [2, "Zack"], [1, "Corin"], [1, "Sally"]]
I've been using Glenn's recipe for quite a while now. Tired of copying code from project to project over and over again, I've decided to make it a gem:
http://github.com/dadooda/invert