I would like to do something like join with an Array, but instead of getting the result as a String, I would like to get an Array. I will call this interpolate. For example, given:
a = [1, 2, 3, 4, 5]
I expect:
a.interpolate(0) # => [1, 0, 2, 0, 3, 0, 4, 0, 5]
a.interpolate{Array.new} # => [1, [], 2, [], 3, [], 4, [], 5]
What is the best way to get this? The reason I want it to accept a block is that, when I use it with a block, I want a different instance for each interpolated element that comes in between.
After getting great answers from many people, I came up with some modified versions.
This one is a modification of tokland's answer. I made it accept nil for conj1, and moved the if conj2 check outside the flat_map loop to make it faster.
class Array
  def interpolate conj1 = nil, &conj2
    return [] if empty?
    if conj2 then first(length - 1).flat_map { |e| [e, conj2.call] }
    else first(length - 1).flat_map { |e| [e, conj1] }
    end << last
  end
end
This one is a modification of Victor Moroz's answer. I added the functionality to accept a block.
class Array
  def interpolate conj1 = nil, &conj2
    return [] if empty?
    first, *rest = self
    if conj2 then rest.inject([first]) { |a, e| a.push(conj2.call, e) }
    else rest.inject([first]) { |a, e| a.push(conj1, e) }
    end
  end
end
After a benchmark test, the second one looks faster. It seems that flat_map, although elegant, is slow.
Use zip:
a.zip(Array.new(a.size) { 0 }).flatten(1)[0...-1]
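The same approach can cover the block case from the question (a distinct object between each pair), since the block passed to Array.new is called once per element; a sketch:

a.zip(Array.new(a.size) { Array.new }).flatten(1)[0...-1]
#=> [1, [], 2, [], 3, [], 4, [], 5]   (each [] is a separate instance)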
Another way
class Array
  def interpolate(pol = nil)
    new_ary = self.inject([]) do |memo, orig_item|
      pol = yield if block_given?
      memo += [orig_item, pol]
    end
    new_ary.pop
    new_ary
  end
end
[1,2,3].interpolate("A")
#=> [1, "A", 2, "A", 3]
[1,2,3].interpolate {Array.new}
#=> [1, [], 2, [], 3]
class Array
  def interpolate_with val
    res = []
    self.each_with_index do |el, idx|
      res << val unless idx == 0
      res << el
    end
    res
  end
end
Usage:
ruby-1.9.3-p0 :021 > [1,2,3].interpolate_with 0
=> [1, 0, 2, 0, 3]
ruby-1.9.3-p0 :022 > [1,2,3].interpolate_with []
=> [1, [], 2, [], 3]
Not really sure what you want to do with a block, but I would do it this way:
class Array
  def interpolate(sep)
    h, *t = self
    t.empty? ? [h] : t.inject([h]) { |a, e| a.push(sep, e) }
  end
end
UPDATE:
Benchmarks (array size = 100):
             user     system      total        real
inject   0.730000   0.000000   0.730000 (  0.767565)
zip      1.030000   0.000000   1.030000 (  1.034664)
Actually I am a bit surprised, I thought zip would be faster.
UPDATE2:
zip is faster, flatten is not.
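For reference, a comparison of that kind can be run with Benchmark along these lines (the exact setup behind the numbers above isn't shown, so the array contents and iteration count here are assumptions):

require 'benchmark'

a = (1..100).to_a
n = 100_000

Benchmark.bm(7) do |x|
  x.report('inject') { n.times { a.drop(1).inject([a.first]) { |m, e| m.push(0, e) } } }
  x.report('zip')    { n.times { a.zip(Array.new(a.size) { 0 }).flatten(1)[0...-1] } }
end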
Here's a simple version (which can handle multiple values and/or a block) using flat_map and each_cons:
class Array
  def interpolate *values
    each_cons(2).flat_map do |e, _|
      [e, *values, *(block_given? ? yield(e) : [])]
    end << last
  end
end
[1,2,3].interpolate(0, "") # => [1, 0, "", 2, 0, "", 3]
[1,2,3].interpolate(&:even?) # => [1, false, 2, true, 3]
This does it inplace:
class Array
  def interpolate(t = nil)
    each_with_index do |e, i|
      t = yield if block_given?
      insert(i, t) if i % 2 == 1
    end
  end
end
This works because t is inserted before the element at the current index, which makes the just-inserted t the element at that index, so the iteration can continue normally.
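A quick check of the in-place behaviour (the block form yields a fresh object for each inserted slot):

a = [1, 2, 3]
a.interpolate(0)                     #=> [1, 0, 2, 0, 3]
a                                    #=> [1, 0, 2, 0, 3]   (the receiver is mutated)
[1, 2, 3].interpolate { Array.new }  #=> [1, [], 2, [], 3]  (two distinct arrays)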
So many ways to do this. For example (Ruby 1.9):
class Array
  def intersperse(item = nil)
    return clone if self.empty?
    take(self.length - 1).flat_map do |x|
      [x, item || yield]
    end + [self.last]
  end
end
p [].intersperse(0)
#=> []
p [1, 2, 3, 4, 5].intersperse(0)
#=> [1, 0, 2, 0, 3, 0, 4, 0, 5]
p [1, 2, 3, 4, 5].intersperse { 0 }
#=> [1, 0, 2, 0, 3, 0, 4, 0, 5]
(I use the Haskell function name: intersperse.)
Here is one way:
theArray.map {|element| [element, interpolated_obj]}.flatten
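Note that, as written, this leaves a trailing interpolated_obj and, with no depth argument, flatten will also flatten separators that are themselves arrays; a version closer to the question's spec might be (a sketch):

theArray.map { |element| [element, interpolated_obj] }.flatten(1)[0...-1]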
Related
In JavaScript a reduce function may look like:
array.reduce((acc, cur, idx, arr) => {
  // body
}, starting_value);
I'm trying to somehow have that arr argument, which is a copy of the original array, I've seen it being useful plenty of times. This is as far as I could take it:
array.each_with_index.reduce(starting_value) do |acc, (cur, idx)|
  # body
end
I've been browsing through the Ruby documentation for quite some time (I actually copied the .each_with_index bit from something I found), looking for anything even remotely like this.
To be honest, I could split this into multiple lines and store the array in a variable, but I'd be very happy if I could keep the functional approach I use in JavaScript in Ruby as well.
In essence: is there any way to get the arr parameter within the body?
reduce – being an Enumerable method – is not aware of the collection it is enumerating.
You have to incorporate the array yourself, for example via then / yield_self:
[1, 2, 3].then do |arr|
  arr.each_with_index.reduce(4) do |acc, (cur, idx)|
    p acc: acc, cur: cur, idx: idx, arr: arr
    acc + cur
  end
end
# {:acc=>4, :cur=>1, :idx=>0, :arr=>[1, 2, 3]}
# {:acc=>5, :cur=>2, :idx=>1, :arr=>[1, 2, 3]}
# {:acc=>7, :cur=>3, :idx=>2, :arr=>[1, 2, 3]}
#=> 10
or somewhere within the chain:
[1, 2, 3].then do |arr|
  arr.map { |x| x * 2 }.then do |arr_2|
    arr_2.each_with_index.reduce(4) do |acc, (cur, idx)|
      p acc: acc, cur: cur, idx: idx, arr: arr, arr_2: arr_2
      acc + cur
    end
  end
end
# {:acc=>4, :cur=>2, :idx=>0, :arr=>[1, 2, 3], :arr_2=>[2, 4, 6]}
# {:acc=>6, :cur=>4, :idx=>1, :arr=>[1, 2, 3], :arr_2=>[2, 4, 6]}
# {:acc=>10, :cur=>6, :idx=>2, :arr=>[1, 2, 3], :arr_2=>[2, 4, 6]}
#=> 16
It is possible to create a custom reduce method:
module Enumerable
  def reduce_with_self(initial_or_sym, sym = nil)
    if initial_or_sym.is_a?(Symbol)
      operator = initial_or_sym
      initial = nil
    else
      initial = initial_or_sym
      operator = sym
    end
    accumulator = initial
    each_with_index do |item, index|
      if index.zero? && initial.nil?
        accumulator = item
        next
      end
      accumulator = operator.nil? ? yield(accumulator, item, self) : accumulator.send(operator, item)
    end
    accumulator
  end
end
The third argument of the block will be a reference to the collection:
> [1,2,3,4].reduce_with_self(0) do |acc, item, array|
> p array
> acc += item
> end
[1, 2, 3, 4]
[1, 2, 3, 4]
[1, 2, 3, 4]
[1, 2, 3, 4]
=> 10
> [1,2,3,4].reduce_with_self(2,:+)
=> 12
> [1,2,3,4].reduce_with_self(:+)
=> 10
Of course, this implementation will be slower than the original one:
require 'benchmark'

Benchmark.bm do |x|
  x.report('reduce')           { 1000.times { (0..10000).reduce(0) { |acc, item| acc += item } } }
  x.report('reduce_with_self') { 1000.times { (0..10000).reduce_with_self(0) { |acc, item, array| acc += item } } }
end
                       user     system      total        real
reduce             0.501833   0.000000   0.501833 (  0.502698)
reduce_with_self   0.955978   0.000000   0.955978 (  0.956809)
I want my function to return the longest array within a nested array (including the array itself). So, given:
nested_ary = [[1,2],[[1,2,[[1,2,3,4,[5],6,7,11]]]],[1,[2]]]
deep_max(nested_ary)
=> [1,2,3,4,[5],6,7,11]
simple_ary = [1,2,3,4,5]
deep_max(simple_ary)
=> [1,2,3,4,5]
I created a function to collect all the nested arrays, but I have to get the longest one in another step.
my code:
def deep_max(ary)
  ary.inject([ary]) do |memo, elem|
    if elem.is_a?(Array)
      memo.concat(deep_max(elem))
    else
      memo
    end
  end
end
This gives me what I want:
deep_max(nested_ary).max_by{ |elem| elem.size }
Is there a way to get this max inside of the function?
def deep_max(arr)
  biggest_so_far = arr
  arr.each do |e|
    if e.is_a?(Array)
      candidate = deep_max(e)
      biggest_so_far = candidate if candidate.size > biggest_so_far.size
    end
  end
  biggest_so_far
end
deep_max [[1, 2], [[1, 2, [[1, 2, 3, 4, [5], 6, 7, 11]]]], [1, [2]]]
#=> [1, 2, 3, 4, [5], 6, 7, 11]
You can unroll it:
def deep_max(ary)
  arys = []
  ary = [ary]
  until ary.empty?
    elem = ary.pop
    if elem.is_a?(Array)
      ary.push(*elem)
      arys.push(elem)
    end
  end
  arys.max_by(&:size)
end
Or you can cheat by introducing an optional parameter that changes how your recursion works at the top level versus how it behaves further down the rabbit hole, as sketched below.
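A minimal sketch of that idea, building on the question's collect-everything approach (the parameter name top_level is just illustrative):

def deep_max(ary, top_level = true)
  arys = [ary]
  ary.each { |e| arys.concat(deep_max(e, false)) if e.is_a?(Array) }
  top_level ? arys.max_by(&:size) : arys
end

deep_max [[1, 2], [[1, 2, [[1, 2, 3, 4, [5], 6, 7, 11]]]], [1, [2]]]
#=> [1, 2, 3, 4, [5], 6, 7, 11]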
This is my array and my custom method to reverse an array without using the built-in reverse method. I'm not sure where it breaks; I tried running it in the console, no dice.
numbers = [1, 2, 3, 4, 5, 6]
def reversal(array)
do |item1, item2| item2 <=> item1
end
p reversal(numbers)
Here's one way to handle this. This is not very efficient but works.
def reversal(array)
  reversed = []
  loop do
    reversed << array.pop
    break if array.empty?
  end
  reversed
end
Here is another implementation that does the same thing:
def reversal(array)
  array.each_with_index.map do |value, index|
    array[array.count - index - 1]
  end
end
So many ways... Here are three (#1 being my preference).
numbers6 = [1, 2, 3, 4, 5, 6]
numbers5 = [1, 2, 3, 4, 5]
For all methods my_rev below,
my_rev(numbers6)
#=> [6, 5, 4, 3, 2, 1]
my_rev(numbers5)
#=> [5, 4, 3, 2, 1]
#1
def my_rev(numbers)
  numbers.reverse_each.to_a
end

#2
def my_rev(numbers)
  numbers.each_index.map { |i| numbers[-1-i] }
end

#3
def my_rev(numbers)
  (numbers.size/2).times.with_object(numbers.dup) do |i, a|
    a[i], a[-1-i] = a[-1-i], a[i]
  end
end
There are so many ways to do this.
1 Conventional way
a = [1, 2, 3, 4, 5, 6, 7, 8]
i = 1
while i <= a.length / 2 do
  temp = a[i-1]
  a[i-1] = a[a.length - i]
  a[a.length - i] = temp
  i += 1
end
2 Using pop
a = [1, 2, 3, 4, 5, 6]
i = 0
b = []
t = a.length
while i < t do
  b << a.pop
  i += 1
end
3 Using pop and loop
a = [1, 2, 3, 4, 5, 6]
b = []
loop do
  b << a.pop
  break if a.empty?
end
a = [1, 2, 3, 4, 5]
b = []
a.length.times { |i| b << a[(i+1) * -1] }
b
#=> [5, 4, 3, 2, 1]
I want to get each intermediate value computed by inject.
For example, [1,2,3].inject(3) { |sum, num| sum + num } returns 9, and I want to get all the values produced along the way.
I tried [1,2,3].inject(3).map { |sum, num| sum + num }, but it didn't work.
The code I wrote is this, but I feel it's redundant.
a = [1, 2, 3]
result = []
a.inject(3) do |sum, num|
  v = sum + num
  result << v
  v
end
p result
# => [4, 6, 9]
Is there a way to use inject and map at same time?
Using a dedicated Enumerator fits perfectly here, but I would like to show a more generic approach:
[1, 2, 3].inject(map: [], sum: 3) do |acc, num|
  acc[:map] << (acc[:sum] += num)
  acc
end
#=> {:map=>[4, 6, 9], :sum=>9}
That way (using a hash as the accumulator) one might collect whatever one wants. Side note: it is better to use Enumerable#each_with_object here instead of inject, because each_with_object keeps yielding the same object rather than relying on the block's return value for the next iteration:
[1, 2, 3].each_with_object(map: [], sum: 3) do |num, acc|
  acc[:map] << (acc[:sum] += num)
end
The best I could think of:
def partial_sums(arr, start = 0)
  sum = 0
  arr.each_with_object([]) do |elem, result|
    sum = elem + (result.empty? ? start : sum)
    result << sum
  end
end
partial_sums([1,2,3], 3)
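For the question's input ([1,2,3] with a starting value of 3) this evaluates to [4, 6, 9].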
You could use an enumerator:
enum = Enumerator.new do |y|
  [1, 2, 3].inject(3) do |sum, n|
    y << sum + n
    sum + n
  end
end
enum.take([1,2,3].size) #=> [4, 6, 9]
Obviously you can wrap this up nicely in a method, but I'll leave that for you to do; one way it might look is sketched below. Also, I don't think there's much wrong with your attempt; it works nicely.
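For instance, a wrapper might look something like this (borrowing the partial_sums name used in another answer here, purely for illustration):

def partial_sums(arr, initial)
  Enumerator.new do |y|
    arr.inject(initial) { |sum, n| (sum + n).tap { |s| y << s } }
  end.take(arr.size)
end

partial_sums([1, 2, 3], 3) #=> [4, 6, 9]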
def doit(arr, initial_value)
  arr.each_with_object([initial_value]) { |e, a| a << e + a[-1] }.drop(1)
end
arr = [1,2,3]
initial_value = 4
doit(arr, initial_value)
#=> [5, 7, 10]
This lends itself to being generalized.
def gen_doit(arr, initial_value, op)
  arr.each_with_object([initial_value]) { |e, a| a << a[-1].send(op, e) }.drop(1)
end
gen_doit(arr, initial_value, :+) #=> [5,7,10]
gen_doit(arr, initial_value, '-') #=> [3, 1, -2]
gen_doit(arr, initial_value, :*) #=> [4, 8, 24]
gen_doit(arr, initial_value, '/') #=> [4, 2, 0]
gen_doit(arr, initial_value, :**) #=> [4, 16, 4096]
gen_doit(arr, initial_value, '%') #=> [0, 0, 0]
I have a map which either changes a value or sets it to nil. I then want to remove the nil entries from the list. The list doesn't need to be kept.
This is what I currently have:
# A simple example function, which returns a value or nil
def transform(n)
  rand > 0.5 ? n * 10 : nil
end
items.map! { |x| transform(x) } # [1, 2, 3, 4, 5] => [10, nil, 30, 40, nil]
items.reject! { |x| x.nil? } # [10, nil, 30, 40, nil] => [10, 30, 40]
I'm aware I could just do a loop and conditionally collect in another array like this:
new_items = []
items.each do |x|
  x = transform(x)
  new_items.append(x) unless x.nil?
end
items = new_items
But it doesn't seem that idiomatic. Is there a nice way to map a function over a list, removing/excluding the nils as you go?
You could use compact:
[1, nil, 3, nil, nil].compact
=> [1, 3]
I'd like to remind people that if you're getting an array containing nils as the output of a map block, and that block tries to conditionally return values, then you've got a code smell and need to rethink your logic.
For instance, if you're doing something that does this:
[1,2,3].map{ |i|
  if i % 2 == 0
    i
  end
}
# => [nil, 2, nil]
Then don't. Instead, prior to the map, reject the stuff you don't want or select what you do want:
[1,2,3].select{ |i| i % 2 == 0 }.map{ |i|
  i
}
# => [2]
I consider using compact to clean up a mess as a last-ditch effort to get rid of things we didn't handle correctly, usually because we didn't know what was coming at us. We should always know what sort of data is being passed around in our program; unexpected or unknown data is bad. Any time I see nils in an array I'm working on, I dig into why they exist and see whether I can improve the code generating the array, rather than letting Ruby waste time and memory generating nils and then sifting through the array to remove them later.
'Just my $%0.2f.' % [2.to_f/100]
Try using reduce or inject.
[1, 2, 3].reduce([]) { |memo, i|
  if i % 2 == 0
    memo << i
  end
  memo
}
I agree with the accepted answer that we shouldn't map and compact, but not for the same reasons.
I feel deep inside that map-then-compact is equivalent to select-then-map. Consider: map is a one-to-one function. If you are mapping over some set of values, you want one value in the output set for each value in the input set. If you have to select beforehand, then you probably don't want a map on the set. If you have to select afterwards (or compact), then you probably don't want a map on the set either. In either case you are iterating over the entire set twice, whereas a reduce only needs to go through once.
Also, in English, you are trying to "reduce a set of integers into a set of even integers".
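A sketch of that single pass with reduce:

[1, 2, 3, 4].reduce([]) { |evens, i| i.even? ? evens << i : evens }
#=> [2, 4]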
Ruby 2.7+
There is now!
Ruby 2.7 is introducing filter_map for this exact purpose. It's idiomatic and performant, and I'd expect it to become the norm very soon.
For example:
numbers = [1, 2, 5, 8, 10, 13]
numbers.filter_map { |i| i * 2 if i.even? }
# => [4, 16, 20]
In your case, since the block already evaluates to a falsey value for the items you want to drop, it's simply:
items.filter_map { |x| process_x url }
"Ruby 2.7 adds Enumerable#filter_map" is a good read on the subject, with some performance benchmarks against some of the earlier approaches to this problem:
require 'benchmark'

N = 100_000
enum = 1.upto(1_000)

Benchmark.bmbm do |x|
  x.report("select + map")  { N.times { enum.select { |i| i.even? }.map { |i| i + 1 } } }
  x.report("map + compact") { N.times { enum.map { |i| i + 1 if i.even? }.compact } }
  x.report("filter_map")    { N.times { enum.filter_map { |i| i + 1 if i.even? } } }
end
# Rehearsal -------------------------------------------------
# select + map 8.569651 0.051319 8.620970 ( 8.632449)
# map + compact 7.392666 0.133964 7.526630 ( 7.538013)
# filter_map 6.923772 0.022314 6.946086 ( 6.956135)
# --------------------------------------- total: 23.093686sec
#
# user system total real
# select + map 8.550637 0.033190 8.583827 ( 8.597627)
# map + compact 7.263667 0.131180 7.394847 ( 7.405570)
# filter_map 6.761388 0.018223 6.779611 ( 6.790559)
Definitely compact is the best approach for solving this task. However, we can achieve the same result just with a simple subtraction:
[1, nil, 3, nil, nil] - [nil]
=> [1, 3]
In your example:
items.map! { |x| process_x url } # [1, 2, 3, 4, 5] => [1, nil, 3, nil, nil]
it does not look like the values have changed other than being replaced with nil. If that is the case, then:
items.select{|x| process_x url}
will suffice.
If you wanted a looser criterion for rejection, for example to reject empty strings as well as nil, you could use blank? (note that blank? comes from ActiveSupport/Rails rather than core Ruby):
[1, nil, 3, 0, ''].reject(&:blank?)
=> [1, 3, 0]
If you wanted to go further and reject zero values (or apply more complex logic to the process), you could pass a block to reject:
[1, nil, 3, 0, ''].reject do |value| value.blank? || value==0 end
=> [1, 3]
[1, nil, 3, 0, '', 1000].reject do |value| value.blank? || value==0 || value>10 end
=> [1, 3]
You can use #compact method on the resulting array.
[10, nil, 30, 40, nil].compact #=> [10, 30, 40]
each_with_object is probably the cleanest way to go here:
new_items = items.each_with_object([]) do |x, memo|
  ret = process_x(x)
  memo << ret unless ret.nil?
end
In my opinion, each_with_object is better than inject/reduce in conditional cases because you don't have to worry about the return value of the block.
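For comparison, the inject/reduce version has to make sure the accumulator is the block's return value on every path (a sketch reusing the hypothetical process_x from above):

new_items = items.reduce([]) do |memo, x|
  ret = process_x(x)
  ret.nil? ? memo : memo << ret
end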
One more way to accomplish this is shown below. Here we use Enumerable#each_with_object to collect values, and Object#tap to get rid of the temporary variable that would otherwise be needed for the nil check on the result of the process_x method.
items.each_with_object([]) {|x, obj| (process x).tap {|r| obj << r unless r.nil?}}
Complete example for illustration:
items = [1, 2, 3, 4, 5]

def process x
  rand(10) > 5 ? nil : x
end

items.each_with_object([]) { |x, obj| (process x).tap { |r| obj << r unless r.nil? } }
Alternate approach:
Looking at the method you are calling, process_x url, it is not clear what the purpose of the input x is. If I assume that you are going to process the value of x by passing it some url, and you want to determine which of the xs really get processed into valid non-nil results, then maybe Enumerable#group_by is a better option than Enumerable#map.
h = items.group_by {|x| (process x).nil? ? "Bad" : "Good"}
#=> {"Bad"=>[1, 2], "Good"=>[3, 4, 5]}
h["Good"]
#=> [3,4,5]