The hash map is populated as follows:
{"1"=>["2"], "2"=>["3", "7"], "3"=>["4"], "5"=>["6", "2"], "6"=>["7"], "7"=>["8", "4"]}
so that each key can have multiple values. The values represent bus stops reachable from the key on a set of routes, e.g. from bus stop 1 you can get to 2, from bus stop 2 you can get to 3 or 7, and so on.
I am trying to traverse this graph structure and find all possible paths of length greater than 1 from a given starting point. For example, for a starting point of 1, the list of all possible paths would be:
[[1,2] , [1,2,3] , [1,2,3,4] , [1,2,7] , [1,2,7,8], [1,2,7,4]]
I am trying to solve this problem recursively: I iterate over the children of the current key (its values) and call a function that expands each child in turn. Here is my code so far:
if hash.has_key?(origin.to_s)
  tmp = origin.to_s
  tmp << hash[origin.to_s][0]
  paths.push(tmp)
  expand_node(hash,paths,paths.length-1,tmp[tmp.length-1])
end
Origin is the starting point. To start, I add the first path (in this case [1,2]) to an array called paths, then I expand the last value added to the previous path (in this case 2).
def expand_node(hash,paths,curr_paths_index,node)
  curr_node = node
  if hash.has_key?(node.to_s)
    counter = 0
    hash[curr_node].each do |child|
      tmp = paths[curr_paths_index]
      tmp << child
      paths << tmp
      curr_paths_index += counter + 1
      puts paths
      expand_node(hash,paths,curr_paths_index,child)
    end
  end
end
The argument curr_paths_index keeps track of the path from which I expand.
The argument paths is the whole list of paths found so far.
The argument node is the current value being expanded.
Printing paths after the function finishes yields:
123
123
1234
1234
1234
12347
12347
12347
12347
123478
123478
123478
123478
123478
1234784
1234784
1234784
1234784
1234784
1234784
Is there any way to modify this code so that it produces the desired output (shown above)? Is there a better way of solving this problem in general?
Thanks.
One way to solve a recursive problem is to first break it down into a single case that is easy to solve and then generalize that one case. Here is an easier case:
Given a graph and path through that graph, determine which nodes can be added to the end of that path without creating a loop.
If we can solve this problem we can easily solve the larger problem as well.
1. Start with a path that is just your starting node.
2. Find all nodes that can be added to that path without creating a loop.
3. For each such node, output a path which is your initial path plus that node and...
4. Recursively repeat from step 2, using the new path.
Note that if no new nodes are found in step 2, the recursive call will terminate because steps 3 and 4 have nothing to do.
Here is how I would solve step 2; I'll leave the recursive part to you:
def find_next_nodes graph, path
  # the || [] guard handles terminal nodes (here "8" and "4") that have no outgoing edges
  (graph[path[-1]] || []).reject { |node| path.include? node }
end
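If you want to check your work against a complete solution afterwards, the recursion for steps 2-4 could look something like the sketch below (find_paths is just an illustrative name; it works on the string keys exactly as they are stored in the hash):

def find_paths graph, path, paths = []
  find_next_nodes(graph, path).each do |node|
    new_path = path + [node]
    paths << new_path                    # step 3: output the extended path
    find_paths(graph, new_path, paths)   # step 4: repeat from step 2 with the new path
  end
  paths
end

graph = {"1"=>["2"], "2"=>["3", "7"], "3"=>["4"], "5"=>["6", "2"], "6"=>["7"], "7"=>["8", "4"]}
find_paths(graph, ["1"])
#=> [["1", "2"], ["1", "2", "3"], ["1", "2", "3", "4"], ["1", "2", "7"], ["1", "2", "7", "8"], ["1", "2", "7", "4"]]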
I would do it like this; I'm pretty sure this can be optimized further, and there may be performance issues too.
require 'set'

dt = {"1"=>["2"], "2"=>["3", "7"], "3"=>["4"], "5"=>["6", "2"], "6"=>["7"], "7"=>["8", "4"]}

def traverse(initial_hash, key_arrays, traversed_keys = [])
  value = []
  key_arrays.map do |key|
    n = [*traversed_keys, key]
    value << n unless n.count == 1 # prevent the first key from being added on its own
    another_keys = initial_hash.fetch(key, nil) # try to load the value, falling back to nil if the key is not found
    if another_keys.nil? # stop condition
      value << n
    elsif another_keys.is_a?(Enumerable) # key found
      another_keys.map do |ank| # recurse
        value.concat([*traverse(initial_hash, [ank], n)]) # splat operator to unpack the array
      end
    end
  end
  value.to_set # only return unique values; can be converted to an array if needed
end

traverse(dt, [1].map(&:to_s)).map { |val| val.join('') }
Sorry, I have not debugged your code, but I suspect there is an issue with the tmp variable: the path that can no longer be expanded is carried over to the next iteration.
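To make that concrete: tmp = paths[curr_paths_index] does not copy the string, so tmp and the stored path are the same String object, and tmp << child also rewrites the path already saved in paths. A small standalone illustration (not your actual code):

paths = ["12"]
tmp = paths[0]       # tmp is the very same String object as paths[0]
tmp << "3"
paths                #=> ["123"]  the stored path was modified as well

tmp = paths[0].dup   # an explicit copy breaks the aliasing
tmp << "4"
paths                #=> ["123"]  now only the copy changed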
def doit(graph, start)
  return [] unless graph.key?(start)
  recurse(graph, start).drop(1)
end

def recurse(graph, start)
  (graph[start] || []).each_with_object([[start]]) { |next_node, arr|
    recurse(graph, next_node).each { |a| arr << [start, *a] } }
end
graph = { 1=>[2], 2=>[3, 7], 3=>[4], 5=>[6, 2], 6=>[7], 7=>[8, 4] }
doit(graph, 1)
#=> [[1, 2], [1, 2, 3], [1, 2, 3, 4], [1, 2, 7], [1, 2, 7, 8],
# [1, 2, 7, 4]]
doit(graph, 2)
#=> [[2, 3], [2, 3, 4], [2, 7], [2, 7, 8], [2, 7, 4]]
doit(graph, 3)
#=> [[3, 4]]
doit(graph, 5)
#=> [[5, 6], [5, 6, 7], [5, 6, 7, 8], [5, 6, 7, 4], [5, 2],
# [5, 2, 3], [5, 2, 3, 4], [5, 2, 7], [5, 2, 7, 8], [5, 2, 7, 4]]
doit(graph, 6)
#=> [[6, 7], [6, 7, 8], [6, 7, 4]]
doit(graph, 7)
#=> [[7, 8], [7, 4]]
doit(graph, 4)
#=> []
doit(graph, 99)
#=> []
I’m getting some weird results implementing cyclic permutation on the children of a multidimensional array.
When I manually define the array e.g.
arr = [
[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]
]
the output is different from when I obtain that same array by calling a method that builds it.
I’ve compared the manual array to the generated version and they’re exactly the same (class and values, etc).
I tried writing the same algorithm in JS and encountered the same issue.
Any idea what might be going on?
def Build_array(child_arr, n)
  # Creates a larger array with child_arr as the element, n times over. For example Build_array([1,2,3], 3) returns [[1,2,3], [1,2,3], [1,2,3]]
  parent_arr = Array.new(4)
  0.upto(n) do |i|
    parent_arr[i] = child_arr
  end
  return parent_arr
end
def Cylce_child(arr, steps_tocycle)
  # example: Cylce_child([1, 2, 3, 4, 5], 2) returns [4, 5, 1, 2, 3]
  0.upto(steps_tocycle - 1) do |i|
    x = arr.pop()
    arr.unshift(x)
  end
  return arr
end
def Permute_array(parent_array, x, y, z)
  # x, y, z = number of steps to cycle each child array
  parent_array[0] = Cylce_child(parent_array[0], x)
  parent_array[1] = Cylce_child(parent_array[1], y)
  parent_array[2] = Cylce_child(parent_array[2], z)
  return parent_array
end
arr = Build_array([1, 2, 3, 4, 5], 4)
# arr = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
puts "#{Permute_array(arr, 1, 2, 3)}"
# Line 34: When arr = Build_array([1, 2, 3, 4, 5], 4)
# Result (WRONG):
# [[5, 1, 2, 3, 4], [5, 1, 2, 3, 4], [5, 1, 2, 3, 4], [5, 1, 2, 3, 4]]
#
# Line 5: When arr = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
# Result (CORRECT):
# [[5, 1, 2, 3, 4], [4, 5, 1, 2, 3], [3, 4, 5, 1, 2], [1, 2, 3, 4, 5]]
#
The problem is in the way you build the array.
This line:
parent_arr[i] = child_arr
does not put a copy of child_arr into parent_arr[i]; it stores a reference to it.
This means your initial array contains four references to the same child array. Later on, when the code changes parent_arr[0], it changes the same array that child_arr referred to in the build method, and that array is also parent_arr[1], parent_arr[2], and so on.
A simple solution to the problem is to put in parent_arr[i] a copy of child_arr:
parent_arr[i] = Array.new(child_arr)
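A quick check (purely illustrative) confirms that this really is an independent copy:

child_arr = [1, 2, 3, 4, 5]
copy = Array.new(child_arr)
copy.equal?(child_arr)   #=> false, a different object with the same contents
copy.pop
child_arr                #=> [1, 2, 3, 4, 5], the original is untouched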
I see where the bug was. I added the clone method to line 8 so that it now reads:
parent_arr[i] = child_arr.clone
#Old: parent_arr[i] = child_arr
Thanks Robin, for pointing me in the right direction.
This is a fairly common mistake to make in Ruby, since arrays do not contain objects per se but object references, which are effectively pointers to dynamically allocated objects rather than the objects themselves.
That means this code:
Array.new(4, [ ])
will yield an array containing four identical references to the same object, that object being the second argument.
To see what happens:
Array.new(4, [ ]).map(&:object_id)
# => [70127689565700, 70127689565700, 70127689565700, 70127689565700]
Notice four identical object IDs. All the more obvious if you call uniq on that.
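For example, uniq collapses the four identical IDs down to a single entry:

Array.new(4, [ ]).map(&:object_id).uniq.size
#=> 1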
To fix this you must supply a block that yields a different object each time:
Array.new(4) { [ ] }.map(&:object_id)
# => [70127689538260, 70127689538240, 70127689538220, 70127689538200]
Now adding to one element does not impact the others.
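For instance, mutating one element now leaves the others alone (rows is just an illustrative name):

rows = Array.new(4) { [ ] }
rows[0] << :x
rows  #=> [[:x], [], [], []]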
That being said, there are a number of issues in your code that can be resolved by employing Ruby as it was intended, that is, by writing more "idiomatic" code:
def build_array(child_arr, n)
  # Duplicate the given object each time to avoid referencing the same thing
  # n times. Each `dup` object is independent.
  Array.new(n) do
    child_arr.dup
  end
end

def cycle_child(arr, steps_tocycle)
  # Ruby has a rotate method built in
  arr.rotate(steps_tocycle)
end

# Using varargs (*args) you can loop over however many positions were given dynamically
def permute_array(parent_array, *args)
  # zip is great for working with two arrays in parallel; they get "zippered" together.
  # map is what you use for transforming one array into another in a 1:1 mapping.
  args.zip(parent_array).map do |a, p|
    # Rotate each element the right number of positions
    cycle_child(p, -a)
  end
end
arr = build_array([1, 2, 3, 4, 5], 4)
# => [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
puts "#{permute_array(arr, 1, 2, 3)}"
# => [[5, 1, 2, 3, 4], [4, 5, 1, 2, 3], [3, 4, 5, 1, 2]]
A lot of these methods boil down to some very simple Ruby so they're not especially useful now, but this adapts the code as directly as possible for educational purposes.
I'm trying to remove pairs of the smallest and largest elements from an Array and store them in a second one. Is there a better way to do this or a Ruby method I don't know about that could accomplish something like this?
Here's my code:
nums = [1, 2, 3, 4, 5, 6]
pairs = []

for n in nums
  pairs << [n, nums.last]
  nums.delete nums.last
  nums.delete n
end
Current result:
nums
#=> [2, 4]
pairs
#=> [[1, 6], [3, 5]]
Expected result:
nums
#=> []
pairs
#=> [[1, 6], [2, 5], [3, 4]]
Assuming nums is sorted and can be modified, I like this way because it has a mechanical feel about it:
pairs = (nums.size/2).times.map { [nums.shift, nums.pop] }
#=> [[1, 6], [2, 5], [3, 4]]
nums
#=> []
I see @Drenmi has the same idea of using shift and pop.
If you don't want to modify nums, you could of course operate on a copy.
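For example, something like this sketch (assuming nums is already sorted) leaves nums as it was:

nums = [1, 2, 3, 4, 5, 6]
tmp = nums.dup   # work on a copy; use nums.sort instead if the input might be unsorted
pairs = (tmp.size/2).times.map { [tmp.shift, tmp.pop] }
#=> [[1, 6], [2, 5], [3, 4]]
nums
#=> [1, 2, 3, 4, 5, 6]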
Enumerating over an Array while deleting its content is generally not advisable. Here's an alternative solution:
nums = *(1..6)
#=> [1, 2, 3, 4, 5, 6]
pairs = []
#=> []
until nums.size < 2 do
  pairs << [nums.shift, nums.pop]
end
pairs
#=> [[1, 6], [2, 5], [3, 4]]
I'm trying to operate on certain elements of an array while referencing their index in the block. Operating on the whole array is easy:
arr = [1, 2, 3, 4, 5, 6, 7, 8]
arr.each_with_index { |num, index| puts "#{num}, #{index}" }
But what if I want to work with just the elements 4 and 6, to return
4, 3
6, 5
I can create a new array composed of certain elements of the original and run the block on that, but then the index changes.
How can I select the elements and their index?
Just put a condition on it:
indice = [3, 5]

arr.each_with_index do |num, index|
  puts "#{num}, #{index}" if indice.include?(index)
end
This is another style:
indice = [3, 5]

arr.each_with_index do |num, index|
  next unless indice.include?(index)
  puts "#{num}, #{index}"
end
I cannot tell from the question whether you are given values in the array and want to obtain their indices, or vice versa. I therefore will suggest one method for each task. I will use this array for the examples:
arr = [1, 2, 3, 4, 5, 6, 7, 8]
Values to Indices
If you are given values:
vals = [4, 6]
you can retrieve the number-index pairs like this:
vals.map { |num| [num, arr.index(num)] }
#=> [[4, 3], [6, 5]]
or print them directly:
vals.each { |num| puts "#{num}, #{arr.index(num)}" }
# 4, 3
# 6, 5
#=> [4, 6]
If an element of vals is not present in arr:
vals = [4, 99]
vals.map { |num| [num, arr.index(num)] }
#=> [[4, 3], [99, nil]]
Indices to Values
If you are given indices:
indices = [3, 5]
you can retrieve the index-value pairs like this:
indices.zip(arr.values_at(*indices))
#=> [[3, 4], [5, 6]]
and then print in whatever format you like.
If an index is out-of-range, nil will be returned:
indices = [3, 99]
indices.zip(arr.values_at(*indices))
#=> [[3, 4], [99, nil]]
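As a variation on the answers above, you could also start from the array itself and keep each wanted element together with its index by combining each_with_index with select:

arr = [1, 2, 3, 4, 5, 6, 7, 8]
indices = [3, 5]
arr.each_with_index.select { |_num, index| indices.include?(index) }
#=> [[4, 3], [6, 5]]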
I'm generating an array of grouped indexes. The indexes are points within an array that meet my grouping requirements. For example I'm grouping indexes from a grid where things are horizontally "close" to each other. This is kind of what I'll be working with.
[[0,1,2],[3],[4,5],[5,6],[7,8],[8,9]]
I would like to merge by common indexes, so the result should look like:
[[0,1,2],[3],[4,5,6],[7,8,9]]
It feels like it should be an inject :+ on pairs if any inner items match. But I don't see the Ruby way to do this.
x.sort.inject([]) do |y, new|
  (((y.last || []) & new).length > 0) ? y[0..-2].push(y.last | new) : y.push(new)
end.map(&:sort)
Knowing Ruby, there's probably a more concise way to do this, but this should give you what you want:
foo = [[0,1,2],[3],[4,5],[5,6],[7,8],[8,9]]

foo.inject([]) { |result, element|
  if (result and existing = result.find_index { |a| !(element & [*a]).empty? })
    tmp = result[existing]
    result.delete_at(existing)
    result << (tmp | element).sort
  else
    result << element
  end
}.sort
Output:
=> [[0, 1, 2], [3], [4, 5, 6], [7, 8, 9]]
Logic:
For each element in the original array, check the newly-built array-so-far (result) for any entry which contains any of the same numbers as the next element using array intersection -- !(element & [*a]).empty? ...
if found, remove said entry from the result, union it with the new element from the original array -- (tmp | element) -- then add it back to the result
if not found, simply concatenate the element from the original array to the result
Someone might find a more compact method, but this works...
array = [[0,1,2],[3],[4,5],[5,6],[7,8],[8,9]]

(0...array.length).each do |a|
  (a+1...array.length).each do |b|
    unless array[a].to_a & array[b].to_a == []
      array[a].push(array[b]).flatten!.uniq!.sort!
      array.delete_at(b)
      b -= 1
    end
  end
end

p array
=> [[0, 1, 2], [3], [4, 5, 6], [7, 8, 9]]
The solutions so far seem overly complicated to me. I suggest this (assuming each element of arr is non-empty and contains only integers):
arr = [[0, 1, 2], [3],
[4, 5], [5, 6],
[7, 8, 9], [9, 10], [10, 11],
[12, 13], [13], [13, 14]]
arr.each_with_object([]) do |a, b|
  if b.any? && b.last.last == a.first
    b[-1] += a[1..-1]
  else
    b << a
  end
end
#=> [[0, 1, 2], [3], [4, 5, 6], [7, 8, 9, 10, 11], [12, 13, 14]]
You could alternatively do it by stepping through arr with an enumerator:
enum = arr.each
b = [enum.next]
loop do
  a = enum.next
  if b.last.last == a.first
    b[-1] += a[1..-1]
  else
    b << a
  end
end
b
How does one merge N sorted arrays (or other list-like data structures) lazily in Ruby? For example, in Python you would use heapq.merge. There must be something like this built into Ruby, right?
Here's a (slightly golfed) solution that should work on arrays of any 'list-like' collections that support #first, #shift, and #empty?. Note that it is destructive - each call to lazymerge removes one item from one collection.
def minheap a, i
  r = (l = 2*(m = i) + 1) + 1 # get l, r index
  m = l if l < a.size and a[l].first < a[m].first
  m = r if r < a.size and a[r].first < a[m].first
  (a[i], a[m] = a[m], a[i]; minheap(a, m)) if (m != i)
end

def lazymerge a
  (a.size/2).downto(1) { |i| minheap(a, i) }
  r = a[0].shift
  a[0] = a.pop if a[0].empty?
  return r
end
p arrs = [ [1,2,3], [2,4,5], [4,5,6],[3,4,5]]
v=true
puts "Extracted #{v=lazymerge (arrs)}. Arr= #{arrs.inspect}" while v
Output:
[[1, 2, 3], [2, 4, 5], [4, 5, 6], [3, 4, 5]]
Extracted 1. Arr= [[2, 3], [2, 4, 5], [4, 5, 6], [3, 4, 5]]
Extracted 2. Arr= [[3], [2, 4, 5], [4, 5, 6], [3, 4, 5]]
Extracted 2. Arr= [[4, 5], [3], [4, 5, 6], [3, 4, 5]]
Extracted 3. Arr= [[4, 5], [3, 4, 5], [4, 5, 6]]
Extracted 3. Arr= [[4, 5], [4, 5], [4, 5, 6]]
Extracted 4. Arr= [[5], [4, 5], [4, 5, 6]]
Extracted 4. Arr= [[5], [5], [4, 5, 6]]
Extracted 4. Arr= [[5, 6], [5], [5]]
Extracted 5. Arr= [[6], [5], [5]]
Extracted 5. Arr= [[5], [6]]
Extracted 5. Arr= [[6]]
Extracted 6. Arr= [[]]
Extracted . Arr= [[]]
Note that this algorithm is also lazy about maintaining the heap property: it is not maintained between calls. This probably causes it to do more work than needed, since it does a complete heapify on each subsequent call. This could be fixed by doing a complete heapify once up front, then calling minheap(a,0) before the return r line.
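A sketch of that fix, reusing minheap from above (untested here, so treat it as an outline; heapify is just an illustrative name):

# build the heap once, before the first extraction
def heapify a
  (a.size/2).downto(0) { |i| minheap(a, i) }
end

# each extraction then only needs to sift the (possibly replaced) root back down
def lazymerge a
  r = a[0].shift
  a[0] = a.pop if a[0].empty?
  minheap(a, 0)
  return r
end

heapify(arrs)         # once, up front
v = lazymerge(arrs)   # repeated calls then avoid the full heapify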
I ended up writing it myself using the data structures from the 'algorithms' gem. It wasn't as bad as I expected.
require 'algorithms'

class LazyHeapMerger
  def initialize(sorted_arrays)
    @heap = Containers::Heap.new { |x, y| (x.first <=> y.first) == -1 }
    sorted_arrays.each do |a|
      q = Containers::Queue.new(a)
      @heap.push([q.pop, q])
    end
  end

  def each
    while @heap.length > 0
      value, q = @heap.pop
      @heap.push([q.pop, q]) if q.size > 0
      yield value
    end
  end
end

m = LazyHeapMerger.new([[1, 2], [3, 5], [4]])
m.each do |o|
  puts o
end
Here's an implementation which should work on any Enumerable, even infinite ones. It returns an Enumerator.
def lazy_merge *list
  list.map!(&:enum_for) # get an enumerator for each collection
  Enumerator.new do |yielder|
    hash = list.each_with_object({}) { |enum, hash|
      begin
        hash[enum] = enum.next
      rescue StopIteration
        # skip empty enumerators
      end
    }
    loop do
      raise StopIteration if hash.empty?
      enum, value = hash.min_by { |k, v| v }
      yielder.yield value
      begin
        hash[enum] = enum.next
      rescue StopIteration
        hash.delete(enum) # remove the enumerator we have already exhausted
      end
    end
  end
end
Infinity = 1.0/0 # easy way to get infinite range
p lazy_merge([1, 3, 5, 8], (2..4), (6..Infinity), []).take(12)
#=> [1, 2, 3, 3, 4, 5, 6, 7, 8, 8, 9, 10]
No, there's nothing built in to do that. At least, nothing that springs instantly to mind. However, there was a GSoC project to implement the relevant data types a couple of years ago, which you could use.