Problem with Ruby recursive lambda call - ruby

I have the following code that correctly traverses all the nodes in a graph like so:
seen = {}
dfs = lambda do |node|
  return if seen[node]
  seen[node] = true
  $edges[node].each { |n| dfs.call n }
end
dfs.call 0
However, I would like to write it this way, which I understand is correct:
$edges[node].each &dfs
However, when I do this it appears that dfs is only being called on the first element of the list of nodes in $edges[node]. What gives?

Surprisingly enough, your problem is not in the recursion! It is actually caused by the seen collection being shared across every call made by $edges[node].each &dfs.
Let's walk through what happens: the call to dfs for the first element of $edges[node] works fine, because we know the snippet works for any single starting node. But that first call visits every reachable node, and seen is never reset before each moves on to the next element. So the very next call immediately hits the return if seen[node] guard and returns out of the proc, and the same happens for every remaining node in the list. It only looks as though dfs was never called on the rest of the nodes!
To solve your problem, isolate seen to the scope of each top-level call of dfs, which you can still do in a functional style:
dfs = lambda do |node|
  seen = []
  sub_dfs = lambda do |sub_node|
    return if seen.include? sub_node
    seen << sub_node
    $edges[sub_node].each &sub_dfs
  end
  sub_dfs.call node
end
$edges[some_node].each &dfs
Now seen is safely isolated in each call to dfs.

Another way to make recursive lambdas:
fac = lambda { |n, &context| n.zero? ? 1 : n * eval("fac.call(#{n - 1}) {}", context.binding) }
It has to be called with an empty block, though:
fac.call(2) {}   # => 2
fac.call(3) {}   # => 6
fac.call(4) {}   # => 24
context.binding is used to evaluate the code in the caller's scope, outside of the lambda itself.

Related

Stack Level Too Deep for Merge Sort in Ruby

I'm practicing merge sort in Ruby and am running into a "stack level too deep" error. Here is the code:
def sort(numbers)
  num_length = numbers.length
  if num_length <= 1
    numbers
  end
  half_of_elements = (num_length / 2).round
  left = numbers.take(half_of_elements)
  right = numbers.drop(half_of_elements)
  sorted_left = sort(left)
  sorted_right = sort(right)
  print sorted_left, sorted_right
  #merge(sorted_left, sorted_right)
end
I've commented out the merge method because I just want to see the sorted arrays, but my code keeps getting stuck and I get the error. Can anyone help me figure out what's wrong?
You never return numbers from your guard condition, so execution falls straight past it and the function ends up calling itself again with the same argument.
When you get a stack level too deep error on a recursive call, it usually means some recursive case winds up making the exact same call again. I debug those by adding a print at the start of the function. Once I see which call leads to calling itself, I walk through what happens for that case very carefully.
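For what it's worth, here is a minimal sketch of both points (the explicit return at the base case and the debugging print). It's written in Python rather than Ruby purely for illustration, and the merge step is left as a placeholder:
def sort(numbers):
    print("sort called with", numbers)    # debug print: repeated identical lines reveal the runaway call
    if len(numbers) <= 1:
        return numbers                    # explicitly return at the base case
    half = len(numbers) // 2
    left = sort(numbers[:half])
    right = sort(numbers[half:])
    return left + right                   # placeholder only, not a real merge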

Is there a good way to exit a DFS procedure?

I've learned that a recursive depth-first search procedure searches a whole tree by its depth, tracing through all possible choices.
However, I want to modify the function such that I can call a "total exit" in the middle, which will completely stop the recursion. Is there an efficient way to do this?
There are 3 ways to do this.
Return a value saying you are done, and check for it after every call.
Throw an exception and catch it at the top level.
Switch from recursion to a stack and then break the loop.
The third is the most efficient but takes the most work. The first is the clearest. The second is simple and works, but it tends to make the code more complicated and is inefficient in many languages.
A common DFS works like this:
DFS(u){
    mark[u] = true
    for each v connected to u:
        if(!mark[v]) DFS(v)
}
You can try something like this:
static bool STOP = false;
DFS(u){
    if(STOP) return;
    mark[u] = true
    for each v connected to u:
        if(!mark[v]) DFS(v)
}
Checking a static bool at the beginning of the DFS guarantees that nothing important happens in the recursive calls still sitting on the stack once you set STOP to true. It won't make the stacked calls disappear, but each of them will return immediately.
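For comparison, here is a minimal sketch of the third option (an explicit stack plus a break), written in Python; the dict-of-neighbours graph and the target test are assumptions made just for this example:
def dfs_search(graph, start, target):
    # Iterative DFS that performs a "total exit" as soon as target is found
    seen = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node == target:
            return node                   # stop the whole search immediately
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                stack.append(neighbour)
    return None                           # exhausted the graph without finding target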
coroutines
Billy's accepted answer presents a false trichotomy: there are more than three ways to do this. Coroutines are perfectly suited here because they are pausable, resumable, and cancellable:
def dfs(t):
    if not t:
        return                     # <- stop
    else:
        yield from dfs(t.left)     # <- delegate control to left branch
        yield t.value              # <- yield a value and pause
        yield from dfs(t.right)    # <- delegate control to right branch
The caller has control over the coroutine's execution:
def search(t, query):
    for val in dfs(t):
        if val == query:
            return val    # <- return stops dfs as soon as query is matched
    return None           # <- otherwise return None if dfs is exhausted
Languages that support coroutines typically provide a handful of other generic functions that make them useful in a wide variety of ways.
persistent iterators
Another option similar to coroutines is streams, or persistent iterators. See this Q&A for a concrete example.

Can someone explain this code to me intuitively?

I understand recursion and the advantages it brings to writing code efficiently. While I can write recursive functions, I cannot seem to wrap my head around how they work. I would like someone to explain recursion to me intuitively.
For example, this code:
int fact(int n)
{
    if (n < 0)
        return -1;
    else if (n == 0)
        return 1;
    else
        return n * fact(n - 1);
}
These are some of my questions:
1. Let's say n=5. On entering the function, control goes to the last return statement, since none of the previous conditions are satisfied.
Now, roughly, the computer 'writes' something like this: 5*(fact(4))
Again, the fact() function is called and the same process repeats, except now we have n=4.
So how exactly does the compiler multiply 5*4 and so on down to 2, since it's not exactly 5*4 but 5*fact(4)? How does it 'remember' that it has to multiply two integers, and where does it store the temporary value, since we haven't provided any explicit data structure?
2. Again let's say n=5. The same process goes on and eventually n gets decremented to 0. My question is: why/how doesn't the function simply return 1, as stated in that return statement? Similar to my previous question, how does the compiler 'remember' that it also has 120 stored for displaying?
I'd be really thankful if someone explains this to me completely so that can understand recursion better and intuitively.
Yeah, for beginners recursion can be quite confusing. But you are already on the right track with your explanation under "1.".
The function is called recursively until a break condition is satisfied. In this case, the break condition is satisfied when n equals 0. At that point, no more recursive calls are made. The result of each recursive call is returned to its caller, and each caller "waits" until it gets that result; that is how the algorithm "knows" who receives which result. The bookkeeping for all of this is handled by the so-called call stack.
Hence, in your informal notation (in this example n equals 3):
3*(fact(2)) = 3*(2*fact(1)) = 3*(2*(1*fact(0))).
Now, n equals 0. The inner fact(0) therefore returns 1:
3*(2*(1*(1))) = 3*(2*(1)) = 3*(2) = 6
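To make those "waiting" callers visible, here is a small, hypothetical Python sketch that prints the calls going down and the partial products coming back up:
def fact(n, depth=0):
    indent = "  " * depth
    print(indent + "fact(%d) called" % n)
    if n == 0:
        print(indent + "fact(0) returns 1")
        return 1
    result = n * fact(n - 1, depth + 1)   # this call waits here for the inner result
    print(indent + "fact(%d) returns %d" % (n, result))
    return result

fact(3)   # the calls print on the way down; the returns 1, 1, 2, 6 print on the way back up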
You can look at it a bit like this:
The function fact(int n) is like a class, and every time you call fact(int n) you create an instance of that class. By calling it from within itself, you create a chain of instances. Once the break condition is reached, those instances start returning one by one, and each returned value is used to compute a new value in the return statement return n*fact(n-1), e.g. return 3*fact(2);

Lua pathfinding code needs optimization

After working on my code for a while, optimizing the most obvious things, I've resulted in this:
function FindPath(start, finish, path)
    --Define a table to hold the paths
    local paths = {}
    --Make a default argument
    path = path or {start}
    --Loop through connected nodes
    for i,v in ipairs(start:GetConnectedParts()) do
        --Determine if backtracking
        local loop = false
        for i,vv in ipairs(path) do
            if v == vv then
                loop = true
            end
        end
        if not loop then
            --Make a path clone
            local npath = {unpack(path)}
            npath[#npath+1] = v
            if v == finish then
                --If we reach the end add the path
                return npath
            else
                --Otherwise add the shortest part extending from this node
                paths[#paths+1] = FindPath(v, finish, npath) --NOTED HERE
            end
        end
    end
    --Find and return the shortest path
    if #paths > 0 then
        local lengths = {}
        for i,v in ipairs(paths) do
            lengths[#lengths+1] = #v
        end
        local least = math.min(unpack(lengths))
        for i,v in ipairs(paths) do
            if #v == least then
                return v
            end
        end
    end
end
The problem is that the noted line triggers some sort of game script timeout error (which I believe is because of mass recursion with no yielding). I also feel like once that problem is fixed, it'll probably still be rather slow, even on the scale of a Pac-Man board. Is there a way I can further optimize it, or perhaps a better method I can look into that's similar to this?
UPDATE: I finally decided to trash my algorithm due to inefficiency, and implemented a Dijkstra algorithm for pathfinding. For anybody interested in the source code it can be found here: http://pastebin.com/Xivf9mwv
You know that Roblox provides you with the PathfindingService? It uses C-side A* pathing to calculate paths quite quickly. I'd recommend using it:
http://wiki.roblox.com/index.php?title=API:Class/PathfindingService
Try to remodel your algorithm to make use of tail calls. This is a great mechanism available in Lua.
A tail call is a type of recursion where your function returns a function call as the very last thing it does. Lua has a proper tail-call implementation and will turn this kind of recursion into a 'goto' behind the scenes, so your stack will never blow up.
Passing 'paths' as one of the arguments of FindPath might help with that.
I saw your edit about ditching the code, but just to help others stumbling on this question:
ipairs is slower than pairs, which is slower than a numeric for-loop.
If performance matters, never use ipairs; use a for i=1,#tab loop instead.
If you want to clone a table, use a for-loop. Sometimes you have to use unpack (for example, when returning a dynamic number of values, including trailing nils), but this is not such a case. unpack is also a slow function.
Replacing ipairs with pairs or numeric for-loops and using loops instead of unpack will increase the performance a lot.
If you want to get the lowest value in a table, use this code snippet:
local lowestValue = values[1]
for k,v in pairs(values) do
    if v < lowestValue then
        lowestValue = v
    end
end
This could be rewritten for your paths example like so:
local shortest = paths[1]
for k,v in pairs(paths) do
    if #v < #shortest then
        shortest = v
    end
end
I have to admit, you're very inventive. Not a lot of people would use math.min(unpack(tab)) (not counting the fact that it's bad).

subset-sum algorithm and recursion

After some research, below is a modified version of a subset_sum recursion I found on SO. The modified version attempts not only to return the exact sum if there is one, but also to return the closest set of integers if an exact sum cannot be found. Furthermore, there is a list-size requirement that determines how many numbers must be added up to form the final sum.
def findFourPlus(itemCount, seq, goal):
    goalDifference = float("inf")
    closestPartial = []
    subset_sum(itemCount, seq, goal, goalDifference, closestPartial, partial=[])
    print(closestPartial)

def subset_sum(itemCount, seq, goal, goalDifference, closestPartial, partial):
    s = sum(partial)
    # check if the partial sum is equals to target
    if(len(partial) == itemCount):
        if s == goal:
            print(partial)
        else:
            if( abs(goal - s) < goalDifference):
                goalDifference = abs(goal - s)
                closestPartial = partial
    for i in range(len(seq)):
        n = seq[i]
        remaining = seq[i+1:]
        subset_sum(itemCount, remaining, goal, goalDifference, closestPartial, partial + [n])
The problem I am facing right now is that closestPartial always ends up as an empty list, because each call to subset_sum() seems to reset closestPartial back to an empty list. I tried to move the goalDifference and closestPartial initialization outside of the subset_sum function, but then I get a local variable 'goalDifference' referenced before assignment error.
What can I do to preserve the recursive algorithm while keeping track of the closest sum so far? And is there a better way of approaching this problem?
Initialize closestPartial and goalDifference outside of the call to subset_sum and pass them in as parameters. Either maintain a single closestPartial that is passed by reference to every subset_sum call, or else pass a copy of closestPartial to each subset_sum call - the former will likely be more efficient, while the latter will be easier to implement / reason about because it will be free of side-effects.
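As a minimal sketch of the first suggestion (one shared, mutable best-result container passed to every call), here is one possible rewrite; the function and variable names are changed from the original code just for this example:
def find_closest_subset(item_count, seq, goal):
    # best[0] is the smallest difference seen so far, best[1] the matching subset;
    # since best is a single shared list, every recursive call updates the same object
    best = [float("inf"), []]
    subset_sum(item_count, seq, goal, best, [])
    return best[1]

def subset_sum(item_count, seq, goal, best, partial):
    if len(partial) == item_count:
        diff = abs(goal - sum(partial))
        if diff < best[0]:
            best[0] = diff
            best[1] = partial
        return
    for i, n in enumerate(seq):
        subset_sum(item_count, seq[i + 1:], goal, best, partial + [n])

print(find_closest_subset(3, [2, 3, 5, 8], 10))   # => [2, 3, 5], the 3-element subset closest to 10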
