I'm not sure if I should ask here or on a different Stack Exchange site, but here goes.
Basically, I'm wondering: is there a known way to find the shortest path between two values, given a number of potential transformations?
Brute-force solution/example in Python:
from itertools import permutations, groupby

start = ["A", "B", "C", "D"]
goal = ["A", "X", "C", "Y"]
Transforms = [
    (None, None, "B", "D"),
    ("F", None, None, "Y"),
    (None, "X", "C", None),
    (None, None, "G", "Y"),
    ("D", "X", None, None),
    (None, "X", None, None)
]

def apply_transform(value, transform):
    for x in range(4):
        if transform[x] is None: continue
        value[x] = transform[x]

perms = permutations(range(len(Transforms)))
results = []
for order in perms:
    value = start.copy()
    moves = 0
    for o in order:
        moves += 1
        apply_transform(value, Transforms[o])
        if value == goal:
            results.append([moves, order[0:moves]])
            break

# just printing sorted unique results in a formatted way... I'd just be
# picking the first one, not listing all the potential ones
results.sort(key=lambda x: x[0])
results = list(k for k, _ in groupby(results))
print("\n".join(f"moves {m} | {' -> '.join(str(s) for s in ms)}" for m, ms in results))
These are the results that correctly move the start to the goal:
moves 2 | 3 -> 2
moves 3 | 0 -> 3 -> 2
moves 3 | 3 -> 5 -> 2
moves 3 | 5 -> 3 -> 2
moves 4 | 0 -> 3 -> 5 -> 2
moves 4 | 0 -> 5 -> 3 -> 2
moves 4 | 5 -> 0 -> 3 -> 2
So I pick the first item in the sorted list as the lowest number of transformations (applying transformation 3 and then transformation 2).
Obviously, this exact brute-force "algorithm" can be improved by breaking out of a permutation once it has already grown longer than the lowest number of moves found so far... but is there a better solution to this problem that I'm not seeing? Some sort of graph? Permutations aren't the best for speed, but they might be the only option. Are there other small optimizations that could be applied here?
One possible optimization would be to find the transformations that have to come last, and work your way backwards.
Here, only transformations 2 and 5 can be the last one, and 5 is a subset of 2, so it can be ignored (one more optimization: ignore transformations whose effect is a subset of another transformation's). The only one that remains is 2.
Now you are looking for how to reach the state (A, *, *, Y) using the remaining transformations. Transformations 1 and 3 are the only candidates, and 3 -> 2 makes the solution.
This algorithm is a bit complicated, because it requires recursion and backtracking (if you do it the easy way, depth-first), or some queue processing (if you do it the better way, breadth-first), but it will be faster than trying all possible permutations.
Think of it as a graph. Each value is a node, and the transformations are edges. There are known algorithms for finding the shortest path in a graph, e.g. Dijkstra or A*.
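Since every transformation costs one move, the edges here are unweighted and Dijkstra degenerates into plain breadth-first search. Below is a minimal BFS sketch along those lines, reusing start, goal and Transforms from the question; the function name and structure are my own, not from the original post:

from collections import deque

def shortest_transform_path(start, goal, transforms):
    # Each reachable value is a node; applying a transformation is an edge.
    # BFS explores values in order of distance, so the first time we reach
    # the goal we have a shortest sequence of transformations.
    queue = deque([(tuple(start), [])])
    seen = {tuple(start)}
    while queue:
        value, path = queue.popleft()
        if list(value) == goal:
            return path
        for idx, t in enumerate(transforms):
            nxt = tuple(old if new is None else new for old, new in zip(value, t))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [idx]))
    return None  # goal unreachable

print(shortest_transform_path(start, goal, Transforms))  # [3, 2]

The same idea works backwards from the goal, as the previous answer suggests; the seen set is what keeps the search from revisiting states, something the permutation approach cannot avoid.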
Related
I was looking at interview problems and came across this one, but failed to find a workable solution.
The actual question was asked in a LeetCode discussion.
Given multiple school children and the paths they took from their school to their homes, find the longest most common path (paths are given in order of steps a child takes).
Example:
child1 : a -> g -> c -> b -> e
child2 : f -> g -> c -> b -> u
child3 : h -> g -> c -> b -> x
result = g -> c -> b
Note: there could be multiple children. The input was in the form of steps and child IDs; for example, the input looked like this:
(child1, a)
(child2, f)
(child1, g)
(child3, h)
(child1, c)
...
Some suggested that longest common substring could work, but it will not. For example:
1 a-b-c-d-e-f-g
2 a-b-c-x-y-f-g
3 m-n-o-p-f-g
4 m-x-o-p-f-g
Paths 1 and 2 give a-b-c, while 3 and 4 give o-p-f-g.
Combining those, the answer would be none, but the actual answer is f-g.
It looks like a graph problem: how can we find the longest common path among k paths?
You can construct a directed graph g with an edge a->b present if and only if it is present in all individual paths, then drop all nodes with degree zero.
The graph g will have no cycles. If it did, the same cycle would be present in all individual paths, and a path has no cycles by definition.
In addition, all in-degrees and out-degrees will be zero or one. For example, if a node a had in-degree greater than one, every individual path would have to contain two distinct edges into a, and therefore visit a twice, which a simple path cannot do.
The graph will look like a disconnected collection of paths. There may be multiple paths with maximum length, or there may be none (an empty path if you like).
In the Python code below, I find all common paths and return one with maximum length. I believe the whole procedure is linear in the number of input edges.
import networkx as nx

path_data = """1 a-b-c-d-e-f-g
2 a-b-c-x-y-f-g
3 m-n-o-p-f-g
4 m-x-o-p-f-g"""

paths = [line.split(" ")[1].split("-") for line in path_data.split("\n")]
num_paths = len(paths)

# graph h will include all input edges
# edge weight corresponds to the number of students
# traversing that edge
h = nx.DiGraph()
for path in paths:
    for (i, j) in zip(path, path[1:]):
        if h.has_edge(i, j):
            h[i][j]["weight"] += 1
        else:
            h.add_edge(i, j, weight=1)

# graph g will only contain edges traversed by all students
g = nx.DiGraph()
g.add_edges_from((i, j) for i, j in h.edges if h[i][j]["weight"] == num_paths)

def longest_path(g):
    # assumes g is a disjoint collection of paths
    all_paths = list()
    for node in g.nodes:
        if g.in_degree[node] == 0:
            # walk forward from each chain's start node
            path = list()
            while True:
                path.append(node)
                try:
                    node = next(iter(g[node]))
                except StopIteration:
                    break
            all_paths.append(path)
    if not all_paths:
        # handle the "empty path" case
        return []
    return max(all_paths, key=len)

print(longest_path(g))
# ['f', 'g']
Approach 1: With Graph construction
Consider this example:
1 a-b-c-d-e-f-g
2 a-b-c-x-y-f-g
3 m-n-o-p-f-g
4 m-x-o-p-f-g
Draw a directed, weighted graph. (The original answer included a drawing; I am a lazy person, so I did not draw the direction arrows, but believe they are invisibly there. Edge weight is 1 where not marked on an arrow.)
Find the longest chain in which every edge has the Maximum Edge Weight (MEW).
Here the MEW is 4, and our answer is f-g.
If a-b and b-c also had edge weight 4, then a-b-c would be the answer.
The example below, which is a case of MEW < #children, should output a-b-c.
1 a-b-c-d-e-f-g
2 a-b-c-x-y-f-g
3 m-n-o-p-f-h
4 m-x-o-p-f-i
If some kid is like me, the kid will keep roaming multiple places before reaching home. In such cases, you might see MEW > #children and the solution would become complicated. I hope all the children in our input are obedient and they go straight from school to home.
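Here is a rough Python sketch of Approach 1 under that obedient-children assumption (the max-weight edges then form disjoint simple chains); the helper name longest_mew_chain is mine, not from the original answer:

from collections import defaultdict

def longest_mew_chain(paths):
    # Build the directed weighted graph: weight = number of children on an edge.
    weight = defaultdict(int)
    for path in paths:
        for a, b in zip(path, path[1:]):
            weight[(a, b)] += 1
    mew = max(weight.values())  # Maximum Edge Weight
    # Keep only max-weight edges; under the assumption above, each node
    # has at most one max-weight successor.
    succ = dict(edge for edge, w in weight.items() if w == mew)
    starts = set(succ) - set(succ.values())  # chain entry points
    best = []
    for node in starts:
        chain = [node]
        while node in succ:
            node = succ[node]
            chain.append(node)
        best = max(best, chain, key=len)
    return best

paths = [list("abcdefg"), ["a","b","c","x","y","f","g"],
         list("mnopfg"), ["m","x","o","p","f","g"]]
print(longest_mew_chain(paths))  # ['f', 'g']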
Approach 2: Without Graph construction
If, luckily, the problem states that the longest common piece of path must be present in the paths of all the children, i.e. strictly MEW == #children, then you can solve it in an easier way. The example below should give you a clue about what to do.
Take this example:
1 a-b-c-d-e-f-g
2 a-b-c-x-y-f-g
3 m-n-o-p-f-g
4 m-x-o-p-f-g
Method 1:
Get the longest common graph for the first two: a-b-c, f-g (Result 1)
Get the longest common graph for the last two: o-p-f-g (Result 2)
Using Results 1 & 2 we get: f-g (Final Result)
Method 2:
Get the longest common graph for the first two: a-b-c, f-g (Result 1)
Take Result 1 and the next path, i.e. m-n-o-p-f-g: f-g (Result 2)
Take Result 2 and the next path, i.e. m-x-o-p-f-g: f-g (Final Result)
The beauty of the approach without graph construction is that even if kids roam the same pieces of path multiple times, we get the right solution.
If you go a step ahead, you could combine the approaches and use approach 1 as a sub-routine in approach 2.
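One way to make the pairwise reduction concrete: represent each "common graph" as a set of edges and fold an intersection across the paths, which is Method 2 in a few lines (a sketch of mine, again assuming MEW == #children):

def edge_set(path):
    return set(zip(path, path[1:]))

# Method 2 as a fold: intersect edge sets pairwise, left to right.
paths = [["a","b","c","d","e","f","g"],
         ["a","b","c","x","y","f","g"],
         ["m","n","o","p","f","g"],
         ["m","x","o","p","f","g"]]
common = edge_set(paths[0])
for p in paths[1:]:
    common &= edge_set(p)
print(sorted(common))  # [('f', 'g')]; extract chains as in Approach 1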
Expected Behaviour of the algorithm
I have two strings a and b, with a being the shorter string. I would like to find the substring of b that is most similar to a. The substring has to be of length len(a), or has to be placed at the end of b.
e.g. for the following two strings:
a = "aa"
b = "bbaba"
the possible substrings of b would be
"bb"
"ba"
"ab"
"ba"
"a"
""
The edit distance is defined as the number of insertions and deletions. Substitutions are not possible (an insertion plus a deletion has to be used instead). The similarity between the two strings is calculated according to the following equation: norm = 1 - distance / (len(a) + len(substring)).
So the substrings above would provide the following results:
"bb" -> 2 DEL + 2 INS -> 1 - 4 / 4 = 0
"ba" -> 1 DEL + 1 INS -> 1 - 2 / 4 = 0.5
"ab" -> 1 DEL + 1 INS -> 1 - 2 / 4 = 0.5
"ba" -> 1 DEL + 1 INS -> 1 - 2 / 4 = 0.5
"a" -> 1 INS -> 1 - 1 / 3 = 0.66
"" -> 2 INS -> 1 - 2 / 2 = 0
So the algorithm should return 0.66.
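For reference, here is a naive Python sketch of exactly this behaviour: an insertion/deletion-only distance plus the norm above, maximized over all length-len(a) windows and all shorter suffixes of b. The helper names are hypothetical (not FuzzyWuzzy's API), and it is quadratic per candidate, so treat it as a specification rather than a fast solution:

def indel_distance(a, b):
    # Levenshtein distance restricted to insertions and deletions
    # (equivalently, substitution cost 2); equals
    # len(a) + len(b) - 2 * len(LCS(a, b)).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, 1):
            cur[j] = prev[j - 1] if ca == cb else min(prev[j], cur[j - 1]) + 1
        prev = cur
    return prev[-1]

def best_partial_similarity(a, b):
    # Candidates: every window of len(a) in b, plus every shorter suffix
    # of b (the "gap at the end" rule from above).
    candidates = [b[i:i + len(a)] for i in range(len(b) - len(a) + 1)]
    candidates += [b[i:] for i in range(len(b) - len(a) + 1, len(b) + 1)]
    return max(1 - indel_distance(a, sub) / (len(a) + len(sub))
               for sub in candidates)

print(best_partial_similarity("aa", "bbaba"))  # 0.666...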
Different implementations
A similar ratio is implemented by the Python library FuzzyWuzzy in the form of fuzz.partial_ratio. It calculates the ratio in two steps:
searches for matching subsequences in the longer sequence using difflib.SequenceMatcher.get_matching_blocks
calculates the ratio for substrings of len(shorter_string) starting at the matching subsequences and returns the maximum ratio
This is really slow, so FuzzyWuzzy uses python-Levenshtein for this similarity calculation when it is available, which performs the same calculation based on the Levenshtein distance and is faster. However, in edge cases the calculated matching_blocks used for the ratio calculation are completely wrong (see issue 16), which makes it unsuitable as a replacement when correctness is relevant.
Current implementation
I currently use a C++ port of difflib in combination with a fast bit-parallel implementation of the Levenshtein distance with the weights insertion=1, deletion=1, and substitution=2. The current implementation can be found here:
extracting matching_blocks: matching_blocks
calculating weighted Levenshtein: weighted Levenshtein
combining them to calculate the end ratio: partial_ratio
Question
Is there a faster algorithm to calculate this kind of similarity? Requirements:
only uses insertion/deletion (or gives substitutions a weight of 2, which has a similar effect)
allows a gap at the beginning of the longer string
allows a gap at the end of the longer string, as long as the remaining substring does not become shorter than the length of the shorter string
optimally, it enforces that the substring has a similar length (when it is not at the end), so that it matches the behaviour of FuzzyWuzzy, but it would be fine if it allowed longer substrings to be matched as well: e.g. for aaba:aaa this would mean that it is allowed to use aaba as the optimal substring instead of aab
I'm searching for a data structure that can be sorted as fast as a plain list and that allows removing elements in the following way. Let's say we have a list like this:
[{2,[1]},
{6,[2,1]},
{-4,[3,2,1]},
{-2,[4,3,2,1]},
{-4,[5,4,3,2,1]},
{4,[2]},
{-6,[3,2]},
{-4,[4,3,2]},
{-6,[5,4,3,2]},
{-10,[3]},
{18,[4,3]},
{-10,[5,4,3]},
{2,[4]},
{0,[5,4]},
{-2,[5]}]
i.e. a list containing tuples (this is Erlang syntax). Each tuple contains a number and a list of the members used to compute that number. What I want to do with the list is the following. First, sort it, then take the head of the list, and finally clean the list. By "clean" I mean removing from the tail all the elements that contain members that are in the head; in other words, all the elements of the tail whose intersection with the head is not empty. For example, after sorting, the head is {18,[4,3]}. The next step is removing all the elements of the list that contain 4 or 3, i.e. the resulting list should be this one:
[{6,[2,1]},
{4,[2]},
{2,[1]},
{-2,[5]}]
The process continues by taking the new head and cleaning again until the whole list is consumed. Note that if the cleaning process preserves the order, there is no need to re-sort the list on each iteration.
The bottleneck here is the cleaning process. I would need some structure that allows me to do the cleaning faster than I currently can.
Does anyone know a structure that allows doing this efficiently, without losing the order, or at least allowing fast sorting?
Yes, you can get faster than this. Your problem is that you are representing the second tuple members as lists. Searching them is cumbersome and quite unnecessary. They are all contiguous descending runs within 5..1, so you could simply represent them as a tuple of indices!
And in fact you don't even need a list with these index tuples. Put them in a two-dimensional array right at the position given by the respective tuple, and you'll get a triangular array:
h\l|  1    2    3    4    5
---+-------------------------
 1 |  2
 2 |  6    4
 3 | -4   -6  -10
 4 | -2   -4   18    2
 5 | -4  -10  -10    0   -2
Instead of storing the data in a two-dimensional array, you might want to store them in a simple array with some index magic to account for the triangular shape (if your programming language only allows for rectangular two-dimensional arrays), but that doesn't affect complexity.
This is all the structure you need to quickly filter the "list" by simply looking things up.
Instead of sorting first and getting the head, we simply iterate once through the whole structure to find the maximum value and its indices:
max_val = 18
max = (4, 3) // the two indices
The filter is quite simple. If we don't use lists (not (any (substring `contains`) selection)) or sets (isEmpty (intersect substring selection)) but tuples, then it's just sel.high < substring.low || sel.low > substring.high. And we don't even need to iterate the whole triangular array; we can simply iterate the higher and the lower triangles:
result = []
for (i from 1 until max[1])
    for (j from i until max[1])
        result.push({array[j][i], (j,i)})
for (i from max[0] until 5)
    for (j from i until 5)
        result.push({array[j+1][i+1], (j+1,i+1)})
And you've got the elements you need:
[{ 2, (1,1)},
{ 6, (2,1)},
{ 4, (2,2)},
{-2, (5,5)}]
Now you only need to sort that and you've got your result.
Actually, the overall complexity doesn't get better with the triangular array. You still get O(n) from building the structure and finding the maximum. Whether you filter in O(n) by testing against every substring index tuple, or in O(|result|) by smart selection, doesn't matter any more; but you were specifically asking about a fast cleaning step. This still might be beneficial in practice if the data is large, or when you need to do multiple cleanings.
The only thing affecting overall complexity is to sort only the result, not the whole input.
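In Python terms, the whole cleaning step against this structure could look like the sketch below (my own reconstruction, using a dict keyed by (high, low) instead of a real triangular array; the disjointness test is the one from the answer):

def build_triangle(items):
    # items: (value, [h, h-1, ..., l]) pairs, as in the Erlang list
    return {(members[0], members[-1]): value for value, members in items}

def clean(tri):
    # Take the maximum entry, then keep only the ranges disjoint from it.
    (mh, ml), _ = max(tri.items(), key=lambda kv: kv[1])
    return {(h, l): v for (h, l), v in tri.items()
            if h < ml or l > mh}  # entirely below or entirely above the head

tri = build_triangle([(2, [1]), (6, [2, 1]), (18, [4, 3]),
                      (4, [2]), (0, [5, 4]), (-2, [5])])
print(clean(tri))  # {(1, 1): 2, (2, 1): 6, (2, 2): 4, (5, 5): -2}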
I wonder if your original data structure can be seen as an adjacency list for a directed graph? E.g.,
{2,[1]},
{6,[2,1]}
means you have these nodes and edges;
node 2 => node 1
node 6 => node 2
node 6 => node 1
So your question can be rewritten as;
If I find a node that links to nodes 4 and 3, what happens to the graph if I delete nodes 4 and 3?
One approach would be to build an adjacency matrix: an NxN bit matrix where every edge is a 1-bit. Your problem now becomes:
set every bit in the 4-row, and every bit in the 4-column, to zero.
That is, nothing links in or out of this deleted node.
As an optimisation, keep a bit array of length N. The bit is set if the node hasn't been deleted. So if nodes 1, 2, 4, and 5 are 'live' and 3 and 6 are 'deleted', the array looks like
[1,1,0,1,1,0]
Now to delete '4', you just clear the bit:
[1,1,0,0,1,0]
When you're done deleting, go through the adjacency matrix, but ignore any edge that's encoded in a row or column with 0 set.
Full example. Let's say you have
[ {2, [1,3]},
{3, [1]},
{4, [2,3]} ]
That's the adjacency matrix
1 2 3 4
1 0 0 0 0 # no entry for 1
2 1 0 1 0 # 2, [1,3]
3 1 0 0 0 # 3, [1]
4 0 1 1 0 # 4, [2,3]
and the mask
[1 1 1 1]
To delete node 2, you just alter the mask:
[1 0 1 1]
Now, to figure out the structure, use pseudocode like:
rows = []
for r in 1..4:
    if mask[r] == false:
        # this row was deleted
        continue
    targets = []
    for c in 1..4:
        if mask[c] == true && matrix[r,c]:
            # this node wasn't deleted and was there before
            targets.add(c)
    if (!targets.empty):
        rows.add({ r, targets })
Adjacency matrices can get large (NxN bits, after all), so this will only be better on small, dense matrices, not large, sparse ones.
If this isn't great, you might find that it's easier to google for graph algorithms than invent them yourself :)
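For what it's worth, here is the pseudocode above transliterated into runnable Python, using the data from the full example (my own sketch, with 1-based indexing so row/column 0 is unused):

N = 4
matrix = [[False] * (N + 1) for _ in range(N + 1)]
for node, targets in [(2, [1, 3]), (3, [1]), (4, [2, 3])]:
    for t in targets:
        matrix[node][t] = True

mask = [True] * (N + 1)
mask[2] = False  # "delete" node 2 without touching the matrix

rows = []
for r in range(1, N + 1):
    if not mask[r]:
        continue  # this row was deleted
    targets = [c for c in range(1, N + 1) if mask[c] and matrix[r][c]]
    if targets:
        rows.append((r, targets))
print(rows)  # [(3, [1]), (4, [3])]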
I came across this in the solution presented by Saurabh Kr Vats at http://www.careercup.com/question?id=14990323
He says:
# Finally, the sequence could be "rho-shaped." In this
# case, the sequence looks something like this:
#
# x_0 -> x_1 -> ... x_k -> x_{k+1} ... -> x_{k+j}
#                    ^                       |
#                    |                       |
#                    +-----------------------+
#
# That is, the sequence begins with a chain of elements that enters a cycle,
# then cycles around indefinitely. We'll denote the first element of the cycle
# that is reached in the sequence the "entry" of the cycle.
I searched online and reached cycle detection. I could see the rho shape being formed when we reach the start/end of a cycle and try to go to an element which is not adjacent to it. However, I did not understand the representation of the sequence or its usage.
It would be great if someone could explain it with an example.
It means literally in the shape of the Greek letter rho, which is "ρ". The idea is that if you map the values out as a graph, the visual representation forms this shape. You could also think of it as "d" shaped or "p" shaped. But look carefully at the font and notice that the line or stem extends slightly past the loop, while it doesn't on a rho. Rho is a better description of the shape because the loop never exits; i.e., there shouldn't be any lines leading out of the loop. That and mathematicians love Greek letters.
You have some number of values which do not repeat; these form a line or the "stem" of the "letter". The values then enter a loop or cycle, forming a circle or the "loop" of the "letter".
For example, consider the repeating decimals 7/12 (0.5833333...) and 3227/555 (5.8144144144...). If you make your sequence the digits in the number, then you can graph these out to form a rho shape. Let's look at 3227/555.
x0 = 5
x1 = 8
x2 = 1
x3 = 4
x4 = 4
x5 = 1 = x2
x6 = 4 = x3
x7 = 4 = x4
...
You can graph it like so:
5 -> 8 -> 1
         ^  \
        /    v
       4 <-- 4
You can see this forms a "ρ" shape.
The comment in the code snippet looks incomplete. In context, I think that
# x_0 -> x_1 -> ... x_k -> x_{k+1} ... -> x_{k+j}
should have been
# x_0 -> x_1 -> ... x_k -> x_{k+1} ... -> x_{k+j} = x_k
which would make j the length of the cycle and x_0 -> x_1 -> ... -> x_{k-1} the "tail" of the sequence before you get to the circle that the tail is attached to.
A nice example is provided by the 3n+1 problem. This is where you start with a seed number which is a positive integer and either divide it by 2 if it is even or multiply it by 3 and add 1 if it is odd. With seed 5 this gives the sequence
5 -> 16 -> 8 -> 4 -> 2 -> 1 -> 4 -> 2 -> 1 -> ...
which can be written like
5 -> 16 -> 8 -> 4
               ^  \
              /    v
             1 <-- 2
which sort of looks like a rho which has fallen over.
The Collatz Conjecture is that all seeds yield rho-shaped sequences which end up in the same cycle of length 3.
If you have a sequence that turns into a cycle, then at the point where the initial sequence meets the cycle there is a value that you can get to in two ways: either from the initial sequence or from around the cycle.
I don't know if this is a representative example, but suppose the array holds {1,2,3,1,0} and you start at index 0. Then you end up with 0 -> 1 -> 2 -> 3 -> 1 -> 2 -> 3 -> 1 ... and you find that f(0) = f(3) = 1.
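If it helps to see the rho in code, here is a small sketch (mine, not from the linked solution) that walks x -> f(x) from a seed and reports the lengths of the stem and the cycle:

def rho_walk(f, x0):
    # Iterate x -> f(x), recording each value until one repeats.
    seen = {}  # value -> index at which it first appeared
    x, i = x0, 0
    while x not in seen:
        seen[x] = i
        x, i = f(x), i + 1
    tail = seen[x]          # length of the stem before the cycle's entry
    return tail, i - tail   # (stem length, cycle length)

a = [1, 2, 3, 1, 0]
print(rho_walk(lambda i: a[i], 0))  # (1, 3): stem 0, cycle 1 -> 2 -> 3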
Several years ago I took an algorithms course where we were given the following problem (or one like it):
There is a building of n floors with an elevator that can only go up 2 floors at a time and down 3 floors at a time. Using dynamic programming, write a function that will compute the number of steps it takes the elevator to get from floor i to floor j.
This is obviously easy using a stateful approach: you create an array n elements long and fill it up with the values. You could even use a technically non-stateful approach that involves accumulating a result as you recursively pass it around. My question is how to do this in a non-stateful manner by using lazy evaluation and tying the knot.
I think I've devised the correct mathematical formula:
f(i) = 0 if i == j, otherwise f(i) = 1 + min(f(i+2), f(i-3)),
where i+2 and i-3 are within the allowed range of floors.
Unfortunately, I can't get it to terminate. If I put the i+2 case first and then choose an even floor, I can get it to evaluate the even floors below the target level, but that's it. I suspect that it shoots straight to the highest even floor for everything else, drops 3 levels, then repeats, forever oscillating between the top few floors.
So it's probably exploring the infinite space (or finite, but with loops) in a depth-first manner. I can't think of how to explore the space in a breadth-first fashion without using a whole lot of data structures in between that effectively mimic a stateful approach.
Although this simple problem is disappointingly difficult, I suspect that having seen a solution in 1 dimension, I might be able to make it work for a 2-dimensional variation of the problem.
EDIT: A lot of the answers tried to solve the problem in a different way. The problem itself isn't interesting to me; the question is about the method used. chaosmasttter's approach of creating a minimal function which can compare potentially infinite numbers is possibly a step in the right direction. Unfortunately, if I try to create a list representing a building with 100 floors, the result takes too long to compute, since the solutions to sub-problems are not reused.
I made an attempt to use a self-referencing data structure, but it doesn't terminate; there is some kind of infinite loop going on. I'll post my code so you can understand what I'm going for. I'll change the accepted answer if someone can actually solve the problem using dynamic programming on a self-referential data structure, using laziness to avoid computing things more than once.
levels = go [0..10]
  where
    go [] = []
    go (x:xs) = minimum
                  [ if i == 7
                      then 0
                      else 1 + levels !! i
                  | i <- filter (\n -> n >= 0 && n <= 10) [x+2, x-3] ]
                : go xs
You can see how 1 + levels !! i tries to reference the previously calculated result, and how filter (\n -> n >= 0 && n <= 10) [x+2,x-3] tries to limit the values of i to valid ones. As I said, this doesn't actually work; it simply demonstrates the method by which I want to see this problem solved. Other ways of solving it are not interesting to me.
Since you're trying to solve this in two dimensions, and for other problems than the one described, let's explore some more general solutions. We are trying to solve the shortest path problem on directed graphs.
Our representation of a graph is currently something like a -> [a], where the function returns the vertices reachable from the input. Any implementation will additionally require that we can compare to see if two vertices are the same, so we'll need Eq a.
The following graph is problematic, and introduces almost all of the difficulty in solving the problem in general:
problematic 1 = [2]
problematic 2 = [3]
problematic 3 = [2]
problematic 4 = []
When trying to reach 4 from 1, there is a cycle involving 2 and 3 that must be detected in order to determine that there is no path from 1 to 4.
Breadth-first search
If applied to the general problem for finite graphs, the algorithm Will presented has worst-case performance that is unbounded in both time and space. We can modify his solution to attack the general problem for graphs containing only finite paths and finite cycles by adding cycle detection. Both his original solution and this modification will find finite paths even in infinite graphs, but neither is able to reliably determine that there is no path between two vertices in an infinite graph.
acyclicPaths :: (Eq a) => (a -> [a]) -> a -> a -> [[a]]
acyclicPaths steps i j = map (tail . reverse) . filter ((== j) . head) $ queue
  where
    queue = [[i]] ++ gen 1 queue
    gen d _ | d <= 0 = []
    gen d (visited:t) =
        let r = filter ((flip notElem) visited) . steps . head $ visited
        in  map (:visited) r ++ gen (d + length r - 1) t
shortestPath :: (Eq a) => (a->[a]) -> a -> a -> Maybe [a]
shortestPath succs i j = listToMaybe (acyclicPaths succs i j)
Reusing the step function from Will's answer as the definition of your example problem, we could get the length of the shortest path from floor 4 to 5 of an 11 story building by fmap length $ shortestPath (step 11) 4 5. This returns Just 3.
Let's consider a finite graph with v vertices and e edges. A graph with v vertices and e edges can be described by an input of size n ~ O(v+e). The worst case graph for this algorithm has one unreachable vertex, j, and the remaining vertices and edges devoted to creating the largest number of acyclic paths starting at i. This is probably something like a clique containing all the vertices that aren't i or j, with edges from i to every other vertex that isn't j. Since a clique with k vertices has k(k-1)/2 edges, the number of vertices in a clique with e edges is O(e^(1/2)), so this graph has e ~ O(n) and v ~ O(n^(1/2)). This graph would have O((n^(1/2))!) paths to explore before determining that j is unreachable.
The memory required by this function for this case is O((n^(1/2))!), since it only requires a constant increase in the queue for each path.
The time required by this function for this case is O((n^(1/2))! * n^(1/2)). Each time it expands a path, it must check that the new node isn't already in the path, which takes O(v) ~ O(n^(1/2)) time. This could be improved to O(log (n^(1/2))) if we had Ord a and used a Set a or similar structure to store the visited vertices.
For non-finite graphs, this function will fail to terminate exactly when there is no finite path from i to j but there is a non-finite path from i to j.
Dynamic Programming
A dynamic programming solution doesn't generalize in the same way; let's explore why.
To start with, we'll adapt chaosmasttter's solution to have the same interface as our breadth-first search solution:
instance Show Natural where
    show = show . toNum

infinity = Next infinity

shortestPath' :: (Eq a) => (a -> [a]) -> a -> a -> Natural
shortestPath' steps i j = go i
  where
    go i | i == j    = Zero
         | otherwise = Next . foldr minimal infinity . map go . steps $ i
This works nicely for the elevator problem: shortestPath' (step 11) 4 5 is 3. Unfortunately, for our problematic graph, shortestPath' problematic 1 4 overflows the stack. If we add a bit more code for Natural numbers:
fromInt :: Int -> Natural
fromInt x = (iterate Next Zero) !! x

instance Eq Natural where
    Zero     == Zero     = True
    (Next a) == (Next b) = a == b
    _        == _        = False

instance Ord Natural where
    compare Zero     Zero     = EQ
    compare Zero     _        = LT
    compare _        Zero     = GT
    compare (Next a) (Next b) = compare a b
we can ask if the shortest path is shorter than some upper bound. In my opinion, this really shows off what's happening with lazy evaluation: shortestPath' problematic 1 4 < fromInt 100 is False and shortestPath' problematic 1 4 > fromInt 100 is True.
Next, to explore dynamic programming, we'll need to introduce some dynamic programming. Since we will build a table of the solutions to all of the sub-problems, we will need to know the possible values that the vertices can take. This gives us a slightly different interface:
shortestPath'' :: (Ix a) => (a -> [a]) -> (a, a) -> a -> a -> Natural
shortestPath'' steps bounds i j = go i
  where
    go i = lookupTable ! i
    lookupTable = buildTable bounds go2
    go2 i | i == j    = Zero
          | otherwise = Next . foldr minimal infinity . map go . steps $ i

-- A utility function that makes memoizing things easier
buildTable :: (Ix i) => (i, i) -> (i -> e) -> Array i e
buildTable bounds f = array bounds . map (\x -> (x, f x)) $ range bounds
We can use this like shortestPath'' (step 11) (1,11) 4 5 or shortestPath'' problematic (1,4) 1 4 < fromInt 100. This still can't detect cycles...
Dynamic programming and cycle detection
The cycle detection is problematic for dynamic programming, because the sub-problems aren't the same when they are approached from different paths. Consider a variant of our problematic problem.
problematic' 1 = [2, 3]
problematic' 2 = [3]
problematic' 3 = [2]
problematic' 4 = []
If we are trying to get from 1 to 4, we have two options:
go to 2 and take the shortest path from 2 to 4
go to 3 and take the shortest path from 3 to 4
If we choose to explore 2, we will be faced with the following option:
go to 3 and take the shortest path from 3 to 4
We want to combine the two explorations of the shortest path from 3 to 4 into the same entry in the table. If we want to avoid cycles, though, the sub-problems are really slightly more subtle. The problems we actually faced were:
go to 2 and take the shortest path from 2 to 4 that doesn't visit 1
go to 3 and take the shortest path from 3 to 4 that doesn't visit 1
After choosing 2
go to 3 and take the shortest path from 3 to 4 that doesn't visit 1 or 2
These two questions about how to get from 3 to 4 have two slightly different answers. They are two different sub-problems which can't fit in the same spot in a table. Answering the first question eventually requires determining that you can't get to 4 from 2. Answering the second question is straightforward.
We could make a bunch of tables, one for each possible set of previously visited vertices, but that doesn't sound very efficient. I've almost convinced myself that we can't do reachability as a dynamic programming problem using only laziness.
Breadth-first search redux
While working on a dynamic programming solution with reachability or cycle detection, I realized that once we have seen a node among the options, no later path visiting that node can ever be optimal, whether or not we follow that node. If we reconsider problematic':
If we are trying to get from 1 to 4, we have two options:
go to 2 and take the shortest path from 2 to 4 without visiting 1, 2, or 3
go to 3 and take the shortest path from 3 to 4 without visiting 1, 2, or 3
This gives us an algorithm to find the length of the shortest path quite easily:
-- Vertices first reachable in each generation
generations :: (Ord a) => (a -> [a]) -> a -> [Set.Set a]
generations steps i = takeWhile (not . Set.null) $
                      Set.singleton i : go (Set.singleton i) (Set.singleton i)
  where
    go seen previouslyNovel =
        let reachable = Set.fromList (Set.toList previouslyNovel >>= steps)
            novel     = reachable `Set.difference` seen
            nowSeen   = reachable `Set.union` seen
        in  novel : go nowSeen novel

lengthShortestPath :: (Ord a) => (a -> [a]) -> a -> a -> Maybe Int
lengthShortestPath steps i j = findIndex (Set.member j) $ generations steps i
As expected, lengthShortestPath (step 11) 4 5 is Just 3 and lengthShortestPath problematic 1 4 is Nothing.
In the worst case, generations requires space that is O(v*log v), and time that is O(v*e*log v).
The problem is that min needs to fully evaluate both calls to f, so if one of them loops infinitely, min will never return.
So you have to create a new type, encoding whether the number returned by f is Zero or the Successor of another such number.
data Natural = Next Natural
             | Zero

toNum :: Num n => Natural -> n
toNum Zero     = 0
toNum (Next n) = 1 + toNum n

minimal :: Natural -> Natural -> Natural
minimal Zero     _        = Zero
minimal _        Zero     = Zero
minimal (Next a) (Next b) = Next $ minimal a b

f i j | i == j    = Zero
      | otherwise = Next $ minimal (f l j) (f r j)
  where l = i + 2
        r = i - 3
This code actually works.
Standing on floor i of an n-story building, find the minimal number of steps it takes to get to floor j, where
step n i = [i-3 | i-3 > 0] ++ [i+2 | i+2 <= n]
Thus we have a tree. We need to search it in breadth-first fashion until we get a node holding the value j; its depth is the number of steps. We build a queue, carrying the depth levels:
solution n i j = case dropWhile ((/= j) . snd) queue of
                   []        -> Nothing
                   ((k,_):_) -> Just k
  where
    queue = [(0,i)] ++ gen 1 queue
The function gen d p takes its input p from d notches back from its production point along the output queue:
    gen d _ | d <= 0 = []
    gen d ((k,i1):t) = let r = step n i1
                       in  map (k+1 ,) r ++ gen (d + length r - 1) t
This uses TupleSections. There's no knot tying here, just corecursion, i.e. (optimistic) forward production and frugal exploration. It works fine without knot tying because we only look for the first solution. If we were searching for several of them, then we'd need to eliminate the cycles somehow.
see also: https://en.wikipedia.org/wiki/Corecursion#Discussion
With the cycle detection:
solutionCD1 n i j = case dropWhile ((/= j) . snd) queue of
                      []        -> Nothing
                      ((k,_):_) -> Just k
  where
    step n i visited = [i2 | let i2 = i-3, not $ elem i2 visited, i2 > 0]
                    ++ [i2 | let i2 = i+2, not $ elem i2 visited, i2 <= n]
    queue = [(0,i)] ++ gen 1 queue [i]
    gen d _ _ | d <= 0 = []
    gen d ((k,i1):t) visited = let r = step n i1 visited
                               in  map (k+1 ,) r ++
                                   gen (d + length r - 1) t (r ++ visited)
e.g. solutionCD1 100 100 7 runs instantly, producing Just 31. The visited list is pretty much a copy of the instantiated prefix of the queue itself. It could be maintained as a Map to improve the time complexity (as it is, solutionCD1 10000 10000 7 => Just 3331 takes 1.27 secs on Ideone).
Some explanations seem to be in order.
First, there's nothing 2D about your problem, because the target floor j is fixed.
What you seem to want is memoization, as your latest edit indicates. Memoization is useful for recursive solutions; your function is indeed recursive: analyzing its argument into sub-cases, synthesizing its result from the results of calling itself on sub-cases (here, i+2 and i-3) which are closer to the base case (here, i == j).
Because arithmetic is strict, your formula diverges in the presence of any infinite path in the tree of steps (going from floor to floor). The answer by chaosmasttter, by using lazy arithmetic instead, turns it automagically into a breadth-first search algorithm, which diverges only if there are no finite paths in the tree, exactly like my first solution above (save for the fact that it doesn't check for out-of-bounds indices). But it is still recursive, so memoization is indeed called for.
The usual way to approach this first is to introduce sharing by "going through a list" (inefficient, because of sequential access; for efficient memoization solutions, see Hackage):
f n i j = g i
  where
    gs = map g [0..n]                  -- floors 1,...,n (0 is unused)
    g i | i == j    = Zero
        | r > n     = Next (gs !! l)   -- assuming there's enough floors in the building
        | l < 1     = Next (gs !! r)
        | otherwise = Next $ minimal (gs !! l) (gs !! r)
      where r = i + 2
            l = i - 3
not tested.
My solution is corecursive. It needs no memoization (it just needs to be careful about duplicates), because it is generative, as dynamic programming is too. It proceeds away from its starting case, i.e. the starting floor. An external accessor chooses the appropriate generated result.
It does tie a knot - it defines queue by using it - queue is on both sides of the equation. I consider it the simpler case of knot tying, because it is just about accessing the previously generated values, in disguise.
The knot tying of the 2nd kind, the more complicated one, is usually about putting some yet-undefined value into some data structure and returning it, to be defined by some later portion of the code (like e.g. a back-link pointer in a doubly-linked circular list); this is indeed not what my code1 is doing. What it does do is generate a queue, adding at its end and "removing" from its front; in the end it's just the difference-list technique of Prolog, the open-ended list with its end pointer maintained and updated, the top-down list building of tail recursion modulo cons: all the same things conceptually. First described (though not named) in 1974, AFAIK.
1 Based entirely on the code from Wikipedia.
Others have answered your direct question about dynamic programming. However, for this kind of problem I think the greedy approach works best. Its implementation is very straightforward:
f :: Int -> Int -> Int
f i j = snd $ until (\(i, _) -> i == j)
                    (\(i, x) -> (i + if i < j then 2 else (-3), x + 1))
                    (i, 0)