Detecting cycles in incremental graph - algorithm

Given that I have an initially empty graph, to which will, incrementally, be added edges (one by one), what would be the best way to detect and identify appearing cycles?
Would I have to check for cycles in the entire graph every time a new edge is added? This approach doesn't take advantage of computations already made. Is there an algorithm that I still haven't found?
Thanks.

You can use the Quick Union Find algorithm here to 1st check whether the two ends of the new edge are already connected.
EDIT:
As noted in the comment, this solution works only for an undirected graph. For a directed graph, following link may help https://cs.stackexchange.com/questions/7360/directed-union-find.

implemented the algo that #Abhishek linked in python ..
q=QFUF(10)
In [131]: q.is_cycle([(0,1),(1,3),(2,3),(0,3)])
Out[131]: True
In [132]: q.is_cycle([0,1,4,3,1])
Out[132]: True
only cares about first two elements of the tuple
In [134]: q.is_cycle([(0,1,0.3),(1,3,0.7),(2,3,0.4),(0,3,0.2)])
Out[134]: True
here
import numpy as np
class QFUF:
""" Detect cycles using Union-Find algorithm """
def __init__(self, n):
self.n = n
self.reset()
def reset(self): self.ids = np.arange(self.n)
def find(self, a): return self.ids[a]
def union(self, a, b):
#assign cause can be updated in the for loop
aid = self.ids[a]
bid = self.ids[b]
if aid == bid : return
for x in range(self.n) :
if self.ids[x] == aid : self.ids[x] = bid
#given next ~link/pair check if it forms a cycle
def step(self, a, b):
# print(f'{a} => {b}')
if self.find(a) == self.find(b) : return True
self.union(a, b)
return False
def is_cycle(self, seq):
self.reset()
#if not seq of pairs, make them
if not isinstance(seq[0], tuple) :
seq = zip(seq[:-1], seq[1:])
for tpl in seq :
a,b = tpl[0], tpl[1]
if self.step(a, b) : return True
return False

Related

Algorithm to group items in groups of 3

I am trying to solve a problem where I have pairs like:
A C
B F
A D
D C
F E
E B
A B
B C
E D
F D
and I need to group them in groups of 3 where I must have a triangule of matching from that list. Basically I need a result if its possible or not to group a collection.
So the possible groups are (ACD and BFE), or (ABC and DEF) and this collection is groupable since all letters can be grouped in groups of 3 and no one is left out.
I made a script where I can achieve this for small ammounts of input but for big ammounts it gets too slow.
My logic is:
make nested loop to find first match (looping untill I find a match)
> remove 3 elements from the collection
> run again
and I do this until I am out of letters. Since there can be different combinations I run this multiple times starting on different letters until I find a match.
I can understand that this gives me loops in order at least N^N and can get too slow. Is there a better logic for such problems? can a binary tree be used here?
This problem can be modeled as a graph Clique cover problem. Every letter is a node and every pair is an edge and you want to partition the graph into vertex-disjoint cliques of size 3 (triangles). If you want the partitioning to be of minimum cardinality then you want a minimum clique cover.
Actually this would be a k-clique cover problem, because in the clique cover problem you can have cliques of arbitrary/different sizes.
As Alberto Rivelli already stated, this problem is reducible to the Clique Cover problem, which is NP-hard.
It is also reducible to the problem of finding a clique of particular/maximum size. Maybe there are others, not NP-hard problems to which your particular case could be reduced to, but I didn't think of any.
However, there do exist algorithms which can find the solution in polynomial time, although not always for worst cases. One of them is Bron–Kerbosch algorithm, which is known by far to be the most efficient algorithm for finding the maximum clique and can find a clique in the worst case of O(3^(n/3)). I don't know the size of your inputs, but I hope it will be sufficient for your problem.
Here is the code in Python, ready to go:
#!/usr/bin/python3
# #by DeFazer
# Solution to:
# stackoverflow.com/questions/40193648/algorithm-to-group-items-in-groups-of-3
# Input:
# N P - number of vertices and number of pairs
# P pairs, 1 pair per line
# Output:
# "YES" and groups themselves if grouping is possible, and "NO" otherwise
# Input example:
# 6 10
# 1 3
# 2 6
# 1 4
# 4 3
# 6 5
# 5 2
# 1 2
# 2 3
# 5 4
# 6 4
# Output example:
# YES
# 1-2-3
# 4-5-6
# Output commentary:
# There are 2 possible coverages: 1-2-3*4-5-6 and 2-5-6*1-3-4.
# If required, it can be easily modified to return all possible groupings rather than just one.
# Algorithm:
# 1) List *all* existing triangles (1-2-3, 1-3-4, 2-5-6...)
# 2) Build a graph where vertices represent triangles and edges connect these triangles with no common... vertices. Sorry for ambiguity. :)
# 3) Use [this](en.wikipedia.org/wiki/Bron–Kerbosch_algorithm) algorithm (slightly modified) to find a clique of size N/3.
# The grouping is possible if such clique exists.
N, P = map(int, input().split())
assert (N%3 == 0) and (N>0)
cliquelength = N//3
pairs = {} # {a:{b, d, c}, b:{a, c, f}, c:{a, b}...}
# Get input
# [(0, 1), (1, 3), (3, 2)...]
##pairlist = list(map(lambda ab: tuple(map(lambda a: int(a)-1, ab)), (input().split() for pair in range(P))))
pairlist=[]
for pair in range(P):
a, b = map(int, input().split())
if a>b:
b, a = a, b
a, b = a-1, b-1
pairlist.append((a, b))
pairlist.sort()
for pair in pairlist:
a, b = pair
if a not in pairs:
pairs[a] = set()
pairs[a].add(b)
# Make list of triangles
triangles = []
for a in range(N-2):
for b in pairs.get(a, []):
for c in pairs.get(b, []):
if c in pairs[a]:
triangles.append((a, b, c))
break
def no_mutual_elements(sortedtupleA, sortedtupleB):
# Utility function
# TODO: if too slow, can be improved to O(n) since tuples are sorted. However, there are only 9 comparsions in case of triangles.
return all((a not in sortedtupleB) for a in sortedtupleA)
# Make a graph out of that list
tgraph = [] # if a<b and (b in tgraph[a]), then triangles[a] has no common elements with triangles[b]
T = len(triangles)
for t1 in range(T):
s = set()
for t2 in range(t1+1, T):
if no_mutual_elements(triangles[t1], triangles[t2]):
s.add(t2)
tgraph.append(s)
def connected(a, b):
if a > b:
b, a = a, b
return (b in tgraph[a])
# Finally, the magic algorithm!
CSUB = set()
def extend(CAND:set, NOT:set) -> bool:
# while CAND is not empty and there is no vertex in NOT connected to *all* vertexes in CAND
while CAND and all((any(not connected(n, c) for c in CAND)) for n in NOT):
v = CAND.pop()
CSUB.add(v)
newCAND = {c for c in CAND if connected(c, v)}
newNOT = {n for n in NOT if connected(n, v)}
if (not newCAND) and (not newNOT) and (len(CSUB)==cliquelength): # the last condition is the algorithm modification
return True
elif extend(newCAND, newNOT):
return True
else:
CSUB.remove(v)
NOT.add(v)
if extend(set(range(T)), set()):
print("YES")
# If the clique itself is not needed, it's enough to remove the following 2 lines
for a, b, c in [triangles[c] for c in CSUB]:
print("{}-{}-{}".format(a+1, b+1, c+1))
else:
print("NO")
If this solution is still too slow, perphaps it may be more efficient to solve the Clique Cover problem instead. If that's the case, I can try to find a proper algorithm for it.
Hope that helps!
Well i have implemented the job in JS where I feel most confident. I also tried with 100000 edges which are randomly selected from 26 letters. Provided that they are all unique and not a point such as ["A",A"] it resolves in like 90~500 msecs. The most convoluted part was to obtain the nonidentical groups, those without just the order of the triangles changing. For the given edges data it resolves within 1 msecs.
As a summary the first reduce stage finds the triangles and the second reduce stage groups the disconnected ones.
function getDisconnectedTriangles(edges){
return edges.reduce(function(p,e,i,a){
var ce = a.slice(i+1)
.filter(f => f.some(n => e.includes(n))), // connected edges
re = []; // resulting edges
if (ce.length > 1){
re = ce.reduce(function(r,v,j,b){
var xv = v.find(n => e.indexOf(n) === -1), // find the external vertex
xe = b.slice(j+1) // find the external edges
.filter(f => f.indexOf(xv) !== -1 );
return xe.length ? (r.push([...new Set(e.concat(v,xe[0]))]),r) : r;
},[]);
}
return re.length ? p.concat(re) : p;
},[])
.reduce((s,t,i,a) => t.used ? s
: (s.push(a.map((_,j) => a[(i+j)%a.length])
.reduce((p,c,k) => k-1 ? p.every(t => t.every(n => c.every(v => n !== v))) ? (c.used = true, p.push(c),p) : p
: [p].every(t => t.every(n => c.every(v => n !== v))) ? (c.used = true, [p,c]) : [p])),s)
,[]);
}
var edges = [["A","C"],["B","F"],["A","D"],["D","C"],["F","E"],["E","B"],["A","B"],["B","C"],["E","D"],["F","D"]],
ps = 0,
pe = 0,
result = [];
ps = performance.now();
result = getDisconnectedTriangles(edges);
pe = performance.now();
console.log("Disconnected triangles are calculated in",pe-ps, "msecs and the result is:");
console.log(result);
You may generate random edges in different lengths and play with the code here

Group a range of integers such as no pair of numbers is shared by two or more groups

You are given two numbers, N and G. Your goal is to split a range of integers [1..N] in equal groups of G numbers each. Each pair of numbers must be placed in exactly one group. Order does not matter.
For example, given N=9 and G=3, I could get these 12 groups:
1-2-3
1-4-5
1-6-7
1-8-9
2-4-6
2-5-8
2-7-9
3-4-9
3-5-7
3-6-8
4-7-8
5-6-9
As you can see, each possible pair of numbers from 1 to 9 is found in exactly one group. I should also mention that such grouping cannot be done for every possible combination of N and G.
I believe that this problem can be modelled best with hypergraphs: numbers are vertexes, hyperedges are groups, each hyperedge must connect exactly $G$ vertexes and no pair of vertexes can be shared by any two hyperedges.
At first, I've tried to bruteforce this problem - recursively pick valid vertexes until either running out of vertexes or finding a solution. It was way too slow, so I've started to look for ways to cut off some definitely wrong groups. If a lesser set of groups was found to be invalid, then we can predict any other set of groups which includes that one to be invalid too.
Here is the code I have so far (I hope that lack of comments is not a big concern):
#!/usr/bin/python3
# Input format:
# vertexes group_size
# Example:
# 9 3
from collections import deque
import itertools
def log(frmt, *args, **kwargs):
'Lovely logging subroutine'
if len(args)==len(kwargs)==0:
print(frmt, file=stderr)
else:
print(frmt.format(*args, **kwargs), file=stderr)
v, g = map(int, input().split())
linkcount = (v*(v-1)) // 2
if (linkcount % g) != 0:
print("INVALID GROUP SIZE")
exit
groupcount = linkcount // g
def pairs(it):
return itertools.combinations(it, 2)
# --- Vertex connections routines ---
connections = [[False for dst in range(v)] for src in range(v)]
#TODO: optimize matrix to eat up less space for large graphs
#...that is, when the freaking SLOWNESS is fixed for graph size to make any difference. >_<
def connected(a, b):
if a==b:
return True
if a>b:
a, b = b, a
# assert a<b
return connections[a][b]
def setconnect(a, b, value):
if a==b:
return False
if a>b:
a, b = b, a
# assert a<b
connections[a][b] = value
def connect(*vs):
for v1, v2 in pairs(vs):
setconnect(v1, v2, True)
def disconnect(*vs):
for v1, v2 in pairs(vs):
setconnect(v1, v2, False)
# --
# --- Failure prediction routines ---
failgroups = {}
def addFailure(groupId):
'Mark current group set as unsuccessful'
cnode = failgroups
sgroups = sorted(groups[:groupId+1])
for gp in groups:
if gp not in cnode:
cnode[gp]={}
cnode=cnode[gp]
cnode['!'] = True # Aka "end of node"
def findInSubtree(node, string, stringptr):
if stringptr>=len(string):
return False
c = string[stringptr]
if c in node:
if '!' in node[c]:
return True
else:
return findInSubtree(node[c], string, stringptr+1)
else:
return findInSubtree(node, string, stringptr+1)
def predictFailure(groupId) -> bool:
'Predict if the current group set will be unsuccessful'
sgroups = sorted(groups[:groupId+1])
return findInSubtree(failgroups, sgroups, 0)
# --
groups = [None for grp in range(groupcount)]
def debug_format_groups():
return ' '.join(('-'.join((str(i+1)) for i in group) if group else '?') for group in groups) # fluffy formatting for debugging
def try_group(groupId):
for cg in itertools.combinations(range(v), g):
groups[groupId]=cg
# Predict whether or not this group will be unsuccessful
if predictFailure(groupId):
continue
# Verify that all vertexes are unconnected
if any(connected(v1,v2) for v1,v2 in pairs(cg)):
continue
# Connect all vertexes
connect(*cg)
if groupId==groupcount-1:
return True # Last group is successful! Yupee!
elif try_group(groupId+1):
# Next group was successful, so -
return True
# Disconnect these vertexes
disconnect(*cg)
# Mark this group set as unsuccessful
addFailure(groupId)
else:
groups[groupId]=None
return False
result = try_group(0)
if result:
formatted_groups = sorted(['-'.join(str(i+1) for i in group) for group in groups])
for f in formatted_groups:
print(f)
else:
print("NO SOLUTION")
Is this an NP-complete problem? Can it be generalized as another, well-known problem, and if so - as which?
P.S. That's not a homework or contest task, if anything.
P.P.S. Sorry for my bad English, it's not my native language. I have no objections if someone edited my question for more clear phrasing. Thanks! ^^
In the meantime, I'm ready to clarify all confusing moments here.
UPDATE:
I've been thinking about it and realized than there is a better way than backtracking. First, we could build a graph where vertices represent all possible groups and edges connect all groups without common pairs of numbers. Then, each clique of power (N(N-1)/2G) would represent a solution! Unfortunately, clique problem is an NP-complete task, and generating all possible binom(N, G) groups would eat up much memory on large values of N and G. Is it possible to find a better solution?

Simple linear equation with binary search? [duplicate]

This question already has answers here:
Solving a linear equation
(11 answers)
Closed 9 years ago.
Is there a way to solve a simple linear equation like
-x+3 = x+5
Using binary search? or any other numerical method?
BACKGROUND:
My question comes because I want to solve equations like "2x+5-(3x+2)=x+5" Possible operators are: *, -, + and brackets.
I thought first of converting it to infix notation both sides of the equation, and then performing some kind of binary search.
What do you think of this approach? I'm supposed to solve this in less than 40 min in an interview.
It is not hard to write a simple parser that solves $-x+3 -(x+5) = 0$ or any other similar expression algebraically to $a*x + b = 0$ for cumulated constants $a$ and $b$. Then, one could easily compute the exact solution to be $x = -b/a$.
If you really want a numerical approach, observe that both sides describe their own linear function graph, i.e., $y_l = -x_l+3$ on the left an $y_r = x_r + 5$ on the right. Thus, finding a solution to this equation is the same as finding an intersection point of both functions. Therefore you can start with any value $x=x_l=x_r$ and evaluate both sides to get the corresponding left and right $y$-values $y_l$ and $y_r$. If their difference is $0$, then you found a solution (either the unique intersection point by luck, or both lines are equal as in $2x = 2x$). Otherwise, check, e.g., position $x+1$. If the new difference $y_l - y_r$ is unchanged to before, both lines are parallel (for example $2x = 2x + 7$). Otherwise the difference has gone farer away or nearer towards 0 (from positive or negative side). So, now you have all that you need to numerically test further points $x$ (e.g., in a binary search fashion if you at first look for some $x$ that achieves a positive $y$-difference and another $x$ that achieves a negative $y$-difference and then run binary search between them) to approximate the $x$-value for which the difference $y_l - y_r$ is $0$. (Of course, you could alternatively compute the solution algebraically again, since evaluating the lines at two positions gives you all information that you need to compute the intersection point exactly).
Thus, the numerical approach is quite absurd here, but it motivates this algorithmic way of thinking.
Do you really need to solve it with a numerical approach? I'm pretty sure you can, but it's not so hard to parse the expression to solve it analytically. I mean, if it is indeed a linear equation, it's just a matter to discover what is the coeficient of x and the free term when the equation is reduced. In the 26 minutes of this question, I made a simple parser to do that, by hand:
import re, sys, json
TOKENS = {
'FREE': '[0-9]+',
'XTERM': '[0-9]*x',
'ADD': '\+',
'SUB': '-',
'POW': '\^',
'MUL': '\*',
'EQL': '=',
'LPAREN': '\(',
'RPAREN': '\)',
'EOF': '$'
}
class Token:
EOF = lambda p: Token('EOF', '', p)
def __init__(self, name, raw, position):
self.name = name
self.image = raw.strip()
self.raw = raw
self.position = position
class Expr:
def __init__(self, x, c):
self.x = x
self.c = c
def add(self, e):
return Expr(self.x + e.x, self.c + e.c)
def sub(self, e):
return Expr(self.x - e.x, self.c - e.c)
def mul(self, e):
return Expr(self.x * e.c + e.x * self.c, self.c * e.c)
def neg(self):
return Expr(-self.x, -self.c)
class Scanner:
def __init__(self, expr):
self.expr = expr
self.position = 0
def match(self, name):
match = re.match('^\s*'+TOKENS[name], self.expr[self.position:])
return Token(name, match.group(), self.position) if match else None
def peek(self, *allowed):
for match in map(self.match, allowed):
if match: return match
def next(self, *allowed):
token = self.peek(*TOKENS)
self.position += len(token.raw)
return token
def maybe(self, *allowed):
if self.peek(*allowed):
return self.next(*allowed)
def following(self, value, *allowed):
self.next(*allowed)
return value
def expect(self, **actions):
token = self.next(*actions.keys())
return actions[token.name](token)
def evaluate(expr, variables={}):
tokens = Scanner(expr)
def Binary(higher, **ops):
e = higher()
while tokens.peek(*ops):
e = ops[tokens.next(*ops).name](e, higher())
return e
def Equation():
left = Add()
tokens.next('EQL')
right = Add()
return left.sub(right)
def Add(): return Binary(Mul, ADD=Expr.add, SUB=Expr.sub)
def Mul(): return Binary(Neg, MUL=Expr.mul)
def Neg():
return Neg().neg() if tokens.maybe('SUB') else Primary()
def Primary():
return tokens.expect(
FREE = lambda x: Expr(0, float(x.image)),
XTERM = lambda x: Expr(float(x.image[:-1] or 1), 0),
LPAREN = lambda x: tokens.following(Add(), 'RPAREN'))
expr = tokens.following(Equation(), 'EOF')
return -expr.c / float(expr.x)
print evaluate('2+2 = x')
print evaluate('-x+3 = x+5')
print evaluate('2x+5-(3x+2)=x+5')
First, your question must be related to Solving Binary Tree. A method that you can use is to construct a binary try putting the root the operator with highest priority, following lower priority operators and operations are leaf nodes. You can learn about this method in solving equation.

Is there any idiom to get the identity element (0,1) for an operation (:+,:*) on a Ruby object?

Given a certain object that respond_to? :+ I would like to know what it's the identity element for that operation on that object. For example, if a is Fixnum then it should give 0 for operation :+ because a + 0 == a for any Fixnum. Of course I already know the identity element for :+ and :* when talking about Fixnums, but is there any standard pattern/idiom to obtain those dynamically for all Numeric types and operations?.
More specifically I have write some code (see below) to calculate shortest path between v1 and v2 (vertexes in a graph) where the cost/distance/weigh of each edge in the graph is given in a user-specified type. In the current implementation the cost/weight of the edges could be a Fixnum, a Float or anything that implements Comparable and can add 0 to itself and return self.
But I was wondering what is the best pattern:
requiring that type used must support a + 0 == a
requiring that type provide some kind of addition identity element discovery 'a.class::ADDITION_IDENTITY_ELEMENT
??
My Dijkstra algorithm implementation
def s_path(v1,v2)
dist = Hash.new { nil}
pred = {}
dist[v1] = 0 # distance from v1 to v1 is zero
#pq = nodes
pq = [v1]
while u = pq.shift
for edge in from(u)
u,v,cost = *edge
new_dist = cost + dist[u]
if dist[v].nil? or new_dist < dist[v]
dist[v] = new_dist
pred[v] = u
pq << v
end
end
end
path = [v2]
path << pred[path.last] while pred[path.last]
path.reverse
end
I think the a.class::ADDITION_IDENTITY_ELEMENT is pretty good except I would call it a.class::Zero.
Another option would be to do (a-a).
Personally I wouldn't try to make things so abstract and I would just require that every distance be a Numeric (e.g. Float or Integer). Then you can just keep using 0.

How to easily know if a maze has a road from start to goal?

I implemented a maze using 0,1 array. The entry and goal is fixed in the maze. Entry always be 0,0 point of the maze. Goal always be m-1,n-1 point of the maze. I'm using breadth-first search algorithm for now, but the speed is not good enough. Especially for large maze (100*100 or so). Could someone help me on this algorithm?
Here is my solution:
queue = []
position = start_node
mark_tried(position)
queue << position
while(!queue.empty?)
p = queue.shift #pop the first element
return true if maze.goal?(p)
left = p.left
visit(queue,left) if can_visit?(maze,left)
right = p.right
visit(queue,right) if can_visit?(maze,right)
up = p.up
visit(queue,up) if can_visit?(maze,up)
down = p.down
visit(queue,down) if can_visit?(maze,down)
end
return false
the can_visit? method check whether the node is inside the maze, whether the node is visited, whether the node is blocked.
worst answer possible.
1) go front until you cant move
2) turn left
3) rinse and repeat.
if you make it out , there is an end.
A better solution.
Traverse through your maze keeping 2 lists for open and closed nodes. Use the famous A-Star algorithm
to choose evaluate the next node and discard nodes which are a dead end. If you run out of nodes on your open list, there is no exit.
Here is a simple algorithm which should be much faster:
From start/goal move to to the first junction. You can ignore anything between that junction and the start/goal.
Locate all places in the maze which are dead ends (they have three walls). Move back to the next junction and take this path out of the search tree.
After you have removed all dead ends this way, there should be a single path left (or several if there are several ways to reach the goal).
I would not use the AStar algorithm there yet, unless I really need to, because this can be done with some simple 'coloring'.
# maze is a m x n array
def canBeTraversed(maze):
m = len(maze)
n = len(maze[0])
colored = [ [ False for i in range(0,n) ] for j in range(0,m) ]
open = [(0,0),]
while len(open) != 0:
(x,y) = open.pop()
if x == m-1 and y == n-1:
return True
elif x < m and y < n and maze[x][y] != 0 not colored[x][y]:
colored[x][y] = True
open.extend([(x-1,y), (x,y-1), (x+1,y), (x,y+1)])
return False
Yes it's stupid, yes it's breadfirst and all that.
Here is the A* implementation
def dist(x,y):
return (abs(x[0]-y[0]) + abs(x[1]-y[1]))^2
def heuristic(x,y):
return (x[0]-y[0])^2 + (x[1]-y[1])^2
def find(open,f):
result = None
min = None
for x in open:
tmp = f[x[0]][x[1]]
if min == None or tmp < min:
min = tmp
result = x
return result
def neighbors(x,m,n):
def add(result,y,m,n):
if x < m and y < n: result.append(y)
result = []
add(result, (x[0]-1,x[1]), m, n)
add(result, (x[0],x[1]-1), m, n)
add(result, (x[0]+1,x[1]), m, n)
add(result, (x[0],x[1]+1), m, n)
return result
def canBeTraversedAStar(maze):
m = len(maze)
n = len(maze[0])
goal = (m-1,n-1)
closed = set([])
open = set([(0,0),])
g = [ [ 0 for y in range(0,n) ] for x in range(0,m) ]
h = [ [ heuristic((x,y),goal) for y in range(0,n) ] for x in range(0,m) ]
f = [ [ h[x][y] for y in range(0,n) ] for x in range(0,m) ]
while len(open) != 0:
x = find(open,f)
if x == (m-1,n-1):
return True
open.remove(x)
closed.add(x)
for y in neighbors(x,m,n):
if y in closed: continue
if y not in open:
open.add(y)
g[y[0]][y[1]] = g[x[0]][x[1]] + dist(x,y)
h[y[0]][y[1]] = heuristic(y,goal)
f[y[0]][y[1]] = g[y[0]][y[1]] + h[y[0]][y[1]]
return True
Here is my (simple) benchmark code:
def tryIt(func,size, runs):
maze = [ [ 1 for i in range(0,size) ] for j in range(0,size) ]
begin = datetime.datetime.now()
for i in range(0,runs): func(maze)
end = datetime.datetime.now()
print size, 'x', size, ':', (end - begin) / runs, 'average on', runs, 'runs'
tryIt(canBeTraversed,100,100)
tryIt(canBeTraversed,1000,100)
tryIt(canBeTraversedAStar,100,100)
tryIt(canBeTraversedAStar,1000,100)
Which outputs:
# For canBeTraversed
100 x 100 : 0:00:00.002650 average on 100 runs
1000 x 1000 : 0:00:00.198440 average on 100 runs
# For canBeTraversedAStar
100 x 100 : 0:00:00.016100 average on 100 runs
1000 x 1000 : 0:00:01.679220 average on 100 runs
The obvious here: going A* to run smoothly requires a lot of optimizations I did not bother to go after...
I would say:
Don't optimize
(Expert only) Don't optimize yet
How much time are you talking about when you say too much ? Really a 100x100 grid is so easily parsed in brute force it's a joke :/
I would have solved this with an AStar implementation. If you want even more speed, you can optimize to only generate the nodes from the junctions rather than every tile/square/step.
A method you can use that does not need to visit all nodes in the maze is as follows:
create an integer[][] with one value per maze "room"
create a queue, add [startpoint, count=1, delta=1] and [goal, count=-1, delta=-1]
start coloring the route by:
popping an object from the head of the queue, put the count at the maze point.
check all reachable rooms for a count with sign opposite to that of the rooms delta, if you find one the maze is solved: run both ways and connect the routes with the biggest steps up and down in room counts.
otherwise add all reachable rooms that have no count to the tail of the queue, with delta added to the room count.
if the queue is empty no path through the maze is possible.
This not only determines if there is a path, but also shows the shortest path possible through the maze.
You don't need to backtrack, so its O(number of maze rooms)

Resources