Edge lists vs adjacency lists vs adjacency matrices - algorithm

I'm preparing to write a maze-solving program. As stated in these two questions: graphs representation : adjacency list vs matrix and
Size of a graph using adjacency list versus adjacency matrix?, they explain the differences between adjacency lists and adjacency matrices. Unfortunately, I cannot decide on the pros and cons of an edge list compared to these other two, since I have found very little covering edge lists alongside adjacency lists and matrices.
An example of the operation costs when using an adjacency list for the maze (I think) would be:
insertVertex(V) : O(1)
insertEdge(Vertex, Vertex, E) : O(1)
removeVertex(Vertex) : O(deg(v))
removeEdge(Edge) : O(m)
vertices() : O(n)
edges() : O(m)
areAdjacent(Vertex, Vertex) : O(min(deg(v), deg(w)))
endVertices(Edge) : O(1)
incidentEdges(Vertex) : O(deg(v))
space complexity : O(n+m)
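For reference, a minimal Python sketch of the adjacency-list structure I have in mind (the method names mirror the operations above; the implementation details are just one possible choice, not taken from the linked questions) would be:

class AdjacencyListGraph:
    """Toy adjacency-list graph; vertex -> list of (neighbour, edge) pairs."""

    def __init__(self):
        self.adj = {}

    def insert_vertex(self, v):             # O(1)
        self.adj.setdefault(v, [])

    def insert_edge(self, v, w, e=None):    # O(1)
        self.adj[v].append((w, e))
        self.adj[w].append((v, e))

    def incident_edges(self, v):            # O(deg(v))
        return [e for _, e in self.adj[v]]

    def are_adjacent(self, v, w):           # O(min(deg(v), deg(w))) by scanning the shorter list
        a, b = (v, w) if len(self.adj[v]) <= len(self.adj[w]) else (w, v)
        return any(n == b for n, _ in self.adj[a])

g = AdjacencyListGraph()
for v in "ab":
    g.insert_vertex(v)
g.insert_edge("a", "b", "ab")
print(g.are_adjacent("a", "b"), g.incident_edges("a"))   # True ['ab']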
So my question is: which of an edge list, an adjacency list, or an adjacency matrix has the best time cost for this maze-solving problem?

Let's start with "classical" mazes. They are defined as a rectangular grid, each cell of which is either a corridor or a wall. The player can move one cell at a time in one of four directions (up, left, down, right). Maze example:
S..#.##E
.#.#.#..
.#...#.#
.#.###.#
##.....#
The player starts at the position marked S and should reach the position marked E.
For now, let's represent each blank cell as a graph vertex. Then each vertex can have at most 4 neighbours. In terms of space usage, the adjacency list clearly wins: 4*V vs V^2.
The simplest efficient shortest-path algorithm for a grid maze is BFS. For huge mazes it can be replaced by A*. Both of these algorithms have only one "edge-related" operation: get all neighbours of a given node. This is O(1) for an adjacency list (we have at most 4 neighbours) and O(V) for an adjacency matrix.
To save space, we can create vertices only for crossroads. However, this has no impact on the calculations above (the number of vertices goes down, but it is still greater than 4, so 4*V < V^2 still holds).
In conclusion, for a grid representation of a maze the adjacency list wins in terms of both time and space usage.
General case
Every maze can be modelled as a set of rooms (vertices) with corridors (edges) that lead to other rooms. Usually the number of rooms is much bigger than the number of corridors attached to a single room. In this case the argument for adjacency lists still holds.
Additional note: for a grid maze it's often easier just to use the grid representation as-is (a 2-dimensional array of booleans) without creating additional graph structures.
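As a minimal sketch of that grid-as-is approach (Python; the maze literal is the one from above, and BFS expands each cell's at-most-4 neighbours in O(1)):

from collections import deque

maze = ["S..#.##E",
        ".#.#.#..",
        ".#...#.#",
        ".#.###.#",
        "##.....#"]

rows, cols = len(maze), len(maze[0])
start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
goal = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "E")

queue = deque([(start, 0)])     # (cell, distance from start)
visited = {start}
while queue:
    (r, c), dist = queue.popleft()
    if (r, c) == goal:
        print("shortest path length:", dist)
        break
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] != "#" and (nr, nc) not in visited:
            visited.add((nr, nc))
            queue.append(((nr, nc), dist + 1))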

Related

Single pair shortest path in a matrix

Consider an MxN matrix where the start is at position (0,0) and the finish at (M-1,N-1). Every square has a positive integer which is the cost to move onto that square. What algorithm should I use to find the shortest path from start to finish? We can recast the problem as a graph, where the squares are the vertices and the costs are weighted edges.
Use Dijkstra. As soon as you hear "shortest path", look at Dijkstra. In particular, if you search for "dijkstra adjacency matrix" on Stack Overflow, you will get over a dozen questions discussing various aspects of how to apply Dijkstra to a graph represented as a matrix. You can build an adjacency matrix from your input matrix by looping through the input as follows:
create a (rows * cols) x (rows * cols) adjacency matrix
for each square at position (row, col) in the input matrix
    let v = row * cols + col
    for each neighbouring position (r, c) (at (row, col+1); (row+1, col); and so on)
        let u = r * cols + c
        set the cost of going from v to u to input[r][c]   (the cost of the square being entered)
You can even skip building the adjacency matrix, and simply calculate neighbors and distance-to-neighbors on the fly. This can save quite a lot of memory, at the expense of extra runtime.
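A hedged sketch of that on-the-fly variant (Python, using heapq; the assumption here is that each move pays the cost of the square being entered, matching the question's cost rule, and that the start square's cost is paid as well):

import heapq

def grid_dijkstra(cost):
    """Shortest-path cost from (0,0) to (M-1,N-1), generating neighbours
    on the fly instead of materialising an adjacency matrix."""
    rows, cols = len(cost), len(cost[0])
    dist = {(0, 0): cost[0][0]}          # assume the start square's cost counts too
    heap = [(cost[0][0], 0, 0)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if (r, c) == (rows - 1, cols - 1):
            return d
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return None

print(grid_dijkstra([[1, 3, 1],
                     [1, 5, 1],
                     [4, 2, 1]]))        # 7 under this cost convention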
You can also save some space by representing the graph as an adjacency list, but adjacency lists are slightly more complicated to implement, and you seem to be just starting out.

Which Graph Algorithms prefer adjacency matrix and why?

I heard that adjacency lists are used in most graph algorithms (but not all). I'm just wondering what algorithms prefer adjacency matrices and why?
So far I’ve found that Floyd-Warshall uses adjacency matrices.
Adjacency lists are generally faster than adjacency matrices in algorithms in which the key operation performed per node is “iterate over all the nodes adjacent to this node.” That can be done in O(deg(v)) time for an adjacency list, where deg(v) is the degree of node v, while it takes Θ(n) time in an adjacency matrix. Similarly, adjacency lists make it fast to iterate over all of the edges in a graph - it takes O(m + n) time to do so, compared with Θ(n^2) time for adjacency matrices.
Some of the most-commonly-used graph algorithms (BFS, DFS, Dijkstra’s algorithm, A* search, Kruskal’s algorithm, Prim’s algorithm, Bellman-Ford, Karger’s algorithm, etc.) require fast iteration over all edges or the edges incident to particular nodes, so they work best with adjacency lists.
You mentioned that Floyd-Warshall uses adjacency matrices. While Floyd-Warshall does maintain an internal matrix tracking shortest paths seen so far, it doesn’t actually require the original graph to be an adjacency matrix. The overall cost of the dynamic programming work is Θ(n^3), which is bigger than the O(n^2) cost of converting an adjacency list into an adjacency matrix or vice-versa.
There are only a few places where an adjacency matrix is faster than an adjacency list. Adjacency matrices take O(1) time to test whether a particular edge is present in the graph, which is faster than the O(deg(v)) cost of the corresponding operation on an adjacency list. Since the cost of converting an adjacency list to an adjacency matrix is Θ(n^2), the only cases where an adjacency matrix would outperform an adjacency list are situations where (1) random access to the edges is required and (2) the total runtime of the algorithm is o(n^2). I only know a few algorithms that do this. For example, there’s the celebrity-finding problem where you’re given a graph and are asked to find whether there’s a node with incoming edges from every other node and outgoing edges to no nodes. This can be done in O(n) time using an adjacency matrix, faster than what can be done with an adjacency list.
(That being said, you could also use an adjacency list represented using cuckoo hash tables rather than regular lists and match the same runtime bounds as above, though with the cost of creating the adjacency list now only expected to be fast rather than actually worst-case efficient.)
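As a sketch of that celebrity-finding idea (hypothetical code, not from any particular source), candidate elimination needs only O(n) probes into the adjacency matrix:

def find_celebrity(M):
    """M[i][j] == 1 means there is an edge from i to j. Returns a node with
    incoming edges from every other node and no outgoing edges, or -1."""
    n = len(M)
    candidate = 0
    for i in range(1, n):
        # If the candidate points to i, the candidate cannot be the celebrity;
        # otherwise i cannot be (it lacks an incoming edge from the candidate).
        if M[candidate][i]:
            candidate = i
    for i in range(n):                    # verify the single survivor
        if i != candidate and (M[candidate][i] or not M[i][candidate]):
            return -1
    return candidate

M = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
print(find_celebrity(M))                  # 2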
The main reason I’ve found adjacency matrices to be useful is in thinking about graphs from a different perspective. For example, raising an adjacency matrix to the kth power makes a new matrix that counts the number of paths from one node to another using exactly k hops. This can be used to count and find triangles in graphs faster than the naive algorithm, for example. Similarly, the Four Russians algorithm for computing transitive closures of graphs works by representing the graph as a matrix and using some clever techniques (treating blocks of bits as integers then used in a lookup table) to outperform the naive search.
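A small illustration of the matrix-power idea (a sketch using numpy; the example graph is arbitrary): since (A^k)[i][j] counts walks of length k from i to j, the trace of A^3 counts each triangle 6 times (3 starting nodes times 2 directions).

import numpy as np

# Undirected graph on 4 nodes: edges 0-1, 1-2, 0-2 (a triangle) and 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

A3 = np.linalg.matrix_power(A, 3)
print(A3[0][1])             # number of 3-step walks from node 0 to node 1
print(np.trace(A3) // 6)    # number of triangles -> 1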
Hope this helps!

Polygons from a list of edges

Given N points in a map of edges Map<Point, List<Edge>>, is it possible to get the polygons formed by these edges in O(N log N)?
What I know is that you have to walk all the vertices and get the edges containing that vertex as a starting point. These are edges of a Voronoi diagram, and each vertex has, at most, 3 edges containing it. So, in the map, the key is a vertex, and the value is a list of the edges where that vertex is the start node.
For example:
Points: a,b,c,d,e,f,g
Edges: [a,b]; [a,c]; [a,d], [b,c], [d,e], [e,g], [g,f]
My idea is to iterate the map counterclockwise until I get back to the initial vertex. That gives a polygon; then I put it in a list of polygons and keep looking for others. The problem is that I do not want to exceed O(N log N) complexity.
Thanks!
You can loop through the edges and compute the distance from the midpoint of each edge to all sites. Then sort the distances in ascending order; for inner Voronoi polygons pick the first and the second, and for outer polygons pick the first. Basically, an edge separates/divides 2 polygons.
It's something like O(m log n).
If I did find a polynomial solution to this problem I would not post it here, because I am fairly certain this is at least NP-hard. I think your best bet is to do a DFS. You might find this link useful: Finding all cycles in undirected graphs.
You might be able to use the solution below if you can formulate your graph as a directed graph. There are 2^E possible directed graphs (because each edge can be oriented in 2 directions). You could pick a random directed graph and use the solution below to find all of the cycles in that graph. You could do this multiple times for different random directed graphs, keeping track of all the cycles, until you've reached a satisfactory error bound.
You can efficiently create a directed graph with a little bit of state (maybe store a + or - with an edge to note the direction?). Once you have done this in O(n) the first time, you can randomly flip x << E edge directions to get a new graph in essentially constant time.
Since you can create subsequent directed graphs in constant time, you need to choose the number of times to run the cycle-finding algorithm so that the whole procedure is still polynomial and efficient.
UPDATE - The below only works for directed graphs
Off the top of my head it seems like a better idea to think of this as a graph problem. Your map of vertices to edges is a graph representation. Your problem reduces to finding all of the cycles in the graph, because each cycle will be a polygon. I think Tarjan's strongly connected components algorithm will be of use here, as it runs in O(V+E).
You can find more information on the algorithm here https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm

Using Single Source Shortest Path to traverse a chess board

Say we have an n x n chess board (in other words, a matrix) and each square has a weight. A piece can move horizontally or vertically, but it can't move diagonally. The cost of each move is equal to the difference between the two squares on the chess board. Using an algorithm, I want to find the minimum cost for a single chess piece to move from square (1,1) to square (n,n), with a worst-case time complexity that is polynomial.
Could Dijkstra's algorithm be used to solve this? Would my algorithm below be able to solve this problem? Dijkstra's can already be run in polynomial time, but what gives it this time complexity?
Pseudocode:
We have some empty set S, some integer V, and an unweighted graph as input. We then build an adjacency matrix showing the cost of each edge, omitting the infinity-weighted vertices. While not all vertices have been picked, we find a vertex; if its square's value is less than the square we're currently on, we move to that square, update V with the difference between the two squares, and update S to mark each vertex that has been visited. We repeat this process until there are no more vertices.
Thanks.
Since you are trying to find a minimum-cost path, you can use Dijkstra's for this. Since Dijkstra is O(|E| + |V|log|V|) in the worst case, where E is the number of edges and V is the number of vertices in the graph, this satisfies your polynomial time complexity requirement.
However, if your algorithm considers only costs associated with the beginning and end square of a move, and not the intermediate nodes, then you must connect all possible beginning and end squares together so that you can take "short-cuts" around the intermediate nodes.
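As a hedged sketch (Python; the board values are illustrative, and the move cost is taken as the absolute difference between adjacent squares, which is one reading of the question's cost rule):

import heapq

def min_cost_path(board):
    """Dijkstra from (0,0) to (n-1,n-1) on an n x n board where moving
    between adjacent squares costs the absolute difference of their values."""
    n = len(board)
    dist = [[float("inf")] * n for _ in range(n)]
    dist[0][0] = 0
    heap = [(0, 0, 0)]                    # (cost so far, row, col)
    while heap:
        d, r, c = heapq.heappop(heap)
        if (r, c) == (n - 1, n - 1):
            return d
        if d > dist[r][c]:
            continue                      # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < n and 0 <= nc < n:
                nd = d + abs(board[nr][nc] - board[r][c])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return None

print(min_cost_path([[1, 2, 3],
                     [4, 8, 2],
                     [1, 5, 3]]))         # 4, along the top row and then down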

Time/Space complexity of adjacency matrix and adjacency list

I am reading "Algorithm Design" by Eva Tardos, and in chapter 3 it is mentioned that an adjacency matrix has a complexity of O(n^2) while an adjacency list has O(m+n), where m is the total number of edges and n is the total number of nodes. It says that in the case of an adjacency list we will need only lists of size m for each node.
Won't we end up with something similar to a matrix in the case of an adjacency list, since we have lists, which are also 1D arrays? So basically it is O(m*n) according to me. Please guide me.
An adjacency matrix keeps a value (1/0) for every pair of nodes, whether the edge exists or not, so it requires n*n space.
An adjacency list only contains existing edges, so its length is at most the number of edges (or the number of nodes in case there are fewer edges than nodes).
It says that in the case of an adjacency list we will need only lists of size m for each node.
I think you misunderstood that part. An adjacency list does not hold a list of size m for every node, since m is the number of edges overall.
In a fully connected graph there is an edge between every pair of nodes, so both the adjacency list and the matrix will require n*n space, but in every other case an adjacency list will be smaller.
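A tiny sketch of the space difference (Python; the example graph is arbitrary): the matrix always stores n*n entries, while the adjacency list stores one list per node plus two entries per undirected edge.

n, edges = 5, [(0, 1), (0, 2), (1, 2), (3, 4)]

# Adjacency matrix: n*n cells regardless of how many edges exist.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency list: only existing edges are stored (twice, once per endpoint).
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(sum(len(row) for row in matrix))   # 25 entries  (n^2)
print(sum(len(lst) for lst in adj))      # 8 entries   (2*m)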
