I am creating a graph using an adjacency matrix. How can I store the values in the matrix? Do I need to resize the matrix with each insertion of a node?
If the number of nodes is not fixed, then use an adjacency list instead of a matrix. If you want to use an adjacency matrix, first scan all the inputs and find the N distinct nodes; after that you can create an N x N matrix to store the results. This way you need to scan the input twice: first to get the distinct nodes, and then to fill in the values.
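For example, here is a minimal Java sketch of that two-pass approach (the edge list and node labels are made-up placeholders, not from your input):

    import java.util.*;

    public class MatrixFromEdges {
        public static void main(String[] args) {
            // Hypothetical input: pairs of node labels.
            String[][] edges = { {"A", "B"}, {"B", "C"}, {"A", "C"} };

            // First scan: collect the distinct nodes and give each an index.
            Map<String, Integer> index = new HashMap<>();
            for (String[] e : edges) {
                index.putIfAbsent(e[0], index.size());
                index.putIfAbsent(e[1], index.size());
            }

            // Second scan: allocate the N x N matrix once, then fill it.
            int n = index.size();
            int[][] adj = new int[n][n];
            for (String[] e : edges) {
                int u = index.get(e[0]), v = index.get(e[1]);
                adj[u][v] = 1;
                adj[v][u] = 1; // drop this line for a directed graph
            }
            System.out.println(Arrays.deepToString(adj));
        }
    }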
Related
I have two lists of 2D vectors, of sizes m and n, and I need to find the minimum distance between each node of the first list and the nodes of the second list. I wonder if it is possible to do it in better time than O(nm). Let's assume that we can change data structures as we please. What do you think?
I don't understand why inserting an edge in an adjacency matrix takes O(1) time.
For example, say we want to add an edge from vertex 3 to vertex 5: in a directed graph we need to change graph[2][4] to 1. In an undirected graph we also set the symmetric entry.
How can it possibly be O(1), if we at least have to find the correct row in the array first? Isn't that already O(|V|)?
In a 2D array, accessing or updating any element is O(1).
In a 2D array you don't search linearly to find the row and the column where the data goes.
Here
a[i][j] = k
is an O(1) operation, because the position in the array is computed directly from the indices rather than by a linear search.
In a linked list, however, it is true that you have to find the row/column by visiting the nodes one by one.
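To make the matrix case concrete, here is a tiny Java sketch of the insertion from the question (vertices labeled 1-based, stored 0-based):

    public class EdgeInsert {
        public static void main(String[] args) {
            int[][] graph = new int[5][5];
            // Edge from vertex 3 to vertex 5 (indices 2 and 4).
            graph[2][4] = 1; // one O(1) write: base address + computed offset
            graph[4][2] = 1; // undirected case: mirror the entry, also O(1)
        }
    }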
In my algorithms class I've been told that a drawback of adjacency lists for graph representation is the O(n) look-up time for iterating through the array of adjacent nodes corresponding to each node. I implement my adjacency list using a HashMap that maps nodes to a HashSet of their adjacent nodes; wouldn't that only take O(1) look-up time? Is there something I'm missing?
As you know, looking up a value by key in a HashMap is O(1). However, in an adjacency list the value of the HashMap is itself a collection of adjacent nodes (a HashSet in your case). The main purpose of an adjacency list is to iterate over the adjacent nodes, for example in graph traversal algorithms like DFS and BFS. Suppose the number of elements in the HashSet is n. Then iterating over all the elements, even in a HashSet, is O(n).
So the total complexity would be O(1) + O(n), where O(1) is the look-up in the HashMap and O(n) is iterating over all the elements.
Generally, an adjacency list is preferable for a sparse graph, i.e. a graph with only a few edges. There the number of adjacent elements for each node (key of the HashMap) is small, so the look-up for an element won't cost much.
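A small Java sketch of that HashMap-of-HashSets representation (node labels invented for illustration), showing both costs:

    import java.util.*;

    public class AdjacencySets {
        public static void main(String[] args) {
            Map<String, Set<String>> adj = new HashMap<>();
            adj.computeIfAbsent("A", k -> new HashSet<>()).add("B");
            adj.computeIfAbsent("A", k -> new HashSet<>()).add("C");

            // O(1) on average: hash to A's set, then test membership.
            boolean connected = adj.getOrDefault("A", Set.of()).contains("B");
            System.out.println(connected);

            // O(n) regardless: a traversal still visits every neighbor.
            for (String neighbor : adj.get("A")) {
                System.out.println(neighbor);
            }
        }
    }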
I implement my adjacency list by using a HashMap that maps nodes to a HashSet of their adjacent nodes, wouldn't that only take O(1) look up time? [emphasis mine]
Right — but "adjacency list" normally implies a representation as an array or a linked-list rather than a HashSet: in other words, adjacency lists are optimized for iterating over a vertex's neighbors rather than for querying if two vertices are neighbors.
It may be possible to produce more time-efficient graph representations than adjacency lists, particularly for graphs where vertices often have many edges.
With a map of vertices, where each vertex contains a map of neighbor vertices and/or edge objects, we can check whether two nodes are connected in O(1) time by indexing a vertex id and then indexing a neighbor. That's potentially a big savings over an adjacency list, where we might have to loop over many edges to find a specific neighbor. Furthermore, a map-of-maps data structure lets us store arbitrary data in edge objects, which is useful for weighted graphs and other per-edge attributes.
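A Java sketch of the map-of-maps idea, storing just a weight as the per-edge data (a real edge object could hold more; the labels and weight are invented):

    import java.util.*;

    public class MapOfMaps {
        public static void main(String[] args) {
            // vertex -> (neighbor -> edge data); here the data is a weight.
            Map<String, Map<String, Integer>> graph = new HashMap<>();
            graph.computeIfAbsent("A", k -> new HashMap<>()).put("B", 7);
            graph.computeIfAbsent("B", k -> new HashMap<>()).put("A", 7); // undirected

            // O(1) average-case: two hash look-ups, no scan over edge lists.
            boolean connected = graph.getOrDefault("A", Map.of()).containsKey("B");
            Integer weight = graph.getOrDefault("A", Map.of()).get("B");
            System.out.println(connected + " " + weight);
        }
    }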
This version of Kruskal's algorithm represents the edges with an adjacency list.
How would I modify the pseudo-code to instead use an adjacency matrix?
I was thinking we would need to use the weight of each edge (i, j), as long as it's not zero, assigning the vertices to i and j. I may be a bit confused by this pseudo-code of Kruskal's.
As pointed out by Henry, the pseudocode does not specify which concrete data structures to use. It just happens that the adjacency list representation of the graph is more convenient than the adjacency matrix representation in this case.
With an adjacency matrix, you simply scan every entry of the matrix to sort the edges of graph G on line 4. You are doing essentially the same thing as with the adjacency list representation.
In your case you may, for example, use a PriorityQueue to order the edges by weight in non-decreasing order, discarding entries for vertex pairs with no edge between them. You can then iterate over this data structure in the for-loop on line 5.
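As a rough Java sketch (the 0-means-no-edge convention and the sample matrix are assumptions, not from the pseudocode):

    import java.util.*;

    public class MatrixKruskalEdges {
        public static void main(String[] args) {
            int[][] w = {
                {0, 4, 0},
                {4, 0, 2},
                {0, 2, 0}
            };
            // Order edges by weight, non-decreasing (the sort on line 4).
            PriorityQueue<int[]> edges =
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[2]));
            for (int i = 0; i < w.length; i++) {
                for (int j = i + 1; j < w.length; j++) { // upper triangle: each edge once
                    if (w[i][j] != 0) {
                        edges.add(new int[] {i, j, w[i][j]});
                    }
                }
            }
            // The for-loop on line 5 can then poll edges in weight order.
            while (!edges.isEmpty()) {
                int[] e = edges.poll();
                System.out.println(e[0] + " - " + e[1] + " : " + e[2]);
            }
        }
    }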
I'm looking for an efficient way to implement a weighted undirected graph, knowing only the number of edges ahead of time.
sample input:
N (number of edges)
A B x (x is the distance from A to B)
.
.
I've thought of using adjacency lists of Node* (I need to know the neighbours) and storing the nodes in a dynamic hash table (I don't know how many nodes there will be, so I need a dynamically growing container with search/insert).
Are there better ways to do it?
Sorry for my bad English! :D
Given the format you're getting the input in, a very reasonable approach would be a hash table of lists, where the keys are nodes and the values are lists of (node, distance) pairs. Alternatively, if you have a dense graph and want to be able to quickly determine the distance from one node to another, it might be good to use a hash table of hash tables, where the top-level hash table maps each node to a second hash table, which in turn maps each neighbor of that node to the cost of the edge between them. This still lets you iterate across a node's outgoing edges, but gives you faster look-up of distances.
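A Java sketch of the hash-table-of-lists option (the Arc record and the sample edge "A B 3" are invented for illustration; records need Java 16+):

    import java.util.*;

    public class WeightedAdjacency {
        // One outgoing edge: (neighbor, distance).
        record Arc(String to, int distance) {}

        public static void main(String[] args) {
            Map<String, List<Arc>> graph = new HashMap<>();
            // Insert the undirected edge "A B 3" from the input format above.
            graph.computeIfAbsent("A", k -> new ArrayList<>()).add(new Arc("B", 3));
            graph.computeIfAbsent("B", k -> new ArrayList<>()).add(new Arc("A", 3));
            System.out.println(graph);
        }
    }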
Another idea (depending on the use case) would be to start off by building the first data structure (the hash table of lists), then post-process it into an adjacency matrix. This would be useful if you didn't need to iterate across a node's outgoing edges and needed fast random access to distances between nodes. It's similar to the hash table of hash tables, but probably more space-efficient.
Hope this helps!