I don't understand why inserting an edge into an adjacency matrix takes O(1) time.
For example, to add an edge from vertex 3 to vertex 5 in a directed graph, we set graph[2][4] to 1; in an undirected graph we also set graph[4][2].
How can this possibly be O(1) if we have to find the correct row in the array at least once, which is already O(|V|)?
In a 2D array, indexing is an O(1) operation.
You don't scan linearly to find the row and the column where the data goes. Here

    a[i][j] = k

is O(1) because the position can be computed directly from the indices rather than found by a linear search.
With a linked list, however, it is true that you have to reach a given row/column by visiting the nodes one by one.
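To see why no row search happens, here is a minimal sketch (the function names are my own, not from the post) of the row-major address arithmetic a language performs for a[i][j]:

    # a 2D array is one contiguous block, so element (i, j) sits at a
    # directly computable offset; no traversal is involved
    def make_matrix(rows, cols):
        return [0] * (rows * cols)      # flat, row-major storage

    def set_edge(a, cols, i, j):
        a[i * cols + j] = 1             # one multiply + one add: O(1)

    g = make_matrix(5, 5)
    set_edge(g, 5, 2, 4)                # the edge 3 -> 5 from the question (0-based)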
Related
I have two lists of 2D vectors, of sizes m and n, and I need to find the minimum distance between each point of the first list and the points of the second list. I wonder if it is possible to do this in better time than O(nm). Let's assume that we can change the data structures as we please. What do you think?
Multiple sources state that the time complexity of adding a vertex to an adjacency list is O(1), and my current understanding is that this is because of optimizations with hash tables.
If we use an array of linked lists, then the time complexity is O(V), right? Because to add a new vertex we have to allocate a new array of size V + 1.
I just wanted to confirm my line of thinking against pre-existing information.
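For what it's worth, here is a minimal sketch of the hash-table version (using a Python dict; the function names are mine), where adding a vertex is O(1) on average. Note also that even an array-backed structure avoids the O(V) cost if it grows by doubling instead of reallocating to exactly V + 1, giving amortized O(1) appends:

    def add_vertex(adj, v):
        adj.setdefault(v, [])           # average-case O(1) hash insertion

    def add_edge(adj, u, v):
        add_vertex(adj, u)              # make sure both endpoints exist
        add_vertex(adj, v)
        adj[u].append(v)                # amortized O(1) list append

    adj = {}
    add_vertex(adj, 0)
    add_edge(adj, 0, 1)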
There are many variants of this question asking for a solution in O(|V|) time.
But what is the worst-case bound for deciding whether there is a universal sink when the graph is represented with adjacency lists? This is important because all the other algorithms I need seem to be better on adjacency lists, so if finding a universal sink is not a frequent operation, I will definitely go with lists rather than a matrix.
In my opinion, the time complexity would be the size of the graph, that is O(|V| + |E|). The algorithm for finding the universal sink of a graph is as follows. Assuming in-neighbor lists, start at index 1 of the graph. Check the length of the adjacency list at index 1; if it is |V| - 1, traverse the list to check for a self-loop. If the list has no self-loop and every other vertex appears in it, store the list index. Then we must go through the other lists to check whether this vertex appears in any of them; if it does, the stored vertex cannot be a universal sink, and we continue the search from the next index. Even with out-neighbor lists, we would have to find the vertices whose list has length 0, and then search all the other lists to check whether that vertex appears in them.
As can be concluded from the explanation above, no matter which form of adjacency list is used, finding the universal sink in the worst case traverses all the vertices and edges once, hence the complexity is the size of the graph, i.e. O(|V| + |E|).
But my friend, who recently joined a university as an assistant professor, says in his notes that it has to be O(|V| * |V|). I am reviewing his notes before he starts teaching the course in the spring, but before correcting them I want to be one hundred percent sure.
You're quite correct. We can build the structures we need to track all of the intermediate results, but the basic complexity is still straightforward: we go through all of our edges once, marking and counting references; a single pass gives every vertex's in- and out-degree counts in O(|E|) time.
Depending on the data structures, we may need a second pass over all edges, but 2 * O(|E|) is still O(|E|).
Then we traverse each vertex once, looking at its in/out counts and checking for a self-loop.
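Here is a minimal sketch of those two passes (my own code, assuming out-neighbor lists stored in a dict and a simple graph); an out-degree of 0 already rules out a self-loop, so the whole check is O(|V| + |E|):

    def find_universal_sink(adj):
        in_degree = {v: 0 for v in adj}
        for v, neighbors in adj.items():    # pass 1: count references (edges)
            for u in neighbors:
                in_degree[u] += 1
        n = len(adj)
        for v in adj:                       # pass 2: check each vertex once
            if len(adj[v]) == 0 and in_degree[v] == n - 1:
                return v
        return None

    # vertex 2 receives edges from every other vertex and has no out-edges
    print(find_universal_sink({0: [1, 2], 1: [2], 2: []}))   # -> 2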
I have a problem that can be represented as a multigraph. To represent this graph internally, I'm thinking of a matrix. I like the idea of a matrix because I want to count the number of edges for a vertex. This would be O(n) time, because all I have to do is loop through the correct column, so the time would be linear in the number of vertices in the graph, right?

HOWEVER, I'm also thinking of the space complexity. If this graph were to grow, there could be a lot of wasted space. This leads me to an adjacency list. That may reduce my space complexity, but it sounds like my time complexity just increased. How would I express the time complexity of determining the number of edges for a particular vertex? I know the first operation would be to find the vertex, which is O(n), but then I would also have to scan its list of edges, which could also be O(n). So does this mean my time complexity for this operation is O(n^2)?
EDIT:
I guess if I were to use a hash table, the first operation would be O(1), so does that mean my operation to find the number of edges for a vertex is O(n)?
It will be O(|E|). |E| can be as large as O(|V|^2), but you want to use an adjacency list when the matrix is sparse, i.e. |E| << |V|^2, in which case it's better to state the bound as O(|E|).
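To make the hash-table idea from the EDIT concrete, here is a sketch (class and method names are my own) where locating the vertex is an average O(1) lookup and the count is read off the stored list length:

    from collections import defaultdict

    class Multigraph:
        def __init__(self):
            # vertex -> neighbor list; duplicates represent parallel edges
            self.adj = defaultdict(list)

        def add_edge(self, u, v):
            self.adj[u].append(v)       # amortized O(1)
            self.adj[v].append(u)

        def edge_count(self, v):
            # average O(1) hash lookup, and len() of a Python list is O(1),
            # so no scan over all n vertices is needed
            return len(self.adj[v])

    g = Multigraph()
    g.add_edge('a', 'b'); g.add_edge('a', 'b'); g.add_edge('a', 'c')
    print(g.edge_count('a'))            # -> 3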
Rank finding problem: in 2-dimensional space, we say that a point A = (a1, a2) dominates a point B = (b1, b2) if and only if a1 > b1 and a2 > b2. Given a set of n points, the rank of a point X is the number of points dominated by X. Design an algorithm to find the rank of every point.
Sort the points by their first coordinate, then insert them into an order-statistics tree that orders them by their second coordinate.
The rank of a point in the order-statistics tree at the time it is inserted is exactly the number of points dominated by this point.
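Here is a sketch of that idea, substituting a Fenwick (binary indexed) tree over coordinate-compressed second coordinates for the order-statistics tree; all the names are mine, distinct points are assumed, and the whole thing runs in O(n log n):

    def ranks(points):
        # coordinate-compress the y values; Fenwick trees are 1-based
        ys = sorted({y for _, y in points})
        idx = {y: i + 1 for i, y in enumerate(ys)}
        tree = [0] * (len(ys) + 1)

        def add(i):                     # insert one y into the Fenwick tree
            while i < len(tree):
                tree[i] += 1
                i += i & -i

        def count_less(i):              # how many inserted ys are strictly smaller
            s, i = 0, i - 1
            while i > 0:
                s += tree[i]
                i -= i & -i
            return s

        rank = {}
        pts = sorted(points)            # increasing first coordinate
        i = 0
        while i < len(pts):
            j = i                       # points tied on x cannot dominate each
            while j < len(pts) and pts[j][0] == pts[i][0]:
                rank[pts[j]] = count_less(idx[pts[j][1]])
                j += 1                  # ...other, so query the whole tie group
            for k in range(i, j):       # before inserting any of its members
                add(idx[pts[k][1]])
            i = j
        return rank

    print(ranks([(1, 2), (2, 1), (3, 3)]))   # -> {(1, 2): 0, (2, 1): 0, (3, 3): 2}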
Use a stable sort twice: first by the first attribute and then by the second attribute. The position in the final sorted array gives the number of points that a given point dominates.
The wavelet tree data structure solves this problem. I think the construction of it is essentially the same process as what Evgeny and pogo have described.