My job is to generate a random undirected, unweighted graph with a given diameter d. So far I have generated a random distance matrix D, where each element Dij represents the distance between the i-th and j-th vertices of the graph. Basically I am doing this:
if (i == j) {
    D[i][j] = 0;
} else {
    D[i][j] = D[j][i] = random.nextInt(d + 1);
}
The diagonal is zero because it always takes zero effort to reach the same vertex, and Dij = Dji because the graph is undirected. Are my assumptions right?
I want to use Java, but I tagged the question as language-agnostic because I need an algorithm, not code.
My next step is to use Dijkstra's algorithm to generate a random graph by generating an adjacency matrix. I know that Dijkstra's algorithm finds shortest paths, but can I use it for my case?
EDIT #1:
As you can see in the figure above, the diameter is 4 because the most distant vertices, 2 and 7, have a distance of 4. For that reason, we have D[2][7] = D[7][2] = 4. Another example is D[3][6] = D[6][3] = 2, because if we want to go from 3 to 6, we can go 3 -> 5 -> 6, or 3 -> 1 -> 6, and vice versa for going from 6 to 3.
What I am looking for is to generate a random graph by knowing the diameter which is the maximum distance between two vertices in the graph. I know there are a lot of possibilities of the graph, but I need any of them.
I have an idea, which is to assume that the number of vertices is d + 1 and connect each vertex to the following one. In this case we get a linear (path) graph.
Example (diameter = 2, number of vertices = 3):

    v  1  2  3
    1  0  1  2
    2  1  0  1
    3  2  1  0
The diagonal = zero
D1,2 = D2,1 = D2,3 = D3,2 = 1, because to go from 1 to 2, or from 2 to 3, there is a direct link
D1,3 = D3,1 = 2, because to go from 1 to 3, the shortest path is 1 -> 2 -> 3
Here is the graph associated with the above distance matrix:
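To illustrate that linear-chain idea, here is a minimal Python sketch (the names path_graph, eccentricity, and diameter are my own, not from any library): build a path on d + 1 vertices and confirm the diameter with BFS.

```python
from collections import deque

def path_graph(d):
    """Adjacency list of a path on d + 1 vertices: 0 - 1 - ... - d."""
    n = d + 1
    adj = {v: [] for v in range(n)}
    for v in range(n - 1):
        adj[v].append(v + 1)
        adj[v + 1].append(v)
    return adj

def eccentricity(adj, src):
    """Largest BFS distance from src (graph assumed connected)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values())

def diameter(adj):
    return max(eccentricity(adj, v) for v in adj)

adj = path_graph(4)
print(diameter(adj))  # 4
```

Since the graph is unweighted, one BFS per vertex suffices; no Dijkstra is needed here.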
I am looking for a better approach.
Let's call an undirected graph of n vertices p-interesting if the following conditions are fulfilled:
the graph contains exactly 2n + p edges;
the graph doesn't contain self-loops and multiple edges;
for any integer k (1 ≤ k ≤ n), any subgraph consisting of k vertices contains at most 2k + p edges.
A subgraph of a graph is some set of the graph's vertices together with some set of the graph's edges. The set of edges must satisfy the condition that both ends of each edge in the set belong to the chosen set of vertices.
The task is to find a p-interesting graph consisting of n vertices.
To see the problem statement click here
I cannot even understand the tutorial explained here.
If anyone can point me to the theory required for the background, or to some obscure theorem related to this problem, I'd be very glad.
That's a somewhat muddled editorial. Let's focus on creating 0-interesting graphs first. The key fact from graph theory is the following formula.
sum_{vertices v} degree(v) = 2 #edges
In a graph where every vertex has degree 4 (4-regular graph), the left-hand side is 4n, so the number of edges is exactly 2n. Every n'-vertex subgraph of a 4-regular graph has vertices of degree at most 4, so the left-hand side is at most 4n', and the number of edges is at most 2n'. Thus, every 4-regular graph is 0-interesting. There are many ways to obtain a 4-regular graph; one is to connect vertex i to vertices i - 2, i - 1, i + 1, i + 2 modulo n.
Assuming that n >= 5, the editorial aims to prove that the graph comprised of edges (1, v) and (2, v) for all v from 3 to n, plus the edge (1, 2), is "(-3)-interesting". Technically this doesn't work out, because each 1-vertex subgraph should have at most 2(1) - 3 = -1 edges (oops); since the actual p of interest are nonnegative and there are no self-loops, though, this problem resolves itself when we add the additional edges as below. For n'-vertex subgraphs with n' >= 2, we consider four cases, two of which are symmetric:

1. The subgraph includes neither 1 nor 2. It has no edges, and n' >= 2 implies 0 < 2n' - 3.
2. The subgraph includes 1 but not 2. It can have edges from 1 to each of its other vertices, at most n' - 1 <= 2n' - 3 edges.
3. The subgraph includes 2 but not 1. This is symmetric to the second case.
4. The subgraph includes both 1 and 2. It has at most 1 edge between 1 and 2, n' - 2 edges from 1 to other vertices, and n' - 2 edges from 2 to other vertices, for a total of at most 2n' - 3 edges.
For p-interesting graphs, the observation is that, by adding p new edges to a 0-interesting graph, the number of edges in the new graph is 2n + p, as required. The number of edges in each n'-vertex subgraph is the number of old edges plus the number of new edges. The number of old edges is at most 2n', as before. The number of new edges is at most p.
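Putting the two observations together, here is a sketch in Python (my own illustration; it assumes n >= 5 and that 2n + p does not exceed n(n-1)/2): build the 4-regular ring, which is 0-interesting, then add any p unused edges.

```python
from itertools import combinations

def p_interesting(n, p):
    """Build a p-interesting graph on vertices 1..n:
    the 4-regular ring (i connected to i±1, i±2 mod n) has exactly 2n
    edges and is 0-interesting; then add any p further edges."""
    assert n >= 5 and 2 * n + p <= n * (n - 1) // 2
    edges = set()
    for i in range(n):
        for step in (1, 2):
            a, b = i, (i + step) % n
            edges.add((min(a, b) + 1, max(a, b) + 1))  # store 1-based
    assert len(edges) == 2 * n  # the 0-interesting base graph
    # add p extra edges chosen from the pairs not used yet
    for a, b in combinations(range(1, n + 1), 2):
        if len(edges) == 2 * n + p:
            break
        if (a, b) not in edges:
            edges.add((a, b))
    assert len(edges) == 2 * n + p
    return sorted(edges)

print(len(p_interesting(10, 5)))  # 25 edges = 2*10 + 5
```

By the argument above, any choice of the p extra edges keeps every k-vertex subgraph within the 2k + p bound.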
How do I, other than by brute force, efficiently draw a correct complete 5-vertex undirected graph with distinct edge weights {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} that satisfies the triangle inequality? I'm unaware of any algorithm that generates a correct graph G for the edge weights provided.
Here's an example of a Complete 5-Vertex Graph
Here's one that works.
(2,1): 1
(3,1): 2
(3,2): 3
(4,1): 4
(4,2): 5
(4,3): 6
(5,1): 7
(5,2): 8
(5,3): 9
(5,4): 10
The generalization to an n-vertex complete graph should be clear. The proof of correctness is inductive. For n = 0, it's obvious. For higher n, the inductive hypothesis is equivalent to the proposition that every violation of the triangle inequality involves vertex n. The edges involving vertex n are longer than the others, so n is not the transit vertex of a violation. Thus every hypothetical violation (up to symmetry) looks like n -> v -> w. There exists some constant c such that n -> v has length c + v and n -> w has length c + w. Hence, if v -> w is a violation, then it has length less than w - v, which, by inspection, is impossible.
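As a sanity check of the listed assignment, here is a small Python sketch (my own, not part of the original answer) that verifies every triangle of the complete 5-vertex graph satisfies the triangle inequality:

```python
from itertools import combinations

# weight[(i, j)] with i > j, exactly the assignment listed above:
# (2,1): 1, (3,1): 2, (3,2): 3, ..., (5,4): 10
weight = {}
w = 1
for i in range(2, 6):
    for j in range(1, i):
        weight[(i, j)] = w
        w += 1

def dist(a, b):
    return weight[(max(a, b), min(a, b))]

# every triangle must satisfy all three inequalities
ok = all(
    dist(a, b) + dist(b, c) >= dist(a, c)
    and dist(a, b) + dist(a, c) >= dist(b, c)
    and dist(a, c) + dist(b, c) >= dist(a, b)
    for a, b, c in combinations(range(1, 6), 3)
)
print(ok)  # True
```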
Is there any algorithm for counting the unique pairs (pairs of vertices) in an undirected graph, with no vertex repeated? I think it could be a variation of a bipartite-graph problem, but if there is any better way to find out, please comment.
[I think the problem belongs to perfect matching algorithms]
Problem Statement:
I have an undirected graph which consists of n vertices and m edges. I can delete edges from the graph. Now I'm interested in one question: is it possible to delete edges so that the degree of each vertex in the graph becomes exactly 1? There can be multiple edges in the graph, but no loops.
Example: n = #vertices, m = #edges
n = 4, m = 6
1 2
1 3
1 4
2 3
2 4
3 4
The unique pairings would be (1 2, 3 4), (1 4, 2 3), (1 3, 2 4)
A set of edges in which no vertex is used more than once is called a matching or independent edge set; one that covers every vertex of the graph is a perfect matching. See Wikipedia.
That article also mentions that the number of distinct matchings in a graph (which is the number you are after) is called the Hosoya index; see this Wikipedia article.
Algorithms to compute this number are not trivial, and Stack Overflow wouldn't be the right place to try to explain them, but I hope you at least have enough pointers to investigate further.
Here is pseudocode; it should run in O(|E|) time, i.e. linear in the number of edges.

Suppose G = (V, E) is your initial graph, with E the initial set of all edges:

count = 0;
while (E is not empty) {
    // 1. pick any edge e = (n1, n2) from E
    // 2. remove e from G
    E = E - e;
    // 3. count the edges of G that would remain if nodes n1 and n2 were removed
    //    -> these are exactly the edges forming a pair with e
    edges_not_connected_to_e = |E| - |n1| - |n2|;
    // where |n1| is the degree of n1 in the updated G (already without edge e)
    // 4. update the count
    count += edges_not_connected_to_e;
}
return count;

Let me know if you need more clarification. And someone could probably fix my graph-math notation, in case it is incorrect.
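For reference, here is a runnable Python translation of the pseudocode above (my own; count_edge_pairs is a made-up name), with degrees tracked in a dict:

```python
from collections import defaultdict

def count_edge_pairs(edges):
    """Count unordered pairs of edges that share no vertex."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    remaining = list(edges)
    count = 0
    while remaining:
        n1, n2 = remaining.pop()   # pick and remove an edge e
        degree[n1] -= 1
        degree[n2] -= 1
        # edges left that touch neither endpoint of e pair up with e;
        # note this assumes no remaining multi-edge between n1 and n2,
        # since such an edge would be subtracted twice
        count += len(remaining) - degree[n1] - degree[n2]
    return count

k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(count_edge_pairs(k4))  # 3
```

On the K4 example from the question this gives 3, matching the three pairings listed.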
I understand what Dijkstra's algorithm is, but I don't understand why it works.
When selecting the next vertex to examine, why does Dijkstra's algorithm select the one with the smallest weight? Why not just select a vertex arbitrarily, since the algorithm visits all vertices anyway?
You can think of Dijkstra's algorithm as a water-filling algorithm (i.e. a pruned breadth-first search). At each stage, the goal is to cover more of the whole graph with the lowest-cost path possible. Suppose you have vertices at the edge of the area you've filled in, listed in order of distance:
v0 <= v1 <= v2 <= v3 ...
Could there possibly be a cheaper way to get to vertex v1? If so, the path must go through v0, since no untested vertex could be closer. So you examine vertex v0 to see where you can get to, checking if any path through v0 is cheaper (to any other vertex one step away).
If you peel away the problem this way, you're guaranteed that your distances are all minimums, because you always check exactly that vertex that could lead to a shortest path. Either you find that shortest path, or you rule it out, and move on to the next vertex. Thus, you're guaranteed to consume one vertex per step.
And you stop without doing any more work than you need to, because you stop when your destination vertex occupies the "I am smallest" v0 slot.
Let's look at a brief example. Suppose we're trying to get from 1 to 12 by multiplication, and the cost between nodes is the number you have to multiply by. (We'll restrict the vertices to the numbers from 1 to 12.)
We start with 1, and we can get to any other node by multiplying by that value. So node 2 has cost 2, 3 has cost 3, ... 12 has cost 12 if you go in one step.
Now, a path through 2 could (without knowing about the structure) get to 12 fastest if there was a free link from 2 to 12. There isn't, but if there was, it would be fastest. So we check 2. And we find that we can get to 4 for cost 2, to 6 for 3, and so on. We thus have a table of costs like so:
3 4 5 6 7 8 9 10 11 12 // Vertex
3 4 5 5 7 6 9 7 11 8 // Best cost to get there so far.
Okay, now maybe we can get to 12 from 3 for free! Better check. And we find that 3*2==6 so the cost to 6 is the cost to 3 plus 2, and to 9 is plus 3, and 12 is plus 4.
4 5 6 7 8 9 10 11 12
4 5 5 7 6 6 7 11 7
Fair enough. Now we test 4, and we see we can get to 8 for an extra 2, and to 12 for an extra 3. Again, the cost to get to 12 is thus no more than 4+3 = 7:
5 6 7 8 9 10 11 12
5 5 7 6 8 7 11 7
Now we try 5 and 6--no improvements so far. This leaves us with
7 8 9 10 11 12
7 6 8 7 11 7
Now, for the first time, we see that the cost of getting to 8 is less than the cost of getting to 7, so we had better check that there isn't some free way to get to 12 from 8. There isn't--there's no way to get there at all with integers--so we throw it away.
7 9 10 11 12
7 8 7 11 7
And now we see that 12 is as cheap as any path left, so the cost to reach 12 must be 7. If we'd kept track of the cheapest path so far (only replacing the path when it's strictly better), we'd find that 3*4 is the first cheapest way to hit 12.
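If you want to check the worked example, here is a small Dijkstra sketch (my own code, not part of the answer) over the same "multiplication graph": nodes 1 to 12, with an edge from u to u*k of cost k.

```python
import heapq

def multiplication_dijkstra(target=12, limit=12):
    """Cheapest way to reach `target` from 1, where stepping from
    u to u*k (k >= 2, u*k <= limit) costs k."""
    dist = {1: 0}
    heap = [(0, 1)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a cheaper path was found already
        if u == target:
            return d  # safe to stop: u holds the "I am smallest" slot
        for k in range(2, limit // u + 1):
            v, nd = u * k, d + k
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None  # target unreachable

print(multiplication_dijkstra())  # 7, e.g. via 1 -> 3 -> 12
```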
Dijkstra's algorithm picks the vertex with the least path cost thus far, because a path through any other vertex is at least as costly as a path through the vertex with the least path cost.
Therefore, visiting any other vertex first, if it is more costly (which is quite possible), would mean visiting not only that vertex but also the one with the least path cost so far, so you would visit more vertices before finding the shortest path. In fact, you would end up with the Bellman-Ford algorithm if you did that.
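To make the contrast concrete, here is a minimal Bellman-Ford sketch (my own illustration): with no priority order, you relax every edge up to |V| - 1 times instead of settling one vertex per step, giving O(V * E) work.

```python
def bellman_ford(edges, n, src):
    """Shortest paths with no visiting order: relax every undirected
    edge (u, v, w) in both directions, n - 1 times."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
            if dist[v] + w < dist[u]:   # undirected: relax the other way too
                dist[u] = dist[v] + w
    return dist

# tiny example: edges 0-1 (4), 0-2 (1), 2-1 (2)
print(bellman_ford([(0, 1, 4), (0, 2, 1), (2, 1, 2)], 3, 0))  # [0, 3, 1]
```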
I should also add that a vertex doesn't have a weight; it is the edge that has a weight. The key for a given vertex is the cost of the shortest path found so far from the source vertex to that vertex.
The reason why Dijkstra's algorithm works the way it does is, in part, that it exploits the fact that a shortest path between nodes u and w that passes through v also contains a shortest path from u to v and from v to w. If there existed something shorter from u to v, then the original path wouldn't be shortest.
To really understand why Dijkstra's algorithm works, look into the basics of dynamic programming; it sounds hard, but the principles are really pretty easy to understand.
It uses a greedy strategy. (My English is not good and this was translated with Google; I am sorry if anything is unclear.)

Dijkstra's algorithm finds, in a graph G, the lengths of the shortest paths from S to all vertices.

Assume every vertex V of G carries a label L(V), which is either a number or ∞. Suppose P is a set of vertices of G that contains S and satisfies:

A) If V belongs to P, then L(V) is the length of the shortest path from S to V, and there exists such a shortest path all of whose vertices lie in P.

B) If V does not belong to P, then L(V) is the length of the shortest path from S to V under the restriction that V is the only vertex of the path not belonging to P.

We can prove by induction that the set P maintained by Dijkstra's algorithm satisfies the definition above:

1) When P has one element, corresponding to the first step of the algorithm, P = {S}, and the conditions are clearly satisfied.

2) Suppose P has k elements and satisfies the definition; now look at the third step of the algorithm.

3) Among the vertices not in P, find the unmarked vertex U with minimum label L(U). It can be shown that the shortest path from S to U contains no vertex outside P other than U itself. For if it did, the path would have the form S, P1, P2, ..., Pn, Q1, Q2, ..., Qm, U (with P1, ..., Pn in P and Q1, ..., Qm not in P), and by property B) its length would be L(Q1) + PATH(Q1, U) > L(U), which is greater than the length L(U) of the path S, P1, P2, ..., Pn, U, so it would not be a shortest path. Therefore the shortest path from S to U lies inside P except for U itself, and its length is exactly L(U).

Add U to P to form P'; clearly P' satisfies property A).

4) Take any V not belonging to P' (so V does not belong to P either). For a shortest path from S to V all of whose vertices except V lie in P', there are two possibilities: i) it contains U, in which case its length is L(U) + W(U, V); ii) it does not contain U, in which case its length is L(V). Taking the smaller of the two gives V a label satisfying property B) with respect to P'.

So the third step of the algorithm yields a set P' with k + 1 elements satisfying A) and B), and the proposition follows by induction.
Here is the source.
It checks the path with the lowest weight first because this is most likely (without additional information) to reduce the number of paths checked. For example:
a->b->c cost is 20
a->b->d cost is 10
a->b->d->e cost is 12
If goal is to get from a to e, we don't need to even check the cost of:
a->b->c->e
Because we know it's at least 20, we know it's not optimal, since there is already another path with a cost of 12. You can maximize this effect by checking the lowest weights first. This is similar to how alpha-beta pruning reduces the branching factor of the game tree in minimax search for chess and other games.
Dijkstra's algorithm is a greedy algorithm, which follows the problem-solving heuristic of making the locally optimal choice at each stage in the hope of finding a global optimum.
To understand the basic concept of this algorithm, I wrote this code for you, with an explanation of how it works.
graph = {}
graph["start"] = {}
graph["start"]["a"] = 6
graph["start"]["b"] = 2
graph["a"] = {}
graph["a"]["finish"] = 1
graph["b"] = {}
graph["b"]["a"] = 3
graph["b"]["finish"] = 5
graph["finish"] = {}

infinity = float("inf")
costs = {}
costs["a"] = 6
costs["b"] = 2
costs["finish"] = infinity
print("The weight of each node is:", costs)

parents = {}
parents["a"] = "start"
parents["b"] = "start"
parents["finish"] = None

processed = []

def find_lowest_cost_node(costs):
    # among the nodes not yet processed, pick the one with the lowest cost
    lowest_cost = float("inf")
    lowest_cost_node = None
    for node in costs:
        cost = costs[node]
        if cost < lowest_cost and node not in processed:
            lowest_cost = cost
            lowest_cost_node = node
    return lowest_cost_node

node = find_lowest_cost_node(costs)
print("Start: the lowest cost node is", node, "with weight",
      graph["start"][node])

while node is not None:
    cost = costs[node]
    print("Continue execution ...")
    print("The weight of node {} is".format(node), cost)
    neighbors = graph[node]
    if neighbors != {}:
        print("The node {} has neighbors:".format(node), neighbors)
    else:
        print("It is finish, we have the answer: {}".format(cost))
    for neighbor in neighbors.keys():
        new_cost = cost + neighbors[neighbor]
        if costs[neighbor] > new_cost:
            # found a cheaper path to this neighbor, so relax it
            costs[neighbor] = new_cost
            parents[neighbor] = node
    processed.append(node)
    print("The nodes we have processed:", processed)
    node = find_lowest_cost_node(costs)
    if node is not None:
        print("Look at the neighbor:", node)

# to draw the graph
import networkx
G = networkx.Graph()
G.add_nodes_from(graph)
G.add_edge("start", "a", weight=6)
G.add_edge("b", "a", weight=3)
G.add_edge("start", "b", weight=2)
G.add_edge("a", "finish", weight=1)
G.add_edge("b", "finish", weight=5)

import matplotlib.pyplot as plt
networkx.draw(G, with_labels=True)
plt.show()