I am just learning Dijkstra's algorithm and am a little confused about this:
If min(A,B) = x
and min(A,C) = y,
must min(B,C) = x - y?
Please justify this, or tell me why I am wrong.
Okay here's what you meant to say:
I will be referring to a directed non-negative weight graph in all of this.
The shortest path problem:
For a digraph G and a node r in V and a real cost vector (c_e:e in E) (I wish we had LaTeX here)
we wish to find:
for each v in V a dipath from r to v of least cost (supposing it exists)
Here's the gist of what you want:
Suppose we know there is a dipath from r to v of cost y_v for each v in V, and we find an edge vw in E satisfying y_v + c_vw < y_w.
Then appending vw to the dipath to v gives a dipath to w of cost y_v + c_vw, which is cheaper than y_w, so the y_w we had was not the least cost.
A least cost dipath satisfies:
y_v+c_vw >= y_w for all vw in E
We call such a y vector a "feasible potential"
Proposition: y_v is minimal
Let y be a feasible potential and let P be a dipath from r to v; then c(P) >= y_v.
Proof:
c(P) = sum c_ei (where c_ei is the cost of the ith edge in the path)
Recall that a feasible potential satisfies y_v + c_vw >= y_w
So c_vw >= y_w - y_v, which is the difference you were asking about.
Thus
c(P) >= sum (y_vi - y_v{i-1}) (the potential of the ith vertex minus that of the previous one)
if you write it as sum (-y_v{i-1} + y_vi) and expand the sum (note y_v0 = y_r = 0):
-y_v0+y_v1 -y_v1 + y_v2 - .... -y_v{k-2} + y_v{k-1} -y_v{k-1} + y_vk
you see all the terms cancel out, giving:
c(P) >= y_vk - y_v0 = y_vk
Thus we have shown c(P) >= y_vk
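As a numeric sanity check, the distances Dijkstra computes form exactly such a feasible potential. A small Python sketch (the graph and costs below are made up for illustration):

```python
import heapq

def dijkstra(graph, root):
    """Shortest-path distances from root in a digraph with
    non-negative edge costs. graph: {u: [(v, cost), ...]}."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, c in graph.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return dist

# A small example digraph (invented for illustration).
graph = {
    "r": [("a", 2), ("b", 7)],
    "a": [("b", 3), ("c", 4)],
    "b": [("c", 1)],
}

y = dijkstra(graph, "r")

# The distances form a feasible potential: y_v + c_vw >= y_w on every edge.
for v, edges in graph.items():
    for w, c in edges:
        assert y[v] + c >= y[w]
```

Note the check only goes one way per edge: nothing forces y_w - y_v to equal the edge cost, which is why the difference of two shortest distances need not be the third one.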
It is wrong. Think of any equilateral triangle: the difference of two side lengths is 0, but the length of the third side is not.
We are given a directed graph G = (V, E) on which each edge (u, v) ∈ E has an associated
value r(u, v), which is a real number in the range 0 ≤ r(u, v) ≤ 1 that represents the reliability of a communication channel from vertex u to vertex v. We interpret r(u, v) as the
probability that the channel from u to v will not fail, and we assume that these probabilities
are independent. Give an efficient algorithm to find the most reliable path between two given
vertices.
a
/ \
b<--c a directed to c; c directed to b; b directed to a
Let's say this is graph G = (V, E); vertex a is the root, and one of the edges is a to c. With a = u and c = v, the edge is (u, v). I want to use Dijkstra's algorithm to solve this, but I am not sure how.
a
\
b<--c path c: a -> c & b: a -> c -> b
Can someone explain the most reliable path in the simplest way possible?
This question is from Introduction to Algorithms, 3rd edition, chp 24.3
Thanks in advance!
We interpret r(u, v) as the probability that the channel from u to v
will not fail, and we assume that these probabilities are independent.
From this you can deduce that the probability that a given path will not fail is equal to the product of the r(u,v) of all edges (u,v) that make up the path.
You want to maximize that product.
This is exactly like the shortest path problem, for which you surely know an algorithm, except instead of minimizing a sum, you are trying to maximize a product.
There is a cool tool to go from products to sums: it's the logarithm. Logarithm is an increasing function, thus maximizing a product is the same as maximizing the logarithm of that product. But logarithm has the additional cool property that the logarithm of a product is equal to the sum of the logarithms:
log(a * b * c * d * ...) = log(a) + log(b) + log(c) + log(d) + ...
Thus maximizing the product of the reliabilities r(u,v) is the same as maximizing the sum of the log-reliabilities log(r(u,v)).
Since the reliabilities are probabilities of the edges, they are values between 0 (excluded) and 1 (included). You can exclude 0 because if an edge had a reliability of 0, you can remove that edge from the graph. Since 0 < r(u,v) <= 1, it follows that log(r(u,v)) is negative or 0.
So you are trying to maximize a sum of negative values. This is exactly the same as minimizing the sum of the opposite values.
Thus: apply your shortest-path algorithm, using -log(r(u,v)) as the lengths of the edges.
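A minimal Python sketch of this (the graph and reliabilities are invented; a heapq-based Dijkstra stands in for "your shortest-path algorithm"):

```python
import heapq
import math

def most_reliable_path(graph, source, target):
    """Dijkstra on edge lengths -log(r), so minimizing the sum of
    lengths maximizes the product of reliabilities.
    graph: {u: [(v, r_uv), ...]} with 0 < r_uv <= 1."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, r in graph.get(u, []):
            nd = d - math.log(r)  # edge length = -log(reliability)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path and convert the length back to a probability.
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    path.reverse()
    return path, math.exp(-dist[target])

# Example (reliabilities invented): the direct edge a->b is less
# reliable than the two-hop route a->c->b.
graph = {
    "a": [("b", 0.5), ("c", 0.9)],
    "c": [("b", 0.8)],
}
path, reliability = most_reliable_path(graph, "a", "b")
# path is ["a", "c", "b"] with reliability 0.9 * 0.8 = 0.72
```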
I have an app where people can give each other marks out of ten points. At midnight each day, I would like to compute a "match" for each member, making everyone as happy as possible on average.
So at midnight, I have a directed graph like so:
1 -> 2 : 7.5 // P1 give a 7.5/10 to P2
1 -> 3 : 5
1 -> 4 : 9
2 -> 3 : 6
2 -> 1 : 4
etc.
To keep things simple, let's say that if P1 gives P2 a 5 and P2 gives P1 a 7, the match P1 - P2 has a weight of 5 + 7 - (7-5)/2 = 11 (I subtract half the difference because, for the same sum of grades, it is better if the grades are close to each other; that is, a (7/10 - 7/10) pair is a better match than a (10/10 - 4/10) pair).
With this done, we have an undirected graph. Mathematically speaking, I think I need an algorithm that finds, among all the maximum-size matchings of this graph, one with the maximum weight sum. Does such an algorithm exist?
I've already looked into the "stable marriage problem" and the "assignment problem", but those are for graphs whose vertices can be divided into two classes (men/women, men/tasks, ...).
One way to do that is to modify your graph and then find a maximum-weight matching on it.
I need to find an algorithm that finds, among all the maximum-sized matchings that have this graph, the one that has the maximum weight sum. Does such an algorithm exist ?
Let's consider your graph G = (V, E, w) where w is your weight function. Let's denote by n the size of V, i.e the number of vertices in your graph, and by M the maximum weight among the edges.
Then, all you have to do is to define w' in this way: for any edge e of E, w'(e) = w(e) + n*M.
In this case, a maximum weight matching on G' = (V, E, w') corresponds to a matching of maximum size in G = (V, E, w) that also has maximum weight. The reason: a matching has at most n/2 edges, each of weight at most M, so its total w-weight is less than n*M; adding even one edge to a matching therefore gains more w'-weight (at least n*M) than any combination of w-weights could, so larger matchings always beat smaller ones under w'.
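To see the shift by n*M at work on a toy instance, here is a brute-force Python sketch (enumerating all matchings stands in for a real maximum-weight matching solver such as the blossom algorithm; the graph and weights are invented):

```python
from itertools import combinations

def matchings(edges):
    """All matchings: edge subsets in which no vertex appears twice."""
    out = [()]
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            ends = [v for e in sub for v in e]
            if len(ends) == len(set(ends)):
                out.append(sub)
    return out

# Toy path graph 1-2-3-4 (weights invented): the single edge (2,3)
# outweighs any one edge, but {(1,2),(3,4)} is the size-2 matching.
w = {(1, 2): 5, (2, 3): 11, (3, 4): 5}
edges = list(w)
n = 4                # number of vertices
M = max(w.values())  # maximum edge weight

best_plain = max(matchings(edges), key=lambda m: sum(w[e] for e in m))
best_shift = max(matchings(edges),
                 key=lambda m: sum(w[e] + n * M for e in m))

# Plain max-weight matching picks the single heavy edge...
assert set(best_plain) == {(2, 3)}
# ...but after adding n*M to every weight, the max-weight matching is
# the maximum-size matching of maximum weight.
assert set(best_shift) == {(1, 2), (3, 4)}
```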
I wish to use integer programming to enumerate pareto optimal solutions.
I would like to implement an algorithm that uses gurobi or a similar single-objective integer programming solver to do this, but I don't know any such algorithms. Could someone please propose an algorithm for enumerating the efficient frontier?
In this answer, I'll address how to enumerate all pareto efficient solutions of 2-objective pure integer optimization problems of the form
min_x {g'x, h'x}
s.t. Ax <= b
x integer
Algorithm
We start the algorithm by optimizing for one of the objectives (we'll use g here). Since this is a standard single-objective integer optimization problem, it can be easily solved with gurobi or any other MIP solver:
min_x g'x
s.t. Ax <= b
x integer
We initialize a set P, which will eventually contain all the pareto efficient solutions, to P = {x*}, where x* is the optimal solution of this model. To get the next point on the efficient frontier, which has the second-smallest g'x value and an improved h'x value, we can add the following constraints to our model:
Remove x* from the feasible set of solutions (details on these x != x* constraints are provided later in the answer).
Add a constraint that h'x <= h'x*
The new optimization model that we need to solve is:
min_x g'x
s.t. Ax <= b
x != x* for all x* in P
h'x <= h'x* for all x* in P
x integer
Again, this is a single-objective integer optimization model that can be solved with gurobi or another solver (once you follow the details below on how to model x != x* constraints). As you repeatedly solve this model, adding the optimal solutions to P, solutions will get progressively larger (worse) g'x values and progressively smaller (better) h'x values. Eventually, the model will become infeasible, which means no more points on the pareto frontier exist, and the algorithm terminates.
At this point, there may be some pairs of solutions x, y in P for which g'x = g'y and h'x > h'y, in which case x is dominated by y and can be removed. After filtering in this way, the set P represents the full pareto efficient frontier.
x != x* Constraints
All that remains is to model constraints of the form x != x*, where x and x* are n-dimensional vectors of integer variables. Even in one dimension this is a non-convex constraint (see here for details), so we need to add auxiliary variables to help us model the constraint.
Denote the n variables stored in the optimization model (collectively denoted x) as x_1, x_2, ..., x_n, and similarly denote the variable values in x* as x*_1, x*_2, ..., x*_n. We can add new binary variables y_1, y_2, ..., y_n to the model, where y_i is set to 1 when x_i > x*_i. Because x_i and x*_i are integer valued, this is the same as saying x_i >= x*_i + 1, and this can be implemented with the following constraints (M is a large constant):
x_i - x*_i >= 1 - M(1-y_i) for all i = 1, ..., n
Similarly, we can add new binary variables z_1, z_2, ..., z_n to the model, where z_i is set to 1 when x_i < x*_i. Because x_i and x*_i are integer valued, this is the same as saying x_i <= x*_i - 1, and this can be implemented with the following big-M constraints:
x_i - x*_i <= -1 + M(1-z_i) for all i = 1, ..., n
If at least one of the y or z variables is set, then we know that our x != x* constraint is satisfied. Therefore, we can replace x != x* with:
y_1 + y_2 + ... + y_n + z_1 + z_2 + ... + z_n >= 1
In short, each x != x* constraint can be handled by adding 2n binary variables and 2n+1 constraints to the model, where n is the number of variables in the original model.
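To illustrate the overall loop without a MIP solver, here is a Python sketch in which brute force over a small, explicitly listed feasible set stands in for gurobi (the points and objectives are invented):

```python
# Brute-force stand-in for the IP solver: the feasible region is an
# explicit list of integer points; g and h are the two objectives.
points = [(0, 4), (1, 2), (2, 2), (2, 1), (3, 3), (4, 0)]
g = lambda p: p[0]
h = lambda p: p[1]

P = []
while True:
    # Feasible set of the restricted model: x != x* and h(x) <= h(x*)
    # for every x* found so far.
    feasible = [p for p in points
                if p not in P and all(h(p) <= h(q) for q in P)]
    if not feasible:                 # model infeasible: frontier exhausted
        break
    P.append(min(feasible, key=g))   # "solve" min g

# Filter solutions dominated on a g-tie (same g, strictly larger h).
pareto = [x for x in P
          if not any(g(y) == g(x) and h(y) < h(x) for y in P)]
```

Each pass of the while loop plays the role of one solve of the restricted model; the final filter is the dominance clean-up step described above.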
PolySCIP is an academic open-source solver for multi-objective mixed integer linear programs, in case you do not want to implement your own solver or want to compare yours against another one.
I was trying to solve the KOPC12A problem in SPOJ.
Link to problem: http://www.spoj.com/problems/KOPC12A/
Problem in brief:
Given n buildings, each of different height(number of bricks), with
each building having a cost for adding or removing a brick, find the
minimum cost to make all the buildings have the same height.
After trying in vain to solve this problem, I came across a solution that sorted the input by height and then used ternary search.
I was not able to understand how the cost of equalizing the heights of the buildings becomes unimodal (ternary search can only be applied to unimodal functions).
This stumped me and I was not able to proceed much further.
Any insights on this are much appreciated.
Thanks-
To expand on sasha's comment, we can define the (strong) unimodality of a function f as the condition
for all x < y < z, f(y) < max(f(x), f(z))
and the (strong) convexity of a function f as the condition
for all x < y < z, f(y) < ((z - y)/(z - x)) f(x) + ((y - x)/(z - x)) f(z).
Let the heights of the buildings be h1, ..., hn and the unit alteration costs be c1, ..., cn. The cost f(h') to make all buildings height h' is
sum i in {1, ..., n} of ci |h' - hi|.
Now here is a sequence of propositions, each with a fairly simple proof, leading via induction to the conclusion that f is unimodal.
The function g where g(x) = |x| is convex.
For all constants h, for all convex functions g1, the function g2 where g2(x) = g1(x - h) is convex.
For all constants c > 0, for all convex functions g1, the function g2 where g2(x) = c g1(x) is convex.
For all convex functions g1 and g2, the function g3 where g3(x) = g1(x) + g2(x) is convex.
All convex functions are unimodal.
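Granting that the cost function is convex and hence unimodal, a ternary search over integer target heights looks roughly like this in Python (the instance at the bottom is invented):

```python
def cost(heights, costs, h):
    """Total cost to bring every building to height h."""
    return sum(c * abs(h - hi) for hi, c in zip(heights, costs))

def min_equalize_cost(heights, costs):
    """Ternary search over integer target heights; valid because
    cost(.) is convex, hence unimodal, in h."""
    lo, hi = min(heights), max(heights)
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if cost(heights, costs, m1) < cost(heights, costs, m2):
            hi = m2 - 1  # minimum lies strictly left of m2
        else:
            lo = m1 + 1  # minimum is attained at m1 + 1 or beyond
    # At most 3 candidates remain; check them directly.
    return min(cost(heights, costs, h) for h in range(lo, hi + 1))

# Tiny invented instance: raising the cheap buildings is better than
# moving the expensive one, so the optimum is height 1 at cost 8.
assert min_equalize_cost([1, 5, 5], [10, 1, 1]) == 8
```

Because the function is piecewise linear it can be flat near the minimum, which is why the loop stops with a small interval and finishes with a direct scan instead of narrowing to a single point.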
A valid labelling of the vertices in V with respect to a preflow x is a function d : V -> Z satisfying:
d[s] = n and d[t] = 0
for all (v,w) in E : d[v] <= d[w] + 1
Suppose we have 4 vertices, including s and t.
Then we have d[s] = 4.
According to the valid labelling we should have d[v] <= d[w] + 1, but for edges leaving s this is not valid, because 4 <= 1 is false. Does this problem arise only at the source?
Am I understanding it right? Please correct me.
Thanks for your time and help.
Your definition of a valid labelling is close, but not quite correct.
You claim that d[v] <= d[w] + 1 for all (v,w) belonging to E.
However, this actually only needs to hold for all (v,w) belonging to R, where R is the set of residual edges.
A residual edge is an edge where the current flow is less than the capacity on the edge.
There is a good explanation at topcoder.
Consider this diagram:
In the labels on the edges (such as 2/3) the first number gives the current flow, and the second number gives the capacity of the edge.
The numbers on the nodes give the height function d for each node.
The green edges are the residual edges because they have spare capacity.
So to check the height constraint we only need to check the S->A edge, and the B->T edge.
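A small Python sketch of the check (the network, capacities, flow, and heights are all invented; they only mimic the situation in the diagram, where a saturated source edge is exempt from the height constraint):

```python
def residual_edges(cap, flow):
    """Residual edges of a flow: forward edges with spare capacity,
    plus backward edges for edges carrying positive flow."""
    res = []
    for (u, v), c in cap.items():
        f = flow.get((u, v), 0)
        if f < c:
            res.append((u, v))  # forward residual: spare capacity
        if f > 0:
            res.append((v, u))  # backward residual: flow can be undone
    return res

def is_valid_labelling(d, cap, flow, s, t, n):
    """Check d[s] = n, d[t] = 0, and d[v] <= d[w] + 1 on residual edges."""
    if d[s] != n or d[t] != 0:
        return False
    return all(d[v] <= d[w] + 1 for v, w in residual_edges(cap, flow))

# Invented 4-node network s -> a -> b -> t with a preflow.
cap = {("s", "a"): 3, ("a", "b"): 2, ("b", "t"): 3}
flow = {("s", "a"): 3, ("a", "b"): 2, ("b", "t"): 2}
d = {"s": 4, "a": 1, "b": 1, "t": 0}

# The saturated edge s->a violates d[v] <= d[w] + 1 (4 <= 2 is false),
# but it is not residual, so the labelling is still valid.
assert ("s", "a") not in residual_edges(cap, flow)
assert is_valid_labelling(d, cap, flow, "s", "t", 4)
```

This is exactly the situation the questioner ran into: the edge out of s breaks the inequality, but a saturated edge is not in R, so it is never checked.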