Denote edge with index i by E[i].
Let S be an array describing a solution: if the maximum flow contains E[i], then S[i] = 1; otherwise S[i] = 0.
I want to find a maximum flow whose solution S is alphabetically (lexicographically) minimum.
I can compute a maximum flow with Ford-Fulkerson, but I don't know how to get the solution that is alphabetically minimum.
Algorithm
One approach is to compute the maximum flow and then, for each edge in turn:
Remove the edge from the graph
Recompute the maximum flow (for efficiency, start from your previous solution)
If the maximum flow value has decreased, put the edge back
At the end of this process, the set of edges used will be the lowest in alphabetical order.
Example
For your example of 0010001, 1000000, 0001100 suppose we started with the solution 1000000.
We would first try removing edge 1. The new maximum flow has the same value (now using edges 0010001), so we permanently remove edge 1.
Edge 2 is not included so we can permanently remove it.
Edge 3 is included, so we try removing it and computing the new maximum flow. The new flow has the same value and uses edges 0001100 so we remove edge 3 permanently.
We now try removing edge 4. However, in this case the value of maximum flow decreases so we have to keep edge 4.
Similarly we will find we need to keep edge 5, but can remove edges 6 and 7.
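A minimal sketch of this removal procedure (my own names; for simplicity it recomputes the flow from scratch with Edmonds-Karp instead of warm-starting from the previous solution, as suggested above):

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp max flow on an n x n capacity matrix (input not modified)."""
    cap = [row[:] for row in cap]
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # bottleneck capacity along the path found
        b, v = float('inf'), t
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        # push b units of flow along the path
        v = t
        while v != s:
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b

def lex_min_max_flow_edges(n, edges, s, t):
    """Greedy removal pass from the answer above: returns the set of kept
    edge indices, i.e. S[i] = 1 iff i is in the result."""
    def value(keep):
        cap = [[0] * n for _ in range(n)]
        for i in keep:
            u, v, c = edges[i]
            cap[u][v] += c
        return max_flow(n, cap, s, t)

    keep = set(range(len(edges)))
    target = value(keep)            # the max flow value we must preserve
    for i in range(len(edges)):     # lowest index first
        keep.discard(i)
        if value(keep) < target:    # removing edge i hurts the flow: put it back
            keep.add(i)
    return keep
```

For two parallel unit edges 0 and 1 from s to an intermediate node, the pass removes edge 0 and keeps edge 1, yielding solution 011 rather than 101, which is indeed the lexicographically smaller one.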
You can formulate it as a min cost max flow problem, by assigning exponentially larger costs to edges with smaller indices (e.g. cost 2^(m - i) for edge i out of m edges), so that avoiding a lower-indexed edge always outweighs using any combination of higher-indexed ones.
Related
Consider the following directed graph. For a given n, the vertices of the graph correspond to the
integers 1 through n. There is a directed edge from vertex i to vertex j if i divides j.
Draw the graph for n = 12. Perform a DFS of the above graph with n = 12. Record the discovery
and finish times of each vertex according to your DFS and classify all the edges of the graph into tree, back, forward, and cross edges. You can pick any start vertex (vertices) and any order of visiting the vertices.
I do not see how it is possible to traverse this graph given the included stipulations. It is not possible to get a back edge, because dividing a smaller number by a larger number does not produce an integer, so no such edge can exist.
Say we go by this logic and create a directed graph with the given instructions. Vertex 1 is able to travel to vertex 2, because 2 / 1 is a whole number. However, it is impossible to get to vertex 3, as vertex 2 can only travel to vertex 4, 6, 8, 10, or 12. Since you cannot divide by a bigger number, it will never be possible to visit a lower vertex once you take one of these paths, and therefore it is not possible to reach vertex 3.
Your assumption about the back edges is correct: you cannot have back edges, because there are no cycles. Consider the following example:
When we start at 2, we find the tree edges 2->4, 4->8, 4->12, 2->6 and 2->10. The edges 2->8 and 2->12 are forward edges; they are like shortcuts to get further down the tree faster. The edge 6->12 is a cross edge, because it switches from one branch to another. And since there are no cycles by which we could somehow get back to a previous node, there are no back edges.
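A quick way to check this classification is to run the DFS programmatically. A sketch (my own helper; it assumes adjacency lists in increasing order and restarts the DFS at unvisited vertices to cover the whole forest):

```python
def classify_divisor_graph_edges(n, start):
    # Divisor graph: edge i -> j whenever i divides j (so always i < j).
    adj = {i: [j for j in range(i + 1, n + 1) if j % i == 0]
           for i in range(1, n + 1)}
    disc, fin, tree = {}, {}, set()
    timer = [0]

    def dfs(u):
        timer[0] += 1
        disc[u] = timer[0]
        for v in adj[u]:
            if v not in disc:
                tree.add((u, v))
                dfs(v)
        timer[0] += 1
        fin[u] = timer[0]

    dfs(start)
    for u in range(1, n + 1):       # DFS forest: restart at unvisited vertices
        if u not in disc:
            dfs(u)

    kinds = {}
    for u in adj:
        for v in adj[u]:
            if (u, v) in tree:
                kinds[(u, v)] = 'tree'
            elif disc[u] < disc[v] < fin[v] < fin[u]:
                kinds[(u, v)] = 'forward'   # v is a non-child descendant of u
            elif disc[v] < disc[u] < fin[u] < fin[v]:
                kinds[(u, v)] = 'back'      # impossible here: the graph is acyclic
            else:
                kinds[(u, v)] = 'cross'
    return kinds
```

Starting at 2 with n = 12 reproduces the classification above: 2->8 and 2->12 come out as forward edges, 6->12 as a cross edge, and no edge is ever classified as a back edge.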
Suppose all the edges have positive weights. Then the minimum product spanning tree can be obtained by taking the log of every edge weight and then applying Kruskal or Prim. But if some weights are negative, we can't apply this procedure, since we would need to include an odd number of negative edges, and those edges should have the maximum absolute weight. How to proceed in such a case?
I highly doubt you can modify Prim's algorithm directly for this problem, because negative numbers change it completely. If the result can be made negative, then its absolute value has to be maximized, which means the edges with the highest absolute values have to be used. Hence optimizing a result found by Prim's algorithm on log(abs()) will not work, unless it is impossible to get a negative result, in which case it actually returns the best solution.
This makes the problem a little simpler: we only have to look for the best negative solution, and if we don't find any we use Prim's with log(abs()).
If we assign each vertex a value of 1, then two vertices can be merged by creating a new vertex that has all the edges of both vertices except the one connecting them; its value is the product of the values of the two removed vertices and the connecting edge.
Based on this we can start to simplify by merging all nodes with only one edge. In parallel with each merge step, the removed edge has to be marked as used in the original graph, so that the tree can be reconstructed from the marked edges at the end.
Additionally we can merge all nodes with only positive or only negative edges, removing the edge with the highest absolute value. After merging, the new node can have several connections to the same node; you can discard all but the negative edge and the positive edge with the highest absolute values (so at most 2 edges to the same node). By the way, as soon as we have 2 edges to the same node (following the removal conditions above), we know a solution <= 0 has to exist.
If you end up with one node and its value is negative, the problem was solved successfully; if it is positive, there is no negative solution. If we get a vertex with value 0, we can merge the rest of the nodes in any order. More likely we end up with a highly connected graph where each node has at least one negative and one positive edge. If we have an odd number of negative vertices, then we want to merge the nodes with an even number of negative edges, and vice versa.
Always merge along the edge with the highest absolute value. If the resulting vertex is <= 0, then you have found the best solution. Otherwise it gets complicated: you could look at each unused edge, try to add it, see which edges could be removed to make the result a tree again, consider only those with the opposite sign, and compute the ratio abs(added_edge / removed_edge). Finally, make the change with the best ratio (if you found any combination with opposite signs; otherwise there is no negative solution). But I am not 100% sure this would always give the best result.
Here is a simple solution. If there is at least one negative edge, find the spanning tree that maximizes the log(abs(edge)) sum. Then check whether the actual product (without abs) is negative. If it is negative, output the current spanning tree; otherwise replace one of the positive edges with a negative edge (or a negative with a positive) to flip the sign.
If none of the edges are negative, minimizing for log(edge) sum should work.
Complexity: O(n^2) with a naive solution.
More explanation of the naive algorithm:
Select the edge with the lowest absolute value for removal. Removing this edge splits the tree into two parts. We then go through every pair of vertices between those two sets and pick the reconnecting edge (positive or negative, depending on the case) whose absolute value is largest. The complexity of this part is O(n^2).
We might have to try removing multiple edges to reach the best solution. If we go through every edge, the complexity is O(n^3).
I am very confident this could be improved though.
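Since neither answer is fully certain the swap heuristic is always optimal, a brute-force reference is handy for checking it on tiny graphs. A sketch (my own code; exponential, so only for validation):

```python
from itertools import combinations

def min_product_spanning_tree_bruteforce(edges, n):
    """Exhaustive reference: try every (n-1)-edge subset, keep the ones
    that span all n vertices, and return the minimum product."""
    def spans(subset):
        # union-find connectivity check
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        merged = 0
        for u, v, _ in subset:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                merged += 1
        return merged == n - 1      # n-1 successful unions = spanning tree

    best = None
    for subset in combinations(edges, n - 1):
        if spans(subset):
            p = 1
            for _, _, w in subset:
                p *= w
            if best is None or p < best:
                best = p
    return best
```

On the triangle with weights -2, 3, 4, the minimum product tree is {-2, 4} with product -8, which matches the intuition above: with a negative result available, the edges with the highest absolute values should be used.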
We have a directed weighted graph where an edge between two nodes can have more than one possible cost (more precisely, at most 2 costs). I need a time-dependent variant of Dijkstra's algorithm that can handle two possible ways of getting from one node to another, where the edge cost depends on the time at which we arrive at the source node and on the type of edge we are about to use. When traversing from one node to the other, only one of these edges is picked and its cost is added to the total.
I currently model the two possible costs for an edge as two separate edges between the same nodes.
There is a similar problem I found here, where it was suggested to augment the graph by duplicating the nodes. However, this does not allow returning to the original graph, and it implies the overhead of, well, duplicating all the nodes and possibly the edges between them and the original nodes.
Do you have any suggestions as to how to tackle this problem with as little overhead as possible? (The original graph is expected to be huge)
Thanks
Edit:
I provided more details about the problem in the first paragraph
You can safely ignore the larger of the two costs for the algorithm's purposes.
Assume there is a shortest path that uses the larger cost between two vertices. Then you can change it to use the smaller cost and the path will cost less, which contradicts the assumption.
I think you can hack step 3 of Dijkstra's algorithm:
For the current node, consider all of its unvisited neighbors and calculate their tentative distances. Compare the newly calculated tentative distance to the current assigned value and assign the smaller one. For example, if the current node A is marked with a distance of 6, and the edge connecting it with a neighbor B has length 2, then the distance to B (through A) will be 6 + 2 = 8. If B was previously marked with a distance greater than 8 then change it to 8. Otherwise, keep the current value.
In your setup, you have two distances from A to B, depending on how late it is. You use the second one if your current distance to A is above your time threshold.
This step becomes :
if current distance to A is above the threshold:
    current distance to B = min(current distance to B, current distance to A + d2(A, B))
else:
    current distance to B = min(current distance to B, current distance to A + d1(A, B))
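Put together, a minimal sketch of this hacked Dijkstra (my own names; it assumes a single global threshold and that the tentative distance doubles as the arrival time, i.e. costs are travel times):

```python
import heapq

def time_dependent_dijkstra(n, edges, source, threshold):
    # edges: list of (u, v, d1, d2); d1 applies when the current distance
    # to u is at most the threshold, d2 when it is above it.
    adj = [[] for _ in range(n)]
    for u, v, d1, d2 in edges:
        adj[u].append((v, d1, d2))

    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                        # stale queue entry
        for v, d1, d2 in adj[u]:
            w = d2 if d > threshold else d1  # the hacked "step 3"
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist
```

Note this stays correct only if arriving later never makes an edge cheaper (the usual FIFO assumption for time-dependent shortest paths); otherwise waiting at a node could pay off and plain Dijkstra no longer applies.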
Problem:
You are given a rooted tree where each node is numbered from 1 to N. Initially each node contains some positive value, say X. We are to perform two types of operations on the tree, 100000 operations in total.
First Type:
Given a node nd and a positive integer V, you need to decrease the value of all the nodes: a node at distance d from nd has its value decreased by floor[V / 2^d].
That means the value of node nd itself will be decreased by V (i.e. floor[V / 2^0]), the values of its nearest neighbours will be decreased by floor[V / 2], and so on.
Second Type:
You are given a node nd. You have to tell the number of nodes in the subtree rooted at nd whose value is positive.
Note: the number of nodes in the tree may be up to 100000 and the initial values, X, may be up to 1000000000, but the value V by which the decrement operation is performed will be at most 100000.
How can this be done efficiently? I have been stuck on this problem for many days. Any help is appreciated.
My idea: I am thinking of solving this problem offline. I will store all the queries first. Then, if I can somehow find, for each node, the time (i.e. the operation) after which its value becomes less than or equal to zero (call it the death time), we can do some kind of binary search (probably using Binary Indexed Trees / Segment Trees) to answer all the queries of the second type. But the problem is that I am unable to find the death time for each node.
I have also tried to solve it online using Heavy-Light Decomposition, but I am unable to solve it that way either.
Thanks!
Given a tree with vertex weights, there exists a vertex that, when chosen as the root, has subtrees whose weights are at most half of the total. This vertex is a "balanced separator".
Here's an O((n + k) polylog(n, k, D))-time algorithm, where n is the number of vertices and k is the number of operations and D is the maximum decrease. In the first phase, we compute the "death time" of each vertex. In the second, we count the live vertices.
To compute the death times, first split each decrease operation into O(log(D)) decrease operations whose arguments are powers of two between 1 and 2^floor(lg(D)) inclusive. Do the following recursively. Let v be a balanced separator, where the weight of a vertex is one plus the number of decrease operations on it. Compute distances from v, then determine, for each time and each power of two, the cumulative number of operations on v with that effective argument (i.e., if a vertex at distance 2 from v is decreased by 2^i, then record a -1 change in the 2^(i - 2) coefficient for v). Partition the operations and vertices by subtree. For each subtree, repeat this cumulative summary for operations originating in the subtree, but make the coefficients positive instead of negative. By putting the summary for a subtree together with v's summary, we determine the influence of decrease operations originating outside of the subtree. Finally, we recurse on each subtree.
Now, for each vertex w, we compute the death time using binary search. The decrease operations affecting w are given by a logarithmic number of summaries computed in the manner previously described, so the total cost per vertex is O(log^2).
It sounds as though you, the question asker, know how the next part goes, but for the sake of completeness, I'll describe it. Do a preorder traversal to assign new labels to vertices and also compute for each vertex the interval of labels that comprises its subtree. Initialize a Fenwick tree mapping each vertex to one (live) or zero (dead), initially one. Put the death times and queries in a priority queue. To process a death, decrease the value of that vertex by one. To process a query, sum the values of vertices in the subtree interval.
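For completeness, the counting phase could be sketched like this (my own code; it assumes the death times have already been computed by the first phase):

```python
import heapq

class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)
    def add(self, i, d):
        while i <= self.n:
            self.t[i] += d
            i += i & -i
    def prefix(self, i):
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def count_alive(children, root, death_time, queries):
    """children: node -> list of children; death_time: node -> operation index
    after which the node's value is <= 0 (absent if it never dies);
    queries: list of (time, node). Returns subtree-alive counts in order."""
    label, last, timer = {}, {}, [0]
    def dfs(u):                         # preorder labels + subtree intervals
        timer[0] += 1
        label[u] = timer[0]
        for c in children.get(u, []):
            dfs(c)
        last[u] = timer[0]
    dfs(root)

    fw = Fenwick(timer[0])
    for u in label:
        fw.add(label[u], 1)             # every vertex starts alive

    events = [(t, 0, u) for u, t in death_time.items()]
    events += [(t, 1, i) for i, (t, _) in enumerate(queries)]
    heapq.heapify(events)               # deaths (0) sort before queries (1)

    answers = [None] * len(queries)
    while events:
        t, kind, x = heapq.heappop(events)
        if kind == 0:
            fw.add(label[x], -1)        # vertex x just died
        else:
            _, u = queries[x]           # sum the subtree interval of u
            answers[x] = fw.prefix(last[u]) - fw.prefix(label[u] - 1)
    return answers
```

For a root 1 with children 2 and 3, and 4 under 3, where node 4 dies at time 3 and node 2 at time 5, a query on the whole tree at time 4 counts 3 live vertices, and a query on the subtree of 3 at time 6 counts 1.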
I read this statement in the "Hitchhiker's Guide to Algorithms", but I'm not able to visualize it, since in an LIS problem all we have is a sequence of numbers. How can I model it as a graph problem?
Imagine the problem of a 2D grid. You're on the bottom-left square and you need to get to the top-right square. Can you see a DAG in this scheme?
Now imagine some of the squares are forbidden. Forbidding squares may lead to a 'lock' (you might find yourself trapped), and now choosing which paths to follow actually matters.
In graph terms, you can think of forbidding squares as removing vertices, and your goal is to go from the root to one specific node (the sink).
Now let's go back to the LIS. When solving the LIS, what you actually need to decide is which numbers you pick and which you don't. The restriction is that once you pick a number, every later pick must be greater than it (you pick the numbers in their original order).
Now we can mix both things. Imagine that you'll build a graph out of your sequence of numbers:
Every number will be a node.
Number-node A has an edge to number-node B iff
B comes after A in the sequence
B is greater than A in value
There's a special node end
Every node has an edge to end
Every path on this graph is a valid increasing subsequence. The problem of finding the LIS is now the problem of finding the longest path on this graph.
If we have an array of numbers, say 1, 5, 4, 8, we can construct a DAG like the following.
Add each number as a vertex.
For each number vertex, add outgoing edges of weight 1 to all the greater numbers after it.
Add a node S that has outgoing edges of weight 0 to all the number vertices.
Add a node E that has incoming edges of weight 0 from all the number vertices.
Thus the Longest Increasing Subsequence problem turns into the Longest Path from S to E problem.
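On this DAG the longest path can be computed with a simple DP in index order, which is already a topological order for this construction. A sketch:

```python
def lis_via_longest_path(nums):
    # best[j] = weight of the longest S -> j path in the DAG above
    # (S -> j edges have weight 0, i -> j edges have weight 1).
    n = len(nums)
    if n == 0:
        return 0
    best = [0] * n
    for j in range(n):
        for i in range(j):
            if nums[j] > nums[i]:            # edge i -> j exists
                best[j] = max(best[j], best[i] + 1)
    # j -> E edges have weight 0, so the longest S -> E path is max(best);
    # that path visits max(best) + 1 number vertices, which is the LIS length.
    return max(best) + 1
```

Note the longest path weight is one less than the LIS length, since S -> j edges have weight 0 and each weight-1 edge adds one element beyond the first; for 1, 5, 4, 8 the longest path has weight 2 (e.g. S -> 1 -> 5 -> 8 -> E), giving an LIS of length 3.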