Can you help explain this Held-Karp TSP Pseudocode?

I am trying to implement the Held-Karp algorithm for the Traveling Salesman Problem by following this pseudocode:
(which I found here: https://en.wikipedia.org/wiki/Held%E2%80%93Karp_algorithm#Example.5B4.5D )
I can do the algorithm by hand but am having trouble actually implementing it in code. It would be great if someone could provide an easy-to-follow explanation.
I also don't understand this:
I thought this part was for setting the distance from the starting city to its connected cities. If that was the case, wouldn't it be C({1}, k) := d(1,k) and not C({k}, k) := d(1,k)? Am I just completely misunderstanding this?
I have also heard that this algorithm does not perform very well past about 15-20 cities so for around 40 cities, what would be a good alternative?

Held-Karp is a dynamic programming approach.
In dynamic programming, you break the task into subtasks and use a "dynamic function" to solve larger subtasks using the already computed results of smaller subtasks, until you finally solve your task.
To understand a DP algorithm it's imperative to understand how it defines subtask and dynamic function.
In the case of Held-Karp, the subtask is the following:
For a given set of vertices S and a vertex k   (1 ∉ S, k ∈ S),
C(S, k) is the minimal length of the path that starts at vertex 1, traverses all vertices in S, and ends at vertex k.
Given this subtask definition, it's clear why initialization is:
C({k}, k) := d(1,k)
The minimal length of the path from 1 to k, traversing through {k}, is just the edge from 1 to k.
Next, the "dynamic function".
A side note: a DP algorithm can be written top-down or bottom-up. This pseudocode is bottom-up, meaning it computes smaller tasks first and uses their results for larger tasks. To be more specific, it computes tasks in order of increasing size of the set S, starting from |S| = 1 and going up to |S| = n-1 (i.e. S containing all vertices except 1).
Now, consider a task defined by some S, k. Remember, it corresponds to a path from 1, through S, ending in k.
We break it into:
a path from 1 through all vertices in S except k (that is, S\{k}), ending in some vertex m   (m ∈ S, m ≠ k): C(S\{k}, m)
plus the edge from m to k: d(m,k)
It's easy to see that if we look through all possible ways to break C(S, k) like this and take the minimum among them, we have the answer for C(S, k):
C(S, k) = min over m ∈ S\{k} of ( C(S\{k}, m) + d(m,k) )
Finally, having computed all C(S, k) for |S| = n-1, we check all of them, completing the cycle with the missing edge from k to 1:  d(1,k). The minimal cycle is the final result.
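For reference, here is a minimal bottom-up Python sketch of the recurrence described above (the name held_karp and the frozenset-keyed table are my own illustration, not taken from the pseudocode):

from itertools import combinations

def held_karp(d):
    # d is an n x n distance matrix; vertices are 0..n-1, and vertex 0 plays
    # the role of "vertex 1" in the pseudocode.
    n = len(d)
    # C[(S, k)] = minimal length of a path that starts at 0, traverses all
    # vertices in the frozenset S, and ends at k (0 not in S, k in S).
    C = {}
    for k in range(1, n):
        C[(frozenset([k]), k)] = d[0][k]                 # C({k}, k) := d(1, k)
    for size in range(2, n):                             # |S| = 2 .. n-1
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for k in S:
                C[(S, k)] = min(C[(S - {k}, m)] + d[m][k] for m in S if m != k)
    full = frozenset(range(1, n))
    # close the cycle with the missing edge back to the start
    return min(C[(full, k)] + d[k][0] for k in range(1, n))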
Regarding:
I have also heard that this algorithm does not perform very well past about 15-20 cities so for around 40 cities, what would be a good alternative?
Held-Karp has algorithmic complexity of Θ(n² · 2^n). For n = 40, that is 40² · 2^40 ≈ 1.76 · 10^15, which, I would say, is unfeasible to compute on a single machine in reasonable time.
As David Eisenstat suggested, there are approaches using mixed integer programming that can solve this problem fast enough for N=40.
For example, see this blog post, and this project that builds upon it.
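I can't reproduce the linked posts here, but as a rough illustration of the MIP route, here is a sketch of the classic Miller-Tucker-Zemlin formulation using the pulp package (this is my own generic formulation, not the one from those links; for n = 40 you would typically also want a stronger solver or lazy subtour cuts):

import pulp

def tsp_mtz(dist):
    n = len(dist)
    prob = pulp.LpProblem("tsp", pulp.LpMinimize)
    # x[i, j] = 1 if the tour goes directly from i to j
    x = {(i, j): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
         for i in range(n) for j in range(n) if i != j}
    # u[i] = position of city i in the tour (city 0 is the fixed start)
    u = {i: pulp.LpVariable(f"u_{i}", lowBound=1, upBound=n - 1) for i in range(1, n)}
    prob += pulp.lpSum(dist[i][j] * x[i, j] for (i, j) in x)          # total tour length
    for i in range(n):
        prob += pulp.lpSum(x[i, j] for j in range(n) if j != i) == 1  # leave i once
        prob += pulp.lpSum(x[j, i] for j in range(n) if j != i) == 1  # enter i once
    for i in range(1, n):                                             # MTZ subtour elimination
        for j in range(1, n):
            if i != j:
                prob += u[i] - u[j] + n * x[i, j] <= n - 1
    prob.solve()
    return [(i, j) for (i, j) in x if x[i, j].value() > 0.5]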

Related

Is dynamic programming backtracking with cache

I've always wondered about this. And no books state this explicitly.
Backtracking is exploring all possibilities until we figure out one possibility cannot lead us to a possible solution, in that case we drop it.
Dynamic programming as I understand it is characterized by overlapping sub-problems. So, can dynamic programming be stated as backtracking with a cache (for previously explored paths)?
Thanks
This is one face of dynamic programming, but there's more to it.
For a trivial example, take Fibonacci numbers:
F (n) =
n = 0: 0
n = 1: 1
else: F (n - 2) + F (n - 1)
We can call the above code "backtracking" or "recursion".
Let us transform it into "backtracking with cache" or "recursion with memoization":
F (n) =
n in Fcache: Fcache[n]
n = 0: 0, and cache it as Fcache[0]
n = 1: 1, and cache it as Fcache[1]
else: F (n - 2) + F (n - 1), and cache it as Fcache[n]
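In Python, that memoized version might look like this (a small sketch; the dict Fcache mirrors the cache in the pseudocode):

Fcache = {0: 0, 1: 1}      # base cases, cached up front

def F(n):
    if n in Fcache:        # n in Fcache: return Fcache[n]
        return Fcache[n]
    Fcache[n] = F(n - 2) + F(n - 1)   # compute and cache it as Fcache[n]
    return Fcache[n]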
Still, there is more to it.
If a problem can be solved by dynamic programming, there is a directed acyclic graph of states and dependencies between them.
There is a state that interests us.
There are also base states for which we know the answer right away.
We can traverse that graph from the vertex that interests us to all its dependencies, from them to all their dependencies in turn, etc., stopping to branch further at the base states.
This can be done via recursion.
A directed acyclic graph can be viewed as a partial order on vertices. We can topologically sort that graph and visit the vertices in sorted order.
Additionally, you can find some simple total order which is consistent with your partial order.
Also note that we can often observe some structure on states.
For example, the states can be often expressed as integers or tuples of integers.
So, instead of using generic caching techniques (e.g., associative arrays to store state->value pairs), we may be able to preallocate a regular array which is easier and faster to use.
Back to our Fibonacci example, the partial order relation is just that state n >= 2 depends on states n - 1 and n - 2.
The base states are n = 0 and n = 1.
A simple total order consistent with this order relation is the natural order: 0, 1, 2, ....
Here is what we start with:
Preallocate array F with indices 0 to n, inclusive
F[0] = 0
F[1] = 1
Fine, we have the order in which to visit the states.
Now, what's a "visit"?
There are again two possibilities:
(1) "Backward DP": When we visit a state u, we look at all its dependencies v and calculate the answer for that state u:
for u = 2, 3, ..., n:
F[u] = F[u - 1] + F[u - 2]
(2) "Forward DP": When we visit a state u, we look at all states v that depend on it and account for u in each of these states v:
for u = 1, 2, 3, ..., n - 1:
add F[u] to F[u + 1]
add F[u] to F[u + 2]
Note that in the former case, we still use the formula for Fibonacci numbers directly.
However, in the latter case, the imperative code cannot be readily expressed by a mathematical formula.
Still, in some problems, the "forward DP" approach is more intuitive (no good example for now; anyone willing to contribute it?).
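To make the two visiting orders concrete, here are both variants for the Fibonacci example as small Python sketches (assuming n ≥ 1):

def fib_backward(n):
    # (1) "Backward DP": visit state u and pull from its dependencies u-1, u-2.
    F = [0] * (n + 1)
    F[1] = 1
    for u in range(2, n + 1):
        F[u] = F[u - 1] + F[u - 2]
    return F[n]

def fib_forward(n):
    # (2) "Forward DP": visit state u and push its value into u+1 and u+2.
    F = [0] * (n + 1)
    F[1] = 1
    for u in range(1, n):
        F[u + 1] += F[u]
        if u + 2 <= n:
            F[u + 2] += F[u]
    return F[n]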
One more use of dynamic programming which is hard to express as backtracking is the following: Dijkstra's algorithm can be considered DP, too.
In the algorithm, we construct the optimal paths tree by adding vertices to it.
When we add a vertex, we use the fact that the whole path to it - except the very last edge in the path - is already known to be optimal.
So, we actually use an optimal solution to a subproblem - which is exactly the thing we do in DP.
Still, the order in which we add vertices to the tree is not known in advance.
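For completeness, here is a compact heap-based Dijkstra sketch (the standard version, not tied to any particular source); the DP flavour is that each popped vertex extends an already-optimal sub-path by one edge:

import heapq

def dijkstra(adj, source):
    # adj[u] is a list of (v, weight) pairs; returns shortest distances from source.
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)                      # the optimal path to u is now known
        for v, w in adj.get(u, []):
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w          # optimal sub-path to u, plus the edge (u, v)
                heapq.heappush(heap, (d + w, v))
    return dist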
No. Or rather sort of.
In backtracking, you go down and then back up each path. However, dynamic programming works bottom-up, so you only get the going-back-up part, not the original going-down part. Furthermore, the order in dynamic programming is more breadth-first, whereas backtracking is usually depth-first.
On the other hand, memoization (dynamic programming's very close cousin) does very often work as backtracking with a cache, as you described.
Yes and no.
Dynamic Programming is basically an efficient way to implement a recursive formula, and top-down DP is often actually done with recursion + a cache:
cache = {}

def f(x):
    if x in cache:
        return cache[x]
    res = ...  # do something with f(x - k)
    cache[x] = res
    return res
Note that bottom-up DP is implemented completely differently, however - but it still pretty much follows the basic principles of the recursive approach, and at each step 'calculates' the recursive formula on the smaller (already known) sub-problems.
However, in order to be able to use DP, the problem needs to have certain characteristics - mainly, an optimal solution to the problem must consist of optimal solutions to its sub-problems. An example where this holds is the shortest-path problem (an optimal path from s to t that goes through u must contain an optimal path from s to u).
This property does not hold for some other problems, such as Vertex Cover or the Boolean Satisfiability Problem, and thus you cannot replace the backtracking solution for them with DP.
No. What you call backtracking with cache is basically memoization.
In dynamic programming, you go bottom-up. That is, you start from a place where you don't need any subproblems. In particular, when you need to calculate the nth step, all the n-1 steps are already calculated.
This is not the case for memoization. Here, you start off from the kth step (the step you want) and go on solving the previous steps wherever required. And obviously keep these values stored somewhere so that you may access these later.
All that being said, there is no difference in asymptotic running time between memoization and dynamic programming.

A greedy or dynamic algorithm to subset selection

I have a simple algorithmic question. I would be grateful if you could help me.
We have some 2-dimensional points, each with a positive weight associated to it (a sample problem is attached). We want to select a subset of them that maximizes the total weight, such that no two selected points overlap each other (for example, in the attached file, we cannot select both A and C because they are in the same row, and in the same way we cannot select both A and B because they are in the same column). Is there any greedy (or dynamic) approach I can use? I'm aware of the non-overlapping interval selection algorithm, but I cannot use it here because my problem is 2-dimensional.
Any reference or note is appreciated.
Regards
Attachment:
A simple sample of the problem:
A (30$) -------- B (10$)
|
|
|
|
C (8$)
If you are OK with a good solution, and do not demand the best solution, you can use heuristic algorithms to solve this.
Let S be the set of points, and w(s) the weight function.
Create a weight function W : 2^S -> R (from the subsets of S to the real numbers):
W(U) = -INFINITY if the selection U is not feasible,
W(U) = Sigma of w(u) over all u in U otherwise.
Also create a function next : 2^S -> 2^(2^S) (a function that gets a subset of S and returns a set of subsets of S):
next(U) = { V : V can be obtained from U by adding or removing one element }
Now, given that data - you can invoke any optimization algorithm in the Artificial Intelligence book, such as Genetic Algorithm or Hill Climbing.
For example, Hill Climbing with random restarts will be something like this (see the Python sketch after the pseudocode):
1. best <- -INFINITY
2. while there is more time
3.   choose a random subset s
4.   NEXT <- next(s)
5.   if max{ W(v) | v in NEXT } < W(s): // s is a local maximum
5.1.   if W(s) > best: best <- W(s)     // if s is better than the previous result, store it
5.2.   go to 2.                         // restart the hill climbing from a different random point
6.   else:
6.1.   s <- argmax{ W(v) | v in NEXT }
6.2.   go to 4.
7. return best // when out of time, return the best solution found so far
The above algorithm is anytime - meaning it will produce better results if given more time.
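A rough Python sketch of that hill climbing with random restarts (the names points, weight and feasible are placeholders you would supply, points is assumed non-empty, and the time budget is just an iteration count here):

import random

def hill_climb(points, weight, feasible, restarts=1000):
    def W(subset):
        return sum(weight[p] for p in subset) if feasible(subset) else float("-inf")

    def neighbours(subset):
        # add or remove a single element
        return [subset ^ {p} for p in points]

    best = float("-inf")
    for _ in range(restarts):                              # "while there is more time"
        s = {p for p in points if random.random() < 0.5}   # random subset
        while True:
            nxt = max(neighbours(s), key=W)
            if W(nxt) <= W(s):                             # s is a local maximum
                best = max(best, W(s))
                break                                      # restart from a new random subset
            s = nxt
    return best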
This can be treated as a linear assignment problem, which can be solved using an algorithm like the Hungarian algorithm. The algorithm tries to minimize the sum of costs, so just negate your weights, and use them as the costs. The assignment of rows to columns will give you the subset of points that you need. There are sparse variants for cases where not every (row,column) pair has an associated point, but you can also just use a large positive cost for these.
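A small sketch of that view using scipy (linear_sum_assignment handles rectangular matrices, and with a recent scipy you can pass maximize=True instead of negating the weights, which is what I do below; the points dict layout is just my assumption):

import numpy as np
from scipy.optimize import linear_sum_assignment

def max_weight_selection(points):
    # points maps (row, col) -> weight; missing pairs get weight 0, so the
    # assignment is free to leave those rows/columns effectively unused.
    rows = sorted({r for r, _ in points})
    cols = sorted({c for _, c in points})
    w = np.zeros((len(rows), len(cols)))
    for (r, c), value in points.items():
        w[rows.index(r), cols.index(c)] = value
    ri, ci = linear_sum_assignment(w, maximize=True)
    return [(rows[i], cols[j]) for i, j in zip(ri, ci) if w[i, j] > 0]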
Well, you can think of this as a binary constraint optimization problem, and there are various algorithms. The easiest algorithm for this problem is backtracking and arc propagation. However, it takes exponential time in the worst case. I am not sure if there are any specific algorithms that take advantage of the geometrical nature of the problem.
This can be solved by a pretty straightforward dynamic programming approach with an exponential time complexity:
s = {A, B, C, ...}
getMaxSum(s) = max( A.value + getMaxSum(compatibleSubSet(s, A)),
                    B.value + getMaxSum(compatibleSubSet(s, B)),
                    ... )
where compatibleSubSet(s, A) gets the subset of s that does not overlap with A
To optimize it, you can memoize the result for each subset.
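A memoized version of that recurrence might look like this in Python (my own sketch; representing points as (row, col, value) triples and the remaining set as a bitmask is just one possible choice of state):

from functools import lru_cache

def max_weight(points):
    # points is a list of (row, col, value) triples
    n = len(points)

    def compatible(mask, i):
        # drop point i itself and every point sharing its row or column
        r, c, _ = points[i]
        out = mask & ~(1 << i)
        for j in range(n):
            if (out >> j) & 1 and (points[j][0] == r or points[j][1] == c):
                out &= ~(1 << j)
        return out

    @lru_cache(maxsize=None)
    def best(mask):        # getMaxSum over the points still available in mask
        return max((points[i][2] + best(compatible(mask, i))
                    for i in range(n) if (mask >> i) & 1),
                   default=0)

    return best((1 << n) - 1)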
One way to do it:
Write a function that generates subsets ordered from the subset of maximum weight to the subset of minimum weight, while ignoring the constraints.
Then call this function repeatedly until a subset that honors the constraints pops up.
In order to improve the performance, you can write a not so dumb generator function that for instance honors the not-on-the-same-row constraint but that ignores the not-on-the-same-column one.
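A naive version of that idea in Python (enumerating every subset up front is only workable for small inputs; honors_constraints is a placeholder check you would supply):

from itertools import combinations

def subsets_by_weight(points, weight):
    # enumerate every subset, ignoring the constraints, heaviest first
    all_subsets = [c for r in range(len(points) + 1)
                     for c in combinations(points, r)]
    all_subsets.sort(key=lambda s: sum(weight[p] for p in s), reverse=True)
    yield from all_subsets

def first_feasible(points, weight, honors_constraints):
    # call the generator repeatedly until a subset honoring the constraints pops up
    for subset in subsets_by_weight(points, weight):
        if honors_constraints(subset):
            return subset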

Skipping more than one node in Floyd's cycle finding Algorithm

Today I was reading Floyd's algorithm of detecting loop in a linked list.
I was just wondering: wouldn't it be better to skip more than one node (say 2) for faster loop detection?
For example:
fastptr=fastptr->next->next->next.
Note that the side effects will be taken into account while changing fastptr.
Your suggestion is still correct, but it doesn't change the speed of the algorithm. If you take a look at the tortoise and hare algorithm on the wiki:
The key insight in the algorithm is that, for any integers i ≥ μ and k ≥ 0, x_i = x_(i + kλ), where λ is the length of the loop to be found and μ is the start position of the loop. In particular, whenever i = kλ ≥ μ, it follows that x_i = x_(2i).
In the bolded part, you could also say x_i = x_(3i), or use any other coefficient, but the key insight is finding such an i; it does not matter with how many jumps you find it, and the order of the algorithm depends on the location of i.
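To make that concrete, here is a small sketch of the variant from the question, where the fast pointer advances three nodes per iteration (assuming list nodes with a next attribute; the null checks are presumably the "side effects" the question mentions):

def has_cycle(head, step=3):
    slow = fast = head
    while True:
        slow = slow.next if slow else None
        for _ in range(step):              # fastptr = fastptr->next->next->next
            fast = fast.next if fast else None
        if slow is None or fast is None:
            return False                   # reached the end of the list: no loop
        if slow is fast:
            return True                    # the pointers met inside the loop

The loop is still detected in O(μ + λ) iterations, but each iteration now does more pointer moves, which is why the overall speed doesn't really change.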

Selecting k sub-posets

I ran into the following algorithmic problem while experimenting with classification algorithms. Elements are classified into a polyhierarchy, which I understand to be a poset with a single root. I have to solve the following problem, which looks a lot like the set cover problem.
I uploaded my Latex-ed problem description here.
Devising an approximation algorithm that satisfies 1 & 2 is quite easy: just start at the vertices of G and "walk up", or start at the root and "walk down". Say you start at the root: iteratively expand vertices and then remove unnecessary vertices until you have at least k sub-lattices. The approximation bound depends on the number of children of a vertex, which is OK for my application.
Does anyone know if this problem has a proper name, or maybe the tree version of the problem? I would be interested to find out if this problem is NP-hard; maybe someone has ideas for a good NP-hard problem to reduce from, or has a polynomial algorithm to solve the problem. If you have both, collect your million dollar prize. ;)
The DAG version is hard by (drum roll) a reduction from set cover. Set k = 2 and do the obvious: condition (2) prevents us from taking the root. (Note that (3) doesn't actually imply (2) because of the lower bound k.)
The tree version is a special case of the series-parallel poset version, which can be solved exactly in polynomial time. Here's a recursive formula that gives a polynomial p(x) where the coefficient of x^n is the number of covers of cardinality n.
Single vertex to be covered: p(x) = x.
Other vertex: p(x) = 1 + x.
Parallel composition, where q and r are the polynomials for the two posets: p(x) = q(x) r(x).
Series composition, where q is the polynomial for the top poset and r, for the bottom: If the top poset contains no vertices to be covered, then p(x) = (q(x) - 1) + r(x); otherwise, p(x) = q(x).

Point covering problem

I recently had this problem on a test: given a set m of points (all on the x-axis) and a set n of line segments with endpoints [l, r] (again on the x-axis), find the minimum subset of n such that all points are covered by some segment. Prove that your solution always finds the minimum subset.
The algorithm I wrote for it was something to the effect of:
(say lines are stored as arrays with the left endpoint in position 0 and the right in position 1)
algorithm coverPoints(set[] m, set[][] n):
    chosenLines = []
    while m is not empty:
        minX = min(m)
        bestLine = n[0]
        for i=1 to length of n:
            if n[i][0] <= minX and n[i][1] > bestLine[1] then
                bestLine = n[i]
        add bestLine to chosenLines
        for i=0 to length of m:
            if m[i] <= bestLine[1] then delete m[i] from m
    return chosenLines
I'm just not sure if this always finds the minimum solution. It's a simple greedy algorithm so my gut tells me it won't, but one of my friends who is much better than me at this says that for this problem a greedy algorithm like this always finds the minimal solution. For proving mine always finds the minimal solution I did a very hand wavy proof by contradiction where I made an assumption that probably isn't true at all. I forget exactly what I did.
If this isn't a minimal solution, is there a way to do it in less than something like O(n!) time?
Thanks
Your greedy algorithm IS correct.
We can prove this by showing that ANY other covering can only be improved by replacing it with the cover produced by your algorithm.
Let C be a valid covering for a given input (not necessarily an optimal one), and let S be the covering produced by your algorithm. Now let's inspect the points p1, p2, ..., pk that represent the min points you deal with at each iteration step. The covering C must cover them all as well. Observe that there is no segment in C covering two of these points; otherwise, your algorithm would have chosen this segment! Therefore, |C| >= k. And what is the cost (segment count) of your algorithm? |S| = k.
That completes the proof.
Two notes:
1) Implementation: Initializing bestLine with n[0] is incorrect, since the loop may be unable to improve it, and n[0] does not necessarily cover minX.
2) Actually this problem is a simplified version of the Set Cover problem. While the original is NP-complete, this variation turns out to be polynomial.
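For reference, a small Python sketch of the greedy with note (1) applied, i.e. only considering segments that actually cover the current leftmost uncovered point (names are illustrative):

def cover_points(points, segments):
    # points: list of x-coordinates; segments: list of (l, r) pairs
    remaining = sorted(points)
    chosen = []
    while remaining:
        min_x = remaining[0]
        # only segments that actually cover the leftmost uncovered point
        candidates = [s for s in segments if s[0] <= min_x <= s[1]]
        if not candidates:
            raise ValueError("no segment covers point %r" % min_x)
        best = max(candidates, key=lambda s: s[1])   # reach as far right as possible
        chosen.append(best)
        remaining = [p for p in remaining if p > best[1]]
    return chosen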
Hint: first try proving your algorithm works for sets of size 0, 1, 2... and see if you can generalise this to create a proof by induction.
