Maximum weighted pairing algorithm for complete graph

The mathematical problem
Let there be 2n persons, and let C(i,j) be the "cost" of having i and j work together (the function C is quick to compute; in my case it is a given matrix, and it is symmetric). The question is to find the arrangement of the 2n persons into n pairs that minimizes the sum of the costs of the pairs.
This should be done in polynomial complexity in n, and implemented relatively easily in the Scilab language (input: cost matrix, output: pairings, for instance an n-by-2 matrix of indexes). I am aware that "relatively easily" is subject to interpretation...
Previous research
This problem is actually solved by the Blossom algorithm. See for instance this paper.
However, this (and its variants) looks like a nightmare to implement. My real problem is for n=20, so although brute force (= trying all possible pairings) is not OK (brute-forcing n=8 took an hour on my computer), pretty much anything better than brute force should do the trick; if I can avoid one week of coding at the cost of one hour of computation I'm in.
I was thinking along the lines of using the Hungarian/Munkres algorithm on a 2n-by-2n array, filling the diagonal with +%inf and the other elements with the symmetric cost matrix, then somehow selecting a relevant pairing from the resulting permutation, but I fail to find a reliable way to do this. (Note: the Hungarian algorithm is already coded for a separate section, so you may use it without cost to the "easy to implement" requirement.)
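For what it's worth, here is a quick sketch of that construction using SciPy's Hungarian-style solver (scipy.optimize.linear_sum_assignment) rather than Scilab, on a made-up random cost matrix; it mainly illustrates the difficulty: the permutation returned by the assignment solver need not be symmetric (i may be sent to j while j is sent to someone else), so it does not directly give a pairing.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n2 = 8                                   # 2n persons (made-up size)
A = rng.random((n2, n2))
cost = (A + A.T) / 2                     # symmetric stand-in for the real matrix

C = cost.copy()
np.fill_diagonal(C, cost.sum() + 1.0)    # large finite value instead of +%inf
rows, cols = linear_sum_assignment(C)    # Hungarian-style optimal assignment
print(dict(zip(rows.tolist(), cols.tolist())))
# Nothing forces cols[cols[i]] == i, so the assignment is generally not a pairing.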
I hope that compared to the blossom-algorithm problem, the completeness of the graph allows for some shortcuts... (Edit: see DE's comment below, this is wrong for semi-obvious reasons)

I do not know Scilab, I am afraid, but if you are willing to use Python it is very easy, as the NetworkX library provides this function:
import networkx as nx
import networkx.algorithms.matching as matching

def C(i, j):
    return i * j

n = 40
G = nx.Graph()
for i in range(n):
    for j in range(i + 1, n):                 # skip i == j: no self-loops
        G.add_edge(i, j, weight=-C(i, j))

# maxcardinality=True forces a perfect matching (everyone gets paired)
M = matching.max_weight_matching(G, maxcardinality=True)
for i, j in M:
    print(i, 'with', j)
This code prints out the answer within a second.
The function C defines the cost of pairing i with j. Note that the weights are set to -C(i,j) in order to transform the max_weight_matching into a min_weight_matching algorithm.
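To match the question's input/output spec (a symmetric 2n-by-2n cost matrix in, an n-by-2 array of index pairs out), a hedged NumPy wrapper around the same trick might look like this; the random matrix is just a stand-in for the real data:

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n2 = 8                                        # 2n persons (made-up size)
A = rng.random((n2, n2))
cost = (A + A.T) / 2                          # symmetric stand-in cost matrix

G = nx.Graph()
for i in range(n2):
    for j in range(i + 1, n2):
        G.add_edge(i, j, weight=-cost[i, j])  # negate: max matching == min total cost

M = nx.max_weight_matching(G, maxcardinality=True)
pairs = np.array(sorted(tuple(sorted(e)) for e in M))   # n-by-2 array of indexes
print(pairs)
print("total cost:", sum(cost[i, j] for i, j in pairs))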

Related

What's the worst-case valid sudoku puzzle for simple backtracking brute force algorithm?

The "simple/naive backtracking brute force algorithm", "Straightforward Depth-First Search" for sudoku is commonly known and implemented.
and no different implementation seems to exist.
(when i first wrote this question.. i wanted to mean we could completely standardize it, but the wording is bad..)
This guy has described the algorithm well i think: https://stackoverflow.com/a/2075498/3547717
Edit: So let me specify it with pseudocode...
var field[9][9]
set the givens in 'field'

if brute(first empty grid) = true then
    output solution
else
    output no solution
end if

function brute(cx, cy)
    for n = 1 to 9
        if (n is not present in row cy) and (n is not present in column cx) and (n is not present in block (cx div 3, cy div 3)) then
            let field[cx][cy] = n
            if (cx, cy) is the last empty grid then
                return true
            elseif brute(next empty grid) = true then
                return true
            end if
            let field[cx][cy] = empty
        end if
    next n
    return false
end function
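For reference, here is a rough Python translation of this pseudocode with a global counter for the "cycles" measure used below. It is only a sketch: it finds the first solution without checking uniqueness, pure Python is of course far slower than the linked C implementation, and the example puzzle is the easy one quoted later in this post (81-character dot notation).

cycles = 0

def solve(field):
    """Naive depth-first search: find the first empty cell (row-major), try 1..9 in order."""
    global cycles
    cycles += 1
    try:
        k = field.index(0)                    # first empty cell
    except ValueError:
        return True                           # no empty cell left: solved
    r, c = divmod(k, 9)
    for n in range(1, 10):
        if all(field[r * 9 + j] != n for j in range(9)) \
           and all(field[i * 9 + c] != n for i in range(9)) \
           and all(field[(r // 3 * 3 + i) * 9 + (c // 3 * 3 + j)] != n
                   for i in range(3) for j in range(3)):
            field[k] = n
            if solve(field):
                return True
            field[k] = 0
    return False

puzzle = "..53.....8......2..7..1.5..4....53...1..7...6..32...8..6.5....9..4....3......97.."
field = [0 if ch == '.' else int(ch) for ch in puzzle]
print(solve(field), cycles, "cycles")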
I want to find the puzzle that requires the most time. We may call it the "hardest" for this particular "standardized" algorithm, but this is not like those questions asking for the "hardest sudoku".
In fact, a "hard" puzzle under this definition may turn super easy when simply rotated or flipped.
Since the rule is "for each grid try numbers 1 to 9", the solver always tries from 1 upward, so we may make it try more by choosing the numbers properly; by the way, there won't be a permutation problem.
The sudoku puzzle must be valid, i.e. it should have exactly one solution. Someone found a puzzle requiring 1439 seconds, but it's not valid because it has no solution.
I define the time required (or, say, the time complexity) as the number of times the recursive function is entered. (In my implementation it's slightly different from the pseudocode above, because of the last entrance, ensuring a unique solution, etc.)
Is there any good way to construct such a puzzle, or do we have to settle for approximate ones found by heuristic algorithms?
I've implemented backtracking with both the naive strategy (what I referred to as "simple" above; it's unique) and Peter Norvig's "Least Candidates First" (LCF) strategy (my implementation is deterministic, but not unique: as Peter has also mentioned, in case of a tie on the number of candidates, the iteration order of a Python dict changes the result a lot).
https://github.com/farteryhr/labs/blob/master/sudoku.c
The no-solution one:
.....5.8....6.1.43..........1.5........1.6...3.......553.....61........4.........
takes 60 seconds on my laptop to reach the no-solution conclusion, entering the recursion function 2549798781 times (called "cycles" below). With my implementation of LCF, it takes 78308087 cycles in 30 seconds to conclude. That's because finding the cell with the fewest candidates needs more operations: a single cycle of the LCF strategy takes about 16x more time.
The topmost one on the Hardest list:
4.....8.5.3..........7......2.....6.....8.4......1.......6.3.7.5..2.....1.4......
takes 3.0s; the solution is found at cycle 9727397, and 142738236 cycles are needed to ensure the solution is unique. (My LCF: 981/7216 in 0.004s.)
Many in the "hard" list are still easy for naive, though a larger portion of them needs 10^7 to 10^9 cycles.
On Wikipedia: Sudoku solving algorithms (Original) it's stated that such puzzles working against the backtracking algorithm can be constructed by leaving as many cells empty at the beginning as possible and making the top row the permutation 987654321.
Well the test..
..............3.85..1.2.......5.7.....4...1...9.......5......73..2.1........4...9
takes 1.4s: 69175317 cycles to find the solution, 69207227 cycles to ensure the solution is unique. Not as good as the hard one provided by Peter, but OK, and the search ends almost immediately after the solution is found. That's probably the effect of the first row being lexicographically large. (My LCF: 29206/46160 in 0.023s.)
Yes these are obvious, I'm just asking for better ways...
There are also other ways of measuring the difficulty of Sudoku (through solving)
Sudoku Analyst will get stuck with the multiple-solution puzzle given by Peter (naive 419195/419256, LCF 2529478/2529482, yes, there are some puzzles that make LCF do worse):
.....6....59.....82....8....45........3........6..3.54...325..6..................
This one is easy for both naive backtracking (10008/76703) and LCF backtracking (313/1144), but also gets Sudoku Analyst stuck.
..53.....8......2..7..1.5..4....53...1..7...6..32...8..6.5....9..4....3......97..
Another update:
The most difficult Sudoku puzzles are quickly solved by a straightforward depth-first search algorithm
Ha, finally someone also looking for it, and a super tough one is given! The following valid puzzle:
9..8...........5............2..1...3.1.....6....4...7.7.86.........3.1..4.....2..
In this paper the algorithm is named SDFS, Straightforward Depth-First Search. The number of cycles stated by the author is 1553023932/1884424814, and with my implementation it's 1305263522/1584688020. Yes, there will be some difference depending on precisely where the counter is incremented, but the basic behavior matches. On repl.it's server it took 97s to find the answer and 119s to finish the search.
You can easily generate worst cases by recording the time / number of operations your code takes to solve hard sudoku puzzles. You can either use a random generator that produces valid sudoku puzzles, or take hard sudoku puzzles from the internet and run your code against them to measure the time/number of operations. Once you have run your code against 10000 such cases, the slowest 5 (and the unsolved ones) would be the worst cases for your solution.

Suggestions for fragment proposal algorithm

I'm currently trying to solve the following problem, but am unsure which algorithm I should be using. It's in the area of mass identification.
I have a series of "weights", w_i, which can sum up to a total weight. The as-measured total weight has an error associated with it, so it is inexact.
I need to find, given the total weight T, the closest k possible combinations of weights that can sum up to the total, where k is an input from the user. Each weight can be used multiple times.
Now, this sounds suspiciously like the bounded-integer multiple knapsack problem, however
it is possible to go over the weight, and
I also want all of the ranked solutions in terms of error
I can probably solve it using multiple sweeps of the knapsack problem, from weight-error->weight+error, by stepping in small enough increments, however it is possible if the increment is too large to miss certain weight combinations that could be used.
The number of weights is usually small (4 to 10 weights), and the ratio of the total weight to the mean weight is usually around 2 or 3.
Does anyone know the names of an algorithm that might be suitable here?
Your problem effectively resembles the knapsack problem, which is an NP-complete problem.
For a really limited number of weights, you could run over every combination with repetition and then sort, which gives you quite a high number of manipulations; at best (n + k - 1)! / ((n - 1)! · k!) for the combinations and n·log(n) for the sorting part.
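For the small instances described in the question (4 to 10 weights, totals only a few times the mean weight), a direct enumeration along those lines is plausible. The sketch below assumes positive weights; the names, tolerance and example data are made up for illustration.

def closest_combinations(weights, T, tol, k):
    """Enumerate every multiset of (positive) weights whose sum lies within
    T +/- tol, then return the k combinations closest to the measured total T."""
    results = []                                  # (absolute error, counts per weight)

    def recurse(i, counts, total):
        if i == len(weights):
            if total > 0 and abs(total - T) <= tol:
                results.append((abs(total - T), tuple(counts)))
            return
        if total > T + tol:                       # prune: already too heavy
            return
        recurse(i + 1, counts, total)             # done adding copies of weights[i]
        counts[i] += 1                            # use one more copy of weights[i]
        recurse(i, counts, total + weights[i])
        counts[i] -= 1

    recurse(0, [0] * len(weights), 0.0)
    results.sort()                                # smallest error first
    return results[:k]

# counts are per weight: e.g. (1, 0, 0, 1) means one 1.0 and one 7.1
print(closest_combinations([1.0, 2.5, 3.3, 7.1], T=12.0, tol=1.0, k=5))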
Solving this kind of problem in a reasonable amount of time is best done by evolutionary algorithms nowadays.
If you take the following example from deap, an evolutionary algorithm framework in Python:
ga_knapsack.py, you will see that by modifying lines 58-59, which automatically discard an overweight solution, into something smoother (a linear penalty, for instance), it will give you solutions close to the optimal one in less time than brute force. Solutions are already sorted for you at the end, as you requested.
As a first attempt I'd go for constraint programming (but then I almost always do, so take the suggestion with a pinch of salt):
Given W = w_1, ..., w_i for the weights, E = e_1, ..., e_i for the errors (you can also make them asymmetric), and T.
Find all sets S (if the weights are unique, otherwise a list) such that the sum (w_1 + e_1) + ... + (w_k + e_k) (where w_1, ..., w_k ∈ W and e_1, ..., e_k ∈ E) ≈ T within some delta which you derive from k, or just set the delta to some reasonably large value and decrease it as you are solving the constraints.
I just realise that you also want to parametrise the expression w_n op e_m over op ∈ {+, -} (any combination of weights and error terms), and off the top of my head I don't know which constraint solver would allow you to do that. In any case, you can always fall back to Prolog. It may not fly, especially if you have a lot of weights, but it will give you solutions quickly.

Efficient Computation of The Least Fixed Point of A Polynomial

Let P(x) denote the polynomial in question. The least fixed point (LFP) of P is the lowest value of x such that x=P(x). The polynomial has real coefficients. There is no guarantee in general that an LFP will exist, although one is guaranteed to exist if the degree is odd and ≥ 3. I know of an efficient solution if the degree is 3. x=P(x) thus 0=P(x)-x. There is a closed-form cubic formula, solving for x is somewhat trivial and can be hardcoded. Degrees 2 and 1 are similarly easy. It's the more complicated cases that I'm having trouble with, since I can't seem to come up with a good algorithm for arbitrary degree.
EDIT:
I'm only considering real fixed points and taking the least among them, not necessarily the fixed point with the least absolute value.
Just solve f(x) = P(x) - x using your favorite numerical method. For example, you could iterate
x_{n + 1} = x_n - (P(x_n) - x_n) / (P'(x_n) - 1).
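A minimal sketch of that iteration (coefficients given in increasing degree and evaluated with Horner's rule; the starting point and example polynomial are made-up, and as noted just below, a single Newton run only converges to some fixed point, not necessarily the least one):

def poly(coeffs, x):
    """Evaluate sum(coeffs[k] * x**k) with Horner's rule."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def dpoly(coeffs, x):
    """Evaluate the derivative of the same polynomial."""
    r = 0.0
    for k in range(len(coeffs) - 1, 0, -1):
        r = r * x + k * coeffs[k]
    return r

def fixed_point_newton(coeffs, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = (poly(coeffs, x) - x) / (dpoly(coeffs, x) - 1.0)
        x -= step
        if abs(step) < tol:
            return x
    return x    # may not have converged

# Example: P(x) = x^3 - 2x + 1, coefficients in increasing degree
print(fixed_point_newton([1.0, -2.0, 0.0, 1.0], x0=0.0))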
You won't find a closed-form formula in general, because there is no closed-form formula for quintic and higher-degree polynomials. Thus, for quintic and higher degree you have to use a numerical method of some sort.
Since you want the least fixed point, you can't get away without finding all real roots of P(x) - x and selecting the smallest.
Finding all the roots of a polynomial is a tricky subject. If you have a black box routine, then by all means use it. Otherwise, consider the following trick:
Form M the companion matrix of P(x) - x
Find all eigenvalues of M
but this requires you have access to a routine for finding eigenvalues (which is another tricky problem, but there are plenty of good libraries).
Otherwise, you can implement the Jenkins-Traub algorithm, which is a highly non-trivial piece of code.
I don't really recommend finding a zero (with e.g. Newton's method) and deflating until you reach degree one: it is very unstable if not done properly, and you'll lose a lot of accuracy (and it is very difficult to handle multiple roots with it). The proper way to do it is, in fact, the above-mentioned Jenkins-Traub algorithm.
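A hedged sketch of the companion-matrix route using NumPy (numpy.roots builds the companion matrix and finds its eigenvalues internally); the imaginary-part tolerance and the example polynomial are assumptions:

import numpy as np

def least_fixed_point(coeffs, imag_tol=1e-9):
    """coeffs: coefficients of P in decreasing degree, e.g. [1, 0, -2, 1] for x^3 - 2x + 1.
    Returns the smallest real x with P(x) = x, or None if there is no real fixed point."""
    c = np.array(coeffs, dtype=float)
    c[-2] -= 1.0                      # subtract x, giving P(x) - x
    roots = np.roots(c)               # eigenvalues of the companion matrix of P(x) - x
    real = roots[np.abs(roots.imag) < imag_tol].real
    return real.min() if real.size else None

print(least_fixed_point([1, 0, -2, 1]))   # least real solution of x^3 - 2x + 1 = x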
This problem is trying to find the "least" (here I'm not sure if you mean in magnitude or actually the smallest, which could be the most negative) root of a polynomial. There is no closed form solution for polynomials of large degree, but there are myriad numerical approaches to finding roots.
As is often the case, Wikipedia is a good place to begin your search.
If you want to find the smallest root, then you can use Descartes' rule of signs to pin down the interval where it exists and then use some numerical method to find roots in that interval.

Find the priority function / alphabet order for extreme higher order elements relation

This question is an extension to the following one. The difference is that now our function to optimize will have higher order relations between elements:
We have an array of elements a1, a2, ..., aN from an alphabet E, assuming N >> |E|.
For each symbol of the alphabet we define a unique integer priority = V(sym). Let's define V{i} := V(symbol(ai)) for simplicity.
The task is to find a priority function V for which:
Count(i)->MIN | V{i} > V{i+1} <= V{i+2}
In other words, I need to find the priorities / permutation of the alphabet for which the number of positions i, satisfying the condition V{i}>V{i+1}<=V{i+2}, is minimum.
Maximum required abstraction (low priority for me). I guess once the solution model for the initial question is extended to cover the first part of this one, extending it farther (see below) will be easier.
Given a matrix of signs B of size MxK (basically B[i,j] is from the set {<,>,<=,>=}), find the priority function V for which:
Sum(for all j in range [1,M]) {Count(i)}->EXTREMUM | V{i} B[j,1] V{i+1} B[j,2] ... B[j,K] V{i+K}
As an example, find the priority function V, for which the number of i, satisfying V{i}<V{i+1}<V{i+2} or V{i}>V{i+1}>V{i+2}, is minimum.
My intuition is that all variations on this problem will prove to be NP-hard. So I'd begin looking for heuristics that produce reasonable answers. This may involve some trial and error.
A simplistic approach is to write down a possible permutation and then try possible swaps until you've arrived at a local minimum. Try several times, and pick the best answer.
Simulated annealing provides a more sophisticated version of this approach, see http://en.wikipedia.org/wiki/Simulated_annealing for a description. It may take some experimentation to find a set of parameters that seems to converge relatively well.
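A hedged hill-climbing sketch of that idea for the concrete objective above (minimise the number of positions i with V{i} > V{i+1} <= V{i+2}); the input array and alphabet size are made-up placeholders, and a full simulated-annealing version would additionally accept some worsening swaps with a temperature-dependent probability:

import random

def violations(a, V):
    """Count positions i with V[a[i]] > V[a[i+1]] <= V[a[i+2]]."""
    return sum(1 for i in range(len(a) - 2)
               if V[a[i]] > V[a[i + 1]] <= V[a[i + 2]])

def local_search(a, alphabet_size, restarts=10, steps=2000):
    best_V, best_score = None, float('inf')
    for _ in range(restarts):
        V = list(range(alphabet_size))
        random.shuffle(V)                      # random initial priority assignment
        score = violations(a, V)
        for _ in range(steps):
            i, j = random.sample(range(alphabet_size), 2)
            V[i], V[j] = V[j], V[i]            # try swapping two symbols' priorities
            new = violations(a, V)
            if new <= score:
                score = new                    # keep non-worsening swaps
            else:
                V[i], V[j] = V[j], V[i]        # revert
        if score < best_score:
            best_V, best_score = list(V), score
    return best_V, best_score

a = [random.randrange(5) for _ in range(200)]  # made-up data over symbols 0..4
print(local_search(a, alphabet_size=5))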
Another idea is to look for a genetic algorithm. Based on a quick Google search it looks like the standard way to do this is to try to turn an NP-complete problem into a SAT problem, and then use a genetic algorithm on that problem. This approach would require turning this into a SAT problem in some reasonable way. Unfortunately it is not obvious to me how one would go about doing this reduction. Indeed in the first version that you had, your problem was closely connected to a classic NP-hard problem. The fact that it is labeled NP-hard rather than NP-complete is evidence that people haven't found a good way to transform it into a SAT problem. So if it isn't obvious how to turn the simple version into a SAT problem, then you are unlikely to convert the hard problem either.
But you could still try some variation on genetic algorithms. Mutation is pretty simple, just swap some elements around. One way to combine elements would be to take 3 permutations and use quicksort to find the combination as follows: take a random pivot, and then use "majority wins" to bucket elements into bigger and smaller. Sort each half in the same way.
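A rough sketch of that "majority wins" quicksort-style recombination, under the assumption that the three parents are permutations of the same symbols and "bigger/smaller" means the side of the pivot a symbol falls on in each parent:

import random

def crossover(parents):
    """Combine 3 permutations: pick a random pivot, place every other symbol
    before or after it according to where at least 2 of the 3 parents put it,
    then recurse on each half."""
    elems = parents[0]
    if len(elems) <= 1:
        return list(elems)
    pivot = random.choice(elems)
    pos = [{e: i for i, e in enumerate(p)} for p in parents]
    before, after = set(), set()
    for e in elems:
        if e == pivot:
            continue
        votes = sum(1 for idx in pos if idx[e] < idx[pivot])
        (before if votes >= 2 else after).add(e)
    restrict = lambda keep: [[e for e in p if e in keep] for p in parents]
    return crossover(restrict(before)) + [pivot] + crossover(restrict(after))

print(crossover([[0, 1, 2, 3, 4], [1, 0, 3, 2, 4], [0, 2, 1, 4, 3]]))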
I'm sorry that I can't just give you an approach and say, "This should work." You've got what looks like an open-ended research project, and the best I can do is give you some ideas about things you can try that might work reasonably well.

What's the most insidious way to pose this problem?

My best shot so far:
A delivery vehicle needs to make a series of deliveries (d1,d2,...dn), and can do so in any order--in other words, all the possible permutations of the set D = {d1,d2,...dn} are valid solutions--but the particular solution needs to be determined before it leaves the base station at one end of the route (imagine that the packages need to be loaded in the vehicle LIFO, for example).
Further, the cost of the various permutations is not the same. It can be computed as the sum of the squares of the distances traveled between d(i-1) and di, where d0 is taken to be the base station, with the caveat that any segment that involves a change of direction costs 3 times as much (imagine this is happening on a railroad or in a pneumatic tube, where backing up disrupts other traffic).
Given the set of deliveries D represented as their distance from the base station (so abs(di-dj) is the distance between two deliveries) and an iterator permutations(D) which will produce each permutation in succession, find a permutation which has a cost less than or equal to that of any other permutation.
Now, a direct implementation from this description might lead to code like this:
function Cost(D) ...

function Best_order(D)
    for D1 in permutations(D)
        Found = true
        for D2 in permutations(D)
            Found = false if cost(D2) < cost(D1)
        return D1 if Found
Which is O(n*n!^2), i.e. pretty awful--especially compared to the O(n log(n)) someone with insight would find by simply sorting D.
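For concreteness, here is a hedged Python sketch of the cost function described above (assuming the base station sits at position 0 and the 3x penalty multiplies the squared length of any leg that reverses the current direction of travel):

def cost(order, base=0.0):
    """Sum of squared leg lengths for visiting deliveries in the given order,
    starting from `base`; a leg that reverses direction costs 3x as much."""
    total, prev, prev_dir = 0.0, base, 0
    for d in order:
        step = d - prev
        direction = (step > 0) - (step < 0)
        leg = step * step
        if prev_dir != 0 and direction != 0 and direction != prev_dir:
            leg *= 3                           # direction-change penalty
        total += leg
        prev = d
        if direction != 0:
            prev_dir = direction
    return total

print(cost([1, 2, 3]), cost([2, 1, 3]))        # the sorted order is cheaper here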
My question: can you come up with a plausible problem description which would naturally lead the unwary into a worse (or differently awful) implementation of a sorting algorithm?
I assume you're using this question for an interview to see if the applicant can notice a simple solution in a seemingly complex question.
[This assumption is incorrect -- MarkusQ]
You give too much information.
The key to solving this is realizing that the points are in one dimension and that a sort is all that is required. To make this question more difficult hide this fact as much as possible.
The biggest clue is the distance formula. It introduces a penalty for changing directions. The first thing that comes to my mind is minimizing this penalty. To remove the penalty I have to order the deliveries in a single direction, and this ordering is the natural sort order.
I would remove the penalty for changing directions, it's too much of a give away.
Another major clue is the input values to the algorithm: a list of integers. Give them a list of permutations, or even all permutations. That sets them up to think that an O(n!) algorithm might actually be expected.
I would phrase it as:
Given a list of all possible permutations of n delivery locations, where each permutation of deliveries (d1, d2, ..., dn) has a cost defined by:
Return the permutation P such that the cost of P is less than or equal to that of any other permutation.
All that really needs to be done is read in the first permutation and sort it.
If they construct a single loop to compare the costs, ask them what the big-O runtime of their algorithm is, where n is the number of delivery locations (another trap).
This isn't a direct answer, but I think more clarification is needed.
Is di allowed to be negative? If so, sorting alone is not enough, as far as I can see.
For example:
d0 = 0
deliveries = (-1,1,1,2)
It seems the optimal path in this case would be 1 > 2 > 1 > -1.
Edit: This might not actually be the optimal path, but it illustrates the point.
You could rephrase it, having first found the optimal solution, as:
"Give me a proof that the following combination is the optimal one for the following set of rules, where optimal means the smallest number results from the sum of all stage costs, taking into account that all stages (A..Z) need to be present once and once only.
Combination:
A->C->D->Y->P->...->N
Stage costs:
A->B = 5,
B->A = 3,
A->C = 2,
C->A = 4,
...
...
...
Y->Z = 7,
Z->Y = 24."
That ought to keep someone busy for a while.
This reminds me of the Knapsack problem more than the Traveling Salesman. But the Knapsack is also an NP-hard problem, so you might be able to fool people into thinking up an over-complex solution using dynamic programming if they associate your problem with the Knapsack. Where the basic problem is:
can a value of at least V be achieved without exceeding the weight W?
Now the problem is that a fairly good solution can be found when each value is distinct, as your distances are:
The knapsack problem with each type of item j having a distinct value per unit of weight (vj = pj/wj) is considered one of the easiest NP-complete problems. Indeed, empirical complexity is of the order of O((log n)^2) and very large problems can be solved very quickly, e.g. in 2003 the average time required to solve instances with n = 10,000 was below 14 milliseconds using commodity personal computers.
So you might want to state that several stops/packages might share the same vj, inviting people to think about the really hard solution to:
However, in the degenerate case of multiple items sharing the same value vj it becomes much more difficult, with the extreme case where vj = constant being the subset sum problem with a complexity of O(2^(N/2) N).
So if you replace value per unit of weight with value per unit of distance, and state that several distances might actually share the same values (the degenerate case), some folks might fall into this trap.
Isn't this just the (NP-Hard) Travelling Salesman Problem? It doesn't seem likely that you're going to make it much harder.
Maybe phrasing the problem so that the actual algorithm is unclear - e.g. by describing the paths as single-rail railway lines so the person would have to infer from domain knowledge that backtracking is more costly.
What about describing the question in such a way that someone is tempted to do recursive comparisons - e.g. "can you speed up the algorithm by using the optimum max subset of your best (so far) results"?
BTW, what's the purpose of this - it sounds like the intent is to torture interviewees.
You need to be clearer on whether the delivery truck has to return to base (making it a round trip), or not. If the truck does return, then a simple sort does not produce the shortest route, because the square of the return from the furthest point to base costs so much. Missing some hops on the way 'out' and using them on the way back turns out to be cheaper.
If you trick someone into a bad answer (for example, by not giving them all the information) then is it their foolishness or your deception that has caused it?
How great is the wisdom of the wise, if they heed not their ego's lies?
