What is the name of this problem? - algorithm

You are given a list of distances between various points on a single line.
For example:
100 between a and b
20 between c and b
90 between c and d
170 between a and d
Return the sorted sequence of points as they appear on the line with distances between them:
For example the above input yields:
a<----80-----> c <----20------> b <----70-----> d or the reverse sequence(doesn't matter)
What is this problem called? I would like to research it.
If anybody knows, also, what are some of the possible asymptotic runtimes achieved for this?

not sure it has a name; more formally stated, it would be:
|a-b| = 100
|c-b| = 20
|c-d| = 90
|a-d| = 170
where |x| stands for the absolute value of x
As far as the generalized system goes, if you have N equations of this type with k unknowns, you have N choices of sign. Without loss of generality (because any solution yields a second solution with reversed ordering) you can choose a sign for the first equation, and a particular value for one of the unknowns (since the whole thing can slide left and right in position). Then you have 2^(N-1) possibilities for the remaining equations, and all you have to do is go through them to see which ones, if any, have solutions. Because the coefficients are all +/- 1 and each equation has 2 unknowns, you just go through them one by one:
Step 1: Without loss of generality,
choose a sign for one equation
and pick a value for one unknown:
a-b = 100, a = 0
Step 2: Choose signs for the remaining absolute values.
a = 0
a-b = 100
c-b = 20
c-d = 90
a-d = 170
Step 3: go through them one by one to solve / verify there aren't conflicts
(time = N steps).
0-b = 100 => b = -100
c-b = 20 => c = -80
c-d = 90 => d = -170
a-d = 170 => OK => (0,-100,-80,-170) is a solution
Step 4: if this doesn't work, go back through the possible choices of sign
and try again, starting at step 2.
Full set of solutions is (0,-100,-80,-170)
and its negation (0,100,80,170) and any number x_0 added to all terms.
So an upper bound for the runtime is O(N * 2^(N-1)) ≡ O(N * 2^N).
I suppose there could be a shortcut but no obvious one comes to mind.
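For concreteness, here is a small Python sketch of the brute-force sign search described above (the name solve_line and the constraint encoding are illustrative choices, not from the original post). Each constraint (x, y, d) means |pos[x] - pos[y]| = d; the first point is pinned to 0, the sign of the first constraint is fixed, and all 2^(N-1) sign choices for the rest are tried, propagating positions and checking for conflicts:
from itertools import product

def solve_line(constraints):
    rest = constraints[1:]
    for signs in product((1, -1), repeat=len(rest)):
        signed = [constraints[0] + (1,)] + [c + (s,) for c, s in zip(rest, signs)]
        pos = {constraints[0][0]: 0}            # pin the first point to position 0
        pending = list(signed)
        ok = True
        while pending and ok:
            progress = False
            for c in list(pending):
                x, y, d, s = c
                if x in pos and y in pos:       # both ends known: just verify
                    ok = abs(pos[x] - pos[y]) == d
                elif x in pos:                  # propagate along x - y = s * d
                    pos[y] = pos[x] - s * d
                elif y in pos:
                    pos[x] = pos[y] + s * d
                else:
                    continue                    # neither end known yet; retry later
                pending.remove(c)
                progress = True
                if not ok:
                    break
            if not progress:                    # disconnected constraint graph: give up
                break
        if ok and not pending:
            return pos
    return None

print(solve_line([("a", "b", 100), ("c", "b", 20), ("c", "d", 90), ("a", "d", 170)]))
# e.g. {'a': 0, 'b': -100, 'c': -80, 'd': -170}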

As written, your problem is just a system of non-linear equations (expressed with absolute values or quadratic equations). However, it looks similar to the problems of finding Golomb rulers or perfect rulers.
If you consider your constraints as quadratic equations eg. (a-b)^2=100^2, then you can formulate this as a quadratic programming problem and use some of the well-studied techniques for that class of problem.

Considering the sign of the direction of each segment X[i] -> X[i+1] it becomes a boolean satisfiability problem. I can't see an obvious simplification. The runtime is O(2^N) - specifically 2^(N-2) tests with N values and an O(1) expression to test.
Assuming a = 0 and fixing the direction of a -> b:
a = 0
b = 100
c = b + 20 X[0] = 100 + 20 X[0]
d = c + 90 X[1] = 100 + 20 X[0] + 90 X[1]
test d == 170
where X[i] is either +1 or -1.
Although the expression for d appears to require O(N) operations ((N-2) multiplications and (N-2) additions), by using a Gray code or other such mechanism that changes the state of only one X at a time, the cost can be reduced to O(1) per test (though for N=4 it probably isn't worth it).
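As a sketch of that Gray-code idea (illustrative code, not from the original answer): dists holds the segment lengths whose signs can flip, start is the fixed position of b, and each step flips exactly one sign, so d is updated in O(1):
def gray_code_search(dists, start, target):
    n = len(dists)
    X = [1] * n
    d = start + sum(dists)                   # all signs +1
    if d == target:
        return list(X)
    for k in range(1, 2 ** n):
        i = (k & -k).bit_length() - 1        # bit that flips at step k of a Gray code
        X[i] = -X[i]
        d += 2 * X[i] * dists[i]             # O(1) incremental update
        if d == target:
            return list(X)
    return None

print(gray_code_search([20, 90], 100, 170))  # [-1, 1]: c = 100 - 20 = 80, d = 80 + 90 = 170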
Simplifications may arise if you have more constraints than points: if you were also given |b-d| == 70, then you would only need to test two cases rather than four; essentially b, c and d become their own fully constrained sub-problem.
Simplifications may also arise from the triangle inequality
| |a-b| - |b-c| | <= |a-c| <= |a-b| + |b-c| for all a, b and c.
So if you have many points, and the running total of the distances assigned so far (given the choices made for X) is further from the target value than the sum of the distances that remain, you can deduce that no assignment of the remaining signs will work.

algebra...
or it may be a simplification of the traveling salesman problem

I don't have an algorithms book handy, but this sounds like a graph search problem where the paths are constrained. You could probably use Dijkstra's Algorithm or some variant of it.


Is there an easy function from a pair of 32-bit ints to a single 64-bit int that preserves rotational order?

This is a question that came up in the context of sorting points with integer coordinates into clockwise order, but this question is not about how to do that sorting.
This question is about the observation that 2-d vectors have a natural cyclic ordering. Unsigned integers with usual overflow behavior (or signed integers using twos-complement) also have a natural cyclic ordering. Can you easily map from the first ordering to the second?
So, the exact question is whether there is a map from pairs of twos-complement signed 32-bit integers to unsigned (or twos-complement signed) 64-bit integers such that any list of vectors that is in clockwise order maps to integers that are in decreasing (modulo overflow) order?
Some technical cases that people will likely ask about:
Yes, vectors that are multiples of each other should map to the same thing
No, I don't care which vector (if any) maps to 0
No, the images of antipodal vectors don't have to differ by 2^63 (although that is a nice-to-have)
The obvious answer is that since there are only around 0.6*2^64 distinct slopes, the answer is yes, such a map exists, but I'm looking for one that is easily computable. I understand that "easily" is subjective, but I'm really looking for something reasonably efficient and not terrible to implement. So, in particular, no counting every lattice point between the ray and the positive x-axis (unless you know a clever way to do that without enumerating them all).
An important thing to note is that it can be done by mapping to 65-bit integers. Simply project the vector out to where it hits the box bounded by x,y=+/-2^62 and round toward negative infinity. You need 63 bits to represent that integer and two more to encode which side of the box you hit. The implementation needs a little care to make sure you don't overflow, but only has one branch and two divides and is otherwise quite cheap. It doesn't work if you project out to 2^61 because you don't get enough resolution to separate some slopes.
Also, before you suggest "just use atan2", compute atan2(1073741821,2147483643) and atan2(1073741820,2147483641)
EDIT: Expansion on the "atan2" comment:
Given two values x_1 and x_2 that are coprime and just less than 2^31 (I used 2^31-5 and 2^31-7 in my example), we can use the extended Euclidean algorithm to find y_1 and y_2 such that y_1/x_1-y_2/x_2 = 1/(x_1*x_2) ~= 2^-62. Since the derivative of arctan is bounded by 1, the difference of the outputs of atan2 on these values is not going to be bigger than that. So, there are lots of pairs of vectors that won't be distinguishable by atan2 as vanilla IEEE 754 doubles.
If you have 80-bit extended registers and you are sure you can retain residency in those registers throughout the computation (and don't get kicked out by a context switch or just plain running out of extended registers), then you're fine. But, I really don't like the correctness of my code relying on staying resident in extended registers.
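A quick way to see the issue, assuming ordinary IEEE 754 doubles (here via Python's math.atan2): the two slopes differ by roughly 2^-62, far below the precision of a double near atan(0.5), so a typical atan2 implementation returns the exact same value for both.
from math import atan2
print(atan2(1073741821, 2147483643) == atan2(1073741820, 2147483641))  # True on typical platforms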
Here's one possible approach, inspired by a comment in your question. (For the tl;dr version, skip down to the definition of point_to_line at the bottom of this answer: that gives a mapping for the first quadrant only. Extension to the whole plane is left as a not-too-difficult exercise.)
Your question says:
in particular, no counting every lattice point between the ray and the positive x-axis (unless you know a clever way to do that without enumerating them all).
There is an algorithm to do that counting without enumerating the points; its efficiency is akin to that of the Euclidean algorithm for finding greatest common divisors. I'm not sure to what extent it counts as either "easily computable" or "clever".
Suppose that we're given a point (p, q) with integer coordinates and both p and q positive (so that the point lies in the first quadrant). We might as well also assume that q < p, so that the point (p, q) lies between the x-axis y = 0 and the diagonal line y = x: if we can solve the problem for the half of the first quadrant that lies below the diagonal, we can make use of symmetry to solve it generally.
Write M for the bound on the size of p and q, so that in your example we want M = 2^31.
Then the number of lattice points strictly inside the triangle bounded by:
the x-axis y = 0
the ray y = (q/p)x that starts at the origin and passes through (p, q), and
the vertical line x = M
is the sum as x ranges over integers in (0, M) of ⌈qx/p⌉ - 1.
For convenience, I'll drop the -1 and include 0 in the range of the sum; both those changes are trivial to compensate for. And now the core functionality we need is the ability to evaluate the sum of ⌈qx/p⌉ as x ranges over the integers in an interval [0, M). While we're at it, we might also want to be able to compute a closely-related sum: the sum of ⌊qx/p⌋ over that same range of x (and it'll turn out that it makes sense to evaluate both of these together).
For testing purposes, here are slow, naive-but-obviously-correct versions of the functions we're interested in, here written in Python:
def floor_sum_slow(p, q, M):
    """
    Sum of floor(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    return sum(q * x // p for x in range(M))

def ceil_sum_slow(p, q, M):
    """
    Sum of ceil(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    return sum((q * x + p - 1) // p for x in range(M))
And an example use:
>>> floor_sum_slow(51, 43, 2**28) # takes several seconds to complete
30377220771239253
>>> ceil_sum_slow(140552068, 161600507, 2**28)
41424305916577422
These sums can be evaluated much faster. The first key observation is that if q >= p, then we can apply the Euclidean "division algorithm" and write q = ap + r for some integers a and r. The sum then simplifies: the ap part contributes a factor of a * M * (M - 1) // 2, and we're reduced from computing floor_sum(p, q, M) to computing floor_sum(p, r, M). Similarly, the computation of ceil_sum(p, q, M) reduces to the computation of ceil_sum(p, q % p, M).
The second key observation is that we can express floor_sum(p, q, M) in terms of ceil_sum(q, p, N), where N is the ceiling of (q/p)M. To do this, we consider the rectangle [0, M) x (0, (q/p)M), and divide that rectangle into two triangles using the line y = (q/p)x. The number of lattice points within the rectangle that lie on or below the line is floor_sum(p, q, M), while the number of lattice points within the rectangle that lie above the line is ceil_sum(q, p, N). Since the total number of lattice points in the rectangle is (N - 1)M, we can deduce the value of floor_sum(p, q, M) from that of ceil_sum(q, p, N), and vice versa.
Combining those two ideas, and working through the details, we end up with a pair of mutually recursive functions that look like this:
def floor_sum(p, q, M):
    """
    Sum of floor(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    a = q // p
    r = q % p
    if r == 0:
        return a * M * (M - 1) // 2
    else:
        N = (M * r + p - 1) // p
        return a * M * (M - 1) // 2 + (N - 1) * M - ceil_sum(r, p, N)

def ceil_sum(p, q, M):
    """
    Sum of ceil(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    a = q // p
    r = q % p
    if r == 0:
        return a * M * (M - 1) // 2
    else:
        N = (M * r + p - 1) // p
        return a * M * (M - 1) // 2 + N * (M - 1) - floor_sum(r, p, N)
Performing the same calculation as before, we get exactly the same results, but this time the result is instant:
>>> floor_sum(51, 43, 2**28)
30377220771239253
>>> ceil_sum(140552068, 161600507, 2**28)
41424305916577422
A bit of experimentation should convince you that the floor_sum and floor_sum_slow functions give the same result in all cases, and similarly for ceil_sum and ceil_sum_slow.
Here's a function that uses floor_sum and ceil_sum to give an appropriate mapping for the first quadrant. I failed to resist the temptation to make it a full bijection, enumerating points in the order that they appear on each ray, but you can fix that by simply replacing the + gcd(p, q) term with + 1 in both branches.
from math import gcd

def point_to_line(p, q, M):
    """
    Bijection from [0, M) x [0, M) to [0, M^2), preserving
    the 'angle' ordering.
    """
    if p == q == 0:
        return 0
    elif q <= p:
        return ceil_sum(p, q, M) + gcd(p, q)
    else:
        return M * (M - 1) - floor_sum(q, p, M) + gcd(p, q)
Extending to the whole plane should be straightforward, though just a little bit messy due to the asymmetry between the negative range and the positive range in the two's complement representation.
Here's a visual demonstration for the case M = 7, printed using this code:
M = 7
for q in reversed(range(M)):
    for p in range(M):
        print(" {:02d}".format(point_to_line(p, q, M)), end="")
    print()
Results:
48 42 39 36 32 28 27
47 41 37 33 29 26 21
46 40 35 30 25 20 18
45 38 31 24 19 16 15
44 34 23 17 14 12 11
43 22 13 10 09 08 07
00 01 02 03 04 05 06
This doesn't meet your requirement for an "easy" function, nor for a "reasonably efficient" one. But in principle it would work, and it might give some idea of how difficult the problem is. To keep things simple, let's consider just the case where 0 < y ≤ x, because the full problem can be solved by splitting the full 2D plane into eight octants and mapping each to its own range of integers in essentially the same way.
A point (x1, y1) is "anticlockwise" of (x2, y2) if and only if the slope y1/x1 is greater than the slope y2/x2. To map the slopes to integers in an order-preserving way, we can consider the sequence of all distinct fractions whose numerators and denominators are within range (i.e. up to 2^31), in ascending numerical order. Note that each fraction's numerical value is between 0 and 1 since we are just considering one octant of the plane.
This sequence of fractions is finite, so each fraction has an index at which it occurs in the sequence; so to map a point (x, y) to an integer, first reduce the fraction y/x to its simplest form (e.g. using Euclid's algorithm to find the GCD to divide by), then compute that fraction's index in the sequence.
It turns out this sequence is called a Farey sequence; specifically, it's the Farey sequence of order 2^31. Unfortunately, computing the index of a given fraction in this sequence turns out to be neither easy nor reasonably efficient. According to the paper
Computing Order Statistics in the Farey Sequence by Corina E. Pǎtraşcu and Mihai Pǎtraşcu, there is a somewhat complicated algorithm to compute the rank (i.e. index) of a fraction in O(n) time, where n in your case is 2^31, and there is unlikely to be an algorithm running in time polynomial in log n, because such an algorithm could be used to factorise integers.
All of that said, there might be a much easier solution to your problem, because I've started from the assumption of wanting to map these fractions to integers as densely as possible (i.e. no "unused" integers in the target range), whereas in your question you wrote that the number of distinct fractions is about 60% of the available range of size 2^64. Intuitively, that amount of leeway doesn't seem like a lot to me, so I think the problem is probably quite difficult and you may need to settle for a solution that uses a larger output range, or a smaller input range. At the very least, by writing this answer I might save somebody else the effort of investigating whether this approach is feasible.
Just some random ideas / observations:
(edit: added two more and marked the first one as wrong as pointed out in the comments)
Divide into 16 22.5° segments instead of 8 45° segments
If I understand the problem correctly, the lines spread out "more" towards 45°, "wasting" resolution that you need for smaller angles. (Incorrect, see below)
In the mapping to 62 bit integers, there must be gaps. Identify enough low density areas to map down to 61 bits? Perhaps plot for a smaller problem to potentially see a pattern?
As the range for x and y is limited, for a given x0, all (legal) x < x0 with y > q must have a smaller angle. Could this help to break down the problem in some way? Perhaps cutting a triangle where points can easily be enumerated out of the problem for each quadrant?

Represent a prime number as a sum of four squared integers

Given a prime number p, find four integers such that p is equal to the sum of the squares of those integers.
1 < p < 10^12.
If p is of the form 8n + 1 or 8n + 5, then p can be written as a sum of two squares. This can be solved in O(sqrt(p) * log(sqrt(p))) time. But for the other cases, i.e. when p cannot be written as a sum of two squares, this approach is very inefficient. So it would be great if anyone could point me to some resource material I can read to solve the problem.
Given your constraints, I think that you can do a smart brute force.
First, note that if p = a^2 + b^2 + c^2 + d^2, each of a, b, c, d has to be less than 10^6. So just loop over a from 0 to sqrt(p). Consider q = p - a^2. It is easy to check whether q can be written as the sum of three squares using Legendre's three-square theorem. Once you find a value of q that works, a is fixed and you can just worry about q.
Deal with q the same way. Loop over b from 0 to sqrt(q), and consider r = q - b^2. Fermat's two-square theorem tells you how to check whether r can be written as the sum of two squares. Though this check requires O(sqrt(r)) time again, in practice you should be able to quickly find a value of b that works.
After this, it should be straightforward to find a (c,d) pair that works for r.
Since the loops for finding a and b and (c,d) are not nested but come one after the other, the complexity should be low enough to work in your problem.
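Here is a rough Python sketch of that approach (the function names are mine, and the two-square step simply brute-forces over the smaller square, as described above):
from math import isqrt

def is_sum_of_three_squares(n):
    # Legendre's three-square theorem: n is a sum of three squares
    # iff n is not of the form 4^a (8b + 7).
    while n > 0 and n % 4 == 0:
        n //= 4
    return n % 8 != 7

def two_squares(n):
    # Brute force over the smaller square; O(sqrt(n)).
    for c in range(isqrt(n) + 1):
        d2 = n - c * c
        d = isqrt(d2)
        if d * d == d2:
            return c, d
    return None

def four_squares(p):
    for a in range(isqrt(p) + 1):
        q = p - a * a
        if not is_sum_of_three_squares(q):
            continue
        for b in range(isqrt(q) + 1):
            pair = two_squares(q - b * b)
            if pair is not None:
                c, d = pair
                return a, b, c, d

print(four_squares(7))   # e.g. (1, 1, 1, 2)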

Finding a polynomial of minimum degree given series

Given a series of numbers, how can we find a polynomial which generalizes the series? With this generalization, one should then be able to find any term in the series.
While searching the net I found that one can use Lagrange's interpolation technique. How accurate is that method for generalizing the series?
Can we use some other method to find a polynomial?
There are several algorithms which will generate a polynomial matching a finite series; as "justhalf" identified, Lagrange's interpolation is one such technique.
In general, if you are given a function with n points, you can uniquely define a polynomial of degree n-1 (or sometimes less) which matches at every point.
Consider the series with only two terms, "2, 4". As this has only two terms (n=2), there is a polynomial of degree 1 which will generate the series. The general form is y = ax + b and we need to find a and b:
y = ax + b
So
2 = a⋅1 + b => 2 = a + b
4 = a⋅2 + b => 4 = 2a + b
Therefore a = 2 and b = 0.
y = 2x
You can see if you substitute x=1 and x=2 you get the values 2 and 4 respectively.
If the series was 2,4,8 then you would need a polynomial of degree 3-1 = 2, say y = ax^2 + bx + c (where these a and b are new values, not necessarily the same as the a and b for the previous case).
Then you would know that:
2 = a⋅1² + b⋅1 + c => 2 = a + b + c (i)
4 = a⋅2² + b⋅2 + c => 4 = 4a + 2b + c (ii)
8 = a⋅3² + b⋅3 + c => 8 = 9a + 3b + c (iii)
You can solve these equations to find a, b and c:
Subtract (i) from (ii):
2 = 3a + b (iv)
Subtract (ii) from (iii)
4 = 5a + b (v)
Subtract (iv) from (v)
2 = 2a => a = 1
So from (iv)
2 = 3⋅1 + b = 3 + b => b = -1
From (i)
2 = a + b + c = 1 + -1 + c = c => c = 2
So the polynomial y = ax² + bx + c = x² - x + 2 agrees at the three points
Verify:
1² - 1 + 2 = 2
2² - 2 + 2 = 4
3² - 3 + 2 = 8
As we wanted.
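The same three equations can of course be solved mechanically; here is a quick check with numpy (an assumed dependency, purely for illustration):
import numpy as np
# rows are (x^2, x, 1) for x = 1, 2, 3; the right-hand side is the series 2, 4, 8
A = np.array([[1, 1, 1], [4, 2, 1], [9, 3, 1]], dtype=float)
y = np.array([2, 4, 8], dtype=float)
print(np.linalg.solve(A, y))   # [ 1. -1.  2.]  i.e. y = x² - x + 2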
But note that this polynomial y = x² - x + 2 also exactly generates the series with only the first 2 terms, "2, 4". So this series with only two terms is satisfied by two polynomials, y = 2x and y = x² - x + 2. Despite agreeing on the first two values 2,4 these are very different polynomials.
In general, if you have a series of n terms then there is a unique polynomial of degree n-1 which will generate the series. In general, there will be no polynomial of degree less than n-1 which will exactly generate it (you may get lucky, but it's not generally true). There are an infinite number of polynomials of degree greater than n-1 which will generate the data.
Usually in numerical analysis you try and generate a polynomial of degree less than n-1 which approximates the data (doesn't match exactly, but minimises error). Exact solutions of degree n-1 are unstable, in that tiny changes to the input series produces very different equations. This is not so true of polynomial approximations of degree less than n-1. As many physical measurements have inherent error, using lower degree polynomials minimises the impact of measurement errors.
Lets now consider the series 2, 4, 8, 16
You can produce a polynomial of degree 3 (y = ax³ + bx² + cx + d) which exactly matches these data points using exactly the same approach. This (again) is just solving a set of linear simultaneous equations. This is essentially how Lagrange's algorithm works; we have solved the equations by hand instead of using matrix notation (as Lagrange does).
But given 2,4,8,16 most people would think that the equation is y = 2^x. This is not a polynomial equation, so it can't be expressed as a polynomial.
For the series 2,4,8 we derived the polynomial y = x² - x + 2. If we tried to extrapolate to find the next value, plugging in x=4 gives y = 4² - 4 + 2 = 14. The term after that (x=5) would be y = 5² - 5 + 2 = 22. As x gets larger, y = x² - x + 2 becomes an increasingly bad approximation to y = 2^x. In fact no polynomial grows as fast as y = 2^x.
So ...
If you have n points, you can always find a unique polynomial of degree n-1 (or sometimes less) which will generate exactly those n points for x=1,2,3..n. This is not often used for real life problems, because these solutions are unstable (small changes to input produce large changes to the polynomial).
If you have n points, there are an infinite number of polynomials of degree n or greater which will produce the series. These all have identical values for x = 1, 2, ... n but will disagree on the n+1, n+2 etc terms.
Typically a polynomial approximation of degree less than n-1 is used. It won't usually be an exact fit, but will often show the general shape of the curve. For 8 points you might try to find a polynomial of degree 4 (y = ax⁴ + bx³ + cx² + dx + e) which minimises the error. As a rule of thumb, a polynomial of degree about n/2 is often used. This is more art than science; usually you have some idea of what the underlying (correct) formula is, and this helps select the degree of the approximating polynomial.
Polynomial approximations can work reasonably well for interpolation (finding a value between two data points) but are hopeless for extrapolation. As we have no knowledge at all of what the "next" value is a series might be (it could be anything), no formula can successfully predict it.
I hope this is useful. Producing a polynomial which exactly generates a finite series is not hard ... it's simply solving n linear simultaneous equations in n variables (the coefficients of x^(n-1), x^(n-2), ... x², x, and the constant term). This is what we have done above and how Lagrange interpolation works. However, in physical systems it may not be particularly meaningful. User beware.
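To make the Lagrange approach concrete, here is a short sketch (my own illustration, assuming the series values correspond to x = 1, 2, ..., n; exact rational arithmetic avoids floating-point error):
from fractions import Fraction

def lagrange(series, x):
    """Evaluate the degree-(n-1) interpolating polynomial of the series at x."""
    n = len(series)
    total = Fraction(0)
    for i, yi in enumerate(series, start=1):
        term = Fraction(yi)
        for j in range(1, n + 1):
            if j != i:
                term *= Fraction(x - j, i - j)   # basis polynomial factor
        total += term
    return total

# Reproduces 2, 4, 8 and the x=4 extrapolation discussed above:
print([int(lagrange([2, 4, 8], x)) for x in (1, 2, 3, 4)])   # [2, 4, 8, 14]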

optimization of sum of multi variable functions

Imagine that I'm a bakery trying to maximize the number of pies I can produce with my limited quantities of ingredients.
Each of the following pie recipes A, B, C, and D produce exactly 1 pie:
A = i + j + k
B = t + z
C = 2z
D = 2j + 2k
*The recipes always have linear form, like above.
I have the following ingredients:
4 of i
5 of z
4 of j
2 of k
1 of t
I want an algorithm to maximize my pie production given my limited amount of ingredients.
The optimal solution of these example inputs would yield me the following quantities of pies:
2 x A
1 x B
2 x C
0 x D
= a total of 5 pies
I can solve this easily enough by taking the maximal producer over all combinations, but the number of combinations becomes prohibitive as the quantities of ingredients increase. I feel like there must be generalizations of this type of optimization problem, I just don't know where to start.
While I can only bake whole pies, I would still be interested in seeing a method which may produce non-integer results.
You can define the linear programming problem. I'll show the usage on the example, but it can of course be generalized to any data.
Denote your pies as your variables (x1 = A, x2 = B, ...) and the LP problem will be as follows:
maximize x1 + x2 + x3 + x4
s.t. x1 <= 4 (needed i's)
x1 + 2x4 <= 4 (needed j's)
x1 + 2x4 <= 2 (needed k's)
x2 <= 1 (needed t's)
x2 + 2x3 <= 5 (needed z's)
and x1,x2,x3,x4 >= 0
The fractional version of this problem is solvable in polynomial time, but integer linear programming is NP-complete.
The problem is indeed NP-complete, because given an integer linear programming problem, you can reduce it to "maximize the number of pies" using the same approach, where each constraint corresponds to an ingredient and the variables are the numbers of pies.
For the integer problem, there are a lot of approximation techniques in the literature if a solution that is only "close up to a certain bound" will do (for example, the local ratio technique or primal-dual methods are often used); if you need an exact solution, an exponential algorithm is probably your best shot (unless, of course, P = NP).
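As a concrete illustration of the LP relaxation formulated above, here is a sketch using scipy.optimize.linprog (SciPy is an assumed dependency; any LP solver would do). linprog minimizes, so the objective is negated:
from scipy.optimize import linprog

c = [-1, -1, -1, -1]                 # maximize x1 + x2 + x3 + x4
A_ub = [
    [1, 0, 0, 0],                    # i:  x1         <= 4
    [1, 0, 0, 2],                    # j:  x1 + 2*x4  <= 4
    [1, 0, 0, 2],                    # k:  x1 + 2*x4  <= 2
    [0, 1, 0, 0],                    # t:  x2         <= 1
    [0, 1, 2, 0],                    # z:  x2 + 2*x3  <= 5
]
b_ub = [4, 4, 2, 1, 5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(-res.fun, res.x)               # for this data the LP optimum is 5 and happens to be integral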
Since all your functions are linear, it sounds like you're looking for either linear programming (if continuous values are acceptable) or integer programming (if you require your variables to be integers).
Linear programming is a standard technique, and is efficiently solvable. A traditional algorithm for doing this is the simplex method.
Integer programming is intractable in general, because adding integral constraints allows it to describe intractable combinatorial problems. There seems to be a large number of approximation techniques (for example, you might try just using regular linear programming to see what that gets you), but of course they depend on the specific nature of your problem.

Distance measure between two sets of possibly different size

I have 2 sets of integers, A and B, not necessarily of the same size. For my needs, I take the distance between each 2 elements a and b (integers) to be just abs(a-b).
I am defining the distance between the two sets as follows:
If the sets are of the same size, minimize the sum of distances of all pairs [a,b] (a from A and b from B), minimization over all possible 'pairs partitions' (there are n! possible partitions).
If the sets are not of the same size, let's say A of size m and B of size n, with m < n, then minimize the distance from (1) over all subsets of B which are of size m.
My question is: does the following algorithm (just an intuitive guess) give the right answer, according to the definition written above?
Construct a matrix D of size m X n, with D(i,j) = abs(A(i)-B(j))
Find the smallest element of D, accumulate it, and delete the row and the column of that element. Accumulate the next smallest entry, and keep accumulating until all rows and columns are deleted.
for example, if A = {0,1,4} and B = {3,4}, then D is (with the elements of B along the top and the elements of A down the left side):
      3   4
 0    3   4
 1    2   3
 4    1   0
And the distance is 0 + 2 = 2, coming from pairing 4 with 4 and 3 with 1.
Note that this problem is referred to sometimes as the skis and skiers problem, where you have n skis and m skiers of varying lengths and heights. The goal is to match skis with skiers so that the sum of the differences between heights and ski lengths is minimized.
To solve the problem you could use minimum weight bipartite matching, which requires O(n^3) time.
Even better, you can achieve O(n^2) time with O(n) extra memory using the simple dynamic programming algorithm below.
Optimally, you can solve the problem in linear time if the points are already sorted using the algorithm described in this paper.
O(n^2) dynamic programming algorithm:
if (size(A) > size(B))
    swap(A, B);                     // ensure A is the smaller set
sort(A);
sort(B);
opt = array(size(B));
nopt = array(size(B));
opt[0] = abs(A[0] - B[0]);
for (j = 1; j < size(B); j++)
    opt[j] = min(opt[j - 1], abs(A[0] - B[j]));   // best match for A[0] within B[0..j]
for (i = 1; i < size(A); i++) {
    fill(nopt, infinity);
    for (j = 1; j < size(B); j++)
        nopt[j] = min(nopt[j - 1], opt[j - 1] + abs(A[i] - B[j]));
    swap(opt, nopt);
}
return opt[size(B) - 1];
After each iteration i of the outer for loop above, opt[j] contains the optimal solution matching {A[0],..., A[i]} using the elements {B[0],..., B[j]}.
The correctness of this algorithm relies on the fact that in any optimal matching if a1 is matched with b1, a2 is matched with b2, and a1 < a2, then b1 <= b2.
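For reference, here is a direct Python translation of that DP (a sketch; A is made the smaller set, matching the invariant above):
def set_distance(A, B):
    if len(A) > len(B):
        A, B = B, A                      # ensure A is the smaller set
    A, B = sorted(A), sorted(B)
    INF = float("inf")
    # opt[j] = best cost of matching A[0] to some element of B[0..j]
    opt = []
    for j, b in enumerate(B):
        cost = abs(A[0] - b)
        opt.append(cost if j == 0 else min(opt[j - 1], cost))
    for i in range(1, len(A)):
        nopt = [INF] * len(B)
        for j in range(1, len(B)):
            nopt[j] = min(nopt[j - 1], opt[j - 1] + abs(A[i] - B[j]))
        opt = nopt
    return opt[-1]

print(set_distance([0, 1, 4], [3, 4]))   # 2
print(set_distance([3, 7], [0, 4]))      # 6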
In order to get the optimum, solve the assignment problem on D.
The assignment problem finds a perfect matching in a bipartite graph such that the total edge weight is minimized, which maps perfectly to your problem. It is also in P.
EDIT to explain how OP's problem maps onto assignment.
For simplicity of explanation, extend the smaller set with special elements e_k.
Let A be the set of workers, and B be the set of tasks (the contents are just labels).
Let the cost be the distance between an element in A and B (i.e. an entry of D). The distance between e_k and anything is 0.
Then, we want to find a perfect matching of A and B (i.e. every worker is matched with a task), such that the cost is minimized. This is the assignment problem.
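A minimal sketch of this approach using scipy.optimize.linear_sum_assignment (SciPy is an assumed dependency); it handles the rectangular case directly, so no padding elements are needed:
import numpy as np
from scipy.optimize import linear_sum_assignment

A = np.array([0, 1, 4])
B = np.array([3, 4])
D = np.abs(A[:, None] - B[None, :])        # D[i, j] = |A[i] - B[j]|

rows, cols = linear_sum_assignment(D)      # optimal matching of min(|A|, |B|) pairs
print(D[rows, cols].sum())                 # 2, as in the example above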
No, it's not the best answer. For example, with A: {3,7} and B: {0,4} you will choose {(3,4),(0,7)} and the distance is 1 + 7 = 8, but you should choose {(3,0),(4,7)}, in which case the distance is 3 + 3 = 6.
Your answer gives a good approximation to the minimum, but not necessarily the true minimum. You are following a "greedy" approach, which is generally much easier and gives good results, but cannot guarantee the best answer.
