How do I find the lowest numbers in a square table, only one per column and one per row [duplicate]

Suppose we have a table of numbers like this (we can assume it is a square table):
20 2 1 3 4
5 1 14 8 9
15 12 17 17 11
16 1 1 15 18
20 13 15 5 11
Your job is to calculate the maximum sum of n numbers where n is the number of rows or columns in the table. The catch is that each number must come from a unique row and column.
For example, selecting the numbers at (0,0), (1,1), (2,2), (3,3), and (4,4) is acceptable, but (0,0), (0,1), (2,2), (3,3), and (4,4) is not because the first two numbers were pulled from the same row.
My (laughable) solution to this problem is iterating through all the possible permutations of the rows and columns. This works for small grids but, of course, it is incredibly slow as n gets big. It has O(n!) time complexity, if I'm not mistaken (sample Python code below).
I really think this can be solved in better time, but I'm not coming up with anything sufficiently clever.
So my question is, what algorithm should be used to solve this?
If it helps, this problem seems similar to the knapsack problem.
import itertools
import re

grid = """20 2 1 3 4
5 1 14 8 9
15 12 17 17 11
16 1 1 15 18
20 13 15 5 11"""
grid = [[int(x) for x in re.split(r"\s+", line)] for line in grid.split("\n")]
possible_column_indexes = itertools.permutations(range(len(grid)))
max_sum = 0
max_positions = []
for column_indexes in possible_column_indexes:
    current_sum = 0
    current_positions = []
    for row, col in enumerate(column_indexes):
        current_sum += grid[row][col]
        current_positions.append("(%d, %d)" % (row, col))
    if current_sum > max_sum:
        max_sum = current_sum
        max_positions = current_positions
print("Max sum is", max_sum)
for position in max_positions:
    print(position)

This is the maximum cost bipartite matching problem. The classical way to solve it is by using the Hungarian algorithm.
Basically you have a bipartite graph: the left set is the rows and the right set is the columns. Each edge from row i to column j has cost matrix[i, j]. Find the matching that maximizes the total cost.
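For reference, a minimal sketch of this using SciPy's scipy.optimize.linear_sum_assignment (a solver for exactly this matching problem) on the grid from the question:

import numpy as np
from scipy.optimize import linear_sum_assignment

grid = np.array([[20,  2,  1,  3,  4],
                 [ 5,  1, 14,  8,  9],
                 [15, 12, 17, 17, 11],
                 [16,  1,  1, 15, 18],
                 [20, 13, 15,  5, 11]])

rows, cols = linear_sum_assignment(grid, maximize=True)  # maximum-cost matching
print("Max sum is", grid[rows, cols].sum())              # 82 for this grid
print([(int(r), int(c)) for r, c in zip(rows, cols)])    # one pick per row and column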

For starters, you can use dynamic programming.
In your straightforward approach, you are doing exactly the same computation many, many times.
For example, at some point you answer the question: "For the last three columns with rows 1 and 2 already taken, how do I maximize the sum?" You compute the answer to this question twice, once when you pick row 1 from column 1 and row 2 from column 2, and once when you pick them vice-versa.
So don't do that. Cache the answer -- and also cache all similar answers to all similar questions -- and re-use them.
With one cached answer per subset of used columns, there are O(2^n) states and each takes O(n) work, so the running time is roughly O(n * 2^n), a big improvement over O(n!).
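A hedged Python sketch of that memoization (the function name is mine): the cache key is the current row plus a bitmask of the columns already used.

from functools import lru_cache

def max_assignment_sum(grid):
    n = len(grid)

    @lru_cache(maxsize=None)
    def best(row, used_cols):
        # used_cols is a bitmask of columns already taken by earlier rows
        if row == n:
            return 0
        return max(grid[row][col] + best(row + 1, used_cols | (1 << col))
                   for col in range(n)
                   if not used_cols & (1 << col))

    return best(0, 0)

grid = [[20, 2, 1, 3, 4],
        [5, 1, 14, 8, 9],
        [15, 12, 17, 17, 11],
        [16, 1, 1, 15, 18],
        [20, 13, 15, 5, 11]]
print(max_assignment_sum(grid))  # 82, matching the brute force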

Related

Finding the best combination of elements with the max total parameter value

I have 100 elements. Each element has 4 features A,B,C,D. Each feature is an integer.
I want to select 2 elements for each feature, so that I have selected a total of 8 distinct elements. I want to maximize the sum of the 8 selected features A,A,B,B,C,C,D,D.
A greedy algorithm would be to select the 2 elements with highest A, then the two elements with highest B among the remaining elements, etc. However, this might not be optimal, because the elements that have highest A could also have a much higher B.
Do we have an algorithm to solve such a problem optimally?
This can be solved as a minimum cost flow problem; in particular, this is an assignment problem.
First of all, note that we only need the 8 best elements of each feature, meaning at most 32 elements. It should even be possible to cut the search space further (for instance, if the 2 best elements of A are not among the 6 best elements of any other feature, we can already assign those 2 elements to A, and each other feature then only needs to look at its first 6 best elements; if it's not clear why, I'll try to explain further).
Then we make the vertices S, T and Fa, Fb, Fc, Fd and E1, E2, ..., E32, with the following edges:
for each vertex Fx, an edge from S to Fx with maximum flow 2 and weight 0 (as we want 2 elements for each feature)
for each vertex Ei, an edge from Fx to Ei if Ei is one of the top elements of feature x, with maximum flow 1 and weight equal to the negative of Ei's value for feature x (negative because the algorithm finds the minimum cost)
for each vertex Ei, an edge from Ei to T, with maximum flow 1 and weight 0 (as each element can only be selected once)
I'm not sure if this is the best way, but it should work.
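A hedged sketch of this construction using networkx (an assumed dependency; the toy data and names are mine). For simplicity it connects every feature to every element rather than only to the pre-filtered top candidates:

import networkx as nx

data = [[7, 2, 9, 4], [3, 8, 1, 6], [5, 5, 5, 5], [9, 1, 2, 8], [2, 9, 9, 1],
        [8, 3, 4, 7], [1, 7, 6, 9], [6, 4, 8, 2], [4, 6, 3, 5], [9, 9, 1, 3]]
n_features, n_per_feature = 4, 2
total_flow = n_features * n_per_feature  # 8 selections overall

G = nx.DiGraph()
G.add_node("S", demand=-total_flow)  # supply: 8 units leave the source
G.add_node("T", demand=total_flow)   # demand: 8 units reach the sink
for f in range(n_features):
    G.add_edge("S", ("F", f), capacity=n_per_feature, weight=0)
    for i, row in enumerate(data):
        # negative weight so that minimum cost = maximum total feature value
        G.add_edge(("F", f), ("E", i), capacity=1, weight=-row[f])
for i in range(len(data)):
    G.add_edge(("E", i), "T", capacity=1, weight=0)

flow = nx.min_cost_flow(G)
picks = [(i, f) for f in range(n_features) for i in range(len(data))
         if flow[("F", f)][("E", i)] == 1]
print(sum(data[i][f] for i, f in picks), picks)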
As suggested by @AloisChristen, this can be written as an assignment problem:
On the one side, we select the 8 best elements for each feature; that's 32 elements or fewer, since an element might be among the best 8 for more than one feature;
On the other side, we put 8 seats A,A,B,B,C,C,D,D
Solve the resulting assignment problem.
Here the problem is solved using scipy's linear_sum_assignment optimization function:
from numpy.random import randint
from numpy import argpartition, unique, concatenate
from scipy.optimize import linear_sum_assignment
# PARAMETERS
n_elements = 100
n_features = 4
n_per_feature = 2
# RANDOM DATA
data = randint(0, 21, (n_elements, n_features)) # random data with integer features between 0 and 20 included
# SELECT BEST 8 CANDIDATES FOR EACH FEATURE
n_selected = n_features * n_per_feature
n_candidates = n_selected * n_features
idx = argpartition(data, range(-n_candidates, 0), axis=0)
idx = unique(idx[-n_selected:].ravel())
candidates = data[idx]
n_candidates = candidates.shape[0]
# SOLVE ASSIGNMENT PROBLEM
cost_matrix = -concatenate((candidates,candidates), axis=1) # 8 columns in order ABCDABCD
element_idx, seat_idx = linear_sum_assignment(cost_matrix)
score = -cost_matrix[element_idx, seat_idx].sum()
# DISPLAY RESULTS
print('SUM OF SELECTED FEATURES: {}'.format(score))
for e, s in zip(element_idx, seat_idx):
    print('{:2d}'.format(idx[e]),
          'ABCDABCD'[s],
          -cost_matrix[e, s],
          data[idx[e]])
Output:
SUM OF SELECTED FEATURES: 160
3 B 20 [ 5 20 14 11]
4 A 20 [20 9 3 12]
6 C 20 [ 3 3 20 8]
10 A 20 [20 10 9 9]
13 C 20 [16 12 20 18]
23 D 20 [ 6 10 4 20]
24 B 20 [ 5 20 6 8]
27 D 20 [20 13 19 20]

Find the number of all possible paths in a grid, from (0, 0) to (n, n)

I don't know how to find the number of all possible paths in a grid from a point A to a point B.
The point A is on (0,0) and the point B is on (n,n).
A can move up, down, right, and left, and can't move on visited points.
While A is moving, it must stay inside the grid: A = (x, y) with 0 <= x <= n and 0 <= y <= n.
You can solve this problem with recursive backtracking, but there's another approach which I think is more interesting.
If we work out the first few cases by hand we find that:
A 1x1 square has 1 path
A 2x2 square has 2 paths
A 3x3 square has 12 paths
If we then go to OEIS (the Online Encyclopedia of Integer Sequences) and put in the search phrase "1,2,12 paths", the very first result is A007764 which is entitled "Number of nonintersecting (or self-avoiding) rook paths joining opposite corners of an n X n grid".
Knowing what integer sequence you're looking for unlocks significant mathematical resources, including source code to generate the sequence, related sequences, and best-known values.
The known values of the sequence are:
1 1
2 2
3 12
4 184
5 8512
6 1262816
7 575780564
8 789360053252
9 3266598486981642
10 41044208702632496804
11 1568758030464750013214100
12 182413291514248049241470885236
13 64528039343270018963357185158482118
14 69450664761521361664274701548907358996488
15 227449714676812739631826459327989863387613323440
16 2266745568862672746374567396713098934866324885408319028
17 68745445609149931587631563132489232824587945968099457285419306
18 6344814611237963971310297540795524400449443986866480693646369387855336
19 1782112840842065129893384946652325275167838065704767655931452474605826692782532
20 1523344971704879993080742810319229690899454255323294555776029866737355060592877569255844
21 3962892199823037560207299517133362502106339705739463771515237113377010682364035706704472064940398
22 31374751050137102720420538137382214513103312193698723653061351991346433379389385793965576992246021316463868
23 755970286667345339661519123315222619353103732072409481167391410479517925792743631234987038883317634987271171404439792
24 55435429355237477009914318489061437930690379970964331332556958646484008407334885544566386924020875711242060085408513482933945720
25 12371712231207064758338744862673570832373041989012943539678727080484951695515930485641394550792153037191858028212512280926600304581386791094
26 8402974857881133471007083745436809127296054293775383549824742623937028497898215256929178577083970960121625602506027316549718402106494049978375604247408
27 17369931586279272931175440421236498900372229588288140604663703720910342413276134762789218193498006107082296223143380491348290026721931129627708738890853908108906396
You can generate the first few terms yourself on paper or via recursive backtracking, per the other answer.
I would suggest solving this with naive recursion.
Keep a set visited of places that you have visited. And in pseudo-code that is deliberately not any particular language:
function recursive_call(i, j, visited=none)
    if visited is none then
        visited = set()
    end if
    if i = n and j = n then
        return 1
    else if (i, j) in visited or not in grid then
        return 0
    else
        total = 0
        add (i, j) to visited
        for direction in directions:
            (new_i, new_j) = move(i, j, direction)
            total += recursive_call(new_i, new_j, visited)
        remove (i, j) from visited
        return total
    end if
end function
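For completeness, a direct Python translation of that pseudocode (a sketch; the grid is taken to be the points 0 <= i, j <= n, as in the question):

def count_paths(n, i=0, j=0, visited=None):
    # Counts self-avoiding paths from (i, j) to (n, n) on the (n+1)x(n+1) grid.
    if visited is None:
        visited = set()
    if (i, j) == (n, n):
        return 1
    if (i, j) in visited or not (0 <= i <= n and 0 <= j <= n):
        return 0
    visited.add((i, j))
    total = sum(count_paths(n, i + di, j + dj, visited)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    visited.remove((i, j))
    return total

print([count_paths(k) for k in range(3)])  # [1, 2, 12], matching A007764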

MATLAB: Fast creation of random symmetric Matrix with fixed degree (sum of rows)

I am searching for a fast method to create a random matrix A with the following properties:
A = transpose(A)
A(i,i) = 0 for all i
A(i,j) >= 0 for all i, j
sum(A) =~ degree; the row sums follow a distribution I want to specify (here =~ means approximate equality).
The distribution degree comes from a matrix orig, specifically degree=sum(orig), thus I know that matrices with this distribution exist.
For example: orig=[0 12 7 5; 12 0 1 9; 7 1 0 3; 5 9 3 0]
orig =
0 12 7 5
12 0 1 9
7 1 0 3
5 9 3 0
sum(orig)=[24 22 11 17];
Now one possible matrix A=[0 11 5 8; 11 0 4 7; 5 4 0 2; 8 7 2 0] is
A =
0 11 5 8
11 0 4 7
5 4 0 2
8 7 2 0
with sum(A)=[24 22 11 17].
I have been trying this for quite some time, but unfortunately my two ideas didn't work:
version 1:
I perform Nswitch switches of two random elements: A(k1,k3)--; A(k1,k4)++; A(k2,k3)++; A(k2,k4)-- (and the transposed elements as well).
Unfortunately, Nswitch must be about log(E)*E (with E=sum(sum(nn))) for the matrices to be sufficiently uncorrelated. As my E > 5,000,000, this is not feasible (in particular, as I need at least 10 such matrices).
version 2:
I create the matrix according to the distribution from scratch. The idea is to fill every row i with degree(i) numbers, based on the distribution of degree:
nn=orig;
nnR=zeros(size(nn));
for i=1:length(nn)
    degree=sum(nn);                             % current column sums
    howmany=degree(i);                          % how many entries row i should get
    degree(i)=0;                                % never draw the diagonal
    full=rld_cumsum(degree,1:length(degree));   % pool of column indices, j appearing degree(j) times
    rr=randi(length(full),[1,howmany]);         % draw uniformly from the pool
    ff=full(rr);
    xx=i*ones([1,length(ff)]);
    nnR = nnR + accumarray([xx(:),ff(:)],1,size(nnR));  % tally the draws into row i
end
A=nnR;
However, while sum(A')=degree, sum(A) systematically deviates from degree, and I am not able to find the reason for that.
Small deviations from degree are fine of course, but there seem to be systematic deviations, in particular when the matrices contain large numbers in some places.
I would be very happy if somebody could either show me a fast method for version 1, or a reason for the systematic deviation of the distribution in version 2, or a method to create such matrices in a different way. Thank you!
Edit:
This is the problem in matsmath's proposed solution:
Imagine you have the matrix:
orig =
0 12 3 1
12 0 1 9
3 1 0 3
1 9 3 0
with r(i)=[16 22 7 13].
Step 1: r(1)=16, my random integer partition is p(i)=[0 7 3 6].
Step 2: Check that all p(i)<=r(i), which is the case.
Step 3:
My random matrix now looks like
A =
0 7 3 6
7 0 . .
3 . 0 .
6 . . 0
with the new row sum vector rnew=[r(2)-p(2),...,r(n)-p(n)]=[15 4 7]
Second iteration (here the problem occurs):
Step 1: rnew(1)=15, my random integer partition is p(i)=[0 A B]: rnew(1)=15=A+B.
Step 2: Check that all p(i)<=rnew(i), which gives A<=4, B<=7. So A+B<=11, but A+B has to be 15. Contradiction :-/
Edit2:
This is the code representing (to the best of my knowledge) the solution posted by David Eisenstat:
orig=[0 12 3 1; 12 0 1 9; 3 1 0 3; 1 9 3 0];
w=[2.2406 4.6334 0.8174 1.6902];
xfull=zeros(4);
for ii=1:1000
    rndmat=[poissrnd(w(1),1,4); poissrnd(w(2),1,4); poissrnd(w(3),1,4); poissrnd(w(4),1,4)];
    kkk=rndmat.*(ones(4)-eye(4)); % remove diagonal
    hhh=sum(sum(orig))/sum(sum(kkk))*kkk; % normalisation
    xfull=xfull+hhh;
end
xf=xfull/ii;
disp(sum(orig)); % gives [16 22 7 13]
disp(sum(xf)); % gives [14.8337 9.6171 18.0627 15.4865] (obvious systematic problem)
disp(sum(xf')) % gives [13.5230 28.8452 4.9635 10.6683] (which is also systematically different from [16, 22, 7, 13])
Since it's enough to approximately preserve the degree sequence, let me propose a random distribution where each entry above the diagonal is chosen according to a Poisson distribution. My intuition is that we want to find weights w_i such that the i,j entry for i != j has mean w_i*w_j (all of the diagonal entries are zero). This gives us a nonlinear system of equations:
for all i, (sum_{j != i} w_i*w_j) = d_i,
where d_i is the degree of i. Equivalently,
for all i, w_i * (sum_j w_j) - w_i^2 = d_i.
The latter can be solved by applying Newton's method as described below from a starting solution of w_i = d_i / sqrt(sum_j d_j).
Once we have the w_is, we can sample repeatedly using poissrnd to generate samples of multiple Poisson distributions at once.
(If I have time, I'll try implementing this in numpy.)
The Jacobian matrix of the equation system for a 4 by 4 problem is
(w_2 + w_3 + w_4) w_1 w_1 w_1
w_2 (w_1 + w_3 + w_4) w_2 w_2
w_3 w_3 (w_1 + w_2 + w_4) w_3
w_4 w_4 w_4 (w_1 + w_2 + w_3).
In general, let A be a diagonal matrix where A_{i,i} = sum_j w_j - 2*w_i. Let u = [w_1, ..., w_n]' and v = [1, ..., 1]'. The Jacobian can be written J = A + u*v'. The inverse is given by the Sherman--Morrison formula
J^-1 = (A + u*v')^-1 = A^-1 - (A^-1*u*v'*A^-1) / (1 + v'*A^-1*u).
For the Newton step, we need to compute J^-1*y for some given y. This can be done straightforwardly in time O(n) using the above equation. I'll add more detail when I get the chance.
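In the meantime, here is a hedged NumPy sketch of that Newton iteration (the function name and iteration caps are mine); each step costs O(n) thanks to the diagonal-plus-rank-one structure of J:

import numpy as np

def solve_weights(d, iters=100, tol=1e-12):
    # Newton's method for: w_i * sum(w) - w_i^2 = d_i, for all i.
    # J = A + u*v' with A diagonal, u = w, v = ones; J^-1 is applied
    # via Sherman-Morrison. Assumes a feasible degree sequence (a > 0).
    d = np.asarray(d, dtype=float)
    w = d / np.sqrt(d.sum())              # suggested starting solution
    for _ in range(iters):
        S = w.sum()
        f = w * S - w**2 - d              # residual of the system
        if np.abs(f).max() < tol:
            break
        a = S - 2.0 * w                   # diagonal of A
        Ainv_f = f / a
        Ainv_u = w / a
        # Newton step: w -= J^-1 * f, computed in O(n)
        w -= Ainv_f - Ainv_u * (Ainv_f.sum() / (1.0 + Ainv_u.sum()))
    return w

w = solve_weights([16, 22, 7, 13])        # roughly [2.24, 4.63, 0.82, 1.69], as in the OP's edit
A = np.random.poisson(np.outer(w, w))     # entry (i, j) has mean w_i * w_j
A = np.triu(A, 1)
A = A + A.T                               # symmetric with zero diagonal
print(A.sum(axis=0))                      # row sums are [16 22 7 13] on average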
First approach (based on version 2)
Let the row sum vector given by the matrix orig be [r(1), r(2), ..., r(n)].
Step 1. Take a random integer partition of the integer r(1) into exactly n-1 parts, say p(2), p(3), ..., p(n)
Step 2. Check if p(i)<=r(i) for all i=2...n. If not, go to Step 1.
Step 3. Fill out the first row and column of your random matrix with the entries 0, p(2), ..., p(n), and consider the new row sum vector [r(2)-p(2),...,r(n)-p(n)].
Repeat these steps with a matrix of order n-1.
The point is, that you randomize one row at a time, and reduce the problem to searching for a matrix of size one less.
As pointed out by the OP in the comments, this naive algorithm fails. The reason is that the matrices in question satisfy a further necessary condition on their entries, as follows:
FACT:
If A is an orig matrix with row sums [r(1), r(2), ..., r(n)] then necessarily for every i=1..n it holds that r(i)<=-r(i)+sum(r(j),j=1..n).
That is, any row sum, say the ith, r(i), is necessarily at most as big as the sum of the other row sums (not including r(i)).
In light of this, a revised algorithm is possible; a Python sketch follows the steps. Note that in Step 2b we check whether the new row sum vector has the property discussed above.
Step 1. Take a random integer partition of the integer r(1) into exactly n-1 parts, say p(2), p(3), ..., p(n)
Step 2a. Check if p(i)<=r(i) for all i=2...n. If not, go to Step 1.
Step 2b. Check if r(i)-p(i)<=-r(i)+p(i)+sum(r(j)-p(j),j=2..n) for all i=2..n. If not, go to Step 1.
Step 3. Fill out the first row and column of your random matrix with the entries 0, p(2), ..., p(n), and consider the new row sum vector [r(2)-p(2),...,r(n)-p(n)].
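A hedged Python sketch of this revised procedure (the function name and retry budget are mine; the composition sampling is one simple choice, not uniform over all partitions, and a feasible row-sum vector with an even total is assumed):

import random

def random_fixed_rowsum(r, tries=10000):
    # Symmetric, zero diagonal, non-negative integers, row sums exactly r.
    # Returns None if the retry budget runs out (e.g. infeasible r).
    n = len(r)
    if n == 1:
        return [[0]] if r[0] == 0 else None
    for _ in range(tries):
        # Step 1: random composition of r(1) into n-1 non-negative parts.
        cuts = sorted(random.randint(0, r[0]) for _ in range(n - 2))
        p = [b - a for a, b in zip([0] + cuts, cuts + [r[0]])]
        rest = [ri - pi for ri, pi in zip(r[1:], p)]
        if min(rest) < 0:                         # Step 2a
            continue
        if any(2 * x > sum(rest) for x in rest):  # Step 2b (the FACT above)
            continue
        sub = random_fixed_rowsum(rest, tries)    # recurse on order n-1
        if sub is None:
            continue
        # Step 3: first row/column is 0, p(2), ..., p(n).
        return [[0] + p] + [[p[i]] + sub[i] for i in range(n - 1)]
    return None

print(random_fixed_rowsum([16, 22, 7, 13]))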
Second approach (based on version 1)
I am not sure if this approach gives you random matrices, but it certainly gives you different matrices.
The idea here is to change some parts of your orig matrix locally, in a way which maintains all of its properties.
You should look for a random 2x2 submatrix below the main diagonal which contains strictly positive entries, like [[a,b],[c,d]], and perturb its contents by a random value r to [[a+r,b-r],[c-r,d+r]]. You make the same change above the main diagonal too, to keep your new matrix symmetric. Here the point is that the changes within the entries "cancel" each other out.
Of course, r should be chosen in a way such that b-r>=0 and c-r>=0.
You can pursue this idea to modify larger submatrices too. For example, you might choose 3 random row coordinates r1, r2, r3 and 3 random column coordinates c1, c2, and c3 and then make changes in your orig matrix at the 9 positions (ri,cj) as follows: you change your 3x3 submatrix [[a b c],[d e f],[g h i]] to [[a-r b+r c],[d+r e f-r],[g h-r i+r]]. You do the same at the transposed places. Again, the random value r must be chosen so that a-r>=0 and f-r>=0 and h-r>=0. Moreover, c1 and r1, and c3 and r3 must be distinct, as you can't change the 0 entries on the main diagonal of the matrix orig.
You can repeat such things over and over again, say 100 times, until you find something which looks random. Note that this idea uses the fact that you have existing knowledge of a solution, this is the matrix orig, while the first approach does not use such knowledge at all.
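A hedged NumPy sketch of a single such 2x2 switch (the function name is mine; drawing four mutually distinct indices requires n >= 4 and guarantees the diagonal is never touched):

import numpy as np

def local_switch(A, rng):
    # One random switch [[a+r, b-r], [c-r, d+r]] plus its mirror image:
    # preserves symmetry, the zero diagonal, and every row/column sum.
    i, j, k, l = rng.choice(A.shape[0], size=4, replace=False)
    r_max = min(A[i, l], A[j, k])   # keep b - r >= 0 and c - r >= 0
    if r_max < 1:
        return False
    r = rng.integers(1, r_max + 1)
    for x, y, s in ((i, k, r), (i, l, -r), (j, k, -r), (j, l, r)):
        A[x, y] += s
        A[y, x] += s                # the same change above the diagonal
    return True

rng = np.random.default_rng(0)
A = np.array([[0, 12, 3, 1], [12, 0, 1, 9], [3, 1, 0, 3], [1, 9, 3, 0]])
for _ in range(100):                # repeat, say, 100 times
    local_switch(A, rng)
print(A.sum(axis=0))                # still [16 22 7 13]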

minimum steps required to make array of integers contiguous

Given a sorted array of distinct integers, what is the minimum number of steps required to make the integers contiguous? The condition is that in a step, only one element can be changed, and it can be either increased or decreased by 1. For example, if we have 2,4,5,6 then '2' can be made '3', thus making the elements contiguous (3,4,5,6). Hence the minimum number of steps here is 1. Similarly for the array 2,4,5,8:
Step 1: '2' can be made '3'
Step 2: '8' can be made '7'
Step 3: '7' can be made '6'
Thus the sequence now is 3,4,5,6 and the number of steps is 3.
I tried the following, but I am not sure if it's correct:
//n is the number of elements in array a
int count=a[n-1]-a[0]-1;
for(i=1;i<=n-2;i++)
{
    count--;
}
printf("%d\n",count);
Thanks.
The intuitive guess is that the "center" of the optimal sequence will be the arithmetic average, but this is not the case. Let's find the correct solution with some vector math:
Part 1: Assuming the first number is to be left alone (we'll deal with this assumption later), calculate the differences (target minus original), so 1 2 3 4 5 6 - 1 12 3 14 5 16 would yield 0 -10 0 -10 0 -10.
sidenote: Notice that a "contiguous" array by your implied definition would be an increasing arithmetic sequence with difference 1. (Note that there are other reasonable interpretations of your question: some people may consider 5 4 3 2 1 to be contiguous, or 5 3 1 to be contiguous, or 1 2 3 2 3 to be contiguous. You also did not specify if negative numbers should be treated any differently.)
theorem: The contiguous numbers must lie between the minimum and maximum number. [proof left to reader]
Part 2: Now returning to our example, assuming we took the 30 steps (sum(abs(0 -10 0 -10 0 -10))=30) required to turn 1 12 3 14 5 16 into 1 2 3 4 5 6. This is one correct answer. But 0 -10 0 -10 0 -10+c is also an answer which yields an arithmetic sequence of difference 1, for any constant c. In order to minimize the number of "steps", we must pick an appropriate c. In this case, each time we increase or decrease c, we increase the number of steps by N=6 (the length of the vector). So for example if we wanted to turn our original sequence 1 12 3 14 5 16 into 3 4 5 6 7 8 (c=2), then the differences would have been 2 -8 2 -8 2 -8, and sum(abs(2 -8 2 -8 2 -8))=30.
Now this is very clear if you could picture it visually, but it's sort of hard to type out in text. First we took our difference vector. Imagine you drew it like so:
4|
3| *
2| * |
1| | | *
0+--+--+--+--+--*
-1| |
-2| *
We are free to "shift" this vector up and down by adding or subtracting 1 from everything. (This is equivalent to finding c.) We wish to find the shift which minimizes the number of | you see (the area between the curve and the x-axis). This is NOT the average (that would be minimizing the standard deviation or RMS error, not the absolute error).

To find the minimizing c, let's think of this as a function and consider its derivative. If the differences are all far away from the x-axis (we're trying to make 101 112 103 114 105 116), it makes sense to just not add this extra stuff, so we shift the function down towards the x-axis. Each time we decrease c, we improve the solution by 6. Now suppose that one of the *s passes the x axis. Each time we decrease c, we improve the solution by 5-1=4 (we save 5 steps of work, but have to do 1 extra step of work for the * below the x-axis). Eventually when HALF the *s are past the x-axis, we can NO LONGER IMPROVE THE SOLUTION (derivative: 3-3=0). (In fact soon we begin to make the solution worse, and can never make it better again. Not only have we found the minimum of this function, but we can see it is a global minimum.)
Thus the solution is as follows: Pretend the first number is in place. Calculate the vector of differences. Minimize the sum of the absolute value of this vector; do this by finding the median OF THE DIFFERENCES and subtracting that off from the differences to obtain an improved differences-vector. The sum of the absolute value of the "improved" vector is your answer. This is O(N).

The solutions of equal optimality will (as per the above) always be "adjacent". A unique solution exists only if there are an odd number of numbers; otherwise, if there are an even number of numbers AND the median-of-differences is not an integer, the equally-optimal solutions will have difference-vectors with corrective factors of any number between the two medians.
So I guess this wouldn't be complete without a final example.
input: 2 3 4 10 14 14 15 100
difference vector: 2 3 4 5 6 7 8 9 - 2 3 4 10 14 14 15 100 = 0 0 0 -5 -8 -7 -7 -91
note that the medians of the difference-vector are no longer in the middle; we need an O(N) median-finding algorithm to extract them...
medians of difference-vector are -5 and -7
let us take -5 to be our correction factor (any number between the medians, such as -6 or -7, would also be a valid choice)
thus our new goal is 2 3 4 5 6 7 8 9 + 5 = 7 8 9 10 11 12 13 14, and the new differences are 5 5 5 0 -3 -2 -2 -86*
this means we will need to do 5+5+5+0+3+2+2+86=108 steps
*(we obtain this by repeating step 2 with our new target, or by adding 5 to each number of the previous difference... but since you only care about the sum, we'd just add 8*5 (vector length times correction factor) to the previously calculated sum)
Alternatively, we could have also taken -6 or -7 to be our correction factor. Let's say we took -7...
then the new goal would have been 2 3 4 5 6 7 8 9 + 7 = 9 10 11 12 13 14 15 16, and the new differences would have been 7 7 7 2 -1 0 0 -84
this would have meant we'd need to do 7+7+7+2+1+0+0+84=108 steps, the same as above
If you simulate this yourself, you can see the number of steps becomes >108 as we take offsets further away from the range [-7, -5].
Pseudocode:
def minSteps(array A of size N):
    A' = [0,1,...,N-1]
    diffs = A'-A
    medianOfDiffs = leftMedian(diffs)
    return sum(abs(diffs-medianOfDiffs))
Python:
leftMedian = lambda x: sorted(x)[len(x)//2]

def minSteps(array):
    target = range(len(array))
    diffs = [t - a for t, a in zip(target, array)]
    medianOfDiffs = leftMedian(diffs)
    return sum(abs(d - medianOfDiffs) for d in diffs)
edit:
It turns out that for arrays of distinct integers, this is equivalent to a simpler solution: picking one of the (up to 2) medians, assuming it doesn't move, and moving other numbers accordingly. This simpler method often gives incorrect answers if you have any duplicates, but the OP didn't ask that, so that would be a simpler and more elegant solution. Additionally we can use the proof I've given in this solution to justify the "assume the median doesn't move" solution as follows: the corrective factor will always be in the center of the array (i.e. the median of the differences will be from the median of the numbers). Thus any restriction which also guarantees this can be used to create variations of this brainteaser.
Get one of the medians of all the numbers. As the numbers are already sorted, this shouldn't be a big deal. Assume that median does not move. Then compute the total cost of moving all the numbers accordingly. This should give the answer.
community edit:
def minSteps(a):
    """INPUT: list of sorted unique integers"""
    n = len(a)
    oneMedian = a[n//2]
    aTarget = [oneMedian + (i - n//2) for i in range(n)]
    # aTarget looks roughly like [m-n/2, ..., m-1, m, m+1, ..., m+n/2]
    return sum(abs(aTarget[i] - a[i]) for i in range(n))
This is probably not an ideal solution, but a first idea.
Given a sorted sequence [x_1, x_2, ..., x_n]:
Write a function that returns the differences of an element to the previous and to the next element, i.e. (x_i - x_(i-1), x_(i+1) - x_i).
If the difference to the previous element is > 1, you would have to increase all previous elements by x_i - x_(i-1) - 1. That is, the number of necessary steps would increase by the number of previous elements × (x_i - x_(i-1) - 1). Let's call this number a.
If the difference to the next element is > 1, you would have to decrease all subsequent elements by x_(i+1) - x_i - 1. That is, the number of necessary steps would increase by the number of subsequent elements × (x_(i+1) - x_i - 1). Let's call this number b.
If a < b, then increase all previous elements until they are contiguous to the current element. If a > b, then decrease all subsequent elements until they are contiguous to the current element. If a = b, it doesn't matter which of these two actions is chosen.
Add up the number of steps taken in the previous step (by increasing the total number of necessary steps by either a or b), and repeat until all elements are contiguous.
First of all, imagine that we pick an arbitrary target of contiguous increasing values and then calculate the cost (number of steps required) for modifying the array to match.
Original: 3 5 7 8 10 16
Target: 4 5 6 7 8 9
Difference: +1 0 -1 -1 -2 -7 -> Cost = 12
Sign: + 0 - - - -
Because the input array is already ordered and distinct, it is strictly increasing. Because of this, it can be shown that the differences will always be non-increasing.
If we change the target by increasing it by 1, the cost will change. Each position in which the difference is currently positive or zero will incur an increase in cost by 1. Each position in which the difference is currently negative will yield a decrease in cost by 1:
Original: 3 5 7 8 10 16
New target: 5 6 7 8 9 10
New Difference: +2 +1 0 0 -1 -6 -> Cost = 10 (decrease by 2)
Conversely, if we decrease the target by 1, each position in which the difference is currently positive will yield a decrease in cost by 1, while each position in which the difference is zero or negative will incur an increase in cost by 1:
Original: 3 5 7 8 10 16
New target: 3 4 5 6 7 8
New Difference: 0 -1 -2 -2 -3 -8 -> Cost = 16 (increase by 4)
In order to find the optimal values for the target array, we must find a target such that any change (increment or decrement) will not decrease the cost. Note that an increment of the target can only decrease the cost when there are more positions with negative difference than there are with zero or positive difference. A decrement can only decrease the cost when there are more positions with a positive difference than with a zero or negative difference.
Here are some example distributions of difference signs. Remember that the differences array is non-increasing, so positives always have to be first and negatives last:
    C C
+ + + - - -    optimal
+ + 0 - - -    optimal
0 0 0 - - -    optimal
+ 0 - - - -    can increment (negatives exceed positives & zeroes)
+ + + 0 0 0    optimal
+ + + + - -    can decrement (positives exceed negatives & zeroes)
+ + 0 0 - -    optimal
+ 0 0 0 0 0    optimal
    C C
Observe that if one of the central elements (marked C) is zero, the target must be optimal. In such a circumstance, at best any increment or decrement will not change the cost, but it may increase it. This result is important, because it gives us a trivial solution. We pick a target such that a[n/2] remains unchanged. There may be other possible targets that yield the same cost, but there are definitely none that are better. Here's the original code modified to calculate this cost:
//n is the number of elements in array a
int targetValue;
int cost = 0;
int middle = n / 2;
int startValue = a[middle] - middle;
for (i = 0; i < n; i++)
{
    targetValue = startValue + i;
    cost += abs(targetValue - a[i]);
}
printf("%d\n",cost);
You cannot do it by iterating over the array once, that's for sure.
You first need to check the difference between each two numbers; for example:
2,7,8,9 can be 2,3,4,5 with 12 steps or 6,7,8,9 with 4 steps.
Create a new array with the consecutive differences: for 2,7,8,9 it will be 5,1,1. Now you can decide whether to increase or decrease the first number.
Let's assume that the contiguous array looks something like this -
c c+1 c+2 c+3 .. and so on
Now let's take an example -
5 7 8 10
The contiguous array in this case will be -
c c+1 c+2 c+3
In order to get the minimum number of steps, the total difference between the integers before and after, summed over the indices i, should be minimal. Taking squared differences so that the function is differentiable,
(c-5)^2 + (c-6)^2 + (c-6)^2 + (c-7)^2 should be minimal
Let f(c) = (c-5)^2 + (c-6)^2 + (c-6)^2 + (c-7)^2
= 4c^2 - 48c + 146
Applying differential calculus to get the minima,
f'(c) = 8c - 48 = 0
=> c = 6
So our contiguous array is 6 7 8 9 and the minimum cost here is 2.
To sum it up, just generate f(c), take its first derivative, and solve for c.
This should take O(n).
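A hedged sketch of this calculus approach (the function name is mine). Note that f(c) is a sum of squares, so its minimizer is the mean of the differences a[i] - i; the other answers minimize the absolute error, whose optimum is the median. Both land on c = 6 for this example:

def min_steps_calculus(a):
    # Minimize f(c) = sum over i of (c - (a[i] - i))^2; setting f'(c) = 0
    # gives c = mean of the differences, rounded to the nearest integer.
    diffs = [x - i for i, x in enumerate(a)]
    c = round(sum(diffs) / len(diffs))
    return sum(abs(d - c) for d in diffs)

print(min_steps_calculus([5, 7, 8, 10]))  # 2 -> target 6 7 8 9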
Brute force approach, O(N*M) where M is the number of candidate start values
Draw a line of slope 1 through each point (i, y) of the array a; y0 = y - i is the value where that line crosses index 0. The answer is then the minimum, over all such start values y0, of the number of steps required to move a onto the line starting at y0, in Python:
y0s = set(y - i for i, y in enumerate(a))
nsteps = min(sum(abs(y - (y0 + i)) for i, y in enumerate(a))
             for y0 in range(min(y0s), max(y0s) + 1))
Input
2,4,5,6
2,4,5,8
Output
1
3
