Finding the maximum sum in a matrix - algorithm

Input: A 2-dimensional NxN symmetric matrix with NxN positive elements.
Output: An NxN matrix in which N selected elements have the maximum possible sum over all valid selections; the elements that are not selected are zero. In other words, we should select N elements from the matrix to maximize their sum.
Requirement: If the matrix is 4x4, we should select 4 elements. No index may be used more than twice, counting both its row and column appearances.
For example, if we have a 4x4 matrix, the following elements could be selected:
(1,2)
(2,3)
(3,4)
(4,1)
but if we select (1,2) and (4,1), we cannot also select (1,3), because index 1 has already been used twice.
Is there an efficient algorithm for this problem?

This problem is in that weird place where it's not obvious that there's a solution polytope with integral vertices and yet its similarity to general matching discourages me from looking for an NP-hardness reduction. Algorithms for general matching are complicated enough that modifying one would be a daunting prospect even if there were a way to do it, so my advice would be to use an integer program solver. There's a formulation like
maximize    sum_{ij} w_{ij} x_{ij}
subject to  sum_{ij} x_{ij} = n
            for all k:    sum_i x_{ik} + sum_j x_{kj} ≤ 2
            for all i, j: x_{ij} ∈ {0, 1},
where w_{ij} is the (i, j)th element of the weight matrix.
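As a minimal sketch, here is that program in Python using the PuLP solver library (one of several solver front ends you could use; the function name and 0-based indexing are mine):

import pulp

def max_sum_selection(w):
    # w: n-by-n weight matrix (list of lists). Select exactly n entries,
    # with each index used at most twice across rows and columns combined,
    # maximizing the total selected weight.
    n = len(w)
    prob = pulp.LpProblem("max_sum_selection", pulp.LpMaximize)
    x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n)]
         for i in range(n)]
    # Objective: sum_{ij} w_{ij} x_{ij}
    prob += pulp.lpSum(w[i][j] * x[i][j] for i in range(n) for j in range(n))
    # Exactly n entries selected
    prob += pulp.lpSum(x[i][j] for i in range(n) for j in range(n)) == n
    # Each index k used at most twice, as a row or a column index
    for k in range(n):
        prob += (pulp.lpSum(x[i][k] for i in range(n))
                 + pulp.lpSum(x[k][j] for j in range(n))) <= 2
    prob.solve()
    return [[int(x[i][j].value()) for j in range(n)] for i in range(n)]

The returned 0/1 matrix marks the selected positions; multiplying it element-wise by w gives the output matrix the question asks for.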

Related

Integer partition weighted minimum of maximum

Given a non-negative integer n and a positive real weight vector w of dimension m, partition n into a length-m non-negative integer vector that sums to n (call it v) such that max_i w_i v_i is as small as possible; that is, we want to find the vector v for which the maximum of the element-wise product of w and v is smallest. There may be several such partitions, and we only want the smallest value of max_i w_i v_i among all possible v.
It seems like this problem could be solved with a greedy algorithm: from a target vector v for n-1, add 1 to each entry in turn and take the minimum among those m candidate vectors. But I don't think that is correct. The intuition is that it might add "over" the minimum; that is, there may exist another partition, not produced by the add-1 procedure, that falls between the "minimum" for n-1 and the "minimum" for n produced by this greedy algorithm. Can anyone prove whether this is correct or incorrect?
If you already know the maximum element-wise product P, then you can just set v_i = floor(P / w_i) until you run out of n.
Use binary search to find the smallest possible value of P.
The largest guess you need to try is n * min(w), so that means testing log(n) + log(min(w)) candidates, spending O(m) time on each test, or O(m * (log n + log min(w))) altogether.
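A Python sketch of this search (function names are mine); since the weights are real, it bisects on P for a fixed number of iterations rather than enumerating exact candidate values:

import math

def feasible(P, w, n):
    # With cap P, coordinate i can hold at most floor(P / w_i) units;
    # P is achievable iff those capacities cover all n units.
    return sum(math.floor(P / wi) for wi in w) >= n

def min_max_product(n, w, iters=100):
    # The answer lies in [0, n * min(w)], since putting all n units on
    # the lightest coordinate is always feasible; bisect on P.
    lo, hi = 0.0, n * min(w)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(mid, w, n):
            hi = mid
        else:
            lo = mid
    return hi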

Integer partition weighted minimum

Given a non-negative integer $n$ and a positive real weight vector $w$ of dimension $m$, partition $n$ into a length-$m$ non-negative integer vector that sums to $n$ (call it $v$) such that $w\cdot v$ is the smallest. There may be several such partitions, and we only want the value of $w\cdot v$.
It seems like this problem could be solved with a greedy algorithm: from a target vector for $n-1$, add 1 to each entry in turn and take the minimum among those $m$ candidate vectors. But I don't think that is correct. The intuition is that it might add "over" the minimum; that is, there may exist another partition, not produced by the add-1 procedure, that falls between the "minimum" for $n-1$ and the "minimum" for $n$ produced by this greedy algorithm. Can anyone prove whether this is correct or incorrect?
Without loss of generality, assume that the elements of w are non-decreasing. Let v be an m-vector of non-negative integers that sum to n. Then the smallest inner product of v and w is achieved by setting v[0] = n and v[i] = 0 for i > 0.
This is easy to prove. Suppose v is any other vector with v[i] > 0 for some i > 0. Then we can increase v[0] by v[i] and reduce v[i] to zero. The elements of v still sum to n, and the inner product of v and w is reduced by v[i] * (w[i] - w[0]) >= 0.
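As a sketch in Python (the function name is mine), the whole answer therefore collapses to a single expression:

def min_weighted_partition(n, w):
    # Put all n units on the smallest weight; by the exchange
    # argument above, no other partition does better.
    return n * min(w)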

How to write pseudocode to calculate the elements of a matrix?

How can I write pseudocode to calculate the elements of a matrix?
The following algorithm finds the sum S of two n-by-n matrices A[0..n-1, 0..n-1] and B[0..n-1, 0..n-1].
To add two matrices, we add their corresponding elements. The resulting matrix S[0..n-1, 0..n-1] is an n-by-n matrix with elements computed by the formula:
S[i, j] = A[i, j] + B[i, j]
Write the pseudocode for calculating the elements of matrix S.
ALGORITHM addMatrices(A[0..n-1,0..n-1], B[0..n-1, 0..n-1])
// Input: Two n-by-n matrices A and B
// Output: Matrix S = A + B
(i) What is the algorithm’s basic operation?
(ii) How many times is the basic operation executed?
(iii) What is the class O(…) the algorithm belongs to?
The algorithm's basic operation is to visit every element with two nested loops, adding the element of matrix A to the corresponding element of matrix B (designated by i and j) and storing the result in a new matrix S.
S denotes the sum matrix of A and B.
The basic operation is executed n² times, where n is the order of the matrices: once for each i ∈ {0, ..., n1-1} and each j ∈ {0, ..., n2-1}, where n1 and n2 are the matrix dimensions (here n1 = n2 = n).
From part (ii), the algorithm belongs to O(n²), or O(n1·n2) if the matrices are rectangular.
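For reference, a runnable Python version of the pseudocode the exercise asks for (a sketch; the function name is mine), which makes the two loops and the basic operation explicit:

def add_matrices(A, B):
    # Input: two n-by-n matrices A and B (lists of lists)
    # Output: the matrix S = A + B
    n = len(A)
    S = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # the basic operation: one element-wise addition
            S[i][j] = A[i][j] + B[i][j]
    return S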

Computing Combinations

I am facing difficulty in coming up with a solution for the problem given below:
We are given n boxes, each with an associated weight (each ball in box B_i has weight C_i).
Each box contains some balls, specifically
{b_1, b_2, b_3, ..., b_n} (b_i is the count of balls in box B_i).
We have to choose m balls such that the sum of the weights of the m chosen balls is less than a given number T.
How many ways are there to do it?
First, let's have a look at a similar problem:
If you were looking to maximize the sum (such that it is still smaller than T), you would be facing a variation of the subset-sum problem, which is NP-hard. The variation with a constant number of items is discussed in this thread: Sum-subset with a fixed subset size.
An alternative way to look at that problem is as a 2-dimensional knapsack problem, where weight = cost, with an extra dimension for the number of elements. This concept is discussed in this thread: What's the fastest way to solve knapsack prob with two properties
Now, back to your problem: finding the number of possible ways to achieve a sum smaller than or equal to T is still NP-hard.
Assume you had a polynomial algorithm for it; call it A.
Running A(T) and A(T-1) gives you two numbers. If A(T) > A(T-1), the answer to the subset-sum instance is true; otherwise it is false. Thus, given a polynomial solution to this problem, we could prove P = NP.
You can solve it by using dynamic programming techniques.
Let f[i][j][k] denote the number of ways to choose j balls from boxes B_1 to B_i with the sum of weights exactly k. Since the problem asks for sums strictly less than T, the answer is the sum of f[n][m][k] over all k < T.
Initially, let f[0][0][0] = 1 and every other entry be 0.
for i = 1 to n
    for j = 0 to m
        for k = 0 to T
            for x = 0 to min(b_i, j)  # choose x balls from B_i
                y = x * C_i
                if y <= k
                    f[i][j][k] += f[i-1][j-x][k-y] * Comb(b_i, x)
Comb(n,k) is the number of ways to choose k elements from n elements.
The time complexity is O(n m T b) where b is the maximum number of balls in a box.
Note that, because the running time depends on the magnitude of T, the algorithm is only pseudo-polynomial, which is consistent with the NP-hardness argument above. In practice, when T is relatively small, this algorithm is still feasible.
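A Python sketch of this dynamic program (the function name is mine), assuming integer weights; math.comb plays the role of Comb, and only the previous layer of f is kept in memory:

from math import comb

def count_selections(b, C, m, T):
    # b[i]: number of balls in box i; C[i]: weight of each ball in box i.
    # f[j][k] = ways to pick j balls from the boxes seen so far with
    # total weight exactly k (the i dimension of the recurrence is
    # rolled away by keeping only the previous table).
    n = len(b)
    f = [[0] * (T + 1) for _ in range(m + 1)]
    f[0][0] = 1
    for i in range(n):
        g = [[0] * (T + 1) for _ in range(m + 1)]
        for j in range(m + 1):
            for k in range(T + 1):
                for x in range(min(b[i], j) + 1):  # choose x balls from box i
                    y = x * C[i]
                    if y <= k:
                        g[j][k] += f[j - x][k - y] * comb(b[i], x)
        f = g
    # The chosen weights must sum to strictly less than T.
    return sum(f[m][k] for k in range(T))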

Find the values greater than x in an N-dimensional matrix, where x is a sum of indices

We are given an N-dimensional matrix of order [m][m][m]... (N times), where each position contains the sum of its indices.
For example, in a 6x6 matrix A, the value at position A[3][4] will be 7.
We have to find the total count of elements greater than x. For a 2-dimensional matrix we have the following approach:
If we know one index, say [i][j] with i+j = x, then we can trace the diagonal by repeatedly doing [i++][j--] or [i--][j++], with the constraint that i and j always stay in the range 0 to m.
For example, in the two-dimensional matrix A[6][6], for value A[3][4] (x = 7) the diagonal can be traced via:
A[1][6] -> A[2][5] -> A[3][4] -> A[4][3] -> A[5][2] -> A[6][1]
Here we have converted our problem into another problem: count the elements below the diagonal, including the diagonal itself.
We can do this count in O(m) time instead of spending O(m^2), where 2 is the order of the matrix.
But if we consider an N-dimensional matrix, how will we do it? In an N-dimensional matrix, if we know the index of a location whose components sum to x, say A[i1][i2][i3][i4]...[in],
then there may be multiple diagonals that satisfy the condition: after doing i1--, we could increment any of {i2, i3, i4, ..., in}.
So the approach used for the 2-dimensional matrix becomes useless here, because it relied on there being only two index variables, i1 and i2.
Please help me find a solution.
For 2D: the count of the elements below the diagonal is a triangular number.
For 3D: the count of the elements below the diagonal plane is a tetrahedral number.
Note that the Kth tetrahedral number is the sum of the first K triangular numbers.
For nD: it is an n-simplex number (the Kth n-simplex number is the sum of the first K (n-1)-simplex numbers).
The value of the Kth n-simplex number is
S(k, n) = k * (k+1) * (k+2) * ... * (k+n-1) / n! = BinomialCoefficient(k+n-1, n)
Edit: this method works as-is only for values of x below the main anti-diagonal (hyper)plane.
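A one-line Python sketch of this formula, using math.comb for the binomial coefficient (the function name is mine):

from math import comb

def simplex_number(k, n):
    # S(k, n) = k * (k+1) * ... * (k+n-1) / n! = C(k+n-1, n)
    return comb(k + n - 1, n)

# Sanity check: n = 2 gives the triangular numbers 1, 3, 6, 10, ...
# and n = 3 gives the tetrahedral numbers 1, 4, 10, 20, ...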
Generating function approach:
Suppose we have the polynomial
A(s) = 1 + s + s^2 + s^3 + ... + s^m
Then its nth power
B(s) = A(s)^n has an important property: the coefficient of the kth power of s is the number of ways to compose k from n summands, each in the range 0..m. So the sum of the coefficients of s^0 through s^k gives us the count of the elements below the kth diagonal.
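A naive polynomial-multiplication sketch of this in Python, assuming 0-based indices in 0..m (the function names are mine); for large inputs you would use the closed form above or faster multiplication instead:

def power_coefficients(m, n):
    # Coefficients of B(s) = A(s)^n with A(s) = 1 + s + ... + s^m:
    # coef[k] = number of ways to write k as a sum of n values in 0..m.
    coef = [1]
    for _ in range(n):
        nxt = [0] * (len(coef) + m)
        for i, c in enumerate(coef):
            for j in range(m + 1):
                nxt[i + j] += c
        coef = nxt
    return coef

def count_up_to_diagonal(m, n, k):
    # Number of positions whose index sum is at most k.
    return sum(power_coefficients(m, n)[:k + 1])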
For the 2-dimensional matrix, you converted the problem into another one: counting the elements below the diagonal, including the diagonal.
Try to visualize the same thing for a 3-d matrix. In the case of a 3-dimensional matrix, the problem reduces to counting the elements below the diagonal plane, including the plane itself.
