Game of choosing maximum amount after removing K coins optimally - algorithm

I am given the following task to solve:
Two players play a game. There is a pile of coins, each with a value. The players take turns, each choosing one coin per turn, and the goal is to have the highest total value at the end. Each player is forced to play optimally (which here means always taking the highest-valued coin remaining). I must find the sums of the two players / the difference between their highest possible sums.
Constraints: all values are positive integers.
The task above is a classic greedy problem. From what I've tried, the pile can be sorted with quicksort and the elements then dealt out to the two players in order. If you need a better running time, radix sort performed better on my tests. So this task is pretty easy.
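For concreteness, a minimal sketch of this no-removal version (the function name is mine; std::sort stands in for quicksort/radix sort):
#include <algorithm>
#include <functional>
#include <vector>

// Sort descending, then the players alternate taking the best remaining coin.
// Returns the score difference (player 1 minus player 2).
long long greedyDifference(std::vector<long long> coins) {
    std::sort(coins.begin(), coins.end(), std::greater<long long>());
    long long diff = 0;
    for (int t = 0; t < (int)coins.size(); ++t)
        diff += (t % 2 == 0) ? coins[t] : -coins[t];  // even turns belong to player 1
    return diff;
}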
Now I have the same task as above, BUT the first player must remove OPTIMALLY K coins, such that the difference between the two scores is maximal. This sounds like DP, but I can't come up with the solution. Again I must find the maximal difference between their points (with both players playing optimally), or the points of the two players in such a way that the difference between them is maximal.
Is there such an algorithm already implemented? Or can someone give me some tips on this issue?

Here is a DP approach. We consider n coins, sorted in descending order to simplify the notation (coins[0] is the highest-valued coin, coins[n-1] the lowest), and we want to remove k coins so as to win the game by as large a margin as possible.
We will consider a matrix M with n-k+1 rows and k+1 columns (indices i = 0..n-k and j = 0..k).
M stores the following: M(i, j) is the best possible score difference after i turns have been played, when j coins have been removed out of the i+j best coins. It may sound a bit counter-intuitive at first, but it is exactly what we are looking for.
Indeed, we already have a value to initialize our matrix: M(0, 0) = 0.
We can also see that M(n-k, k) is the solution to the problem we want to solve.
We now need recurrence equations to fill up our matrix. We consider that we want to maximize the score difference for the first player. To maximize the score difference for the second player, the approach is the same, just modify some signs.
if i = 0:
    M(i, j) = 0 // score difference is 0 after playing 0 turns, whatever has been removed
else if j = 0 and i % 2 = 1: // turn i is played by player 1
    M(i, j) = M(i-1, j) + coins[i+j-1]
else if j = 0 and i % 2 = 0: // turn i is played by player 2
    M(i, j) = M(i-1, j) - coins[i+j-1]
else if i % 2 = 1:
    M(i, j) = max(M(i, j-1), M(i-1, j) + coins[i+j-1])
else:
    M(i, j) = max(M(i, j-1), M(i-1, j) - coins[i+j-1])
(Here coins[i+j-1] is the (i+j)-th best coin, since coins is 0-indexed, and turn 1 belongs to player 1.)
This recurrence simply means that the best choice, at any point, is between removing the (i+j)-th best coin (the case where the best value is M(i, j-1)) and letting it be played (the case where the best value is M(i-1, j) +/- coins[i+j-1]).
That will give you the final score difference, but not the set of coins to remove. To find it, keep track of the 'optimal path' your program used while filling the matrix (did the best value come from M(i-1, j) or from M(i, j-1)?).
This path gives you the set you are looking for. As a sanity check, note that there are C(n, k) ('k among n') possible ways to remove k coins out of n coins, and there are likewise C(n, k) paths from the top-left to the bottom-right of the matrix if you are only allowed to move right or down.
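Here is a minimal C++ sketch of this DP (identifiers are mine; it returns only the optimal score difference, and recovering the removed set needs the path reconstruction described above):
#include <algorithm>
#include <functional>
#include <limits>
#include <vector>

// M[i][j]: best score difference (player 1 minus player 2) after i turns,
// with j coins removed out of the i+j best coins.
long long bestDifference(std::vector<long long> coins, int k) {
    int n = (int)coins.size();
    std::sort(coins.begin(), coins.end(), std::greater<long long>()); // descending
    const long long NEG = std::numeric_limits<long long>::min() / 2;
    std::vector<std::vector<long long>> M(n - k + 1, std::vector<long long>(k + 1, NEG));
    M[0][0] = 0;
    for (int i = 0; i <= n - k; ++i)
        for (int j = 0; j <= k; ++j) {
            if (i == 0 && j == 0) continue;
            long long best = NEG;
            if (j > 0) best = std::max(best, M[i][j-1]);   // remove the (i+j)-th best coin
            if (i > 0) {                                   // play it on turn i instead
                long long sign = (i % 2 == 1) ? 1 : -1;    // odd turns belong to player 1
                best = std::max(best, M[i-1][j] + sign * coins[i+j-1]);
            }
            M[i][j] = best;
        }
    return M[n-k][k];
}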
This explanation might still be unclear; do not hesitate to ask for clarification in the comments, and I'll edit the answer accordingly.

Related

Find minimum number of steps to collect target number of coins

Given a list of n houses, each containing some number of coins, and a target value t, we have to find the minimum number of steps required to reach the target.
A person can choose to start at any house, then go either right or left, collecting the coins in every house passed, until the collected total reaches the target value. The person cannot change direction.
Example: suppose the coin values in 5 houses are 5 1 2 3 4 and the target is 13; then the minimum number of steps required is 5, because we have to collect all the coins.
My Thoughts:
One way would be: for each index i, calculate the number of steps required in the left and in the right direction to reach the target, and then take the minimum over all these 2n values.
Could there be a better way?
First, let's simplify and canonicalize the problem.
Observation 1: the "choose direction" capability is redundant; if you can collect a value going from house j to house i, you can collect the same value going from i to j, so it is sufficient to consider one direction only.
Observation 2: now that we can view the problem as going from left to right (observation 1), it is clear that we are looking for a shortest contiguous subarray whose values sum to the target k or more.
This means that we can state the problem in canonical form:
Given an array a of non-negative values, find a minimal-length subarray whose values sum to k or more.
There are various ways to solve this. One simple solution uses a sorted map (a balanced tree, for example): go from left to right, maintaining the running prefix sum, and for each position look up the latest previously seen prefix sum that is at most sum - k.
Pseudo code:
solve(array, k):
    min_houses = infinity
    sum = 0
    map = new TreeMap()
    map.insert(0, -1) // handles the case where a prefix on its own reaches k
    for i from 0 to array.len() - 1:
        sum = sum + array[i]
        candidate = map.FindClosestLowerOrEqual(sum - k)
        if candidate != null: // a matching prefix sum exists
            min_houses = min(min_houses, i - candidate)
        map.insert(sum, i) // insert unconditionally, even when no candidate was found
    return min_houses
This solution runs in O(n log n): each map operation (insert or lookup) takes O(log n), and there are O(n) of them.
An optimization to O(n) is possible if we take advantage of the "non-negative" trait of the array: as we advance through the array, the candidate chosen (by the map seek) only ever moves forward.
We can utilize this by running two pointers concurrently to find the best matches, instead of searching the map from scratch as we did before.
solve(array, k):
    left = 0
    sum = 0
    min_houses = infinity
    for right from 0 to len(array) - 1:
        sum = sum + array[right]
        while sum >= k: // window [left, right] reaches the target
            min_houses = min(min_houses, right - left + 1)
            sum = sum - array[left]
            left = left + 1
    return min_houses
This runs in O(n), as each index is increased at most n times, and every operation is O(1).
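For reference, the two-pointer pseudocode translates almost directly into C++ (names are mine), reproducing the example from the question:
#include <algorithm>
#include <climits>
#include <iostream>
#include <vector>

// Relies on all values being non-negative; returns INT_MAX if the target
// is unreachable.
int minHouses(const std::vector<int>& a, long long k) {
    int left = 0, best = INT_MAX;
    long long sum = 0;
    for (int right = 0; right < (int)a.size(); ++right) {
        sum += a[right];
        while (sum >= k) {                        // window [left, right] reaches the target
            best = std::min(best, right - left + 1);
            sum -= a[left++];                     // shrink the window from the left
        }
    }
    return best;
}

int main() {
    std::cout << minHouses({5, 1, 2, 3, 4}, 13) << "\n"; // prints 5 (all coins needed)
}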

Algorithm to find best combination or path through nodes

As I am not very proficient in various optimization/tree algorithms, I am seeking help.
Problem Description:
Assume a large sequence of sorted nodes is given, each node representing an integer value L. L increases from node to node, and no two nodes have the same L.
The goal now is to find the best combination of nodes, where the difference between the L-values of subsequent nodes is closest to a given integer value M(L) that changes over L.
Example:
So, in the beginning I would have L = 50 and M = 100. The next nodes have L = 70,140,159,240,310.
First, the value of 159 seems to be closest to L+M = 150, so it is chosen as the right value.
However, in the next step, M=100 is still given and we notice that L+M = 259, which is far away from 240.
If we now go back and choose the node with L=140 instead, followed by 240, the overall match between the M values and the L-differences is better. The algorithm should be able to find its way back to the optimal path, even if a mistake was made along the way.
Some additional information:
1) the start node is not necessarily part of the best combination/path, but if required, one could first develop an algorithm, which chooses the best starter candidate.
2) the optimal combination of nodes is following the sorted sequence and not "jumping back" -> so 1,3,5,7 is possible but not 1,3,5,2,7.
3) in the end, the differences between the L values of chosen nodes should in the mean squared sense be closest to the M values
Every help is much appreciated!
If I understand your question correctly, you could use Dijkstra's algorithm:
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
http://www.mathworks.com/matlabcentral/fileexchange/20025-dijkstra-s-minimum-cost-path-algorithm
For that you have to know the neighbours of every node and create an adjacency matrix. With the implementation of Dijkstra's algorithm posted above you can specify edge weights. You could specify the weight of the edge from node u to node v so that it measures the deviation of L(v) from L(u) + M (for instance the squared difference (L(v) - L(u) - M)^2). That way, for every node combination the edge cost reflects how well the step matches M, and the algorithm should find the optimal path between your nodes.
To get all edge combinations you can use Matlab's graph functions:
http://se.mathworks.com/help/matlab/ref/graph.html
If I understand your problem correctly you need an undirected graph.
You can access all edges with the command
G.Edges after you have created the graph.
I know it's not the perfect answer, but I hope it helps!
P.S. Just watch out: Dijkstra's algorithm can only handle non-negative edge weights.
Suppose we are given a number M and a list of n numbers, L[1], ..., L[n], and we want to find a subsequence of at least q of the latter numbers that minimises the sum of squared errors (SSE) with respect to M, where the SSE of a list of k positions x[1], ..., x[k] with respect to M is given by
SSE(M, x[1], ..., x[k]) = sum((L[x[i]]-L[x[i-1]]-M)^2) over all 2 <= i <= k,
with the SSE of a list of 0 or 1 positions defined to be 0.
(I'm introducing the parameter q and associated constraint on the subsequence length here because without it, there always exists a subsequence of length exactly 2 that achieves the minimum possible SSE -- and I'm guessing that such a short sequence isn't helpful to you.)
This problem can be solved in O(qn^2) time and O(qn) space using dynamic programming.
Define f(i, j) to be the minimum sum of squared errors achievable under the following constraints:
The number at position i is selected, and is the rightmost selected position. (Here, i = 0 implies that no positions are selected.)
We require that at least j (instead of q) of these first i numbers are selected.
Also define g(i, j) to be the minimum of f(k, j) over all 0 <= k <= i. Thus g(n, q) will be the minimum sum of squared errors achievable on the entire original problem. For efficient (O(1)) calculation of g(i, j), note that
g(i>0, j>0) = min(g(i-1, j), f(i, j))
g(0, 0) = 0
g(0, j>0) = infinity
To calculate f(i, j), note that if i > 0 then any solution must be formed by appending the i-th position to some solution Y that selects at least j-1 positions and whose rightmost selected position is exactly k, for some k < i. The total SSE of this solution to the (i, j) subproblem will be whatever the SSE of Y was, plus a fixed term of (L[i]-L[k]-M)^2 -- so to minimise this total SSE, it suffices to minimise the SSE of Y. And we can compute that minimum: by definition, it is f(k, j-1).
Since this holds for any 1 <= k < i, it suffices to try all such values of k, and take the one that gives the lowest total SSE:
f(i>=j, j>=2) = min of (f(k, j-1) + (L[i]-L[k]-M)^2) over all 1 <= k < i
f(i>=j, j<2) = 0 # If we only need 0 or 1 positions, the SSE is 0
f(i, j>i) = infinity # Can't choose more than i positions if the rightmost chosen position is i
With the above recurrences and base cases, we can compute g(n, q), the minimum possible sum of squared errors for the entire problem. By memoising values of f(i, j) and g(i, j), the time to compute all needed values of f(i, j) is O(qn^2), since there are at most (n+1)*(q+1) possible distinct combinations of input parameters (i, j), and computing a particular value of f(i, j) requires at most (n+1) iterations of the loop that chooses values of k, each iteration of which takes O(1) time outside of recursive subcalls. Storing solution values of f(i, j) requires at most (n+1)*(q+1), or O(qn), space, and likewise for g(i, j). As established above, g(i, j) can be computed in O(1) time when all needed values of f(x, y) have been computed, so g(n, q) can be computed in the same time complexity.
To actually reconstruct a solution corresponding to this minimum SSE, you can trace back through the computed values of f(i, j) in reverse order, each time looking for a value of k that achieves a minimum value in the recurrence (there may in general be many such values of k), setting i to this value of k, and continuing on until i=0. This is a standard dynamic programming technique.
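A minimal C++ sketch of these recurrences (identifiers are mine; L is 0-indexed here, so position i in the text corresponds to L[i-1], and the function returns g(n, q)):
#include <algorithm>
#include <limits>
#include <vector>

// Returns the minimum SSE over subsequences selecting at least q positions.
double minSSE(const std::vector<double>& L, int q, double M) {
    const int n = (int)L.size();
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<std::vector<double>> f(n + 1, std::vector<double>(q + 1, INF));
    std::vector<std::vector<double>> g(n + 1, std::vector<double>(q + 1, INF));
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= q; ++j) {
            if (j > i) {
                // impossible: f stays infinity
            } else if (j < 2) {
                f[i][j] = 0.0;                   // 0 or 1 selected positions: SSE is 0
            } else {
                for (int k = j - 1; k < i; ++k) { // k: previous rightmost selected position
                    if (f[k][j-1] == INF) continue;
                    double d = L[i-1] - L[k-1] - M;
                    f[i][j] = std::min(f[i][j], f[k][j-1] + d * d);
                }
            }
            g[i][j] = (i > 0) ? std::min(g[i-1][j], f[i][j]) : f[i][j];
        }
    return g[n][q];
}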
I am now answering my own post with my current implementation, in order to structure the post and include images. Unfortunately, the code does not do what it should. Imagine L, M and q given as in the images below. With the calcf and calcg functions I computed the F and G matrices, where F(i+1, j+1) stores the calculated f(i, j) and G(i+1, j+1) stores g(i, j). The SSE of the optimal combination should be G(N+1, q+1), but the result is wrong. If anyone can spot the mistake, that would be much appreciated.
[Image: the G and F matrices of the given problem in the workspace; G and F are created by computing g(N, q) via calcg(L, N, q, M).]
[Image: the calcf and calcg functions.]

A stone game - 2 players play perfectly

Recently I have learned about the Nim game and Grundy numbers.
I am stuck on a problem. Please give me some ideas.
Problem:
A and B play a game with a pile of stones. A starts the game and they alternate moves. In each move, a player has to remove at least one and at most the square root of the current number of stones from the pile. So, for example, if a pile contains 10 stones, a player can take 1, 2 or 3 stones.
Both A and B play perfectly. The player who cannot make a valid move loses. Given the number of stones, you have to find the player who will win if both play optimally.
Example
n = 3: A wins
n = 5: B wins
Constraint: n <= 10^12
I can solve this problem for a small number of stones by using Grundy numbers: https://www.topcoder.com/community/data-science/data-science-tutorials/algorithm-games/?
The Grundy function is g(x), where x is the number of stones remaining.
Let F(s) be the set of positions reachable from a pile of s stones.
If s is a terminal position, g(s) = 0.
If s is not a terminal position, let X = {g(t) | t in F(s)}. Then the Grundy number of s is the smallest integer >= 0 which is not in X.
For example, for x = 10 we have F(x) = {9, 8, 7}, by taking 1, 2 or 3 stones (floor(sqrt(10)) = 3).
If g(n) > 0, the first player wins.
g(0)=0
g(1)=1
g(2)=0
g(3)=1
g(4)=2
....
But this algorithm is too slow.
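For reference, a direct implementation of this slow Grundy computation might look like this (identifiers are mine):
#include <cmath>
#include <vector>

// g[s] is the Grundy number of a pile of s stones; fine for small n,
// far too slow for n up to 10^12.
std::vector<int> grundyTable(int n) {
    std::vector<int> g(n + 1, 0);             // g[0] = 0: terminal position
    for (int s = 1; s <= n; ++s) {
        int maxTake = (int)std::sqrt((double)s);
        std::vector<char> seen(maxTake + 2, 0);
        for (int k = 1; k <= maxTake; ++k)    // mark Grundy values of reachable positions
            if (g[s - k] <= maxTake + 1) seen[g[s - k]] = 1;
        int mex = 0;
        while (seen[mex]) ++mex;              // smallest value >= 0 not in X
        g[s] = mex;
    }
    return g;
}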
Thanks in advance.
I'm adding a second answer because my first answer provides the background theory without the optimization. But since OP clearly is looking for some optimization and for a very fast solution without a lot of recursion, I took my own advice:
Of course, the really fast way to do this is to do some more math and figure out some simple properties of n you can check that will determine whether or not it is a winner or a loser.
I'm going to use the terminology I defined there, so if this isn't making sense, read that answer! Specifically, n is the pile size, k is the number of stones to remove, n is a winner if there is a winning strategy for player A starting with a pile of size n and it is a loser otherwise. Let's start with the following key insight:
Most numbers are winners.
Why is this true? It isn't obvious for small numbers: 0 is a loser, 1 is a winner, 2 is a loser, 3 is a winner, so is 4, but 5 is a loser again. But for larger numbers, it becomes more obvious.
If some integer p is large and is a loser then p+1, p+2, ... p+k_m are all winners for some k_m that is around the size of sqrt(p). This is because once I find a loser, for any pile that isn't too much larger than that, I can remove a few stones to leave my opponent with that loser. The key is just determining what the largest valid value of k is, since k is defined in terms of the starting pile size n rather than the final pile size p.
So the question becomes: given some integer p, for which values of k is it true that k <= sqrt(n), where n = p + k? In other words, given p, which starting pile sizes n allow me to remove k stones and leave my opponent with p? Well, since n = p + k and the values are all nonnegative, we must have
k <= sqrt(n) = sqrt(p+k) ==> k^2 <= p + k ==> k^2 - k - p <= 0.
This is a quadratic equation in k for any fixed value of p. The endpoints of the solution set can be found using the quadratic formula:
k = (1 +- sqrt(1 + 4p))/2
So, the inequality is satisfied for values of k between (1-sqrt(1+4p))/2 and (1+sqrt(1+4p))/2. Of course, sqrt(1+4p) is at least sqrt(5) > 2, so the left endpoint is negative. So then k_m = floor((1+sqrt(1+4p))/2).
More importantly, I claim that the next loser after p is the number L = p + k_m + 1. Let's try to prove this:
Theorem: If p is a loser, then L = p + k_m + 1 is also a loser and every integer p < n < L is a winner.
Proof: We have already shown that every integer n in the interval [p+1, p+k_m] is a winner, so we only need to prove that L is a loser.
Suppose, to the contrary, that L is a winner. Then there exists some 1 <= k <= sqrt(L) such that L - k is a loser (by definition). Since we have proven that the integers p+1, ..., p+k_m are winners, we must have that L - k <= p since no number smaller than L and larger than p can be a loser. But this means that L <= p + k and so k satisfies the equation k <= sqrt(L) <= sqrt(p + k). We have already shown that solutions to the equation k <= sqrt(p + k) are no larger than (1+sqrt(1+4p))/2, so any integer solution must satisfy k <= k_m. But then L - k = p + k_m + 1 - k >= p + k_m + 1 - k_m = p + 1. This is a contradiction since p < L - k < L and we have already proved that there are no losers larger than p and smaller than L.
QED
The above theorem gives us a nice approach since we now know that winners fall into intervals of integers separated by a single loser and we know how to calculate the interval between two losers. In particular, if p is a loser, then p + k_m + 1 is a loser where
k_m = floor((1+sqrt(1+4p))/2).
Now we can rewrite the function in a purely iterative manner that should be fast and requires constant space. The approach is simply to compute the sequence of losers until we either find n (in which case it is a loser) or determine that n lies in the interval between two losers.
bool is_winner(long long n) {
    long long p = 0;
    // loop through losers until we find one at least as large as n
    while (p < n) {
        long long km = (long long)floor((1 + sqrt(1 + 4.0 * p)) / 2);
        p = p + km + 1;
    }
    /* If we skipped n while computing losers, it is a winner that lies
     * in the interval between two losers. So n is a winner as long as
     * it isn't equal to p. */
    return (p != n);
}
You have to think this game recursively from the end: Clearly, to win, you have to take the last stone.
1 stone: First player wins. It's A's turn to take the only stone.
2 stones: Second player wins. A cannot take two stones but has to take one. So A is forced to take one stone and leave the other one for B to take.
3 stones: First player wins. There is still no choice. A has to take one stone, and smiles because they know that B can't win with two stones.
4 stones: First player wins. Now A has the choice to leave two or three stones. From the above, A knows that B will win if given three stones, but B will lose if given two stones, so A takes two stones.
5 stones: Second player wins. Even though A has the choice to leave three or four stones, B will win if given either amount.
As you can see, you can easily calculate who will win a game with n stones by complete knowledge of the outcomes of the games with 1 to n-1 stones.
An algorithmic solution will thus create a boolean array wins, where wins[i] is true if the player given i stones will win the game. wins[0] is initialized to false. The rest of the array is then filled iteratively from the start by scanning the reachable portion of the array for a false entry: if one is found, the current entry is set to true, because A can leave the board in a losing state for B; otherwise it is set to false. A sketch of this table computation follows.
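Here is a compact sketch of that table (the function name is mine):
#include <cmath>
#include <vector>

// wins[i] is true iff the player to move with i stones can force a win.
// O(n * sqrt(n)) time, O(n) space.
std::vector<bool> computeWins(int n) {
    std::vector<bool> wins(n + 1, false);  // wins[0] = false: no move available, you lose
    for (int i = 1; i <= n; ++i) {
        int maxTake = (int)std::sqrt((double)i);
        for (int k = 1; k <= maxTake; ++k) {
            if (!wins[i - k]) {            // we can leave the opponent a losing pile
                wins[i] = true;
                break;
            }
        }
    }
    return wins;
}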
I will build upon cmaster's answer because it is already pretty close. The question is how to efficiently calculate the values.
The answer is: We don't need the whole array. Only the false values are interesting. Let's analyze:
If we have a false value in the array, then the next few entries will be true because they can remove stones, such that the other player lands on the false value. The question is: How many true entries will be there?
If we are at the false entry z, then the entry x will be a true entry if x - sqrt(x) <= z. We can solve this for x and get:
x <= 1/2 * (1 + 2 * z + sqrt(1 + 4 * z))
This is the last true entry. E.g. for z = 2, this returns 4. The next entry will be false because the player can only remove stones, such that the opponent will come out at a true entry.
Knowing this, our algorithm is almost complete. Start at a known false value (e.g. 0). Then, iteratively move to the next false value until you reach n.
bool isWinner(long long n)
{
    double loser = 0;
    while (n > loser)
        loser = floor(0.5 * (1 + 2 * loser + sqrt(1 + 4 * loser))) + 1;
    return n != loser;
}
Games like this (Towers of Hanoi is another classic example) are meant to illustrate the mathematical principles of induction and recursion, with recursion being particularly relevant in programming.
We want to determine whether a pile of n stones is a winning pile or a losing pile. Intuitively, a winning pile is one so that no matter what sequence of choices your opponent makes you can always take some number of stones to guarantee you will win. Similarly, a losing pile is one such that no matter what choice you make, you always leave your opponent a winning strategy.
Obviously, n = 0 is a losing pile; you've already lost. And n = 1 is a winning pile since you take one stone and leave your opponent n=0. What about n=2? Well, you are only allowed to take one stone, at which point you have given your opponent a winning pile (n=1), so n=2 is a losing number. We can make this mathematically more precise.
Definition: An integer n is a loser if n=0 or for every integer k between 1 and sqrt(n), n-k is a winner. An integer n is a winner if there exists some integer k between 1 and sqrt(n) such that n-k is a loser.
In this definition, n is the size of the pile, k is the number of stones you choose to take. A pile is a losing pile if every valid number of stones to remove gives your opponent a winning pile and a winning pile is one where some choice gives your opponent a losing pile.
Of course, that definition should make you a little uneasy because we actually have no idea if it makes sense for anything other than n=0,1,2, which we already checked. Perhaps some number fits the definition of both a winner and a loser, or neither definition. This would certainly be confusing. This is where induction comes in.
Theorem: Every nonnegative integer is either a winner or a loser, but not both.
Proof: We'll use the principle of Strong or Complete Induction. We know that n=0 is a loser (by definition) and we've already shown that n=1 is a winner and n=2 is a loser directly. Those are our base cases.
Now let's consider some integer n_0 > 2 and we'll assume (using strong induction) that every nonnegative integer less than n_0 is either a winner or a loser, but not both. Let s = floor(sqrt(n_0)) and consider the set of integers P = {n_0-s, n_0-s+1, ..., n_0 - 1}. (Since {1, 2, ..., s} is the set of possible choices of stones to remove, P is the set of piles I can leave my opponent with.) By strong induction, since each value in P is a nonnegative integer less than n_0, each of them is either a winner or a loser (but not both). If any value in P is a loser, then by definition, n_0 is a winner (because you remove enough stones to leave your opponent that losing pile). If not, then every value in P is a winner, so then n_0 is a loser (because no matter how many stones you take, your opponent is still left with a winning pile). Therefore, n_0 is either a winner or a loser, but not both.
By strong induction, we conclude that every nonnegative integer is either a winner or a loser but not both.
QED
OK, that was pretty straightforward if you are comfortable with induction. But all we've shown is that our very intuitive definition actually makes sense and that every pile you get is either a winner (if you play it right) or a loser (if your opponent plays it right). How do we determine which is which?
Well, induction leads right to recursion. Let's write recursive functions for our two definitions: is n a winner or a loser? Here's some C-like pseudocode without error checking.
bool is_winner(int n) {
    // check all valid numbers of stones to remove (k)
    for (int k = 1; k <= sqrt(n); k++) {
        if (is_loser(n-k)) {
            // I can force the loser n-k on my opponent, so n is a winner
            return true;
        }
    }
    // I didn't find a way to force a loser on my opponent, so this must be a loser.
    return false;
}

bool is_loser(int n) {
    if (n == 0) { // this is our base case
        return true;
    }
    for (int k = 1; k <= sqrt(n); k++) {
        if (!is_winner(n-k)) {
            // we found a way to give them a pile that ISN'T a winner, so this isn't a loser
            return false;
        }
    }
    // Nope: every pile we can give our opponent is a winner, so this pile is a loser
    return true;
}
Of course, the code above is somewhat redundant since we've already shown that every number is either a winner or a loser. Therefore, it makes more sense to implement is_loser as just returning !is_winner or vice-versa. Perhaps we'll just do is_winner as a stand-alone implementation.
bool is_winner(int n) {
    if (n < 0) {
        // raise error
    } else if (n == 0) {
        return false; // 0 is a loser
    } else {
        for (int k = 1; k <= sqrt(n); k++) {
            if (!is_winner(n-k)) {
                // we can give opponent a loser, this is a winner
                return true;
            }
        }
        // all choices give our opponent a winner, this is a loser
        return false;
    }
}
To use this function to answer the question, if the game starts with n stones and player A goes first and both players play optimally, player A wins if is_winner(n) and player B wins if !is_winner(n). To figure out what their plays should be, if you have a winning pile, you should choose some valid k such that n-k is a loser (it doesn't matter which one, but the largest value will make the game end fastest) and if you are given a losing pile, it doesn't matter what you choose -- that is the point of a loser, but again, choosing the largest value of k will make the game end sooner.
None of this really considers performance. Since n could be quite large, there are a number of things you could consider: for example, pre-computing the common small values of n, or using memoization, at least within a single recursive call (a sketch follows below). Furthermore, as I suggested previously, removing the largest value of k ends the game in fewer turns. Similarly, if you reverse the loops and check the largest allowed values of k first, you should be able to reduce the number of recursive calls.
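For illustration, a sketch of that memoization suggestion (names are mine; it also tries the largest k first, as suggested, though it is still too slow for n near 10^12):
#include <cmath>
#include <unordered_map>

bool is_winner_memo(long long n, std::unordered_map<long long, bool>& cache) {
    if (n == 0) return false;                          // 0 is a loser
    auto it = cache.find(n);
    if (it != cache.end()) return it->second;          // already computed
    bool win = false;
    for (long long k = (long long)std::sqrt((double)n); k >= 1 && !win; --k)
        if (!is_winner_memo(n - k, cache))             // found a loser to hand over
            win = true;
    cache[n] = win;
    return win;
}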
Of course, the really fast way to do this is to do some more math and figure out some simple properties of n you can check that will determine whether or not it is a winner or a loser.
public class Solution {
    public boolean canWinNim(int n) {
        if (n % 4 == 0) {
            return false;
        } else {
            return true;
        }
    }
}

Greedy algorithm for a 0/1 matrix

I'm studying for an algorithms exam and I can't find a way to solve the following problem:
INPUT: positive integers r1, r2, ..., rn and c1, c2, ..., cn.
OUTPUT: an n by n matrix A with 0/1 entries such that for all i the sum of the i-th row of
A is ri and the sum of the i-th column is ci, if such a matrix exists.
Consider the following greedy algorithm that constructs A row by row. Assume that the first
i-1 rows have been constructed. Let aj be the number of 1's in the j-th column among the first i-1 rows. The ri columns with maximum cj - aj are assigned 1's in row i, and the rest
of the columns are assigned 0's. That is, the columns that still need the most 1's are given 1's. Formally prove that this algorithm is correct using an exchange argument.
What I tried, as with most greedy problems I've encountered, is to wrap it in an induction:
assume the rows up to the i-th row are the same in the greedy solution and in some optimal (satisfying) solution, then modify the (i+1)-th row of the latter to match the greedy choice and show this cannot break the solution; repeat until the two solutions coincide.
After trying that unsuccessfully, I tried the same idea per coordinate: assume position (i, j) is the first that does not match, and again failed.
Then I tried a different approach: treat the greedy algorithm as a sequence of steps and swap a whole step at a time (essentially the same idea as the first one), and still I could not find a way to guarantee that the exchange does not break the solution.
Thanks in advance for any assistance.
Consider some inputs r and c and a matrix S that satisfies them.
Throw away the contents of the last row in S to get a new matrix S(n-1). If we fed S(n-1) to the greedy algorithm and asked it to fill in this last row, it'd obviously recover a satisfying solution.
Well, now throw away the contents of the last two rows of S to get S(n-2). Since we know a satisfying solution exists, there is no j such that c(j) - a(j) > 2, and the number of j such that c(j) - a(j) == 2 is at most r(n-1). It follows that the greedy algorithm will set A[n-1, j] = 1 for that set of j, along with some other j for which c(j) - a(j) == 1. And because we know there's a satisfying solution, the number of j with c(j) - a(j) == 1 after the (n-1)-th row is filled must be exactly r(n), so the last row is satisfiable.
Anyway, we can extend this downwards: in S(n-k-1) it must be the case that:
there aren't any j such that c(j) - a(j) > k+1
there can be at most r(n-k) many j such that c(j) - a(j) = k+1,
and Sum(c(j) - a(j)) == Sum(r(i), i >= n-k)
So after the greedy algorithm has processed the n-kth row, it must be the case that
there aren't any j such that c(j) - a(j) > k
there can be at most r(n-k+1) many j such that c(j) - a(j) = k,
and Sum(c(j) - a(j)) == Sum(r(i), i >= n-k+1)
Hence when k = 0 (after the last row has been processed), there is no j such that c(j) - a(j) > 0, and since the total remaining demand Sum(c(j) - a(j)) is also 0, every column sum is met exactly.
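For completeness, here is a sketch of the greedy algorithm under discussion (identifiers are mine; c is copied and decremented so that it always holds the remaining demands c(j) - a(j)):
#include <algorithm>
#include <numeric>
#include <vector>

// Fill row i with 1's in the r[i] columns of largest remaining demand.
// Returns an empty matrix if the construction fails.
std::vector<std::vector<int>> buildMatrix(const std::vector<int>& r, std::vector<int> c) {
    int n = (int)r.size();
    std::vector<std::vector<int>> A(n, std::vector<int>(n, 0));
    for (int i = 0; i < n; ++i) {
        if (r[i] > n) return {};
        std::vector<int> cols(n);
        std::iota(cols.begin(), cols.end(), 0);
        // order the columns by remaining demand, largest first
        std::sort(cols.begin(), cols.end(),
                  [&](int x, int y) { return c[x] > c[y]; });
        for (int t = 0; t < r[i]; ++t) {
            if (c[cols[t]] <= 0) return {};   // not enough columns still need 1's
            A[i][cols[t]] = 1;
            --c[cols[t]];
        }
    }
    for (int j = 0; j < n; ++j)
        if (c[j] != 0) return {};             // some column sum was not met
    return A;
}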

Maximum Coin Partition

Standing at the point of sale in the supermarket yesterday, once more trying to heuristically find an optimal partition of my coins while ignoring the impatient and nervous queue behind me, I started pondering the underlying algorithmic problem:
Given a coin system with values v1, ..., vn, a limited stock of coins a1, ..., an, and the sum s we need to pay.
We're looking for an algorithm to calculate a partition x1, ..., xn (with 0 <= xi <= ai) satisfying x1*v1 + x2*v2 + ... + xn*vn >= s, such that x1 + ... + xn - R(r) is maximized, where r is the change, i.e. r = x1*v1 + x2*v2 + ... + xn*vn - s, and R(r) is the number of coins returned by the cashier. We assume that the cashier has an unlimited supply of all coins and always gives back the minimal number of coins (for example by using the greedy algorithm explained in SCHOENING et al.). We also need to rule out degenerate "money changing", so that the best solution is NOT simply to hand over all of our money (which would otherwise always be optimal).
Thanks for your creative input!
If I understand correctly, this is basically a variant of subset sum. If we assume you have 1 of each coin (a[i] = 1 for each i), then you would solve it like this:
sum[0] = true
for i = 1 to n do
    for j = maxSum downto v[i] do
        sum[j] |= sum[j - v[i]]
Then find the first k >= s such that sum[k] is true. You can get the actual coins used by keeping track of which coin contributed to each sum[j]. The closer you can get your sum to s using your coins, the less the change will be, which is what you're after.
Now you don't have 1 of each coin i, you have a[i] of each coin i. I suggest this:
sum[0] = true
for i = 1 to n do
    for j = maxSum downto v[i] do
        for k = 1 to a[i] do
            if j - k*v[i] >= 0 then
                sum[j] |= sum[j - k*v[i]] // use coin i exactly k times
It should be fairly easy to get your x vector from this. Let me know if you need any more details.
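A C++ sketch of this bounded fill (names are mine; it returns the smallest payable total >= s and omits the reconstruction of the x vector):
#include <cstddef>
#include <vector>

// v[i] are coin values, a[i] the stock of each coin. Returns -1 if even
// all coins together don't reach s.
long long smallestPayable(const std::vector<long long>& v,
                          const std::vector<long long>& a, long long s) {
    long long maxSum = 0;
    for (std::size_t i = 0; i < v.size(); ++i) maxSum += v[i] * a[i];
    if (maxSum < s) return -1;
    std::vector<char> sum(maxSum + 1, 0);
    sum[0] = 1;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (long long j = maxSum; j >= v[i]; --j)
            for (long long k = 1; k <= a[i] && k * v[i] <= j; ++k)
                sum[j] |= sum[j - k * v[i]];  // use coin i exactly k times
    for (long long t = s; t <= maxSum; ++t)
        if (sum[t]) return t;                 // smallest reachable total >= s
    return -1;                                // unreachable given the guard above
}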
