Maximum Coin Partition - algorithm

Since standing at the point of sale in the supermarket yesterday, once more trying to heuristically find an optimal partition of my coins while trying to ignore the impatient and nervous queue behind me, I've been pondering the underlying algorithmic problem:
Given a coin system with values v1, ..., vn, a limited stock a1, ..., an of each coin, and a sum s which we need to pay.
We're looking for an algorithm that calculates a partition x1, ..., xn (with 0 <= xi <= ai) satisfying x1*v1 + x2*v2 + ... + xn*vn >= s such that x1 + ... + xn - R(r) is maximized, where r is the change, i.e. r = x1*v1 + x2*v2 + ... + xn*vn - s, and R(r) is the number of coins returned by the cashier. We assume that the cashier has an unlimited supply of all coins and always gives back the minimal number of coins (for example by using the greedy algorithm explained in SCHOENING et al.). We also need to make sure there is no pointless money changing, i.e. the best solution must NOT simply be to hand over all of one's money (otherwise that solution would always be optimal).
Thanks for your creative input!

If I understand correctly, this is basically a variant of subset sum. If we assume you have 1 of each coin (a[i] = 1 for each i), then you would solve it like this:
sum[0] = true
for i = 1 to n do
    for j = maxSum downto v[i] do
        sum[j] |= sum[j - v[i]]
Then find the first k >= s for which sum[k] is true. You can recover the actual coins used by keeping track of which coin contributed to each sum[j]. The closer you can get your sum to s using your coins, the smaller the change will be, which is what you're after.
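For concreteness, here is a minimal Python sketch of that table, under the assumption of one coin per value; the function name and the maxSum bound are my own choices, not part of the answer above:
def smallest_payable_at_least(values, s):
    # Boolean DP table: reachable[j] is True iff some subset of the coins sums to exactly j.
    max_sum = sum(values)
    reachable = [False] * (max_sum + 1)
    reachable[0] = True
    for v in values:
        for j in range(max_sum, v - 1, -1):   # go downwards so each coin is used at most once
            reachable[j] = reachable[j] or reachable[j - v]
    # first reachable total k >= s, or None if even all coins together are not enough
    return next((k for k in range(s, max_sum + 1) if reachable[k]), None)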
Now you don't have 1 of each coin i, you have a[i] of each coin i. I suggest this:
sum[0] = true
for i = 1 to n do
    for j = maxSum downto v[i] do
        for k = 1 to a[i] do
            if j - k*v[i] >= 0 then
                sum[j] |= sum[j - k*v[i]]   <- use coin i k times
It should be fairly easy to get your x vector from this. Let me know if you need any more details.
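A hedged Python sketch of this bounded variant, including one way to recover the x vector (the parent bookkeeping is my own addition for the "keep track of which coin contributed" step); it simply finds the smallest payable total >= s rather than modelling R(r) explicitly:
def bounded_coin_payment(values, counts, s):
    # values[i]: coin value, counts[i]: how many of coin i you hold, s: amount to pay.
    max_sum = sum(v * a for v, a in zip(values, counts))
    if max_sum < s:
        return None                       # you simply cannot pay that much
    reachable = [False] * (max_sum + 1)
    parent = [None] * (max_sum + 1)       # parent[j] = (previous total, coin index, multiplicity)
    reachable[0] = True
    for i, (v, a) in enumerate(zip(values, counts)):
        for j in range(max_sum, v - 1, -1):
            for k in range(1, a + 1):
                if j - k * v >= 0 and reachable[j - k * v] and not reachable[j]:
                    reachable[j] = True
                    parent[j] = (j - k * v, i, k)
    t = next(t for t in range(s, max_sum + 1) if reachable[t])   # first payable total >= s
    x = [0] * len(values)
    while t > 0:
        prev, i, mult = parent[t]
        x[i] += mult
        t = prev
    return x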

Related

What does the +1 mean in the recurrence relation for the coin change problem (dynamic programming approach)?

I was looking at the Coin Change problem. In general, the input is n (the change to be returned) and the denominations (values of coins in cents) available, v1 < v2 < v3 < ... < vk; the goal is to make change for n cents with the minimum number of coins.
I was reading this pdf from Columbia university, but I don't get why, at slide number 6, we have a +1 in the recurrence relation:
Does it represent the coins we've already used?
C[p] is the minimum number of coins you need to build the amount p from your array of available denominations d.
To build such a sum you must pick coins d[i] with d[i] <= p.
Suppose you pick some coin d[i] from d. Your coin count is now one.
To complete the sum p, you still have to collect coins worth p - d[i].
But the minimum number of coins needed to make the sum p - d[i] is already known: it is C[p - d[i]].
So one possible coin count for making the sum p is 1 + C[p - d[i]].
Since there may be several denominations with d[i] <= p, you pick the one that results in the minimum, which is exactly what the function does.
So you can read the +1 in the recurrence as the first coin we commit to when making the sum p.
Suppose my denominations look like this: d = [1, 5, 10, 25]. Let's also suppose n, the change to be returned, is 26. This means that:
C[26] = min{C[26 - d[i]] + 1}
which can be expressed as:
C[26] = min{C[25], C[21], C[16], C[1]} + 1.
The "+1" here is just the coin you need to add to one of the previously-solved subproblems (e.g. C[25], C[21]) to get C[26].
If we consider an even simpler example, like n = 6 with the same denominations, we know that the recurrence will be:
C[6] = min{C[6 - d[i]]} + 1
or:
C[6] = min{C[5], C[1]} + 1
We know that C[5] is 1 (with a 5-cent coin among the denominations, a single coin makes 5 cents) and similarly C[1] = 1. The minimum here is 1, so 1 + 1 = 2, and the minimum number of coins needed to make 6 cents is 2 coins.
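As a quick sanity check, here is a small Python sketch of that recurrence (the function and variable names are mine):
def min_coins(n, d):
    # C[p] = minimum number of coins needed to make p cents from denominations d.
    INF = float("inf")
    C = [0] + [INF] * n
    for p in range(1, n + 1):
        # C[p] = min over usable denominations of C[p - d[i]] + 1
        C[p] = min((C[p - di] + 1 for di in d if di <= p), default=INF)
    return C[n]

print(min_coins(26, [1, 5, 10, 25]))   # 2  (25 + 1)
print(min_coins(6, [1, 5, 10, 25]))    # 2  (5 + 1)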

Game of choosing maximum amount after removing K coins optimally

I am given the following task to solve:
Two players play a game. In this game there are coins and each coin has a value. The players take turns and each chooses 1 coin. The goal is to have the highest total value at the end. Each player is forced to play optimally (that means always choosing the highest value from the pile). I must find the sums of the 2 players / the difference between their highest possible sums.
Constraints: All values are natural integers and positive.
The task above is a classic greedy problem. From what I've tried, the coins can be sorted with quicksort and then the two players just pick the elements in order. If you need a better time, radix sort performs better on my tests. So this task is pretty easy.
Now I have the same task as above BUT the first player must remove K coins OPTIMALLY such that the difference between their scores is maximal. This sounds like DP, but I can't come up with the solution. I must again find the maximal difference between their points (with both players playing optimally), or the points of the 2 players such that the difference between them is maximal.
Is there such an algorithm already implemented? Or can someone give me some tips on this issue?
Here is a DP solution. We consider n coins, sorted in descending order to simplify the notation (meaning coins[0] is the highest-value coin, while coins[n-1] has the lowest value), and we want to remove k coins in order to win the game with as big a margin as possible.
We will use a matrix M whose row index i ranges from 0 to n-k and whose column index j ranges from 0 to k.
M stores the following: M(i, j) is the best possible score after playing i turns, when j coins have been removed out of the i+j best coins. It may sound a bit counter-intuitive at first, but it actually is what we are looking for.
Indeed, we have already a value to initialize our matrix: M(0, 0) = 0.
We also can see that M(n-k, k) is actually the solution to the problem we want to solve.
We now need recurrence equations to fill up our matrix. We consider that we want to maximize the score difference for the first player. To maximize the score difference for the second player, the approach is the same, just modify some signs.
if i = 0:
    M(i, j) = 0                        // score difference is always 0 after playing 0 turns
else if j = 0 and i % 2 = 0:           // player 1 plays
    M(i, j) = M(i-1, j) + coins[i+j]
else if j = 0 and i % 2 = 1:           // player 2 plays
    M(i, j) = M(i-1, j) - coins[i+j]
else if i % 2 = 0:
    M(i, j) = max(M(i, j-1), M(i-1, j) + coins[i+j])
else if i % 2 = 1:
    M(i, j) = max(M(i, j-1), M(i-1, j) - coins[i+j])
This recurrence simply means that the best choice, at any point, is between removing the coin (the case where the best value is M(i, j-1)) and not removing it (the case where the best value is M(i-1, j) +/- coins[i+j]).
That will give you the final score difference, but not the set of coins to remove. To find it, you must keep the 'optimal path' your program used to calculate the matrix values (did the best value come from M(i-1, j) or from M(i, j-1)?).
This path gives you the set you are looking for. By the way, you can see this makes sense: there are 'n choose k' ways to remove k coins out of n coins, and there are just as many paths from top left to bottom right in a k by n-k grid if you are only allowed to go right or down.
This explanation might still be unclear; do not hesitate to ask for clarification in the comments and I'll edit the answer.
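For what it's worth, here is one possible Python sketch of this DP. The index and parity conventions are my own adaptation to 0-based arrays (the coin decided at step i+j is coins[i+j-1], and a kept coin goes to player 1 when its position among the kept coins is even), so the formulas differ slightly from the ones above, but the choice between "remove it" and "keep it" is the same:
def max_margin_after_removing_k(coins, k):
    coins = sorted(coins, reverse=True)            # coins[0] is the highest value
    n = len(coins)
    NEG = float("-inf")
    # M[i][j]: best score difference (player 1 minus player 2) once the i+j best
    # coins have been decided, i of them kept (and already played) and j removed.
    M = [[NEG] * (k + 1) for _ in range(n - k + 1)]
    M[0][0] = 0
    for i in range(n - k + 1):
        for j in range(k + 1):
            if i == 0 and j == 0:
                continue
            best = NEG
            if j > 0:                              # remove the (i+j)-th best coin
                best = M[i][j - 1]
            if i > 0 and M[i - 1][j] > NEG:        # keep it: it becomes the i-th coin played
                sign = 1 if (i - 1) % 2 == 0 else -1   # kept coins alternate player 1 / player 2
                best = max(best, M[i - 1][j] + sign * coins[i + j - 1])
            M[i][j] = best
    return M[n - k][k]
For example, max_margin_after_removing_k([5, 3, 1], 1) gives 4: remove the 3, player 1 then takes 5 and player 2 takes 1.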

Investors and pools - backtracking

I've decided to learn deeper the concept of backtracking and I have following task:
Given N investors, M cities, an N by M matrix P of investor preferences (P[i, j] = 1 when the i-th investor would like a pool to be built in the j-th city, P[i, j] = 0 when he is neutral, and P[i, j] = -1 when he is sceptical) and an acceptance level L (if, for a given choice of cities, the sum of an investor's preferences is greater than or equal to L, we consider him convinced). Find the maximal number of investors that can be convinced and the cities in which pools should be built.
I have tried backtracking, but I wonder if it can be optimized further. For now, on each recursion level I keep track of how many people can still possibly be convinced. If this number is less than or equal to my current maximum, I return (there will be no better answer).
I'm not sure if this is what you're looking for, but with a little trick, you can express the problem as an integer linear program (ILP). Then you can use an integer linear programming solver (for example, GLPK) to find an optimal solution.
Let s[i] be 0-1 integer variables (i ranging over investors), c[j] 0-1 integer variables (j ranging over cities), and K a large number (L plus the number of cities will do, since an investor's preference sum can never drop below minus the number of cities).
Then your problem is to minimize sum(s[i]) subject to, for each i, sum over j of P[i, j]*c[j] + s[i]*K >= L. The value of sum(s[i]) in the optimal solution is the number of dissatisfied investors, and c[j] indicates whether to build a pool in city j.
This formulation of the problem is in a standard form for ILPs, so you're good to go.
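If it helps, here is a sketch of that model in Python using the PuLP modelling library (my choice of front end; GLPK's MathProg or any other ILP interface would express the same thing, and the function name solve_pools is hypothetical):
import pulp

def solve_pools(P, L):
    # P: N x M preference matrix with entries in {-1, 0, 1}; L: acceptance level.
    N, M = len(P), len(P[0])
    K = L + M                          # the "large number": L plus the number of cities suffices
    prob = pulp.LpProblem("pools", pulp.LpMinimize)
    s = [pulp.LpVariable("s_%d" % i, cat="Binary") for i in range(N)]   # 1 = investor i given up on
    c = [pulp.LpVariable("c_%d" % j, cat="Binary") for j in range(M)]   # 1 = build a pool in city j
    prob += pulp.lpSum(s)              # minimize the number of dissatisfied investors
    for i in range(N):
        prob += pulp.lpSum(P[i][j] * c[j] for j in range(M)) + K * s[i] >= L
    prob.solve()
    convinced = N - int(sum(v.value() for v in s))
    cities = [j for j in range(M) if c[j].value() > 0.5]
    return convinced, cities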

Generate list of real values, which sum up to fixed value and satisfy some constraints

I need to generate n random real values P[0], P[1], ..., P[n-1] which satisfy the following constraints:
Pmin[0] <= P[0] <= Pmax[0]
Pmin[1] <= P[1] <= Pmax[1]
...
Pmin[n-1] <= P[n-1] <= Pmax[n-1]
P[0] + P[1] + ... + P[n-1] = S
Any idea how to do this efficiently?
In general, this problem cannot be solved by simply choosing each element uniformly at random from its given range.
Example 1: Say that Pmin[i] = 0 and Pmax[i] = 1. Say that n = 10 and S = 100. Then there is no solution, since the greatest possible sum is 10.
Example 2: Say that Pmin[i] = 0 and Pmax[i] = 1. Say that n = 10 and S = 10. Then there is exactly one solution: choose P[i] = 1.
It is possible to write an algorithm such that the resulting sequence is chosen uniformly at random from the set of possible solutions; this is quite different from saying that the P[i] are uniformly distributed between Pmin[i] and Pmax[i].
The basic idea is to, at each stage, further restrict your range, as follows:
The beginning of the range ought to be the larger of the two quantities Pmin[i] and S - Smax[i] - P, where Smax[i] is the sum Pmax[i+1] + ... + Pmax[n-1] and P is the sum of the values already chosen, P[0] + ... + P[i-1]. This guarantees that you're picking a number large enough to eventually work.
The end of the range ought to be the smaller of the two quantities Pmax[i] and S - Smin[i] - P, where Smin[i] is the sum Pmin[i+1] + ... + Pmin[n-1] and P is as before. This guarantees that you're picking a number small enough to eventually work.
If you are able to obey those rules when picking each P[i], there's a solution, and you will find one at random. Otherwise, there is not a solution.
Note that to actually make this select solutions at random, it's probably best to shuffle the indices, perform this algorithm, and then rearrange the sequence so that it's in the proper order. You can shuffle in O(n), do this algorithm (recommend dynamic programming here, since you can build solutions bottom-up) and then spit out the sequence by "unshuffling" the resulting sequence.
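A Python sketch of that range-restriction idea might look like this (note it draws each P[i] uniformly from its feasible sub-range, which, as said above, is not the same as sampling uniformly from the set of all solutions):
import random

def random_partition(pmin, pmax, S):
    n = len(pmin)
    if not (sum(pmin) <= S <= sum(pmax)):
        return None                                   # no solution exists
    # suffix sums of the remaining lower/upper bounds
    smin = [0.0] * (n + 1)
    smax = [0.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        smin[i] = smin[i + 1] + pmin[i]
        smax[i] = smax[i + 1] + pmax[i]
    P, total = [], 0.0
    for i in range(n):
        lo = max(pmin[i], S - total - smax[i + 1])    # large enough to still reach S
        hi = min(pmax[i], S - total - smin[i + 1])    # small enough not to overshoot S
        x = random.uniform(lo, hi)
        P.append(x)
        total += x
    return P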
For every i, assign P[i] := Pmin[i]
Compute the sum
If sum > S, then stop (it's impossible)
For every i:
    If P[i] + S - sum <= Pmax[i]:
        P[i] = P[i] + S - sum
        Stop (it's done :-)
    sum = sum + Pmax[i] - P[i]
    P[i] = Pmax[i]
    Go for next i
Stop (it's impossible)
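A direct Python transcription of that pseudocode, for reference (the function name is mine):
def fill_to_sum(pmin, pmax, S):
    # Deterministic fill: raise each value toward its maximum until the total hits S.
    P = list(pmin)
    total = sum(P)
    if total > S:
        return None                       # impossible: even the minima overshoot S
    for i in range(len(P)):
        if P[i] + S - total <= pmax[i]:
            P[i] += S - total             # this element absorbs the remaining slack
            return P
        total += pmax[i] - P[i]
        P[i] = pmax[i]                    # max this element out and keep going
    return None                           # impossible: even the maxima fall short of S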
Oops, sorry, you said random... that's not so trivial. Let me think about it...
Run the previous algorithm to have a starting point. Now compute the total margin above and below. The margin above is the sum of individual margins Pmax[i]-P[i] for every i. The margin below is the sum of individual margins P[i]-Pmin[i] for every i.
Traverse all the elements but one in a random order, visiting each one of them exactly once. For every one of them:
Update the margin above and the margin below subtracting from them the contribution of the current element.
Establish a min and max for the current value taking into account that:
They must be in the interval [Pmin[i], Pmax[i]] AND
These min and max are near enough to P[i], so that changing other elements later can compensate changing P[i] to this min or max (that's what the margins above and below indicate)
Change P[i] to a random value in the calculated interval [min, max] and update the sum and the margins (I'm not 100% sure of how the margins should be updated here...)
Then adjust the remaining element to fit the sum S.
Regarding the traversal in random order, see the Knuth shuffle.

Algorithm: Find peak in a circle

Given n integers, arranged in a circle, show an efficient algorithm that can find one peak. A peak is a number that is not less than the two numbers next to it.
One way is to go through all the integers and check each one to see whether it is a peak. That yields O(n) time. It seems like there should be some way to divide and conquer to be more efficient though.
EDIT
Well, Keith Randall proved me wrong. :)
Here's Keith's solution implemented in Python:
def findPeak(aBase):
    N = len(aBase)
    def a(i): return aBase[i % N]
    i = 0
    j = N // 3
    k = (2 * N) // 3
    if a(j) >= a(i) and a(j) >= a(k):
        lo, candidate, hi = i, j, k
    elif a(k) >= a(j) and a(k) >= a(i):
        lo, candidate, hi = j, k, i + N
    else:
        lo, candidate, hi = k, i + N, j + N
    # Loop invariants:
    #   a(lo) <= a(candidate)
    #   a(hi) <= a(candidate)
    while lo < candidate - 1 or candidate < hi - 1:
        checkRight = True
        if lo < candidate - 1:
            mid = (lo + candidate) // 2
            if a(mid) >= a(candidate):
                hi = candidate
                candidate = mid
                checkRight = False
            else:
                lo = mid
        if checkRight and candidate < hi - 1:
            mid = (candidate + hi) // 2
            if a(mid) >= a(candidate):
                lo = candidate
                candidate = mid
            else:
                hi = mid
    return candidate % N
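A quick way to sanity-check the returned index (the example values are arbitrary):
vals = [3, 1, 4, 1, 5, 9, 2, 6]
p = findPeak(vals)
n = len(vals)
# the returned index is a peak: not less than either circular neighbour
assert vals[p] >= vals[(p - 1) % n] and vals[p] >= vals[(p + 1) % n]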
Here's a recursive O(log n) algorithm.
Suppose we have an array of numbers, and we know that the middle number of that segment is no smaller than the endpoints:
A[i] <= A[m] >= A[j]
for i,j indexes into an array, and m=(i+j)/2. Examine the elements midway between the endpoints and the midpoint, i.e. those at indexes x=(3*i+j)/4 and y=(i+3*j)/4. If A[x]>=A[m], then recurse on the interval [i,m]. If A[y]>=A[m], then recurse on the interval [m,j]. Otherwise, recurse on the interval [x,y].
In every case, we maintain the invariant on the interval above. Eventually we get to an interval of size 2 which means we've found a peak (which will be A[m]).
To convert the circle to an array, take 3 equidistant samples and orient yourself so that the largest (or one tied for the largest) is in the middle of the interval and the other two points are the endpoints. The running time is O(log n) because each interval is half the size of the previous one.
I've glossed over the problem of how to round when computing the indexes, but I think you could work that out successfully.
When you say "arranged in a circle", you mean like in a circular linked list or something? From the way you describe the data set, it sounds like these integers are completely unordered, and there's no way to look at N integers and come to any kind of conclusion about any of the others. If that's the case, then the brute-force solution is the only possible one.
Edit:
Well, if you're not concerned with worst-case time, there are slightly more efficient ways to do it. The naive approach would be to look at N[i], N[i-1], and N[i+1] to see if N[i] is a peak, then repeat, but you can do a little better.
While not done:
    If N[i] < N[i+1]:
        i++
    Else:
        If N[i] > N[i-1]:
            Done
        Else:
            i += 2
(Well, not quite that, because you have to deal with the case where N[i]=N[i+1]. But something very similar.)
That will at least keep you from comparing N[i] to N[i+1], adding 1 to i, and then redundantly comparing N[i] to N[i-1]. It's a distinctly marginal gain, though. You're still marching through the numbers, but there's no way around that; jumping blindly is unhelpful, and there's no way to look ahead without taking just as long as doing the actual work would be.

Resources