Coin change implementation issue - algorithm

I ran into an implementation problem when trying to solve this classic problem using DP.
The problem: given a set of coin denominations, count the number of ways of making change for a given amount.
The DP equation is something like the following:
DP[i] += DP[i - coin[j]]
where DP[i] means the number of ways of making change for i.
Here is a straightforward implementation, which is incorrect:
int make_change_wrong(int coin[], int size, int change) {
    vector<int> DP(change + 1, 0);
    DP[0] = 1;
    // outer loop over amounts, inner loop over coins
    for (int i = 1; i <= change; ++i) {
        for (int j = 0; j < size; ++j) {
            if (i - coin[j] >= 0) {
                DP[i] += DP[i - coin[j]];
            }
        }
    }
    return DP[change];
}
Given the input:
int coin[] = {1, 5};
change = 6;
make_change_wrong(coin, 2, 6) returns 3, but the correct answer is 2.
Using the same logic, I rewrote it in a less intuitive way and got the correct answer:
int make_change(int coin[], int size, int change) {
    vector<int> DP(change + 1, 0);
    DP[0] = 1;
    // outer loop over coins, inner loop over amounts
    for (int i = 0; i < size; ++i) {
        for (int j = coin[i]; j <= change; ++j) {
            DP[j] += DP[j - coin[i]];
        }
    }
    return DP[change];
}
This puzzled me a lot because, to me, they're the same thing...
Can someone shed some light on the difference between the two implementations?

Your first algorithm is wrong.
DP[5] = 2: {1,1,1,1,1}, {5}
DP[6] = DP[5] + DP[1] = 3
You are counting {5,1} twice: once as 1+5 and once as 5+1.
EDIT:
The standard trick for this is to keep track of which denominations you are allowed to use:
DP[i, m] = DP[i - coin[m], m] + DP[i, m - 1]
which means the number of ways of making change for amount i using coins in the range [1..m].
The intuition is simple: you either use the m-th denomination at least once, or you don't use it at all.
Your second algorithm does the same trick, but in a really clever way: take the i-th coin and see what amounts of change you can build using it on top of what the previous coins could already build. This avoids over-counting because you never count both {1,5} and {5,1}.
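For concreteness, here is a minimal sketch of that 2D recurrence in the same style as the question's code (the function name and table layout are mine, not from the post):
int make_change_2d(int coin[], int size, int change) {
    // DP[m][i]: number of ways to make amount i using only coin[0..m-1]
    vector<vector<int>> DP(size + 1, vector<int>(change + 1, 0));
    for (int m = 0; m <= size; ++m) DP[m][0] = 1; // one way to make 0: take no coins
    for (int m = 1; m <= size; ++m) {
        for (int i = 1; i <= change; ++i) {
            DP[m][i] = DP[m - 1][i];                // don't use coin[m-1]
            if (i >= coin[m - 1])
                DP[m][i] += DP[m][i - coin[m - 1]]; // use coin[m-1] at least once
        }
    }
    return DP[size][change];
}
With coin[] = {1, 5} and change = 6 this returns 2, matching the second implementation; that implementation is just this table with the m dimension collapsed into a single row.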

This problem is in the interview prep book Cracking the Coding Interview, and the solution given in the book is not optimized at all. It uses plain recursion (no DP), recomputing sub-problems repeatedly, and therefore runs in O(N^3), which is especially ironic given that it appears in the Dynamic Programming chapter.
Here's a very simple working solution (Java) that uses DP and runs in O(N) time for the fixed denominations {1, 5, 10, 25}:
static int numCombos(int n) {
    int[] dyn = new int[n + 1];
    Arrays.fill(dyn, 0); // redundant in Java (arrays are zero-initialized), but harmless
    dyn[0] = 1;
    // one pass per denomination, so each combination is counted exactly once
    for (int i = 1; i <= n; i++) dyn[i] += dyn[i - 1];
    for (int i = 5; i <= n; i++) dyn[i] += dyn[i - 5];
    for (int i = 10; i <= n; i++) dyn[i] += dyn[i - 10];
    for (int i = 25; i <= n; i++) dyn[i] += dyn[i - 25];
    return dyn[n];
}

Please try the following input with your second method:
int coin[5] = {1, 5, 10, 20, 30};
make_change(coin, 5, 30);
It returns 21. Please check my test case.
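(For what it's worth, 21 checks out by hand: 1 way using the 30; 4 ways using the 20, since the remaining 10 can be made as {10}, {5,5}, {5,1×5}, or {1×10}; and 16 ways using only {1, 5, 10}, one for each pair (b, c) of fives and tens with 5b + 10c ≤ 30.)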

Related

Leetcode target sum with dynamic programming

Given n and target, find the number of combinations of numbers from [1, 2, ..., n] adding up to target. A number can be picked repeatedly (1 + 1 + 2 = 4), but combinations must not be duplicated ({1,1,2} and {1,2,1} are regarded as one combination). E.g.
n = 3, target = 4: {1,1,1,1}, {1,1,2}, {1,3}, {2,2}, so return 4
Since we only need to return the number of combinations, we use dynamic programming as follows:
int sum(int n, int target) {
    vector<int> dp(target + 1);
    dp[0] = 1;
    for (int i = 1; i <= target; ++i) {
        for (int j = 1; j <= n; j++) {
            if (i >= j) dp[i] += dp[i - j];
        }
    }
    return dp.back();
}
However, this solution counts ordered duplicates: {1,1,1,1}, {1,1,2}, {1,2,1}, {2,1,1}, {1,3}, {3,1}, {2,2}, so it returns 7.
Do you know how to modify it to remove the duplicates?
A simple modification:
for (int j = 1; j <= n; j++) {
    for (int i = j; i <= target; i++) {
        dp[i] += dp[i - j];
    }
}
Swapping the loops avoids using a smaller value after a larger one, so the code counts each combination only once, in non-decreasing order.
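For completeness, a sketch of the full function with the loops swapped, assuming the same signature as the sum above:
int sum(int n, int target) {
    vector<int> dp(target + 1, 0);
    dp[0] = 1;
    for (int j = 1; j <= n; j++) {        // fix the current largest summand first
        for (int i = j; i <= target; i++) // then extend every reachable amount
            dp[i] += dp[i - j];
    }
    return dp.back();
}
With n = 3 and target = 4 this returns 4, as required.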
A similar question exists with specific coin denominations instead of the values 1..n.

Big-O notation - which one is correct

I am trying to learn Big-O notation. While searching for articles online, I came across two different articles, A and B.
Strictly in terms of loops, they seem to have almost the same kind of flow.
For example,
[A]'s code is as follows (it's done in JS):
function allPairs(arr) {
    var pairs = [];
    for (var i = 0; i < arr.length; i++) {
        for (var j = i + 1; j < arr.length; j++) {
            pairs.push([arr[i], arr[j]]);
        }
    }
    return pairs;
}
[B]'s code is as follows (it's done in C) - the entire code is here:
for (int i = 0; i < n - 1; i++) {
    char min = A[i]; // minimal element seen so far
    int min_pos = i; // memorize its position
    // search for min starting from position i+1
    for (int j = i + 1; j < n; j++)
        if (A[j] < min) {
            min = A[j];
            min_pos = j;
        }
    // swap elements at positions i and min_pos
    A[min_pos] = A[i];
    A[i] = min;
}
The article on site A says the time complexity is O(n²), while the article on site B says it is O(½·n²).
Which one is right?
Thanks
Assuming that O(½·n²) means O((1/2)·n²), the two time complexities are equal. Remember that Big-O notation does not care about constant factors, so both algorithms are O(n²).
You didn't read carefully. Article B says that the algorithm performs about N²/2 comparisons and goes on to explain that this is O(N²).
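The N²/2 figure is just a count of the inner loop's iterations. A quick tally for the selection-sort code above:
(N - 1) + (N - 2) + ... + 1 = N(N - 1)/2 ≈ N²/2 comparisons.
Dropping the constant factor 1/2 and the lower-order term gives O(N²), which is why both articles agree.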

Minimum coins with no DP

public int MinCoins(int[] change, int cents)
{
    Stopwatch sw = Stopwatch.StartNew();
    int coins = 0;
    int cent = 0;
    int finalCount = cents;
    // greedy: starting from each index i, take as many of each subsequent coin
    // as possible (assumes denominations are sorted in descending order)
    for (int i = change.Length - 1; i >= 0; i--)
    {
        cent = cents;
        for (int j = i; j <= change.Length - 1; j++)
        {
            coins += cent / change[j];
            cent = cent % change[j];
            if (cent == 0) break;
        }
        if (coins < finalCount)
        {
            finalCount = coins;
        }
        coins = 0;
    }
    sw.Stop();
    var elapsedMs = sw.Elapsed.ToString();
    Console.WriteLine("time for non dp " + elapsedMs);
    return finalCount;
}
public int MinCoinsDp(int[] change, int cents)
{
    Stopwatch sw = Stopwatch.StartNew();
    int[] minCoins = new int[cents + 1];
    for (int i = 1; i <= cents; i++)
    {
        minCoins[i] = 99999; // sentinel for "not reachable yet"
        for (int j = 0; j < change.Length; j++)
        {
            if (i >= change[j])
            {
                int n = minCoins[i - change[j]] + 1;
                if (n < minCoins[i])
                    minCoins[i] = n;
            }
        }
    }
    sw.Stop();
    var elapsedMs = sw.Elapsed.ToString();
    Console.WriteLine("time for dp " + elapsedMs);
    return minCoins[cents];
}
I have written a minimum-number-of-coins program using both an iterative approach and dynamic programming. I have seen a lot of blogs discussing DP for this problem. The iterative solution has running time O(numberOfCoins * numberOfCoins) and the DP has O(numberOfCoins * arraySize), roughly the same. Which one is better? Please also suggest a good book on advanced algorithms.
Please run it with the denominations in descending order, {v1 > v2 > v3 > v4}, e.g. {25, 10, 5}.
I see that you're trying to measure the running times of both algorithms and decide which one is better.
Well, there is a more important issue with your algorithms: the first one is unfortunately incorrect. Consider the following input:
suppose we want to exchange 100, and the available coins have the denominations 5, 6, 90, 96. The best we can do is use 3 coins: 5, 5, 90. However, your solution returns 1.
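To see why, trace the greedy pass that starts at the largest coin, 96 (assuming the descending order the code expects): 100 / 96 = 1 coin with remainder 4, and none of 90, 6, 5 fit into 4, so the pass ends with 1 coin and 4 left unpaid. The code never checks that the remainder reached 0, so it records finalCount = 1 even though no valid change was produced; the DP version correctly returns 3 (90 + 5 + 5).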

Max sum in an array with constraints

I have this problem: given an array of positive numbers, I have to find the maximum sum of elements such that no two adjacent elements are picked. The maximum has to be less than a certain given K. I tried thinking along the lines of the similar problem without the K, but I have failed so far. I have the following DP-ish solution for the latter problem:
int sum1, sum2 = 0;
int sum = sum1 = a[0];
for (int i = 1; i < n; i++)
{
    sum = max(sum2 + a[i], sum1); // either take a[i] (and skip a[i-1]) or don't
    sum2 = sum1;
    sum1 = sum;
}
Could someone give me tips on how to proceed with my present problem?
The best I can think of off the top of my head is an O(n*K) dp:
int sums[n][K+1] = {{0}};
int i, j;
for (j = a[0]; j <= K; ++j) {
    sums[0][j] = a[0];
}
if (a[1] > a[0]) {
    for (j = a[0]; j < a[1]; ++j) {
        sums[1][j] = a[0];
    }
    for (j = a[1]; j <= K; ++j) {
        sums[1][j] = a[1];
    }
} else {
    for (j = a[1]; j < a[0]; ++j) {
        sums[1][j] = a[1];
    }
    for (j = a[0]; j <= K; ++j) {
        sums[1][j] = a[0];
    }
}
for (i = 2; i < n; ++i) {
    for (j = 0; j <= K && j < a[i]; ++j) {
        sums[i][j] = max(sums[i-1][j], sums[i-2][j]);
    }
    for (j = a[i]; j <= K; ++j) {
        sums[i][j] = max(sums[i-1][j], a[i] + sums[i-2][j-a[i]]);
    }
}
sums[i][j] contains the maximal sum of non-adjacent elements of a[0..i] not exceeding j. The solution is then sums[n-1][K] at the end.
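A small sanity check of those definitions: with a = {3, 5, 4} and K = 8, the table ends with sums[2][8] = 7, choosing the non-adjacent pair 3 and 4 (taking 5 alone only gives 5, and 3 + 4 = 7 does not exceed 8).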
0. Make a copy (A2) of the original array (A1).
1. Find the largest value in the array (A2).
2. Extract all values before its preceding neighbour and all values after its next neighbour into a new array (A3).
3. Find the largest value in the new array (A3).
4. Check the sum of the values found in steps 1 and 3 against k. If the sum passes the check, you are done.
5. If not, go back to the copied array (A2), remove the second largest value (the one found in step 3), and start over from step 3.
6. Once there are no combinations of numbers that can be used with the largest number (i.e. the number found in step 1 plus any other number in the array is larger than k), remove it from the original array (A1) and start over from step 0.
7. If for some reason there are no valid combinations (e.g. the array has only three numbers, or no combination of numbers is lower than k), throw an exception, or return null if that seems more appropriate.
First idea: brute force.
Iterate all legal combinations of indexes and build the sum on the fly.
Stop with a sequence as soon as it gets over K.
Keep the best sequence until you find a larger one that is still smaller than K.
Second idea: maybe one can force this into a divide-and-conquer scheme ...
Here is a solution to the problem without the "k" constraint, which you set out to solve as the first step: https://stackoverflow.com/a/13022021/1110808
The above solution can, in my view, easily be extended to honour the k constraint by simply amending the if condition in the following for loop to include the constraint possibleMax < k:
// Subproblem solutions, DP
for (int i = start; i <= end; i++) {
    int possibleMaxSub1 = maxSum(a, i + 2, end);
    int possibleMaxSub2 = maxSum(a, start, i - 2);
    int possibleMax = possibleMaxSub1 + possibleMaxSub2 + a[i];
    /*
    if (possibleMax > maxSum) {
        maxSum = possibleMax;
    }
    */
    // only accept candidates that respect the k constraint
    if (possibleMax > maxSum && possibleMax < k) {
        maxSum = possibleMax;
    }
}
As posted in the original link, this approach can be improved by adding memoization so that solutions to repeated sub-problems are not recomputed, or by using a bottom-up dynamic programming approach (the current approach is a recursive, top-down one).
You can refer to a bottom up approach here: https://stackoverflow.com/a/4487594/1110808

Are these 2 knapsack algorithms the same? (Do they always output the same thing)

In my code, assuming C is the capacity, N is the number of items, w[j] is the weight of item j, and v[j] is the value of item j, does it do the same thing as the 0-1 knapsack algorithm? I've been trying my code on some data sets, and it seems to be the case. The reason I'm wondering is that the 0-1 knapsack algorithm we've been taught is 2-dimensional, whereas this is 1-dimensional:
for (int j = 0; j < N; j++) {
    if (C - w[j] < 0) continue;
    for (int i = C - w[j]; i >= 0; --i) { // loop backwards to prevent double counting
        dp[i + w[j]] = max(dp[i + w[j]], dp[i] + v[j]); // looping fwd is for the unbounded problem
    }
}
printf("max value without double counting (loop backwards) %d\n", dp[C]);
Here is my implementation of the 0-1 knapsack algorithm (with the same variables):
for (int i = 0; i < N; i++) {
    for (int j = 0; j <= C; j++) {
        if (j - w[i] < 0) dp2[i][j] = i == 0 ? 0 : dp2[i-1][j];
        else dp2[i][j] = max(i == 0 ? 0 : dp2[i-1][j], dp2[i-1][j - w[i]] + v[i]);
    }
}
printf("0-1 knapsack: %d\n", dp2[N-1][C]);
Yes, your algorithm gets you the same result. This space optimization of the classic 0-1 knapsack is reasonably popular; Wikipedia explains it as follows:
Additionally, if we use only a 1-dimensional array m[w] to store the current optimal values and pass over this array i + 1 times, rewriting from m[W] to m[1] every time, we get the same result for only O(W) space.
Note that they specifically mention your backward loop.
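To see why the loop direction matters, here is a small self-contained toy of my own (not from the question): a single item of weight 2 and value 3 with capacity 4. The forward loop reuses the item, which solves the unbounded problem; the backward loop uses it at most once.
#include <algorithm>
#include <cstdio>
using std::max;

int main() {
    const int C = 4, w = 2, v = 3;
    int fwd[C + 1] = {0}, bwd[C + 1] = {0};
    for (int i = 0; i + w <= C; ++i)   // forward: fwd[i] may already include this item
        fwd[i + w] = max(fwd[i + w], fwd[i] + v);
    for (int i = C - w; i >= 0; --i)   // backward: bwd[i] cannot include this item yet
        bwd[i + w] = max(bwd[i + w], bwd[i] + v);
    printf("forward %d, backward %d\n", fwd[C], bwd[C]); // prints: forward 6, backward 3
    return 0;
}
The forward pass reports 6 (two copies of the single item), while the backward pass correctly reports 3.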
