Does this algorithm cover all cases for finding minimum coin change for a sum?

I am trying to solve the minimum coin change problem. The question is:
Given a value V, if we want to make change for V cents, and we have infinite supply of each of C = { C1, C2, .. , Cm} valued coins, what is the minimum number of coins to make the change?
My suggested algorithm is:
Start with an array arr[1..V], where V is the value.
1. For every denomination d, initialise arr[d] = 1, as this is the base case: if the value equals the denomination of a coin, only 1 coin is required, and that is clearly the least.
2. For each value i from 1 to V, compute the minimum number of coins required to make change for the value i.
2.1. This can be done by:
For all j in 1..(i-1):
arr[i] = min(arr[i], arr[j] + arr[i-j]);
3. Return arr[V].
Is this logic flawed or does it cover all cases?
Most DP solutions use a 2-D array, and I don't understand why they would use O(n^2) memory if this approach exists and is correct.
Thank you.

How about cases where some value V can't be obtained?
E.g. if we have coins {5,6,7,8,9}, we can't make the values 1, 2, 3, or 4. You should initialise every cell whose value is not a denomination to an infinity constant or something similar.
Now for the reason most people use O(n^2) memory:
This problem comes in various flavours, the most common being that you can use each coin only once. In that case, the state dp[i][j] = minimum number of coins that sum to j after considering the first i coins seems easier for most people to understand, even though this too can be done with O(n) memory (just by looping backwards).
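Below is a minimal Python sketch of the repaired 1-D DP with that infinity initialisation (function name and the -1 sentinel are just for illustration). It uses the standard arr[i-c] + 1 transition; the question's arr[j] + arr[i-j] split also works with the same initialisation, but costs O(V^2) rather than O(V*m):

def min_coins(coins, V):
    INF = float('inf')
    arr = [0] + [INF] * V                 # arr[0] = 0; unreachable values stay INF
    for i in range(1, V + 1):
        for c in coins:
            if c <= i and arr[i - c] + 1 < arr[i]:
                arr[i] = arr[i - c] + 1   # best way to make i-c, plus one coin c
    return arr[V] if arr[V] != INF else -1  # -1: V cannot be made at all

print(min_coins([5, 6, 7, 8, 9], 4))   # -1, unreachable as discussed above
print(min_coins([1, 5, 10], 18))       # 5  (10 + 5 + 1 + 1 + 1)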


Find minimum steps to convert all elements to zero

You are given an array of positive integers of size N. You can choose any positive number x such that x <= max(Array) and subtract it from all elements of the array that are greater than or equal to x.
This operation has a cost of A[i] - x for each A[i] >= x, so the total cost of a step is sum(A[i] - x). A step is only valid if sum(A[i] - x) is less than or equal to a given number K.
Over all valid steps, find the minimum number of steps needed to make all elements of the array zero.
0 <= i < 10^5
0 <= x <= 10^5
0 < K < 10^5
Can anybody help me with any approach? DP will not work due to high constraints.
Just some general exploratory thoughts.
First, there should be a constraint on N. If N is 3, this is much easier than if it is 100. The naive brute-force approach is going to be O(k^N).
Next, you are right that DP will not work with these constraints.
For a greedy approach, I would want to minimize the number of distinct non-zero values, not maximize how much I take out. The worst-case approach is to take out the largest value each time, for N steps. If you can get 2 pairs of entries to match, that shortens the approach.
The obvious thing to try, if you can, is an A* search. However, that requires a LOWER bound (not an upper one). The best naive lower bound that I can see is ceil(log_2(count_distinct_values)). Unless you're incredibly lucky and the problem can be solved that quickly, this is unlikely to narrow your search enough to be helpful.
I'm curious what trick makes this problem actually doable.
I do have an idea, but it is going to take some thought to make it work. Naively, we want to take each choice of x and explore the paths that way. This is a problem because there are 10^5 choices for x: after 2 choices we are in trouble, and after 3 we are definitely not going to manage.
BUT instead consider the possible orderings of the array elements (with ties both possible and encouraged) and the resulting inequalities on the range of choices that could have been made. Now, instead of having to store 10^5 choices of x, we only need to store the distinct orderings we get, and what inequalities hold on the ranges of choices that get us there. As long as N < 10, the number of weak orderings is something we can deal with if we're clever.
It would take a bunch of work to flesh out this idea though.
I may be totally wrong, and if so, please tell me and I will delete my thoughts: maybe there is an opportunity if we translate the problem into another form?
You are given an array A of positive integers of size N.
Calculate the histogram H of this array.
The highest populated slot of this histogram has index m ( == max(A)).
Find the shortest sequence of selections of x for:
Select an index x <= m which satisfies sum(H[i] * (i - x)) <= K for i = x+1 .. m (the search for a suitable x starts at m and moves down).
Add H[x .. m] onto H[0 .. m-x].
Set the new m to the highest populated index in H[0 .. x-1] (we ignore everything from H[x] up).
Repeat until m == 0.
If only a "good" but not necessarily optimal solution is sought, I could imagine that some kind of spectral analysis of H could hint at favourable selections of x, so that maxima in the histogram pile onto other maxima in the reduction step.
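To make the reformulation concrete, here is a hedged Python sketch of the reduction loop with one simple greedy rule for choosing x (the smallest affordable x, found by scanning down from m, where x = m is always affordable at cost 0). It illustrates the procedure only; it is not claimed to minimise the number of steps:

from collections import Counter

def min_steps_greedy(A, K):
    H = Counter(A)
    steps = 0
    m = max(A)
    while m > 0:
        # cost of choosing x = sum of H[v] * (v - x) over all values v >= x
        def cost(x):
            return sum(c * (v - x) for v, c in H.items() if v >= x)
        x = m                             # x = m always costs 0, so a step exists
        while x > 1 and cost(x - 1) <= K:
            x -= 1                        # keep the smallest x still within budget
        # apply the step: every element v >= x becomes v - x
        new_H = Counter()
        for v, c in H.items():
            new_H[v - x if v >= x else v] += c
        H = new_H
        steps += 1
        m = max((v for v, c in H.items() if v > 0 and c > 0), default=0)
    return steps

print(min_steps_greedy([5, 6, 7, 8, 9], K=10))   # 5 with this greedy rule

On this example the greedy takes 5 steps, while the sequence x = 5, 2, 1, 1 achieves 4 valid steps, so the real difficulty lies in the selection rule for x, as the spectral-analysis remark above suggests.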

Dynamic Programming - Coin Change Problem

I am solving the following problem from HackerRank:
https://www.hackerrank.com/challenges/coin-change/problem
I'm unable to solve the problem, so I looked at the editorial, which mentions
T(i, m) = T(i, m-i) + T(i+1, m)
I'm unable to get the big picture of why this solution works at a higher level (like a proof in CLRS, or a simple, understandable example).
The solution I have written is as follows:
fun(m) {
    // base cases
    count = 0;
    for (i in 1..n) {
        count += fun(m - i);
    }
    return count;
}
My solution didn't work because there are duplicate calls. But how does the editorial's recurrence work, and what is the difference between my solution and the editorial at a higher level?
I think in order for this to work you have to clearly define what T is. Namely, let's define T(i,m) to be the number of ways to make change for m units using only coins with index at least i (i.e. we only look at the ith coin, the (i+1)th coin, and so on up to the last coin, neglecting all coins before index i). Further, we define an array C such that C[i] is the value of the ith coin (note that in general C[i] is not the same as i). As a result, if there are n coins (i.e. the length of C is n) and we want to make change for W units, we are looking for the value T(0, W) as our answer (make sure you can see why this is the case at this point!).
Now, we proceed by constructing a recursive definition of T(i,m). Note that our solution will either contain an additional ith coin or it won't. In the case that it does, our new target will simply be m - C[i] and the number of ways to make change for this is T(i,m - C[i]) (since our new target is now C[i] less than m). In another case, our solution doesn't contain the ith coin. In this case, we keep the target value the same, but only consider coins with index greater than i. Namely, the number of ways to make change in this case is T(i+1,m). Since these cases are disjoint and exhaustive (either you put the ith coin in the solution or you don't!), we have that
T(i,m) = T(i, m-C[i]) + T(i+1,m)
which is very similar to what you had (the C[i] difference is important). Note the base cases: if m < 0 there are 0 ways (coin values are positive, so we overshot), if m == 0 there is exactly 1 way (take no more coins), and if i runs past the last coin while m > 0 there are 0 ways. You must keep these base cases in mind when computing T(i,m).
Now it remains to compute T(0, W), which you can easily do recursively. However, you likely noticed that a lot of the subproblems are repeated, making this a slow solution. The fix is dynamic programming or memoization. Memoization: whenever a value is computed, add it to a table (e.g. T[i][m], where T is an n x (W+1) 2-D array), and whenever you are about to compute something recursively, check the table first so you never compute the same thing twice. Dynamic programming is similar, except you use a little foresight to compute things in the order in which they will be needed: for example, compute the base-case column T[., 0] first, and then fill in the values bordering it using the recursive definition.
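Here is a minimal memoized Python sketch of exactly this recurrence, with functools.lru_cache serving as the table and C, W as defined above (the function names are just for illustration):

from functools import lru_cache

def count_ways(C, W):
    n = len(C)

    @lru_cache(maxsize=None)              # memoization: each (i, m) computed once
    def T(i, m):
        if m == 0:
            return 1                      # exactly one way: take no more coins
        if m < 0 or i == n:
            return 0                      # overshot, or ran out of coin types
        return T(i, m - C[i]) + T(i + 1, m)   # use another coin i, or skip it for good

    return T(0, W)

print(count_ways((1, 2, 3), 4))   # 4: 1+1+1+1, 1+1+2, 2+2, 1+3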

Algorithm: Coin Change - Counting the number of ways one can give change

I am trying to understand the DP behind the coin change problem, where one is supposed to count the number of ways to give change for a target value given a set of coins. Each coin is available an infinite number of times.
The algorithm is taken from this GeeksforGeeks page. It is the following (where N stands for the target value and dp is an array of size N+1):
dp[0] = 1
for each coin c:
    for i from c to N:
        if i >= c:
            dp[i] += dp[i - c]
I am not able to understand how the DP works here and what the subproblems are.
Edit: I checked other related questions, but none of them mentions the algorithm stated above; they discuss a 2-D DP solution instead.
You can consider this approach as solving C subproblems, where C is the number of coins.
Subproblem c is: "Which values can be made from coins up to and including coin c, without using any coin of greater value?"
The base subproblem is then "Which values can be made from no coins?", to which the answer is just the value 0.
To work out each additional subproblem, we iterate through the array, marking a value as achievable if it consists of some value reachable with the previous coins plus some number of coins of the current value.
Because we update the DP array in place, it turns out we only ever have to consider adding one coin of the current value at each position in the array.
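To see the subproblems concretely, here is a small Python sketch that prints dp after each coin is processed; the algorithm counts ways rather than mere feasibility, but the subproblem structure ("coins up to and including c") is exactly the one described above:

def count_change(coins, N):
    dp = [0] * (N + 1)
    dp[0] = 1                       # base subproblem: one way to make 0 with no coins
    for c in coins:
        for i in range(c, N + 1):
            dp[i] += dp[i - c]      # a way to make i-c, extended by one coin c
        print("after coin", c, ":", dp)   # dp now answers subproblem "coins up to c"
    return dp[N]

count_change([1, 2, 3], 4)
# after coin 1 : [1, 1, 1, 1, 1]
# after coin 2 : [1, 1, 2, 2, 3]
# after coin 3 : [1, 1, 2, 3, 4]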

Knapsack with unique elements

I'm trying to solve the following:
The knapsack problem is as follows: given a set of integers S={s1,s2,…,sn}, and a given target number T, find a subset of S that adds up exactly to T. For example, within S={1,2,5,9,10} there is a subset that adds up to T=22 but not T=23. Give a correct programming algorithm for knapsack that runs in O(nT) time.
but the only algorithm I could come up with is generating all subsets and testing their sums (exponential time).
I can't devise a dynamic programming solution, since the fact that I can't reuse an element makes this problem different from the coin change problem and from the general (unbounded) knapsack problem.
Can somebody help me out with this or at least give me a hint?
The O(nT) running time gives you the hint: do dynamic programming on two axes. That is, let f(a,b) denote the maximum sum <= b which can be achieved with the first a integers.
f satisfies the recurrence
f(a,b) = max( f(a-1,b), f(a-1,b-s_a)+s_a )
since the first value is the maximum without using s_a and the second is the maximum including s_a. From here the DP algorithm should be straightforward, as should outputting the correct subset of S.
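For illustration, here is the boolean (reachability) form of this DP in Python, collapsed to a single array over the b axis; iterating b downwards is what guarantees each integer is used at most once, which is exactly the "unique elements" constraint:

def subset_sum(S, T):
    reachable = [False] * (T + 1)
    reachable[0] = True                   # the empty subset sums to 0
    for s in S:
        for b in range(T, s - 1, -1):     # downwards, so each s is used at most once
            if reachable[b - s]:
                reachable[b] = True
    return reachable[T]

print(subset_sum([1, 2, 5, 9, 10], 22))   # True  (1 + 2 + 9 + 10)
print(subset_sum([1, 2, 5, 9, 10], 23))   # False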
I did find a solution, but with O(T * n^2) time complexity, by filling a table bottom-up. In other words, sort the array, start with the greatest number available, and build a table where the columns are the target values and the rows are the provided numbers. We then need to consider the sum of all possible ways of making i - cost[j] + j, which takes n^2 time per target, multiplied by the target T.

Revisit: 2D Array Sorted Along X and Y Axis

So, this is a common interview question. There's already a topic up, which I have read, but it's dead, and no answer was ever accepted. On top of that, my interests lie in a slightly more constrained form of the question, with a couple practical applications.
Given a two dimensional array such that:
Elements are unique.
Elements are sorted along the x-axis and the y-axis.
Neither sort predominates, so neither sort is a secondary sorting parameter.
As a result, the diagonal is also sorted.
All of the sorts can be thought of as moving in the same direction. That is to say that they are all ascending, or that they are all descending.
Technically, I think as long as you have a >/=/< comparator, any total ordering should work.
Elements are numeric types, with a single-cycle comparator.
Thus, memory operations are the dominating factor in a big-O analysis.
How do you find an element? Only worst case analysis matters.
Solutions I am aware of:
A variety of approaches that are:
O(n log n), where you approach each row separately.
O(n log n) with strong best- and average-case performance.
One that is O(n+m):
Start in a non-extreme corner, which we will assume is the bottom right.
Let the target be J and the current position be M.
If M is greater than J, move left.
If M is less than J, move up.
If you can do neither, you are done, and J is not present.
If M is equal to J, you are done.
Originally found elsewhere, most recently stolen from here.
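Here is a Python sketch of that O(n+m) corner walk, written for the common convention that rows and columns both ascend with increasing index, so the natural non-extreme starting corner is the top-right (the mirror image of the bottom-right start described above):

def staircase_search(matrix, target):
    if not matrix or not matrix[0]:
        return None
    i, j = 0, len(matrix[0]) - 1          # top-right corner
    while i < len(matrix) and j >= 0:
        cur = matrix[i][j]
        if cur == target:
            return (i, j)
        if cur > target:
            j -= 1                        # everything below in column j is even larger
        else:
            i += 1                        # everything left in row i is even smaller
    return None                           # walked off the matrix: target absent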
And I believe I've seen one with a worst case of O(n+m) but an optimal case of nearly O(log n).
What I am curious about:
Right now, I have proved to my satisfaction that a naive partitioning attack always devolves to n log(n). Partitioning attacks in general appear to have an optimal worst case of O(n+m), and most do not terminate early when the element is absent. I was also wondering whether an interpolation probe might be better than a binary probe, and it occurred to me that one might think of this as a set-intersection problem with a weak interaction between the sets. My mind cast immediately towards Baeza-Yates intersection, but I haven't had time to draft an adaptation of that approach. However, given my suspicion that the optimality of an O(n+m) worst case is provable, I thought I'd go ahead and ask here, to see if anyone can bash together a counter-argument or pull together a recurrence relation for interpolation search.
Here's a proof that it has to be at least Omega(min(n,m)). Let n >= m. Then consider the matrix which has all 0s at (i,j) where i+j < m, all 2s where i+j >= m, except for a single (i,j) with i+j = m which has a 1. This is a valid input matrix, and there are m possible placements for the 1. No query into the array (other than the actual location of the 1) can distinguish among those m possible placements. So you'll have to check all m locations in the worst case, and at least m/2 expected locations for any randomized algorithm.
One of your assumptions was that matrix elements are unique, and my construction doesn't satisfy that. It is easy to fix, however: pick a big number X = n*m, replace all 0s with distinct numbers less than X, all 2s with distinct numbers greater than X, and the 1 with X.
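Concretely, for n = 4 rows and m = 3 columns, the construction (before the uniqueness fix) looks like:
0 0 0
0 0 2
0 2 2
2 2 2
with the single 1 free to replace any of the m = 3 cells (1,2), (2,1), (3,0) on the anti-diagonal i + j = 3; both sort orders are preserved in every case, and no query elsewhere can tell those placements apart.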
And because it is also Omega(lg n) (counting argument), it is Omega(m + lg n) where n>=m.
An optimal O(m+n) solution is to start at the top-left corner, which holds the minimal value. Move diagonally down and to the right until you hit an element whose value >= the value of the given element. If that element's value equals the given element's value, return found as true.
Otherwise, from here we can proceed in two ways.
Strategy 1:
Move up in the column and search for the given element until we reach the end. If found, return found as true
Move left in the row and search for the given element until we reach the end. If found, return found as true
return found as false
Strategy 2:
Let i denote the row index and j the column index of the diagonal element we stopped at (here i = j, by the way). Let k = 1.
Repeat the steps below while i - k >= 0:
Check whether a[i-k][j] equals the given element. If yes, return found as true.
Check whether a[i][j-k] equals the given element. If yes, return found as true.
Increment k.
For example, both strategies can be traced on a matrix such as:
1 2 4 5 6
2 3 5 7 8
4 6 8 9 10
5 8 9 10 11
