I understand that there are mainly two approaches to dynamic programming solutions:
Fixed optimal order of evaluation (let's call it the Foo approach): the Foo approach usually goes from subproblems to bigger problems, using results obtained earlier for the subproblems to solve the bigger problems, thus avoiding "revisiting" a subproblem. CLRS also seems to call this the "Bottom Up" approach.
Without a fixed optimal order of evaluation (let's call it the Non-Foo approach): here evaluation proceeds from problems to their subproblems. It ensures that subproblems are not "re-evaluated" (thus ensuring optimality) by maintaining the results of past evaluations in some data structure, and checking whether the result for the problem at hand already exists in that data structure before starting its evaluation. CLRS seems to call this the "Top Down" approach.
This is what is roughly conveyed as one of the main points by this answer.
I have the following doubts:
Q1. Memoization or not?
CLRS uses the terms "top down with memoization" approach and "bottom up" approach. I feel both approaches require memory to cache the results of subproblems. But then why does CLRS use the term "memoization" only for the top down approach and not for the bottom up approach? After solving some problems with DP, I feel that top down solutions for all problems require memory to cache the results of all subproblems. However, that is not the case with the bottom up approach: bottom up solutions for some problems do not need to cache the results of all subproblems. Q1. Am I correct with this?
For example consider this problem:
Given cost[i] being the cost of ith step on a staircase, give the minimum cost of reaching the top of the floor if:
you can climb either one or two steps
you can start from the step with index 0, or the step with index 1
The top down approach solution is as follows:
class Solution:
    def minCostAux(self, curStep, cost):
        # Check the base case before the memo lookup: with curStep == -1,
        # self.minCosts[curStep] would read the *last* slot of the memo
        # through Python's negative indexing.
        if curStep == -1:
            return 0
        if self.minCosts[curStep] > -1:
            return self.minCosts[curStep]
        if curStep == 0:
            self.minCosts[curStep] = cost[0]
        else:
            self.minCosts[curStep] = min(self.minCostAux(curStep-2, cost) + cost[curStep],
                                         self.minCostAux(curStep-1, cost) + cost[curStep])
        return self.minCosts[curStep]

    def minCostClimbingStairs(self, cost) -> int:
        cost.append(0)
        self.minCosts = [-1] * len(cost)
        return self.minCostAux(len(cost)-1, cost)
The bottom up approach solution is as follows:
class Solution:
    def minCostClimbingStairs(self, cost) -> int:
        cost.append(0)
        secondLastMinCost = cost[0]
        lastMinCost = min(cost[0]+cost[1], cost[1])
        minCost = lastMinCost
        for i in range(2, len(cost)):
            minCost = min(lastMinCost, secondLastMinCost) + cost[i]
            secondLastMinCost = lastMinCost
            lastMinCost = minCost
        return minCost
Note that the top down approach caches the results of all steps in self.minCosts, while the bottom up approach caches the results of only the last two steps, in the variables lastMinCost and secondLastMinCost.
Q2. Do all problems have solutions by both approaches?
I feel the answer is no. I came to this opinion after solving this problem:
Find the probability that the knight will not go out of an n x n chessboard after k moves, if the knight initially stands in the cell at index (row, column).
I feel the only way to solve this problem is to find successive probabilities for an increasing number of steps starting from cell (row, column): the probability that the knight will not go out of the chessboard after step 1, then after step 2, then after step 3, and so on. This is a bottom up approach. We cannot do it top down; for example, we cannot start with the kth step and go to the (k-1)th step, then the (k-2)th step and so on, because:
We cannot know which cells will be reached in the kth step to start with
We cannot ensure that all paths from the kth step will lead to the initial knight cell position (row, column).
Even one of the top voted answers gives a DP solution as follows:
class Solution {
    private int[][] dir = new int[][]{{-2,-1},{-1,-2},{1,-2},{2,-1},{2,1},{1,2},{-1,2},{-2,1}};
    private double[][][] dp;

    public double knightProbability(int N, int K, int r, int c) {
        dp = new double[N][N][K + 1];
        return find(N, K, r, c);
    }

    public double find(int N, int K, int r, int c) {
        if (r < 0 || r > N - 1 || c < 0 || c > N - 1) return 0;
        if (K == 0) return 1;
        if (dp[r][c][K] != 0) return dp[r][c][K];
        double rate = 0;
        for (int i = 0; i < dir.length; i++)
            rate += 0.125 * find(N, K - 1, r + dir[i][0], c + dir[i][1]);
        dp[r][c][K] = rate;
        return rate;
    }
}
I feel this is still a bottom up approach, since it starts from the initial knight cell position (r, c) (and hence goes from the 0th step to the Kth step) despite the fact that it counts K downwards to 0. So this is a bottom up approach done recursively, not a top down approach. To be precise, this solution does NOT first find:
the probability of the knight not going out of the chessboard after K steps starting at cell (r, c)
and then find:
the probability of the knight not going out of the chessboard after K-1 steps starting at cell (r, c)
but finds them in the reverse / bottom up order: first for K-1 steps and then for K steps.
Also, I did not find any solutions in the top voted leetcode discussions doing it in a truly top down manner, starting from the Kth step and going back to the 0th step, ending in cell (row, column), instead of starting with cell (row, column).
Similarly, we cannot solve the following problem with the bottom up approach but only with the top down approach:
Find the probability that the Knight ends up in the cell at index (row,column) after K steps, starting at any initial cell.
Q2. So am I correct in my understanding that not all problems have solutions by both the top down and bottom up approaches? Or am I just overthinking unnecessarily, and both of the above problems can indeed be solved with both approaches?
PS: I indeed seem to have been overthinking here: the knightProbability() function above is indeed top down, and I misinterpreted it as explained in detail above 😑. I have kept this explanation for reference, as there are already some answers below, and also as a hint of how confusion / misinterpretation might happen, so that I will be more cautious in future. Sorry if this long explanation caused you some confusion / frustration. Regardless, the main question still holds: does every problem have bottom up and top down solutions?
Q3. Bottom up approach recursively?
I am pondering whether bottom up solutions for all problems can also be implemented recursively. After trying to do so for other problems, I came to the following conclusion:
We can implement bottom up solutions for such problems recursively, only the recursion won't be meaningful, but kind of hacky.
For example, below is a recursive bottom up solution for the minimum cost climbing stairs problem mentioned in Q1:
from typing import List

class Solution:
    def minCostAux(self, step_i, cost):
        if self.minCosts[step_i] != -1:
            return self.minCosts[step_i]
        self.minCosts[step_i] = min(self.minCostAux(step_i-1, cost),
                                    self.minCostAux(step_i-2, cost)) + cost[step_i]
        if step_i == len(cost)-1:  # returning from a non-base case gives a sense of
                                   # not-so-meaningful recursion. Also, base cases
                                   # usually appear at the beginning, before the
                                   # recursive call. Or should we call it a
                                   # "ceiling condition"?
            return self.minCosts[step_i]
        return self.minCostAux(step_i+1, cost)

    def minCostClimbingStairs(self, cost: List[int]) -> int:
        cost.append(0)
        self.minCosts = [-1] * len(cost)
        self.minCosts[0] = cost[0]
        self.minCosts[1] = min(cost[0]+cost[1], cost[1])
        return self.minCostAux(2, cost)
Is my quoted understanding correct?
First, context.
Every dynamic programming problem can be solved without dynamic programming using a recursive function. Generally this will take exponential time, but you can always do it. At least in principle. If the problem can't be written that way, then it really isn't a dynamic programming problem.
The idea of dynamic programming is that if I already did a calculation and have a saved result, I can just use that saved result instead of doing the calculation again.
The whole top-down vs bottom-up distinction refers to the naive recursive solution.
In a top-down approach your call stack looks like the naive version except that you make a "memo" of what the recursive result would have given. And then the next time you short-circuit the call and return the memo. This means you can always, always, always solve dynamic programming problems top down. There is always a solution that looks like recursion+memoization. And that solution by definition is top down.
In a bottom up approach you start with what some of the bottom levels would have been and build up from there. Because you know the structure of the data very clearly, frequently you are able to know when you are done with data and can throw it away, saving memory. Occasionally you can filter data on non-obvious conditions that are hard for memoization to duplicate, making bottom up faster as well. For a concrete example of the latter, see Sorting largest amounts to fit total delay.
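To make both points concrete, here is a minimal sketch (my own Fibonacci illustration, not tied to the example linked above): the top-down version is literally the naive recursion plus a memo via functools.lru_cache and keeps every computed value alive, while the bottom-up version knows when it is done with older values and keeps only the last two.

from functools import lru_cache

# Top down: naive recursion + memo. Every memo stays cached.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom up: fib(i) only needs the previous two values, so older
# data can be thrown away as we go. O(1) memory instead of O(n).
def fib_bottom_up(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_top_down(30) == fib_bottom_up(30) == 832040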
Start with your summary.
I strongly disagree with your thinking about the distinction in terms of the optimal order of evaluations. I've encountered many cases with top down where optimizing the order of evaluations will cause memoization to start hitting sooner, making code run faster. Conversely while bottom up certainly picks a convenient order of operations, it is not always optimal.
Now to your questions.
Q1: Correct. Bottom up often knows when it is done with data, top down does not. Therefore bottom up gives you the opportunity to delete data when you are done with it. And you gave an example where this happens.
As for why only one is called memoization, it is because memoization is a specific technique for optimizing a function, and you get top down by memoizing recursion. While the data stored in dynamic programming may match up to specific memos in memoization, you aren't using the memoization technique.
Q2: I do not know.
I've personally found cases where I was solving a problem over some complex data structure and simply couldn't find a bottom up approach. Maybe I simply wasn't clever enough, but I don't believe that a bottom up approach always exists to be found.
But top down is always possible. Here is how to do it in Python for the example that you gave.
First the naive recursive solution looks like this:
def prob_in_board(n, i, j, k):
    if i < 0 or j < 0 or n <= i or n <= j:
        return 0
    elif k <= 0:
        return 1
    else:
        moves = [
            (i+1, j+2), (i+1, j-2),
            (i-1, j+2), (i-1, j-2),
            (i+2, j+1), (i+2, j-1),
            (i-2, j+1), (i-2, j-1),
        ]
        answer = 0
        for next_i, next_j in moves:
            answer += prob_in_board(n, next_i, next_j, k-1) / len(moves)
        return answer

print(prob_in_board(8, 3, 4, 7))
And now we just memoize.
def prob_in_board_memoized(n, i, j, k, cache=None):
    if cache is None:
        cache = {}
    if i < 0 or j < 0 or n <= i or n <= j:
        return 0
    elif k <= 0:
        return 1
    elif (i, j, k) not in cache:
        moves = [
            (i+1, j+2), (i+1, j-2),
            (i-1, j+2), (i-1, j-2),
            (i+2, j+1), (i+2, j-1),
            (i-2, j+1), (i-2, j-1),
        ]
        answer = 0
        for next_i, next_j in moves:
            answer += prob_in_board_memoized(n, next_i, next_j, k-1, cache) / len(moves)
        cache[(i, j, k)] = answer
    return cache[(i, j, k)]

print(prob_in_board_memoized(8, 3, 4, 7))
This solution is top down. If it seems otherwise to you, then you do not correctly understand what is meant by top-down.
I found your question (does every dynamic programming problem have bottom up and top down solutions?) very interesting. That's why I'm adding another answer to continue the discussion about it.
To answer the question in its generic form, I need to formulate it more precisely with math. First, I need to define precisely what a dynamic programming problem is. Then, I need to define precisely what a bottom up solution is and what a top down solution is.
I will try to put down some definitions, but I think they are not the most generic ones; a really generic treatment would need heavier math.
First, define a state space S of dimension d as a subset of Z^d (Z is the set of integers).
Let f: S -> R be a function that we are interested in calculating at a given point P of the state space S (R is the set of real numbers).
Let t: S -> S^k be a transition function (it maps points of the state space to sets of points of the state space).
Consider the problem of calculating f at a point P in S.
We can consider it a dynamic programming problem if there is a function g: R^k -> R such that f(P) = g(f(t(P)[0]), f(t(P)[1]), ..., f(t(P)[k-1])) (a problem can be solved using only its subproblems) and t defines a directed graph that is not a tree (subproblems have some overlap).
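Before moving on, a concrete instance may help (this sketch is mine, using the climbing stairs problem from the question): the state space is the set of step indices, t(i) = (i-1, i-2), and g takes the minimum of the subproblem values and adds the cost of the current step.

# S = step indices, t(i) = (i-1, i-2),
# g = min of the subproblem values plus the cost of the current step.
cost = [10, 15, 20]

def t(i):
    return (i - 1, i - 2)

def g(values, step_cost):
    return min(values) + step_cost

def f(i):
    if i < 0:                # sinks of the graph defined by t
        return 0
    step_cost = cost[i] if i < len(cost) else 0  # the top of the floor costs 0
    return g([f(p) for p in t(i)], step_cost)

print(f(len(cost)))  # 15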
Consider the graph defined by t. We know it has a source (the point P) and some sinks for which we know the value of f (the base cases). We can define a top down solution for the problem as a depth first search through this graph that starts at the source and calculates f for each vertex at its return time (when the depth first search of its whole subgraph is completed) using the transition function. On the other hand, a bottom up solution for the problem can be defined as a multi-source breadth first search through the transposed graph that starts at the sinks and finishes at the source vertex, calculating f at each visited vertex from the previously visited layer.
The problem is: to navigate through the transposed graph, for each point you visit you need to know which points transition to that point in the original graph. In math terms, for each point Q in the transition graph, you need to know the set J of points such that for each point Pi in J, t(Pi) contains Q, and there is no other point Pr in the state space outside of J such that t(Pr) contains Q. Notice that a trivial (but expensive) way to obtain this is to scan the whole state space for each point Q.
The conclusion is that a bottom up solution as defined here always exists, but it only pays off if you have a way to navigate through the transposed graph at least as efficiently as through the original graph. This depends essentially on the properties of the transition function.
In particular, for the leetcode problem you mentioned, the transition function is the function that, for each point on the chessboard, gives all the points the knight can go to. A very special property of this function is that it's symmetric: if the knight can go from A to B, then it can also go from B to A. So, given a point P, you can know which points the knight can go to just as efficiently as you can know which points it can come from. This property guarantees that there exists a bottom up approach as efficient as the top down approach for this problem.
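A quick way to convince yourself of that symmetry (my own check, not part of the original argument): the knight's move set is closed under negation, so reversing every edge of the transition graph gives back the same move set.

moves = [(1, 2), (1, -2), (-1, 2), (-1, -2),
         (2, 1), (2, -1), (-2, 1), (-2, -1)]
# If (dx, dy) is a knight move, so is (-dx, -dy).
assert all((-dx, -dy) in moves for dx, dy in moves)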
For the leetcode question you mentioned, the top down approach goes like the following:
Let P(x, y, k) be the probability that the knight is at square (x, y) at the kth step. Look at all the squares the knight could have come from (you can get them in O(1); just look at the board with pen and paper and derive the formulas for the different cases: knight in a corner, on a border, in the central region, etc.). Let them be (x1, y1), ..., (xj, yj). For each of these squares, what is the probability that the knight jumps to (x, y)? Considering that it can also go off the board, it's always 1/8. So:
P(x, y, k) = (P(x1, y1, k-1) + ... + P(xj, yj, k-1)) / 8
The base case is k = 0:
P(x, y, 0) = 1 if (x, y) = (x_start, y_start), and P(x, y, 0) = 0 otherwise.
You iterate through all n^2 squares and use the recurrence formula to calculate P(x, y, k). Many times you will need values you already calculated for k-1, so you can benefit a lot from memoization.
In the end, the final answer is the sum of P(x, y, k) over all squares of the board.
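Here is a sketch of that recurrence in code (the function name and the layer-by-layer layout are mine; the answer leaves the predecessor formulas to pen and paper, while this sketch exploits the symmetry of the move set instead):

def knight_on_board_probability(n, k, x_start, y_start):
    moves = [(1, 2), (1, -2), (-1, 2), (-1, -2),
             (2, 1), (2, -1), (-2, 1), (-2, -1)]
    # Base case k = 0: all probability mass on the starting square.
    P = [[0.0] * n for _ in range(n)]
    P[x_start][y_start] = 1.0
    for _ in range(k):
        nxt = [[0.0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                # Sum over the squares the knight could have come from;
                # each jump happens with probability 1/8.
                for dx, dy in moves:
                    px, py = x - dx, y - dy
                    if 0 <= px < n and 0 <= py < n:
                        nxt[x][y] += P[px][py] / 8
        P = nxt
    # Final answer: the sum of P(x, y, k) over all squares of the board.
    return sum(map(sum, P))

print(knight_on_board_probability(8, 7, 3, 4))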
When reading about dynamic programming in "Introduction to Algorithms" by Cormen et al., Chapter 15: Dynamic Programming, I came across this statement:
When developing a dynamic-programming algorithm, we follow a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
Steps 1–3 form the basis of a dynamic-programming solution to a problem. If we need only the value of an optimal solution, and not the solution itself, then we can omit step 4. When we do perform step 4, we sometimes maintain additional information during step 3 so that we can easily construct an optimal solution.
I did not understand the difference between steps 3 and 4:
computing the value of an optimal solution
and
constructing the optimal solution.
I was expecting to understand this by reading further, but failed to.
Can someone help me understand this by giving an example?
Suppose we are using dynamic programming to work out whether there is a subset of [1,3,4,6,10] that sums to 9.
The answer to step 3 is the value, in this case "TRUE".
The answer to step 4 is working out the actual subset that sums to 9, in this case "3+6".
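A small sketch may make the distinction concrete (the code and names are mine, not from CLRS): the boolean table alone answers step 3, and walking it backwards performs step 4.

def subset_sum(nums, target):
    # reach[s] is True if some subset of the processed numbers sums to s
    reach = [False] * (target + 1)
    reach[0] = True
    used = {}                      # s -> the number that first reached s
    for x in nums:
        for s in range(target, x - 1, -1):
            if reach[s - x] and not reach[s]:
                reach[s] = True
                used[s] = x
    if not reach[target]:
        return False, None         # step 3 answer: FALSE
    subset, s = [], target         # step 4: reconstruct an actual subset
    while s:
        subset.append(used[s])
        s -= used[s]
    return True, subset

print(subset_sum([1, 3, 4, 6, 10], 9))   # (True, [6, 3]), i.e. 3 + 6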
In dynamic programming we most of the time end up with a huge results hash. Initially, however, it only contains the results of the first, smallest, simplest (bottom) cases, and by using these initial results and calculating on top of them we eventually converge on the target. At this point the last item in the hash is most of the time the target (step 3 completed). Then we have to process it to get the desired result.
A perfect example could be finding the minimum number of cubes summing up to a target. If the target is 500 we should get [5,5,5,5], and if the target is 432 we should get [6,6].
So we can implement this task in JS as follows:
function getMinimumCubes(tgt){
  // Math.fround guards against cube roots like 7.9999... flooring too low
  var maxi = Math.floor(Math.fround(Math.pow(tgt, 1/3))),
      hash = {0: [[]]},   // step 1: the base case
      cube = 0;
  for (var i = 1; i <= maxi; i++){
    cube = i*i*i;
    for (var j = 0; j <= tgt - cube; j++){
      // extend every combination summing to j with the cube of i
      hash[j+cube] = hash[j+cube] ? hash[j+cube].concat(hash[j].map(e => e.concat(i)))
                                  : hash[j].map(e => e.concat(i));
    }
  }
  // step 4: among all combinations summing to tgt, pick the shortest
  return hash[tgt].reduce((p,c) => p.length < c.length ? p : c);
}

var target = 432,
    result = [];
console.time("perf:");
result = getMinimumCubes(target);
console.timeEnd("perf:");
console.log(result);
So in this code, hash = {0:[[]]} is step 1; the nested for loops that eventually prepare hash[tgt] are in fact step 3; and the .reduce() call at the return stage is step 4, since it shapes the last item of the hash (hash[tgt]) into the desired result by picking the shortest among all combinations that sum up to the target value.
To me, step 2 is somewhat meaningless, not only because of the mention of recursion but also by its meaning. Besides, I have never used nor seen a recursive approach in dynamic programming; it's best implemented with while or for loops.
I am having issues with understanding dynamic programming solutions to various problems, specifically the coin change problem:
"Given a value N, if we want to make change for N cents, and we have infinite supply of each of S = { S1, S2, .. , Sm} valued coins, how many ways can we make the change? The order of coins doesn’t matter.
For example, for N = 4 and S = {1,2,3}, there are four solutions: {1,1,1,1},{1,1,2},{2,2},{1,3}. So output should be 4. For N = 10 and S = {2, 5, 3, 6}, there are five solutions: {2,2,2,2,2}, {2,2,3,3}, {2,2,6}, {2,3,5} and {5,5}. So the output should be 5."
There is another variation of this problem where the solution is the minimum number of coins to satisfy the amount.
These problems appear very similar, but the solutions are very different.
Number of possible ways to make change: the optimal substructure for this is DP(m,n) = DP(m-1, n) + DP(m, n-Sm) where DP is the number of solutions for all coins up to the mth coin and amount=n.
Minimum amount of coins: the optimal substructure for this is
DP[i] = Min{ DP[i-d1], DP[i-d2],...DP[i-dn] } + 1 where i is the total amount and d1..dn represent each coin denomination.
Why is it that the first one requires a 2-D array while the second needs only a 1-D array? Why is the optimal substructure for the number of ways to make change not DP[i] = DP[i-d1] + DP[i-d2] + ... + DP[i-dn], where DP[i] is the number of ways amount i can be obtained with the coins? It sounds logical to me, but it produces an incorrect answer. Why is that second dimension for the coins needed in this problem, but not in the minimum amount problem?
LINKS TO PROBLEMS:
http://comproguide.blogspot.com/2013/12/minimum-coin-change-problem.html
http://www.geeksforgeeks.org/dynamic-programming-set-7-coin-change/
Thanks in advance. Every website I go to only explains how the solution works, not why other solutions do not work.
Let's first talk about the number of ways, DP(m, n) = DP(m-1, n) + DP(m, n-Sm). This is indeed correct, because either you use the mth denomination or you don't. Now you ask why we can't write it as DP[i] = DP[i-d1] + DP[i-d2] + ... + DP[i-dn]. Well, this leads to over-counting. Let's take an example where n=4, m=2 and S={1,3}. According to your solution, dp[4] = dp[1] + dp[3] (assuming 1 as a base case, dp[1] = 1). Now dp[3] = dp[2] + dp[0] (again dp[0] = 1 by the base case), and applying the same rule, dp[2] = dp[1] = 1. Thus in total you get 3 when the answer is supposed to be just 2 ((1,3) and (1,1,1,1)). That is because your second method treats (1,3) and (3,1) as two different solutions. Your second method applies to the case where order matters, which is also a standard problem.
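To see the over-counting concretely, here is a small sketch (mine, not the OP's): the 1-D recurrence counts ordered sequences of coins, so it returns 3 for n=4, S={1,3} instead of 2.

def count_ordered(n, coins):
    # dp[i] = sum(dp[i - d]) counts (1,3) and (3,1) separately
    dp = [0] * (n + 1)
    dp[0] = 1
    for i in range(1, n + 1):
        dp[i] = sum(dp[i - d] for d in coins if d <= i)
    return dp[n]

print(count_ordered(4, [1, 3]))  # 3, not 2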
Now to your second question. You say that the minimum number of denominations can be found with DP[i] = Min{ DP[i-d1], DP[i-d2], ..., DP[i-dn] } + 1. This is correct, since for the minimum number of denominations order does not matter. Why is this a linear / 1-D DP? Although the DP array is 1-D, each state depends on up to m states, unlike your first solution, where the array is 2-D but each state depends on at most 2 states. So in both cases the run time, which is (number of states * number of states each state depends on), is the same: O(nm). Both are correct; your second formulation just saves memory. So you can find it either with the 1-D array method or in 2-D using the recurrence
dp(m, n) = min(dp(m-1, n), 1 + dp(m, n-Sm)) (just use min in your first recurrence).
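As a quick illustration of the 1-D method (my sketch, not from the original answer):

def min_coins(n, coins):
    # dp[i] = min(dp[i - d]) + 1; the order of coins is irrelevant here,
    # so one dimension suffices.
    INF = float("inf")
    dp = [0] + [INF] * n
    for i in range(1, n + 1):
        dp[i] = min((dp[i - d] for d in coins if d <= i), default=INF) + 1
    return dp[n] if dp[n] < INF else -1   # -1: amount not reachable

print(min_coins(11, [1, 2, 5]))  # 3 (5 + 5 + 1)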
Hope I cleared up the doubts; do post if something is still unclear.
This is a very good explanation of the coin change problem using Dynamic Programming.
The code is as follows:
public static int change(int amount, int[] coins){
    int[] combinations = new int[amount + 1];
    combinations[0] = 1;
    for(int coin : coins){
        for(int i = 1; i < combinations.length; i++){
            if(i >= coin){
                combinations[i] += combinations[i - coin];
                //printAmount(combinations);
            }
        }
        //System.out.println();
    }
    return combinations[amount];
}
What is a non-recursive algorithm for deciding whether a passed-in amount can be built additively from a set of numbers?
In my case I'm determining whether a certain currency amount (such as $40) can be met by adding up some combination of a set of bills (such as $5, $10 and $20 bills). That is a simple example, but the algorithm needs to work for any currency set (some currencies use funky bill amounts and some bills may not be available at a given time).
So $50 can be met with a set of ($20 and $30), but cannot be met with a set of ($20 and $40). The non-recursive requirement is due to the target code base being SQL Server 2000, where support for recursion is limited.
In addition, this is for a multi-currency environment where the set of bills available may change (think of a foreign currency exchange teller, for example).
You have twice stated that the algorithm cannot be recursive, yet that is the natural solution to this problem. One way or another, you will need to perform a search to solve this problem. If recursion is out, you will need to backtrack manually.
Pick the largest currency value below the target value. If it's a match, you're done. If not, push the current target value onto a stack and subtract the picked currency value from the target. Keep doing this until you find a match or there are no currency values left. Then use the stack to backtrack and pick a different value.
Basically, it's the recursive solution inside a loop with a manually managed stack.
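A rough sketch of that idea (names and structure are mine, and it assumes an unlimited supply of each bill): each stack frame holds the remaining amount and the index of the next bill to try, so backtracking is just popping a frame.

def can_make(target, bills):
    bills = sorted(bills, reverse=True)      # try largest bills first
    stack = [(target, 0)]                    # (remaining amount, bill index)
    while stack:
        remaining, i = stack.pop()
        if remaining == 0:
            return True
        if i < len(bills):
            stack.append((remaining, i + 1))             # branch: skip bills[i]
            if bills[i] <= remaining:
                stack.append((remaining - bills[i], i))  # branch: use bills[i]
    return False

print(can_make(50, [20, 30]))  # True  (20 + 30)
print(can_make(50, [20, 40]))  # False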
If you treat each denomination as a digit of a base-n number, where n is the maximum number of notes you could need, then you can increment through that number until you've exhausted the problem space or found a solution.
The maximum number of notes you would need is the Total you require divided by the lowest denomination note.
It's a brute force response to the problem, but it'll definitely work.
Here's some p-code. I'm probably all over the place with my fence posts, and it's so unoptimized as to be ridiculous, but it should work. I think the idea's right, anyway.
Denominations = [10,20,50,100]
Required = 570

Denominations = sort(Denominations)
iBase = integer (Required / Denominations[1]) + 1  // +1 so a wheel can count
                                                   // up to Required/lowest notes
BumpList = array [Denominations.count]
BumpList.Clear

repeat
    iTotal = 0
    for iAdd = 1 to Bumplist.size
        iTotal = iTotal + bumplist[iAdd] * Denominations[iAdd]
    loop
    if iTotal = Required then exit true

    // this bit should be like a mileometer:
    // we add 1 to each wheel, and trip over to the next wheel when it gets to iBase
    finished = true
    for iPos from bumplist.last to bumplist.first
        if bumplist[iPos] = (iBase-1) then bumplist[iPos] = 0
        else begin
            finished = false
            bumplist[iPos] = bumplist[iPos]+1
            exit for
        end
    loop
until (finished)
exit false
That's a problem that can be solved by an approach known as dynamic programming. The lecture notes I have are too focused on bioinformatics, unfortunately, so you'll have to google for it yourself.
This sounds like the subset sum problem, which is known to be NP-complete.
Good luck with that.
Edit: If you're allowed arbitrary number of bills/coins of some denomination (as opposed to just one), then it's a different problem, and is easier. See the coin problem. I realized this when reading another answer to a (suspiciously) similar question.
I agree with Tyler - what you are describing is a variant of the subset sum problem, which is known to be NP-complete. In this case you are a bit lucky, as you are working with a limited set of values, so you can use dynamic programming techniques to optimize the problem a bit. In terms of some general ideas for the code:
Since you are dealing with money, there are only so many ways to make change with a given set of bills, and in most cases some bills are used more often than others. So if you store the results, you can keep a set of the most common solutions and check them before you try to find the actual solution.
Unless the language you are working with simply doesn't support recursion, there is no reason to completely rule it out. While any recursive problem can be solved using iteration, this is a case where recursion is likely going to be easier to write.
Some of the other users such as Kyle and seanyboy point you in the right direction for writing your own function so you should take a look at what they have provided for what you are working on.
You can deal with this problem with the dynamic programming method, as MattW. mentioned.
Given a limited number of bills and a maximum amount of money, you can try the following solution. The code snippet is in C#, but I believe you can port it to other languages easily.
// Set of bills
int[] unit = { 40, 20, 70 };
// Max amount of money
int max = 100000;

bool[] bucket = new bool[max];
foreach (int t in unit)
    bucket[t] = true;

for (int i = 0; i < bucket.Length; i++)
    if (bucket[i])
        foreach (int t in unit)
            if (i + t < bucket.Length)
                bucket[i + t] = true;

// Check if the following amounts of money
// can be built additively
Console.WriteLine("15 : " + bucket[15]);
Console.WriteLine("50 : " + bucket[50]);
Console.WriteLine("60 : " + bucket[60]);
Console.WriteLine("110 : " + bucket[110]);
Console.WriteLine("120 : " + bucket[120]);
Console.WriteLine("150 : " + bucket[150]);
Console.WriteLine("151 : " + bucket[151]);
Output:
15 : False
50 : False
60 : True
110 : True
120 : True
150 : True
151 : False
There's a difference between no recursion and limited recursion. Don't confuse the two, or you will miss the point of the lesson.
For example, you can safely write a factorial function using recursion in C++ or other low-level languages, because the results will overflow even your biggest number containers within a few recursions. The problem you will face is storing the result, long before recursion ever blows your stack.
This said, whatever solution you find - and I haven't even bothered understanding your problem deeply, as I see that others have already done that - you will have to study the behaviour of your algorithm, and you can determine the worst case depth of your stack.
You don't need to avoid recursion altogether if the worst case scenario is supported by your platform.
Edit: The following will work some of the time. Think about why it won't work all the time and how you might change it to cover other cases.
Build it starting with the largest bill and working towards the smallest. This will yield the lowest number of bills.
Take the initial amount and apply the largest bill as many times as you can without going over the price.
Step to the next largest bill and apply it the same way.
Keep doing this until you are on your smallest bill.
Then check if the sum equals the target amount.
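A sketch of that greedy procedure (mine, not the answerer's code), with a case showing why it only works some of the time:

def greedy_can_make(target, bills):
    remaining = target
    for bill in sorted(bills, reverse=True):
        remaining %= bill    # apply this bill as many times as fits
    return remaining == 0

print(greedy_can_make(40, [5, 10, 20]))  # True
print(greedy_can_make(60, [40, 30]))     # False, although 30 + 30 = 60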
Algorithm:
1. Sort the available currency denominations in descending order.
2. Calculate Remainder = Input % denomination[i], taking denominations from largest to smallest and feeding each remainder into the next step.
3. If the final remainder is 0, the input can be broken down; otherwise it cannot be.
Example:
Input: 50, Available: 10, 20
[50 % 20] = 10, [10 % 10] = 0, Ans: Yes
Input: 50, Available: 15, 20
[50 % 20] = 10, [10 % 15] = 10, Ans: No