Reducing the time complexity of this algorithm

I'm playing a game that has a weapon-forging component, where you combine two weapons to get a new one. The sheer number of weapon combinations (see "6.1. Blade Combination Tables" at http://www.gamefaqs.com/ps/914326-vagrant-story/faqs/8485) makes it difficult to figure out what you can ultimately create out of your current weapons through repeated forging, so I tried writing a program that would do this for me. I give it a list of weapons that I currently have, such as:
francisca
tabarzin
kris
and it gives me the list of all weapons that I can forge:
ball mace
chamkaq
dirk
francisca
large crescent
throwing knife
The problem is that I'm using a brute-force algorithm that scales extremely poorly; it takes about 15 seconds to calculate all possible weapons for seven starting weapons, and a few minutes to calculate for eight starting weapons. I'd like it to be able to calculate up to 64 weapons (the maximum that you can hold at once), but I don't think I'd live long enough to see the results.
function find_possible_weapons(source_weapons)
{
    for (i in source_weapons)
    {
        for (j in source_weapons)
        {
            if (i != j)
            {
                result_weapon = combine_weapons(source_weapons[i], source_weapons[j]);
                new_weapons = array();
                new_weapons.add(result_weapon);
                for (k in source_weapons)
                {
                    if (k != i && k != j)
                        new_weapons.add(source_weapons[k]);
                }
                find_possible_weapons(new_weapons);
            }
        }
    }
}
In English: I attempt every combination of two weapons from my list of source weapons. For each of those combinations, I create a new list of all weapons that I'd have following that combination (that is, the newly-combined weapon plus all of the source weapons except the two that I combined), and then I repeat these steps for the new list.
Is there a better way to do this?
Note that combining weapons in the reverse order can change the result (Rapier + Firangi = Short Sword, but Firangi + Rapier = Spatha), so I can't skip those reversals in the j loop.
Edit: Here's a breakdown of the test example that I gave above, to show what the algorithm is doing. A line in brackets shows the result of a combination, and the following line is the new list of weapons that's created as a result:
francisca,tabarzin,kris
[francisca + tabarzin = chamkaq]
chamkaq,kris
[chamkaq + kris = large crescent]
large crescent
[kris + chamkaq = large crescent]
large crescent
[francisca + kris = dirk]
dirk,tabarzin
[dirk + tabarzin = francisca]
francisca
[tabarzin + dirk = francisca]
francisca
[tabarzin + francisca = chamkaq]
chamkaq,kris
[chamkaq + kris = large crescent]
large crescent
[kris + chamkaq = large crescent]
large crescent
[tabarzin + kris = throwing knife]
throwing knife,francisca
[throwing knife + francisca = ball mace]
ball mace
[francisca + throwing knife = ball mace]
ball mace
[kris + francisca = dirk]
dirk,tabarzin
[dirk + tabarzin = francisca]
francisca
[tabarzin + dirk = francisca]
francisca
[kris + tabarzin = throwing knife]
throwing knife,francisca
[throwing knife + francisca = ball mace]
ball mace
[francisca + throwing knife = ball mace]
ball mace
Also, note that duplicate items in a list of weapons are significant and can't be removed. For example, if I add a second kris to my list of starting weapons so that I have the following list:
francisca
tabarzin
kris
kris
then I'm able to forge the following items:
ball mace
battle axe
battle knife
chamkaq
dirk
francisca
kris
kudi
large crescent
scramasax
throwing knife
The addition of a duplicate kris allowed me to forge four new items that I couldn't before. It also increased the total number of forge tests to 252 for a four-item list, up from 27 for the three-item list.
Edit: I'm getting the feeling that solving this would require more math and computer science knowledge than I have, so I'm going to give up on it. It seemed like a simple enough problem at first, but then, so does the Travelling Salesman. I'm accepting David Eisenstat's answer since the suggestion of remembering and skipping duplicate item lists made such a huge difference in execution time and seems like it would be applicable to a lot of similar problems.

Start by memoizing the brute force solution, i.e., sort source_weapons, make it hashable (e.g., convert to a string by joining with commas), and look it up in a map of input/output pairs. If it isn't there, do the computation as normal and add the result to the map. This often results in big wins for little effort.
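As a minimal Python sketch of that memoization (using only the pair results that appear in the question's trace as a stand-in forge table; the real table is much larger):
# Pair results taken from the question's trace; order matters in general.
FORGE = {
    ("francisca", "tabarzin"): "chamkaq",
    ("tabarzin", "francisca"): "chamkaq",
    ("francisca", "kris"): "dirk",
    ("kris", "francisca"): "dirk",
    ("tabarzin", "kris"): "throwing knife",
    ("kris", "tabarzin"): "throwing knife",
    ("chamkaq", "kris"): "large crescent",
    ("kris", "chamkaq"): "large crescent",
    ("dirk", "tabarzin"): "francisca",
    ("tabarzin", "dirk"): "francisca",
    ("throwing knife", "francisca"): "ball mace",
    ("francisca", "throwing knife"): "ball mace",
}

def find_possible_weapons(weapons, forge=FORGE, memo=None):
    """All weapons forgeable from 'weapons', memoized on the sorted multiset."""
    if memo is None:
        memo = {}
    key = tuple(sorted(weapons))      # hashable stand-in for the multiset
    if key in memo:
        return memo[key]
    results = set()
    n = len(weapons)
    for i in range(n):
        for j in range(n):
            forged = forge.get((weapons[i], weapons[j])) if i != j else None
            if forged is None:
                continue
            results.add(forged)
            rest = [w for k, w in enumerate(weapons) if k not in (i, j)]
            results |= find_possible_weapons(rest + [forged], forge, memo)
    memo[key] = results
    return results

print(sorted(find_possible_weapons(["francisca", "tabarzin", "kris"])))
# ['ball mace', 'chamkaq', 'dirk', 'francisca', 'large crescent', 'throwing knife']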
Alternatively, you could do a backward search. Given a multiset of weapons, form predecessors by replacing one of the weapons with two weapons that forge it, in all possible ways. Starting with the singleton list consisting of the singleton multiset consisting of the goal weapon, repeatedly expand the list by predecessors of list elements and then cull multisets that are supersets of others. Stop when you reach a fixed point.
If linear programming is an option, then there are systematic ways to prune search trees. In particular, let's make the problem easier by (i) allowing an infinite supply of "catalysts" (maybe not needed here?) (ii) allowing "fractional" forging, e.g., if X + Y => Z, then 0.5 X + 0.5 Y => 0.5 Z. Then there's an LP formulation as follows. For all i + j => k (i and j forge k), the variable x_{ijk} is the number of times this forge is performed.
minimize sum_{i + j => k} x_{ijk} (to prevent wasteful cycles)
for all i: sum_{j, k: j + k => i} x_{jki}
- sum_{j, k: j + i => k} x_{jik}
- sum_{j, k: i + j => k} x_{ijk} >= q_i,
for all i + j => k: x_{ijk} >= 0,
where q_i is 1 if i is the goal item, else minus the number of i initially available. There are efficient solvers for this easy version. Since the reactions are always 2 => 1, you can always recover a feasible forging schedule from an integer solution. Accordingly, I would recommend integer programming for this problem. The discussion of duality below may still be of interest.
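If you want to experiment with this, here's a rough sketch of the integer program in Python using the PuLP modelling library; the tiny recipes/items/q data below are hypothetical stand-ins for the real game tables:
import pulp

# Hypothetical stand-in data: recipes are (i, j, k) triples meaning i + j => k.
recipes = {("francisca", "tabarzin", "chamkaq"),
           ("tabarzin", "francisca", "chamkaq")}
items = {"francisca", "tabarzin", "chamkaq"}
q = {"chamkaq": 1, "francisca": -1, "tabarzin": -1}   # goal: 1 chamkaq; 1 of each other held

prob = pulp.LpProblem("forging", pulp.LpMinimize)
x = {r: pulp.LpVariable("x_%s_%s_%s" % r, lowBound=0, cat="Integer") for r in recipes}
prob += pulp.lpSum(x.values())                        # discourage wasteful cycles
for i in items:
    # net production of item i must cover q_i, as in the formulation above
    prob += (pulp.lpSum(x[r] for r in recipes if r[2] == i)
             - pulp.lpSum(x[r] for r in recipes if r[0] == i)
             - pulp.lpSum(x[r] for r in recipes if r[1] == i)) >= q[i]
prob.solve()
print(pulp.LpStatus[prob.status], {r: x[r].value() for r in recipes})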
I know shipping an LP solver may be inconvenient, so here's an insight that will let you do without. This LP is feasible if and only if its dual is bounded. Intuitively, the dual problem is to assign a "value" to each item such that, however you forge, the total value of your inventory does not increase. If the goal item is valued at more than the available inventory, then you can't forge it. You can use any method that you can think of to assign these values.
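As a toy illustration of that dual certificate (the names below are placeholders): if the values never increase under any forge and the goal is worth more than the whole inventory, the goal is unreachable.
# value:     dict mapping item -> assigned value
# recipes:   set of (i, j, k) triples meaning i + j => k
# inventory: list of items currently held
def certifies_unreachable(value, recipes, inventory, goal):
    # no forge may increase total value...
    no_gain = all(value[k] <= value[i] + value[j] for (i, j, k) in recipes)
    # ...and the goal must be worth more than everything held
    return no_gain and value[goal] > sum(value[w] for w in inventory)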

I think you are unlikely to get a good general answer to this question, because if there were an efficient algorithm to solve your problem, it would also be able to solve NP-complete problems.
For example, consider the problem of finding the maximum number of independent rows in a binary matrix.
This is a known NP-complete problem (e.g. by showing equivalence to the maximum independent set problem).
We can reduce this problem to your question in the following manner:
We start by holding one weapon for each column in the binary matrix, and we imagine each row describes an alternative way of making a new weapon (say, a battle axe).
We construct the weapon translation table such that to make the battle axe using method i, we need all weapons j such that M[i,j] is equal to 1 (this may involve inventing some additional weapons).
Then we construct a series of super weapons which can be made by combining different numbers of our battle axes.
For example, the mega ultimate battle axe may require 4 battle axes to be combined.
If we are able to work out the best weapon that can be constructed from your starting weapons, then we have solved the problem of finding the maximum number of independent rows in the original binary matrix.

It's not a huge saving; however, looking at the source document, there are times when combining weapons produces the same weapon as one of those being combined. I assume you won't want to do this, as you'll end up with fewer weapons.
So if you added a check for whether result_weapon is the same type as one of the inputs, and didn't recursively call find_possible_weapons(new_weapons) in that case, you'd trim the search down a little.
The other thing I can think of is that you aren't keeping track of work done, so if find_possible_weapons(new_weapons) returns a weapon that you have already obtained by combining other weapons, you may well be performing the same search branch multiple times.
e.g. if you have a, b, c, d, e, f, g, and a + b = x and c + d = x, then your algorithm will perform two lots of comparing x against e, f, and g. So if you keep track of what you've already computed, you'll be onto a winner...
Basically, you have to trim the search tree. There are loads of different techniques to do this: it's called search. If you want more advice, I'd recommend going to the computer science stack exchange.
If you are still struggling, then you could always start weighting items/resulting items, and only focus on doing the calculation on 'high gain' objects...

You might want to start by creating a Weapon[][] matrix, to show the results of forging each pair. You could map the name of the weapon to the index of the matrix axis, and lookup of the results of a weapon combination would occur in constant time.
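For instance, a minimal sketch in Python (the names and the two pair results here are just the ones from the question's trace):
names = ["francisca", "tabarzin", "kris", "chamkaq", "dirk", "throwing knife"]
index = {name: i for i, name in enumerate(names)}

n = len(names)
forge_matrix = [[None] * n for _ in range(n)]    # None = pair can't be combined
forge_matrix[index["francisca"]][index["tabarzin"]] = "chamkaq"
forge_matrix[index["tabarzin"]][index["kris"]] = "throwing knife"

print(forge_matrix[index["francisca"]][index["tabarzin"]])   # chamkaq, in O(1)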

Related

Shortest Path for n entities

I'm trying to solve a problem about minimizing the distance traveled by a group of n entities that have to go through a group of x points in a given order.
The n entities all start in the same position (1,1), and then I'm given x points that are in a queue and have to be "answered" in the correct order. However, I want the total distance to be minimal.
My approach so far was to order the entities in increasing order of their distance to the point that is next in line. Then I'd check, from the closest to the furthest, whether each entity's distance to the next point in line is bigger than its distance to the point that comes afterwards; the closest one not to fulfill this condition went to answer. If all were closer to the point that came afterwards, I'd reorder them in increasing order of distance to that later point and send the one furthest from it to answer the current point. Since this is a test problem I'm doing as practice for a competition, I know what the result should be for my test case, and it seems I'm doing this wrong.
How should I implement such an algorithm that guarantees that the distance is minimal?
The algorithm you've described sounds like a greedy search algorithm. Greedy algorithms are not guaranteed to find optimal solutions except under specific conditions that don't seem to hold here.
This looks like a candidate for a dynamic programming formulation. Alternatively, you can use heuristic search such as the A* search algorithm. I'd go with the latter if I was in the competition. See the link for a description of how the algorithm works, how to implement it, and how you might apply it to your problem.
Since the points need to be visited in order, there is a bound on the number of possible arrangements, but I couldn't think of a more efficient way than the following formulation. Let f(ns, i) represent the optimal arrangement up to the ith point, where ns is the list of the last visited point for each entity that has been assigned at least one point. Then at each step we have at most two kinds of choice: either start a new entity if we haven't run out, or try the current point as the next visit for each existing entity.
Python recursion:
import math

def d(p1, p2):
    return math.sqrt(math.pow(p1[0] - p2[0], 2) + math.pow(p1[1] - p2[1], 2))

def f(ps, n):
    def g(ns, i):
        if i == len(ps):
            return 0
        # start a new entity if we haven't run out
        best = d((0,0), ps[i]) + g(ns[:] + [i], i + 1) if len(ns) < n else float('inf')
        # try the current point as the next visit for each entity
        for entity_idx, point_idx in enumerate(ns):
            _ns = ns[:]
            _ns[entity_idx] = i
            best = min(best, d(ps[point_idx], ps[i]) + g(_ns, i + 1))
        return best
    return d((0,0), ps[0]) + g([0], 1)
Test case (the docstring sketches the point layout; the call prints 7.0):
"""
       p5
       p3
       p1
(0,0)  p4  p6
       p2
"""
points = [(0,1), (0,-1), (0,2), (2,0), (0,3), (3,0)]
print(f(points, 3))  # 7.0

How many thresholds and distance matrices are there in Eigenface?

I have edited my question to make it as short and precise as possible.
I am developing a prototype of a facial recognition system for my graduation project. I use Eigenfaces, and my main source is the paper by Turk and Pentland. It is available here: http://www.face-rec.org/algorithms/PCA/jcn.pdf.
My doubts focus on steps 4 and 5.
I cannot work out how many thresholds there are: whether there are two types of threshold or only one (note that the text speaks of two types but uses the same symbol). My question is also whether this threshold (or these thresholds) is unique and global for all people, or whether each person has their own value.
I understand the steps up to the computation of a matrix O() of weighted classes. This matrix O() has dimension M' x P, where M' is the number of eigenfaces chosen and P is the number of people.
What follows is what confuses me. The paper speaks of two distances: the distance from one class to another, and the distance from one face to another. I call these D1 and D2 respectively. NOTE: the training set contains M images in total, with F = M / P images per person.
I understand that the threshold(s) should be chosen empirically, but there must be a way to approximate them. I was initially designing a matrix of distances D1() of dimension P x P, where the row vector D1(i) holds the distances from the class average vector O(i) to each O(j), j = 1..P; i.e., an "all vs. all" comparison.
This is where I got stuck: what follows depends on whether I should choose a single global threshold for everyone, or a separate value for each individual. It also depends on whether there are two types of threshold: one for class distances and one for face distances.
I have a theory of how I could proceed, though it is not fully supported by Turk's concepts:
Pre-test stage:
Generate two distance matrices, D1 and D2:
D1 stores the distances between classes and D2 the distances between faces, based on the matrices W and A respectively.
Then, since the training set contains P people, take the F column vectors of D1 for each person and estimate a threshold T1 in the range [Min, Max]. This gives a T1(i), i = 1..P.
Separately, estimate a single T2 based on the range [Min, Max] over the whole matrix D2. This decides whether an image is a face or not.
Test stage:
Build a test set of images, with one image for each known person:
Itest = {Itest(1) ... Itest(P)}
For every test image Itest(i):
Calculate the face-space difference Atest = Itest - Imean
Calculate the weight vector Otest = UT * Atest
Calculate the distances:
dist1(j) = distance(Otest, O(j)), j = 1..P
Af = project(Otest, U)
dist2 = distance(Atest, Af)
Evaluate the recognition:
MinDist = Min(dist1), attained at some class j
If dist2 > T2 then "not a face", else:
If MinDist <= T1(j) then "subject identified as j", else "subject unidentified"
Then I take the false-acceptance and false-rejection rates (TFA and TFR) into account and repeat the test process with different threshold values until I find the values that best fit each person.
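To make the test stage concrete, here is a minimal NumPy sketch of the procedure above. The shapes are my assumptions: U is D x M' with the eigenfaces as columns, Imean is the length-D mean image, O is M' x P, T1 is a length-P vector, and T2 is a scalar.
import numpy as np

def classify(Itest, U, Imean, O, T1, T2):
    Atest = Itest - Imean                    # difference from the mean image
    Otest = U.T @ Atest                      # weights: projection onto the eigenfaces
    Af = U @ Otest                           # reconstruction from face space
    dist2 = np.linalg.norm(Atest - Af)       # distance to face space
    if dist2 > T2:
        return "not a face"
    dist1 = np.linalg.norm(O - Otest[:, None], axis=0)   # distance to each class
    j = int(np.argmin(dist1))
    if dist1[j] <= T1[j]:
        return "subject identified as %d" % j
    return "subject unidentified"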
I know I am departing from the "script" of the official documentation, but this reasoning is the most logical I can come up with. I would appreciate some guidance.
EDIT:
I have nothing more to add that has not already been said and that might help clarify things.
Could anyone tell me whether I am on the right track with my "theory"? I am moving forward with my project, and if this is not the right way, I would appreciate some guidance before I get it wrong.

matlab: optimum number of points for a linear fit

I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case, say 0.05), I want to fit only the points that are in the linear region with this particular intercept. In this case that would be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal set of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1:30, then 2:30, and so on, but I don't really know how to wrap that into a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem to find a general solution to.
One approach would be to compute all the slopes/intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts, 3);
x([idx; 3] == 2)  % the points with the intercepts closest to the linear one
This requires the statistics toolbox (for kmeans). This is the best of all methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes of two points in the start and end range lie close to the slope of the line, these points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to construct successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both of the endpoints, and see if the sum of the squared error remains small. This I gave up very quickly, because defining what "small" is is very subjective and must be done in some heuristic way.
I tried a more systematic and robust version of the above:
function test
    %% example data
    slope = 2;
    intercept = 1.5;
    x = linspace(0.1, 5, 100).';
    y = slope*x + intercept;
    y(1:12) = log(x(1:12)) + y(12) - log(x(12));
    y(74:100) = y(74:100) + (x(74:100) - x(74)).^8;
    y = y + 0.2*randn(size(y));

    %% simple algorithm
    [X, fn] = fminsearch(@(ii) P(ii, x, y, intercept), [0.5 0.5])
    [~, inds] = P(X, x, y, intercept)
end

function [C, inds] = P(ii, x, y, intercept)
    % ii represents the fraction of the range from the center to each end,
    % so ii lies between 0 and 1.
    N = numel(x);
    n = round(N/2);
    ii = round(ii*n);
    inds = min(max(1, n + (-ii(1):ii(2))), N);
    % Solve the linear system with fixed intercept
    A = x(inds);
    b = y(inds) - intercept;
    % and return the sum of squared errors, divided by
    % the number of points included in the set. This
    % last step is required to prevent fminsearch from
    % reducing the set to 1 point (= minimum possible
    % squared error).
    C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it gets more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember, you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between the corresponding line and the entire dataset, choosing the coefficient that yields the smallest MSE.
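A rough sketch of this idea (in Python/NumPy for brevity; the segment length and overlap below are arbitrary illustrative choices):
import numpy as np

def best_segment_slope(x, y, intercept, seg_len=8, step=4):
    """Fixed-intercept fit on overlapping segments; return the slope whose
    line gives the smallest MSE over the entire dataset."""
    x, y = np.asarray(x), np.asarray(y)
    best_a, best_mse = None, np.inf
    for start in range(0, len(x) - seg_len + 1, step):
        s = slice(start, start + seg_len)
        # least-squares slope for y ~ a*x + intercept on this segment
        a = np.dot(x[s], y[s] - intercept) / np.dot(x[s], x[s])
        mse = np.mean((a * x + intercept - y) ** 2)   # MSE over the whole dataset
        if mse < best_mse:
            best_a, best_mse = a, mse
    return best_a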

Grundy's game extended to more than two heaps

How can I break a heap into two heaps in Grundy's game?
What about breaking a heap into any number of heaps (no two of them being equal)?
Games of this type are analyzed in great detail in the book series "Winning Ways for your Mathematical Plays". Most of the things you are looking for are probably in volume 1.
You can also take a look at these links: Nimbers (Wikipedia), Sprague-Grundy theorem (Wikipedia) or do a search for "combinatorial game theory".
My knowledge on this is quite rusty, so I'm afraid I can't help you myself with this specific problem. My excuses if you were already aware of everything I linked.
Edit: In general, the method of solving these types of games is to "build up" stack sizes. So start with a stack of 1 and decide who wins with optimal play. Then do the same for a stack of 2, which can be split into 1 & 1. Then move on to 3, which can be split into 1 & 2. Same for 4, which can only be split into 3 & 1 (2 & 2 is not allowed, since the two heaps must be unequal); using the Sprague-Grundy theorem and the algebraic rules for nimbers, you can calculate who will win. Keep going until you reach the stack size for which you need to know the answer.
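As a sketch of that build-up (my own illustration): compute the nimber of each heap size with the mex rule; a multi-heap position is then a first-player win exactly when the XOR of its heaps' nimbers is nonzero.
from itertools import count

def grundy_values(n_max):
    g = [0] * (n_max + 1)          # heaps of size 1 and 2 have no legal split
    for n in range(3, n_max + 1):
        # nimbers of all positions reachable by one split into unequal piles
        reachable = {g[a] ^ g[n - a] for a in range(1, (n + 1) // 2)}
        g[n] = next(v for v in count() if v not in reachable)   # mex
    return g

print(grundy_values(13))   # [0, 0, 0, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 3]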
Edit 2: The website I was talking about in the comments seems to be down. Here is a link of a backup of it: Wayback Machine - Introduction to Combinatorial Games.
Grundy's Game, and many games like it, can be solved with an algorithm like this:
//returns a Move object representing the current player's optimal move,
//or null if the player has no chance of winning
function bestMove(GameState g){
    for each (move in g.possibleMoves()){
        nextState = g.applyMove(move)
        if (bestMove(nextState) == null){
            //the next player's best move is null, so if we take this move,
            //he has no chance of winning. This is good for us!
            return move;
        }
    }
    //none of our possible moves led to a winning strategy.
    //We have no chance of winning. This is bad for us :-(
    return null;
}
Implementations of GameState and Move depend on the game. For Grundy's game, both are simple.
GameState stores a list of integers, representing the size of each heap in the game.
Move stores an initialHeapSize integer, and a resultingHeapSizes list of integers.
GameState::possibleMoves iterates through its heap size list, and determines the legal divisions for each one.
GameState::applyMove(Move) returns a copy of the GameState, except the move given to it is applied to the board.
GameState::possibleMoves can be implemented for "classic" Grundy's Game like so:
function possibleMoves(GameState g){
    moves = []
    for each (heapSize in g.heapSizes){
        for each (resultingHeaps in possibleDivisions(heapSize)){
            Move m = new Move(heapSize, resultingHeaps)
            moves.append(m)
        }
    }
    return moves
}

function possibleDivisions(int heapSize){
    divisions = []
    for(int leftPileSize = 1; leftPileSize < heapSize; leftPileSize++){
        int rightPileSize = heapSize - leftPileSize
        if (leftPileSize != rightPileSize){
            divisions.append([leftPileSize, rightPileSize])
        }
    }
    return divisions
}
Modifying this to use the "divide into any number of unequal piles" rule is just a matter of changing the implementation of possibleDivisions.
I haven't calculated it exactly, but an unoptimized bestMove has a pretty crazy worst-case runtime. Once you start giving it a starting state of around 12 stones, you'll get long wait times. So you should implement memoization to improve performance.
For best results, keep each GameState's heap size list sorted, and discard any heaps of size 2 or 1.
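Putting those pieces together, here's a compact Python sketch of the same win/loss recursion with the memoization and normalization suggested above (heaps kept as a sorted tuple, sizes 1 and 2 dropped):
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(heaps):
    """heaps: sorted tuple of heap sizes, with sizes 1 and 2 already dropped."""
    for i, h in enumerate(heaps):
        for left in range(1, (h + 1) // 2):          # unequal splits only
            right = h - left
            rest = heaps[:i] + heaps[i + 1:] + tuple(p for p in (left, right) if p > 2)
            if not first_player_wins(tuple(sorted(rest))):
                return True                          # this move leaves the opponent losing
    return False                                     # every move leads to a winning position

print(first_player_wins((12,)))   # True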

What is a good non-recursive algorithm for deciding whether a passed in amount can be built additively from a set of numbers?

What is a non-recursive algorithm for deciding whether a passed-in amount can be built additively from a set of numbers?
In my case I'm determining whether a certain currency amount (such as $40) can be met by adding up some combination of a set of bills (such as $5, $10 and $20 bills). That is a simple example, but the algorithm needs to work for any currency set (some currencies use funky bill amounts and some bills may not be available at a given time).
So $50 can be met with a set of ($20 and $30), but cannot be met with a set of ($20 and $40). The non-recursive requirement is due to the target code base being for SQL Server 2000 where the support of recursion is limited.
In addition this is for supporting a multi currency environment where the set of bills available may change (think a foreign currency exchange teller for example).
You have twice stated that the algorithm cannot be recursive, yet that is the natural solution to this problem. One way or another, you will need to perform a search to solve this problem. If recursion is out, you will need to backtrack manually.
Pick the largest currency value below the target value. If it's a match, you're done. If not, push the current target value on a stack and subtract the picked currency value from the target value. Keep doing this until you find a match or there are no more currency values left. Then use the stack to backtrack and pick a different value.
Basically, it's the recursive solution inside a loop with a manually managed stack.
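Here's a minimal sketch of that loop in Python, treating each bill as usable at most once (the subset-sum reading of the $20/$30 example in the question); each stack entry holds the remaining amount and the index of the next bill to try:
def can_make(target, bills):
    bills = sorted(bills, reverse=True)     # try larger bills first
    stack = [(target, 0)]
    while stack:
        remaining, i = stack.pop()
        if remaining == 0:
            return True
        for j in range(i, len(bills)):
            if bills[j] <= remaining:
                # "recurse": use bill j, then only consider bills after it
                stack.append((remaining - bills[j], j + 1))
    return False

print(can_make(50, [20, 30]))  # True
print(can_make(50, [20, 40]))  # False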
If you treat the count of each denomination used as a digit of a base-n number, where n is the maximum number of notes you would need, then you can increment through that number until you've exhausted the problem space or found a solution.
The maximum number of notes you would need is the Total you require divided by the lowest denomination note.
It's a brute force response to the problem, but it'll definitely work.
Here's some p-code. I'm probably all over the place with my fence posts, and it's so unoptimized as to be ridiculous, but it should work. I think the idea's right anyway.
Denominations = [10, 20, 50, 100]
Required = 570

Denominations = sort(Denominations)
iBase = integer(Required / Denominations[1])
BumpList = array[Denominations.count]
BumpList.Clear

repeat
    iTotal = 0
    for iAdd = 1 to BumpList.size
        iTotal = iTotal + BumpList[iAdd] * Denominations[iAdd]
    loop
    if iTotal = Required then exit true

    //this bit should be like a mileometer.
    //We add 1 to each wheel, and trip over to the next wheel when it gets to iBase
    finished = true
    for iPos from BumpList.last to BumpList.first
        if BumpList[iPos] = (iBase - 1) then BumpList[iPos] = 0
        else begin
            finished = false
            BumpList[iPos] = BumpList[iPos] + 1
            exit for
        end
    loop
until (finished)
exit false
That's a problem that can be solved by an approach known as dynamic programming. The lecture notes I have are too focused on bioinformatics, unfortunately, so you'll have to google for it yourself.
This sounds like the subset sum problem, which is known to be NP-complete.
Good luck with that.
Edit: If you're allowed arbitrary number of bills/coins of some denomination (as opposed to just one), then it's a different problem, and is easier. See the coin problem. I realized this when reading another answer to a (suspiciously) similar question.
I agree with Tyler - what you are describing is a variant of the Subset Sum problem, which is known to be NP-complete. In this case you are a bit lucky, as you are working with a limited set of values, so you can use dynamic programming techniques to optimize the problem a bit. In terms of some general ideas for the code:
Since you are dealing with money, there are only so many ways to make change with a given bill, and in most cases some bills are used more often than others. So if you store the results, you can keep a set of the most common solutions and then just check them before you try to find the actual solution.
Unless the language you are working with doesn't support recursion, there is no reason to completely rule out recursion in the solution. While any recursive problem can be solved using iteration, this is a case where recursion is likely going to be easier to write.
Some of the other users such as Kyle and seanyboy point you in the right direction for writing your own function so you should take a look at what they have provided for what you are working on.
You can deal with this problem with the dynamic programming method, as MattW mentioned.
Given a limited set of bill denominations and a maximum amount of money, you can try the following solution. The code snippet is in C#, but I believe you could port it to another language easily.
// Set of bills
int[] unit = { 40, 20, 70 };

// Max amount of money
int max = 100000;

bool[] bucket = new bool[max];
foreach (int t in unit)
    bucket[t] = true;

for (int i = 0; i < bucket.Length; i++)
    if (bucket[i])
        foreach (int t in unit)
            if (i + t < bucket.Length)
                bucket[i + t] = true;

// Check if the following amount of money
// can be built additively
Console.WriteLine("15 : " + bucket[15]);
Console.WriteLine("50 : " + bucket[50]);
Console.WriteLine("60 : " + bucket[60]);
Console.WriteLine("110 : " + bucket[110]);
Console.WriteLine("120 : " + bucket[120]);
Console.WriteLine("150 : " + bucket[150]);
Console.WriteLine("151 : " + bucket[151]);
Output:
15 : False
50 : False
60 : True
110 : True
120 : True
150 : True
151 : False
There's a difference between no recursion and limited recursion. Don't confuse the two, or you will have missed the point of your lesson.
For example, you can safely write a factorial function using recursion in C++ or other low-level languages, because your results will overflow even your biggest number containers within just a few recursions. So the problem you will face will be that of storing the result before the recursion ever gets close to blowing your stack.
That said, whatever solution you find (and I haven't bothered to understand your problem deeply, as I see that others have already done that), you will have to study the behaviour of your algorithm to determine the worst-case depth of your stack.
You don't need to avoid recursion altogether if the worst-case scenario is supported by your platform.
Edit: The following will work some of the time. Think about why it won't work all the time and how you might change it to cover other cases.
Build it starting with the largest bill and working towards the smallest. This will yield the lowest number of bills.
Take the initial amount and apply the largest bill as many times as you can without going over the price.
Step to the next largest bill and apply it the same way.
Keep doing this until you are on your smallest bill.
Then check if the sum equals the target amount.
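Here's a sketch of these steps in Python, assuming an unlimited supply of each denomination. As noted above, it only works some of the time; for example, a target of 60 with bills of 40 and 30 fails even though 30 + 30 works:
def greedy_can_make(target, denominations):
    remaining = target
    for d in sorted(denominations, reverse=True):
        remaining %= d          # use this bill as many times as possible
    return remaining == 0

print(greedy_can_make(50, [10, 20]))  # True
print(greedy_can_make(60, [40, 30]))  # False, although 30 + 30 == 60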
Algorithm:
1. Sort the available currency denominations in descending order.
2. Repeatedly reduce the input: Remainder = Input % denomination[i], for i from the largest denomination down to the smallest.
3. If the final remainder is 0, the input can be broken down; otherwise it cannot be.
Example:
Input: 50, Available: 10, 20
[50 % 20] = 10, [10 % 10] = 0, Ans: Yes
Input: 50, Available: 15, 20
[50 % 20] = 10, [10 % 15] = 10, Ans: No
