Improvement for Selection Sort Algorithm?

I'm a new CS student, currently self-studying an intro to programming course. While studying the selection sort algorithm, I thought that the change below would make it more efficient. Is this true, or am I missing something?
The change: instead of calling the swap function on every pass, even when no change would be made to the array, we add a changeMade Boolean variable and use it to call the function only when a change was actually made. Please correct me if I'm wrong.
Declare Integer startScan, i, minValue
Declare Integer minIndex
//the Boolean variable that could
//make the algorithm more efficient
Declare Boolean changeMade
//Declare the array, declare its size,
//and initialize it
Constant Integer SIZE = 5
Declare Integer array[SIZE] = 1, 4, 8, 2, 5

For startScan = 0 To SIZE-2
    set changeMade = False
    set minValue = array[startScan]
    set minIndex = startScan
    For i = startScan+1 To SIZE-1
        If array[i] < minValue Then
            set minValue = array[i]
            set minIndex = i
            set changeMade = True //the modification
        End If
    End For
    If changeMade = True Then
        call swap(array[minIndex], array[startScan])
    End If
End For

Module swap(Integer Ref a, Integer Ref b)
    Declare Integer temp
    set temp = a
    set a = b
    set b = temp
End Module

Operations such as swap are essentially ignored when calculating complexity.
Strictly speaking, all operations are taken into account when calculating time complexity, but because the loops dominate everything else, we ignore the other operations and consider only the dominant ones (for large inputs, the cost of all other operations is much smaller than the cost of the dominant operations).
As an example with selection sort: when you take every statement's cost into account, you get a function f(n) = an² + bn + c (where a, b and c are constants that depend on the machine architecture). The dominant term here is an², so we say the time complexity of selection sort is O(n²). We also drop the leading coefficient a, since a constant factor does not change the rate of growth.
Have you read about asymptotic analysis and notations such as theta, omega and big O? Have a look at them; they will help you answer your own question.

That sounds like a false good idea.
This heuristic only pays off for elements that are already in their correct position.
Assume 10% of them are, which can be considered optimistic. While sorting N elements, you will save 0.1·N swaps, but add a large number of assignments to the flag (up to N²/2!) and N tests of the flag (conditional instructions are comparatively slow).
Unless the swap is really costly, odds are high that the overhead of maintaining the flag will dominate.
It is certainly a better idea to drop the flag and test minIndex != startScan instead, but even so, it is not certain that avoiding the swap will counterbalance the extra comparisons.
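For illustration, here is a minimal Python sketch of that variant: selection sort with the flag dropped in favor of a direct minIndex != startScan test (the function name is mine):
def selection_sort(arr):
    # sort in place; skip the swap when the minimum is already in place
    for start in range(len(arr) - 1):
        min_index = start
        for i in range(start + 1, len(arr)):
            if arr[i] < arr[min_index]:
                min_index = i
        if min_index != start:  # the cheap test replacing the flag
            arr[start], arr[min_index] = arr[min_index], arr[start]
    return arr

print(selection_sort([1, 4, 8, 2, 5]))  # [1, 2, 4, 5, 8]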

Related

Generating Combinational string of length n using 3 possible values

I have three possible values: war (w), buy (b) and sell (s). I have to generate every combination string of length N.
Suppose N is 2; then the total number of combinations is 3×3 = 9:
w,w
w,b
w,s
b,w
b,b
b,s
s,w
s,b
s,s
Likewise, I have to generate all combination strings of (w, s, b) of length equal to N, where 2 <= N <= 8000.
You can do it with a recursive function. Here's an example in python, but you can easily rewrite it in your favorite language.
def x(partial):
    # once the string reaches length N, report it and stop recursing
    if len(partial) == N:
        handle_solution(partial)
        return
    for c in ('w', 'b', 's'):
        x(partial + c)
This is going to be slow regardless of language or implementation. The number of solutions is 3^N so even for relatively small values of N this will take a very long time. You should go back to your original problem and figure out a way to solve it without going through all the combinations.
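If the recursion itself is a concern, Python's standard library can enumerate the same strings iteratively with itertools.product:
from itertools import product

N = 2
for combo in product('wbs', repeat=N):
    print(','.join(combo))
The count is still 3^N, so the warning above applies either way.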

How to design a data structure that allows one to search, insert and delete an integer X in O(1) time

Here is an exercise (3-15) in the book "Algorithm Design Manual".
Design a data structure that allows one to search, insert, and delete an integer X in O(1) time (i.e. , constant time, independent of the total number of integers stored). Assume that 1 ≤ X ≤ n and that there are m + n units of space available, where m is the maximum number of integers that can be in the table at any one time. (Hint: use two arrays A[1..n] and B[1..m].) You are not allowed to initialize either A or B, as that would take O(m) or O(n) operations. This means the arrays are full of random garbage to begin with, so you must be very careful.
I am not really seeking the answer, because I don't even understand what this exercise is asking for.
From the first sentence:
Design a data structure that allows one to search, insert, and delete an integer X in O(1) time
I can easily design a data structure like that. For example:
Because 1 <= X <= n, I can just use a bit vector of n slots and let X be the index into the array. To insert, e.g., 5, set a[5] = 1; to delete 5, set a[5] = 0; to search for 5, simply return a[5], right?
I know this exercise is harder than I imagine, but what's the key point of this question?
You are basically implementing a multiset with bounded size, both in number of elements (#elements <= m), and valid range for elements (1 <= elementValue <= n).
Search: myCollection.search(x) --> return True if x inside, else False
Insert: myCollection.insert(x) --> add exactly one x to collection
Delete: myCollection.delete(x) --> remove exactly one x from collection
Consider what happens if you try to store 5 twice, e.g.
myCollection.insert(5)
myCollection.insert(5)
That is why you cannot use a bit vector. But it says "units" of space, so the elaboration of your method would be to keep a tally of each element. For example you might have [_,_,_,_,1,_,...] then [_,_,_,_,2,_,...].
Why doesn't this work, however? It seems to work just fine, for example, if you insert 5 then delete 5... but what happens if you do .search(5) on an uninitialized array? You are specifically told you cannot initialize it, so you have no way to tell whether the value you find in that piece of memory, e.g. 24753, actually means "there are 24753 instances of 5" or is just garbage.
NOTE: You must allow yourself O(1) initialization space, or the problem cannot be solved. (Otherwise a .search() would not be able to distinguish the random garbage in your memory from actual data, because you could always come up with random garbage which looked like actual data.) For example you might consider having a boolean which means "I have begun using my memory" which you initialize to False, and set to True the moment you start writing to your m words of memory.
If you'd like a full solution, the spoiler block below contains the one I came up with. It's only a few lines of code, but the proofs are a bit longer:
SPOILER: FULL SOLUTION
Setup:
Use N words as a dispatch table: locationOfCounts is an array of size N, whose values loc lie in the range [0,M). locationOfCounts[i] is the location where the count of i would be stored, but we can only trust this value if we can prove it is not garbage.
(sidenote: This is equivalent to an array of pointers, but an array of pointers exposes you to looking up garbage directly, so you'd have to code that implementation with pointer-range checks.)
Use M words as the counts themselves: counts is an array of size M, with two values per element. The first value is the number the entry represents, and the second is the count of that number (in the range [1,m]). For example, a value of (5,⨯2) would mean that there are 2 instances of the number 5 stored in the collection. To find out how many i's there are in the collection, you look up loc = locationOfCounts[i] and read counts[loc].
(M words is enough space for all the counts. Proof: We know there can never be more than M elements, therefore the worst-case is we have M counts of value=1. QED)
(We also choose to only keep track of counts >= 1, otherwise we would not have enough memory.)
Use a number called numberOfCountsStored that IS initialized to 0 but is updated whenever the number of item types changes. For example, this number would be 0 for {}, 1 for {5:[1 times]}, 1 for {5:[2 times]}, and 2 for {5:[2 times],6:[4 times]}.
                          1  2  3  4  5  6  7  8...
locationOfCounts[<N]: [☠, ☠, ☠, ☠, ☠, 0, 1, ☠, ...]
counts[<M]:           [(5,⨯2), (6,⨯4), ☠, ☠, ☠, ☠, ☠, ☠, ☠, ☠..., ☠]
numberOfCountsStored:          2
Below we flesh out the details of each operation and prove why it's correct:
Algorithm:
There are two main ideas: 1) we can never allow ourselves to read memory without first verifying that it is not garbage, or, if we do, we must be able to prove that it was garbage; 2) we need to be able to prove in O(1) time, using only O(1) space, that a given piece of counter memory has been initialized. The O(1) space we use is numberOfCountsStored. Each time we do an operation, we go back to this number to prove that everything was correct (e.g. see ★ below). The representation invariant is that we always store counts in counts going from left to right, so counts[0..numberOfCountsStored-1] are exactly the valid entries.
.search(e) -- Check locationOfCounts[e]. We assume for now that the value is properly initialized and can be trusted. We proceed to check counts[loc], but first we check whether counts[loc] has been initialized: it's initialized if 0 <= loc < numberOfCountsStored (if not, the data is nonsensical, so we return False). After checking that, we look up counts[loc], which gives us a (number, count) pair. If number != e, we got here by following garbage (nonsensical), so we return False (again, as above)... but if indeed number == e, this proves that the count is correct (★proof: numberOfCountsStored is a witness that this particular counts[loc] is valid, and counts[loc].number is a witness that locationOfCounts[number] is valid, and thus our original lookup was not garbage), so we return True.
.insert(e) -- Perform the steps in .search(e). If it already exists, we only need to increment the count by 1. However if it doesn't exist, we must tack on a new entry to the right of the counts subarray. First we increment numberOfCountsStored to reflect the fact that this new count is valid: loc = numberOfCountsStored++. Then we tack on the new entry: counts[loc] = (e,⨯1). Finally we add a reference back to it in our dispatch table so we can look it up quickly locationOfCounts[e] = loc.
.delete(e) -- Perform the steps in .search(e). If the element doesn't exist, throw an error. If the count is >= 2, all we need to do is decrement the count by 1. Otherwise the count is 1, and the trick to preserving the whole numberOfCountsStored-counts[...] invariant (i.e. everything remains stored in the left part of counts) is to perform swaps. Deleting the entry outright would leave a hole in our array: [countPair0, countPair1, _hole_, countPair2, ..., countPair{numberOfCountsStored-1}, ☠, ☠, ☠..., ☠]. We swap this hole with the last countPair, decrement numberOfCountsStored to invalidate the hole, and update locationOfCounts[the_count_record_we_swapped.number] so it now points to the new location of the count record.
Here is an idea:
Treat the array B[1..m] as a stack, and make a pointer p point to the top of the stack (let p = 0 to indicate that no elements have been inserted into the data structure). Now, to insert an integer X, use the following procedure:
p++;
A[X] = p;
B[p] = X;
Searching should be pretty easy to see here (let X' be the integer you want to search for, then just check that 1 <= A[X'] <= p, and that B[A[X']] == X'). Deleting is trickier, but still constant time. The idea is to search for the element to confirm that it is there, then move something into its spot in B (a good choice is B[p]). Then update A to reflect the pointer value of the replacement element and pop off the top of the stack (e.g. set B[p] = -1 and decrement p).
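Here is a minimal Python sketch of that idea; the "uninitialized" arrays are simulated with random garbage, and the function names are mine:
import random

n, m = 10, 5
A = [random.randint(-99, 99) for _ in range(n + 1)]  # garbage: value -> stack slot
B = [random.randint(-99, 99) for _ in range(m + 1)]  # garbage: stack of stored values
p = 0  # stack pointer: the only state we actually initialize

def search(x):
    return 1 <= A[x] <= p and B[A[x]] == x

def insert(x):
    global p
    if not search(x):
        p += 1
        A[x] = p
        B[p] = x

def delete(x):
    global p
    if search(x):
        top = B[p]       # element currently on top of the stack
        B[A[x]] = top    # move it into x's slot
        A[top] = A[x]    # fix its back-pointer
        p -= 1

insert(5); insert(7); delete(5)
print(search(5), search(7))  # False True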
It's easier to understand the question once you know the answer: an integer is in the set if A[X]<total_integers_stored && B[A[X]]==X.
The question is really asking if you can figure out how to create a data structure that is usable with a minimum of initialization.
I first saw the idea, as in Cameron's answer, in Jon Bentley's Programming Pearls.
The idea is pretty simple, but it's not straightforward to see why the initial random values that may be in the uninitialized arrays do not matter. This link explains the insertion and search operations pretty well. Deletion is left as an exercise, but is answered by one of the commenters:
remove-member(i):
    if not is-member(i): return
    j = dense[n-1]
    dense[sparse[i]] = j
    sparse[j] = sparse[i]
    n = n - 1

no explicit loop to calculate product of list to some modulo in Mathematica

In Mathematica, do I have to use an explicit loop to calculate the product of the elements in a given (potentially very long) list modulo another number?
Please teach me your elegant approach if you have one. Thanks!
Edit
Just to give an example
list = Range[2000]; Mod[Product[list[[i]], {i, Length[list]}], 32327]
The above is very inefficient, because while calculating the product one could have taken the modulo at each step to keep the multiplicands small.
Edit 2
I guess my question relates to how to replace a For loop of the form
Module[{ret = initial_value}, For[i = 1, i <= Length[list], i++, ret = general_function[list[[i]], ret]]; ret]
given a general function general_function and a list list.
For long lists a divide-and-conquer approach is typically faster. The idea is to compute the product-mod of the first and the second half, multiply those two results, and take the mod again.
Here is an example. We'll use a list of 10^6 integers, all between 0 and 10^10.
SeedRandom[1111111];
len = 6;
max = 10;
list = RandomInteger[10^max, 10^len];
Multiplying and taking the modulus, for a slightly larger mod (I wanted to decrease the likelihood that the result was zero):
In[119]:= Timing[Mod[Times @@ list, 32327541]]
Out[119]= {1.360000, 8826597}
Here is a variant of the sort I described. Trial and error tuning indicated that lists of length 2^9 or so were best done nonrecursively, at least for numbers in the size range indicated above.
tmod2[ll_List, m_] := With[{len = Floor[Length[ll]/2]},
  If[len <= 256,
    Mod[Times @@ ll, m],
    Mod[tmod2[Take[ll, len], m] * tmod2[Drop[ll, len], m], m]]]
In[120]:= Timing[tmod2[list, 32327541]]
Out[120]= {0.310000, 8826597}
When I increase the list length to 10^7 and allow ints from 0 to 10^20, the first method takes 50 seconds and the second one takes 5 seconds. So clearly the scaling is working to our advantage.
For situations where an iteration interleaving two operations might be preferred to divide-and-conquer, one might use Fold as below.
tmod3[ll_List, m_] := Fold[Mod[#1*#2,m]&, First[ll], Rest[ll]]
While not competitive with tmod2 on long lists, this is faster than multiplying everything out before invoking Mod. For length 10^7 and max element of 10^20, it takes around 8 seconds to do what tmod2 did in 5.
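For readers outside Mathematica, here is a rough Python rendering of the same two strategies (the 256 threshold is carried over from tmod2; math.prod requires Python 3.8+):
import math
from functools import reduce

def tmod(xs, m):
    # divide and conquer: multiply small chunks outright, reduce mod m once
    if len(xs) <= 256:
        return math.prod(xs) % m
    half = len(xs) // 2
    return (tmod(xs[:half], m) * tmod(xs[half:], m)) % m

def tmod_fold(xs, m):
    # the Fold-style variant: take the mod after every multiplication
    return reduce(lambda acc, x: (acc * x) % m, xs, 1 % m)

print(tmod(list(range(1, 2001)), 32327))       # same result as below
print(tmod_fold(list(range(1, 2001)), 32327))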
Why not use Times? The following
list=Range[2000];
Mod[Times @@ list, 32327]
will probably be the most efficient. From a recent WRI blog post,
Times knows a clever binary splitting trick that can be used when you have a large number of integer arguments. It is faster to recursively split the arguments into two smaller products, (1*2*…32767)(32768*…*65536), rather than working through the arguments from first to last. It still has to do the same number of multiplications, but fewer of them involve very big integers, and so, on average, are quicker to do
I'm assuming that list in your question is just an example. If you really have to take the product of n consecutive integers starting with 1, then Factorial will be the fastest. i.e.,
Mod[2000!, 32327]
This appears to be as much as twice as fast as Daniel's code on my system:
SeedRandom[1];
list = RandomInteger[1*^20, 1*^7];
m = 32327501;
Mod[Times @@ Mod[Times @@@ Partition[list, 50, 50, 1, {}], m], m] // AbsoluteTiming
tmod2[list, m] // AbsoluteTiming
{1.5800904, 21590133}
{3.1081778, 21590133}
Different partition lengths could be used to tune this for your system and work set.

How to test if one set of (unique) integers belongs to another set, efficiently?

I'm writing a program where I have to test whether one set of unique integers A belongs to another set of unique integers B. However, this operation might be done several hundred times per second, so I'm looking for an efficient algorithm to do it.
For example, if A = [1 2 3] and B = [1 2 3 4], it is true, but if B = [1 2 4 5 6], it's false.
I'm not sure how efficient it is to just sort and compare, so I'm wondering if there are any more efficient algorithms.
One idea I came up with was to map each number n to its corresponding n'th prime: that is, 1 = 2, 2 = 3, 3 = 5, 4 = 7, etc. Then I could calculate the product of A, and if that product is a factor of the corresponding product of B, we could say with certainty that A is a subset of B. For example, if A = [1 2 3] and B = [1 2 3 4], the primes are [2 3 5] and [2 3 5 7] and the products are 2*3*5 = 30 and 2*3*5*7 = 210. Since 210 % 30 = 0, A is a subset of B. I'm expecting the largest integer to be a couple of million at most, so I think it's doable.
Are there any more efficient algorithms?
The asymptotically fastest approach would be to just put each set in a hash table and query each element, which is O(N) time. You cannot do better (since it will take that much time to read the data).
Most set data structures already support expected and/or amortized O(1) query time. Some languages even support this operation directly. For example in Python, you could just do
A <= B
(A < B tests for a proper subset.)
Of course the picture changes drastically depending on what you mean by "this operation is repeated". If you have the ability to do precalculations on the data as you add it to the set (which presumably you have the ability to do so), this will allow you to subsume the minimal O(N) time into other operations such as constructing the set. But we can't advise without knowing much more.
Assuming you had full control of the set datastructure, your approach to keep a running product (whenever you add an element, you do a single O(1) multiplication) is a very good idea IF there exists a divisibility test that is faster than O(N)... in fact your solution is really smart, because we can just do a single ALU division and hope we're within float tolerance. Do note however this will only allow you roughly a speedup factor of 20x max I think, since 21! > 2^64. There might be tricks to play with congruence-modulo-an-integer, but I can't think of any. I have a slight hunch though that there is no divisibility test that is faster than O(#primes), though I'd like to be proved wrong!
If you are doing this repeatedly on duplicates, you may benefit from caching, depending on what exactly you are doing; give each set a unique ID (though since this makes updates hard, you may, ironically, wish to do something exactly like your scheme to make fingerprints, but mod max_int_size with collision detection). To manage memory, you can pin extremely expensive set comparisons (e.g. checking if a giant set is part of itself) in the cache, while otherwise using a most-recently-used policy if you run into memory issues. The nice thing about this is that it synergizes with an element-by-element rejection test: you will throw out sets quickly if they don't have many overlapping elements, but if they have many overlapping elements the calculations will take a long time, and if you repeat these calculations, caching could come in handy.
Let A and B be two sets, and suppose you want to check whether A is a subset of B. The first idea that pops into my mind is to sort both sets and then simply check whether every element of A is contained in B, as follows:
Let n_A and n_B be the cardinality of A and B, respectively, and let i_A = 1, i_B = 1. Then the following algorithm (which is O(n_A + n_B)) solves the problem:
// A and B assumed to be sorted
i_A = 1;
i_B = 1;
n_A = size(A);
n_B = size(B);
while (i_A <= n_A) {
    while (A[i_A] > B[i_B]) {
        i_B++;
        if (i_B > n_B) return false;
    }
    if (A[i_A] != B[i_B]) return false;
    i_A++;
}
return true;
The same thing, but in a more functional, recursive way (some will find the previous easier to understand, others might find this one easier to understand):
// A and B assumed to be sorted
function subset(A, B)
    n_A = size(A)
    n_B = size(B)
    function subset0(i_A, i_B)
        if (i_A > n_A) true
        else if (i_B > n_B) false
        else if (A[i_A] <= B[i_B]) return (A[i_A] == B[i_B]) && subset0(i_A + 1, i_B + 1)
        else return subset0(i_A, i_B + 1)
    subset0(1, 1)
In this last example, notice that subset0 is tail recursive: if (A[i_A] == B[i_B]) is false, then there will be no recursive call; otherwise, if (A[i_A] == B[i_B]) is true, then there's no need to keep this information around, since the result of true && subset0(...) is exactly the same as subset0(...). So any smart compiler will be able to transform this into a loop, avoiding stack overflows or any performance hit caused by the function calls.
This will certainly work, but we might be able to optimize it a lot in the average case if you can provide more information about your sets, such as the probability distribution of the values in them, or whether you expect the answer to be biased (i.e., more often true, or more often false), etc.
Also, have you already written any code to actually measure its performance? Or are you trying to pre-optimize?
You should start by writing the simplest and most straightforward solution that works, and measure its performance. If it's not already satisfactory, only then you should start trying to optimize it.
I'll present an O(m+n) time-per-test algorithm. But first, two notes regarding the problem statement:
Note 1 - Your edits say that set sizes may be a few thousand, and numbers may range up to a million or two.
In the following, let m, n denote the sizes of sets A and B, and let R denote the size of the largest number allowed in the sets.
Note 2 - The multiplication method you proposed is quite inefficient. Although it uses O(m+n) multiplies, it is not an O(m+n) method because the product lengths are worse than O(m) and O(n), so it would take more than O(m^2 + n^2) time, which is worse than the O(m ln(m) + n ln(n)) time required for sorting-based methods, which in turn is worse than the O(m+n) time of the following method.
For the presentation below, I suppose that sets A, B can completely change between tests, which you say can occur several hundred times per second. If there are partial changes, and you know which p elements change in A from one test to next, and which q change in B, then the method can be revised to run in O(p+q) time per test.
Step 0. (Performed one time only, at outset.) Clear an array F, containing R bits or bytes, as you prefer.
Step 1. (Initial step of per-test code.) For i from 0 to n-1, set F[B[i]], where B[i] denotes the i'th element of set B. This is O(n).
Step 2. For i from 0 to m-1, { test F[A[i]]. If it is clear, report that A is not a subset of B, and go to step 4; else continue }. This is O(m).
Step 3. Report that A is a subset of B.
Step 4. (Clear used bits) For i from 0 to n-1, clear F[B[i]]. This is O(n).
The initial step (clearing array F) is O(R) but steps 1-4 amount to O(m+n) time.
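A minimal Python sketch of steps 0 through 4 (a list of bools stands in for the bit array, sized for values up to two million per Note 1; names are mine):
R = 2_000_001
F = [False] * R        # cleared once, at the outset (step 0)

def is_subset(A, B):
    for b in B:                       # step 1: mark B's members, O(n)
        F[b] = True
    result = all(F[a] for a in A)     # steps 2-3: test A's members, O(m)
    for b in B:                       # step 4: clear only what we set, O(n)
        F[b] = False
    return result

print(is_subset([1, 2, 3], [1, 2, 3, 4]))     # True
print(is_subset([1, 2, 3], [1, 2, 4, 5, 6]))  # False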
Given the limit on the size of the integers, if the set of B sets is small and changes seldom, consider representing the B sets as bitsets (bit arrays indexed by integer set member). This doesn't require sorting, and the test for each element is very fast.
If the A members are sorted and tend to be clustered together, then you can get another speedup by testing all the elements in one word of the bitset at a time.
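And a quick sketch of the bitset idea itself, using Python's arbitrary-precision integers as bit arrays (bit i is set iff i is in the set), which gives the word-at-a-time testing essentially for free:
def to_bitset(s):
    bits = 0
    for x in s:
        bits |= 1 << x   # set bit x
    return bits

A = to_bitset([1, 2, 3])
B = to_bitset([1, 2, 3, 4])
print((A & B) == A)                            # True: A is a subset of B
print((A & to_bitset([1, 2, 4, 5, 6])) == A)   # False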

What is a good non-recursive algorithm for deciding whether a passed in amount can be built additively from a set of numbers?

What is a non-recursive algorithm for deciding whether a passed-in amount can be built additively from a set of numbers?
In my case I'm determining whether a certain currency amount (such as $40) can be met by adding up some combination of a set of bills (such as $5, $10 and $20 bills). That is a simple example, but the algorithm needs to work for any currency set (some currencies use funky bill amounts and some bills may not be available at a given time).
So $50 can be met with a set of ($20 and $30), but cannot be met with a set of ($20 and $40). The non-recursive requirement is due to the target code base being SQL Server 2000, where support for recursion is limited.
In addition this is for supporting a multi currency environment where the set of bills available may change (think a foreign currency exchange teller for example).
You have twice stated that the algorithm cannot be recursive, yet that is the natural solution to this problem. One way or another, you will need to perform a search to solve this problem. If recursion is out, you will need to backtrack manually.
Pick the largest currency value below the target value. If it's a match, you're done. If not, push the current target value on a stack and subtract the picked currency value from the target. Keep doing this until you find a match or there are no more currency values left. Then use the stack to backtrack and pick a different value.
Basically, it's the recursive solution inside a loop with a manually managed stack.
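A minimal Python sketch of such a manually managed stack, treating each bill as usable at most once, as in the question's ($20, $30) example (names are mine):
def can_make(target, bills):
    # each stack entry: (amount still needed, index of the next bill to consider)
    stack = [(target, 0)]
    while stack:
        remaining, i = stack.pop()
        if remaining == 0:
            return True
        if remaining < 0 or i == len(bills):
            continue
        stack.append((remaining, i + 1))             # branch: skip bills[i]
        stack.append((remaining - bills[i], i + 1))  # branch: use bills[i]
    return False

print(can_make(50, [20, 30]))  # True
print(can_make(50, [20, 40]))  # False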
If you treat the count of each denomination as one digit of a base-n number, where n is the maximum number of notes you would need, then you can increment through that number until you've exhausted the problem space or found a solution.
The maximum number of notes you would need is the total you require divided by the lowest denomination note.
It's a brute force response to the problem, but it'll definitely work.
Here's some p-code. I'm probably all over the place with my fence posts, and it's so unoptimized as to be ridiculous, but it should work. I think the idea's right, anyway.
Denominations = [10, 20, 50, 100]
Required = 570

Denominations = sort(Denominations)
iBase = integer(Required / Denominations[1])
BumpList = array[Denominations.count]
BumpList.Clear

repeat
    iTotal = 0
    for iAdd = 1 to BumpList.size
        iTotal = iTotal + BumpList[iAdd] * Denominations[iAdd]
    loop
    if iTotal = Required then exit true
    // this bit should be like a mileometer:
    // we add 1 to each wheel, and trip over to the
    // next wheel when it gets to iBase
    finished = true
    for iPos from BumpList.last to BumpList.first
        if BumpList[iPos] = (iBase - 1) then BumpList[iPos] = 0
        else begin
            finished = false
            BumpList[iPos] = BumpList[iPos] + 1
            exit for
        end
    loop
until (finished)
exit false
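The same odometer idea as a quick Python sketch, letting itertools.product turn the wheels (this assumes an unlimited supply of each note, as the p-code does; the function name is mine):
from itertools import product

def odometer_can_make(required, denominations):
    # one "wheel" per denomination; each wheel counts how many of that note we use
    base = required // min(denominations) + 1
    for counts in product(range(base), repeat=len(denominations)):
        if sum(c * d for c, d in zip(counts, denominations)) == required:
            return True
    return False

print(odometer_can_make(50, [20, 30]))  # True
print(odometer_can_make(50, [20, 40]))  # False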
That's a problem that can be solved by an approach known as dynamic programming. The lecture notes I have are too focused on bioinformatics, unfortunately, so you'll have to google for it yourself.
This sounds like the subset sum problem, which is known to be NP-complete.
Good luck with that.
Edit: If you're allowed arbitrary number of bills/coins of some denomination (as opposed to just one), then it's a different problem, and is easier. See the coin problem. I realized this when reading another answer to a (suspiciously) similar question.
I agree with Tyler - what you are describing is a variant of the subset sum problem, which is known to be NP-complete. In this case you are a bit lucky, since you are working with a limited set of values, so you can use dynamic programming techniques to optimize the problem a bit. Some general ideas for the code:
Since you are dealing with money, there are only so many ways to make change with a given bill, and in most cases some bills are used more often than others. So if you store the results, you can keep a set of the most common solutions and check them before you try to find the actual solution.
Unless the language you are working with doesn't support recursion, there is no reason to completely rule out recursion in the solution. While any recursive problem can be solved using iteration, this is a case where recursion is likely to be easier to write.
Some of the other users such as Kyle and seanyboy point you in the right direction for writing your own function so you should take a look at what they have provided for what you are working on.
You can deal with this problem with the dynamic programming method, as MattW mentioned.
Given a limited set of bill denominations and a maximum amount of money, you can try the following solution. The code snippet is in C#, but I believe you can port it to other languages easily.
// Set of bills
int[] unit = { 40, 20, 70 };
// Max amount of money
int max = 100000;

bool[] bucket = new bool[max];
foreach (int t in unit)
    bucket[t] = true;

for (int i = 0; i < bucket.Length; i++)
    if (bucket[i])
        foreach (int t in unit)
            if (i + t < bucket.Length)
                bucket[i + t] = true;

// Check if the following amounts of money
// can be built additively
Console.WriteLine("15 : " + bucket[15]);
Console.WriteLine("50 : " + bucket[50]);
Console.WriteLine("60 : " + bucket[60]);
Console.WriteLine("110 : " + bucket[110]);
Console.WriteLine("120 : " + bucket[120]);
Console.WriteLine("150 : " + bucket[150]);
Console.WriteLine("151 : " + bucket[151]);
Output:
15 : False
50 : False
60 : True
110 : True
120 : True
150 : True
151 : False
There's a difference between no recursion and limited recursion. Don't confuse the two, or you will have missed the point of your lesson.
For example, you can safely write a factorial function using recursion in C++ or other low-level languages, because your results will overflow even your biggest number containers within just a few recursions. The problem you will face will be that of storing the result, long before the recursion ever blows your stack.
That said, whatever solution you find - and I haven't even bothered understanding your problem deeply, as I see that others have already done that - you will have to study the behaviour of your algorithm to determine the worst-case depth of your stack.
You don't need to avoid recursion altogether if the worst-case scenario is supported by your platform.
Edit: The following will work some of the time. Think about why it won't work all the time, and how you might change it to cover other cases. (A sketch follows the steps below.)
Build it starting with the largest bill and working towards the smallest; this will yield the lowest number of bills.
Take the initial amount and apply the largest bill as many times as you can without going over the price.
Step to the next largest bill and apply it the same way.
Keep doing this until you are on your smallest bill.
Then check if the sum equals the target amount.
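A minimal Python sketch of this largest-first greedy, assuming an unlimited supply of each bill (the function name is mine). The second call shows one of the cases the greedy misses:
def greedy_can_make(target, bills):
    remaining = target
    for b in sorted(bills, reverse=True):
        remaining -= (remaining // b) * b  # apply bill b as many times as fits
    return remaining == 0

print(greedy_can_make(50, [10, 20]))  # True: 20 + 20 + 10
print(greedy_can_make(60, [40, 30]))  # False, even though 30 + 30 = 60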
Algorithm:
1. Sort the available currency denominations in descending order.
2. Starting with the largest denomination, compute Remainder = Remainder % denomination[i], carrying each remainder down to the next smaller denomination (Remainder starts as the input amount).
3. If the final remainder is 0, the input can be broken down; otherwise it cannot be.
Example:
Input: 50, Available: 10, 20
[50 % 20] = 10, [10 % 10] = 0, Ans: Yes
Input: 50, Available: 15, 20
[50 % 20] = 10, [10 % 15] = 10, Ans: No
