Algorithm to check if a number is a perfect number

I am looking for an algorithm to find whether a given number is a perfect number.
The simplest approach that comes to my mind is:
Find all the factors of the number.
Sum the divisors [except the number itself] and check whether the sum equals the number.
Is there a better way to do this?
On searching, some of Euclid's work came up, but I didn't find any good algorithm. This GolfScript question wasn't helpful either: https://stackoverflow.com/questions/3472534/checking-whether-a-number-is-mathematically-a-perfect-number .
The numbers can be cached in real-world usage [though I don't know where perfect numbers are actually used :)]
However, since this is being asked in interviews, I am assuming there should be a "derivable" way of optimizing it.
Thanks!

If the input is even, see if it is of the form 2^(p-1)*(2^p-1), with p and 2^p-1 prime.
If the input is odd, return "false". :-)
See the Wikipedia page for details.
(Actually, since there are only 47 perfect numbers with fewer than 25 million digits, you might start with a simple table of those. Ask the interviewer if you can assume you are using 64-bit numbers, for instance...)
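For instance, here is a minimal sketch of that test in Python (the function names are mine, and the trial-division primality check is only adequate for 64-bit-sized inputs):

    def is_prime(n):
        # Trial division; fine here because p and 2**p - 1 stay small for 64-bit inputs.
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def is_even_perfect(n):
        # Euclid-Euler: every even perfect number is 2**(p-1) * (2**p - 1)
        # with p and 2**p - 1 (a Mersenne number) both prime.
        if n <= 0 or n % 2:
            return False
        p = 1
        while n % 2 == 0:
            n //= 2
            p += 1
        mersenne = 2 ** p - 1
        return n == mersenne and is_prime(p) and is_prime(mersenne)

is_even_perfect(6), is_even_perfect(28), and is_even_perfect(496) come out True; everything odd is rejected outright, matching the joke above.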

Edit: Dang, I failed the interview! :-(
In my overzealous attempt at finding tricks or heuristics to improve upon the "factorize + enumerate divisors + sum them" approach, I failed to note that being 1 modulo 9 is merely a necessary, and certainly not a sufficient, condition for a number (other than 6) to be perfect...
Duh... with on average 1 in 9 even numbers satisfying this condition, my algorithm would surely find a few too many perfect numbers ;-).
To redeem myself, I persist with the suggestion of using the digital root, but only as a filter, to avoid the more expensive computation of the factors in most cases.
[Original attempt: hall of shame]
If the number is even,
compute its digital root.
if the digital root is 1, the number is perfect, otherwise it isn't.
If the number is odd...
there are no shortcuts, other than...
"Not perfect" if the number is smaller than 10^300
For bigger values, one would then need to run the algorithm described in
the question, possibly with a few twists typically driven by heuristics
that prove that the sum of divisors will be lacking when the number
doesn't have some of the low prime factors.
My reason for suggesting the digital root trick for even numbers is that it can be computed without the help of an arbitrary-precision arithmetic library (like GMP). It is also much less computationally expensive than the decomposition into prime factors and/or the factorization (2^(p-1) * ((2^p)-1)). Therefore, if the interviewer were to be satisfied with a "Not perfect" response for odd numbers, the solution would be both very efficient and codable in most computer languages.
[Second and third attempt...]
If the number is even,
    if it is 6
        The number is PERFECT
    otherwise compute its digital root.
        if the digital root is _not_ 1
            The number is NOT PERFECT
        else
            Compute the prime factors
            Enumerate the divisors, sum them
            if the sum of these divisors equals 2 * the number
                it is PERFECT
            else
                it is NOT PERFECT
If the number is odd...
    same as previously
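A Python sketch of that filter-then-verify flow (names are mine; the divisor sum is left as a plain O(n) loop for clarity, and the digital root of n > 0 is just 1 + (n - 1) % 9):

    def digital_root(n):
        # Repeated digit sum; for n > 0 this equals 1 + (n - 1) % 9.
        return 1 + (n - 1) % 9

    def is_perfect(n):
        if n <= 0 or n % 2:        # odd: "same as previously" -- no cheap test known
            return False
        if n == 6:
            return True
        if digital_root(n) != 1:   # cheap filter: every even perfect number > 6 has digital root 1
            return False
        # Full check: sum ALL divisors (including n) and compare with 2 * n.
        return sum(d for d in range(1, n + 1) if n % d == 0) == 2 * n

Note how 10 (digital root 1, divisor sum 18) survives the filter but fails the full check, which is exactly why the filter-only version above belongs in the hall of shame.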
On this relatively odd interview question...
I second andrewdski's comment to another response in this post, that this particular question is rather odd in the context of an interview for a general purpose developer. As with many interview questions, it can be that the interviewer isn't seeking a particular solution, but rather is providing an opportunity for the candidate to demonstrate his/her ability to articulate the general pros and cons of various approaches. Also, if the candidate is offered an opportunity to look-up generic resources such as MathWorld or Wikipedia prior to responding, this may also be a good test of his/her ability to quickly make sense of the info offered there.

Here's a quick algorithm just for fun, in PHP - using just a simple for loop. You can easily port it to other languages:
function isPerfectNumber($num) {
    $out = false;
    if ($num % 2 == 0) {
        $divisors = array(1);
        for ($i = 2; $i < $num; $i++) {
            if ($num % $i == 0)
                $divisors[] = $i;
        }
        if (array_sum($divisors) == $num)
            $out = true;
    }
    return $out ? 'It\'s perfect!' : 'Not a perfect number.';
}
Hope this helps, not sure if this is what you're looking for.

#include <stdio.h>

int sumOfFactors(int x);

int main() {
    int x, start, end;
    printf("Enter start of the range:\n");
    scanf("%d", &start);
    printf("Enter end of the range:\n");
    scanf("%d", &end);
    for (x = start; x <= end; x++) {
        if (x > 1 && x == sumOfFactors(x)) {  /* guard: 1 is not perfect */
            printf("The number %d is a perfect number\n", x);
        }
    }
    return 0;
}

/* Sum of proper divisors: 1 plus every j in [2, x/2] that divides x. */
int sumOfFactors(int x) {
    int sum = 1, j;
    for (j = 2; j <= x / 2; j++) {
        if (x % j == 0)
            sum += j;
    }
    return sum;
}
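The sumOfFactors loop above is O(x) because it walks every candidate up to x/2. Since divisors come in pairs (j, x/j), scanning only up to √x gives the same sum much faster; a sketch of that improvement in Python (the function name is mine):

    def sum_of_proper_divisors(x):
        # Divisors pair up as (j, x // j), so j only needs to run up to sqrt(x).
        if x <= 1:
            return 0
        total = 1                   # 1 divides everything; x itself is excluded
        j = 2
        while j * j <= x:
            if x % j == 0:
                total += j
                if j != x // j:     # don't count a square-root divisor twice
                    total += x // j
            j += 1
        return total

    # x is perfect iff sum_of_proper_divisors(x) == x, e.g. 28 -> 1+2+4+7+14 == 28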

Related

Why should we use Dynamic Programming with Memoization in order to solve - Minimum Number of Coins to Make Change

The Problem Statement:
Given an infinite supply of coins of values {C1, C2, ..., Cn} and a sum X, find the minimum number of coins that can represent X.
Most of the solutions on the web use dynamic programming with memoization. Here is an example from YouTube: https://www.youtube.com/watch?v=Kf_M7RdHr1M
My question is: why don't we sort the array of coins in descending order first and explore recursively, reducing the sum until we reach 0? When we reach 0, we know that we have found the coins needed to make up the sum. Because we sorted the array in descending order, we know that we will always choose the greatest coin. Therefore, the first time the sum reaches 0, the count has to be minimal.
I'd greatly appreciate it if you could help me understand the complexity of my algorithm and compare it to the dynamic programming with memoization approach.
For simplicity, we are assuming there will always be a "$1" coin and thus there is always a way to make up the sum.
import java.util.*;

public class Solution {
    public static void main(String[] args) {
        MinCount cnt = new MinCount(new Integer[]{1, 2, 7, 9});
        System.out.println(cnt.count(12));
    }
}

class MinCount {
    Integer[] coins;

    public MinCount(Integer[] coins) {
        Arrays.sort(coins, Collections.reverseOrder());
        this.coins = coins;
    }

    public int count(int sum) {
        if (sum < 0) return Integer.MAX_VALUE;
        if (sum == 0) return 0;
        int min = Integer.MAX_VALUE;
        for (int i = 0; i < coins.length; i++) {
            int val = count(sum - coins[i]);
            if (val < min) min = val;
            if (val != Integer.MAX_VALUE) break;
        }
        return min + 1;
    }
}
Suppose that you have coins worth $1, $50, and $52, and that your total is $100. Your proposed algorithm would produce a solution that uses 49 coins ($52 + $1 + $1 + … + $1 + $1); but the correct minimum result requires only 2 coins ($50 + $50).
(Incidentally, I think it's cheating to write
For simplicity, we are assuming there will always be a "$1" coin and thus there is always a way to make up the sum.
when this is not in the problem statement, and therefore not assumed in other sources. That's a bit like asking "Why do sorting algorithms always put a lot of effort into rearranging the elements, instead of just assuming that the elements are in the right order to begin with?" But as it happens, even assuming the existence of a $1 coin doesn't let you guarantee that the naïve/greedy algorithm will find the optimal solution.)
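To make the contrast concrete, here is a minimal memoized sketch in Python (names are mine) that returns 2 for the $1/$50/$52 example above, where the greedy approach uses 49 coins:

    from functools import lru_cache

    def min_coins(coins, total):
        # Top-down DP: best(s) = 1 + min over coins c <= s of best(s - c).
        @lru_cache(maxsize=None)
        def best(s):
            if s == 0:
                return 0
            candidates = [best(s - c) for c in coins if c <= s]
            return min(candidates) + 1 if candidates else float('inf')
        result = best(total)
        return -1 if result == float('inf') else result

    print(min_coins((1, 50, 52), 100))  # 2 (50 + 50), not the greedy 49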
I will complement the answer that has already been provided to your question with some algorithm design advice.
The solution that you propose is what is called a "greedy algorithm": a problem solving strategy that makes the locally optimal choice at each stage with the hope of finding a global optimum.
In many problems, a greedy strategy does not produce an optimal solution. The best way to disprove the correctness of an algorithm is to find a counter-example, such as the case of the "$52", "$50", and "$1" coins. To find counter-examples, Steven Skiena gives the following advice in his book "The Algorithm Design Manual":
Think small: when an algorithm fails, there is usually a very simple example on which it fails.
Hunt for the weakness: if the proposed algorithm is of the form "always take the biggest" (that is, a greedy algorithm), think about why that might prove to be the wrong thing to do. In particular, ...
Go for a tie: A devious way to break a greedy algorithm is to provide instances where everything is the same size. This way the algorithm may have nothing to base its decision on.
Seek extremes: many counter-examples are mixtures of huge and tiny, left and right, few and many, near and far. It is usually easier to verify or reason about extreme examples than more muddled ones.
# Recursive solution in Python
import sys

class Solution:
    def __init__(self):
        self.maxint = sys.maxsize - 1   # sentinel for "no solution"

    def minCoins(self, coins, m, v):
        res = self.solve(coins, m, v)
        if res == self.maxint:
            return -1
        return res

    def solve(self, coins, m, v):
        # m: number of denominations still usable, v: remaining value
        if m == 0 and v > 0:
            return self.maxint
        if v == 0:
            return 0
        if coins[m - 1] <= v:
            # either use coin m-1 once more, or drop it entirely
            return min(self.solve(coins, m, v - coins[m - 1]) + 1,
                       self.solve(coins, m - 1, v))
        return self.solve(coins, m - 1, v)

Fast algorithm for finding prime numbers? [duplicate]

This question already has answers here:
Which is the fastest algorithm to find prime numbers?
(20 answers)
Closed 9 years ago.
First of all, I've searched this forum a lot and haven't found anything fast enough.
I'm trying to write a function that returns the prime numbers in a specified range.
For example, I wrote this function (in C#) using the sieve of Eratosthenes. I also tried Atkin's sieve, but the Eratosthenes one runs faster (in my implementation):
public static void SetPrimesSieve(int Range)
{
    Primes = new List<uint>();
    Primes.Add(2);
    int Half = (Range - 1) >> 1;
    BitArray Nums = new BitArray(Half, false);
    int Sqrt = (int)Math.Sqrt(Range);
    for (int i = 3, j; i <= Sqrt; )
    {
        for (j = ((i * i) >> 1) - 1; j < Half; j += i)
            Nums[j] = true;
        do
            i += 2;
        while (i <= Sqrt && Nums[(i >> 1) - 1]);
    }
    for (int i = 0; i < Half; ++i)
        if (!Nums[i])
            Primes.Add((uint)(i << 1) + 3);
}
It runs about twice as fast as the code and algorithms I found...
There should be a faster way to find prime numbers; could you help me?
When searching around for algorithms on this topic (for Project Euler), I don't remember finding anything faster. If speed is really the concern, have you thought about just storing the primes so you simply need to look them up?
EDIT: a quick Google search found this, confirming that the fastest method would be just to page the results and look them up as needed.
One more edit - you may find more information here, essentially a duplicate of this topic. The top post there states that Atkin's sieve was faster than Eratosthenes' as far as generating on the fly.
The fastest algorithm in my experience so far is the Sieve of Eratosthenes with wheel factorization for 2, 3 and 5, where the primes among the remaining numbers are represented as bits in a byte array. In Java, on one core of my 3-year-old laptop, it takes 23 seconds to compute the primes up to 1 billion.
With wheel factorization the Sieve of Atkin was about a factor of two slower, while with an ordinary BitSet it was about 30% faster.
See also this answer.
I wrote an algorithm in C that can find the prime numbers in the range 2-90,000,000 in 0.65 s on an i3 350M notebook. You have to use bitwise operations and keep "code" for recalculating the index of your array into the index of the concrete bit you want. For example, if you strike out the multiples of 2, the concrete bits will look like ...10101000..., so reading from the left you get indices 4, 6, 8... that's it.
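A Python sketch of that indexing trick (the helper is my own): entry i represents the odd number 2*i + 3, so an odd n lives at index (n - 3) // 2. For brevity it spends a whole byte per odd number; true bit packing divides the index by 8 once more, which is what the answer above does in C.

    def primes_upto(limit):
        # Odds-only sieve: entry i represents the odd number 2*i + 3.
        if limit < 2:
            return []
        size = (limit - 1) // 2             # how many odd numbers 3, 5, ... <= limit
        sieve = bytearray(size)             # 0 = possibly prime, 1 = composite
        i = 0
        while (2 * i + 3) ** 2 <= limit:
            if not sieve[i]:
                p = 2 * i + 3
                start = (p * p - 3) // 2    # index of p*p; strides of p skip even multiples
                sieve[start::p] = b'\x01' * len(range(start, size, p))
            i += 1
        return [2] + [2 * i + 3 for i in range(size) if not sieve[i]]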
Several comments.
For speed, precompute and then load from disk. It's super fast. I did it in Java long ago.
Don't store the primes as an array; store them as a bit sequence for odd numbers. Far more memory-efficient.
If you want this particular computation to run fast (and you can justify why you can't precompute and load from disk), you need to code a better Atkin's sieve. It is faster, but only slightly.
You haven't indicated the end use for these primes. We may be missing something completely because you've not told us the application. Give us a sketch of the application and the answers will be better targeted to your context.
Why on earth do you think something faster exists? You haven't justified your hunch. This is a very hard problem (that is, finding something faster).
You can do better than that using the Sieve of Atkin, but it is quite tricky to implement it fast and correctly. A simple translation of the Wikipedia pseudo-code is probably not good enough.

random permutation

I would like to generate a random permutation as fast as possible.
The problem: the Knuth shuffle, which is O(n), involves generating n random numbers, and generating random numbers is quite expensive.
I would like to find an O(n) algorithm involving a fixed O(1) number of random numbers.
I realize that this question has been asked before, but I did not see any relevant answers.
Just to stress a point: I am not looking for anything less than O(n), just an algorithm involving less generation of random numbers.
Thanks
Create a 1-1 mapping of each permutation to a number from 1 to n! (n factorial). Generate a random number in 1 to n!, use the mapping, get the permutation.
For the mapping, perhaps this will be useful: http://en.wikipedia.org/wiki/Permutation#Numbering_permutations
Of course, this would get out of hand quickly, as n! becomes very large very soon.
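A sketch of the decode step in Python, using the factorial number system (the Lehmer code); one integer k in [0, n!) selects one permutation, for n >= 1:

    def nth_permutation(elements, k):
        # Decode k in [0, n!) digit by digit: the leading factoradic digit,
        # k // (n-1)!, picks the first element, and so on down to 0!.
        items = list(elements)
        n = len(items)
        fact = 1
        for i in range(2, n):
            fact *= i                  # fact = (n-1)!
        result = []
        for i in range(n - 1, 0, -1):
            idx, k = divmod(k, fact)
            result.append(items.pop(idx))
            fact //= i                 # i! -> (i-1)!
        result.append(items.pop())
        return result

    # nth_permutation(range(4), 13) -> [2, 0, 3, 1]; the 24 values of k give all 24 orders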
Generating a random number takes a long time, you say? The implementation of Java's Random.nextInt is roughly
oldseed = seed;
nextseed = (oldseed * multiplier + addend) & mask;
return (int)(nextseed >>> (48 - bits));
Is that too much work to do for each element?
See https://doi.org/10.1145/3009909 for a careful analysis of the number of random bits required to generate a random permutation. (It's open-access, but it's not easy reading! Bottom line: if carefully implemented, all of the usual methods for generating random permutations are efficient in their use of random bits.)
And... if your goal is to generate a random permutation rapidly for large N, I'd suggest you try the MergeShuffle algorithm. An article published in 2015 claimed a factor-of-two speedup over Fisher-Yates in both parallel and sequential implementations, and a significant speedup in sequential computations over the other standard algorithm they tested (Rao-Sandelius).
An implementation of MergeShuffle (and of the usual Fisher-Yates and Rao-Sandelius algorithms) is available at https://github.com/axel-bacher/mergeshuffle. But caveat emptor! The authors are theoreticians, not software engineers. They have published their experimental code to github but aren't maintaining it. Someday, I imagine someone (perhaps you!) will add MergeShuffle to GSL. At present gsl_ran_shuffle() is an implementation of Fisher-Yates, see https://www.gnu.org/software/gsl/doc/html/randist.html?highlight=gsl_ran_shuffle.
Not exactly what you asked, but if the provided random number generator doesn't satisfy you, maybe you should try something different. Generally, pseudorandom number generation can be very simple.
Probably the best-known algorithm:
http://en.wikipedia.org/wiki/Linear_congruential_generator
More:
http://en.wikipedia.org/wiki/List_of_pseudorandom_number_generators
As other answers suggest, you can make a random integer in the range 0 to N! and use it to produce a shuffle. Although theoretically correct, this won't be faster in general since N! grows fast and you'll spend all your time doing bigint arithmetic.
If you want speed and you don't mind trading off some randomness, you will be much better off using a less good random number generator. A linear congruential generator (see http://en.wikipedia.org/wiki/Linear_congruential_generator) will give you a random number in a few cycles.
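For illustration, a minimal LCG sketch in Python, with the multiplier/increment pair from Numerical Recipes (any full-period constants would do; note that the final modulo reduction is slightly biased):

    class LCG:
        # x_{k+1} = (a * x_k + c) mod 2**32, constants from Numerical Recipes.
        def __init__(self, seed):
            self.state = seed & 0xFFFFFFFF

        def next_int(self, n):
            # One multiply-add per draw; reduce into [0, n) -- cheap but slightly biased.
            self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
            return self.state % n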
Usually you don't need the full range of the next random value, so to spend exactly the same amount of randomness you can use the following approach (which is almost like drawing random(0, N!), I guess):
// ...
m = 1; // range of random buffer (single variant)
r = 0; // random buffer (number zero)
// ...
for (/* ... */) {
    while (m < n) {                  // range of our buffer is too narrow for "n"
        r = r * RAND_MAX + random(); // add another random number to our buffer
        m *= RAND_MAX;               // update the range of the buffer
    }
    x = r % n;                       // pull the next random value with range "n"
    r /= n;                          // remove it from the buffer
    m /= n;                          // fix the range of the buffer
    // ...
}
P.S. Of course there will be some bias, since we divide by values that are not powers of 2, but it will be distributed among the resulting samples.
Generate N numbers (N < the number of random numbers you need) before the computation, or store them in an array as data, using your slow but good random generator; then pick numbers by simply incrementing an index into the array inside your computing loop; if you need different seeds, create multiple tables.
Are you sure that your mathematical and algorithmic approach to the problem is correct?
I hit exactly the same problem, where the Fisher–Yates shuffle was the bottleneck in corner cases. But for me the real problem was a brute-force algorithm that doesn't scale well to all problems. The following story explains the problem and the optimizations that I have come up with so far.
Dealing cards for 4 players
The number of possible deals is a 96-bit number. That puts quite a stress on the random number generator to avoid statistical anomalies when selecting a play plan from the generated sample set of deals. I chose to use 2x mt19937_64 seeded from /dev/random because of the long period and because it is heavily advertised on the web as good for scientific simulations.
The simple approach is to use the Fisher–Yates shuffle to generate deals and filter out those that don't match the already collected information. The Knuth shuffle takes ~1400 CPU cycles per deal, mostly because I have to generate 51 random numbers and swap entries in the table 51 times.
That doesn't matter for the normal cases, where I would only need to generate 10000-100000 deals in 7 minutes. But there are extreme cases where the filters may select only a very small subset of hands, requiring a huge number of deals to be generated.
Using a single number for multiple cards
When profiling with callgrind (valgrind), I noticed that the main slowdown was the C++ random number generator (after switching away from std::uniform_int_distribution, which was the first bottleneck).
Then I came up with the idea of using a single random number for multiple cards. The idea is to use the least significant information from the number first and then erase that information.
int number = uniform_rng(0, 52*51*50*49);
int card1 = number % 52;
number /= 52;
int card2 = number % 51;
number /= 51;
......
Of course that is only a minor optimization, because the generation is still O(N).
Generation using bit permutations
The next idea was exactly the solution asked for here, but I ended up still with O(N), and with a larger cost than the original shuffle. But let's look at the solution and why it fails so miserably.
I decided to use the idea from Dealing All the Deals by John Christman.
void Deal::generate()
{
    // 52:26 split, 52!/(26!)**2 = 495,918,532,948,104
    max = 495918532948104LU;
    partner = uniform_rng(eng1, max);
    // 2x 26:13 splits, (26!/((13!)**2))**2 = 10,400,600**2
    max = 10400600LU * 10400600LU;
    hands = uniform_rng(eng2, max);
    // Create 104-bit representation of the deal (2 bits per card)
    select_deal(id, partner, hands);
}
So far so good and pretty good looking, but the select_deal implementation is a PITA.
void select_deal(Id &new_id, uint64_t partner, uint64_t hands)
{
    unsigned idx;
    unsigned e, n, ns = 26;
    e = n = 13;
    // Figure out which partnership owns which card
    for (idx = CARDS_IN_SUIT * NUM_SUITS; idx > 0; ) {
        uint64_t cut = ncr(idx - 1, ns);
        if (partner >= cut) {
            partner -= cut;
            // Figure out if N or S holds the card
            ns--;
            cut = ncr(ns, n) * 10400600LU;
            if (hands > cut) {
                hands -= cut;
                n--;
            } else
                new_id[idx % NUM_SUITS] |= 1 << (idx / NUM_SUITS);
        } else
            new_id[idx % NUM_SUITS + NUM_SUITS] |= 1 << (idx / NUM_SUITS);
        idx--;
    }
    unsigned ew = 26;
    // Figure out if E or W holds a card
    for (idx = CARDS_IN_SUIT * NUM_SUITS; idx-- > 0; ) {
        if (new_id[idx % NUM_SUITS + NUM_SUITS] & (1 << (idx / NUM_SUITS))) {
            uint64_t cut = ncr(--ew, e);
            if (hands >= cut) {
                hands -= cut;
                e--;
            } else
                new_id[idx % NUM_SUITS] |= 1 << (idx / NUM_SUITS);
        }
    }
}
Now that I had the O(N) permutation solution done to prove the algorithm could work, I started searching for an O(1) mapping from a random number to a bit permutation. Too bad it looks like the only solution would be to use huge lookup tables that would kill the CPU caches. That doesn't sound like a good idea for an AI that will be using a very large amount of cache for the double-dummy analyzer.
Mathematical solution
After all the hard work figuring out how to generate random bit permutations, I decided to go back to the maths. It is entirely possible to apply the filters before dealing the cards. That requires splitting the deals into a manageable number of layered sets and selecting between the sets based on their relative probabilities after filtering out the impossible sets.
I don't yet have the code ready to test how many cycles I'm wasting in the common case where the filter selects the major part of the deals. But I believe this approach gives the most stable generation performance, keeping the cost under 0.1%.
Generate a 32 bit integer. For each index i (maybe only up to half the number of elements in the array), if bit i % 32 is 1, swap i with n - i - 1.
Of course, this might not be random enough for your purposes. You could probably improve this by not swapping with n - i - 1, but rather by another function applied to n and i that gives better distribution. You could even use two functions: one for when the bit is 0 and another for when it's 1.
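A sketch of that idea in Python; note that one 32-bit word admits at most 2^32 outcomes, far fewer than n! for any sizeable n, so this is cheap but very far from uniform:

    import random

    def cheap_shuffle(a):
        # A single 32-bit draw drives every decision: bit (i % 32) says
        # whether position i is swapped with its mirror n - i - 1.
        bits = random.getrandbits(32)
        n = len(a)
        for i in range(n // 2):
            if (bits >> (i % 32)) & 1:
                a[i], a[n - i - 1] = a[n - i - 1], a[i]
        return a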

Sudoku Solver by Backtracking not working

Assuming a two dimensional array holding a 9x9 sudoku grid, where is my solve function breaking down? I'm trying to solve this using a simple backtracking approach. Thanks!
bool solve(int grid[9][9])
{
    int i, j, k;
    bool isSolved = false;
    if (!isSolved(grid))
        isSolved = false;
    if (isSolved)
        return isSolved;
    for (i = 0; i < 9; i++)
    {
        for (j = 0; j < 9; j++)
        {
            if (grid[i][j] == 0)
            {
                for (k = 1; k <= 9; k++)
                {
                    if (legalMove(grid, i, j, k))
                    {
                        grid[i][j] = k;
                        isSolved = solve(grid);
                        if (isSolved)
                            return true;
                    }
                    grid[i][j] = 0;
                }
                isSolved = false;
            }
        }
    }
    return isSolved;
}
Even after changing the isSolved issues, my solution seems to break down into an infinite loop. It appears I am missing some base-case step, but I'm not sure where or why. I have looked at similar solutions and still can't identify the issue. I'm just trying to create a basic solver; no need to go for efficiency. Thanks for the help!
Yeah, your base case is messed up. In recursive functions, base cases should be handled at the start. You have
bool isSolved = false;
if(!isSolved(grid))
isSolved = false;
if(isSolved)
return isSolved;
notice your isSolved variable can never be set to true, hence your code
if(isSolved)
return isSolved;
is irrelevant.
Even if you fix this, it's going to feel like an infinite loop even though it is finite. This is because your algorithm has a possible total of 9*9*9 = 729 cases to check every time it calls solve. Entering this function n times may require up to 729^n cases to be checked. It won't check that many cases in practice, because it will find dead ends when a placement is illegal, but who's to say that 90% of the arrangements of the possible numbers don't result in cases where all but one number fit legally? Moreover, even if you were to check only k cases on average, where k is a small number (k <= 10), this would still blow up (a run time of k^n).
The trick is to "try" placing numbers where they have a high probability of being the actual good placement. Probably the simplest way I can think of to do this is a constraint satisfaction solver, or a search algorithm with a heuristic (like A*).
I actually wrote a Sudoku solver based on a constraint satisfaction solver, and it would solve 100x100 Sudokus in less than a second.
If by some miracle the "brute force" backtracking algorithm works well for you in the 9x9 case, try higher values; you will quickly see a deterioration in run time.
I'm not bashing the backtracking algorithm; in fact I love it. It's been shown time and time again that backtracking, if implemented correctly, can be just as efficient as dynamic programming. However, in your case you aren't implementing it correctly. You are brute-forcing it; you might as well just make your code non-recursive, since it will accomplish the same thing.
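For reference, here is a minimal corrected backtracking sketch in Python (legal_move is assumed to be the asker's row/column/box check); the fix is that a cell with no workable digit makes the function return false immediately, which is the missing base case:

    def solve(grid):
        # Find the first empty cell; if none is left, the grid is solved.
        for i in range(9):
            for j in range(9):
                if grid[i][j] == 0:
                    for k in range(1, 10):
                        if legal_move(grid, i, j, k):  # assumed row/column/box check
                            grid[i][j] = k
                            if solve(grid):
                                return True
                            grid[i][j] = 0             # undo and try the next digit
                    return False                       # no digit fits here: backtrack
        return True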
You refer to isSolved as both a function and a boolean variable.
I don't think this is legal, and it's definitely not smart.
Your functions should have distinct names from your variables.
It seems that regardless of whether or not the move is legal, you are assigning 0 to the square with that "grid[i][j] = 0;" line. Maybe you meant to put an "else" and THEN "grid[i][j] = 0;"?

What is a good non-recursive algorithm for deciding whether a passed in amount can be built additively from a set of numbers?

What is a non-recursive algorithm for deciding whether a passed-in amount can be built additively from a set of numbers?
In my case I'm determining whether a certain currency amount (such as $40) can be met by adding up some combination of a set of bills (such as $5, $10 and $20 bills). That is a simple example, but the algorithm needs to work for any currency set (some currencies use funky bill amounts and some bills may not be available at a given time).
So $50 can be met with a set of ($20 and $30), but cannot be met with a set of ($20 and $40). The non-recursive requirement is due to the target code base being for SQL Server 2000 where the support of recursion is limited.
In addition this is for supporting a multi currency environment where the set of bills available may change (think a foreign currency exchange teller for example).
You have twice stated that the algorithm cannot be recursive, yet that is the natural solution to this problem. One way or another, you will need to perform a search to solve this problem. If recursion is out, you will need to backtrack manually.
Pick the largest currency value below the target value. If it's a match, you're done. If not, push the current target value onto a stack and subtract the picked currency value from the target value. Keep doing this until you find a match or there are no more currency values left. Then use the stack to backtrack and pick a different value.
Basically, it's the recursive solution inside a loop with a manually managed stack.
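A sketch of that loop-plus-explicit-stack search in Python (names are mine); each stack frame records the remaining amount and the denomination index to resume from, which is exactly the state the recursive version would keep on the call stack:

    def can_make(amount, bills):
        # Iterative depth-first search with a manually managed stack.
        bills = sorted(bills, reverse=True)
        stack = [(amount, 0)]          # (remaining amount, index of next denomination)
        while stack:
            remaining, index = stack.pop()
            if remaining == 0:
                return True
            if remaining < 0 or index >= len(bills):
                continue               # dead end: backtrack by popping the next frame
            stack.append((remaining, index + 1))            # branch 1: skip this bill
            stack.append((remaining - bills[index], index)) # branch 2: use one more of it
        return False

    print(can_make(50, [20, 30]))  # True  (20 + 30)
    print(can_make(50, [20, 40]))  # False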
If you treat each denomination as one digit of a base-n number, where n is the maximum number of notes you could need, then you can increment through that number until you've exhausted the problem space or found a solution.
The maximum number of notes you would need is the Total you require divided by the lowest denomination note.
It's a brute force response to the problem, but it'll definitely work.
Here's some p-code. I'm probably all over the place with my fence posts, and it's so unoptimized as to be ridiculous, but it should work. I think the idea's right, anyway.
Denominations = [10,20,50,100]
Required = 570

Denominations = sort(Denominations)
iBase = integer (Required / Denominations[1])
BumpList = array [Denominations.count]
BumpList.Clear

repeat
    iTotal = 0
    for iAdd = 1 to Bumplist.size
        iTotal = iTotal + bumplist[iAdd] * Denominations[iAdd]
    loop
    if iTotal = Required then exit true

    // this bit should be like a mileometer.
    // We add 1 to each wheel, and trip over to the next wheel when it gets to iBase
    finished = true
    for iPos from bumplist.last to bumplist.first
        if bumplist[iPos] = (iBase-1) then bumplist[iPos] = 0
        else begin
            finished = false
            bumplist[iPos] = bumplist[iPos] + 1
            exit for
        end
    loop
until (finished)
exit false
That's a problem that can be solved by an approach known as dynamic programming. The lecture notes I have are too focused on bioinformatics, unfortunately, so you'll have to google for it yourself.
This sounds like the subset sum problem, which is known to be NP-complete.
Good luck with that.
Edit: If you're allowed arbitrary number of bills/coins of some denomination (as opposed to just one), then it's a different problem, and is easier. See the coin problem. I realized this when reading another answer to a (suspiciously) similar question.
I agree with Tyler - what you are describing is a variant of the Subset Sum problem, which is known to be NP-complete. In this case you are a bit lucky, as you are working with a limited set of values, so you can use dynamic programming techniques here to optimize the problem a bit. In terms of some general ideas for the code:
Since you are dealing with money, there are only so many ways to make change with a given bill and in most cases some bills are used more often than others. So if you store the results you can keep a set of the most common solutions and then just check them before you try and find the actual solution.
Unless the language you are working with doesn't support recursion there is no reason to completely ignore the use of recursion in the solution. While any recursive problem can be solved using iteration, this is a case where recursion is likely going to be easier to write.
Some of the other users such as Kyle and seanyboy point you in the right direction for writing your own function so you should take a look at what they have provided for what you are working on.
You can deal with this problem with the dynamic programming method, as MattW. mentioned.
Given a limited number of bills and a maximum amount of money, you can try the following solution. The code snippet is in C#, but I believe you can port it to other languages easily.
// Set of bills
int[] unit = { 40, 20, 70 };
// Max amount of money
int max = 100000;

bool[] bucket = new bool[max];
foreach (int t in unit)
    bucket[t] = true;

for (int i = 0; i < bucket.Length; i++)
    if (bucket[i])
        foreach (int t in unit)
            if (i + t < bucket.Length)
                bucket[i + t] = true;

// Check if the following amounts of money
// can be built additively
Console.WriteLine("15 : " + bucket[15]);
Console.WriteLine("50 : " + bucket[50]);
Console.WriteLine("60 : " + bucket[60]);
Console.WriteLine("110 : " + bucket[110]);
Console.WriteLine("120 : " + bucket[120]);
Console.WriteLine("150 : " + bucket[150]);
Console.WriteLine("151 : " + bucket[151]);
Output:
15 : False
50 : False
60 : True
110 : True
120 : True
150 : True
151 : False
There's a difference between no recursion and limited recursion. Don't confuse the two as you will have missed the point of your lesson.
For example, you can safely write a factorial function using recursion in C++ or other low-level languages, because your results will overflow even your biggest number containers within but a few recursions. So the problem you will face will be that of storing the result long before recursion ever gets close to blowing your stack.
This said, whatever solution you find - and I haven't even bothered understanding your problem deeply, as I see that others have already done that - you will have to study the behaviour of your algorithm, and from that you can determine the worst-case depth of your stack.
You don't need to avoid recursion altogether if the worst case scenario is supported by your platform.
Edit: The following will work some of the time. Think about why it won't work all the time and how you might change it to cover other cases.
Build the amount starting with the largest bill and working towards the smallest. This will yield the lowest number of bills.
Take the initial amount and apply the largest bill as many times as you can without going over the price.
Step to the next largest bill and apply it the same way.
Keep doing this until you are on your smallest bill.
Then check if the sum equals the target amount; a sketch of this pass follows.
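That greedy pass, sketched in Python; as noted above, it only works some of the time:

    def greedy_match(amount, bills):
        # Largest-first greedy: apply each denomination as many times as it fits.
        remaining = amount
        for bill in sorted(bills, reverse=True):
            remaining %= bill   # same as subtracting bill as many times as possible
        return remaining == 0

    print(greedy_match(70, [40, 30]))  # True: 40 + 30
    print(greedy_match(60, [40, 30]))  # False, yet 30 + 30 = 60 -- greedy misses it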
Algorithm:
1. Sort the currency denominations available in descending order.
2. Repeatedly take the remainder: Remainder = Input % denomination[i], for i from n-1 down to 0.
3. If the final remainder is 0, the input can be broken down; otherwise it cannot be.
Example:
Input: 50, Available: 10, 20
[50 % 20] = 10, [10 % 10] = 0, Ans: Yes
Input: 50, Available: 15, 20
[50 % 20] = 10, [10 % 15] = 10, Ans: No
