A greedy change-making algorithm is one that makes change by choosing the highest denomination of coin available until it reaches the amount of change it is trying to make. Surprisingly, this algorithm actually works for making change in the most efficient manner for US and Euro coin denominations!
However, the greedy algorithm can sometimes fail for making change. Suppose we have the denominations [25,15,1] and are trying to make 31 cents. The greedy algorithm would pick 25,1,1,1,1,1,1 (7 coins) while 31 cents can actually be made as 15,15,1 (3 coins).
What I'm wondering is whether there's a way to make the greedy algorithm fail for a SUBSET of the Euro coins (the full list of Euro coins is 1, 2, 5, 10, 20, 50, 100, 200) that includes the denomination 1. While I can make the greedy algorithm fail for subsets of other coin systems, I can't seem to make it fail for a subset of the Euro coins.
Some resources say that the greedy algorithm will fail whenever the highest element plus the lowest element is less than twice the second highest element (so in the example above, 25 + 1 < 15 + 15), but there is no way to satisfy this condition with a subset of Euro coins.
Try to make 60 with 1, 20, 50.
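A quick sketch shows the failure (plain largest-first greedy; `greedy_change` is my own helper name, not from the question):

```python
def greedy_change(amount, denominations):
    """Largest-first greedy: repeatedly take the biggest coin that fits."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins

picked = greedy_change(60, [1, 20, 50])
print(picked)       # greedy takes one 50c coin, then ten 1c coins
print(len(picked))  # 11 coins, while three 20c coins would do
```

Greedy grabs the 50, leaving a remainder of 10 that only the 1c coin can cover, whereas 20 + 20 + 20 uses just three coins.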
Related
Question: I have a sack which can carry some weight, and a number of items each with a weight, and I want to put as much weight as possible into the sack. After some thought I have come to a conclusion: I take the highest weight every time and put it into the sack. Intuitively, this will work if the given weights each increase by at least a factor of two, e.g. 2, 4, 8, 16, 32, 64, ...
Can anyone help me prove whether I am right or wrong about that? I also have an intuition about it, and would love to hear yours.
Note: I thought about arguing that the sum of the previous numbers won't be bigger than the current number.
Yes, the described greedy algorithm will work for powers of two.
Note that the partial sum of the geometric sequence 1, 2, 4, 8, 16, ..., 2^(k-1) is 2^k - 1; that is why you should always choose the largest item that fits: it is bigger than any sum of smaller items.
In a mathematical sense, the set of powers of two forms a matroid.
But it would fail in the general case (example: weights 3, 3, 4 with capacity 6 — greedy takes the 4 and gets stuck at 4, while 3 + 3 fills the sack exactly). You can use dynamic programming to solve this problem with integer weights. It is similar to the knapsack problem with unit item costs.
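Both cases can be sketched in a few lines (the `greedy_fill` name is mine, not from the question):

```python
def greedy_fill(capacity, weights):
    """Take the heaviest item that still fits, each item used at most once."""
    total = 0
    for w in sorted(weights, reverse=True):
        if total + w <= capacity:
            total += w
    return total

# Powers of two: greedy is optimal, since each item outweighs
# the sum of all smaller items combined.
print(greedy_fill(100, [1, 2, 4, 8, 16, 32, 64]))  # 100 (64 + 32 + 4)

# General case: greedy fails.
print(greedy_fill(6, [3, 3, 4]))  # 4 (takes the 4; optimal is 3 + 3 = 6)
```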
The problem is making n cents in change with quarters, dimes, nickels, and pennies, using the least total number of coins. In this particular case the four denominations are c1 = 25, c2 = 10, c3 = 5, and c4 = 1.
If we have only quarters, dimes, and pennies (and no nickels) to use, the greedy algorithm would make change for 30 cents using six coins (a quarter and five pennies) whereas we could have used three coins, namely three dimes.
Given a set of denominations, how can we tell whether the greedy approach produces an optimal solution?
What you are asking is how to decide whether a given system of coins is canonical for the change-making problem. A system is canonical if the greedy algorithm always gives an optimal solution. You can decide whether a system of coins which includes a 1-cent piece is canonical or not in a finite number of steps. Details, and more efficient algorithms in certain cases, can be found in http://arxiv.org/pdf/0809.0400.pdf.
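For small systems a brute-force check is already practical: one known result (due to Kozen and Zaks, if I recall the paper correctly) says that for a system containing a 1-cent coin, any counterexample to greedy optimality lies below the sum of the two largest denominations. A sketch under that assumption (helper names are mine):

```python
def min_coins(amount, coins):
    """Dynamic-programming optimum: fewest coins summing to amount."""
    best = [0] + [float('inf')] * amount
    for v in range(1, amount + 1):
        for c in coins:
            if c <= v:
                best[v] = min(best[v], best[v - c] + 1)
    return best[amount]

def greedy_coins(amount, coins):
    """Largest-denomination-first count."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def is_canonical(coins):
    """Compare greedy with the DP optimum for every amount below the
    sum of the two largest denominations (the claimed counterexample bound)."""
    bound = sum(sorted(coins)[-2:])
    return all(greedy_coins(v, coins) == min_coins(v, coins)
               for v in range(1, bound))

print(is_canonical([1, 5, 10, 25]))  # True: US coin subset
print(is_canonical([1, 15, 25]))     # False: 30 = 15+15 beats 25+1+1+1+1+1
```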
Just came across this simple algorithm here for finding the odd coin (which weighs heavy) in a list of otherwise identically weighing coins.
I can understand that if we take 3 coins at a time, then the minimum number of weighings is just two.
How did I find the answer ?
I manually tried weighing 4 sets of coins at a time, 3 sets of coins at a time, two coins at a time, and one coin at a time.
Of course, only if we take 3 coins at a time is the minimum number of steps (two) achievable.
The question is, how do you know that we have to take 3 coins ?
I am just trying to understand how to approach this puzzle instead of doing all possible combinations and then telling the answer as 2.
[1] http://en.wikipedia.org/wiki/Balance_puzzle
In each weighing, exactly three different things can happen, so with two weighings you can only distinguish nine different overall outcomes. So with each weighing, you need to be guaranteed of eliminating at least two thirds of the remaining possibilities. Weighing three coins on each side is guaranteed to do this. Weighing four coins on each side could eliminate eight coins, but could also eliminate only five.
It can be strictly proved on the grounds of information theory, a very beautiful subject that builds the very foundations of computer science.
There is a proof in David MacKay's excellent lectures (sorry, I do not remember in which one exactly: probably one of the first five).
The base case is this: how do you know that we should take three coins at a time?
The approach: first find the base case.
Here the base case is the maximum number of coins from which you can find the counterfeit coin in just one weighing. You can find the counterfeit among either two or three coins in a single weighing, so maximum(two, three) = three.
So the base case for this approach is dividing the available coins into groups of three.
The generalized formula is 3^n - 3 = (X*2), where X is the available number of coins and n is the number of weighings required. (Remember n should be floored, not ceiled.)
Consider X = 9 balls: 3^n = 21, so n = log3(21) ≈ 2.77, which floors to 2.
So, the algorithm to tell the minimum number of weighings would be something similar to:
algo_min_weighings(num_balls)
{
    return floor( log3( (num_balls * 2) + 3 ) );
}
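Following the three-outcomes counting argument, for the variant in the question (the odd coin is known to be heavier) n weighings can separate at most 3^n candidates, so the bound is the smallest n with 3^n >= X; the quoted 3^n - 3 = 2X relation corresponds to the harder variant where the odd coin may be heavier or lighter. A sketch using an integer loop instead of floating-point logarithms (the function name is mine):

```python
def min_weighings_heavy(num_coins):
    """Smallest n with 3**n >= num_coins: each weighing has three
    outcomes (left heavy, right heavy, balance), so n weighings can
    distinguish at most 3**n candidates.  The integer loop avoids
    floating-point log rounding errors at exact powers of three."""
    n = 0
    capacity = 1
    while capacity < num_coins:
        capacity *= 3
        n += 1
    return n

print(min_weighings_heavy(9))   # 2 weighings for nine coins
print(min_weighings_heavy(3))   # 1
print(min_weighings_heavy(27))  # 3
```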
How would you reach a given sum in the most optimal manner possible given a set of coins ?
Let's say that in this case we have arbitrary numbers of 1, 5, 10, 20 and 50 cent coins, with the biggest coins getting priority.
My first intuition would be to use as many of the biggest coin as fit, then move on to the next smaller denomination for the remainder.
Would this do or are there any shortfalls to this approach ? Are there any more efficient approaches ?
There are shortfalls to simply giving out the largest coins first.
Let's say your vending machine is out of every coin except twenty each of 50c, 20c and 1c coins and you have to deliver 60c in change.
A "prioritise-largest" (or greedy) scheme will give you eleven coins, one 50c coin and ten 1c coins.
The better solution is three 20c coins.
Greedy schemes only give you locally optimal solutions. For a global optimum, you generally need to examine all possibilities (though there may be minimax-type algorithms to reduce the search space) to be certain which is best; for delivering change, that search is usually well within the limits of computability.
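For the limited-stock case the exhaustive search stays cheap with memoization. A sketch (the `fewest_coins` helper and its `supply` dict are my own names, assuming integer cent amounts):

```python
from functools import lru_cache

def fewest_coins(amount, supply):
    """Fewest coins making `amount` given a limited stock of each
    denomination; returns None if the amount cannot be made.
    supply: dict mapping denomination -> count available."""
    denoms = sorted(supply, reverse=True)

    @lru_cache(maxsize=None)
    def solve(i, remaining):
        if remaining == 0:
            return 0
        if i == len(denoms):
            return None
        d = denoms[i]
        best = None
        # Try every feasible count k of denomination d, then recurse.
        for k in range(min(supply[d], remaining // d) + 1):
            rest = solve(i + 1, remaining - k * d)
            if rest is not None and (best is None or k + rest < best):
                best = k + rest
        return best

    return solve(0, amount)

# The vending-machine example: twenty each of 50c, 20c and 1c, change 60c.
print(fewest_coins(60, {50: 20, 20: 20, 1: 20}))  # 3 (three 20c coins)
```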
Greedy algorithms (what you are doing right now) are usually chosen for this type of thing and, for this particular case, implemented as finite state machines in vending machines.
The greedy algorithm determines the minimum number of coins to give while making change; these are the steps a human would take to emulate a greedy algorithm.
The assumption that exhausting the largest denomination first yields the best solution does not hold every time. Examples:
Input: coins[] = {25, 10, 5}, V = 30
Output: Minimum 2 coins required
We can use one coin of 25 cents and one of 5 cents (here the greedy choice happens to be optimal).
Input: coins[] = {9, 6, 5, 1}, V = 11
Output: Minimum 2 coins required
We can use one coin of 6 cents and one coin of 5 cents (the minimum).
Following the logic of exhausting the largest coins first, we would instead
end up with one coin of 9 cents and two coins of 1 cent, i.e. three coins.
Refer to this answer for more clarification.
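The comparison can be checked side by side (a sketch; helper names are mine):

```python
def greedy_count(amount, coins):
    """Exhaust the largest denomination first."""
    used = 0
    for c in sorted(coins, reverse=True):
        used += amount // c
        amount %= c
    return used

def optimal_count(amount, coins):
    """Classic unbounded coin-change DP: fewest coins for each amount."""
    INF = float('inf')
    best = [0] + [INF] * amount
    for v in range(1, amount + 1):
        best[v] = min((best[v - c] + 1 for c in coins if c <= v), default=INF)
    return best[amount]

coins = [9, 6, 5, 1]
print(greedy_count(11, coins))   # 3  (9 + 1 + 1)
print(optimal_count(11, coins))  # 2  (6 + 5)
```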
What would be the best algorithm to solve this problem? I spent a couple of hours on this problem. But couldn't sort it out.
A guy purchased a necklace and planned to cut it into two pieces in such a way that the average brightness of each piece is greater than or equal to that of the original piece.
The criteria for dividing the necklace are:
1. The difference in the number of pearls between the two pearl sets should not be greater than 10% of the number of pearls in the original necklace, or 3, whichever is higher.
2. The difference between the numbers of pearls in the 2 necklaces should be minimal.
3. If the average brightness of either of the necklaces is less than the average brightness of the original set, return 0 as output.
4. The two necklaces should have average brightness greater than the original one, and the difference between the average brightnesses of the two pieces should be minimal.
5. The average brightness of each piece should be greater than or equal to that of the original piece.
This problem is rather hard to do efficiently (it is essentially an NP-hard problem).
Say you had a set that averaged to X. That is, X = (x1 + x2 + ... + xn) / n.
Suppose you break it up into sets that average to S and T with s and t items in each set, respectively.
You can mathematically prove that if one of the averages, S or T, is greater than X, then the other must be less than X.
Hence, since neither piece may average below X, the two sets must have exactly the same average brightness, namely X; that is the only way your conditions are satisfiable.
Knowing this, you end up with the subset sum problem: you want to find a subset of the brightness values whose average is exactly the average of the entire set. That is a problem known to be hard (it is NP-complete). Alright, it's not exactly the same as the subset sum problem, but if you subtract the average of the full set from each of the brightness values, finding a subset that sums to exactly zero gives you your answer. (Do the reverse to see how you can solve the subset sum problem from your problem.)
Hence, there's no fast way of doing this, only approximations or exponential running times... However, maybe this will help. It mentions better running times if your weights (in your case, brightness levels) are bounded.
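The reduction can be sketched with a subset-sum-style search, assuming integer brightness values and ignoring the size-difference constraints for brevity (`splittable_evenly` is a hypothetical helper name):

```python
def splittable_evenly(brightness):
    """Can the multiset be split into two pieces of equal average?
    Equivalent to finding a proper subset whose mean equals the overall
    mean; we track reachable (subset size, subset sum) pairs, a
    pseudo-polynomial DP when brightness values are bounded integers."""
    n, total = len(brightness), sum(brightness)
    reachable = {(0, 0)}  # (subset size, subset sum)
    for b in brightness:
        # Each item extends every previously reachable pair once.
        reachable |= {(k + 1, s + b) for (k, s) in reachable}
    # s/k == total/n, cross-multiplied to stay in integers.
    return any(0 < k < n and s * n == k * total for (k, s) in reachable)

print(splittable_evenly([4, 6, 5, 5]))  # True: {4, 6} and {5, 5} both average 5
print(splittable_evenly([1, 2, 4]))     # False: no proper subset averages 7/3
```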