How would you reach a given sum with the fewest coins possible, given a set of coin denominations?
Let's say that in this case we have arbitrary quantities of 1, 5, 10, 20 and 50 cent coins, with the biggest coins taking priority.
My first intuition would be to use as many of the largest coin as will fit, and move on to the next smaller denomination once another large coin would exceed the sum.
Would this do, or are there any shortfalls to this approach? Are there more efficient approaches?
There are shortfalls to simply giving out the largest coins first.
Let's say your vending machine is out of every coin except twenty each of 50c, 20c and 1c coins and you have to deliver 60c in change.
A "prioritise-largest" (or greedy) scheme will give you eleven coins, one 50c coin and ten 1c coins.
The better solution is three 20c coins.
Greedy schemes only give you a locally optimal choice at each step. To be certain of the global optimum you generally need to examine all possibilities (dynamic programming, or an exhaustive search with pruning to reduce the search space), which for delivering change is usually well within the limits of computability.
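To make that concrete, here is a minimal dynamic-programming sketch that finds the true minimum for the 60c example (the function name min_coins is mine, and the limited coin supply is ignored since it does not change the answer in this case):

    def min_coins(amount, denominations):
        # best[v] = fewest coins that sum to v, or None if v is unreachable
        best = [0] + [None] * amount
        for v in range(1, amount + 1):
            options = [best[v - d] for d in denominations
                       if d <= v and best[v - d] is not None]
            best[v] = min(options) + 1 if options else None
        return best[amount]

    print(min_coins(60, [50, 20, 1]))  # 3, i.e. three 20c coins, versus 11 coins from the greedy scheme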
Greedy algorithms (what you are doing right now) are usually chosen for this kind of thing and, for vending machines in particular, are often implemented as finite state machines.
The greedy algorithm tries to give out the minimum number of coins while making change: at each step it hands over the largest denomination that does not exceed the remaining amount. These are the same steps a human would take when making change.
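A minimal sketch of that greedy procedure (the function name greedy_change is mine):

    def greedy_change(amount, denominations):
        # Repeatedly hand out the largest coin that does not exceed what is left.
        handed_out = []
        for d in sorted(denominations, reverse=True):
            while amount >= d:
                amount -= d
                handed_out.append(d)
        return handed_out  # if 1 is not a denomination, some amount may remain unchanged

    print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1] -- optimal for US denominations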
The assumption that exhausting the largest denomination first gives the best solution does not hold every time. Examples:
Input: coins[] = {25, 10, 5}, V = 30
Output: Minimum 2 coins required
We can use one 25-cent coin and one 5-cent coin (greedy happens to find this one too; the next example is where it fails)
Input: coins[] = {9, 6, 5, 1}, V = 11
Output: Minimum 2 coins required
We can use one coin of 6 cents and 1 coin of 5 cents (min)
Following the logic of exhausting the largest coins first, we would instead end up with one 9-cent coin and two 1-cent coins (3 coins)
Refer to this answer for more clarification.
Related
I need help proving that an algorithm has the greedy-choice property and optimal substructure.
Context of the problem:
Consider a problem where a company owns n gas stations that are connected by a highway.
Each gas station has a limited supply g_i of gas-cans. Since the company doesn't know which gas station is visited most, they want all of them to hold the same amount of gas.
So they hire a fueling-truck to haul gas between the stations. However, the truck also consumes 1 gas-can per kilometer driven.
Your task is to help the chain calculate the largest number of gas-cans g_bar they can have at every station.
Consider the example: Here we have g = (20, 40, 80, 10, 20) and
p = (0, 5, 13, 33, 36) (in kilometers). In order to send one gas-can from station 3 to station 4 we need to put 41 gas-cans in the truck, as the fueling-truck will consume 40 on the 20 km round trip (to send two gas-cans we need to put 42 in the truck).
The optimal g_bar for the example is 21 and can be achieved as follows:
Station 2 sends 11 gas-cans towards Station 1. One gas-can arrives while ten are consumed on the way.
Station 3 sends 59 gas-cans towards Station 4. 19 arrive while 40 are consumed on the way.
Station 4 now has 29 gas-cans and sends eight towards Station 5. Two of these arrive and six are consumed on the way.
The final distribution of gas-cans is: (21, 29, 21, 21, 22).
Given an integer g_bar, determine whether it is possible to get at least g_bar gas-cans at every gas station.
In order for the greedy-choice property and optimal substructure to make sense for a decision problem, you can define an optimal solution to be a solution with at least g_bar gas-cans at every gas station if such a solution exists; otherwise, any solution is an optimal solution.
Input: the position p_i and gas-can supply g_i of each station, where g_i is the supply for the station at position p_i. You may assume that the positions are in sorted order, i.e. p_1 < p_2 < ... < p_n.
Output: the largest amount g_bar such that each gas station can have a gas-can supply of at least g_bar after the truck has transferred gas-cans between the stations.
How can I prove the greedy-choice property and optimal substructure for this?
Let's define an optimal solution: each station has at least X gas-cans (X = g_bar).
Proving greedy property
Let us assume our solution is suboptimal. Then there must exist a station i such that gas[i] < X. Based on our algorithm, we borrow X - gas[i] from station i+1 (a valid move, since a solution achieving X exists). Now station i has gas[i] = X. This contradicts the assumption that there exists a station i with gas[i] < X, which means our solution isn't suboptimal. Hence we have optimality.
Proving optimal substructure
Assume we have a subset of the stations of size N' and that our solution for it is suboptimal. Again, if the solution is suboptimal, there must exist a station i such that gas[i] < X. You can reuse the greedy-choice argument above to show that this solution isn't suboptimal either. Since we have an optimal solution for every such subset, we have optimal substructure.
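To make this concrete, here is a minimal sketch of the decision procedure plus a binary search for the largest achievable g_bar. All names are mine, and the cost model (one transfer between adjacent stations burns gas equal to the round-trip distance, no matter how many cans are carried) is inferred from the worked example above, so treat it as an illustration rather than a reference solution:

    def feasible(g, p, g_bar):
        # Greedy left-to-right sweep: settle each station at g_bar and track the
        # surplus (or deficit, if negative) carried towards the right.
        extra = 0
        for i in range(len(g)):
            extra += g[i] - g_bar
            if i == len(g) - 1:
                return extra >= 0
            d = 2 * (p[i + 1] - p[i])   # assumed round-trip fuel cost across this gap
            if extra > d:
                extra -= d              # push the surplus right; the trip burns d
            elif extra >= 0:
                extra = 0               # too small to survive the trip, so it stays here
            else:
                extra -= d              # the deficit must be hauled in from the right

    def largest_g_bar(g, p):
        lo, hi = 0, sum(g) // len(g)    # the average supply is an upper bound
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if feasible(g, p, mid):
                lo = mid
            else:
                hi = mid - 1
        return lo

    print(largest_g_bar([20, 40, 80, 10, 20], [0, 5, 13, 33, 36]))  # 21

The sweep mirrors the argument above: each station is topped up to exactly X before moving on, borrowing from its right neighbour when needed.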
This problem is similar to the classic search for a single counterfeit coin among x coins, where the real coins all weigh the same, the fake coins all weigh the same, and a fake weighs less than a real coin.
The twist in the version I am trying to solve is the number of coins that could be fake: there are at most 2 counterfeits (i.e. there can be no fake coins, 1 fake coin, or 2 fake coins).
Example of my attempt:
My attempt at an earlier part of this problem was figuring out how to find the fake coins, if any, when x = 9 coins, with the constraint that the balance scale could be used at most 6 times.
I started by separating the x = 9 coins into groups of 3 and comparing the groups to check for equality (if all groups weigh the same, there are no fake coins, since there are at most 2 fakes and at least 0). From there I checked the inequalities: group 1 against group 2, and group 1 again against group 3. The possibilities are that both fake coins sit in group 1, 2, or 3, or that there is one fake coin in each of two groups (groups 1 and 2, 1 and 3, or 2 and 3). Considering these cases I followed the comparisons, breaking the groups down into thirds until I reached the final few coins and found the fakes.
The problem is:
Given a pile of x coins where x >= 3, how would I go about finding the fake coins while making sure the number of weighings is O(log_2 n)? And how would I find a generic formula for the number of weighings required to find at most 2 fakes among x coins?
Programming this is easy when I can consider all cases and compare each one at a slower pace. It gets significantly more difficult when the number of weighings has to be O(log_2 n). I have considered using the number of coins to decide how the comparisons are made, e.g. checking whether x is odd or even and then deciding how to compare: if odd, divide x-1 coins into 3 groups, put the last coin into a fourth group, and continue down the spiral of comparisons to find the fake coins, if there are any at all. I also considered dividing, say, 100 coins into three groups of 33, comparing the 3 groups, discarding 1/3 of the coins, and running the comparisons on the 66 that remain. I still can't wrap my head around designing a generic procedure to find the fake coins, or around finding a generic formula relating the number of weighings to log_2 n.
Even when n is a prime or odd number, it is difficult to split the coins and check their weight with a general procedure that works for any n >= 3.
To clarify, I need help figuring out if/how my earlier attempt can be generalised into a comparison algorithm that applies to any number of coins x >= 3 while keeping the number of weighings O(log_2 n).
Since O(log_2 n) is the same as O(log_b n) for any base b>1, the recursive breakdown into thirds suggested by user #n.1.8e9 in the comments fits that requirement. There's no need to consider prime/odd numbers, as long as we can solve for some specified constant number of coins with a constant number of weighings.
Here, let 3 coins be our base case. After weighing all 3 pairings (technically, we can get away with 2 weighings), we will know exactly which of the 3 coins are light, if any. So if we split a pile of 11 coins into thirds of 3 each, we can take the 2 leftover coins, borrow any other coin from the other piles, perform the 3 weighings, and then discard the 2 leftover coins since we know their status. As long as there are O(log n) splitting stages, dealing with the leftovers won't affect the asymptotics.
The only complex part of the proof is that after the first step, we go from the '0, 1 or 2 fakes' problem to either two 'exactly 1 fake' subproblems or a '1 or 2 fakes' subproblem. Assuming you know the solution to the original 'exactly 1 fake' problem with 1 + log_3 n weighings, the proof should look fairly similar.
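Since this leans on the standard 'exactly one light fake' procedure, here is a minimal simulation of it (the weigh callback and all names are mine, purely for illustration):

    def find_light_coin(weigh, candidates):
        # Exactly one of `candidates` is light. Split into thirds; one weighing of
        # the first third against the second tells us which third to keep.
        while len(candidates) > 1:
            k = max(1, len(candidates) // 3)
            a, b, rest = candidates[:k], candidates[k:2 * k], candidates[2 * k:]
            result = weigh(a, b)          # -1: side a lighter, 1: side b lighter, 0: balanced
            candidates = a if result < 0 else b if result > 0 else rest
        return candidates[0]

    fake = 7
    def weigh(a, b):                      # simulated balance scale
        wa = sum(0.9 if c == fake else 1.0 for c in a)
        wb = sum(0.9 if c == fake else 1.0 for c in b)
        return (wa > wb) - (wa < wb)

    print(find_light_coin(weigh, list(range(12))))  # 7, found in about 1 + log3(12) weighings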
The procedure for 'at most 2 fake' and '1 or 2 fakes' is the same. Given n coins, we divide them into three groups of floor(n/3) coins (and treat any leftovers as we did above). If n <= 3, stop and just perform all weighings. Otherwise, given piles A, B and C, perform the 3 pair weighings (A, B), (A, C) and (B, C).
If they all weigh the same (A=B=C), there are no fake coins.
If one pile is different, there are two cases: the single pile is lighter or heavier than the other two.
If it is lighter (say, A < B, A < C, and B = C), then pile A has exactly 1 or 2 fake coins and we have a single problem instance on n/3 coins (discard piles B and C).
If the outlier is heavier (say, A = B, A < C, and B < C), then piles A and B have exactly one fake coin each, which is the standard counterfeit problem.
To prove the bound on the number of weighings, you probably need to use induction. Each recursion level requires at most 6 weighings, so an upper-bound formula for the number of weighings required when there may be up to 2 fake coins remaining is T(n) = max(T(n/3), 2 * (1 + log_3(n/3))) + 6, where the 1 + log_3(n/3) term is the standard upper bound with a perfect strategy to find one light coin among n/3 coins (and we take the floor of all divisions to get integers).
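If it helps, here is a small sketch that evaluates that recurrence numerically so you can sanity-check the O(log n) growth (names are mine; the base case of 3 weighings for n <= 3 is the one described above):

    def weighings_upper_bound(n):
        # T(n) = max(T(n/3), 2 * (1 + floor(log3(n/3)))) + 6, with T(n) = 3 for n <= 3
        if n <= 3:
            return 3                     # weigh all pairings of the last few coins
        third = n // 3
        log3, x = 0, third
        while x >= 3:                    # integer floor of log base 3 of n/3
            x //= 3
            log3 += 1
        return max(weighings_upper_bound(third), 2 * (1 + log3)) + 6

    for n in (9, 100, 10**6):
        print(n, weighings_upper_bound(n))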
A greedy change-making algorithm is one that makes change by choosing the highest denomination of coin available until it reaches the amount of change it is trying to make. Surprisingly, this algorithm actually works for making change in the most efficient manner for US and Euro coin denominations!
However, the greedy algorithm can sometimes fail for making change. Suppose we have the denominations [25,15,1] and are trying to make 31 cents. The greedy algorithm would pick 25,1,1,1,1,1,1 (7 coins) while 31 cents can actually be made as 15,15,1 (3 coins).
What I'm wondering is if there's a way to make the greedy algorithm fail for a SUBSET of Euro coins (the full list of Euro coins is 1, 2, 5, 10, 20, 50, 100, 200) that includes the denomination 1. While I can make the greedy algorithm fail for coin sets with other values, I can't seem to make it fail for a subset of the Euro coins.
Some resources say that the greedy algorithm will fail whenever the highest element plus the lowest element is less than twice the second highest element (so in the example from above, 25 + 1 < 15 + 15), but there is no way to make this possible with a subset of Euro coins.
Try to make 60 with 1, 20, 50: greedy picks 50 and then ten 1s (11 coins), while three 20s (3 coins) is optimal.
The problem is making n cents of change with quarters, dimes, nickels, and pennies, using the least total number of coins. For these four denominations we have c1 = 25, c2 = 10, c3 = 5, and c4 = 1.
If we have only quarters, dimes, and pennies (and no nickels) to use,
the greedy algorithm would make change for 30 cents using six coins—a quarter and five pennies—whereas we could have used three coins, namely, three dimes.
Given a set of denominations, how can we say whether the greedy approach creates an optimal solution?
What you are asking is how to decide whether a given system of coins is canonical for the change-making problem. A system is canonical if the greedy algorithm always gives an optimal solution. You can decide whether a system of coins which includes a 1-cent piece is canonical or not in a finite number of steps. Details, and more efficient algorithms in certain cases, can be found in http://arxiv.org/pdf/0809.0400.pdf.
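For small systems you can also test canonicity by brute force: compare greedy against an exhaustive optimum for every amount up to a bound. Here is a minimal sketch (names are mine; the bound of 'sum of the two largest denominations' is used as an assumption here, and the linked paper gives the precise finite test):

    def is_canonical(coins):
        coins = sorted(coins)                      # must include a 1-cent piece
        bound = coins[-1] + coins[-2]              # assumed search bound (see above)
        best = [0] + [None] * bound                # best[v] = optimal coin count for v
        for v in range(1, bound + 1):
            best[v] = min(best[v - c] for c in coins if c <= v) + 1
        for amount in range(1, bound + 1):
            remaining, greedy = amount, 0
            for c in reversed(coins):              # greedy: largest denominations first
                greedy += remaining // c
                remaining %= c
            if greedy != best[amount]:
                return False                       # greedy is beaten for this amount
        return True

    print(is_canonical([1, 2, 5, 10, 20, 50, 100, 200]))  # True: the Euro system is canonical
    print(is_canonical([1, 15, 25]))                       # False: 30 = 15+15 beats greedy's 25+1+1+1+1+1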
Just came across this simple algorithm [1] to find the odd coin (which weighs heavier than the rest) in a list of otherwise identically weighing coins.
I can understand that if we take 3 coins at a time, then the minimum number of weighings is just two.
How did I find the answer?
I manually tried weighing 4 coins at a time, 3 coins at a time, two coins at a time, and one coin at a time.
Of course, only if we take 3 coins at a time is the minimum number of steps (two) achievable.
The question is: how do you know that we have to take 3 coins?
I am just trying to understand how to approach this puzzle instead of trying all possible combinations and then giving the answer as 2.
[1] http://en.wikipedia.org/wiki/Balance_puzzle
In each weighing, exactly three different things can happen, so with two weighings you can only distinguish nine different overall outcomes. So with each weighing, you need to be guaranteed of eliminating at least two thirds of the remaining possibilities. Weighing three coins on each side is guaranteed to do this. Weighing four coins on each side could eliminate eight coins (when the pans balance), but could also eliminate only five.
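A tiny way to turn that counting argument into a number (the function name is mine): with one heavy coin among n, there are n possibilities to tell apart and each weighing has 3 outcomes, so you need the smallest k with 3^k >= n.

    def min_weighings_needed(num_coins):
        # smallest k with 3**k >= num_coins possibilities
        k = 0
        while 3 ** k < num_coins:
            k += 1
        return k

    print(min_weighings_needed(9))  # 2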
It can be strictly proved on the ground of Information Theory -- a very beautiful subject, that builds the very foundations of computer science.
There is a proof in those excellent lectures of David MacKay (sorry, I do not remember exactly in which one: probably one of the first five).
The base-case is this question: how do you know that we should take three coins at a time?
The approach:
1. First find the base-case.
Here the base-case would be to find the maximum number of coins from which you can find the counterfeit coin in just one weighing. You can find the counterfeit among either two or three coins with a single weighing, so maximum(two, three) = three.
So the base-case for this approach would be to divide the available coins by taking three at a time.
2. The generalized formula is 3^n - 3 = 2X, where X is the available number of coins and n is the number of weighings required. (Remember n should be floored, not ceiled.)
Consider X = 9 balls. Then 2X + 3 = 21, so 3^n = 21 and n is floored to 2.
So, the algorithm to tell the minimum number of weighings would be something similar to:
    def algo_min_weight(num_balls):
        # n from 3^n - 3 = 2 * num_balls, i.e. n = floor(log3(2 * num_balls + 3))
        n, power = 0, 1
        while power * 3 <= 2 * num_balls + 3:
            power *= 3
            n += 1
        return n  # algo_min_weight(9) == 2