Finding all heavy coins in O(log^2(n)) [duplicate] - algorithm

Suppose you are given n coins, some of which are heavy and the others
light. All heavy coins have the same weight, as do all the light coins, and
the weight of a heavy coin is strictly greater than the weight of a light coin.
At least one of the coins is known to be light. You are given a balance,
using which you can weigh a subset of coins against another disjoint subset
of coins. Show how you can determine the number of heavy coins using
O(log^2 n) weighings.

I guess this must be a generalization of the problem where you have 8 coins and one of them is light, so you can perform a kind of binary search to find the light coin using a balance scale. However, it is strange that you are supposed to deal with several light coins at the same time. In that case, the approach does not seem to scale with log^2 n.
See the example below in order to understand my point.
In the case of 8 coins where one of them is light, you would follow three steps:
Step 1) Divide the sample into two halves and weigh them to find the lighter half. => 1 weighing. [You get a group of 4 coins that contains the light coin.]
Step 2) Divide the lighter half from the previous step into two parts and weigh them to find the lighter one. => +1 weighing. [You get a group of 2 coins.]
Step 3) Now you have only two coins. Weigh them against each other to find the lighter one.
Of course, the generalization to a sample of size n is trivial.
The proof that this scales with log_2 n follows the binary search proof.
However, if the number of light coins is different from 1, you cannot focus only on the lighter half of the sample. [Disclaimer: maybe I am wrong, but it is hard to see how this scales with log^2 n. For instance, consider the situation where the number of light coins grows with n, the total number of coins.]
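As an aside, here is a minimal Python sketch of that halving procedure, assuming a hypothetical weigh(left, right) helper (returning -1 if the left group is lighter, 0 if the groups balance, and 1 if the right group is lighter) and exactly one light coin:

    # Minimal sketch of the halving search described above.
    # weigh(left, right): -1 if `left` is lighter, 0 if balanced, 1 if `right` is lighter.
    # Assumes exactly one coin in `coins` is light.
    def find_light_coin(coins, weigh):
        candidates = list(coins)
        while len(candidates) > 1:
            half = len(candidates) // 2
            left, right = candidates[:half], candidates[half:2 * half]
            spare = candidates[2 * half:]      # at most one leftover coin
            result = weigh(left, right)
            if result == -1:
                candidates = left
            elif result == 1:
                candidates = right
            else:
                candidates = spare             # both halves balance: the spare coin is light
        return candidates[0]

This uses one weighing per halving step, i.e. roughly ceil(log_2 n) weighings for a single light coin.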
Actually, the most beautiful solution to this problem is to find the light coin in only two weighings:
Step 1) Divide your sample into 3 parts. The first part has three coins, the second part also has three coins, and the last part has only 2.
Step 2) Weigh the first part against the second part. There are three situations:
a) The first part is lighter.
b) The second part is lighter.
c) The first and the second part have the same weight.
If (a or b): weigh two of the coins from the lighter part against each other. If they have the same weight, the third coin that was not weighed is the light one. On the other hand, if they do not have the same weight, the lighter of the two is the light one.
If (c): just weigh the two remaining coins against each other to find the lighter one.
This can also be generalized, but the generalization is much more complicated.

Related

Classic Counterfeit coin puzzle with a twist

This problem is similar to the classic search for a single counterfeit coin among x coins that weighs less than the rest, but with a twist in the number of coins that could possibly be fake. The real coins all weigh the same, the fake coins all weigh the same, and a fake coin weighs less than a real coin.
The difference in the one I am trying to solve is that there are at most 2 counterfeits (i.e. there can be no fake coins, 1 fake coin, or 2 fake coins).
Example of my attempt:
My attempt at an earlier part of this problem was figuring out how to find the fake coins, if any, when x = 9 coins; however, you were only allowed to use the scale at most 6 times.
I started by separating the x = 9 coins into groups of 3 and comparing the groups to check for equality (if all groups weigh the same, there are no fake coins, since there are at most 2 fake coins and at least 0). From there I checked inequalities, comparing group 1 with group 2 and group 1 again with group 3. The possibilities are that both fake coins are in group 1, 2, or 3, or that there is one fake coin in each of two groups (1 and 2, 1 and 3, or 2 and 3). Considering these cases, I followed the comparisons, breaking the groups down into thirds until I got to the final few coins and found the fake coins.
The problem is:
In a pile of x coins, where x >= 3, how would I go about finding the fake coins while making sure the number of weighings is O(log_2 n)? And how would I find a generic formula for the number of weighings required to find at most 2 fakes among x coins?
Programming this is easy when I can consider all cases and compare each one at a slower speed. However, it gets significantly more difficult when the number of weighings has to be O(log_2 n). I have considered using the number of coins to decide how the comparisons will be made, such as checking whether x is odd or even: if odd, divide x - 1 into 3 groups and put the last coin into a fourth group, then continue down the spiral of comparisons to find the fake coins, if there are any at all. I also considered dividing, say, 100 coins into 33 each, comparing the 3 groups, then getting rid of 1/3 of the coins and running comparisons on the 66 left. I still can't wrap my head around designing a generic procedure to find the fake coins, and then finding a generic formula relating the number of weighings to log_2 n.
Even when n is prime or odd, it is difficult to split the coins and check their weights with a general procedure that works for any n >= 3.
To clarify, I need help figuring out if/how my earlier attempt/example can be turned into a general comparison algorithm that applies to any number of coins x >= 3, while the number of weighings is O(log_2 n).
Since O(log_2 n) is the same as O(log_b n) for any base b>1, the recursive breakdown into thirds suggested by user #n.1.8e9 in the comments fits that requirement. There's no need to consider prime/odd numbers, as long as we can solve for some specified constant number of coins with a constant number of weighings.
Here, let 3 coins be our base case. After weighing all 3 pairings (technically, we can get away with 2 weighings), we will know exactly which of the 3 coins are light, if any. So if we split a pile of 11 coins into thirds of 3 each, we can take the 2 leftover coins, borrow any other coin from the other piles, perform the 3 weighings, and then discard the 2 leftover coins since we know their status. As long as there are O(log n) splitting stages, dealing with the leftovers won't affect the asymptotics.
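For concreteness, here is a minimal sketch of that 3-coin base case, assuming a hypothetical weigh(x, y) helper that returns -1, 0 or 1 when x is lighter than, equal to, or heavier than y, and at most 2 light coins among the three:

    # Base case: classify 3 coins with the 3 pairwise weighings described above.
    # weigh(x, y) -> -1 if x is lighter, 0 if equal, 1 if x is heavier (hypothetical balance).
    # Assumes at most 2 of the 3 coins are light.
    def classify_three(a, b, c, weigh):
        ab, ac, bc = weigh(a, b), weigh(a, c), weigh(b, c)
        light = set()
        if ab < 0 or ac < 0:
            light.add(a)
        if ab > 0 or bc < 0:
            light.add(b)
        if ac > 0 or bc > 0:
            light.add(c)
        return light    # the subset of {a, b, c} that is light (possibly empty)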
The only complex part of the proof is that after the first step, we go from the '0, 1 or 2 fakes' problem to either two 'exactly 1 fake' subproblems or a '1 or 2 fakes' subproblem. Assuming you know the solution to the original 'exactly 1 fake' problem with 1 + log_3 n weighings, the proof should look fairly similar.
The procedure for 'at most 2 fakes' and '1 or 2 fakes' is the same. Given n coins, we divide them into three groups of floor(n/3) coins (and treat any leftovers as we did above). If n <= 3, stop and just perform all weighings. Otherwise, given piles A, B and C, perform the 3 pair weighings (A, B), (A, C) and (B, C).
If they all weigh the same (A=B=C), there are no fake coins.
If one pile is different, there are two cases: the single pile is lighter or heavier than the other two.
If it is lighter (say, A < B, A < C, and B = C), then pile A has exactly 1 or 2 fake coins and we have a single problem instance on n/3 coins (discard piles B and C).
If the outlier is heavier (say, A = B, A < C, and B < C), then piles A and B have exactly one fake coin each, which is the standard counterfeit problem.
To prove the bound on the number of weighings, you probably need to use induction. Each recursion level requires at most 6 weighings, so an upper bound on the number of weighings required when there may be up to 2 fake coins remaining is T(n) = max(T(n/3), 2 * (1 + log_3(n/3))) + 6, where the 1 + log_3(n/3) term is the standard upper bound, with a perfect strategy, for finding one light coin among n/3 coins (and we take the floor of all divisions to get integers).
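To get a feel for the numbers, here is a small sketch that evaluates that recurrence as written, with floored divisions. The base case (all pairwise weighings for n <= 3) is taken from the base case above; the helper names are my own:

    # Sketch: evaluate T(n) = max(T(n/3), 2 * (1 + log_3(n/3))) + 6 with floored divisions.
    def ilog3(m):
        # floor(log_3 m), computed with integers to avoid floating-point issues
        k = 0
        while m >= 3:
            m //= 3
            k += 1
        return k

    def weighings_upper_bound(n):
        if n <= 3:
            return 3                                   # all pairwise weighings suffice
        third = n // 3
        one_fake_bound = 2 * (1 + ilog3(third))        # two 'exactly 1 fake' subproblems
        return max(weighings_upper_bound(third), one_fake_bound) + 6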

Finding the counterfeit coin from a list of 9 coins

Just came across this simple algorithm here [1] to find the odd coin (which weighs more) from a list of otherwise identically weighing coins.
I can understand that if we take 3 coins at a time, then the minimum number of weighings is just two.
How did I find the answer?
I manually tried weighing sets of 4 coins at a time, sets of 3 coins at a time, two coins at a time, and one coin at a time.
Of course, only if we take 3 coins at a time is the minimum number of steps (two) achievable.
The question is, how do you know that we have to take 3 coins ?
I am just trying to understand how to approach this puzzle instead of doing all possible combinations and then telling the answer as 2.
[1] http://en.wikipedia.org/wiki/Balance_puzzle
In each weighing, exactly three different things can happen, so with two weighings you can only distinguish nine different overall outcomes. With each weighing, you therefore need to be guaranteed to eliminate at least two thirds of the (remaining) possibilities. Weighing three coins on each side is guaranteed to do this. Weighing four coins on each side could eliminate eight coins, but could also eliminate only five.
It can be strictly proved on the ground of Information Theory -- a very beautiful subject, that builds the very foundations of computer science.
There is a proof in the excellent lectures of David MacKay (sorry, I do not remember which one exactly: probably one of the first five).
The question was: how do you know that we should take three coins at a time?
The approach:
1. First find the base case.
Here the base case is the maximum number of coins from which you can find the counterfeit coin in just one weighing. You can find the counterfeit among either two or three coins with one weighing, so maximum(two, three) = three.
So, the base case for this approach is to divide the available coins into groups of three at a time.
2. The generalized formula is 3^n - 3 = 2X, where X is the number of available coins and n is the number of weighings required (remember n should be floored, not ceiled).
For example, with X = 9 balls, 3^n = 21, so n = log_3 21 ≈ 2.77, which is floored to 2.
So, the algorithm to tell the minimum number of weighings would be something like:
    def algo_min_weight(num_balls):
        # n = floor(log_3(num_balls * 2 + 3)), computed with integers to avoid
        # floating-point rounding on exact powers of 3
        n, power = 0, 3
        while power <= num_balls * 2 + 3:
            power *= 3
            n += 1
        return n

Google Interview : Find the maximum sum of a polygon [closed]

Given a polygon with N vertices and N edges. There is an integer (possibly negative) on every vertex and an operation from the set {*, +} on every edge. Each time, we remove an edge E from the polygon and merge the two vertices linked by that edge (V1, V2) into a new vertex with value V1 op(E) V2. In the last case, two vertices remain with two edges between them, and the result is the bigger one.
Return the max result value can be gotten from a given polygon.
For the last case we might not need to merge, as the other number could be negative, so in that case we would just return the larger number.
How I am approaching the problem:
p[i,j] denotes the maximum value we can obtain by merging the nodes labelled i to j.
p[i,i] = v[i] -- base case
p[i,j] = p[i,k] (operator between node k and node k+1) p[k+1,j], maximized over k from i to j-1.
and then p[0,n] will be my answer.
Second point: I will have to start from every vertex and do the same as above, since this is cyclic with n vertices and n edges.
The time complexity for this is n^3 * n, i.e. n^4.
Can I do better than this?
As you have identified (tagged) correctly, this is indeed very similar to the matrix chain multiplication problem (in what order do I multiply matrices in order to do it quickly).
This can be solved in polynomial time using dynamic programming.
I'm going to instead solve a similar, more classic (and essentially identical) problem: given a formula with numbers, additions and multiplications, which way of parenthesizing it gives the maximal value? For example
6+1 * 2 becomes (6+1)*2 which is more than 6+(1*2).
Let us denote our input as real numbers a1, ..., an and operators o(1), ..., o(n-1), each either * or +. Our approach works as follows: we consider the subproblem F(i,j), which represents the maximal value of the sub-formula (after parenthesizing) on ai, ..., aj. We create a table of such subproblems and observe that F(1,n) is exactly the result we are looking for.
Define
F(i,j)
- If i>j return 0 //no sub-formula of negative length
- If i=j return ai // the maximal formula for one number is the number
- If i<j return the maximal value for all m between i (including) and j (not included) of:
F(i,m) (o(m)) F(m+1,j) //check all places for possible parenthesis insertion
This goes through all possible options. The proof of correctness is done by induction on the size n = j - i and is fairly straightforward.
Let's go through the runtime analysis:
If we do not save the values of smaller subproblems, this runs quite slowly; however, we can make this algorithm run in O(n^3).
We create an n*n table T in which the cell at index (i,j) contains F(i,j). Filling F(i,i), and F(i,j) for j smaller than i, is done in O(1) per cell, since we can calculate these values directly. Then we go diagonal by diagonal: first we fill F(i,i+1) for every i (which we can do quickly since we already know all the previous values needed by the recursive formula), then F(i,i+2), and so on. There are n diagonals, each with O(n) cells, and filling each cell takes O(n), so each diagonal is filled in O(n^2) and the whole table in O(n^3). After filling the table we know F(1,n), which is the solution to your problem.
Now back to your problem
If you translate the polygon into n different formulas (one starting at each vertex) and run the formula-value algorithm on each of them, you get exactly the value you want.
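Here is a minimal Python sketch of the table filling described above, plus the rotation over starting vertices. Two caveats: it assumes edge_ops[e] is the operation on the edge between vertex e and vertex e+1 (mod n), and it additionally tracks the minimum of each sub-formula (an addition to the recurrence above), because with negative numbers the product of two minima can become the new maximum.

    # Interval DP sketch. best[i][j] plays the role of F(i,j); worst[i][j] tracks the minimum.
    def max_formula_value(values, ops):
        # values: [a1, ..., an]; ops: [o(1), ..., o(n-1)], each '+' or '*'
        n = len(values)
        best = [[None] * n for _ in range(n)]
        worst = [[None] * n for _ in range(n)]
        for i in range(n):
            best[i][i] = worst[i][i] = values[i]
        for length in range(2, n + 1):               # fill the table diagonal by diagonal
            for i in range(n - length + 1):
                j = i + length - 1
                hi, lo = float('-inf'), float('inf')
                for m in range(i, j):                # try every split point (parenthesization)
                    for x in (best[i][m], worst[i][m]):
                        for y in (best[m + 1][j], worst[m + 1][j]):
                            v = x + y if ops[m] == '+' else x * y
                            hi, lo = max(hi, v), min(lo, v)
                best[i][j], worst[i][j] = hi, lo
        return best[0][n - 1]

    # Back to the polygon: break the cycle at each starting vertex and take the best result.
    def max_polygon_value(vertex_values, edge_ops):
        n = len(vertex_values)
        return max(
            max_formula_value([vertex_values[(k + i) % n] for i in range(n)],
                              [edge_ops[(k + i) % n] for i in range(n - 1)])
            for k in range(n)
        )

For example, max_polygon_value([6, 1, 2], ['+', '*', '+']) evaluates all three ways of breaking that small cycle.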
I think you can reduce the need for a brute force search. For example: if there is a chain of
x + y + z
You can replace it with a single vertex whose value is the sum; you can't do better than that. You want to do the multiplying after the addition when you're dealing with positive integers. So if everything is positive, simply reduce all + chains and then multiply.
So that leaves the cases where there are negative numbers. It seems to me that the strategy for a single negative number is pretty obvious, for two negative numbers there are a few cases (remembering that a negative times a negative is positive), and for more than 2 negative numbers it seems to get tricky :-)

Algorithm for categorizing values

What would be the best algorithm to solve this problem? I spent a couple of hours on this problem. But couldn't sort it out.
A guy purchased a necklace and planned to cut it into two pieces in such a way that the average brightness of each piece is greater than or equal to that of the original piece.
The criteria for dividing the necklace are:
1. The difference in the number of pearls between the two pearl sets should not be greater than 10% of the number of pearls in the original necklace, or 3, whichever is higher.
2. The difference between the number of pearls in the 2 necklaces should be minimal.
3. If the average brightness of either necklace is less than the average brightness of the original set, return 0 as output.
4. The two necklaces should have average brightness greater than the original one, and the difference between the average brightnesses of the two pieces should be minimal.
5. The average brightness of each piece should be greater than or equal to that of the original piece.
This problem is rather hard to do efficiently (it is essentially an NP-hard problem).
Say you had a set that averaged to X. That is, X = (x1 + x2 + ... + xn) / n.
Suppose you break it up into sets that average to S and T with s and t items in each set, respectively.
You can mathematically prove that if one of the averages, S or T, is greater than X, the other of the two must be less than X.
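To see why: n*X = s*S + t*T with n = s + t, so if S > X then t*T = n*X - s*S < n*X - s*X = t*X, which forces T < X (and symmetrically with S and T swapped).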
Hence, the two sets must have exactly the same average brightness as the original, because that's the only way your conditions are satisfiable.
Knowing this, you end up with the subset sum problem -- you want to find a subset that sums to exactly half of the sum of the entire set. That's a problem that's known to be hard (it's NP-complete). Alright, it's not exactly the same as the subset sum problem, but if you subtract the average of the full set from each of the brightness values, solving the subset sum problem will give you your answer. (Do the reverse to see how you can solve the subset sum problem from your problem.)
Hence, there's no fast way of doing this -- only approximations or exponential running times... However, maybe this will help. It mentions better running times if your weights (in your case, brightness levels) are bounded.
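For what it's worth, here is a minimal sketch of the pseudo-polynomial subset-sum check alluded to above, assuming integer brightness values and ignoring the size-balance criteria (1) and (2):

    # Pseudo-polynomial check: can the pearls be split into two sets of equal total brightness?
    # Assumes integer brightness values; the size-balance criteria are not handled here.
    def can_split_equally(brightness):
        total = sum(brightness)
        if total % 2 != 0:
            return False
        reachable = {0}                      # subset sums seen so far
        for b in brightness:
            reachable |= {s + b for s in reachable}
        return total // 2 in reachable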

Algorithm to find minimum number of weighings required to find defective ball from a set of n balls

Okay, here is a puzzle I come across a lot:
Given a set of 12 balls, one of which is defective (it weighs either less or more than the others). You are allowed to weigh 3 times to find the defective ball and also tell whether it weighs less or more.
A solution to this problem exists, but I want to know whether, given a set of n balls, we can algorithmically determine the minimum number of times you would need to use a beam balance to determine which ball is defective and how (lighter or heavier).
A wonderful algorithm by Jack Wert can be found here
http://www.cut-the-knot.org/blue/OddCoinProblems.shtml
(as described there for the case where n is of the form (3^k-3)/2, but it is generalizable to other n; see the writeup below)
A shorter version and probably more readable version of that is here
http://www.cut-the-knot.org/blue/OddCoinProblemsShort.shtml
For n of the form (3^k-3)/2, the above solution applies perfectly and the minimum number of weighings required is k.
In other cases...
Adapting Jack Wert's algorithm for all n.
In order to modify the above algorithm for all n, you can try the following (I haven't tried proving the correctness, though):
First check if n is of the form (3^k-3)/2. If it is, apply the above algorithm.
If not,
If n = 3t (i.e. n is a multiple of 3), you find the least m > n such that m is of the form (3^k-3)/2. The number of weighings required will be k. Now form the groups 1, 3, 3^2, ..., 3^(k-2), Z, where 3^(k-2) < Z < 3^(k-1) and repeat the algorithm from Jack's solution.
Note: We would also need to generalize method A (the case when we know whether the coin is heavier or lighter) for arbitrary Z.
If n = 3t+1, try to solve for 3t (keeping one ball aside). If you don't find the odd ball among 3t, the one you kept aside is defective.
If n = 3t+2, form the groups for 3t+3, but have one group not have the one ball group. If you come to the stage when you have to rotate the one ball group, you know the defective ball is one of two balls and you can then weigh one of those two balls against one of the known good balls (from among the other 3t).
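As a rough illustration of the bookkeeping above, here is a small helper (my own, not from the linked writeup) that finds the least k with (3^k - 3)/2 >= n, i.e. the number of weighings the adaptation claims for n balls:

    # Least k such that (3^k - 3) / 2 >= n (illustrative helper, assumptions as stated above).
    def weighings_needed(n):
        k = 1
        while (3 ** k - 3) // 2 < n:
            k += 1
        return k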
Trichotomy ! :)
Explanation :
Given a set of n balls, subdivide it into 3 sets A, B and C of n/3 balls each.
Compare A and B. If equal, then the defective ball is in C.
etc.
So, your minimum number of weighings is the number of times you can divide n by three (i.e. the logarithm of n in base 3).
You could use a general planning algorithm: http://www.inf.ed.ac.uk/teaching/courses/plan/

Resources