Classic Counterfeit coin puzzle with a twist - algorithm

This problem is similar to the classic coin-search puzzle of finding a single counterfeit coin, lighter than the rest, among x coins, but with a twist in the number of coins that could possibly be fake. The real coins all weigh the same, the fake coins all weigh the same, and a fake coin weighs less than a real coin.
The difference in the version I am trying to solve is that there are at most 2 counterfeits (i.e. there could be no fake coins, 1 fake coin, or 2 fake coins).
Example of my attempt:
My attempt at an earlier part of this problem was figuring out how to find the fake coins, if any, when x = 9 coins, with the restriction that the weight scale may be used at most 6 times.
I started by separating the x = 9 coins into groups of 3 and comparing the groups to check for equality (if all groups weigh the same, there are no fake coins, since with at most 2 fakes no distribution of 1 or 2 fakes among the groups can leave all three equal). From there I checked the inequalities: group 1 against group 2, and group 1 again against group 3. The possibilities are that both fake coins are in one of groups 1, 2 or 3, or that there is 1 fake coin in each of two groups (1 and 2, 1 and 3, or 2 and 3). Considering these cases I followed the comparisons, breaking the groups down into thirds until I reached the final few coins and found the fakes.
The problem is:
In a pile of x coins where x >= 3, how would I go about finding the fake coins while making sure the number of weighings is O(log_2 n)? And how would I find a generic formula for the number of weighings required to find at most 2 fakes among x coins?
Programming this is easy when I can consider all cases and compare each one at a slower speed. However, it gets significantly more difficult when the number of weighings has to be O(log_2 n). I have considered using the number of coins to decide how the comparisons will be made, such as checking whether x is odd or even: if odd, divide x-1 into 3 groups and put the last coin into a fourth group, then continue down the spiral of comparisons to finally find the fake coins, if there are any at all. I also considered dividing, say, 100 coins into 3 groups of 33, comparing the 3 groups, then discarding 1/3 of the coins and running comparisons on the 66 left. I still can't wrap my head around designing a generic algorithm to find the fake coins, or around finding a generic formula relating the number of weighings to log_2 n.
Even when n is a prime/odd number it is difficult to split the coins and check their weights in a general procedure that works for any n >= 3.
To clarify, I need help figuring out if/how my earlier attempt/example can be generalized into a comparison algorithm that applies to any number of coins x >= 3 while the number of weighings is O(log_2 n).

Since O(log_2 n) is the same as O(log_b n) for any base b>1, the recursive breakdown into thirds suggested by user #n.1.8e9 in the comments fits that requirement. There's no need to consider prime/odd numbers, as long as we can solve for some specified constant number of coins with a constant number of weighings.
Here, let 3 coins be our base case. After weighing all 3 pairings (technically, we can get away with 2 weighings), we will know exactly which of the 3 coins are light, if any. So if we split a pile of 11 coins into thirds of 3 each, we can take the 2 leftover coins, borrow any other coin from the other piles, perform the 3 weighings, and then discard the 2 leftover coins since we know their status. As long as there are O(log n) splitting stages, dealing with the leftovers won't affect the asymptotics.
The only complex part of the proof is that after the first step, we go from the '0, 1 or 2 fakes' problem to either two 'exactly 1 fake' subproblems or a '1 or 2 fakes' subproblem. Assuming you know the solution to the original 'exactly 1 fake' problem with 1 + log_3 n weighings, the proof should look fairly similar.
The procedure for 'at most 2 fake' and '1 or 2 fakes' is the same. Given n coins, we divide them into three groups of floor(n/3) coins (and treat any leftovers as we did above). If n <= 3, stop and just perform all weighings. Otherwise, given piles A, B and C, perform the 3 pair weighings (A, B), (A, C) and (B, C).
If they all weigh the same (A=B=C), there are no fake coins.
If one pile is different, there are two cases: the single pile is lighter or heavier than the other two.
If it is lighter (say, A < B, A < C, and B = C), then pile A has exactly 1 or 2 fake coins and we have a single problem instance on n/3 coins (discard piles B and C).
If the outlier is heavier (say, A = B, A < C, and B < C), then piles A and B have exactly one fake coin each, which is the standard counterfeit problem.
To prove the bound on the number of weighings, you probably need to use induction. Each recursion level requires at most 6 weighings, so an upper bound formula for the number of weighings required when there may be up to 2 fake coins remaining is T(n) = max(T(n/3), 2 * (1 + log_3(n/3))) + 6, where the 1 + log_3(n/3) term is the standard upper bound, with perfect strategy, to find one light coin among n/3 coins (taking the floor of all divisions to get integers).
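As a quick sanity check on that bound, the recurrence is easy to evaluate directly. This is my own sketch (not code from the answer above), assuming the 3-coin base case costs 3 weighings and using an integer base-3 logarithm to avoid floating-point rounding:

```python
def ilog3(n):
    """Floor of log base 3 of n, in integer arithmetic."""
    k = 0
    while n >= 3:
        n //= 3
        k += 1
    return k

def weighings_bound(n):
    """Evaluate T(n) = max(T(n//3), 2 * (1 + log3(n//3))) + 6 with T(n) = 3
    for n <= 3 (weigh all 3 pairings). The 2 * (1 + log3(n//3)) term covers
    the two 'exactly 1 fake' subproblems; the +6 is the per-level cost."""
    if n <= 3:
        return 3
    third = n // 3
    return max(weighings_bound(third), 2 * (1 + ilog3(third))) + 6
```

Each extra factor of 3 in n adds a constant 6 weighings to the bound, so the total is O(log n), as required.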

Related

Binary tree algorithm variation: How to conduct search if the each group can hold limited elements?

Consider that exactly one out of n people carries a virus. Testing can be done with blood samples, where small portions of people's blood are mixed together, but at most k people's blood can be mixed into one sample. Applying the test to a sample mixed from k people's blood costs k^(1/2).
The algorithm that I thought of is:
if n = 1, return the person
divide the n people into n/k groups, so that k people are in each group
test each group
if (group i contains the virus)
    binary search(group i)
However, I don't know how to set k to minimize the total cost. I believe the cost is minimized when k is close to n, but that is only my intuition, without any evidence.
Is there a more efficient algorithm that minimizes the cost?
The optimal way of minimizing test cost is to arrange the samples into a square: mix all the samples from each row into one pooled sample and all the samples from each column into another, then test them. Based on the results, the positive row and column are identified, and their intersection gives the specific sample.
if sqrt(n) <= k, for example n = 9:
1 2 3
4 5 6
7 8 9
now there are 6 pooled samples in total: R[1,2,3], R[4,5,6], R[7,8,9], C[1,4,7], C[2,5,8], C[3,6,9], and from the results we construct the answer. If rows R1, R2 are positive and columns C1, C2 are positive, then the positives are among 1, 2, 4, 5.
if sqrt(n) > k, then divide the n people into n/k^2 groups of k^2 elements each and do the same within each group.
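A minimal sketch of the square scheme, under the stated assumptions (exactly one carrier and sqrt(n) <= k); the function name `find_carrier` and the boolean `status` array are my own illustration, not the answerer's code:

```python
import math

def find_carrier(status, k):
    """Arrange the n people in a side x side square, pool each row and each
    column into one sample, and return the intersection of the positive row
    and the positive column. Uses 2*side pooled tests instead of n."""
    n = len(status)
    side = math.isqrt(n - 1) + 1          # smallest side with side^2 >= n
    assert side <= k, "scheme assumes sqrt(n) <= k (max pool size)"
    def test(group):                       # one pooled test: carrier present?
        return any(status[i] for i in group if i < n)
    rows = [[r * side + c for c in range(side)] for r in range(side)]
    cols = [[r * side + c for r in range(side)] for c in range(side)]
    r = next(i for i, g in enumerate(rows) if test(g))
    c = next(i for i, g in enumerate(cols) if test(g))
    return r * side + c
```

With cost sqrt(pool size) per test, about 2*sqrt(n) pooled tests of sqrt(n) samples each cost on the order of 2 * n^(3/4), versus cost n for testing everyone individually.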

Finding all heavy coins in O(log^2(n)) [duplicate]

This question already has an answer here: Given n coins, some of which are heavier, find the number of heavy coins? [closed] (1 answer). Closed 8 years ago.
Suppose you are given n coins, some of which are heavy and the others light. All heavy coins have the same weight, as do all the light coins, and the weight of a heavy coin is strictly greater than the weight of a light coin. At least one of the coins is known to be light. You are given a balance, using which you can weigh a subset of coins against another disjoint subset of coins. Show how you can determine the number of heavy coins using O(log^2 n) weighings.
I guess this must be a generalization of the problem where you have 8 coins and one of them is light. In that case you can perform a kind of binary search to find the lightest coin using a balance scale. However, it is strange that you are supposed to find several light coins at the same time. In this case, this does not seem to scale with log2 n.
See the example below in order to understand my point.
In the case of 8 coins where one of them is light, you should follow three steps:
Step 1) Divide the sample into two parts and weigh them to find the lighter part. => 1 weighing. [You get a sample of 4 coins that is lighter]
Step 2) Divide the lighter part of the previous step and weigh the two halves to find the lighter one. => +1 weighing. [You get a sample of 2 coins]
Step 3) Now you have only two coins. You only have to weigh them to find the lighter one.
Of course, the generalization to a sample of size n is trivial.
The proof that this scales with log2 n follows the binary search proof.
However, if the number of light coins is different from 1, you cannot focus only on the lighter part of the sample. [Disclaimer: maybe I am wrong, but it is difficult to say that this will scale with log2 n. For instance, consider the situation where the number of light coins scales with n (the total number of coins).]
Actually, the most beautiful solution to this problem is to find the lightest coin in only two weighings:
Step 1) Divide your sample into 3 parts. The first part has three coins, the second part also has three coins, and the last part only 2.
Step 2) Weigh the first part against the second. There are three situations:
a) The first part is lighter.
b) The second part is lighter.
c) The first and the second part have the same weight.
If (a) or (b), weigh two of the three coins from the lighter part against each other. If they have the same weight, the coin that was not weighed is the lightest; if they don't, the lighter of the two is.
If (c), just weigh the two coins in the last part to find the lighter one.
This can also be generalized, but the generalization is much more complicated.
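The two-weighing procedure above can be sketched directly. This is my own code (assuming exactly one light coin among 8, with weights[i] the weight of coin i); each call to balance is one use of the scale:

```python
def lightest_of_8(weights):
    """Find the single light coin among 8 in at most two weighings,
    following the 3/3/2 split described above."""
    def balance(a, b):                     # one weighing of group a vs group b
        wa, wb = sum(weights[i] for i in a), sum(weights[i] for i in b)
        return (wa > wb) - (wa < wb)       # -1: a is lighter, 0: equal, 1: b is lighter
    first, second, last = [0, 1, 2], [3, 4, 5], [6, 7]
    r = balance(first, second)
    if r == 0:                             # case (c): light coin is in the last pair
        return last[0] if balance([last[0]], [last[1]]) < 0 else last[1]
    group = first if r < 0 else second     # cases (a) and (b)
    s = balance([group[0]], [group[1]])
    if s == 0:
        return group[2]                    # the unweighed coin is the light one
    return group[0] if s < 0 else group[1]
```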

Finding the counterfeit coin from a list of 9 coins

Just came across this simple algorithm here to find the odd coin (which is heavier) from a list of otherwise identical coins.
I can understand that if we take 3 coins at a time, then the minimum number of weighings is just two.
How did I find the answer?
I manually tried weighing groups of 4 coins at a time, groups of 3 coins at a time, two coins at a time, and one coin at a time.
Of course, only if we take 3 coins at a time is the minimum number of steps (two) achievable.
The question is, how do you know that we have to take 3 coins ?
I am just trying to understand how to approach this puzzle instead of doing all possible combinations and then telling the answer as 2.
http://en.wikipedia.org/wiki/Balance_puzzle
In each weighing, exactly three different things can happen, so with two weighings you can only distinguish nine different overall outcomes. With each weighing, you therefore need to be guaranteed to eliminate at least two thirds of the remaining possibilities. Weighing three coins on each side is guaranteed to do this. Weighing four coins on each side could eliminate eight coins, but could also eliminate only five.
It can be strictly proved on the ground of Information Theory -- a very beautiful subject, that builds the very foundations of computer science.
There is a proof in the excellent lectures of David MacKay (sorry, I do not remember exactly which one: probably one of the first five).
How do you know that we should take three coins at a time?
The approach:
1. First find the base case.
Here the base case is the maximum number of coins from which you can find the counterfeit coin in just one weighing. You can find the counterfeit among either two or three coins with one weighing, so maximum(two, three) = three.
So the base case for this approach is dividing the available coins by taking three at a time.
2. The generalized formula is 3^n - 3 = 2X, where X is the available number of coins and n is the number of weighings required. (Remember, n should be floored, not ceiled.)
Consider X = 9 balls: 3^n = 21 and n is floored to 2.
So the algorithm to tell the minimum number of weighings would be something similar to:
algo_Min_Weight[int num_Balls]
{
    return floor(log base 3 of (num_Balls * 2 + 3));
}
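A runnable version of algo_Min_Weight, as my own direct translation of the pseudocode: it simply mirrors the answer's formula floor(log3(2 * num_Balls + 3)), using an integer loop to avoid floating-point log rounding (e.g. log3(27) coming out just below 3):

```python
def min_weighings(num_balls):
    """floor(log base 3 of (2 * num_balls + 3)), in integer arithmetic."""
    v, k = 2 * num_balls + 3, 0
    while v >= 3:
        v //= 3
        k += 1
    return k
```

min_weighings(9) gives 2, matching the 9-coin puzzle, and min_weighings(12) gives 3, matching the classic 12-ball puzzle.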

Find the smallest set group to cover all combinatory possibilities

I'm doing some exercises on combinatorics algorithms and trying to figure out how to solve the question below:
Given a group of 25 bits, set (choose) 15 (non-permutable, order does NOT matter):
n!/(k!(n-k)!) = 3,268,760
Now, for every one of these possibilities, construct a matrix where I cross every unique 25-bit member against every other 25-bit member; in the relation between them there must be at least 11 common set bits (only ones, not zeroes).
Let me try to illustrate representing it as binary data, so the first member would be:
0000000000111111111111111 (10 zeros and 15 ones) or (15 bits set on 25 bits)
0000000001011111111111111 second member
0000000001101111111111111 third member
0000000001110111111111111 and so on....
...
1111111111111110000000000 up to here: the 3,268,760th member.
Now, crossing these values over a matrix, member 1 x member 1 must have 15 bits in common; since the result is >= 11 it is a "useful" result.
For member 1 x member 2 we have 14 bits in common, so also a valid result.
Doing that for all members, crossing member 1 x member 3,268,760 results in 5 bits in common, so since it's < 11 it's not "useful".
What I need is to find out (by math or algorithm) which is the minimum number of members needed to cover all possibilities, having 11 bits in common.
In other words, a group of N members that, if tested against all others, has at least 11 bits in common somewhere over the whole 3,268,760 x 3,268,760 universe.
Using a brute-force algorithm I found out that with 81 of the 25-bit members it is possible to achieve this. But I'm guessing that this number should be smaller (something near 12).
I was trying to use a brute-force algorithm to generate all possible combinations of 12 members out of the 3,268,760, but the number of possibilities is so huge that it would take more than a hundred years to compute (about 3.156 x 10^69 combinations).
I've googled combinatorics, but there are so many fields that I don't know which one this problem fits into.
So any directions on which field of combinatorics, or any algorithm for this issue, are greatly appreciated.
PS: Just for reference. The "likeness" of two members is calculated using:
(Not(a xor b)) and a
After that, a small recursive loop counts the bits, giving the number of common bits.
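As an aside, that likeness formula simplifies: (not(a xor b)) and a keeps exactly the positions where a is 1 and b agrees, which is the same as a and b. A small sketch of this (my own code, not the asker's):

```python
def common_bits(a, b):
    """Number of set bits shared by two 25-bit members. Since
    (~(a ^ b)) & a == a & b, the likeness count is just a popcount of
    a & b, with no recursive loop needed."""
    return bin(a & b).count("1")
```

For the first two members listed above this gives 14, matching the 1 x 2 result in the question.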
EDIT: As promised (@btilly, in the comment below), here's the 'fractal' image of the relations (link to image).
The color scale ranges from red (15bits match) to green (11bits match) to black for values smaller than 10bits.
This image is just sample of the 4096 first groups.
tl;dr: you want to solve dominating set on a large, extremely symmetric graph. btilly is right that you should not expect an exact answer. If this were my problem, I would try local search starting with the greedy solution. Pick one set and try to get rid of it by changing the others. This requires data structures to keep track of which sets are covered exactly once.
EDIT: Okay, here's a better idea for a lower bound. For every k from 1 to the value of the optimal solution, there's a lower bound of [25 choose 15] * k / [maximum joint coverage of k sets]. Your bound of 12 (actually 10 by my reckoning, since you forgot some neighbors) corresponds to k = 1. Proof sketch: fix an arbitrary solution with m sets and consider the most coverage that can be obtained by k of the m. Build a fractional solution where all symmetries of the chosen k are averaged together and scaled so that each element is covered once. The cost of this solution is [25 choose 15] * k / [maximum joint coverage of those k sets], which is at least as large as the lower bound we're shooting for. It's still at least as small, however, as the original m-set solution, as the marginal returns of each set are decreasing.
Computing maximum coverage is in general hard, but there's a factor (e/(e-1))-approximation (≈ 1.58) algorithm: greedy, which it sounds as though you could implement quickly (note: you need to choose the set that covers the most uncovered other sets each time). By multiplying the greedy solution by e/(e-1), we obtain an upper bound on the maximum coverage of k elements, which suffices to power the lower bound described in the previous paragraph.
Warning: if this upper bound is larger than [25 choose 15], then k is too large!
This type of problem is extremely hard, you should not expect to be able to find the exact answer.
A greedy solution should produce a "fairly good" answer. But..how to be greedy?
The idea is to always choose the next element to be the one that matches as many currently unmatched possibilities as you can. Unfortunately, with over 3 million possible members to try to match against millions of unmatched members (note: your best next guess might already match another member in your candidate set), even choosing that next element is probably not feasible.
So we'll have to be greedy about choosing the next element. We will choose each bit to maximize the sum of the probabilities of eventually matching all of the currently unmatched elements.
For that we will need a 2-dimensional lookup table P such that P(n, m) is the probability that two random members will turn out to have at least 11 bits in common, if m of the first n bits that are 1 in the first member are also 1 in the second. This table of 225 probabilities should be precomputed.
This table can easily be computed using the following rules:
P(15, m) is 0 if m < 11, 1 otherwise.
For n < 15:
P(n, m) = P(n+1, m+1) * (15-m) / (25-n) + P(n+1, m) * (10-n+m) / (25-n)
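Those two rules are enough to fill the whole table backwards from n = 15. A sketch of the precomputation (my code, following the rules above; entries with m < n - 10 are unreachable states and are never used with nonzero weight):

```python
def build_P():
    """P[n][m] = probability that two random 15-of-25 members share at least
    11 set bits, given that m of the first n set bits of the first member
    are also set in the second, per the recurrence above."""
    P = [[0.0] * 16 for _ in range(16)]
    for m in range(16):
        P[15][m] = 1.0 if m >= 11 else 0.0   # base row: overlap fully known
    for n in range(14, -1, -1):
        for m in range(n + 1):
            # The next examined set bit of member 1 is also set in member 2
            # with probability (15-m)/(25-n), and unset with (10-n+m)/(25-n).
            P[n][m] = (P[n + 1][m + 1] * (15 - m) / (25 - n)
                       + P[n + 1][m] * (10 - n + m) / (25 - n))
    return P
```

As a check, P[0][0] is the overall probability that two random members are "useful" to each other, which a direct hypergeometric count gives as 346126 / 3268760, about 0.106.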
Now let's start with a few members that are "very far" from each other. My suggestion would be:
First 15 bits 1, rest 0.
First 10 bits 0, rest 1.
First 8 bits 1, last 7 1, rest 0.
Bits 1-4, 9-12, 16-23 are 1, rest 0.
Now starting with your universe of (25 choose 15) members, eliminate all of those that match one of the elements in your initial collection.
Next we go into the heart of the algorithm.
While there are unmatched members:
    Find the bit that appears in the most unmatched members (break ties randomly)
    Make that the first set bit of our candidate member for the group
    While the candidate member has fewer than 15 set bits:
        Let p_best = 0, bit_best = 0
        For each unset bit:
            Let p = 0
            For each unmatched member:
                p += P(n, m) where m = number of bits in common between
                     candidate member + this bit and the unmatched member,
                     and n = bits in candidate member + 1
            If p_best < p:
                p_best = p
                bit_best = this unset bit
        Set bit_best as the next bit in our candidate member
    Add the candidate member to our collection
    Remove all unmatched members that match it from the unmatched members
The list of candidate members is our answer
I have not written code, so I have no idea how good an answer this algorithm will produce. But assuming that it does no better than your current solution, for 77 candidate members (we cheated and started with 4) you have to make 271 passes through your unmatched candidates (25 to find the first bit, 24 to find the second, etc., down to 11 to find the 15th, and one more to remove the matched members). That's 20867 passes. If you have an average of 1 million unmatched members, that's on the order of 20 billion operations.
This won't be quick. But it should be computationally feasible.

Algorithm to find minimum number of weightings required to find defective ball from a set of n balls

Okay, here is a puzzle I come across often:
Given a set of 12 balls, one of which is defective (it weighs either less or more), you are allowed to weigh 3 times to find the defective one and also tell whether it weighs less or more.
The solution to this problem exists, but I want to know whether, given a set of n balls, we can algorithmically determine the minimum number of uses of a beam balance needed to find the defective ball and tell how it differs (lighter or heavier).
A wonderful algorithm by Jack Wert can be found here
http://www.cut-the-knot.org/blue/OddCoinProblems.shtml
(as described for the case where n is of the form (3^k-3)/2, but it is generalizable to other n; see the writeup below)
A shorter version and probably more readable version of that is here
http://www.cut-the-knot.org/blue/OddCoinProblemsShort.shtml
For n of the form (3^k-3)/2, the above solution applies perfectly and the minimum number of weighings required is k.
In other cases...
Adapting Jack Wert's algorithm for all n.
In order to modify the above algorithm for all n, you can try the following (I haven't tried proving the correctness, though):
First check if n is of the form (3^k-3)/2. If it is, apply the above algorithm.
If not,
If n = 3t (i.e. n is a multiple of 3), you find the least m > n such that m is of the form (3^k-3)/2. The number of weighings required will be k. Now form the groups 1, 3, 3^2, ..., 3^(k-2), Z, where 3^(k-2) < Z < 3^(k-1) and repeat the algorithm from Jack's solution.
Note: We would also need to generalize method A (the case when we know whether the coin is heavier or lighter) for arbitrary Z.
If n = 3t+1, try to solve for 3t (keeping one ball aside). If you don't find the odd ball among 3t, the one you kept aside is defective.
If n = 3t+2, form the groups for 3t+3, but have one group not have the one ball group. If you come to the stage when you have to rotate the one ball group, you know the defective ball is one of two balls and you can then weigh one of those two balls against one of the known good balls (from among the other 3t).
Trichotomy ! :)
Explanation :
Given a set of n balls, subdivide it into 3 sets A, B and C of n/3 balls each.
Compare A and B. If equal, then the defective ball is in C.
etc.
So your minimum number of weighings is the number of times you can divide n by three, i.e. log base 3 of n.
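For the simpler variant where the defective ball is known to be lighter, the trichotomy is easy to sketch (my own code, not from this answer; the original unknown heavier-or-lighter puzzle needs the more careful grouping of Jack Wert's method above):

```python
def find_light(weights):
    """Repeatedly split the candidates into three near-equal piles, weigh
    the first two, and keep the lighter pile (or the third if they balance).
    Returns (index of the light ball, number of weighings used)."""
    coins, weighings = list(range(len(weights))), 0
    def balance(a, b):                    # one weighing of pile a vs pile b
        nonlocal weighings
        weighings += 1
        wa, wb = sum(weights[i] for i in a), sum(weights[i] for i in b)
        return (wa > wb) - (wa < wb)
    while len(coins) > 1:
        k = (len(coins) + 2) // 3         # equal-size piles a and b
        a, b, rest = coins[:k], coins[k:2 * k], coins[2 * k:]
        r = balance(a, b)
        coins = a if r < 0 else b if r > 0 else rest
    return coins[0], weighings
```

For 12 balls this uses ceil(log3 12) = 3 weighings, matching the trichotomy count above.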
You could use a general planning algorithm: http://www.inf.ed.ac.uk/teaching/courses/plan/
