Classic problem with 12 coins (or marbles), one of which is fake. The fake coin is assumed to be lighter than a real one.
You have a balance scale to compare coins (or marbles).
One can compare the coins one by one and check all 12.
More efficiently, one can use a decrease-by-factor algorithm: divide the coins into two stacks and compare the two stacks on the scale.
The big O of decrease by factor 2 is O(log2 n).
There is a more efficient decrease-by-factor-3 (log3 n) algorithm, but I have not yet found it.
If anyone can explain it and why it is more efficient, let me know.
The main idea here is to use more knowledge of the problem when setting up your test: if you separate the coins into three stacks instead of two and weigh two of those stacks (each containing the same number of coins), there are only two cases, given that the single fake coin can be in only one of these three stacks:
1.) Both sides have identical weight: the fake coin cannot be in the two stacks weighed, so it must be in the 3rd; you reduced the problem space to 1/3.
2.) One side weighs more than the other: since there is only one fake coin, it must be on the side that weighs less; again you reduced the problem space to 1/3.
Rinse and repeat.
The "decrease by 3" algorithm works on the principle that you can reduce the set of marbles you have to compare by 1/3 by doing only 1 comparison.
Split the marbles into 3 groups, and weight 2 of them, say group 1 and 2.
If weight of group 1 == weight of group 2 then group 3 has the fake marble
If weight of group 1 < weight of group 2 then group 1 has the fake marble
If weight of group 1 > weight of group 2 then group 2 has the fake marble
Of course this assumes that the original set of marbles can be split evenly into 3 groups. If that's not the case then split into 3 groups evenly ( each group has the same number of marbles ) and keep the remaining ( 0,1, or 2 ) marbles on the side and add them back to the group of marbles you have to consider after the comparison step.
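For illustration, here is a minimal Python sketch of that procedure. The weigh(a, b) helper is my assumption, not part of the original problem: it stands in for the balance scale and returns a negative number if pile a is lighter, a positive number if pile b is lighter, and 0 if they balance.

def find_light_coin(n, weigh):
    """Return the index (0..n-1) of the single lighter coin among n coins."""
    candidates = list(range(n))
    while len(candidates) > 1:
        if len(candidates) == 2:                  # only two suspects left: weigh them directly
            a, b = candidates
            return a if weigh([a], [b]) < 0 else b
        third = len(candidates) // 3
        group1 = candidates[:third]
        group2 = candidates[third:2 * third]
        group3 = candidates[2 * third:]           # third group plus any 1-2 leftover coins
        result = weigh(group1, group2)
        if result == 0:
            candidates = group3                   # scale balanced: fake is among the unweighed coins
        elif result < 0:
            candidates = group1                   # lighter pan holds the fake
        else:
            candidates = group2
    return candidates[0]

# Example with 12 coins where coin 7 is the light one; the lambda simulates the scale.
weights = [10] * 12
weights[7] = 9
scale = lambda a, b: sum(weights[i] for i in a) - sum(weights[i] for i in b)
print(find_light_coin(12, scale))   # -> 7, using about ceil(log3 12) = 3 weighings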
Related
This problem is similar to the classic search for a single counterfeit coin that weighs less than the other x coins, but with a twist in the number of coins that could possibly be fake. The real coins all weigh the same, the fake coins all weigh the same, and the fake coins weigh less than the real coins.
The difference in the version I am trying to solve is that there are at most 2 counterfeits (i.e. there can be no fake coins, 1 fake coin, or 2 fake coins).
Example of my attempt:
My attempt at an earlier part of this problem was figuring out how to find the fake coins, if any, when x = 9 coins, with the restriction that the balance scale could be used at most 6 times.
I started by separating the x = 9 coins into groups of 3 and comparing the groups to check for equality (if all groups are equal there are no fake coins, since there are at most 2 and at least 0 fake coins). From there I checked the inequalities, comparing group 1 with group 2 and group 1 again with group 3. The possibilities are that both fake coins are in group 1, 2, or 3, or that there is 1 fake coin in each of two groups (1 and 2, 1 and 3, or 2 and 3). Considering these cases I followed the comparisons, breaking the groups down into thirds until I reached the final few coins and found the fake coins.
The problem is:
In a pile of x coins where x >= 3, how would I go about finding the fake coins while making sure the number of weighings is O(log base 2 of n)? And how would I find a general formula for the number of weighings required to find at most 2 fakes among x coins?
Programming this is easy when I can consider all the cases and compare each one at a slower speed. However, it gets significantly more difficult when the number of weighings has to be O(log base 2 of n). I have considered using the number of coins to decide how the comparisons are made, such as checking whether x is odd or even and then deciding how to compare: if odd, divide x - 1 coins into 3 groups, put the last coin into a fourth group, and then continue down the spiral of comparisons until the fake coins (if any) are found. I also considered dividing, say, 100 coins into groups of 33, comparing the 3 groups, discarding 1/3 of the coins, and running the comparisons on the 66 that remain. I still can't wrap my head around designing a general procedure to find the fake coins, and then finding a general formula relating the number of weighings to log base 2 of n.
Even when n is prime or odd, it is difficult to split the coins and check their weights in a general procedure that works for any n >= 3.
To clarify, I need help figuring out if/how my earlier attempt/example can be turned into a general comparison algorithm that applies to any number of coins x >= 3, while keeping the number of weighings O(log base 2 of n).
Since O(log_2 n) is the same as O(log_b n) for any base b>1, the recursive breakdown into thirds suggested by user #n.1.8e9 in the comments fits that requirement. There's no need to consider prime/odd numbers, as long as we can solve for some specified constant number of coins with a constant number of weighings.
Here, let 3 coins be our base case. After weighing all 3 pairings (technically, we can get away with 2 weighings), we will know exactly which of the 3 coins are light, if any. So if we split a pile of 11 coins into thirds of 3 each, we can take the 2 leftover coins, borrow any other coin from the other piles, perform the 3 weighings, and then discard the 2 leftover coins since we know their status. As long as there are O(log n) splitting stages, dealing with the leftovers won't affect the asymptotics.
The only complex part of the proof is that after the first step, we go from the '0, 1 or 2 fakes' problem to either two 'exactly 1 fake' subproblems or a '1 or 2 fakes' subproblem. Assuming you know the solution to the original 'exactly 1 fake' problem with 1 + log_3 n weighings, the proof should look fairly similar.
The procedure for 'at most 2 fake' and '1 or 2 fakes' is the same. Given n coins, we divide them into three groups of floor(n/3) coins (and treat any leftovers as we did above). If n <= 3, stop and just perform all weighings. Otherwise, given piles A, B and C, perform the 3 pair weighings (A, B), (A, C) and (B, C).
If they all weigh the same (A=B=C), there are no fake coins.
If one pile is different, there are two cases: the single pile is lighter or heavier than the other two.
If it is lighter (say, A < B, A < C, and B = C), then pile A has exactly 1 or 2 fake coins and we have a single problem instance on n/3 coins (discard piles B and C).
If the outlier is heavier (say, A = B, A < C, and B < C), then piles A and B have exactly one fake coin each, which is the standard counterfeit problem.
To prove the bound on number of weighings, you probably need to use induction. Each recursion level requires at most 6 weighings, so an upper bound formula for the number of weighings required when there may be up to 2 fake coins remaining is T(n) = max(T(n/3), 2 * (1 + log_3(n/3))) + 6, where the 1 + log_3 (n/3) term is the standard upper bound with perfect strategy to find one light coin among n/3 coins (where we take the floor of all divisions to get integers).
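For concreteness, here is a rough Python sketch of this procedure. It is only a sketch under simplifying assumptions that are mine, not the answer's: n is taken to be a power of 3 so the piles always split evenly (leftover handling is omitted), and weigh(a, b) is a hypothetical helper returning the sign of weight(a) - weight(b).

def find_one_fake(coins, weigh):
    # Standard search: exactly one light fake among `coins` (a list of coin indices).
    while len(coins) > 1:
        third = (len(coins) + 2) // 3
        a, b, rest = coins[:third], coins[third:2 * third], coins[2 * third:]
        r = weigh(a, b)
        coins = a if r < 0 else b if r > 0 else rest
    return coins[0]

def find_up_to_two_fakes(coins, weigh):
    # Return the (possibly empty) list of light fakes among `coins`.
    if len(coins) <= 3:
        # Base case: weigh all pairs of single coins; any coin that ever sits
        # on the lighter pan is fake.
        light = set()
        for i in range(len(coins)):
            for j in range(i + 1, len(coins)):
                r = weigh([coins[i]], [coins[j]])
                if r < 0:
                    light.add(coins[i])
                elif r > 0:
                    light.add(coins[j])
        return sorted(light)
    third = len(coins) // 3
    a, b, c = coins[:third], coins[third:2 * third], coins[2 * third:]
    ab, ac, bc = weigh(a, b), weigh(a, c), weigh(b, c)
    if ab == 0 and ac == 0:
        return []                                        # A = B = C: no fakes at all
    if bc == 0:                                          # B = C
        if ab < 0:                                       # A lighter: it holds 1 or 2 fakes
            return find_up_to_two_fakes(a, weigh)
        return [find_one_fake(b, weigh), find_one_fake(c, weigh)]   # B and C each hold one fake
    if ac == 0:                                          # A = C
        if ab > 0:                                       # B lighter: it holds 1 or 2 fakes
            return find_up_to_two_fakes(b, weigh)
        return [find_one_fake(a, weigh), find_one_fake(c, weigh)]
    # ab == 0: A = B
    if ac > 0:                                           # C lighter: it holds 1 or 2 fakes
        return find_up_to_two_fakes(c, weigh)
    return [find_one_fake(a, weigh), find_one_fake(b, weigh)]

# Example: 27 coins, coins 4 and 20 are light.
w = [10] * 27
w[4] = w[20] = 9
scale = lambda a, b: sum(w[i] for i in a) - sum(w[i] for i in b)
print(find_up_to_two_fakes(list(range(27)), scale))      # -> [4, 20]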
Suppose you are given n coins, some of which are heavy and the others light. All heavy coins have the same weight, as do all the light coins, and the weight of a heavy coin is strictly greater than the weight of a light coin. At least one of the coins is known to be light. You are given a balance, using which you can weigh a subset of coins against another disjoint subset of coins. Show how you can determine the number of heavy coins using O(log2 n) weighings.
I guess this must be a generalization of the problem where you have 8 coins and one of them is light, so you can perform a kind of binary search to find the lightest coin using a balance scale. However, it is strange that you are supposed to find several light coins at the same time; in that case this does not seem to scale with log2 n.
See the example below in order to understand my point.
In the case of 8 coins where one of them is light, you should follow three steps:
Step 1) Divide the sample into two parts and find the lighter part. => 1 weighing. [You get a lighter sample of 4 coins.]
Step 2) Divide the lighter part from the previous step and weigh the two halves to find the lighter one. => +1 weighing. [You get a sample of 2 coins.]
Step 3) Now you have only two coins; weigh them to find the lighter one. => +1 weighing.
Of course, the generalization to a sample of size n is trivial.
The proof that this scales with log2 n follows the binary search proof.
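As a sketch, the halving procedure for a single light coin might look like this in Python. Here weigh(a, b) is an assumed balance helper returning the sign of weight(a) - weight(b), and n is taken to be a power of two so the halves always have equal size.

def find_light_by_halving(n, weigh):
    # Binary-search style hunt for the single light coin among n coins.
    candidates = list(range(n))
    while len(candidates) > 1:
        half = len(candidates) // 2
        left, right = candidates[:half], candidates[half:]
        # With exactly one light coin and equal halves, the pans never balance.
        candidates = left if weigh(left, right) < 0 else right
    return candidates[0]

# Example with 8 coins where coin 5 is light: found in log2 8 = 3 weighings.
w = [10] * 8
w[5] = 9
scale = lambda a, b: sum(w[i] for i in a) - sum(w[i] for i in b)
print(find_light_by_halving(8, scale))   # -> 5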
However, if the number of light coins is different from 1, you cannot focus only on the lighter part of the sample. [Disclaimer: Maybe I am wrong, but it is difficult to say that this will scale with log2 n. For instance, consider the situation where the number of light coins scales with n, the total number of coins.]
Actually, the most beautiful solution to this problem is to find the light coin in only two weighings:
Step 1) Divide your sample into 3 parts: the first part has three coins, the second part also has three coins, and the last part only 2.
Step 2) Weigh the first part against the second. There are three situations:
a) The first part is lighter.
b) The second part is lighter.
c) The first and the second part have the same weight.
If (a) or (b), weigh two coins from the lighter part. If they have the same weight, the coin that was not weighed is the light one; if they don't have the same weight, the lighter of the two is it.
If (c), just weigh the two remaining coins to find the lighter one.
This can also be generalized, but the generalization is much more complicated.
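Here is a short Python sketch of that two-weighing procedure for exactly 8 coins; again, weigh(a, b) is an assumed helper returning the sign of weight(a) - weight(b).

def find_light_of_8(weigh):
    # Coins are numbered 0..7; split into parts of 3, 3 and 2 as described above.
    part1, part2, part3 = [0, 1, 2], [3, 4, 5], [6, 7]
    first = weigh(part1, part2)
    if first == 0:                                   # case (c): light coin is in the part of 2
        return part3[0] if weigh([part3[0]], [part3[1]]) < 0 else part3[1]
    suspects = part1 if first < 0 else part2         # cases (a) and (b)
    second = weigh([suspects[0]], [suspects[1]])
    if second < 0:
        return suspects[0]
    if second > 0:
        return suspects[1]
    return suspects[2]                               # the two weighed coins balance

# Example: coin 4 is light.
w = [10] * 8
w[4] = 9
scale = lambda a, b: sum(w[i] for i in a) - sum(w[i] for i in b)
print(find_light_of_8(scale))   # -> 4, using exactly 2 weighings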
I have encountered an algorithm question:
Fully Connection
Given n cities spread along a line, let Xi be the position of city i and Pi be its population.
Now we begin to lay cables between every two of the cities based on their distance and population. Given two cities i and j, the cost to lay cable between them is |Xi-Xj|*max(Pi,Pj). How much does it cost to lay all the cables?
For example, given:
i Xi Pi
- -- --
1 1 4
2 2 5
3 3 6
Then the total cost can be calculated as:
i j |Xi-Xj| max(Pi, Pj) Segment Cost
- - ------ ----------- ------------
1 2 1 5 5
2 3 1 6 6
1 3 2 6 12
So that the total cost is 5+6+12 = 23.
While this can clearly be done in O(n^2) time, can it be done in asymptotically less time?
I can think of a faster solution; if I am not wrong it runs in O(n log n). First, sort all the cities by Pi, which is O(n log n). Then start processing the cities in increasing order of Pi, the reason being that you then always know max(Pi, Pj) = Pi. For the current city we only add the segments that connect it to already-processed cities; segments to cities with larger Pi will be counted when those cities are processed.
The thing I was able to think of was to use several index trees to reduce the complexity of the algorithm. The first index tree counts points and can answer queries of the kind "how many processed points are to the right of Xi" in logarithmic time; call this number NR. The second index tree can answer queries of the kind "what is the sum of distances from all processed points to the right of a given x", where distances are measured to a fixed point XR that is guaranteed to be to the right of the rightmost point; call this number SUMD. Then the sum of distances from our point to all processed points on its right is NR * dist(Xi, XR) - SUMD, and these contribute (NR * dist(Xi, XR) - SUMD) * Pi to the result. Do the same for the points to the left and you get the answer. After processing the i-th point, add it to the index trees and continue.
Edit: Here is an article about binary indexed trees: http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=binaryIndexedTrees
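For what it's worth, here is a rough Python sketch of that idea. It uses two Fenwick (binary indexed) trees over the coordinate-compressed positions: one counts already-processed cities, the other accumulates the sum of their X coordinates (the answer above phrases the second tree as distances to a fixed point XR to the right of everything, which is equivalent bookkeeping). Function and variable names are mine.

class Fenwick:
    def __init__(self, n):
        self.t = [0] * (n + 1)

    def add(self, i, v):                 # 1-based index
        while i < len(self.t):
            self.t[i] += v
            i += i & -i

    def prefix(self, i):                 # sum of entries 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def total_cable_cost(xs, ps):
    order = sorted(range(len(xs)), key=lambda i: ps[i])          # increasing population
    rank = {x: r + 1 for r, x in enumerate(sorted(set(xs)))}     # coordinate compression
    m = len(rank)
    cnt, sumx = Fenwick(m), Fenwick(m)
    total = 0
    for i in order:
        r = rank[xs[i]]
        n_left, s_left = cnt.prefix(r), sumx.prefix(r)           # processed cities at or left of Xi
        n_right, s_right = cnt.prefix(m) - n_left, sumx.prefix(m) - s_left
        # Every already-processed city has population <= Pi, so max(Pi, Pj) = Pi.
        total += ps[i] * ((n_left * xs[i] - s_left) + (s_right - n_right * xs[i]))
        cnt.add(r, 1)
        sumx.add(r, xs[i])
    return total

print(total_cable_cost([1, 2, 3], [4, 5, 6]))   # -> 23, matching the example above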
This is the direct connections problem from codesprint 2.
They will be posting worked solutions to all problems within a week on their website.
(They have said "Now that the contest is over, we're totally cool with everyone discussing their solutions to the problems.")
I have this homework assignment that I think I managed to solve, but am not entirely sure as I cannot prove my solution. I would like comments on what I did, its correctness and whether or not there's a better solution.
The problem is as follows: we have N groups of people, where group i has g[i] people in it. We want to seat these people on two rows of S seats each, such that each group is either placed in a single row, in a contiguous sequence, OR, if the group has an even number of members, split in two halves placed on the two rows, with the condition that they must form a rectangle (so the halves occupy the same seat indices on both rows). What is the minimum number of seats S needed so that nobody is left standing?
Example: groups are 4 11. Minimum S is 11. We put all 4 in one row, and the 11 on the second row. Another: groups are 6 2. We split the 6 on two rows, and also the two. Minimum is therefore 4 seats.
This is what I'm thinking:
Calculate T = (sum of all groups + 1) / 2. Store the group numbers in an array, but split all the even values x in two values of x / 2 each. So 4 5 becomes 2 2 5. Now run subset sum on this vector, and find the minimum value higher than or equal to T that can be formed. That value is the minimum number of seats per row needed.
Example: 4 11 => 2 2 11 => T = (15 + 1) / 2 = 8. Minimum we can form from 2 2 11 that's >= 8 is 11, so that's the answer.
This seems to work, at least I can't find any counter example. I don't have a proof though. Intuitively, it seems to always be possible to arrange the people under the required conditions with the number of seats supplied by this algorithm.
Any hints are appreciated.
I think your solution is correct. The minimum number of seats per row in an optimal distribution is at least your T (which is mathematically obvious).
Splitting even numbers is also correct, since they have two possible arrangements; by logically putting all the "rectangular" groups of people on one end of the seat rows you can also guarantee that they will always form a proper rectangle, so that this condition is met as well.
So the question boils down to finding a sum equal or as close as possible to T (e.g. partition problem).
Minor nit: I'm not sure the proposed solution above works in the edge case where every group has 0 members, because the numerator in T = (sum of all groups + 1) / 2 is always positive, so there will never be a subset sum that is greater than or equal to T.
To get around this, maybe a modulus operation might work here. We know that we need at least n seats in a row if n is the maximal odd term, so maybe the equation should have a max(n * (n % 2)) term in it. It will come out to max(odd) or 0. Since the maximal odd term is always added to S, I think this is safe (stated boldly without proof...).
Then we want to know if we should split the even terms or not. Here's where the subset sum approach might work, but with T simply equal to SUM ALL / 2.
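For reference, a small Python sketch of the subset-sum check the question proposes (split every even group into two halves, then find the smallest reachable sum >= T). Whether T should carry the +1 is exactly the nit discussed above, so treat the formula as the question's version.

def min_seats(groups):
    parts = []
    for g in groups:
        if g % 2 == 0:
            parts += [g // 2, g // 2]        # an even group may be split across the two rows
        else:
            parts.append(g)
    target = (sum(parts) + 1) // 2           # T as defined in the question
    reachable = {0}                          # classic subset-sum DP over the parts
    for p in parts:
        reachable |= {s + p for s in reachable}
    return min(s for s in reachable if s >= target)

print(min_seats([4, 11]))   # -> 11
print(min_seats([6, 2]))    # -> 4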
I got this task last week but can't find a good algorithm to solve the problem. Here is the description:
You can build a stable tower from building cubes as long as you never put a bigger cube on top of a smaller one and never put a heavier cube on top of a lighter one. Write a program that produces the highest possible tower from n cubes.
Input:
The first row of in.txt contains the number of cubes n (1 <= n <= 1000). Each of the following n lines contains two positive integers, a cube's side length and weight (both at most 2000). No two cubes have both the same side length and the same weight.
Output:
You have to write the number of cubes m of the highest possible stable tower into out.txt, and into the second row the ordinal numbers of those m cubes from bottom to top. By the height of the tower we mean the sum of the side lengths of all its cubes (not the number of cubes). If there is more than one solution, you may output any of them.
Example input and output:
input:
5
10 3
20 5
15 6
15 1
10 2
and the output:
3
2 1 5
Limits on the program: time limit 0.2 sec, memory limit 16 MB.
I hope you can help me solve this. Thanks for any help.
The relationship "block A can be placed on top of block B" defines a partial order on the blocks. You can use Kahn's algorithm (aka "topological sort") to turn this into a total order, which you can then traverse in depth order to find the longest path.