In the following MIT lecture:
https://www.youtube.com/watch?v=JZHBa-rLrBA at 1:07:00, the professor shows how to calculate the number of probes in an unsuccessful search.
But my method of calculating doesn't match his.
My answer is:
Number of probes=
m= no. of slots in hash table
n= no. of elements (keys)
Explanation:
1. The hash function can hit an empty slot with probability (m-n)/m.
2. Or it can hit an already occupied slot with probability n/m.
3. Now, in case 2, we will have to call the hash function again, and there are two possibilities:
(i) We get a slot with no key with probability (m-n)/(m-1).
(ii) We get a slot with a key with probability (n-1)/(m-1).
4. Now repeat case 3, but with different probabilities, as shown in the image.
Why am I getting a different answer? What's wrong with it?
The problem asks us to find the expected number of probes that need to be done in a hash table.
You must do one no matter what, so you have 1 to start with. Then, there is an n / m chance that you have a collision. You got this right in your explanation.
If you have a collision, you must do another probe (and maybe even more). And so on, so the answer is the one the professor gets:
1 + (n / m)(1 + ((n - 1) / (m - 1))(1 + ...))
You don't multiply with the probability that you get an empty slot. You multiply the probability of not getting an empty slot with the number of operations you have to do if you don't get an empty slot (1, because you have to do at least one more probe in that case).
It is meaningless to multiply the probability of getting an open slot with the probability of not getting one, like you're doing. Remember that we want to find the expected number of probes that we need to do. So you multiply the number of operations (probes) at each step with the probability that you don't get what you'd ideally like to get (an empty slot), because if this event happens, then we'll have to do more operations (probes), otherwise we're done.
This is explained very well in the lecture you linked to if you watch carefully until the end.
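For a concrete check, here is a minimal Python sketch (my own, not from the lecture) that evaluates that nested expression from the inside out and compares it against a simulation of unsuccessful searches under uniform probing:

import random

def expected_probes(n, m):
    # Evaluate 1 + (n/m)(1 + ((n-1)/(m-1))(1 + ...)) starting from the innermost term.
    e = 1.0
    for j in range(n):  # j = 0 is the innermost level
        e = 1.0 + (j + 1) / (m - n + 1 + j) * e
    return e

def simulated_probes(n, m, trials=100_000):
    # Average number of probes of an unsuccessful search with a random probe order.
    total = 0
    slots = list(range(m))
    for _ in range(trials):
        occupied = set(random.sample(slots, n))  # n keys already stored
        for probes, s in enumerate(random.sample(slots, m), start=1):
            if s not in occupied:  # the first empty slot ends the search
                total += probes
                break
    return total / trials

print(expected_probes(600, 1000))   # formula: about 2.5 probes
print(simulated_probes(600, 1000))  # the simulation should land close to that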
A few days ago I came across the following problem at a contest my university was holding:
Given the history of guesses in a mastermind game using digits instead
of colors in a form of pairs (x, y) where x is the guess and y is how
many digits were placed correctly, guess the correct number. Each
input is guaranteed to have a solution.
Example for a 5-place game:
(90342, 2)
(70794, 0)
(39458, 2)
(34109, 1)
(51545, 2)
(12531, 1)
Should yield:
39542
Create an algorithm to correctly guess the result in an n-place
mastermind given the history.
So the only idea I had was to track the probability of each digit being correct in each position, based on the correct shots in a given guess, and then try to generate the most probable number, then the next one, and so on. For example, 9 would be 40% likely for the first place (because the first guess has 2/5 = 40% correct), 7 would be impossible, and so on. Then we do the same for the other places in the number and finally generate the number with the highest probability to test against all the guesses.
The problem with this approach, though, is that generating the next most probable number, and the next, and so on (as we probably won't score a home run on the first try) is really non-trivial, or at least I don't see an easy way of implementing it. And since this contest had something like a 90-minute time frame and this wasn't the only problem, I don't think something so elaborate was the anticipated approach.
So how could one do it easier?
An approach that comes to mind is to write a routine that can generally filter an enumeration of combinations based on a particular try and its score.
So for your example, you would initially pick one of the most constrained tries (one of the ones with a score of 2) as a filter and then enumerate all combinations that satisfy it.
The output from that enumeration is then used as input to a filter run for the next unprocessed try, and so on, until the list of tries is exhausted.
The candidate try that comes out of the final enumeration is the solution.
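As a rough Python sketch of that pipeline (the function names and structure are mine, assuming each try is given as a digit string with its score):

from itertools import product

def matches(a, b):
    # Count the positions where the two digit strings agree.
    return sum(x == y for x, y in zip(a, b))

def solve(history, places=5):
    # Use one of the most constrained tries (highest score) as the initial filter
    # and enumerate every combination that satisfies it.
    history = sorted(history, key=lambda t: t[1], reverse=True)
    first_guess, first_score = history[0]
    candidates = [''.join(d) for d in product('0123456789', repeat=places)]
    candidates = [c for c in candidates if matches(c, first_guess) == first_score]
    # Each remaining try then filters the output of the previous enumeration.
    for guess, score in history[1:]:
        candidates = [c for c in candidates if matches(c, guess) == score]
    return candidates

history = [('90342', 2), ('70794', 0), ('39458', 2),
           ('34109', 1), ('51545', 2), ('12531', 1)]
print(solve(history))   # the example history should leave ['39542']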
Probability does not apply here. In this case a number is either right or wrong. There is no "partially right".
For 5 digits you can just test all 100,000 possible numbers against the given history and throw out the ones where the matches are incorrect. You will be left with a list of numbers that meet the criteria; if there is exactly one in the list, then you have solved it. (This approach becomes impractical for larger numbers at some point.)
Python code, where matches counts the matching digits of its two parameters:
for k in range(100000):
    if (matches(k, 90342) == 2 and matches(k, 70794) == 0 and matches(k, 39458) == 2
            and matches(k, 34109) == 1 and matches(k, 51545) == 2 and matches(k, 12531) == 1):
        print(k)
prints:
39542
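The matches helper is left undefined above; one possible way to write it (an assumption on my part: compare both arguments as zero-padded 5-digit strings) is:

def matches(a, b, places=5):
    # Compare the zero-padded digit strings of a and b position by position.
    sa, sb = str(a).zfill(places), str(b).zfill(places)
    return sum(x == y for x, y in zip(sa, sb))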
I am trying to find a dynamic programming approach to multiply each element in a linear sequence by the following element, doing the same for each pair of elements, and then find the sum of all of the products. Note that not just any two elements can be multiplied: it must be the first with the second, the third with the fourth, and so on. All I know about the linear sequence is that it has an even number of elements.
I assume I have to store the numbers being multiplied and their product each time, then check whether some other "multipliable" pair of elements has already had its product calculated (perhaps because the elements have opposite signs compared to the current pair).
However, by my understanding of a linear sequence, the values must be increasing or decreasing by the same amount each time. And since there is an even number of elements, I don't believe it is possible for two "multipliable" pairs to have the same product (even with opposite signs), due to the issue shown in the following example:
Sequence: { -2, -1, 0, 1, 2, 3 }
Pairs: -2*-1, 0*1, 2*3
Clearly, since there is an even number of elements, the only case in which the same multiplication may occur more than once is if the elements are increasing/decreasing by 0 each time.
I fail to see how this is a dynamic programming question, and if anyone could clarify, it would be greatly appreciated!
A quick Google search for "define linear sequence" gave:
A number pattern which increases (or decreases) by the same amount each time is called a linear sequence. The amount it increases or decreases by is known as the common difference.
In your case the common difference is 1. And you are not considering any other case.
The same multiplication may occur in the following sequence
Sequence = {-3, -1, 1, 3}
Pairs = -3 * -1 , 1 * 3
with a common difference of 2.
However, this does not necessarily have to be solved by dynamic programming. You can just iterate over the numbers, store the product of each pair in a set (as a set contains only unique values), and then find the sum of the set.
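A minimal sketch of that suggestion (the function name is mine), assuming the sequence is given as a Python list with an even number of elements:

def sum_of_distinct_pair_products(seq):
    products = set()                    # a set keeps only unique products
    for i in range(0, len(seq), 2):     # pair element 0 with 1, 2 with 3, ...
        products.add(seq[i] * seq[i + 1])
    return sum(products)

print(sum_of_distinct_pair_products([-3, -1, 1, 3]))   # -3*-1 and 1*3 both give 3, counted once: 3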
Probably not what you are looking for, but I've found a closed solution for the problem.
Suppose we observe the first two numbers. Denote the first number by a and the difference between the two numbers by d. Say there are 2n numbers in the whole sequence in total. Then the sum you defined is:
sum = n*a^2 + n(2n - 1)*a*d + n(4n^2 - 3n - 1)*d^2 / 3
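A quick sanity check of that closed form against a direct computation (a sketch; the function names are mine):

def direct_sum(a, d, n):
    # Sum the n adjacent-pair products of the 2n-term sequence a, a+d, a+2d, ...
    seq = [a + k * d for k in range(2 * n)]
    return sum(seq[i] * seq[i + 1] for i in range(0, 2 * n, 2))

def closed_form(a, d, n):
    return n * a * a + n * (2 * n - 1) * a * d + n * (4 * n * n - 3 * n - 1) * d * d // 3

for a, d, n in [(-2, 1, 3), (-3, 2, 2), (5, 4, 6)]:
    assert direct_sum(a, d, n) == closed_form(a, d, n)
print("closed form agrees with the direct computation")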
That aside, I also failed to see how this is a dynamic programming problem, or at least one where a dynamic programming approach does much. It is not likely that the sequence will go from negative to positive at all, and even then the chance that you will see repeated products decreases the bigger the difference between two numbers is. Furthermore, multiplication is so fast that the overhead of fetching cached products from a data structure might be more expensive (a mul instruction is probably faster than a lw).
I came across a question that was asked in an interview:
Q - Imagine you are given a really large stream of data elements (queries on google searches in May, products bought at Walmart during the Christmas season, names in a phone book, whatever). Your goal is to efficiently return a random sample of 1,000 elements evenly distributed from the original stream. How would you do it?
I am looking for -
What does random sampling of a data set mean?
(I mean, I could simply do a coin toss and select a string from the input if the outcome is 1, and do this until I have 1,000 samples.)
What are the things I need to consider while doing so? For example, is it better to pick a contiguous run of 1,000 strings at random, or to pick one string at a time, as with a coin toss?
This may be a vague question. I tried to google "randomly sample data set" but did not find any relevant results.
A binary sample/don't-sample decision may not be the right answer either: suppose you want 1,000 strings and you sample via coin toss; then you will be done after visiting roughly 2,000 strings. What about the rest of the strings?
I read this post - http://gregable.com/2007/10/reservoir-sampling.html - which answers this question quite clearly.
Here is a summary -
SIMPLE SOLUTION
Assign a random number to every element as you see them in the stream, and then always keep the top 1,000 numbered elements at all times.
RESERVOIR SAMPLING
Make a reservoir (array) of 1,000 elements and fill it with the first 1,000 elements in your stream.
Start with i = 1,001. With what probability after the 1,001st step should element 1,001 (or any element, for that matter) be in the set of 1,000 elements? The answer is easy: 1,000/1,001. So, generate a random number between 0 and 1, and if it is less than 1,000/1,001, you should take element 1,001.
If you choose to add it, then replace any element (say element #2) in the reservoir chosen randomly. The element #2 is definitely in the reservoir at step 1,000 and the probability of it getting removed is the probability of element 1,001 getting selected multiplied by the probability of #2 getting randomly chosen as the replacement candidate. That probability is 1,000/1,001 * 1/1,000 = 1/1,001. So, the probability that #2 survives this round is 1 - that or 1,000/1,001.
This can be extended for the i'th round - keep the i'th element with probability 1,000/i and if you choose to keep it, replace a random element from the reservoir. The probability any element before this step being in the reservoir is 1,000/(i-1). The probability that they are removed is 1,000/i * 1/1,000 = 1/i. The probability that each element sticks around given that they are already in the reservoir is (i-1)/i and thus the elements' overall probability of being in the reservoir after i rounds is 1,000/(i-1) * (i-1)/i = 1,000/i.
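In Python, that scheme is only a few lines (my own sketch, not code from the linked post):

import random

def reservoir_sample(stream, k=1000):
    reservoir = []
    for i, item in enumerate(stream, start=1):     # i = number of elements seen so far
        if i <= k:
            reservoir.append(item)                 # fill the reservoir with the first k elements
        elif random.random() < k / i:              # keep the i-th element with probability k/i
            reservoir[random.randrange(k)] = item  # evict a uniformly chosen current member
    return reservoir

sample = reservoir_sample(range(1_000_000), k=1000)   # works for any one-pass iterable
print(len(sample))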
I think you have used the word infinite a bit loosely. The very premise of sampling is that every element has an equal chance to be in the sample, and that is only possible if you at least go through every element. So I would read infinite as meaning a very large number, indicating that you need a single-pass solution rather than multiple passes.
Reservoir sampling is the way to go, though the analysis from #abipc seems to be in the right direction but is not completely correct.
It is easier if we are first clear on what we want. Imagine you have N elements (N unknown) and you need to pick 1,000 of them. This means we need to devise a sampling scheme where the probability of any element being in the sample is exactly 1000/N, so each element has the same probability of being in the sample (no preference to any element based on its position in the original list). The scheme mentioned by #abipc works fine; the probability calculation goes like this -
After the first step you have 1,001 elements, so we need to pick each element with probability 1000/1001. We pick the 1,001st element with exactly that probability, so that is fine. Now we also need to show that every other element has the same probability of being in the sample.
p(any other element remaining in the sample)
    = 1 - p(that element is removed from the sample)
    = 1 - p(1001st element is selected) * p(that element is picked to be removed)
    = 1 - (1000/1001) * (1/1000) = 1000/1001
Great, so now we have proven that every element has probability 1000/1001 of being in the sample. This precise argument can be extended to the ith step using induction.
As far as I know, this class of algorithms is called reservoir sampling algorithms.
I know one of them from data mining, but I don't know its name:
1. Collect the first S elements in your storage, which has maximum size S.
2. Suppose the next element of the stream has number N.
3. With probability S/N keep the new element, otherwise discard it.
4. If you kept element N, replace one of the elements in the sample of size S, picked uniformly at random.
5. Set N = N + 1, get the next element, and go to step 2.
It can be theoretically proved that at any step of such stream processing, your storage of size S contains each element seen so far with equal probability S/N_you_have_seen.
So, for example, S = 10 and N_you_have_seen = 10^6: S is a finite number, while N_you_have_seen can grow without bound.
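To see that claim empirically, here is a small sketch (mine) that repeats the procedure many times with S = 3 over a stream of 10 elements and prints how often each element ends up in the storage; every frequency should come out close to S/N = 0.3:

import random
from collections import Counter

def one_pass(stream, S):
    storage = []
    for N, x in enumerate(stream, start=1):
        if N <= S:
            storage.append(x)                 # collect the first S elements
        elif random.random() < S / N:         # keep with probability S/N
            storage[random.randrange(S)] = x  # replace a uniformly chosen element
    return storage

trials = 100_000
counts = Counter()
for _ in range(trials):
    counts.update(one_pass(range(10), S=3))
for element in range(10):
    print(element, round(counts[element] / trials, 3))   # each should be near 0.3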
In writing some code today, I have happened upon a circumstance that has caused me to write a binary search of a kind I have never seen before. Does this binary search have a name, and is it really a "binary" search?
Motivation
First of all, in order to make the search easier to understand, I will explain the use case that spawned its creation.
Say you have a list of ordered numbers. You are asked to find the index of the number in the list that is closest to x.
int findIndexClosestTo(int x);
The calls to findIndexClosestTo() always follow this rule:
If the last result of findIndexClosestTo() was i, then indices closer to i have greater probability of being the result of the current call to findIndexClosestTo().
In other words, the index we need to find this time is more likely to be closer to the last one we found than further from it.
For an example, imagine a simulated boy that walks left and right on the screen. If we are often querying the index of the boy's location, it is likely he is somewhere near the last place we found him.
Algorithm
Given the case above, we know the last result of findIndexClosestTo() was i (if this is actually the first time the function has been called, i defaults to the middle index of the list, for simplicity, although a separate binary search to find the result of the first call would actually be faster), and the function has been called again. Given the new number x, we follow this algorithm to find its index:
1. interval = 1;
2. Is the number we're looking for, x, positioned at i? If so, return i.
3. If not, determine whether x is above or below i. (Remember, the list is sorted.)
4. Move interval indices in the direction of x.
5. If we have found x at our new location, return that location.
6. Double interval. (i.e. interval *= 2)
7. If we have passed x, go back interval indices, set interval = 1, go to 4.
Given the probability rule stated above (under the Motivation header), this appears to me to be the most efficient way to find the correct index. Do you know of a faster way?
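In code, the steps above look roughly like this (my own Python sketch; it assumes the list is sorted with distinct values and actually contains x, and it reads step 7 as falling back to the last position probed before the overshoot):

def probe_from_hint(lst, x, i):
    if lst[i] == x:                                # step 2: already at the answer
        return i
    direction = 1 if x > lst[i] else -1            # step 3: which side of i is x on?
    anchor, interval = i, 1                        # step 1
    while True:
        j = anchor + direction * interval          # step 4: move interval indices toward x
        j = max(0, min(len(lst) - 1, j))           # clamp to the list (an added safety check)
        if lst[j] == x:                            # step 5: found at the new location
            return j
        passed = lst[j] > x if direction == 1 else lst[j] < x
        if passed:                                 # step 7: we have passed x
            anchor += direction * (interval // 2)  # fall back to the last probe before the overshoot
            interval = 1
        else:
            interval *= 2                          # step 6: double the interval

lst = list(range(0, 1000, 2))                      # sorted, distinct
print(probe_from_hint(lst, 500, 240))              # 250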
In the worst case, your algorithm is O((log n)^2).
Suppose you start at 0 (with interval = 1), and the value you seek actually resides at location 2^n - 1.
First you will check 1, 2, 4, 8, ..., 2^(n-1), 2^n. Whoops, that overshoots, so go back to 2^(n-1).
Next you check 2^(n-1)+1, 2^(n-1)+2, ..., 2^(n-1)+2^(n-2), 2^(n-1)+2^(n-1). That last term is 2^n, so whoops, that overshot again. Go back to 2^(n-1) + 2^(n-2).
And so on, until you finally reach 2^(n-1) + 2^(n-2) + ... + 1 == 2^n - 1.
The first overshoot took log n steps. The next took (log n)-1 steps. The next took (log n) - 2 steps. And so on.
So, worst case, you took 1 + 2 + 3 + ... + log n == O((log n)^2) steps.
A better idea, I think, is to switch to traditional binary search once you overshoot the first time. That will preserve the O(log n) worst case performance of the algorithm, while tending to be a little faster when the target really is nearby.
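That suggestion (double the step until the target is bracketed, then finish with an ordinary binary search) could look like the following sketch, with bisect_left doing the final binary search:

from bisect import bisect_left

def gallop_then_bisect(lst, x, i):
    # Gallop outward from the hint index i until lst[lo..hi] brackets x,
    # then binary-search only that range. Assumes lst is sorted ascending.
    lo = hi = i
    interval = 1
    if lst[i] < x:                                   # x lies to the right of the hint
        while hi < len(lst) - 1 and lst[hi] < x:
            lo, hi = hi, min(hi + interval, len(lst) - 1)
            interval *= 2
    else:                                            # x lies at or to the left of the hint
        while lo > 0 and lst[lo] > x:
            hi, lo = lo, max(lo - interval, 0)
            interval *= 2
    return bisect_left(lst, x, lo, hi + 1)           # index of x (or its insertion point)

lst = list(range(0, 1000, 2))
print(gallop_then_bisect(lst, 500, 240))             # 250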
I do not know a name for this algorithm, but I do like it. (By a bizarre coincidence, I could have used it yesterday. Really.)
What you are doing is (IMHO) a version of Interpolation search
In an interpolation search you assume the numbers are evenly distributed, and then you try to guess the location of a number from the first and last numbers and the length of the array.
In your case, you are modifying the interpolation algorithm with the assumption that the key is very close to the last number you searched for.
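For reference, a bare-bones interpolation search looks something like this sketch (assuming a sorted list of numbers):

def interpolation_search(lst, x):
    lo, hi = 0, len(lst) - 1
    while lo <= hi and lst[lo] <= x <= lst[hi]:
        if lst[hi] == lst[lo]:                   # flat range: avoid dividing by zero
            break
        # Guess where x "should" sit if the values were spread evenly between lo and hi.
        mid = lo + (x - lst[lo]) * (hi - lo) // (lst[hi] - lst[lo])
        if lst[mid] == x:
            return mid
        if lst[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return lo if lo <= hi and lst[lo] == x else -1   # -1 means not found

print(interpolation_search(list(range(0, 100, 5)), 35))   # 7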
Also note that your algorithm is similar to the one TCP uses to find the optimal packet size (I don't remember its name):
1. Start slow.
2. Double the interval.
3. If a packet fails, restart from the last packet size that succeeded (or restart from the default packet size).
Your routine is typical of interpolation routines. You don't lose much if you call it with random numbers (~ standard binary search), but if you call it with slowly increasing numbers, it won't take long to find the correct index.
This is therefore a sensible default behavior for searching an ordered table for interpolation purposes.
This method is discussed with great length in Numerical Recipes 3rd edition, section 3.1.
This is talking off the top of my head, so I've nothing to back it up but gut feeling!
At step 7, if we've passed x, it may be faster to halve interval, and head back towards x - effectively, interval = -(interval / 2), rather than resetting interval to 1.
I'll have to sketch out a few numbers on paper, though...
Edit: Apologies - I'm talking nonsense above: ignore me! (And I'll go away and have a proper think about it this time...)
I have a 2D array that holds unique integers - this represents a physical container with rows/columns - in each position there is a vial.
I know the integers that should be in the array and where they should be located.
My array however is shuffled with potentially many/all unique integers in the wrong positions.
I now need to sort the array - however, this maps to a physical process, and therefore I really want to reduce the number of sort steps involved, because of the potential for human error.
Is this just a plain sort, or is there a more specific name for this scenario? Are there well-known solutions?
My colleague has suggested just creating a list of instructions like "swap [1][1] with [2][1]", which seems reasonable, however I can't quite get my head around whether the order of swaps is important.
All assistance gratefully received.
If you really can tell, just by looking at a vial, where it belongs, the shortest way is to take the first vial that is in the wrong place out and put it where it belongs, take whatever was there and put it in its proper place, and so on, until you happen to reach the vial that belongs where you originally made the "hole". Then repeat.
Since you take out each vial at most once, and only if it is in the wrong place, I think that this is optimal with respect to physical motion.
Sorting algorithms are analysed by the number of comparisons and the number of swaps required. Since for a human operator the cost of a swap is much higher than the cost of a comparison, you want a 2D sort that minimizes the number of swaps required.
"I can't quite get my head around if the order of swaps is important."
In general, yes, it is. For a simple example, consider the starting list of 3 elements: X Y Z.
The result of "swap 1 with 2, then 2 with 3" is Y Z X.
The result of "swap 2 with 3, then 1 with 2" is Z X Y.
The list of swaps you come up with will probably have (at most) one swap for each element that is out of place, swapping that element with whatever is in its correct place. So, for example, you might swap [0][0] with wherever its element belongs. Unless the place where it belongs happens to contain the element that belongs in [0][0], your next swap could again be [0][0] with wherever that element belongs. So the order of swaps certainly is important - this second swap is only correct because the first swap has already happened and moved some particular element into [0][0].
If two consecutive swaps are disjoint, though, then you can reverse their order: (1 2)(3 4) is equivalent to (3 4)(1 2), where (x y) is a mathematical notation for "swap x with y".
It's a theorem that any permutation can be written as a set of disjoint cycles. This decomposition into cycles is unique apart from which element in your cycle you choose to list first, and the order the cycles are listed, both of which are irrelevant to the result. The notation (1 2 3) means "move 1 to 2, 2 to 3, and 3 to 1", and is a 3-cycle. It's exactly the same as (2 3 1), but different from (1 3 2).
Depending how your human operative works, it might well be more efficient for them to carry out an n-cycle rather than an equivalent n swaps. So once you know how to sort your array (that is, you know what permutation must be performed on it to get it into order), it may be that the best thing to do is to generate that decomposition.
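To illustrate, here is a sketch (my own names and structure) that compares the current grid with the target grid and emits the cycles to follow - which is exactly the take-a-vial-out-and-chase-the-hole procedure from the first answer:

def move_cycles(current, target):
    # Return position cycles: for a cycle (p1, p2, ..., pk), lift the vial at p1,
    # place it at p2 (where it belongs), place the displaced vial at p3, and so on,
    # until the vial that belongs at p1 fills the hole. current and target are 2D
    # lists of the same shape holding the same set of unique integers.
    rows, cols = len(current), len(current[0])
    belongs_at = {target[r][c]: (r, c) for r in range(rows) for c in range(cols)}
    visited, cycles = set(), []
    for r in range(rows):
        for c in range(cols):
            pos = (r, c)
            if pos in visited or current[r][c] == target[r][c]:
                continue                      # already handled, or already in place
            cycle = []
            while pos not in visited:
                visited.add(pos)
                cycle.append(pos)
                pos = belongs_at[current[pos[0]][pos[1]]]   # follow the displaced vial
            cycles.append(cycle)
    return cycles

current = [[2, 1], [4, 3]]
target  = [[1, 2], [3, 4]]
print(move_cycles(current, target))   # [[(0, 0), (0, 1)], [(1, 0), (1, 1)]]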