Shortest possible sequence to get to a number - algorithm

The first element of the sequence is 1.
i.e. A[0]=1
A[i+1] can either be 2A[i] or A[i]+1.
We have to find the shortest possible sequence.
e.g.:
If N is 18
A[0]=1, A[1]=2, A[2]=4, A[3]=8, A[4]=9, A[5]=18
So my code was basically:
int count = 0;
for (int i = N; i != 1;) {
    if (i % 2 == 0) {
        i /= 2;
        ++count;
    } else {
        --i;
        ++count;
    }
}
return count;
As you can see, this algorithm is pretty straightforward and will return the shortest length. However, the worst-case time complexity is O(N).
Is there any way to do it so that it is O(logN)?

Your algorithm is already O(log(N)). To see this, your algorithm can be rewritten like this:
int count = 0;
for (int i = N; i != 1;) {
    if (i % 2 == 1) {
        --i;
        ++count;
    }
    i /= 2;
    ++count;
}
return count;
For each bit, you are either dividing by two, or subtracting one and dividing by two. Since the number of operations you do per bit does not depend on the size of N, the time is O(log(N)).
Some details on why this works
Think in terms of binary: multiplying by two shifts the bits left by one, and adding one sets the right-most bit to one. So you can find the sequence of operations just by reading off the binary value of N.
N=18 is 10010 in binary, so we have
1 = starting value: 1
0 = multiply by 2 : 2
0 = multiply by 2 : 4
1 = multiply by 2 and add one: 8,9
0 = multiply by 2 : 18
The solution isn't necessarily unique, but it is always at least as short as any other solution. To see why, observe that adding one a second time in a row (i.e. to a value that is already odd) carries into bit 1 and zeroes out bit 0:
xxx01 -> xxx10
But you could have gotten the same result by adding the one before the previous doubling instead ((x+1)*2 = 2x+2), using one operation fewer, so there is no advantage to adding one twice in a row. Therefore, an optimal sequence of operations consists of repeatedly multiplying by two, optionally adding one between consecutive multiplications.
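Not part of the original answer, but here is a minimal Python sketch of the same idea, counting the operations by reading the bits of N directly (the function name is mine):

def shortest_sequence_length(n):
    # The leading 1 bit is the starting value A[0] = 1 and costs nothing.
    count = 0
    for bit in bin(n)[3:]:         # bin(18) == '0b10010'; skip '0b' and the leading 1
        count += 1                 # every remaining bit costs one doubling
        if bit == '1':
            count += 1             # a set bit costs an extra "+1" step
    return count

print(shortest_sequence_length(18))   # 5, matching 1 -> 2 -> 4 -> 8 -> 9 -> 18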


Proper Shuffle: Code jam [2014]

I want to share Code Jam problem C from round 1A, called Proper Shuffle. I am interested in seeing your solutions.
Here is the problem:
A permutation of size N is a sequence of N numbers, each between 0 and N-1, where each number appears exactly once. They may appear in any order.
There are many (N factorial, to be precise, but it doesn't matter in this problem) permutations of size N. Sometimes we just want to pick one at random, and of course we want to pick one at random uniformly: each permutation of size N should have the same probability of being chosen.
Here's the pseudocode for one of the possible algorithms to achieve that goal (we'll call it the good algorithm below):
for k in 0 .. N-1:
    a[k] = k
for k in 0 .. N-1:
    p = randint(k .. N-1)
    swap(a[k], a[p])
In the above code, randint(a .. b) returns a uniform random integer between a and b, inclusive.
Here's the same algorithm in words. We start with the identity permutation: all numbers from 0 to N-1 written in increasing order. Then, for each k between 0 and N-1, inclusive, we pick an independent uniform random integer pk between k and N-1, inclusive, and swap the element at position k (0-based) in our permutation with the element at position pk.
Here's an example for N=4. We start with the identity permutation:
0 1 2 3
Now k=0, and we pick a random p0 between 0 and 3, inclusive. Let's say we picked 2. We swap the 0th and 2nd elements, and our permutation becomes:
2 1 0 3
Now k=1, and we pick a random p1 between 1 and 3, inclusive. Let's say we picked 2 again. We swap the 1st and 2nd elements, and our permutation becomes:
2 0 1 3
Now k=2, and we pick a random p2 between 2 and 3, inclusive. Let's say we picked 3. We swap the 2nd and 3rd elements, and our permutation becomes:
2 0 3 1
Now k=3, and we pick a random p3 between 3 and 3, inclusive. The only choice is 3. We swap the 3rd and 3rd elements, which means that the permutation doesn't change:
2 0 3 1
The process ends now, and this is our random permutation.
There are many other algorithms that produce a random permutation uniformly. However, there are also many algorithms to generate a random permutation that look very similar to this algorithm, but are not uniform — some permutations are more likely to be produced by those algorithms than others.
Here's one bad algorithm of this type. Take the good algorithm above, but at each step, instead of picking pk randomly between k and N-1, inclusive, let's pick it randomly between 0 and N-1, inclusive. This is such a small change, but now some permutations are more likely to appear than others!
Here's the pseudocode for this algorithm (we'll call it the bad algorithm below):
for k in 0 .. N-1:
    a[k] = k
for k in 0 .. N-1:
    p = randint(0 .. N-1)
    swap(a[k], a[p])
In each test case, you will be given a permutation that was generated in the following way: first, we choose either the good or the bad algorithm described above, each with probability 50%. Then, we generate a permutation using the chosen algorithm. Can you guess which algorithm was chosen just by looking at the permutation?
Solving this problem
This problem is a bit unusual for Code Jam. You will be given T = 120 permutations of N = 1000 numbers each, and should print an answer for each permutation – this part is as usual. However, you don't need to get all of the answers correct! Your solution will be considered correct if your answers for at least G = 109 cases are correct. However, you must follow the output format, even for cases in which your answer doesn't turn out to be correct. The only thing that can be wrong on any case, yet still allow you to be judged correct, is swapping GOOD for BAD or vice versa; but you should still print either GOOD or BAD for each case.
It is guaranteed that the permutations given to you were generated according to the method above, and that they were generated independently of each other.
This problem involves randomness, and thus it might happen that even the best possible solution doesn't make 109 correct guesses for a certain input, as both the good and the bad algorithms can generate any permutation. Because of that, this problem doesn't have a Large input, and has just the Small input which you can try again if you think you got unlucky. Note that there is the usual 4-minute penalty for incorrect submissions if you later solve that input, even if the only reason you got it wrong was chance.
In our experience with this problem, that did happen (getting wrong answer just because of chance); so if you are confident that your solution should be working, but it failed, it might be a reasonable strategy to try again with the same solution which failed.
Good luck!
Input
The first line of the input gives the number of test cases, T (which will always be 120). Each test case contains two lines: the first line contains the single integer N (which will always be 1000), and the next line contains N space-separated integers - the permutation that was generated using one of the two algorithms.
Output
For each test case, output one line containing "Case #x: y", where x is the test case number (starting from 1) and y is either "GOOD" or "BAD" (without the quotes). You should output "GOOD" if you guess that the permutation was generated by the first algorithm described in the problem statement, and "BAD" if you guess that the permutation was generated by the second algorithm described in the problem statement.
Limits
T = 120
G = 109
N = 1000
Each number in the permutation will be between 0 and N-1 (inclusive), and each number from 0 to N-1 will appear exactly once in the permutation.
Sample
Input
2
3
0 1 2
3
2 0 1
Output
Case #1: BAD
Case #2: GOOD
Note
The sample input doesn't follow the limitations from the problem statement - the real input will be much bigger.
Going off of the excellent research done for "What distribution do you get from this broken random shuffle?", the following test works reasonably well. Count the number of indices k where perm[k] > k. If that count is over N/2 plus some small quantity, print BAD. Otherwise, print GOOD.
Here's my Java implementation, which passes reasonably consistently. I tuned the threshold manually, so there may be a better setting.
public class UniformOrNonuniform {
    public static void main(String[] args) {
        java.util.Random source = new java.security.SecureRandom();
        int successes = 0;
        int[] perm = new int[1000];
        for (int i = 0; i < 120; i++) {
            identityPerm(perm);
            boolean uniform = source.nextBoolean();
            if (uniform) {
                nextUniformPerm(source, perm);
            } else {
                nextNonuniformPerm(source, perm);
            }
            if (isUniformPerm(perm) == uniform) {
                successes++;
            }
        }
        System.out.println(successes);
    }

    // Fill perm with the identity permutation 0..n-1.
    private static void identityPerm(int[] perm) {
        for (int k = 0; k < perm.length; k++) {
            perm[k] = k;
        }
    }

    // The "good" algorithm: Fisher-Yates, uniform over all permutations.
    private static void nextUniformPerm(java.util.Random source, int[] perm) {
        for (int k = 0; k < perm.length; k++) {
            swap(perm, k, k + source.nextInt(perm.length - k));
        }
    }

    // The "bad" algorithm: swaps with any position, producing a biased distribution.
    private static void nextNonuniformPerm(java.util.Random source, int[] perm) {
        for (int k = 0; k < perm.length; k++) {
            swap(perm, k, source.nextInt(perm.length));
        }
    }

    private static void swap(int[] perm, int i, int j) {
        int temp = perm[i];
        perm[i] = perm[j];
        perm[j] = temp;
    }

    // Count positions where perm[k] > k; the bad shuffle inflates this statistic.
    private static boolean isUniformPerm(int[] perm) {
        int score = 0;
        for (int k = 0; k < perm.length; k++) {
            if (perm[k] > k) {
                score++;
            }
        }
        return score <= 512;
    }
}
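For completeness, here is a hypothetical sketch (not from the original answer) of how the same perm[k] > k statistic could be applied to the contest's stated input format, assuming the 512 threshold tuned above carries over to the judged data:

import sys

def classify(perm, threshold=512):
    # Count positions where the value exceeds its index; the biased shuffle
    # pushes this statistic above N/2 more often than the uniform one does.
    score = sum(1 for k, v in enumerate(perm) if v > k)
    return "GOOD" if score <= threshold else "BAD"

tokens = sys.stdin.read().split()
t = int(tokens[0])
pos = 1
for case in range(1, t + 1):
    n = int(tokens[pos])
    perm = [int(x) for x in tokens[pos + 1 : pos + 1 + n]]
    pos += 1 + n
    print("Case #%d: %s" % (case, classify(perm)))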

Sample an index of a maximal number in an array, with a probability of 1/(number of maximal numbers)

This is one of the recent interview questions that I faced: write a program to return the index of the maximum number in an array (note: the array may or may not contain multiple copies of the maximum) such that each index containing the maximum has probability 1/(number of maximums) of being returned.
Examples:
[-1 3 2 3 3]: each of positions 1, 3, 4 has probability 1/3 of being returned (the three 3s).
[2 4 6 6 3 1 6 6]: each of positions 2, 3, 6, 7 has probability 1/4 of being returned (the positions of the 6s).
First, I gave an O(n) time and O(n) space algorithm where I collect the set of max indexes and then return a random one from the set. But he asked for an O(n) time and O(1) space program, and then I came up with this:
int find_maxIndex(vector<int> a)
{
    int max = a[0];
    int max_index = 0;
    int count = 0;
    for (int i = 1; i < a.size(); i++)
    {
        if (max < a[i])
        {
            max = a[i];
            count = 0;
        }
        if (max == a[i])
        {
            count++;
            if (rand < 1/count) // rand = a random number in the range [0,1]
                max_index = i;
        }
    }
    return max_index;
}
I gave him this solution. But my doubt is whether this procedure selects one of the indexes of the max numbers with equal probability. Hope I am clear. Is there any other method to do this?
What you have is reservoir sampling! There is another easy-to-understand solution, but it requires two passes.
int find_maxIndex(vector<int> a) {
    int count = 1;
    int maxElement = a[0];
    for (int i = 1; i < a.size(); i++) {
        if (a[i] == maxElement) {
            count++;
        } else if (a[i] > maxElement) {
            count = 1;
            maxElement = a[i];
        }
    }
    // Pick one of the count occurrences uniformly at random.
    int occurrence = rand() % count + 1;
    int occur = 0;
    for (int i = 0; i < a.size(); i++) {
        if (a[i] == maxElement) {
            occur++;
            if (occur == occurrence) return i;
        }
    }
    return -1; // unreachable: the chosen occurrence always exists
}
The algorithm is pretty simple: in the first pass, find the number of times the max element occurs; then choose a random occurrence and return the index of that occurrence. It takes two passes, but it is very easy to understand.
Your algorithm works fine, and you can prove it via induction.
That is, assuming it works for any array of size N, prove it works for any array of size N+1.
So, given an array of size N+1, think of it as a sub-array of size N followed by a new element at the end. By assumption, your algorithm uniformly selects one of the max elements of the sub-array... And then it behaves as follows:
If the new element is larger than the max of the sub-array, return that element. This is obviously correct.
If the new element is less than the max of the sub-array, return the result of the algorithm on the sub-array. Also obviously correct.
The only slightly tricky part is when the new element equals the max element of the sub-array. In this case, let the number of max elements in the sub-array be k. Then, by hypothesis, your algorithm selected one of them with probability 1/k. By keeping that same element with probability k/(k+1), you make the overall probability of selecting that same element equal 1/k * k /(k+1) == 1/(k+1), as desired. You also select the last element with the same probability, so we are done.
To complete the inductive proof, just verify the algorithm works on an array of size 1. Also, for quality of implementation purposes, fix it not to crash on arrays of size zero :-)
[Update]
Incidentally, this algorithm and its proof are closely related to the Fisher-Yates shuffle (which I always thought was "Knuth's card-shuffling algorithm", but Wikipedia says I am behind the times).
The idea is sound, but the devil is in the details.
First off, what language are you using? It might make a difference. The rand() from C and C++ will return an integer, which isn't likely to be less than 1/count unless it returns 0. Even then, if 1/count is an integer division, that result is always going to be 0.
Also your count is off by 1. It starts as 1 when you get a new max, but you immediately increment it in the next if statement.
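To make those fixes concrete, here is a minimal corrected sketch in Python (not from the original answers): it uses a floating-point comparison instead of integer rand, and resets the count to 1 whenever a new maximum is found; the names are mine.

import random

def find_max_index(a):
    max_val = a[0]
    max_index = 0
    count = 1                      # occurrences of the current maximum seen so far
    for i in range(1, len(a)):
        if a[i] > max_val:
            max_val = a[i]
            max_index = i
            count = 1              # new maximum: restart the reservoir
        elif a[i] == max_val:
            count += 1
            if random.random() < 1.0 / count:   # keep i with probability 1/count
                max_index = i
    return max_index

# e.g. find_max_index([-1, 3, 2, 3, 3]) returns 1, 3 or 4, each with probability 1/3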

There is an array having 1 to 100 numbers randomly placed. But two numbers are missing from the list. What are those two numbers? [duplicate]

I had an interesting job interview experience a while back. The question started really easy:
Q1: We have a bag containing numbers 1, 2, 3, …, 100. Each number appears exactly once, so there are 100 numbers. Now one number is randomly picked out of the bag. Find the missing number.
I've heard this interview question before, of course, so I very quickly answered along the lines of:
A1: Well, the sum of the numbers 1 + 2 + 3 + … + N is (N+1)(N/2) (see Wikipedia: sum of arithmetic series). For N = 100, the sum is 5050.
Thus, if all numbers are present in the bag, the sum will be exactly 5050. Since one number is missing, the sum will be less than this, and the difference is that number. So we can find that missing number in O(N) time and O(1) space.
At this point I thought I had done well, but all of a sudden the question took an unexpected turn:
Q2: That is correct, but now how would you do this if TWO numbers are missing?
I had never seen/heard/considered this variation before, so I panicked and couldn't answer the question. The interviewer insisted on knowing my thought process, so I mentioned that perhaps we can get more information by comparing against the expected product, or perhaps doing a second pass after having gathered some information from the first pass, etc, but I really was just shooting in the dark rather than actually having a clear path to the solution.
The interviewer did try to encourage me by saying that having a second equation is indeed one way to solve the problem. At this point I was kind of upset (for not knowing the answer before hand), and asked if this is a general (read: "useful") programming technique, or if it's just a trick/gotcha answer.
The interviewer's answer surprised me: you can generalize the technique to find 3 missing numbers. In fact, you can generalize it to find k missing numbers.
Qk: If exactly k numbers are missing from the bag, how would you find them efficiently?
This was a few months ago, and I still couldn't figure out what this technique is. Obviously there's a Ω(N) time lower bound since we must scan all the numbers at least once, but the interviewer insisted that the TIME and SPACE complexity of the solving technique (minus the O(N) time input scan) is defined in k not N.
So the question here is simple:
How would you solve Q2?
How would you solve Q3?
How would you solve Qk?
Clarifications
Generally there are N numbers from 1..N, not just 1..100.
I'm not looking for the obvious set-based solution, e.g. using a bit set, encoding the presence/absence of each number by the value of a designated bit, therefore using O(N) bits of additional space. We can't afford any additional space proportional to N.
I'm also not looking for the obvious sort-first approach. This and the set-based approach are worth mentioning in an interview (they are easy to implement, and depending on N, can be very practical). I'm looking for the Holy Grail solution (which may or may not be practical to implement, but has the desired asymptotic characteristics nevertheless).
So again, of course you must scan the input in O(N), but you can only capture small amount of information (defined in terms of k not N), and must then find the k missing numbers somehow.
Here's a summary of Dimitris Andreou's link.
Remember sum of i-th powers, where i=1,2,..,k. This reduces the problem to solving the system of equations
a1 + a2 + ... + ak = b1
a1^2 + a2^2 + ... + ak^2 = b2
...
a1^k + a2^k + ... + ak^k = bk
Using Newton's identities, knowing bi allows to compute
c1 = a1 + a2 + ... + ak
c2 = a1*a2 + a1*a3 + ... + a(k-1)*ak
...
ck = a1*a2*...*ak
If you expand the polynomial (x-a1)...(x-ak), the coefficients will be exactly c1, ..., ck, up to sign - see Viète's formulas. Since every polynomial factors uniquely (the ring of polynomials over a field is a Euclidean domain), this means the ai are uniquely determined, up to permutation.
This ends a proof that remembering powers is enough to recover the numbers. For constant k, this is a good approach.
However, when k is varying, the direct approach of computing c1,...,ck is prohibitively expensive, since e.g. ck is the product of all missing numbers, of magnitude n!/(n-k)!. To overcome this, perform the computations in the field Z_q, where q is a prime such that n <= q < 2n (one exists by Bertrand's postulate). The proof doesn't need to be changed, since the formulas still hold and factorization of polynomials is still unique. You also need an algorithm for factorization over finite fields, for example Berlekamp's or Cantor-Zassenhaus.
High level pseudocode for constant k:
Compute i-th powers of given numbers
Subtract to get sums of i-th powers of unknown numbers. Call the sums bi.
Use Newton's identities to compute the coefficients ci from the bi. Basically, c1 = b1; c2 = (c1*b1 - b2)/2; see Wikipedia for the exact formulas
Factor the polynomial x^k - c1*x^(k-1) + c2*x^(k-2) - ... + (-1)^k*ck.
The roots of the polynomial are the needed numbers a1, ..., ak.
For varying k, find a prime n <= q < 2n using e.g. Miller-Rabin, and perform the steps with all numbers reduced modulo q.
EDIT: The previous version of this answer stated that instead of Z_q with q prime, it is possible to use a finite field of characteristic 2 (q = 2^(log n)). This is not the case, since Newton's formulas require division by numbers up to k.
You will find it by reading the couple of pages of Muthukrishnan - Data Stream Algorithms: Puzzle 1: Finding Missing Numbers. It shows exactly the generalization you are looking for. Probably this is what your interviewer read and why he posed these questions.
Also see sdcvvc's directly related answer, which also includes pseudocode (hurray! no need to read those tricky math formulations :)) (thanks, great work!).
We can solve Q2 by summing both the numbers themselves, and the squares of the numbers.
We can then reduce the problem to
k1 + k2 = x
k1^2 + k2^2 = y
Where x and y are how far the sums are below the expected values.
Substituting gives us:
(x-k2)^2 + k2^2 = y
Which we can then solve to determine our missing numbers.
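A minimal Python sketch of that substitution (the helper name is mine; it assumes the input really is 1..n with exactly two values removed):

import math

def two_missing(arr, n):
    x = n * (n + 1) // 2 - sum(arr)                              # k1 + k2
    y = n * (n + 1) * (2 * n + 1) // 6 - sum(v * v for v in arr) # k1^2 + k2^2
    # (x - k2)^2 + k2^2 = y  simplifies to  2*k2^2 - 2*x*k2 + (x*x - y) = 0
    k2 = (2 * x + math.isqrt(4 * x * x - 8 * (x * x - y))) // 4
    return x - k2, k2

print(two_missing([1, 2, 3, 5, 6, 8, 9, 10], 10))                # (4, 7)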
As j_random_hacker pointed out, this is quite similar to Finding duplicates in O(n) time and O(1) space, and an adaptation of my answer there works here too.
Assuming that the "bag" is represented by a 1-based array A[] of size N - k, we can solve Qk in O(N) time and O(k) additional space.
First, we extend our array A[] by k elements, so that it is now of size N. This is the O(k) additional space. We then run the following pseudo-code algorithm:
for i := n - k + 1 to n
    A[i] := A[1]
end for
for i := 1 to n - k
    while A[A[i]] != A[i]
        swap(A[i], A[A[i]])
    end while
end for
for i := 1 to n
    if A[i] != i then
        print i
    end if
end for
The first loop initialises the k extra entries to the same as the first entry in the array (this is just a convenient value that we know is already present in the array - after this step, any entries that were missing in the initial array of size N-k are still missing in the extended array).
The second loop permutes the extended array so that if element x is present at least once, then one of those entries will be at position A[x].
Note that although it has a nested loop, it still runs in O(N) time - a swap only occurs if there is an i such that A[i] != i, and each swap sets at least one element such that A[i] == i, where that wasn't true before. This means that the total number of swaps (and thus the total number of executions of the while loop body) is at most N-1.
The third loop prints those indexes of the array i that are not occupied by the value i - this means that i must have been missing.
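A direct Python transcription of the pseudocode above (mine, not the original author's): it uses 0-based indexing, so value v lives at index v-1, and it assumes a non-empty bag with k scratch slots already appended.

def find_missing_in_place(a, n, k):
    # a has length n; its first n-k entries are the bag (values from 1..n),
    # its last k entries are scratch space. Assumes the bag is non-empty.
    for i in range(n - k, n):
        a[i] = a[0]                  # pad with a value known to be present
    for i in range(n - k):
        while a[a[i] - 1] != a[i]:   # value v belongs at index v-1 (0-based)
            j = a[i] - 1
            a[i], a[j] = a[j], a[i]
    return [i + 1 for i in range(n) if a[i] != i + 1]

print(find_missing_in_place([2, 9, 3, 7, 8, 6, 5, 1, 0, 0], 10, 2))   # [4, 10]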
I asked a 4-year-old to solve this problem. He sorted the numbers and then counted along. This has a space requirement of O(kitchen floor), and it works just as easy however many balls are missing.
Not sure if it's the most efficient solution, but I would loop over all entries and use a bitset to remember which numbers are set, then test for 0 bits.
I like simple solutions - and I even believe it might be faster than calculating the sum, or the sum of squares, etc.
I haven't checked the maths, but I suspect that computing Σ(n^2) in the same pass as we compute Σ(n) would provide enough info to get two missing numbers; do Σ(n^3) as well if there are three, and so on.
The problem with solutions based on sums of numbers is they don't take into account the cost of storing and working with numbers with large exponents... in practice, for it to work for very large n, a big numbers library would be used. We can analyse the space utilisation for these algorithms.
We can analyse the time and space complexity of sdcvvc and Dimitris Andreou's algorithms.
Storage:
l_j = ceil (log_2 (sum_{i=1}^n i^j))
l_j > log_2 n^j (assuming n >= 0, k >= 0)
l_j > j log_2 n \in \Omega(j log n)
l_j < log_2 ((sum_{i=1}^n i)^j) + 1
l_j < j log_2 (n) + j log_2 (n + 1) - j log_2 (2) + 1
l_j < j log_2 n + j + c \in O(j log n)
So l_j \in \Theta(j log n)
Total storage used: \sum_{j=1}^k l_j \in \Theta(k^2 log n)
Time used: assuming that computing a^j takes ceil(log_2 j) time, the total time is:
t = k ceil(\sum_i=1^n log_2 (i)) = k ceil(log_2 (\prod_i=1^n (i)))
t > k log_2 (n^n + O(n^(n-1)))
t > k log_2 (n^n) = kn log_2 (n) \in \Omega(kn log n)
t < k log_2 (\prod_i=1^n i^i) + 1
t < kn log_2 (n) + 1 \in O(kn log n)
Total time used: \Theta(kn log n)
If this time and space is satisfactory, you can use a simple recursive
algorithm. Let b!i be the ith entry in the bag, n the number of numbers before
removals, and k the number of removals. In Haskell syntax...
let
  -- O(1)
  isInRange low high v = (v >= low) && (v <= high)
  -- O(n - k)
  countInRange low high = sum $ map (fromEnum . isInRange low high . (b !)) [1..(n-k)]
  findMissing l low high krange
    -- O(1) if there is nothing to find.
    | krange == 0 = l
    -- O(1) if there is only one possibility.
    | low == high = low : l
    -- Otherwise total of O(k n log(n)) time
    | otherwise =
        let
          mid = (low + high) `div` 2
          klow = countInRange low mid
          khigh = krange - klow
        in
          findMissing (findMissing l low mid klow) (mid + 1) high khigh
in
  findMissing [] 1 n k
Storage used: O(k) for list, O(log(n)) for stack: O(k + log(n))
This algorithm is more intuitive, has the same time complexity, and uses less space.
A very simple solution to Q2 which I'm surprised nobody has posted already: use the method from Q1 to find the sum of the two missing numbers. Let's denote it by S; then one of the missing numbers is smaller than S/2 and the other is bigger than S/2 (duh, they are distinct). Sum all the numbers from 1 to floor(S/2) and compare to the formula's result (similarly to the method in Q1) to find the smaller of the missing numbers. Subtract it from S to find the bigger missing number.
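A sketch of this in Python (the names are mine; it assumes the input is 1..n with exactly two values missing):

def two_missing_by_halving(arr, n):
    s = n * (n + 1) // 2 - sum(arr)          # sum of the two missing numbers
    half = s // 2                            # the smaller one lies in 1..half
    low = half * (half + 1) // 2 - sum(v for v in arr if v <= half)
    return low, s - low

print(two_missing_by_halving([1, 2, 3, 5, 6, 8, 9, 10], 10))   # (4, 7)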
Wait a minute. As the question is stated, there are 100 numbers in the bag. No matter how big k is, the problem can be solved in constant time because you can use a set and remove numbers from the set in at most 100 - k iterations of a loop. 100 is constant. The set of remaining numbers is your answer.
If we generalise the solution to the numbers from 1 to N, nothing changes except N is not a constant, so we are in O(N - k) = O(N) time. For instance, if we use a bit set, we set the bits to 1 in O(N) time, iterate through the numbers, setting the bits to 0 as we go (O(N-k) = O(N)) and then we have the answer.
It seems to me that the interviewer was asking you how to print out the contents of the final set in O(k) time rather than O(N) time. Clearly, with a bit set, you have to iterate through all N bits to determine whether you should print the number or not. However, if you change the way the set is implemented you can print out the numbers in k iterations. This is done by putting the numbers into an object to be stored in both a hash set and a doubly linked list. When you remove an object from the hash set, you also remove it from the list. The answers will be left in the list which is now of length k.
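As a sketch of that last idea: Python's insertion-ordered dict can stand in for the hash-set-plus-doubly-linked-list pair, though a strictly O(k) final listing is only guaranteed by the explicit two-structure design described above.

def missing_after_stream(n, replies):
    pending = dict.fromkeys(range(1, n + 1))   # insertion-ordered "unseen" set
    for x in replies:
        pending.pop(x, None)                   # expected O(1) per arriving reply
    return list(pending)                       # the k survivors

print(missing_after_stream(10, [2, 9, 3, 7, 8, 6, 5, 1]))   # [4, 10]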
To solve the 2 (and 3) missing numbers question, you can modify quickselect, which on average runs in O(n) and uses constant memory if partitioning is done in-place.
Partition the set with respect to a random pivot p into partitions l, which contain numbers smaller than the pivot, and r, which contain numbers greater than the pivot.
Determine which partitions the 2 missing numbers are in by comparing the pivot value to the size of each partition: p - 1 - count(l) = count of missing numbers in l, and n - count(r) - p = count of missing numbers in r.
a) If each partition is missing one number, then use the difference of sums approach to find each missing number.
(1 + 2 + ... + (p-1)) - sum(l) = missing #1 and
((p+1) + (p+2) ... + n) - sum(r) = missing #2
b) If one partition is missing both numbers and that partition is empty, then the missing numbers are either (p-1, p-2) or (p+1, p+2), depending on which partition is missing the numbers.
If one partition is missing 2 numbers but is not empty, then recurse onto that partition.
With only 2 missing numbers, this algorithm always discards at least one partition, so it retains O(n) average time complexity of quickselect. Similarly, with 3 missing numbers this algorithm also discards at least one partition with each pass (because as with 2 missing numbers, at most only 1 partition will contain multiple missing numbers). However, I'm not sure how much the performance decreases when more missing numbers are added.
Here's an implementation that does not use in-place partitioning, so this example does not meet the space requirement but it does illustrate the steps of the algorithm:
<?php
$list = range(1, 100);
unset($list[3]);   // removes the value 4 (keys are 0-based)
unset($list[31]);  // removes the value 32
findMissing($list, 1, 100);

function findMissing($list, $min, $max) {
    if (empty($list)) {
        print_r(range($min, $max));
        return;
    }
    $l = $r = [];
    $pivot = array_pop($list);
    foreach ($list as $number) {
        if ($number < $pivot) {
            $l[] = $number;
        } else {
            $r[] = $number;
        }
    }
    if (count($l) == $pivot - $min - 1) {
        // only 1 missing number: use difference of sums
        print array_sum(range($min, $pivot - 1)) - array_sum($l) . "\n";
    } else if (count($l) < $pivot - $min) {
        // more than 1 missing number: recurse
        findMissing($l, $min, $pivot - 1);
    }
    if (count($r) == $max - $pivot - 1) {
        // only 1 missing number: use difference of sums
        print array_sum(range($pivot + 1, $max)) - array_sum($r) . "\n";
    } else if (count($r) < $max - $pivot) {
        // more than 1 missing number: recurse
        findMissing($r, $pivot + 1, $max);
    }
}
For Q2 this is a solution that is a bit more inefficient than the others, but still has O(N) runtime and takes O(k) space.
The idea is to run the original algorithm two times. From the first pass you get the sum of the missing numbers; call it S. You know that the two missing numbers sum to S, so the smaller one can only be in the interval [1, floor((S-1)/2)] while the bigger is in [floor(S/2)+1, S-1].
Thus you loop over all numbers once again, discarding all numbers that are not included in the first interval and keeping track of the sum of the ones that are. Finally, you'll know one of the missing two numbers, and by extension the second.
I have a feeling that this method could be generalized and maybe multiple searches run in "parallel" during a single pass over the input, but I haven't yet figured out how.
Here's a solution that uses k bits of extra storage, without any clever tricks, just straightforward: execution time O(n), extra space O(k). Just to prove that this can be solved without reading up on the solution first or being a genius:
void puzzle (int* data, int n, bool* extra, int k)
{
// data contains n distinct numbers from 1 to n + k, extra provides
// space for k extra bits.
// Rearrange the array so there are (even) even numbers at the start
// and (odd) odd numbers at the end.
int even = 0, odd = 0;
while (even + odd < n)
{
if (data [even] % 2 == 0) ++even;
else if (data [n - 1 - odd] % 2 == 1) ++odd;
else { int tmp = data [even]; data [even] = data [n - 1 - odd];
data [n - 1 - odd] = tmp; ++even; ++odd; }
}
// Erase the lowest bits of all numbers and set the extra bits to 0.
for (int i = even; i < n; ++i) data [i] -= 1;
for (int i = 0; i < k; ++i) extra [i] = false;
// Set a bit for every number that is present
for (int i = 0; i < n; ++i)
{
int tmp = data [i];
tmp -= (tmp % 2);
if (i >= even) ++tmp;
if (tmp <= n) data [tmp - 1] += 1; else extra [tmp - n - 1] = true;
}
// Print out the missing ones
for (int i = 1; i <= n; ++i)
if (data [i - 1] % 2 == 0) printf ("Number %d is missing\n", i);
for (int i = n + 1; i <= n + k; ++i)
if (! extra [i - n - 1]) printf ("Number %d is missing\n", i);
// Restore the lowest bits again.
for (int i = 0; i < n; ++i) {
if (i < even) { if (data [i] % 2 != 0) data [i] -= 1; }
else { if (data [i] % 2 == 0) data [i] += 1; }
}
}
Motivation
If you want to solve the general-case problem, and you can store and edit the array, then caf's solution is by far the most efficient. If you can't store the array (streaming version), then sdcvvc's answer is the only type of solution currently suggested.
The solution I propose is the most efficient answer (so far on this thread) if you can store the array but can't edit it, and I got the idea from Svalorzen's solution, which solves for 1 or 2 missing items. This solution takes Θ(k·n) time and O(min(k, log(n))), Ω(log(k)) space. It also works well with parallelism.
Concept
The idea is that if you use the original approach of comparing sums:
sum = SumOf(1,n) - SumOf(array)
... then you take the average of the missing numbers:
average = sum/n_missing_numbers
... which provides a boundary: Of the missing numbers, there's guaranteed to be at least one number less-or-equal to average, and at least one number greater than average. This means that we can split into sub problems that each scan the array [O(n)] and are only concerned with their respective sub-arrays.
Code
C-style solution (don't judge me for the global variables, I'm just trying to make the code readable for non-c folks):
#include "stdio.h"
// Example problem:
const int array [] = {0, 7, 3, 1, 5};
const int N = 8; // size of original array
const int array_size = 5;
int SumOneTo (int n)
{
return n*(n-1)/2; // non-inclusive
}
int MissingItems (const int begin, const int end, int & average)
{
// We consider only sub-array elements with values, v:
// begin <= v < end
// Initialise info about missing elements.
// First assume all are missing:
int n = end - begin;
int sum = SumOneTo(end) - SumOneTo(begin);
// Minus everything that we see (ie not missing):
for (int i = 0; i < array_size; ++i)
{
if ((begin <= array[i]) && (array[i] < end))
{
--n;
sum -= array[i];
}
}
// used by caller:
average = sum/n;
return n;
}
void Find (const int begin, const int end)
{
int average;
if (MissingItems(begin, end, average) == 1)
{
printf(" %d", average); // average(n) is same as n
return;
}
Find(begin, average + 1); // at least one missing here
Find(average + 1, end); // at least one here also
}
int main ()
{
printf("Missing items:");
Find(0, N);
printf("\n");
}
Analysis
Ignoring recursion for a moment, each function call clearly takes O(n) time and O(1) space. Note that sum can equal as much as n(n-1)/2, so it requires double the number of bits needed to store n-1. At most this means that we effectively need two extra elements' worth of space, regardless of the size of the array or k, hence it's still O(1) space under the normal conventions.
It's not so obvious how many function calls there are for k missing elements, so I'll provide a visual. Your original sub-array (connected array) is the full array, which has all k missing elements in it. We'll imagine them in increasing order, where -- represent connections (part of same sub-array):
m1 -- m2 -- m3 -- m4 -- (...) -- mk-1 -- mk
The effect of the Find function is to disconnect the missing elements into different non-overlapping sub-arrays. It guarantees that there's at least one missing element in each sub-array, which means breaking exactly one connection.
What this means is that regardless of how the splits occur, it will always take k-1 Find function calls to do the work of finding the sub-arrays that have only one missing element in it.
So the time complexity is Θ((k-1 + k) * n) = Θ(k*n).
For the space complexity, if we divide proportionally each time then we get O(log(k)) space complexity, but if we only separate one at a time it gives us O(k).
See here for a proof as to why the space complexity is O(log(n)). Given that above we've shown that it's also O(k), then we know that it's O(min(k,log(n))).
Maybe this algorithm can work for question 1:
Precompute the XOR of the first 100 integers (val = 1^2^3^...^100)
XOR the elements as they keep coming from the input stream (val1 = val1 ^ next_input)
final answer = val ^ val1
Or even better:
def GetValue(A):
    val = 0
    for i in range(1, 101):
        val = val ^ i
    for value in A:
        val = val ^ value
    return val
This algorithm can in fact be extended to two missing numbers. The first step remains the same. When we call GetValue with two numbers missing, the result will be a1^a2, where a1 and a2 are the missing numbers. Let's say
val = a1 ^ a2
Now to sieve out a1 and a2 from val, we take any set bit in val. Let's say the ith bit is set in val. That means that a1 and a2 differ at the ith bit position. Now we do another iteration over the original array and keep two xor values: one for the numbers which have the ith bit set, and another for the numbers which don't. We now have two buckets of numbers, and it's guaranteed that a1 and a2 lie in different buckets. Now repeat what we did for finding one missing element on each of the buckets.
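A Python sketch of that bucketing (the names are mine; it assumes the range is 1..n with exactly two numbers missing):

def two_missing_xor(arr, n):
    val = 0
    for i in range(1, n + 1):
        val ^= i
    for v in arr:
        val ^= v                     # val == a1 ^ a2
    bit = val & -val                 # a bit where a1 and a2 differ
    a1 = 0
    for i in range(1, n + 1):
        if i & bit:
            a1 ^= i                  # xor the bucket with that bit set...
    for v in arr:
        if v & bit:
            a1 ^= v                  # ...only one missing number survives
    return a1, val ^ a1

print(two_missing_xor([1, 2, 3, 5, 6, 8, 9, 10], 10))   # (7, 4)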
There is a general way to solve streaming problems like this.
The idea is to use a bit of randomization to hopefully 'spread' the k elements into independent sub problems, where our original algorithm solves the problem for us. This technique is used in sparse signal reconstruction, among other things.
Make an array, a, of size u = k^2.
Pick any universal hash function, h : {1,...,n} -> {1,...,u}. (Like multiply-shift)
For each i in 1, ..., n increase a[h(i)] += i
For each number x in the input stream, decrement a[h(x)] -= x.
If all of the missing numbers have been hashed to different buckets, the non-zero elements of the array will now contain the missing numbers.
The probability that a particular pair is sent to the same bucket, is less than 1/u by definition of a universal hash function. Since there are about k^2/2 pairs, we have that the error probability is at most k^2/2/u=1/2. That is, we succeed with probability at least 50%, and if we increase u we increase our chances.
Notice that this algorithm takes k^2 log n bits of space (we need log n bits per array bucket). This matches the space required by Dimitris Andreou's answer (in particular the space requirement of polynomial factorization, which happens to also be randomized).
This algorithm also has constant time per update, rather than time k in the case of power-sums.
In fact, we can be even more efficient than the power sum method by using the trick described in the comments.
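A rough Python sketch of the bucketing scheme; the multiply-mod-prime hash is just a stand-in for a universal family, and a real implementation would verify the output and retry with a fresh hash on collision:

import random

def missing_by_hashing(stream, n, k):
    u = max(1, k * k)                    # k^2 buckets: success probability >= 1/2
    p = (1 << 61) - 1                    # a Mersenne prime, comfortably > n
    r = random.randrange(1, p)
    h = lambda x: (r * x % p) % u        # stand-in for a universal hash
    buckets = [0] * u
    for i in range(1, n + 1):
        buckets[h(i)] += i               # add everything that should be present
    for x in stream:
        buckets[h(x)] -= x               # subtract what actually arrived
    # Correct only if no two missing numbers landed in the same bucket.
    return sorted(v for v in buckets if v != 0)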
Can you check if every number exists? If yes you may try this:
S = sum of all numbers in the bag (S < 5050)
Z = sum of the missing numbers = 5050 - S
if the missing numbers are x and y then:
x = Z - y and
max(x) = Z - 1
So you check the range from 1 to max(x) and find the number
You can solve Q2 if you have the sum of both lists and the product of both lists.
(l1 is the original, l2 is the modified list)
s = sum(l1) - sum(l2)
m = mul(l1) / mul(l2)
We can optimise this since the sum of an arithmetic series is n times the average of the first and last terms:
n = len(l1)
s = (n/2)*(n+1) - sum(l2)
Now we know that (if a and b are the removed numbers):
a + b = s
a * b = m
So we can rearrange to:
a = s - b
b * (s - b) = m
And multiply out:
-b^2 + s*b = m
And rearrange so the right side is zero:
-b^2 + s*b - m = 0
Then we can solve with the quadratic formula:
b = (-s + sqrt(s^2 - 4*m))/-2
a = s - b
Sample Python 3 code:
from functools import reduce
import operator
import math
x = list(range(1,21))
sx = (len(x)/2)*(len(x)+1)
x.remove(15)
x.remove(5)
mul = lambda l: reduce(operator.mul,l)
s = sx - sum(x)
m = mul(range(1,21)) / mul(x)
b = (-s + math.sqrt(s**2 - (-4*(-m))))/-2
a = s - b
print(a,b) #15,5
I do not know the complexity of the sqrt, reduce and sum functions so I cannot work out the complexity of this solution (if anyone does know please comment below.)
Here is a solution that doesn't rely on complex math as sdcvvc's/Dimitris Andreou's answers do, doesn't change the input array as caf and Colonel Panic did, and doesn't use the bitset of enormous size as Chris Lercher, JeremyP and many others did. Basically, I began with Svalorzen's/Gilad Deutch's idea for Q2, generalized it to the common case Qk and implemented in Java to prove that the algorithm works.
The idea
Suppose we have an arbitrary interval I of which we only know that it contains at least one of the missing numbers. After one pass through the input array, looking only at the numbers from I, we can obtain both the sum S and the quantity Q of missing numbers from I. We do this by simply decrementing I's length each time we encounter a number from I (for obtaining Q) and by decreasing pre-calculated sum of all numbers in I by that encountered number each time (for obtaining S).
Now we look at S and Q. If Q = 1, it means that I contains only one of the missing numbers, and this number is clearly S. We mark I as finished (it is called "unambiguous" in the program) and leave it out from further consideration. On the other hand, if Q > 1, we can calculate the average A = S / Q of the missing numbers contained in I. As all numbers are distinct, at least one of these numbers is strictly less than A and at least one is strictly greater than A. Now we split I at A into two smaller intervals, each of which contains at least one missing number. Note that it doesn't matter to which of the intervals we assign A in case it is an integer.
We make the next array pass calculating S and Q for each of the intervals separately (but in the same pass) and after that mark intervals with Q = 1 and split intervals with Q > 1. We continue this process until there are no new "ambiguous" intervals, i.e. we have nothing to split because each interval contains exactly one missing number (and we always know this number because we know S). We start out from the sole "whole range" interval containing all possible numbers (like [1..N] in the question).
Time and space complexity analysis
The total number of passes p we need to make until the process stops is never greater than the missing numbers count k. The inequality p <= k can be proved rigorously. On the other hand, there is also an empirical upper bound p < log2N + 3 that is useful for large values of k. We need to make a binary search for each number of the input array to determine the interval to which it belongs. This adds the log k multiplier to the time complexity.
In total, the time complexity is O(N · min(k, log N) · log k). Note that for large k, this is significantly better than that of sdcvvc/Dimitris Andreou's method, which is O(N · k).
For its work, the algorithm requires O(k) additional space for storing at most k intervals, which is significantly better than the O(N) of the "bitset" solutions.
Java implementation
Here's a Java class that implements the above algorithm. It always returns a sorted array of missing numbers. Besides that, it doesn't require the missing numbers count k because it calculates it in the first pass. The whole range of numbers is given by the minNumber and maxNumber parameters (e.g. 1 and 100 for the first example in the question).
public class MissingNumbers {
private static class Interval {
boolean ambiguous = true;
final int begin;
int quantity;
long sum;
Interval(int begin, int end) { // begin inclusive, end exclusive
this.begin = begin;
quantity = end - begin;
sum = quantity * ((long)end - 1 + begin) / 2;
}
void exclude(int x) {
quantity--;
sum -= x;
}
}
public static int[] find(int minNumber, int maxNumber, NumberBag inputBag) {
Interval full = new Interval(minNumber, ++maxNumber);
for (inputBag.startOver(); inputBag.hasNext();)
full.exclude(inputBag.next());
int missingCount = full.quantity;
if (missingCount == 0)
return new int[0];
Interval[] intervals = new Interval[missingCount];
intervals[0] = full;
int[] dividers = new int[missingCount];
dividers[0] = minNumber;
int intervalCount = 1;
while (true) {
int oldCount = intervalCount;
for (int i = 0; i < oldCount; i++) {
Interval itv = intervals[i];
if (itv.ambiguous)
if (itv.quantity == 1) // number inside itv uniquely identified
itv.ambiguous = false;
else
intervalCount++; // itv will be split into two intervals
}
if (oldCount == intervalCount)
break;
int newIndex = intervalCount - 1;
int end = maxNumber;
for (int oldIndex = oldCount - 1; oldIndex >= 0; oldIndex--) {
// newIndex always >= oldIndex
Interval itv = intervals[oldIndex];
int begin = itv.begin;
if (itv.ambiguous) {
// split interval itv
// use floorDiv instead of / because input numbers can be negative
int mean = (int)Math.floorDiv(itv.sum, itv.quantity) + 1;
intervals[newIndex--] = new Interval(mean, end);
intervals[newIndex--] = new Interval(begin, mean);
} else
intervals[newIndex--] = itv;
end = begin;
}
for (int i = 0; i < intervalCount; i++)
dividers[i] = intervals[i].begin;
for (inputBag.startOver(); inputBag.hasNext();) {
int x = inputBag.next();
// find the interval to which x belongs
int i = java.util.Arrays.binarySearch(dividers, 0, intervalCount, x);
if (i < 0)
i = -i - 2;
Interval itv = intervals[i];
if (itv.ambiguous)
itv.exclude(x);
}
}
assert intervalCount == missingCount;
for (int i = 0; i < intervalCount; i++)
dividers[i] = (int)intervals[i].sum;
return dividers;
}
}
For fairness, this class receives input in form of NumberBag objects. NumberBag doesn't allow array modification and random access and also counts how many times the array was requested for sequential traversing. It is also more suitable for large array testing than Iterable<Integer> because it avoids boxing of primitive int values and allows wrapping a part of a large int[] for a convenient test preparation. It is not hard to replace, if desired, NumberBag by int[] or Iterable<Integer> type in the find signature, by changing two for-loops in it into foreach ones.
import java.util.*;
public abstract class NumberBag {
private int passCount;
public void startOver() {
passCount++;
}
public final int getPassCount() {
return passCount;
}
public abstract boolean hasNext();
public abstract int next();
// A lightweight version of Iterable<Integer> to avoid boxing of int
public static NumberBag fromArray(int[] base, int fromIndex, int toIndex) {
return new NumberBag() {
int index = toIndex;
public void startOver() {
super.startOver();
index = fromIndex;
}
public boolean hasNext() {
return index < toIndex;
}
public int next() {
if (index >= toIndex)
throw new NoSuchElementException();
return base[index++];
}
};
}
public static NumberBag fromArray(int[] base) {
return fromArray(base, 0, base.length);
}
public static NumberBag fromIterable(Iterable<Integer> base) {
return new NumberBag() {
Iterator<Integer> it;
public void startOver() {
super.startOver();
it = base.iterator();
}
public boolean hasNext() {
return it.hasNext();
}
public int next() {
return it.next();
}
};
}
}
Tests
Simple examples demonstrating the usage of these classes are given below.
import java.util.*;
public class SimpleTest {
public static void main(String[] args) {
int[] input = { 7, 1, 4, 9, 6, 2 };
NumberBag bag = NumberBag.fromArray(input);
int[] output = MissingNumbers.find(1, 10, bag);
System.out.format("Input: %s%nMissing numbers: %s%nPass count: %d%n",
Arrays.toString(input), Arrays.toString(output), bag.getPassCount());
List<Integer> inputList = new ArrayList<>();
for (int i = 0; i < 10; i++)
inputList.add(2 * i);
Collections.shuffle(inputList);
bag = NumberBag.fromIterable(inputList);
output = MissingNumbers.find(0, 19, bag);
System.out.format("%nInput: %s%nMissing numbers: %s%nPass count: %d%n",
inputList, Arrays.toString(output), bag.getPassCount());
// Sieve of Eratosthenes
final int MAXN = 1_000;
List<Integer> nonPrimes = new ArrayList<>();
nonPrimes.add(1);
int[] primes;
int lastPrimeIndex = 0;
while (true) {
primes = MissingNumbers.find(1, MAXN, NumberBag.fromIterable(nonPrimes));
int p = primes[lastPrimeIndex]; // guaranteed to be prime
int q = p;
for (int i = lastPrimeIndex++; i < primes.length; i++) {
q = primes[i]; // not necessarily prime
int pq = p * q;
if (pq > MAXN)
break;
nonPrimes.add(pq);
}
if (q == p)
break;
}
System.out.format("%nSieve of Eratosthenes. %d primes up to %d found:%n",
primes.length, MAXN);
for (int i = 0; i < primes.length; i++)
System.out.format(" %4d%s", primes[i], (i % 10) < 9 ? "" : "\n");
}
}
Large array testing can be performed this way:
import java.util.*;
public class BatchTest {
private static final Random rand = new Random();
public static int MIN_NUMBER = 1;
private final int minNumber = MIN_NUMBER;
private final int numberCount;
private final int[] numbers;
private int missingCount;
public long finderTime;
public BatchTest(int numberCount) {
this.numberCount = numberCount;
numbers = new int[numberCount];
for (int i = 0; i < numberCount; i++)
numbers[i] = minNumber + i;
}
private int passBound() {
int mBound = missingCount > 0 ? missingCount : 1;
int nBound = 34 - Integer.numberOfLeadingZeros(numberCount - 1); // ceil(log_2(numberCount)) + 2
return Math.min(mBound, nBound);
}
private void error(String cause) {
throw new RuntimeException("Error on '" + missingCount + " from " + numberCount + "' test, " + cause);
}
// returns the number of times the input array was traversed in this test
public int makeTest(int missingCount) {
this.missingCount = missingCount;
// numbers array is reused when numberCount stays the same,
// just Fisher–Yates shuffle it for each test
for (int i = numberCount - 1; i > 0; i--) {
int j = rand.nextInt(i + 1);
if (i != j) {
int t = numbers[i];
numbers[i] = numbers[j];
numbers[j] = t;
}
}
final int bagSize = numberCount - missingCount;
NumberBag inputBag = NumberBag.fromArray(numbers, 0, bagSize);
finderTime -= System.nanoTime();
int[] found = MissingNumbers.find(minNumber, minNumber + numberCount - 1, inputBag);
finderTime += System.nanoTime();
if (inputBag.getPassCount() > passBound())
error("too many passes (" + inputBag.getPassCount() + " while only " + passBound() + " allowed)");
if (found.length != missingCount)
error("wrong result length");
int j = bagSize; // "missing" part beginning in numbers
Arrays.sort(numbers, bagSize, numberCount);
for (int i = 0; i < missingCount; i++)
if (found[i] != numbers[j++])
error("wrong result array, " + i + "-th element differs");
return inputBag.getPassCount();
}
public static void strideCheck(int numberCount, int minMissing, int maxMissing, int step, int repeats) {
BatchTest t = new BatchTest(numberCount);
System.out.println("╠═══════════════════════╬═════════════════╬═════════════════╣");
for (int missingCount = minMissing; missingCount <= maxMissing; missingCount += step) {
int minPass = Integer.MAX_VALUE;
int passSum = 0;
int maxPass = 0;
t.finderTime = 0;
for (int j = 1; j <= repeats; j++) {
int pCount = t.makeTest(missingCount);
if (pCount < minPass)
minPass = pCount;
passSum += pCount;
if (pCount > maxPass)
maxPass = pCount;
}
System.out.format("║ %9d %9d ║ %2d %5.2f %2d ║ %11.3f ║%n", missingCount, numberCount, minPass,
(double)passSum / repeats, maxPass, t.finderTime * 1e-6 / repeats);
}
}
public static void main(String[] args) {
System.out.println("╔═══════════════════════╦═════════════════╦═════════════════╗");
System.out.println("║ Number count ║ Passes ║ Average time ║");
System.out.println("║ missimg total ║ min avg max ║ per search (ms) ║");
long time = System.nanoTime();
strideCheck(100, 0, 100, 1, 20_000);
strideCheck(100_000, 2, 99_998, 1_282, 15);
MIN_NUMBER = -2_000_000_000;
strideCheck(300_000_000, 1, 10, 1, 1);
time = System.nanoTime() - time;
System.out.println("╚═══════════════════════╩═════════════════╩═════════════════╝");
System.out.format("%nSuccess. Total time: %.2f s.%n", time * 1e-9);
}
}
Try them out on Ideone
I think this can be done without any complex mathematical equations and theories. Below is a proposal for an in-place solution with O(n) time complexity (two passes):
Input form assumptions :
# of numbers in bag = n
# of missing numbers = k
The numbers in the bag are represented by an array of length n
Length of input array for the algo = n
Missing entries in the array (numbers taken out of the bag) are replaced by the value of the first element in the array.
Eg. Initially bag looks like [2,9,3,7,8,6,4,5,1,10].
If 4 is taken out, value of 4 will become 2 (the first element of the array).
Therefore after taking 4 out the bag will look like [2,9,3,7,8,6,2,5,1,10]
The key to this solution is to tag the INDEX of a visited number by negating the value at that INDEX as the array is traversed.
IEnumerable<int> GetMissingNumbers(int[] arrayOfNumbers)
{
List<int> missingNumbers = new List<int>();
int arrayLength = arrayOfNumbers.Length;
//First Pass
for (int i = 0; i < arrayLength; i++)
{
int index = Math.Abs(arrayOfNumbers[i]) - 1;
if (index > -1)
{
arrayOfNumbers[index] = Math.Abs(arrayOfNumbers[index]) * -1; //Marking the visited indexes
}
}
//Second Pass to get missing numbers
for (int i = 0; i < arrayLength; i++)
{
//If this index is unvisited, means this is a missing number
if (arrayOfNumbers[i] > 0)
{
missingNumbers.Add(i + 1);
}
}
return missingNumbers;
}
Thanks for this very interesting question: it reminded me of Newton's work, which really can solve this problem.
Please refer to Newton's identities.
Since the number of variables to find must equal the number of equations (needed for consistency), I believe we should raise the bag numbers to successive powers so as to create the required number of different equations. That is, there should be functions f (here, the powers) whose values f(xi) we sum:
x1 + x2 + ... + xk = z1
x1^2 + x2^2 + ... + xk^2 = z2
...
x1^k + x2^k + ... + xk^k = zk
The rest is mathematical work; I am not sure about the time and space complexity, but Newton's identities will surely play an important role.
Can't we use set theory, e.g. .difference_update()? Or is there any chance of applying linear algebra to this question?
You'd probably need clarification on what O(k) means.
Here's a trivial solution for arbitrary k: for each v in your set of numbers, accumulate the sum of 2^v. At the end, loop i from 1 to N. If sum bitwise ANDed with 2^i is zero, then i is missing. (Or numerically, if floor of the sum divided by 2^i is even. Or sum modulo 2^(i+1)) < 2^i.)
Easy, right? O(N) time, O(1) storage, and it supports arbitrary k.
Except that you're computing enormous numbers that on a real computer would each require O(N) space. In fact, this solution is identical to a bit vector.
So you could be clever and compute the sum and the sum of squares and the sum of cubes... up to the sum of v^k, and do the fancy math to extract the result. But those are big numbers too, which begs the question: what abstract model of operation are we talking about? How much fits in O(1) space, and how long does it take to sum up numbers of whatever size you need?
I have read all thirty answers and found the simplest one, i.e. using a bit array of size 100, to be the best. But as the question says we can't use an array of size N, I would use O(1) space and k iterations, i.e. O(N·k) time, to solve this.
To make the explanation simpler, suppose I have been given the numbers from 1 to 15 and two of them are missing, namely 9 and 14, but I don't know which. Let the bag look like this:
[8,1,2,12,4,7,5,10,11,13,15,3,6].
We know that each number is represented internally in the form of bits.
For numbers up to 15 we only need 4 bits. For numbers up to 10^9, we will need 32 bits. But let's focus on 4 bits, and later we can generalize it.
Now, assume if we had all the numbers from 1 to 15, then internally, we would have numbers like this (if we had them ordered):
0001
0010
0011
0100
0101
0110
0111
1000
1001
1010
1011
1100
1101
1110
1111
But now we have two numbers missing. So our representation will look something like this (shown ordered for understanding but can be in any order):
(2MSD|2LSD)
00|01
00|10
00|11
-----
01|00
01|01
01|10
01|11
-----
10|00
missing=(10|01)
10|10
10|11
-----
11|00
11|01
missing=(11|10)
11|11
Now let's make an array of four counters, one for each possible value of the 2 most significant bits, i.e.
= [__,__,__,__]
00,01,10,11
Scan the bag from left to right and fill the above array such that each bin contains the count of numbers with the corresponding 2 most significant bits. The result will be as under:
= [ 3, 4, 3, 3]
00,01,10,11
If all the numbers would have been present, it would have looked like this:
= [ 3, 4, 4, 4]
00,01,10,11
Thus we know that there are two numbers missing: one whose 2 most significant bits are 10 and one whose 2 most significant bits are 11. Now scan the list again and fill out an array of four counters for the 2 least significant bits. This time, only consider elements whose 2 most significant bits are 10. We will have the array as:
= [ 1, 0, 1, 1]
00,01,10,11
If all numbers with MSB=10 were present, we would have 1 in all the bins, but now we see that one is missing. Thus we have the number with MSB=10 and LSB=01 missing, which is 1001, i.e. 9.
Similarly, if we scan again but consider only elements whose MSB=11, we get:
= [ 1, 1, 0, 1]
00,01,10,11
so MSB=11 and LSB=10 is missing, which is 1110, i.e. 14.
Thus, we can find the missing numbers using a constant amount of extra space. We can generalize this for 100, 1000 or 10^9 or any other range of numbers.
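Here is a Python sketch of exactly the 4-bit worked example above (mine, not the original author's); it relies on each 4-bit pattern occurring at most once, as in this example:

def find_missing_4bit(bag):
    counts = [0] * 4
    for v in bag:
        counts[v >> 2] += 1              # bucket by the 2 most significant bits
    expected = [3, 4, 4, 4]              # how many of 1..15 fall in each bucket
    missing = []
    for msb in range(4):
        if counts[msb] < expected[msb]:
            low = [0] * 4
            for v in bag:
                if v >> 2 == msb:
                    low[v & 3] += 1      # rescan: bucket by the 2 lowest bits
            for lsb in range(4):
                if low[lsb] == 0 and (msb << 2 | lsb) != 0:
                    missing.append(msb << 2 | lsb)
    return missing

print(find_missing_4bit([8, 1, 2, 12, 4, 7, 5, 10, 11, 13, 15, 3, 6]))   # [9, 14]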
References: Problem 1.6 in http://users.ece.utexas.edu/~adnan/afi-samples-new.pdf
Very nice problem. I'd go for using a set difference for Qk. A lot of programming languages even have support for it, like in Ruby:
missing = (1..100).to_a - bag
It's probably not the most efficient solution, but it's one I would use in real life if I were faced with such a task in this case (known boundaries, low boundaries). If the set of numbers were very large, then I would consider a more efficient algorithm, of course, but until then the simple solution would be enough for me.
You could try using a Bloom filter. Insert each number in the bag into the filter, then iterate over the complete 1..N set, reporting each one not found. Since the filter can return false positives, this may not find the answer in all scenarios, but it might be a good enough solution.
I'd take a different approach to that question and probe the interviewer for more details about the larger problem he's trying to solve. Depending on the problem and the requirements surrounding it, the obvious set-based solution might be the right thing and the generate-a-list-and-pick-through-it-afterward approach might not.
For example, it might be that the interviewer is going to dispatch n messages and needs to know the k that didn't result in a reply, and needs to know it in as little wall clock time as possible after the (n-k)th reply arrives. Let's also say that the message channel's nature is such that, even running at full bore, there's enough time to do some processing between messages without having any impact on how long it takes to produce the end result after the last reply arrives. That time can be put to use inserting some identifying facet of each sent message into a set and deleting it as each corresponding reply arrives. Once the last reply has arrived, the only thing to be done is to remove its identifier from the set, which in typical implementations takes O(log(k+1)). After that, the set contains the list of k missing elements and there's no additional processing to be done.
This certainly isn't the fastest approach for batch processing pre-generated bags of numbers because the whole thing runs O((log 1 + log 2 + ... + log n) + (log n + log n-1 + ... + log k)). But it does work for any value of k (even if it's not known ahead of time) and in the example above it was applied in a way that minimizes the most critical interval.
This might sound stupid, but, in the first problem presented to you, you would have to see all the remaining numbers in the bag to actually add them up to find the missing number using that equation.
So, since you get to see all the numbers, just look for the number that's missing. The same goes for when two numbers are missing. Pretty simple I think. No point in using an equation when you get to see the numbers remaining in the bag.
You can motivate the solution by thinking about it in terms of symmetries (groups, in math language). No matter the order of the set of numbers, the answer should be the same. If you're going to use k functions to help determine the missing elements, you should be thinking about what functions have that property: symmetric. The function s_1(x) = x_1 + x_2 + ... + x_n is an example of a symmetric function, but there are others of higher degree. In particular, consider the elementary symmetric functions. The elementary symmetric function of degree 2 is s_2(x) = x_1 x_2 + x_1 x_3 + ... + x_1 x_n + x_2 x_3 + ... + x_(n-1) x_n, the sum of all products of two elements. Similarly for the elementary symmetric functions of degree 3 and higher. They are obviously symmetric. Furthermore, it turns out they are the building blocks for all symmetric functions.
You can build the elementary symmetric functions as you go by noting that s_2(x, x_(n+1)) = s_2(x) + s_1(x) * x_(n+1). Further thought should convince you that s_3(x, x_(n+1)) = s_3(x) + s_2(x) * x_(n+1) and so on, so they can be computed in one pass.
How do we tell which items were missing from the array? Think about the polynomial (z-x_1)(z-x_2)...(z-x_n). It evaluates to 0 if you put in any of the numbers x_i. Expanding the polynomial, you get z^n-s_1(x)z^(n-1)+ ... + (-1)^n s_n. The elementary symmetric functions appear here too, which is really no surprise, since the polynomial should stay the same if we apply any permutation to the roots.
So we can build the polynomial and try to factor it to figure out which numbers are not in the set, as others have mentioned.
Finally, if we are concerned about overflowing memory with large numbers (the nth symmetric polynomial will be of the order 100!), we can do these calculations mod p where p is a prime bigger than 100. In that case we evaluate the polynomial mod p and find that it again evaluates to 0 when the input is a number in the set, and it evaluates to a non-zero value when the input is a number not in the set. However, as others have pointed out, to get the values out of the polynomial in time that depends on k, not N, we have to factor the polynomial mod p.
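As a hedged sketch of this for k = 2 in Python (using exact big-integer arithmetic instead of working mod p; the function name is mine): s_1 and s_2 of the bag are built in one pass via the recurrences above, and the missing pair falls out of the quadratic z^2 - S*z + P:

from math import isqrt

def two_missing_symmetric(bag, n):
    # Recover two missing numbers from 1..n via elementary symmetric
    # functions s1 (sum) and s2 (sum of pairwise products).
    s1_bag, s2_bag = 0, 0
    for x in bag:
        s2_bag += s1_bag * x   # s2(x, x_new) = s2(x) + s1(x) * x_new
        s1_bag += x

    s1_full, s2_full = 0, 0
    for x in range(1, n + 1):
        s2_full += s1_full * x
        s1_full += x

    S = s1_full - s1_bag                  # a + b
    P = s2_full - s2_bag - s1_bag * S     # a * b
    # a and b are the roots of z^2 - S*z + P = 0.
    d = isqrt(S * S - 4 * P)
    return (S - d) // 2, (S + d) // 2

print(two_missing_symmetric([x for x in range(1, 101) if x not in (9, 14)], 100))
# prints (9, 14)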
I believe I have an algorithm that, after a single O(N) pass, extracts the k missing numbers in O(k) steps, given that you have the floor(x) and log2(x) functions for arbitrarily big integers available (the big integer itself takes O(N) bits of space):
Keep an N-bit integer s, initialized so that bits 1..N are set: s = 2^1 + 2^2 + ... + 2^N. For each number x you find in the bag, clear its bit: s -= 2^x. The scan takes O(N) time (which is not a problem for the interviewer). At the end, the set bits of s mark exactly the missing numbers, so j = floor(log2(s)) is the biggest number you're looking for. Then clear that bit too, s -= 2^j, and repeat:
for (i = 0; i < k; i++)
{
    j = floor(log2(s));
    missing[i] = j;
    s -= pow(2, j);
}
Now, you usually don't have floor and log2 functions for 2756-bit integers, but only for doubles. So? Simply, for each 2 bytes (or 1, or 3, or 4) you can use these functions to get the desired numbers, but this adds an O(N) factor to the time complexity.
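In Python, where arbitrary-precision integers are built in and s.bit_length() - 1 plays the role of floor(log2(s)), the trick can be sketched directly (the function name is mine, and it assumes the bag contains no duplicates):

def missing_by_bitmask(bag, n, k):
    # Mark numbers 1..n as bits of one big integer, clear each number
    # seen in the bag, then peel off the k missing numbers from the
    # highest set bit down.
    s = (1 << (n + 1)) - 2            # bits 1..n set (bit 0 left clear)
    for x in bag:
        s -= 1 << x                   # clear bit x; assumes no duplicates
    missing = []
    for _ in range(k):
        j = s.bit_length() - 1        # floor(log2(s)): highest missing
        missing.append(j)
        s -= 1 << j
    return missing

print(missing_by_bitmask([x for x in range(1, 101) if x not in (9, 14)], 100, 2))
# prints [14, 9]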
Try to find the product of numbers from 1 to 50:
Let product, P1 = 1 x 2 x 3 x ............. 50
When you take out numbers one by one, multiply them so that you get the product P2. But two numbers are missing here, hence P2 < P1.
The product of the two missing terms, a x b = P1 / P2.
You already know the sum, a + b = S1.
From the above two equations, solve for a and b through a quadratic equation. a and b are your missing numbers.
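As a quick worked example (my own numbers): take 1..5 with a = 2 and b = 4 missing. Then S1 = 15 - 9 = 6 and P1/P2 = 120/15 = 8, so the missing pair solves z^2 - 6z + 8 = 0, i.e. (z - 2)(z - 4) = 0, giving a = 2 and b = 4.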

array- having some issues [duplicate]

An interesting interview question that a colleague of mine uses:
Suppose that you are given a very long, unsorted list of unsigned 64-bit integers. How would you find the smallest non-negative integer that does not occur in the list?
FOLLOW-UP: Now that the obvious solution by sorting has been proposed, can you do it faster than O(n log n)?
FOLLOW-UP: Your algorithm has to run on a computer with, say, 1GB of memory
CLARIFICATION: The list is in RAM, though it might consume a large amount of it. You are given the size of the list, say N, in advance.
If the data structure can be mutated in place and supports random access then you can do it in O(N) time and O(1) additional space. Just go through the array sequentially and for every index write the value at the index to the index specified by the value, recursively placing any value at that location to its place and throwing away values >= N (they can never be the answer). Then go again through the array looking for the spot where the value doesn't match the index - that's the smallest value not in the array. This results in at most 3N comparisons and only uses a few values worth of temporary space.
def smallest_missing(array, N):
    # Pass 1, move every value to the position of its value
    for cursor in range(N):
        target = array[cursor]
        while target < N and target != array[target]:
            new_target = array[target]
            array[target] = target
            target = new_target
    # Pass 2, find first location where the index doesn't match the value
    for cursor in range(N):
        if array[cursor] != cursor:
            return cursor
    return N
Here's a simple O(N) solution that uses O(N) space. I'm assuming that we are restricting the input list to non-negative numbers and that we want to find the first non-negative number that is not in the list.
Find the length of the list; lets say it is N.
Allocate an array of N booleans, initialized to all false.
For each number X in the list, if X is less than N, set the X'th element of the array to true.
Scan the array starting from index 0, looking for the first element that is false. If you find the first false at index I, then I is the answer. Otherwise (i.e. when all elements are true) the answer is N.
In practice, the "array of N booleans" would probably be encoded as a "bitmap" or "bitset" represented as a byte or int array. This typically uses less space (depending on the programming language) and allows the scan for the first false to be done more quickly.
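A minimal Python sketch of these four steps, using a plain list of booleans rather than a packed bitset (the function name is mine):

def smallest_missing_nonneg(numbers):
    n = len(numbers)                    # step 1
    seen = [False] * n                  # step 2: N booleans, all false
    for x in numbers:                   # step 3
        if 0 <= x < n:
            seen[x] = True
    for i, present in enumerate(seen):  # step 4
        if not present:
            return i
    return n

print(smallest_missing_nonneg([0, 3, 1, 5]))  # prints 2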
This is how / why the algorithm works.
Suppose that the N numbers in the list are not distinct, or that one or more of them is greater than N. This means that there must be at least one number in the range 0 .. N - 1 that is not in the list. So the problem of finding the smallest missing number therefore reduces to the problem of finding the smallest missing number less than N. This means that we don't need to keep track of numbers that are greater or equal to N ... because they won't be the answer.
The alternative to the previous paragraph is that the list is a permutation of the numbers from 0 .. N - 1. In this case, step 3 sets all elements of the array to true, and step 4 tells us that the first "missing" number is N.
The computational complexity of the algorithm is O(N) with a relatively small constant of proportionality. It makes two linear passes through the list, or just one pass if the list length is known to start with. There is no need to hold the entire list in memory, so the algorithm's asymptotic memory usage is just what is needed to represent the array of booleans; i.e. O(N) bits.
(By contrast, algorithms that rely on in-memory sorting or partitioning assume that you can represent the entire list in memory. In the form the question was asked, this would require O(N) 64-bit words.)
#Jorn comments that steps 1 through 3 are a variation on counting sort. In a sense he is right, but the differences are significant:
A counting sort requires an array of (at least) Xmax - Xmin counters, where Xmax is the largest number in the list and Xmin is the smallest. Each counter has to be able to represent N states; i.e. assuming a binary representation it has to be an integer type of (at least) ceiling(log2(N)) bits.
To determine the array size, a counting sort needs to make an initial pass through the list to determine Xmax and Xmin.
The minimum worst-case space requirement is therefore ceiling(log2(N)) * (Xmax - Xmin) bits.
By contrast, the algorithm presented above simply requires N bits in the worst and best cases.
However, this analysis leads to the intuition that if the algorithm made an initial pass through the list looking for a zero (and counting the list elements if required), it would give a quicker answer using no space at all if it found the zero. It is definitely worth doing this if there is a high probability of finding at least one zero in the list. And this extra pass doesn't change the overall complexity.
EDIT: I've changed the description of the algorithm to use "array of booleans" since people apparently found my original description using bits and bitmaps to be confusing.
Since the OP has now specified that the original list is held in RAM and that the computer has only, say, 1GB of memory, I'm going to go out on a limb and predict that the answer is zero.
1GB of RAM means the list can have at most 134,217,728 numbers in it. But there are 2^64 = 18,446,744,073,709,551,616 possible numbers. So the probability that zero is in the list is 1 in 137,438,953,472.
In contrast, my odds of being struck by lightning this year are 1 in 700,000. And my odds of getting hit by a meteorite are about 1 in 10 trillion. So I'm about ten times more likely to be written up in a scientific journal due to my untimely death by a celestial object than the answer not being zero.
As pointed out in other answers you can do a sort, and then simply scan up until you find a gap.
You can improve the algorithmic complexity to O(N) and keep O(N) space by using a modified QuickSort where you eliminate partitions which are not potential candidates for containing the gap.
On the first partition phase, remove duplicates.
Once the partitioning is complete, look at the number of items in the lower partition.
Is this value equal to the partition value (i.e. the count of values the lower partition would hold if none were missing)?
If so, then it implies that the gap is in the higher partition:
Continue with the quicksort, ignoring the lower partition
Otherwise, the gap is in the lower partition:
Continue with the quicksort, ignoring the higher partition
This saves a large number of computations.
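Here is a hedged Python sketch of this partition-and-discard idea. The recursion structure, pivot choice, and up-front deduplication are my own simplifications; a faithful version would partition in place, quicksort-style:

def smallest_missing_partition(values):
    # Find the smallest non-negative integer not in `values` by
    # repeatedly partitioning and discarding the half without the gap.
    values = [x for x in set(values) if x >= 0]   # dedupe, drop negatives
    low = 0                                       # answer is >= low
    while values:
        pivot = values[len(values) // 2]
        lesser = [x for x in values if low <= x < pivot]
        # If the range [low, pivot) is completely full, the gap is above.
        if len(lesser) == pivot - low:
            values = [x for x in values if x > pivot]
            low = pivot + 1
        else:
            values = lesser
    return low

print(smallest_missing_partition([0, 1, 2, 4, 5, 2]))  # prints 3

Each round discards the partition that provably doesn't contain the gap, so on average the work is N + N/2 + N/4 + ... = O(N).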
To illustrate one of the pitfalls of O(N) thinking, here is an O(N) algorithm that uses O(1) space.
for i in [0..2^64):
    if i not in list: return i
print "no 64-bit integers are missing"
Since the numbers are all 64 bits long, we can use radix sort on them, which is O(n). Sort 'em, then scan 'em until you find what you're looking for.
If the smallest number is zero, scan forward until you find a gap; if the smallest number is not zero, the answer is zero.
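A sketch of the radix-sort-then-scan approach in Python, sorting one byte per pass (eight passes for 64-bit values; the helper names are mine):

def radix_sort_u64(values):
    # LSD radix sort on 64-bit unsigned integers, one byte per pass.
    for shift in range(0, 64, 8):
        buckets = [[] for _ in range(256)]
        for v in values:
            buckets[(v >> shift) & 0xFF].append(v)
        values = [v for bucket in buckets for v in bucket]
    return values

def smallest_missing_radix(values):
    ordered = radix_sort_u64(values)
    expected = 0
    for v in ordered:
        if v > expected:        # gap found
            return expected
        if v == expected:
            expected += 1       # duplicates of `expected` are skipped
    return expected

print(smallest_missing_radix([5, 0, 1, 3, 2]))  # prints 4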
For a space-efficient method, assuming all values are distinct, you can do it in O(k) space and O(k * log(N) * N) time. It's space efficient, there's no data moving, and all operations are elementary (adding, subtracting).
set U = N; L=0
First partition the number space in k regions. Like this:
region i spans from L + (i/k)*(U-L) to L + ((i+1)/k)*(U-L), for i = 0 .. k-1
Find how many numbers (count{i}) are in each region. (N*k steps)
Find the first region (h) that isn't full. That means count{h} < upper_limit{h}. (k steps)
if region h has width 1, its single value is the missing number and you've got your answer
set U = upper bound of region h; L = lower bound of region h
goto 2
This can be improved using hashing (thanks to Nic for this idea):
Same as above, except the region of each number is computed directly:
First partition the number space into k regions, where
region i spans from L + (i/k)*(U-L) to L + ((i+1)/k)*(U-L)
increment count{j} using j = floor(k * (number - L) / (U - L)) (if L <= number < U)
find the first region (h) that has fewer numbers in it than its width
if region h has width 1, its single value is your answer
set U = maximum value in region h; L = minimum value in region h
This will run in O(log(N)*N).
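A hedged Python sketch of the hashed variant (names and the exact bin arithmetic are mine; it assumes distinct values and that a missing value exists in [0, n], which holds when n = len(values)):

def smallest_missing_regions(values, n, k=16):
    # Narrow the range [0, n+1) by counting values into k regions per
    # pass; the first region that isn't full contains the smallest
    # missing number. O(k) extra space, roughly O(N * log_k(N)) time.
    lo, hi = 0, n + 1
    while hi - lo > 1:
        width = (hi - lo + k - 1) // k           # ceil((hi - lo) / k)
        counts = [0] * k
        for v in values:
            if lo <= v < hi:
                counts[(v - lo) // width] += 1
        for j, c in enumerate(counts):
            region_lo = lo + j * width
            region_hi = min(region_lo + width, hi)
            if c < region_hi - region_lo:        # region j is not full
                lo, hi = region_lo, region_hi
                break
    return lo

print(smallest_missing_regions([x for x in range(10) if x != 6], 9))
# prints 6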
I'd just sort them then run through the sequence until I find a gap (including the gap at the start between zero and the first number).
In terms of an algorithm, something like this would do it:
def smallest_not_in_list(list):
    sort(list)
    if list[0] != 0:
        return 0
    for i = 1 to list.last:
        if list[i] > list[i-1] + 1:    # equal neighbours are duplicates, not gaps
            return list[i-1] + 1
    if list[list.last] == 2^64 - 1:
        assert ("No gaps")
    return list[list.last] + 1
Of course, if you have a lot more memory than CPU grunt, you could create a bitmask of all possible 64-bit values and just set the bits for every number in the list. Then look for the first 0-bit in that bitmask. That turns it into an O(n) operation in terms of time but pretty damned expensive in terms of memory requirements :-)
I doubt you could improve on O(n) since I can't see a way of doing it that doesn't involve looking at each number at least once.
The algorithm for that one would be along the lines of:
def smallest_not_in_list(list):
    bitmask = mask_make(2^64) // might take a while :-)
    mask_clear_all (bitmask)
    for i = 1 to list.last:
        mask_set (bitmask, list[i])
    for i = 0 to 2^64 - 1:
        if mask_is_clear (bitmask, i):
            return i
    assert ("No gaps")
Sort the list, look at the first and second elements, and start going up until there is a gap.
We could use a hash table to hold the numbers. Once all numbers are inserted, run a counter up from 0 until we find the lowest one missing. A reasonably good hash will insert and retrieve in constant time.
for every i in X                       // one scan, Θ(n) total
    hashtable.put(i, i)                // O(1) per insert
low = 0
while (hashtable.get(low) != null)     // at most n+1 iterations
    low++
print low
The worst case is when there are n elements in the array and they are {0, 1, ..., n-1}, in which case the answer is obtained at n, still keeping it O(n).
You can do it in O(n) time and O(1) additional space, although the hidden factor is quite large. This isn't a practical way to solve the problem, but it might be interesting nonetheless.
For every unsigned 64-bit integer (in ascending order) iterate over the list until you find the target integer or you reach the end of the list. If you reach the end of the list, the target integer is the smallest integer not in the list. If you reach the end of the 64-bit integers, every 64-bit integer is in the list.
Here it is as a Python function:
def smallest_missing_uint64(source_list):
    the_answer = None
    target = 0
    while target < 2**64:
        target_found = False
        for item in source_list:
            if item == target:
                target_found = True
        if not target_found and the_answer is None:
            the_answer = target
        target += 1
    return the_answer
This function is deliberately inefficient to keep it O(n). Note especially that the function keeps checking target integers even after the answer has been found. If the function returned as soon as the answer was found, the number of times the outer loop ran would be bounded by the size of the answer, which is bounded by n. That change would make the run time O(n^2), even though it would be a lot faster.
Thanks to egon, swilden, and Stephen C for my inspiration. First, we know the bounds of the goal value because it cannot be greater than the size of the list. Also, a 1GB list could contain at most 134217728 (128 * 2^20) 64-bit integers.
Hashing part
I propose using hashing to dramatically reduce our search space. First, square root the size of the list. For a 1GB list, that's N ≈ 11,586. Set up an integer array of size N. Iterate through the list, and take the square root* of each number you find as your hash. In your hash table, increment the counter for that hash. Next, iterate through your hash table. The first bucket you find that is not equal to its max size defines your new search space.
Bitmap part
Now set up a regular bit map equal to the size of your new search space, and again iterate through the source list, filling out the bitmap as you find each number in your search space. When you're done, the first unset bit in your bitmap will give you your answer.
This will be completed in O(n) time and O(sqrt(n)) space.
(*You could use something like bit shifting to do this a lot more efficiently, and just vary the number and size of buckets accordingly.)
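A hedged Python sketch of the two-pass idea, using integer division into buckets of width about sqrt(N) instead of the square-root hash (same effect: both bucket count and bucket size are bounded by about sqrt(N); names are mine and distinct values are assumed):

from math import isqrt

def smallest_missing_two_pass(values):
    # Two passes: bucket counts narrow the search to one bucket of
    # width ~sqrt(N), then a bitmap over that bucket finds the answer.
    # O(N) time, O(sqrt(N)) extra space; assumes distinct values.
    n = len(values)
    width = isqrt(n) + 1                    # bucket width ~ sqrt(N)
    nbuckets = n // width + 2
    counts = [0] * nbuckets

    # Pass 1: count values per bucket (the answer must be <= n).
    for v in values:
        if 0 <= v <= n:
            counts[v // width] += 1

    # The first bucket that cannot be full contains the answer.
    b = next(i for i, c in enumerate(counts) if c < width)

    # Pass 2: bitmap over just that bucket's range.
    seen = [False] * width
    for v in values:
        if b * width <= v < (b + 1) * width:
            seen[v - b * width] = True
    for offset, present in enumerate(seen):
        if not present:
            return b * width + offset

print(smallest_missing_two_pass([0, 1, 2, 5]))  # prints 3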
Well if there is only one missing number in a list of numbers, the easiest way to find the missing number is to sum the series and subtract each value in the list. The final value is the missing number.
int i = 0;
while (i < Array.Length)
{
    if (Array[i] == i + 1)
    {
        i++;
    }
    if (i < Array.Length)
    {
        // Swap only when the value is in range and its target slot doesn't
        // already hold it (avoids an infinite loop on duplicates).
        if (Array[i] > 0 && Array[i] <= Array.Length && Array[Array[i] - 1] != Array[i])
        {   // Swap: value v belongs at index v - 1
            int temp = Array[i];
            int anoTemp = Array[temp - 1];
            Array[temp - 1] = temp;
            Array[i] = anoTemp;
        }
        else
            i++;
    }
}
for (int j = 0; j < Array.Length; j++)
{
    if (Array[j] != j + 1)
    {
        Console.WriteLine(j + 1);
        break;
    }
    else if (j == Array.Length - 1)
        Console.WriteLine("Not Found !!");
}
Here's my answer written in Java:
Basic Idea:
1- Loop through the array, throwing away duplicates and negative numbers while summing up the rest, getting the maximum positive number as well, and keeping the unique numbers in a Map.
2- Compute the sum as max * (max+1)/2.
3- Find the difference between the sums calculated at steps 1 & 2
4- Loop again from 1 to the minimum of [sums difference, max] and return the first number that is not in the map populated in step 1.
public static int solution(int[] A) {
    if (A == null || A.length == 0) {
        throw new IllegalArgumentException();
    }
    int sum = 0;
    Map<Integer, Boolean> uniqueNumbers = new HashMap<Integer, Boolean>();
    int max = A[0];
    for (int i = 0; i < A.length; i++) {
        if (A[i] < 0) {
            continue;
        }
        if (uniqueNumbers.get(A[i]) != null) {
            continue;
        }
        if (A[i] > max) {
            max = A[i];
        }
        uniqueNumbers.put(A[i], true);
        sum += A[i];
    }
    int completeSum = (max * (max + 1)) / 2;
    for (int j = 1; j <= Math.min((completeSum - sum), max); j++) {
        if (uniqueNumbers.get(j) == null) { // O(1)
            return j;
        }
    }
    // All-negative case
    if (uniqueNumbers.isEmpty()) {
        return 1;
    }
    return 0;
}
As Stephen C smartly pointed out, the answer must be a number no larger than the length of the array. I would then find the answer by binary search. This optimizes the worst case (so the interviewer can't catch you in a 'what if' pathological scenario). In an interview, do point out you are doing this to optimize for the worst case.
The way to use binary search is to subtract the number you are looking for from each element of the array, and check for negative results.
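One concrete way to realize such a binary search, which may differ in detail from what the author had in mind, is to binary-search on the answer itself and count elements below the midpoint at each step (a counting pass instead of explicit subtraction; names are mine, distinct values assumed):

def smallest_missing_binary_search(values):
    # Binary search on the answer: the smallest missing non-negative
    # integer is at most len(values). If the count of values in
    # [lo, mid) is less than mid - lo, the gap is in [lo, mid).
    lo, hi = 0, len(values) + 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        below = sum(1 for v in values if lo <= v < mid)
        if below < mid - lo:
            hi = mid
        else:
            lo = mid
    return lo

print(smallest_missing_binary_search([0, 1, 3, 4]))  # prints 2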
I like the "guess zero" approach. If the numbers were random, zero is highly probable. If the "examiner" set a non-random list, then add one and guess again:
LowNum = 0
i = 0
do forever {
    if i == N then leave    /* Processed entire array */
    if array[i] == LowNum {
        LowNum++
        i = 0
    }
    else {
        i++
    }
}
display LowNum
The worst case is n*N comparisons with n = N (i.e. N^2), but in practice n is highly likely to be a small number (e.g. 1).
I am not sure if I got the question. But if, for the list 1,2,3,5,6, the missing number is 4, then the missing number can be found in O(n) by:
(n+2)(n+1)/2-(n+1)n/2
EDIT: sorry, I guess I was thinking too fast last night. Anyway, the second part should actually be replaced by sum(list), which is where the O(n) comes from. The formula reveals the idea behind it: for n sequential integers, the sum should be (n+1)*n/2. If there is a missing number, the sum would be equal to the sum of (n+1) sequential integers minus the missing number.
Thanks for pointing out the fact that I was putting some middle pieces in my mind.
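For instance (my own tiny illustration in Python): for [1, 2, 3, 5, 6] we have n = 5, the full sum of 1..6 is (n+2)(n+1)/2 = 21, and 21 - 17 = 4:

def missing_from_consecutive(lst):
    # lst holds n of the n+1 consecutive integers 1 .. n+1
    n = len(lst)
    return (n + 2) * (n + 1) // 2 - sum(lst)

print(missing_from_consecutive([1, 2, 3, 5, 6]))  # prints 4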
Well done Ants Aasma! I thought about the answer for about 15 minutes and independently came up with an answer in a similar vein of thinking to yours:
#define SWAP(x,y) { numerictype_t tmp = x; x = y; y = tmp; }

int minNonNegativeNotInArr (numerictype_t * a, size_t n) {
    int m = n;
    for (int i = 0; i < m;) {
        if (a[i] >= m || a[i] < i || a[i] == a[a[i]]) {
            m--;
            SWAP (a[i], a[m]);
            continue;
        }
        if (a[i] > i) {
            SWAP (a[i], a[a[i]]);
            continue;
        }
        i++;
    }
    return m;
}
m represents "the current maximum possible output given what I know about the first i inputs and assuming nothing else about the values until the entry at m-1".
This value of m will be returned only if (a[i], ..., a[m-1]) is a permutation of the values (i, ..., m-1). Thus if a[i] >= m or if a[i] < i or if a[i] == a[a[i]] we know that m is the wrong output and must be at least one element lower. So decrementing m and swapping a[i] with a[m], we can recurse.
If this is not true but a[i] > i then knowing that a[i] != a[a[i]] we know that swapping a[i] with a[a[i]] will increase the number of elements in their own place.
Otherwise a[i] must be equal to i, in which case we can increment i, knowing that all the values up to and including this index are equal to their index.
The proof that this cannot enter an infinite loop is left as an exercise to the reader. :)
The Dafny fragment from Ants' answer shows why the in-place algorithm may fail without its precondition: the requires clause states that every value in the array must lie within the array's bounds.
method AntsAasma(A: array<int>) returns (M: int)
    requires A != null && forall N :: 0 <= N < A.Length ==> 0 <= A[N] < A.Length;
    modifies A;
{
    // Pass 1, move every value to the position of its value
    var N := A.Length;
    var cursor := 0;
    while (cursor < N)
    {
        var target := A[cursor];
        while (0 <= target < N && target != A[target])
        {
            var new_target := A[target];
            A[target] := target;
            target := new_target;
        }
        cursor := cursor + 1;
    }
    // Pass 2, find first location where the index doesn't match the value
    cursor := 0;
    while (cursor < N)
    {
        if (A[cursor] != cursor)
        {
            return cursor;
        }
        cursor := cursor + 1;
    }
    return N;
}
Paste the code into the validator with and without the forall ... clause to see the verification error. The second error is a result of the verifier not being able to establish a termination condition for the Pass 1 loop. Proving this is left to someone who understands the tool better.
Here's an answer in Java that does not modify the input and uses O(N) time and N bits plus a small constant overhead of memory (where N is the size of the list):
int smallestMissingValue(List<Integer> values) {
    BitSet bitset = new BitSet(values.size() + 1);
    for (int i : values) {
        if (i >= 0 && i <= values.size()) {
            bitset.set(i);
        }
    }
    return bitset.nextClearBit(0);
}
def solution(A):
    A = [x for x in A if x >= 0]
    if len(A) == 0:
        return 1
    maxi = max(A)
    if maxi <= len(A):
        maxi = len(A)
    target = ['X' for x in range(maxi + 1)]
    for number in A:
        target[number] = number
    count = 1
    while count < maxi + 1:
        if target[count] == 'X':
            return count
        count += 1
    return target[count - 1] + 1
Got 100% for the above solution.
1) Filter out negatives and zero
2) Sort and remove duplicates
3) Visit the array
Complexity: O(N * log(N)) (dominated by the sort)
Using Java 8:
public int solution(int[] A) {
    int result = 1;
    boolean found = false;
    A = Arrays.stream(A).filter(x -> x > 0).sorted().distinct().toArray();
    //System.out.println(Arrays.toString(A));
    for (int i = 0; i < A.length; i++) {
        result = i + 1;
        if (result != A[i]) {
            found = true;
            break;
        }
    }
    if (!found && result == A.length) {
        // result is larger than the max element in the array
        result++;
    }
    return result;
}
An unordered_set can be used to store all the positive numbers, and then we can iterate from 1 to length of unordered_set, and see the first number that does not occur.
int firstMissingPositive(vector<int>& nums) {
    unordered_set<int> fre;
    // Store each positive number in the hash set.
    for (int i = 0; i < nums.size(); i += 1) {
        if (nums[i] > 0)
            fre.insert(nums[i]);
    }
    int i = 1;
    // Iterate from 1 to the size of the set, checking
    // for the occurrence of 'i'.
    for (auto it = fre.begin(); it != fre.end(); ++it) {
        if (fre.find(i) == fre.end())
            return i;
        i += 1;
    }
    return i;
}
Solution using basic JavaScript:
var a = [1, 3, 6, 4, 1, 2];

function findSmallest(a) {
    // Check candidates 1 .. a.length + 1 (the +1 covers the case
    // where every value 1..a.length is present).
    for (var i = 1; i <= a.length + 1; i++) {
        var m = 1;
        for (var j = 0; j < a.length; j++) {
            if (i === a[j]) {
                m++;
            }
        }
        if (m === 1) {
            return i;
        }
    }
}

console.log(findSmallest(a));
Hope this helps someone.
With Python this is not the most efficient, but it is correct:
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import datetime

# write your code in Python 3.6
def solution(A):
    MIN = 0
    MAX = 1000000
    possible_results = range(MIN, MAX)
    for i in possible_results:
        next_value = (i + 1)
        if next_value not in A:
            return next_value
    return 1

test_case_0 = [2, 2, 2]
test_case_1 = [1, 3, 44, 55, 6, 0, 3, 8]
test_case_2 = [-1, -22]
test_case_3 = [x for x in range(-10000, 10000)]
test_case_4 = [x for x in range(0, 100)] + [x for x in range(102, 200)]
test_case_5 = [4, 5, 6]

print("---")
a = datetime.datetime.now()
print(solution(test_case_0))
print(solution(test_case_1))
print(solution(test_case_2))
print(solution(test_case_3))
print(solution(test_case_4))
print(solution(test_case_5))
def solution(A):
    A.sort()
    j = 1
    for i, elem in enumerate(A):
        if j < elem:
            break
        elif j == elem:
            j += 1
            continue
        else:
            continue
    return j
this can help:
0- A is [5, 3, 2, 7];
1- Define B with Length = A.Length; (O(1))
2- Initialize B's cells with 1; (O(n))
3- For each item in A:
   if (item < B.Length) then B[item] = -1 (O(n))
4- The answer is the smallest index in B such that B[index] != -1; if every cell is -1, the answer is B.Length (O(n))

Report all missing numbers in an array of integers (represented in binary)

I recently had a friend report to me that during a job interview he was asked the following question, which seems to be a pretty popular one:
You are given a list L[1...n] that contains all the elements from 0 to n except one. The elements of this list are represented in binary and are not given in any particular order, and the only operation we can use to access them is to fetch the jth bit of L[i] in constant time.
How can you find the missing number in O(n) ?
He was able to answer this question (which I believe has multiple solutions, none of them too complicated). For example, the following pseudo-code solves the above problem:
Say all numbers are represented by k bits and set j as the least significant bit (initially the rightmost).
1. Starting from j, separate all the numbers in L into two sets (S1 containing all numbers that have 1 as their jth bit, and S2 containing all numbers that have 0 in that position).
2. The set that has fewer elements than expected contains the missing number; recurse on this subset and set j = j+1 (move to the next more significant bit).
At each iteration we reduce the size of the set by half, so the total work is n + n/2 + n/4 + ... = O(n).
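A hedged Python sketch of this recursion for a single missing number (it manipulates whole integers rather than fetching individual bits, as the stated access model would strictly require; the function name is mine):

def find_missing(nums):
    # nums holds all of 0..n except one value; O(n) total over all levels.
    if not nums:
        return 0
    zeros = [x for x in nums if x & 1 == 0]
    ones = [x for x in nums if x & 1 == 1]
    if len(zeros) <= len(ones):
        # The missing number is even: its remaining bits are the value
        # missing from the halved zeros side.
        return 2 * find_missing([x >> 1 for x in zeros])
    else:
        return 2 * find_missing([x >> 1 for x in ones]) + 1

print(find_missing([x for x in range(10) if x != 6]))  # prints 6

Each level does linear work on a set half the size of the previous one, giving the n + n/2 + n/4 + ... = O(n) total noted above.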
However, the follow-up question was: "What if we now have k numbers missing in our list L and we wish to report all k numbers while still keeping the O(n) complexity and the limitations of the initial problem? How can it be done?"
Any suggestions?
bool J[0..n] = {false, false, ...}
int temp;
for (i = 1; i <= n; i++)
{
    temp = bitwise copy of L[i];
    J[temp] = true;
}
for (i = 0; i <= n; i++)
{
    if (J[i] == false)
        print i;
}
Lol, that's the gist of it.
Am I understanding the problem correctly? It wasn't all that clear to me what exactly was meant by saying the only operation is to access the jth bit of L[i].
You can solve the original problem in O(n) by just doing a linear walk of the array until you find a number that doesn't match the expected value, like so (yes, I know I'm using an array of ints to approximate the array of bits, but the concept is the same):
int[] bits = {1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0};
int bitIndex = 0;
for (int num = 1; num < Integer.MAX_VALUE; num++) {
    int numBits = (int) (Math.log(num) / Math.log(2)) + 1;
    int nextNum = 0;
    for (int index = 0; index < numBits; index++) {
        nextNum = (nextNum << 1) | bits[bitIndex + index];
    }
    if (nextNum != num) {
        System.out.println("Missing number: expected=" + num + ", actual=" + nextNum);
        break;
    }
    bitIndex += numBits;
}
If you want to print all of the numbers that are not present in the array while keeping O(n) runtime, just replace the break; with num = nextNum; to continue checking for the next number.
Though there are some potential problems with this approach. If multiple consecutive numbers are missing then all bets are off. Also if the number of bits in num + 1 is larger than the number of bits in num, and num is missing from the bit array, then the bit index will be out of alignment with the data.
Of course, if multiple numbers are allowed to be missing, then the problem isn't really solvable. Consider for example:
{1,1,1,1,1,1,1}
It's just as valid in this case to say that I have numbers 1, 3, and 15 as it is to say that I only have 127 or that I have 7 and 15. When multiple consecutive values are permitted to go missing, the way to parse the bits essentially becomes arbitrary.
So perhaps one way to answer the second question is to read all the bits into a single large integer, and say "you have [very large number], and all the numbers before it are missing". Then you've produced a valid answer in O(n) time.
My idea is to solve it in the following way:
let's say 2^M is the lowest power of 2 that is higher than N:
2^M > N, 2^(M-1) <= N
Now XOR together all the numbers in the list along with all the numbers from N+1 to 2^M - 1
(since you can only fetch bit j at a time, do it for each bit position separately - it's the same).
The result of all the XORs will be the number you are looking for, because the XOR of all the numbers from 1 to 2^M - 1 is 0 (each bit appears an even number of times).
for example: if N=6, and the missing number is 3:
M=3 => 2^M-1=7 =>
1 XOR 2 XOR 4 XOR 5 XOR 6 XOR 7 = 3
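A hedged Python sketch of this XOR approach (the function name is mine; the list is padded with N+1 .. 2^M - 1 so the grand XOR telescopes to the missing value):

def missing_by_xor(lst, n):
    # lst holds 0..n except one value. XOR of 1..(2^M - 1) is 0, so
    # XORing the list with the padding N+1..2^M - 1 leaves exactly
    # the missing number.
    m = 1
    while (1 << m) <= n:      # smallest M with 2^M > N
        m += 1
    acc = 0
    for x in lst:
        acc ^= x
    for x in range(n + 1, 1 << m):
        acc ^= x
    return acc

print(missing_by_xor([0, 1, 2, 4, 5, 6], 6))  # prints 3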
