Counting elementary operations - algorithm

I need to count the elementary operations of the code below:
public static int findmax(int[] a, int x) {
    int currentMax = a[0];
    for (int i = 1; i < a.length; i++) {
        if (a[i] > currentMax) {
            currentMax = a[i];
        }
    }
    return currentMax;
}
I understand that a primitive operation (such as assigning a value to a variable) is given a value of 1. So here assigning a[0] to currentMax accounts for 1 primitive operation executed.
Within the for loop: assigning 1 to i also accounts for 1. The i < a.length check and i++ are n - 1 each (i.e. 2(n-1) in total). However, I get confused about how to deal with the if statement. I'm aware that we're looking for the worst case (so we'd need to perform the if condition and the statement nested within that block), but I'm not sure what this is in terms of primitive operations.

Before the loop iterations
int currentMax = a[0];
Assignment: counts as 1.
int i = 1;
Assignment: counts as 1.
For each of the n iterations of the loop (note that here, n=a.length-1)
i < a.length
Comparison (returns true): counts as 1
i++
Incrementation: counts as 1
a[i] > currentMax
Comparison: counts as 1
currentMax = a[i];
Assignment: counts as 1
When exiting the loop
i < a.length
Comparison (returns false): counts as 1
CONCLUSION
You have in the worst case 1 + 1 + n*(1+1+1+1) + 1 = 4*n + 3 elementary operations, hence the complexity of your algorithm is Θ(n).
More specifically, to handle the if statement, you have of course to take into account the computation of its argument, but the word "if" itself doesn't count. The processor just jumps instantly to the next instruction depending on the result. Some may argue that this conditional jump may count as 1, but anyway this has no importance, since 4*n + 3 is the same complexity as 5*n + 3, i.e. Θ(n).
If you want to be precise and keep the constants, then you have to specify exactly what you mean, such as:
n+2 assignments
n incrementations
2*n+1 comparisons
In which case it is clear what you decided to count as elementary operations or not. But for instance, you could have also decided that accessing the array like a[i] was worth counting (it is actually one pointer addition plus one memory access), so you would add:
2*n+1 array accesses
Or if you want to be more precise, and account for the fact that one of the accesses is a[0] and does not require pointer arithmetic, you would say:
2*n+1 memory accesses
2*n pointer additions
So you see that it is up to you to decide what you count as "elementary operations", and all answers are equally true.
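To make the tally concrete, here is a hedged sketch (my own instrumented variant, not part of the original question) that counts the same categories of operations at run time; on a strictly increasing array, the worst case, it returns 4*(a.length - 1) + 3:

static long countOps(int[] a) {
    long ops = 0;
    int currentMax = a[0]; ops++;          // initial assignment
    int i = 1; ops++;                      // loop-variable assignment
    while (true) {
        ops++;                             // comparison i < a.length
        if (i >= a.length) break;
        ops++;                             // comparison a[i] > currentMax
        if (a[i] > currentMax) {
            currentMax = a[i]; ops++;      // assignment inside the if (happens every iteration in the worst case)
        }
        ops++;                             // increment i++
        i++;
    }
    return ops;                            // 4*(a.length - 1) + 3 on an ascending array
}

For example, countOps(new int[]{1, 2, 3, 4}) returns 15 = 4*3 + 3.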

Related

What's the time complexity of the code snippet below?

It looks like some sort of a partial-sort.
int n = a.length;
for (int i = 0; i < n; i++) {
    while (a[i] != i) {
        if (a[i] < 0 || a[i] >= n) // avoid stepping out of range
            break;
        if (a[i] == a[a[i]]) // avoid inf loop by duplicates
            break;
        int t = a[i];
        a[i] = a[t];
        a[t] = t;
    }
}
return a;
On first look, it seems like O(N^2), but when I run it, it seems O(N). Any ideas? Thanks in advance.
You're right that it's O(n):
To help explain this I'll make up a definition:
Reflective: An element, a[i], in an array, a, is reflective if a[i] = i.
Iterations of while loop that do result in a break:
For each value of i, at most one break is executed within the while loop (counting an exit via the while condition too). As there are n values of i, this means there are at most n iterations of the while loop that result in a break.
Iterations of while loop that don't result in a break:
For this part it might help to imagine our array where each element is either reflective (1), or non-reflective (0):
| 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 1 |
Once we have passed the break points, we know that a[i] != a[a[i]] (i.e. if we name a[i] as t, then we know that a[t] != t). And because we later assign a[t] = t, we have changed an element of the array from non-reflective to reflective. Note that nowhere in the code do we make a reflective element non-reflective: the assignment a[i] = a[t] could result in a[i] being non-reflective, but we also know that it wasn't reflective to begin with, because the while condition was true: a[i] != i.
From our visual, this means that no 1 ever changes to a 0, and yet every iteration of the while loop (that passes the break points) results in at least one 0 flipping to a 1.
Once you observe that every (non-break) iteration of the inner loop takes at least one (possibly two) non-reflective elements of the array and converts them to become permanently reflective, we realise that the total number of (non-break) iterations of the inner loop cannot exceed n over the entire run of the program.
In summary: i is iterated and checked in the for loop n times, and each does a constant amount of work, c1. There are at most n iterations of the while loop that correspond to a break, and at most n iterations that don't correspond to a break. Hence there are at most 2n iterations of the while loop in total. The work done in a single iteration of the while loop is bounded by some constant, c2.
Hence time complexity <= c1*n + c2*2*n = O(n).
As for the function of the code, it rearranges elements to make as many of them reflective as possible: if after this function a[i] is non-reflective, then the value i isn't present in the array.
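To see the bound empirically, here is a hedged sketch (my own test harness with a made-up input) that instruments the loop above and checks that the total number of while-loop body executions never exceeds 2n:

int[] a = {3, 0, 4, 2, 1, 7, 7, 5};   // arbitrary sample input
int n = a.length;
long iterations = 0;
for (int i = 0; i < n; i++) {
    while (a[i] != i) {
        iterations++;                  // count every entry into the loop body
        if (a[i] < 0 || a[i] >= n)     // avoid stepping out of range
            break;
        if (a[i] == a[a[i]])           // avoid inf loop by duplicates
            break;
        int t = a[i];
        a[i] = a[t];
        a[t] = t;
    }
}
System.out.println(iterations <= 2L * n);   // expected: true for any input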

Order and Growth Function of Loops

I'm trying to find the order and growth function of this for loop inside a function which takes in an array of length n > 2.
This function orders the array in ascending order. I'm trying to find the order for a worst case scenario: when the array is ordered initially in descending order and the function therefore has to iterate through the array many times to sort it.
Here is the loop:
for (int next = 1; next < array.length; next++) {
    int value = array[next];
    int index = next;
    while (index > 0 && value < array[index - 1]) {
        array[index] = array[index - 1];
        index--;
    }
    array[index] = value;
}
I've been racking my brains trying to figure it out, writing tests and writing tons of functions down, and I get close but never right on. How would you go through such a loop to find its order and growth function?
Any direction would greatly be appreciated. Thank you so much.
Let n be the array length. This loop is of worst-case running time O(n^2). The simple way to see this is as follows:
When next = 1, the number of operations done by the while loop is at most 1. When next = 2, the number of operations is at most 2, and so on until next = (n - 1). We can ignore the other operations done by the for loop because they constitute lower order terms which are irrelevant to growth.
So now, the number of operations is k * (1 + 2 + 3 + 4 + ... + (n - 1)) = k*(n*(n - 1)/2) = k*n^2/2 - k*n/2, where k is a constant factor.
Therefore, the growth of the function is of order n^2.
Edit:
To address the comment, we usually do not count the total number of statements because there is no standard to do so.
For example, would you count one iteration of the following loop as one statement (a single print statement) or two statements (the print statement and incrementing i)?
for (int i = 0; i < n; i++)
{
    print(i);
}
In addition, it is frankly not a very useful metric. In most cases, we only care about the highest order term of an algorithm.
However, to answer your question, I would count the loop as performing these many statements:
2n^2 + 2n - 3.
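As a sanity check on the quadratic count, here is a hedged sketch (my own harness) that runs the loop above on a descending array, the worst case, and counts the inner while-loop iterations; the count comes out to n*(n-1)/2:

int n = 10;
int[] array = new int[n];
for (int k = 0; k < n; k++) array[k] = n - k;          // descending: n, n-1, ..., 1
long shifts = 0;
for (int next = 1; next < array.length; next++) {
    int value = array[next];
    int index = next;
    while (index > 0 && value < array[index - 1]) {
        array[index] = array[index - 1];
        index--;
        shifts++;                                       // one inner-loop iteration
    }
    array[index] = value;
}
System.out.println(shifts == (long) n * (n - 1) / 2);   // expected: true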

Changing complexity from O(n) to O(1)

For the following code :
s = 0;
for (i = m; i <= (2*n - 1); i += m) {
    if (i <= n + 1) {
        s += (i - 1) / 2;
    }
    else {
        s += (2*n - i + 1) / 2;
    }
}
I want to change the complexity of the code from O(n) to O(1), so I wanted to eliminate the for loop. But as the sum s accumulates values like (i-1)/2 or (2*n-i+1)/2, eliminating the loop involves a tedious calculation of the floor value of each (i-1)/2 or (2*n-i+1)/2. It became very difficult for me, as I might have derived the wrong formula for the sums of floors. Can you please help me change the complexity from O(n) to O(1), or help me with these floor summations? Is there any other way to reduce the complexity? If yes, then how?
As Don Roby said, there is a plain old arithmetic solution to your problem. Let me show you how to do it for the first values of i.
EDIT 2: CODE FOR THE LOWER PART
for (int i = m; i <= n + 1; i += m) // old computation
    s += (i - 1) / 2;

int a = (n + 1) / m;       // maximum value of j (number of iterations)
int b = (a * (a + 1)) / 2; // 1 + 2 + ... + a
int v = 0;
int p;
if (m % 2 == 0) {
    p = m / 2;
    v = b*p - a;           // each term is p*j - 1, summed over j = 1..a
}
else {
    p = (m - 1) / 2;
    int sum1 = ((a/2) * (a/2 - 1)) / 2;         // sum( 1 <= j <= a ) of (j-1)/2, j even
    int sum2 = (((a-1)/2) * ((a-1)/2 + 1)) / 2; // sum( 1 <= j <= a ) of (j-1)/2, j odd
    v = b*p;               // the p*j part, same as in the even case
    v += sum1;
    v += sum2;
}
System.out.println(" Are both results equal? " + (s == v));
System.out.println( " Are both result equals ? "+ (s == v));
How do I come up with it? I take
for (i = m; i <= n + 1; i += m)
    s += (i - 1) / 2;
I make the change of variable i = j*m:
for (j = 1; j*m <= n + 1; j++)
    s += (j*m - 1) / 2;
Let a = floor((n+1)/m), the number of iterations. There are 3 cases:
m is even: writing m = 2p, the interior of the loop is s += p*j - 1, and the result is p*(a*(a+1))/2 - a, i.e. b*p - a.
m is odd and the iterator j is even
m is odd and the iterator j is odd
When m is odd, you can write m = 2p + 1 and the interior of the loop becomes
s += p*j + (j-1)/2
The p*j part sums to b*p as before; for the (j-1)/2 part, split the sum by whether j is even or odd and add the two parts.
The next loop you need to compute is
for (int i = a + 1; i <= (2*n - 1); i += m) // a is (n+1)/m
    s += (2*n - i + 1) / 2;
which is the same as
for (int i = 1; i <= (2*n - 1) - a; i += m)
    s += (2*n - a) / 2 - (i - 1) / 2;
This loop is similar to the first one, so there is not much work to do...
Indeed, this is tedious.
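To show where such closed forms come from, here is a hedged sketch (my own example values) that checks the even-m case of the first loop against direct summation, using a = (n+1)/m and p = m/2:

int n = 37, m = 4;                                 // arbitrary values with m even
int loopSum = 0;
for (int i = m; i <= n + 1; i += m)
    loopSum += (i - 1) / 2;                        // the original lower-part loop
int a = (n + 1) / m;                               // number of iterations
int p = m / 2;
int closedForm = p * (a * (a + 1) / 2) - a;        // sum of (p*j - 1) for j = 1..a
System.out.println(loopSum == closedForm);         // expected: true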
My approach to this would be to first write characterizing tests asserting the values produced for different values of m and n, and then start refactoring.
Your main loop has a change of logic based on getting halfway through (the if(i<=n+1) choice), so I'd first split it into two loops based on that.
Then you have in each of the resulting loops a computation that varies principally on whether i is even or odd. Split each into 2 more loops separating these, and the floor computations may be simpler to understand. Alternatively, you might see a pattern of repeated values that lets you simplify these loops in a different way.
Each of the resulting loops will likely be something resembling a sum of an arithmetic progression, so you'll likely find that they can be replaced by closed form computations not requiring loops at all.
While you go along this path, you might also refactor to extract portions of the computation to functions. Write characterizing tests for these as you extract them.
Keep running all your tests as you proceed and you'll likely be able to reduce this to a sum of simple computations, which might then reduce further by plain old arithmetic.

Complexity analysis of SelectionSort

Here's a SelectionSort routine I wrote. Is my complexity analysis that follows correct?
public static void selectionSort(int[] numbers) {
    // Iterate over each cell starting from the last one and working backwards
    for (int i = numbers.length - 1; i >= 1; i--)
    {
        // Always set the max pos to 0 at the start of each iteration
        int maxPos = 0;
        // Start at cell 1 and iterate up to the second last cell
        for (int j = 1; j < i; j++)
        {
            // If the number in the current cell is larger than the one in maxPos,
            // set a new maxPos
            if (numbers[j] > numbers[maxPos])
            {
                maxPos = j;
            }
        }
        // We now have the position of the maximum number. If the maximum number is greater
        // than the number in the current cell swap them
        if (numbers[maxPos] > numbers[i])
        {
            int temp = numbers[i];
            numbers[i] = numbers[maxPos];
            numbers[maxPos] = temp;
        }
    }
}
Complexity Analysis
Outer loop (comparison & assignment): 2 ops performed n times = 2n ops
Assigning maxPos: n ops
Inner loop (comparison & assignment): 2 ops performed n² times = 2n² ops
Comparison of array elements (2 array references & a comparison): 3n² ops
Assigning new maxPos: n² ops
Comparison of array elements (2 array references & a comparison): 3n² ops
Assignment & array reference: 2n² ops
Assignment & 2 array references: 3n² ops
Assignment & array reference: 2n² ops
Total number of primitive operations
2n + n + 2n² + 3n² + n² + 3n² + 2n² + 3n² + 2n² = 16n² + 3n
Leading to Big Oh(n²)
Does that look correct? Particularly when it comes to the inner loop and the stuff inside it...
Yes, O(N²) is correct.
Edit: It's a little hard to guess at exactly what they may want as far as "from first principles" goes, but I would guess that they're looking for (in essence) something on the order of a proof (or at least indication) that the basic definition of big-O is met:
there exist positive constants c and n0 such that:
0 ≤ f(n) ≤ cg(n) for all n ≥ n0.
So, the next step after finding 16N² + 3N would be to find the correct values for n0 and c. At least at first glance, c appears to be 16, and n0 to be -3 (which would probably be treated as 0, since a negative number of elements has no real meaning).
Generally it is pointless (and incorrect) to add up actual operations, because operations take various numbers of processor cycles, some of them dereference values from memory which takes a lot more time, then it gets even more complex because compilers optimize code, then you have stuff like cache locality, etc, so unless you know really, really well how everything works underneath, you are adding up apples and oranges. You can't just add up "j < i", "j++", and "numbers[i] = numbers[maxPos]" as if they were equal, and you don't need to do so - for the purpose of complexity analysis, a constant time block is a constant time block. You are not doing low level code optimization.
The complexity is indeed N^2, but your coefficients are meaningless.

array- having some issues [duplicate]

An interesting interview question that a colleague of mine uses:
Suppose that you are given a very long, unsorted list of unsigned 64-bit integers. How would you find the smallest non-negative integer that does not occur in the list?
FOLLOW-UP: Now that the obvious solution by sorting has been proposed, can you do it faster than O(n log n)?
FOLLOW-UP: Your algorithm has to run on a computer with, say, 1GB of memory
CLARIFICATION: The list is in RAM, though it might consume a large amount of it. You are given the size of the list, say N, in advance.
If the data structure can be mutated in place and supports random access then you can do it in O(N) time and O(1) additional space. Just go through the array sequentially, and for every index write the value at that index to the index specified by the value, recursively placing any value at that location to its place and throwing away values > N. Then go through the array again looking for a spot where the value doesn't match the index - that's the smallest value not in the array. This results in at most 3N comparisons and only uses a few values' worth of temporary space.
# Pass 1, move every value to the position of its value
for cursor in range(N):
    target = array[cursor]
    while target < N and target != array[target]:
        new_target = array[target]
        array[target] = target
        target = new_target

# Pass 2, find first location where the index doesn't match the value
for cursor in range(N):
    if array[cursor] != cursor:
        return cursor
return N
Here's a simple O(N) solution that uses O(N) space. I'm assuming that we are restricting the input list to non-negative numbers and that we want to find the first non-negative number that is not in the list.
Find the length of the list; let's say it is N.
Allocate an array of N booleans, initialized to all false.
For each number X in the list, if X is less than N, set the X'th element of the array to true.
Scan the array starting from index 0, looking for the first element that is false. If you find the first false at index I, then I is the answer. Otherwise (i.e. when all elements are true) the answer is N.
In practice, the "array of N booleans" would probably be encoded as a "bitmap" or "bitset" represented as a byte or int array. This typically uses less space (depending on the programming language) and allows the scan for the first false to be done more quickly.
This is how / why the algorithm works.
Suppose that the N numbers in the list are not distinct, or that one or more of them is greater than N. This means that there must be at least one number in the range 0 .. N - 1 that is not in the list. So the problem of finding the smallest missing number therefore reduces to the problem of finding the smallest missing number less than N. This means that we don't need to keep track of numbers that are greater than or equal to N ... because they won't be the answer.
The alternative to the previous paragraph is that the list is a permutation of the numbers from 0 .. N - 1. In this case, step 3 sets all elements of the array to true, and step 4 tells us that the first "missing" number is N.
The computational complexity of the algorithm is O(N) with a relatively small constant of proportionality. It makes two linear passes through the list, or just one pass if the list length is known to start with. There is no need to hold the entire list in memory, so the algorithm's asymptotic memory usage is just what is needed to represent the array of booleans; i.e. O(N) bits.
(By contrast, algorithms that rely on in-memory sorting or partitioning assume that you can represent the entire list in memory. In the form the question was asked, this would require O(N) 64-bit words.)
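Here is a hedged Java sketch of steps 1 to 4 above, assuming an int[] input for brevity (the question is about unsigned 64-bit values, but the idea is identical):

static int smallestMissing(int[] list) {
    int n = list.length;
    boolean[] seen = new boolean[n];          // step 2: N booleans, all false
    for (int x : list)
        if (x >= 0 && x < n) seen[x] = true;  // step 3: mark values below N
    for (int i = 0; i < n; i++)
        if (!seen[i]) return i;               // step 4: first false index
    return n;                                 // the list was a permutation of 0..N-1
}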
#Jorn comments that steps 1 through 3 are a variation on counting sort. In a sense he is right, but the differences are significant:
A counting sort requires an array of (at least) Xmax - Xmin counters, where Xmax is the largest number in the list and Xmin is the smallest number in the list. Each counter has to be able to represent N states; i.e. assuming a binary representation it has to be an integer type of (at least) ceiling(log2(N)) bits.
To determine the array size, a counting sort needs to make an initial pass through the list to determine Xmax and Xmin.
The minimum worst-case space requirement is therefore ceiling(log2(N)) * (Xmax - Xmin) bits.
By contrast, the algorithm presented above simply requires N bits in the worst and best cases.
However, this analysis leads to the intuition that if the algorithm made an initial pass through the list looking for a zero (and counting the list elements if required), it would give a quicker answer using no space at all if it found the zero. It is definitely worth doing this if there is a high probability of finding at least one zero in the list. And this extra pass doesn't change the overall complexity.
EDIT: I've changed the description of the algorithm to use "array of booleans" since people apparently found my original description using bits and bitmaps to be confusing.
Since the OP has now specified that the original list is held in RAM and that the computer has only, say, 1GB of memory, I'm going to go out on a limb and predict that the answer is zero.
1GB of RAM means the list can have at most 134,217,728 numbers in it. But there are 2^64 = 18,446,744,073,709,551,616 possible numbers. So the probability that zero is in the list is 1 in 137,438,953,472.
In contrast, my odds of being struck by lightning this year are 1 in 700,000. And my odds of getting hit by a meteorite are about 1 in 10 trillion. So I'm about ten times more likely to be written up in a scientific journal due to my untimely death by a celestial object than the answer not being zero.
As pointed out in other answers you can do a sort, and then simply scan up until you find a gap.
You can improve the algorithmic complexity to O(N) and keep O(N) space by using a modified QuickSort where you eliminate partitions which are not potential candidates for containing the gap.
On the first partition phase, remove duplicates.
Once the partitioning is complete, look at the number of items in the lower partition.
Is this value equal to the value used for creating the partition?
    If so, then it implies that the gap is in the higher partition.
        Continue with the quicksort, ignoring the lower partition.
    Otherwise the gap is in the lower partition.
        Continue with the quicksort, ignoring the higher partition.
This saves a large number of computations.
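Here is a hedged sketch of the partition-elimination idea, assuming distinct non-negative values for simplicity (the answer above handles duplicates by removing them during partitioning); it finds the smallest value in [base, base + (hi - lo)] missing from a[lo, hi):

static long smallestMissing(long[] a, int lo, int hi, long base) {
    if (lo >= hi) return base;                // empty range: base itself is missing
    long pivot = base + (hi - lo) / 2;        // midpoint of the candidate value range
    int split = lo;
    for (int i = lo; i < hi; i++) {           // partition: values <= pivot go to the left
        if (a[i] <= pivot) {
            long t = a[split]; a[split] = a[i]; a[i] = t;
            split++;
        }
    }
    long expectedLeft = pivot - base + 1;     // how many distinct values fit in [base, pivot]
    if (split - lo < expectedLeft)
        return smallestMissing(a, lo, split, base);     // a gap exists in [base, pivot]
    return smallestMissing(a, split, hi, pivot + 1);    // [base, pivot] is fully covered
}

Called as smallestMissing(a, 0, a.length, 0), each recursive call partitions only a shrinking slice of the array, which is where the saving over a full quicksort comes from.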
To illustrate one of the pitfalls of O(N) thinking, here is an O(N) algorithm that uses O(1) space.
for i in [0..2^64):
    if i not in list: return i
print "no 64-bit integers are missing"
Since the numbers are all 64 bits long, we can use radix sort on them, which is O(n). Sort 'em, then scan 'em until you find what you're looking for.
If the smallest number is zero, scan forward until you find a gap. If the smallest number is not zero, the answer is zero.
For a space-efficient method, assuming all values are distinct, you can do it in O(k) space and O(k*log(N)*N) time. It's space efficient, there's no data moving, and all operations are elementary (adding, subtracting).
set U = N; L=0
First partition the number space into k regions, like this:
0->(1/k)*(U-L) + L, 0->(2/k)*(U-L) + L, 0->(3/k)*(U-L) + L ... 0->(U-L) + L
Find how many numbers (count{i}) are in each region. (N*k steps)
Find the first region (h) that isn't full. That means count{h} < upper_limit{h}. (k steps)
if h - count{h-1} = 1 you've got your answer
set U = count{h}; L = count{h-1}
goto 2
This can be improved using hashing (thanks to Nic for this idea).
Same as before:
First partition the number space into k regions, like this:
L + (i/k)*(U-L) -> L + ((i+1)/k)*(U-L)
inc count{j} using j = (number - L)/k (if L < number < U)
find first region (h) that doesn't have k elements in it
if count{h} = 1 h is your answer
set U = maximum value in region h L = minimum value in region h
This will run in O(log(N)*N).
I'd just sort them then run through the sequence until I find a gap (including the gap at the start between zero and the first number).
In terms of an algorithm, something like this would do it:
def smallest_not_in_list(list):
    sort(list)
    if list[0] != 0:
        return 0
    for i = 1 to list.last:
        if list[i] != list[i-1] + 1:
            return list[i-1] + 1
    if list[list.last] == 2^64 - 1:
        assert ("No gaps")
    return list[list.last] + 1
Of course, if you have a lot more memory than CPU grunt, you could create a bitmask of all possible 64-bit values and just set the bits for every number in the list. Then look for the first 0-bit in that bitmask. That turns it into an O(n) operation in terms of time but pretty damned expensive in terms of memory requirements :-)
I doubt you could improve on O(n) since I can't see a way of doing it that doesn't involve looking at each number at least once.
The algorithm for that one would be along the lines of:
def smallest_not_in_list(list):
    bitmask = mask_make(2^64) // might take a while :-)
    mask_clear_all (bitmask)
    for i = 1 to list.last:
        mask_set (bitmask, list[i])
    for i = 0 to 2^64 - 1:
        if mask_is_clear (bitmask, i):
            return i
    assert ("No gaps")
Sort the list, look at the first and second elements, and start going up until there is a gap.
We could use a hash table to hold the numbers. Once all numbers have been inserted, run a counter from 0 until we find the lowest missing one. A reasonably good hash will hash and store in constant time, and retrieve in constant time.
for every i in X                      // one scan, Θ(n) total
    hashtable.put(i, i);              // Θ(1) expected per insert
low = 0;
while (hashtable.get(low) <> null)    // at most n+1 probes
    low++;
print low;
The worst case is when there are n elements in the array and they are {0, 1, ..., n-1}, in which case the answer is obtained at n, still keeping it O(n).
You can do it in O(n) time and O(1) additional space, although the hidden factor is quite large. This isn't a practical way to solve the problem, but it might be interesting nonetheless.
For every unsigned 64-bit integer (in ascending order) iterate over the list until you find the target integer or you reach the end of the list. If you reach the end of the list, the target integer is the smallest integer not in the list. If you reach the end of the 64-bit integers, every 64-bit integer is in the list.
Here it is as a Python function:
def smallest_missing_uint64(source_list):
    the_answer = None
    target = 0L
    while target < 2L**64:
        target_found = False
        for item in source_list:
            if item == target:
                target_found = True
        if not target_found and the_answer is None:
            the_answer = target
        target += 1L
    return the_answer
This function is deliberately inefficient to keep it O(n). Note especially that the function keeps checking target integers even after the answer has been found. If the function returned as soon as the answer was found, the number of times the outer loop ran would be bounded by the size of the answer, which is bounded by n. That change would make the run time O(n^2), even though it would be a lot faster.
Thanks to egon, swilden, and Stephen C for my inspiration. First, we know the bounds of the goal value because it cannot be greater than the size of the list. Also, a 1GB list could contain at most 134217728 (128 * 2^20) 64-bit integers.
Hashing part
I propose using hashing to dramatically reduce our search space. First, take the square root of the size of the list. For a 1GB list, that's N = 11,586. Set up an integer array of size N. Iterate through the list, and take the square root* of each number you find as your hash. In your hash table, increment the counter for that hash. Next, iterate through your hash table. The first bucket you find that is not equal to its max size defines your new search space.
Bitmap part
Now set up a regular bit map equal to the size of your new search space, and again iterate through the source list, filling out the bitmap as you find each number in your search space. When you're done, the first unset bit in your bitmap will give you your answer.
This will be completed in O(n) time and O(sqrt(n)) space.
(*You could use something like bit shifting to do this a lot more efficiently, and just vary the number and size of buckets accordingly.)
Well if there is only one missing number in a list of numbers, the easiest way to find the missing number is to sum the series and subtract each value in the list. The final value is the missing number.
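A hedged sketch of that sum trick (my own variable names; assumes the list holds n distinct values from 0..n with exactly one of them missing):

long expected = (long) n * (n + 1) / 2;   // sum of 0, 1, ..., n
long actual = 0;
for (long v : list) actual += v;          // one linear pass over the list
long missing = expected - actual;         // the single absent value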
int i = 0;
while (i < Array.Length)
{
    if (Array[i] == i + 1)
    {
        i++;
    }
    if (i < Array.Length)
    {
        if (Array[i] <= Array.Length)
        {   // Swap
            int temp = Array[i];
            int AnoTemp = Array[temp - 1];
            Array[temp - 1] = temp;
            Array[i] = AnoTemp;
        }
        else
            i++;
    }
}
for (int j = 0; j < Array.Length; j++)
{
    if (Array[j] > Array.Length)
    {
        Console.WriteLine(j + 1);
        j = Array.Length;
    }
    else if (j == Array.Length - 1)
        Console.WriteLine("Not Found !!");
}
Here's my answer written in Java:
Basic Idea:
1- Loop through the array throwing away duplicate positives, zeros, and negative numbers while summing up the rest, getting the maximum positive number as well, and keeping the unique positive numbers in a Map.
2- Compute the sum as max * (max+1)/2.
3- Find the difference between the sums calculated at steps 1 & 2
4- Loop again from 1 to the minimum of [sums difference, max] and return the first number that is not in the map populated in step 1.
public static int solution(int[] A) {
    if (A == null || A.length == 0) {
        throw new IllegalArgumentException();
    }
    int sum = 0;
    Map<Integer, Boolean> uniqueNumbers = new HashMap<Integer, Boolean>();
    int max = A[0];
    for (int i = 0; i < A.length; i++) {
        if (A[i] < 0) {
            continue;
        }
        if (uniqueNumbers.get(A[i]) != null) {
            continue;
        }
        if (A[i] > max) {
            max = A[i];
        }
        uniqueNumbers.put(A[i], true);
        sum += A[i];
    }
    int completeSum = (max * (max + 1)) / 2;
    for (int j = 1; j <= Math.min((completeSum - sum), max); j++) {
        if (uniqueNumbers.get(j) == null) { // O(1)
            return j;
        }
    }
    // All negative case
    if (uniqueNumbers.isEmpty()) {
        return 1;
    }
    return 0;
}
As Stephen C smartly pointed out, the answer must be a number smaller than the length of the array. I would then find the answer by binary search. This optimizes the worst case (so the interviewer can't catch you in a 'what if' pathological scenario). In an interview, do point out you are doing this to optimize for the worst case.
The way to use binary search is to subtract the number you are looking for from each element of the array, and check for negative results.
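Here is a hedged sketch of one way the binary-search idea could be realized (my interpretation, assuming distinct values so that counting elements in a range is conclusive; the original answer phrases the range test in terms of subtraction and sign checks):

static long smallestMissing(long[] a) {
    long lo = 0, hi = a.length;              // the answer lies somewhere in [0, a.length]
    while (lo < hi) {
        long mid = lo + (hi - lo) / 2;
        long inRange = 0;
        for (long v : a)
            if (v >= lo && v <= mid) inRange++;   // O(n) counting pass per step
        if (inRange < mid - lo + 1)
            hi = mid;                        // some value in [lo, mid] is missing
        else
            lo = mid + 1;                    // [lo, mid] is fully present, look higher
    }
    return lo;                               // O(n log n) overall
}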
I like the "guess zero" apprach. If the numbers were random, zero is highly probable. If the "examiner" set a non-random list, then add one and guess again:
LowNum = 0
i = 0
do forever {
    if i == N then leave   /* Processed entire array */
    if array[i] == LowNum {
        LowNum++
        i = 0
    }
    else {
        i++
    }
}
display LowNum
The worst case is n*N with n=N, but in practice n is highly likely to be a small number (e.g. 1).
I am not sure if I got the question. But if, for the list 1, 2, 3, 5, 6, the missing number is 4, then the missing number can be found in O(n) by:
(n+2)(n+1)/2 - (n+1)n/2
EDIT: sorry, I guess I was thinking too fast last night. Anyway, the second part should actually be replaced by sum(list), which is where the O(n) comes from. The formula reveals the idea behind it: for n sequential integers, the sum should be (n+1)*n/2. If there is a missing number, the sum would be equal to the sum of (n+1) sequential integers minus the missing number.
Thanks for pointing out that I had kept some of the middle steps in my head.
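For example (my own check): with the list 1, 2, 3, 5, 6 the full run 1..6 sums to 6*7/2 = 21, the list sums to 17, and 21 - 17 = 4, the missing number.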
Well done Ants Aasma! I thought about the answer for about 15 minutes and independently came up with an answer in a similar vein of thinking to yours:
#define SWAP(x,y) { numerictype_t tmp = x; x = y; y = tmp; }

int minNonNegativeNotInArr (numerictype_t * a, size_t n) {
    int m = n;
    for (int i = 0; i < m;) {
        if (a[i] >= m || a[i] < i || a[i] == a[a[i]]) {
            m--;
            SWAP (a[i], a[m]);
            continue;
        }
        if (a[i] > i) {
            SWAP (a[i], a[a[i]]);
            continue;
        }
        i++;
    }
    return m;
}
m represents "the current maximum possible output given what I know about the first i inputs and assuming nothing else about the values until the entry at m-1".
This value of m will be returned only if (a[i], ..., a[m-1]) is a permutation of the values (i, ..., m-1). Thus if a[i] >= m, or if a[i] < i, or if a[i] == a[a[i]], we know that m is the wrong output and must be at least one element lower. So by decrementing m and swapping a[i] with a[m] we can recurse.
If this is not true but a[i] > i then knowing that a[i] != a[a[i]] we know that swapping a[i] with a[a[i]] will increase the number of elements in their own place.
Otherwise a[i] must be equal to i in which case we can increment i knowing that all the values of up to and including this index are equal to their index.
The proof that this cannot enter an infinite loop is left as an exercise to the reader. :)
The Dafny fragment from Ants' answer shows why the in-place algorithm may fail. The requires pre-condition describes that the values of each item must not go beyond the bounds of the array.
method AntsAasma(A: array<int>) returns (M: int)
    requires A != null && forall N :: 0 <= N < A.Length ==> 0 <= A[N] < A.Length;
    modifies A;
{
    // Pass 1, move every value to the position of its value
    var N := A.Length;
    var cursor := 0;
    while (cursor < N)
    {
        var target := A[cursor];
        while (0 <= target < N && target != A[target])
        {
            var new_target := A[target];
            A[target] := target;
            target := new_target;
        }
        cursor := cursor + 1;
    }
    // Pass 2, find first location where the index doesn't match the value
    cursor := 0;
    while (cursor < N)
    {
        if (A[cursor] != cursor)
        {
            return cursor;
        }
        cursor := cursor + 1;
    }
    return N;
}
Paste the code into the validator with and without the forall ... clause to see the verification error. The second error is a result of the verifier not being able to establish a termination condition for the Pass 1 loop. Proving this is left to someone who understands the tool better.
Here's an answer in Java that does not modify the input and uses O(N) time and N bits plus a small constant overhead of memory (where N is the size of the list):
int smallestMissingValue(List<Integer> values) {
    BitSet bitset = new BitSet(values.size() + 1);
    for (int i : values) {
        if (i >= 0 && i <= values.size()) {
            bitset.set(i);
        }
    }
    return bitset.nextClearBit(0);
}
def solution(A):
    index = 0
    target = []
    A = [x for x in A if x >= 0]
    if len(A) == 0:
        return 1
    maxi = max(A)
    if maxi <= len(A):
        maxi = len(A)
    target = ['X' for x in range(maxi + 1)]
    for number in A:
        target[number] = number
    count = 1
    while count < maxi + 1:
        if target[count] == 'X':
            return count
        count += 1
    return target[count - 1] + 1
Got 100% for the above solution.
1) Filter negatives and zero
2) Sort / distinct
3) Visit array
Complexity: O(N) or O(N * log(N))
Using Java 8:
public int solution(int[] A) {
    int result = 1;
    boolean found = false;
    A = Arrays.stream(A).filter(x -> x > 0).sorted().distinct().toArray();
    //System.out.println(Arrays.toString(A));
    for (int i = 0; i < A.length; i++) {
        result = i + 1;
        if (result != A[i]) {
            found = true;
            break;
        }
    }
    if (!found && result == A.length) {
        // result is larger than the max element in the array
        result++;
    }
    return result;
}
An unordered_set can be used to store all the positive numbers; then we can iterate from 1 up to the size of the set and return the first number that does not occur.
int firstMissingPositive(vector<int>& nums) {
    unordered_set<int> fre;
    // storing each positive number in a hash
    for (int i = 0; i < nums.size(); i += 1)
    {
        if (nums[i] > 0)
            fre.insert(nums[i]);
    }
    int i = 1;
    // iterating from 1 to the size of the set and checking
    // for the occurrence of 'i'
    for (auto it = fre.begin(); it != fre.end(); ++it)
    {
        if (fre.find(i) == fre.end())
            return i;
        i += 1;
    }
    return i;
}
Solution in basic JavaScript:
var a = [1, 3, 6, 4, 1, 2];

function findSmallest(a) {
    var m = 0;
    for (var i = 1; i <= a.length; i++) {
        var j = 0;
        m = 1;
        while (j < a.length) {
            if (i === a[j]) {
                m++;
            }
            j++;
        }
        if (m === 1) {
            return i;
        }
    }
}

console.log(findSmallest(a));
Hope this helps someone.
With Python it is not the most efficient, but it is correct:
#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import datetime

# write your code in Python 3.6
def solution(A):
    MIN = 0
    MAX = 1000000
    possible_results = range(MIN, MAX)
    for i in possible_results:
        next_value = (i + 1)
        if next_value not in A:
            return next_value
    return 1

test_case_0 = [2, 2, 2]
test_case_1 = [1, 3, 44, 55, 6, 0, 3, 8]
test_case_2 = [-1, -22]
test_case_3 = [x for x in range(-10000, 10000)]
test_case_4 = [x for x in range(0, 100)] + [x for x in range(102, 200)]
test_case_5 = [4, 5, 6]

print("---")
a = datetime.datetime.now()
print(solution(test_case_0))
print(solution(test_case_1))
print(solution(test_case_2))
print(solution(test_case_3))
print(solution(test_case_4))
print(solution(test_case_5))
def solution(A):
    A.sort()
    j = 1
    for i, elem in enumerate(A):
        if j < elem:
            break
        elif j == elem:
            j += 1
            continue
        else:
            continue
    return j
this can help:
0- A is [5, 3, 2, 7];
1- Define B with Length = A.Length; (O(1))
2- Initialize B's cells with 1; (O(n))
3- For each item in A:
   if (item < B.Length) then B[item] = -1 (O(n))
4- The answer is the smallest index in B such that B[index] != -1 (O(n))
