Complexity of famous iterative task - time

We have an array of size n with n integers in it. The task is, for each element, to find the first larger element that comes after it, and if there is none return -1.
For example, for the input 7 2 4 6 16 the output is 16, 4, 6, 16, -1.
What's the average complexity of the naive double for-loop solution?
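For reference, here is a minimal Python sketch of that naive double-loop solution (my own illustration; function and variable names are hypothetical):

def first_larger_after(arr):
    """For each element, scan to the right for the first strictly larger value (-1 if none)."""
    result = []
    for i in range(len(arr)):
        found = -1
        for j in range(i + 1, len(arr)):   # inner scan is what makes the worst case quadratic
            if arr[j] > arr[i]:
                found = arr[j]
                break                      # stop at the first larger element
        result.append(found)
    return result

print(first_larger_after([7, 2, 4, 6, 16]))   # [16, 4, 6, 16, -1]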

Related

find 4th smallest element in linear time

So I was given an exercise about 2 months ago that says the following:
Given n (n>=4) distinct elements, design a divide & conquer algorithm to compute the 4th smallest element. Your algorithm should run in linear time in the worst case.
I had an extremely hard time with this problem, and could only find relevant algorithms that run in O(n*k) in the worst case. After several weeks of trying, we managed, with the help of our teacher, to "solve" this problem. The final algorithm is as follows:
Rules: the input size must be a power of two (2^k).
(1): Divide the input into two halves of size n/2: one left array, one right array.
(2): If input size == 4, sort the arrays using merge sort.
(2.1) Merge left array with right array into a new result array with length 4.
(2.2) Return element at index [4-1]
(3): Repeat step 1
This is solved recursively and our base case is at step 2. Step 2.2 means that for all
of our recursive calls that we did, we will get a final result array of length 4, and at that
point, we can just return the element at index [4-1].
With this algorithm, my teacher claims that this runs in linear time. My problem with that statement is that we are dividing the input until we reach sub-arrays with an input size of 4, and then each of those is sorted. So for an input size of 8, we would sort 2 sub-arrays of length 4, since 8/4 = 2. How is this in any case linear time? We are still sorting the whole input, just in blocks, aren't we? This really does not make sense to me. Does it matter whether we sort the whole input as it is, or divide it into sub-arrays of size 4 and sort those? Won't it still be a worst case of O(n*log(n))?
Would appreciate some explanation of this!
To make it easier to prove that the algorithm runs in linear time, let's modify it a bit (we will only change the order of dividing and merging blocks, nothing more):
(1): Divide input into n/4 blocks, each has size 4.
(2): Until there is more than one block, repeat:
Merge each pair of adjacent blocks into one block of size 4.
(For example, if we have 4 blocks, we will split them in 2 pairs -
first pair contains first and second blocks,
second pair contains third and fourth blocks.
After merging we will have 2 blocks -
the first one contains the 4 smallest elements from blocks 1 and 2,
the second one contains the 4 smallest elements from blocks 3 and 4).
(3): The answer is the last element of that one block left.
Proof: It's a fact that an array of constant length (in your case, 4) can be sorted in constant time. Let k = log2(n), so n = 2^k. Loop (2) runs k-2 iterations (on each iteration the count of elements left is halved, until 4 elements are left).
Before the i-th iteration (0 <= i < k-2) there are 2^(k-i) elements left, so there are 2^(k-i-2) blocks and we will merge 2^(k-i-3) pairs of blocks. Let's find how many pairs we will merge over all iterations. The count of merges equals
mergeOperationsCount = 2^(k-3) + 2^(k-4) + .... + 2^0 =
= 2^(k-3) * (1 + 1/2 + 1/4 + 1/8 + .....) < 2^(k-3) * 2 = 2^(k-2) = O(2^k) = O(n)
Since we can merge each pair in constant time (because they have constant size), and the only operation we make is merging pairs, the algorithm runs in O(n).
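For concreteness, here is a rough Python sketch of this modified block-merging procedure (my own illustration; it assumes, per the original rules, that the input length is a power of two and at least 4):

def fourth_smallest(arr):
    # assumes len(arr) is a power of two and at least 4
    # (1) split into blocks of 4 and sort each block; constant-size sorts are O(1) each
    blocks = [sorted(arr[i:i + 4]) for i in range(0, len(arr), 4)]
    # (2) repeatedly merge adjacent pairs of blocks, keeping only the 4 smallest of each pair
    while len(blocks) > 1:
        merged = []
        for left, right in zip(blocks[0::2], blocks[1::2]):
            merged.append(sorted(left + right)[:4])   # constant work: both blocks have size 4
        blocks = merged
    # (3) the 4th smallest element overall is the last element of the remaining block
    return blocks[0][3]

print(fourth_smallest([9, 1, 7, 3, 8, 2, 6, 5]))   # 5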
And after this proof, I want to note that there is another trivial linear algorithm, but it is not divide-and-conquer.

Efficiently generate primes in Python and calculate complexity

Generating prime numbers from 1 to n in Python 3. How can I improve efficiency, and what is the complexity?
Input: A number, max (a large number)
Output: All the primes from 1 to max
Output is in the form of a list and will be [2,3,5,7,11,13,.......]
The code attempts to perform this task in an efficient way (least time complexity).
from math import sqrt
max = (10**6)*3
print("\nThis code prints all primes till: " , max , "\n")
list_primes=[2]

def am_i_prime(num):
    """
    Input/Parameter the function takes: An integer number
    Output: returns True, if the number is prime and False if not
    """
    decision=True
    i=0
    while(list_primes[i] <= sqrt(num)): #Till sqrt(n) to save comparisons
        if(num%list_primes[i]==0):
            decision=False
            break
            #break is inserted so that we get out of comparisons faster
            #Eg. for 1568, we should break from the loop as soon as we know that 1568%2==0
        i+=1
    return decision

for i in range(3,max,2): #starts from 3 as our list contains 2 from the beginning
    if am_i_prime(i)==True:
        list_primes.append(i) #if a number is found to be prime, we append it to our list of primes
print(list_primes)
How can I make this faster? Where can I improve?
What is the time complexity of this code? Which steps are inefficient?
In what ways is the Sieve of Eratosthenes more efficient than this?
How it works for the first few iterations:
We have a list_primes which contains prime numbers. It initially contains only 2.
We go to the next number, 3. Is 3 divisible by any of the numbers in list_primes? No! We append 3 to list_primes. Right now, list_primes=[2,3]
We go to the next number 4. Is 4 divisible by any of the numbers in list_primes? Yes (4 is divisible by 2). So, we don't do anything. Right now list_primes=[2,3]
We go to the next number, 5. Is 5 divisible by any of the numbers in list_primes? No! We append 5 to list_primes. Right now, list_primes=[2,3,5]
We go to the next number, 6. Is 6 divisible by any of the numbers in list_primes? Yes (6 is divisible by 2 and also divisible by 3). So, we don't do anything. Right now list_primes=[2,3,5]
And so on...
Interestingly, it takes a rather deep mathematical theorem to prove that your algorithm is correct at all. The theorem is: "For every n ≥ 2, there is a prime number between n and n^2". I know it has been proven, and much stricter bounds are proven, but I must admit I wouldn't know how to prove it myself. And if this theorem is not correct, then the loop in am_i_prime can go past the bounds of the array.
The number of primes ≤ k is O (k / log k) - this is again a very deep mathematical theorem. Again, beyond me to prove.
But anyway, there are about n / log n primes up to n, and for these primes the loop will iterate through all primes up to n^(1/2), and there are O (n^(1/2) / log n) of them.
So for the primes alone, the runtime is therefore O (n^1.5 / log^2 n), so that is a lower bound. With some effort it should be possible to prove that for all numbers, the runtime is asymptotically the same.
O (n^1.5 / log n) is obviously an upper bound, but experimentally the number of divisions to find all primes ≤ n seems to be ≤ 2 n^1.5 / log^2 n, where log is the natural logarithm.
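If you want to check that estimate empirically, here is a small instrumentation sketch (my own, not part of the answer) that counts the trial divisions performed by the same strategy:

from math import log, sqrt

def count_divisions(n):
    """Count the modulo tests made while finding all primes below n by trial division."""
    primes, divisions = [2], 0
    for candidate in range(3, n, 2):
        i = 0
        while primes[i] <= sqrt(candidate):
            divisions += 1                      # one division (modulo test) per loop iteration
            if candidate % primes[i] == 0:
                break
            i += 1
        else:                                   # no divisor found: candidate is prime
            primes.append(candidate)
    return divisions

n = 10 ** 5
print(count_divisions(n), 2 * n ** 1.5 / log(n) ** 2)   # measured count vs. the estimate above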
The following rearrangement and optimization of your code will reach your maximum in nearly 1/2 the time of your original code. It combines your top level loop and predicate function into a single function to eliminate overhead and manages squares (square roots) more efficiently:
def get_primes(maximum):
    primes = []
    if maximum > 1:
        primes.append(2)
        squares = [4]
        for number in range(3, maximum, 2):
            i = 0
            while squares[i] <= number:
                if number % primes[i] == 0:
                    break
                i += 1
            else:  # no break
                primes.append(number)
                squares.append(number * number)
    return primes

maximum = 10 ** 6 * 3
print(get_primes(maximum))
However, a sieve-based algorithm will easily beat this (as it avoids division and/or multiplication). Your code has a bug: setting max = 1 will create the list [2] instead of the correct answer of an empty list. Always test both ends of your limits.
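For comparison, here is a minimal sieve of Eratosthenes sketch (my own illustration, not part of the original post); it marks composites instead of dividing:

def sieve_primes(maximum):
    """Return all primes up to and including maximum, using the sieve of Eratosthenes."""
    if maximum < 2:
        return []
    is_prime = [True] * (maximum + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(maximum ** 0.5) + 1):
        if is_prime[p]:
            # start crossing out at p*p; smaller multiples were marked by smaller primes
            for multiple in range(p * p, maximum + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve_primes(10 ** 6 * 3))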
O(N**2)
Approximately speaking, the first call to am_i_prime does 1 comparison, the second does 2, ..., so the total count is 1 + 2 + ... + N, which is (N * (N+1)) / 2, which has order N-squared.

Difference between O(m+n) and O(mn)?

I was trying to find the complexities of an algorithm via different approaches. Mathematically I came across one O(m+n) and another O(mn) approach. However I am unable to grasp or say visualize this. It's not like I look at them and get the "Ahh! That's what's going on" feeling! Can someone explain this using their own examples or any other tool?
O(m+n) example:
for (int i = 0; i < m; i++)
    //code

for (int j = 0; j < n; j++)
    //code

m iterations of code happen. Then n iterations of code happen.
O(mn) example:
for (int i = 0; i < m; i++)
    for (int j = 0; j < n; j++)
        //code
For every iteration of m, we have n iterations of code. Imagine iterating over a non-square 2D array.
m and n do not necessarily equal the same value. If they did equal the same value, then for O(m+n):
O(m+n) => O(m+m) => O(2m) => O(m)
I'd recommend looking at this question/answer in order to understand that last transition.
And for O(mn):
O(mn) => O(mm) => O(m^2)
My recommendation for finding intuition is thought experiments as follows:
First, realize that m and n are two different measurements of the input. They might be the lengths of two input streams, the lengths of sides of a matrix, or the counts of two different attributes of the same data structure, such as edge and node count of the same graph, or any similar measures.
The intuition is that big-O expresses a bound on the true run time (or some other aspect such as comparison count or space needed) of an algorithm in terms of a simple function - call that R(m, n) - multiplied by some arbitrary constant. We ignore the constant factors and think of all algorithms bounded by the same R as a family by calling their run times O( R(m, n) ).
Consequently, big O(m + n) says that the true run time is bounded by some function R(m,n) = C(m + n) for suitably big m and n. For the graph example, this says that the actual run time of the algorithm will be bounded by a multiple of the sum of the number of vertices and edges.
You can think of the bounding function as a graph in 3d with axes m, n, and R(m,n). Or you can think of charts:
R(m,n) = m + n
--------------
 m=   1   2   3   4
n=1   2   3   4   5
  2   3   4   5   6
  3   4   5   6   7
  4   5   6   7   8
For R(m,n) = mn, you have
R(m,n) = mn
--------------
 m=   1   2   3   4
n=1   1   2   3   4
  2   2   4   6   8
  3   3   6   9  12
  4   4   8  12  16
As a 3d graph, the first function is a plane and the second is a much faster-growing function at almost all points. This means that if m and n grow large enough, an O(mn) bound will ultimately be larger (corresponding to a potentially slower program) than an O(m+n) because the constants become insignificant.
For an example of the cost of rapid growth, suppose an O(m+n) algorithm has an extra constant factor of 3 in its runtime bound (making it potentially very slow on small inputs compared to both algorithms above):
R(m,n) = 3(m + n)
--------------
 m=   1   2   3   4
n=1   6   9  12  15
  2   9  12  15  18
  3  12  15  18  21
  4  15  18  21  24
So in the chart above the O(m + n) bound looks larger than the O(mn) one. But look at the case m = n = 100. There the bound on the O(m + n) algorithm is 3(m + n) = 600, while the O(mn) algorithm with the small constant has bound mn = 10000. Clearly you want the first if m and n are large.
#Anonymous raised a fine point on the initial version of this article, which confused big-O and big-Theta. Big-O only deals with bounds or upper limits on the quantity being measured. For example, this means that every O(n) algorithm is also O(n log n) and O(n^2). If the real run time is bounded by the slower-growing function, it is also bounded by all faster-growing ones.
Yet it is quite common for people to say "this algorithm is O(n)" while meaning that the bound is tight. That is, that Cn is an upper bound on the run time for some constant C and Dn is also a lower bound for some other constant D (and suitably large n). Such a tight bound is properly stated as Theta(n), not O(n). The run time of a Theta(R(m, n)) algorithm is (roughly speaking) proportional to R(m, n).
I'll add finally that there are many cases where you can't ignore constants. There exist lots of algorithms in the literature that are asymptotically "faster" than those in common use, but have constants so large that for practical problem sizes they are always too slow. Computational geometry has many examples. Radix 2 sort is another. It's Theta(n), but in practice a good quicksort (Theta(n log n) average case) will beat it on arrays of size up to at least 10^8 integers.
O(m+n) is much faster than O(mn); asymptotically, the gap is larger than any constant factor.
The O(m+n) algorithm could be one that iterates 2 sets and does a constant time (O(1)) operation on each element.
The O(mn) algorithm could be one that iterates the first set and does a linear search (O(n)) for the matching element in the second set.
The O(mn) algorithm is probably what professors would call The Naive Approach
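For instance, here is a small Python sketch (my own illustration) of the two approaches just described, finding the common elements of two lists:

def common_elements_naive(a, b):
    """O(mn): for each of the m elements of a, scan all n elements of b."""
    result = []
    for x in a:                     # m iterations
        for y in b:                 # up to n iterations each
            if x == y:
                result.append(x)
                break
    return result

def common_elements_fast(a, b):
    """O(m + n): one pass to build a set, one pass to probe it."""
    seen = set(b)                          # n iterations
    return [x for x in a if x in seen]     # m iterations, O(1) expected per lookup

print(common_elements_naive([1, 2, 3], [3, 4, 5]))   # [3]
print(common_elements_fast([1, 2, 3], [3, 4, 5]))    # [3]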

program that checks if any even number greater than 4 is a sum of two prime numbers

I have the following problem:
Given that every even number greater than 4 can be obtained as the sum of 2 prime numbers, I have to write an algorithm which checks it. The algorithm should take less time than O(n^2).
For example, there is a set of numbers from 6 to n. If we have the number 6, the answer is 6 = 3 + 3; for 22 it is 22 = 17 + 5, and so on.
My first idea:
S - set of n numbers
for i=1 to n {
    //removing odd numbers
    if (S[i] % 2 != 0)
        continue;
    result = false;
    for j=2 to S[i]-2 {
        if (j.isPrime) {              // prime test can be done in O(log^2(n))
            if ((S[i]-j).isPrime) {
                result = true;
                break;
            }
        } else {
            continue;
        }
    }
    if (result == false)
        break;
}
Since I use 2 for-loops, the total running time of this algorithm should be
O(n*n)*O(log^2(n)) = O(n^2*log^2(n)) which is not less than O(n^2).
Does anybody have an idea to reduce the running time to get the required time of less than O(n^2)?
If the set contains big numbers, I've got nothing.
If max(S) < n ^ 2 / log(n) then:
You should preprocess which numbers from the interval [1, max(S)] are primes.
For preprocessing you can use the sieve of Eratosthenes.
Then you are able to check whether a number is prime in O(1), and the complexity of your solution becomes O(N^2).
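Here is a rough Python sketch of that idea (my own illustration; helper names are hypothetical, and it assumes max(S) is small enough to sieve):

def sieve(limit):
    """is_prime[i] is True exactly when i is prime (sieve of Eratosthenes)."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return is_prime

def check_goldbach(S):
    """For every even number > 4 in S, look for a prime pair p + (s - p)."""
    is_prime = sieve(max(S))                       # preprocessing step
    for s in S:
        if s <= 4 or s % 2 != 0:
            continue
        # try each candidate p; every primality check is now an O(1) table lookup
        if not any(is_prime[p] and is_prime[s - p] for p in range(2, s // 2 + 1)):
            return False                           # would be a counterexample to the conjecture
    return True

print(check_goldbach(range(6, 1000)))   # True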
This is Goldbach's conjecture. Primality testing is known to be in P (polynomial time), but the break-even is ridiculously high - in practice, you will not be able to do this in anywhere near O(n^2).
If we assume you only need to deal with relatively small numbers, and can precompute the primes up to a certain limit, you still need to find candidate pairs. The prime counting function gives approximately n / ln(n) primes less than n. Subtracting a candidate prime p from n gives an odd number q. If you can look up the primality of q with a complexity of n·ln(n) or better - i.e., an O(1) lookup table for all odd numbers less than the limit - you can achieve O(n^2) or better.
You can run only until the square root of N; this is sufficient to determine whether a number is prime or not.
This will reduce your running time.
Also take a look at the following question - Program to find prime numbers

What would cause an algorithm to have O(log n) complexity?

My knowledge of big-O is limited, and when log terms show up in the equation it throws me off even more.
Can someone maybe explain to me in simple terms what a O(log n) algorithm is? Where does the logarithm come from?
This specifically came up when I was trying to solve this midterm practice question:
Let X(1..n) and Y(1..n) contain two lists of integers, each sorted in nondecreasing order. Give an O(log n)-time algorithm to find the median (or the nth smallest integer) of all 2n combined elements. For ex, X = (4, 5, 7, 8, 9) and Y = (3, 5, 8, 9, 10), then 7 is the median of the combined list (3, 4, 5, 5, 7, 8, 8, 9, 9, 10). [Hint: use concepts of binary search]
I have to agree that it's pretty weird the first time you see an O(log n) algorithm... where on earth does that logarithm come from? However, it turns out that there are several different ways that you can get a log term to show up in big-O notation. Here are a few:
Repeatedly dividing by a constant
Take any number n; say, 16. How many times can you divide n by two before you get a number less than or equal to one? For 16, we have that
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
Notice that this ends up taking four steps to complete. Interestingly, we also have that log2 16 = 4. Hmmm... what about 128?
128 / 2 = 64
64 / 2 = 32
32 / 2 = 16
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
This took seven steps, and log2 128 = 7. Is this a coincidence? Nope! There's a good reason for this. Suppose that we divide a number n by 2, i times. Then we get the number n / 2^i. If we want to solve for the value of i where this value is at most 1, we get
n / 2^i ≤ 1
n ≤ 2^i
log2 n ≤ i
In other words, if we pick an integer i such that i ≥ log2 n, then after dividing n in half i times we'll have a value that is at most 1. The smallest i for which this is guaranteed is roughly log2 n, so if we have an algorithm that divides by 2 until the number gets sufficiently small, then we can say that it terminates in O(log n) steps.
An important detail is that it doesn't matter what constant you're dividing n by (as long as it's greater than one); if you divide by the constant k, it will take logk n steps to reach 1. Thus any algorithm that repeatedly divides the input size by some fraction will need O(log n) iterations to terminate. Those iterations might take a lot of time and so the net runtime needn't be O(log n), but the number of steps will be logarithmic.
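A tiny Python sketch of that counting argument (my own illustration):

from math import log2

def halving_steps(n):
    """Count how many times n can be halved before it drops to 1 or below."""
    steps = 0
    while n > 1:
        n /= 2
        steps += 1
    return steps

for n in (16, 128, 1000):
    print(n, halving_steps(n), log2(n))   # the step count tracks log2(n)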
So where does this come up? One classic example is binary search, a fast algorithm for searching a sorted array for a value. The algorithm works like this:
If the array is empty, return that the element isn't present in the array.
Otherwise:
Look at the middle element of the array.
If it's equal to the element we're looking for, return success.
If it's greater than the element we're looking for:
Throw away the second half of the array.
Repeat
If it's less than the element we're looking for:
Throw away the first half of the array.
Repeat
For example, to search for 5 in the array
1 3 5 7 9 11 13
We'd first look at the middle element:
1 3 5 7 9 11 13
^
Since 7 > 5, and since the array is sorted, we know for a fact that the number 5 can't be in the back half of the array, so we can just discard it. This leaves
1 3 5
So now we look at the middle element here:
1 3 5
^
Since 3 < 5, we know that 5 can't appear in the first half of the array, so we can throw away the first half of the array to leave
5
Again we look at the middle of this array:
5
^
Since this is exactly the number we're looking for, we can report that 5 is indeed in the array.
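Here is a minimal iterative Python sketch of this procedure (my own illustration, assuming a sorted input list):

def binary_search(arr, target):
    """Return True if target occurs in the sorted list arr."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return True
        elif arr[mid] > target:
            hi = mid - 1        # discard the second half
        else:
            lo = mid + 1        # discard the first half
    return False

print(binary_search([1, 3, 5, 7, 9, 11, 13], 5))   # True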
So how efficient is this? Well, on each iteration we're throwing away at least half of the remaining array elements. The algorithm stops as soon as the array is empty or we find the value we want. In the worst case, the element isn't there, so we keep halving the size of the array until we run out of elements. How long does this take? Well, since we keep cutting the array in half over and over again, we will be done in at most O(log n) iterations, since we can't cut the array in half more than O(log n) times before we run out of array elements.
Algorithms following the general technique of divide-and-conquer (cutting the problem into pieces, solving those pieces, then putting the problem back together) tend to have logarithmic terms in them for this same reason - you can't keep cutting some object in half more than O(log n) times. You might want to look at merge sort as a great example of this.
Processing values one digit at a time
How many digits are in the base-10 number n? Well, if n has k digits, then its leading digit contributes some multiple of 10^(k-1), and the largest k-digit number is 999...9 (k nines), which equals 10^k - 1. Consequently, if we know that n has k digits in it, then we know that 10^(k-1) ≤ n ≤ 10^k - 1. If we want to solve for k in terms of n, we get
n ≤ 10^k - 1
n + 1 ≤ 10^k
log10 (n + 1) ≤ k
and, from the lower bound, k ≤ log10(n) + 1. From this we get that k is approximately the base-10 logarithm of n. In other words, the number of digits in n is O(log n).
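A quick numeric check of that relationship (my own illustration):

from math import floor, log10

for n in (7, 245436, 10 ** 9 + 7):
    print(n, len(str(n)), floor(log10(n)) + 1)   # digit count equals floor(log10(n)) + 1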
For example, let's think about the complexity of adding two large numbers that are too big to fit into a machine word. Suppose that we have those numbers represented in base 10, and we'll call the numbers m and n. One way to add them is through the grade-school method - write the numbers out one digit at a time, then work from the right to the left. For example, to add 1337 and 2065, we'd start by writing the numbers out as
      1 3 3 7
    + 2 0 6 5
    ==============
We add the last digit and carry the 1:
          1
      1 3 3 7
    + 2 0 6 5
    ==============
            2
Then we add the second-to-last ("penultimate") digit and carry the 1:
        1 1
      1 3 3 7
    + 2 0 6 5
    ==============
          0 2
Next, we add the third-to-last ("antepenultimate") digit:
        1 1
      1 3 3 7
    + 2 0 6 5
    ==============
        4 0 2
Finally, we add the fourth-to-last ("preantepenultimate"... I love English) digit:
        1 1
      1 3 3 7
    + 2 0 6 5
    ==============
      3 4 0 2
Now, how much work did we do? We do a total of O(1) work per digit (that is, a constant amount of work), and there are O(max{log n, log m}) total digits that need to be processed. This gives a total of O(max{log n, log m}) complexity, because we need to visit each digit in the two numbers.
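For illustration, here is a short Python sketch (my own, not part of the original answer) of the grade-school procedure on lists of digits:

def add_digit_lists(a, b):
    """Add two numbers given as lists of base-10 digits (most significant digit first)."""
    result, carry = [], 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        total = carry
        if i >= 0:
            total += a[i]
            i -= 1
        if j >= 0:
            total += b[j]
            j -= 1
        result.append(total % 10)    # O(1) work per digit position
        carry = total // 10
    return result[::-1]

print(add_digit_lists([1, 3, 3, 7], [2, 0, 6, 5]))   # [3, 4, 0, 2]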
Many algorithms get an O(log n) term in them from working one digit at a time in some base. A classic example is radix sort, which sorts integers one digit at a time. There are many flavors of radix sort, but they usually run in time O(n log U), where U is the largest possible integer that's being sorted. The reason for this is that each pass of the sort takes O(n) time, and there are a total of O(log U) iterations required to process each of the O(log U) digits of the largest number being sorted. Many advanced algorithms, such as Gabow's shortest-paths algorithm or the scaling version of the Ford-Fulkerson max-flow algorithm, have a log term in their complexity because they work one digit at a time.
As to your second question about how you solve that problem, you may want to look at this related question which explores a more advanced application. Given the general structure of problems that are described here, you now can have a better sense of how to think about problems when you know there's a log term in the result, so I would advise against looking at the answer until you've given it some thought.
When we talk about big-Oh descriptions, we are usually talking about the time it takes to solve problems of a given size. And usually, for simple problems, that size is just characterized by the number of input elements, and that's usually called n, or N. (Obviously that's not always true-- problems with graphs are often characterized in numbers of vertices, V, and number of edges, E; but for now, we'll talk about lists of objects, with N objects in the lists.)
We say that a problem "is big-Oh of (some function of N)" if and only if:
For all N > some arbitrary N_0, there is some constant c, such that the runtime of the algorithm is less than that constant c times (some function of N.)
In other words, don't think about small problems where the "constant overhead" of setting up the problem matters, think about big problems. And when thinking about big problems, big-Oh of (some function of N) means that the run-time is still always less than some constant times that function. Always.
In short, that function is an upper bound, up to a constant factor.
So, "big-Oh of log(n)" means the same thing that I said above, except "some function of N" is replaced with "log(n)."
So, your problem tells you to think about binary search, so let's think about that. Let's assume you have, say, a list of N elements that are sorted in increasing order. You want to find out if some given number exists in that list. One way to do that which is not a binary search is to just scan each element of the list and see if it's your target number. You might get lucky and find it on the first try. But in the worst case, you'll check N different times. This is not binary search, and it is not big-Oh of log(N) because there's no way to force it into the criteria we sketched out above.
You can pick that arbitrary constant to be c=10, and if your list has N=32 elements, you're fine: 10*log(32) = 50, which is greater than the runtime of 32. But if N=64, 10*log(64) = 60, which is less than the runtime of 64. You can pick c=100, or 1000, or a gazillion, and you'll still be able to find some N that violates that requirement. In other words, there is no N_0.
If we do a binary search, though, we pick the middle element, and make a comparison. Then we throw out half the numbers, and do it again, and again, and so on. If your N=32, you can only do that about 5 times, which is log(32). If your N=64, you can only do this about 6 times, etc. Now you can pick that arbitrary constant c, in such a way that the requirement is always met for large values of N.
With all that background, what O(log(N)) usually means is that you have some way to do a simple thing, which cuts your problem size in half. Just like the binary search is doing above. Once you cut the problem in half, you can cut it in half again, and again, and again. But, critically, what you can't do is some preprocessing step that would take longer than that O(log(N)) time. So for instance, you can't shuffle your two lists into one big list, unless you can find a way to do that in O(log(N)) time, too.
(NOTE: Nearly always, Log(N) means log-base-two, which is what I assume above.)
In the following solution, every recursive call is made on sub-arrays of X and Y of half the given sizes;
all other lines take constant time.
The recurrence is T(2n) = T(2n/2) + c = T(n) + c, which solves to O(lg(2n)) = O(lg n).
You start with MEDIAN(X, 1, n, Y, 1, n).
MEDIAN(X, p, r, Y, i, k)
    if X[r] < Y[i]
        return X[r]
    if Y[k] < X[p]
        return Y[k]
    q = floor((p+r)/2)
    j = floor((i+k)/2)
    if r-p+1 is even
        if X[q+1] > Y[j] and Y[j+1] > X[q]
            if X[q] > Y[j]
                return X[q]
            else
                return Y[j]
        if X[q+1] < Y[j-1]
            return MEDIAN(X, q+1, r, Y, i, j)
        else
            return MEDIAN(X, p, q, Y, j+1, k)
    else
        if X[q] > Y[j] and Y[j+1] > X[q-1]
            return Y[j]
        if Y[j] > X[q] and X[q+1] > Y[j-1]
            return X[q]
        if X[q+1] < Y[j-1]
            return MEDIAN(X, q, r, Y, i, j)
        else
            return MEDIAN(X, p, q, Y, j, k)
The Log term pops up very often in algorithm complexity analysis. Here are some explanations:
1. How do you represent a number?
Let's take the number X = 245436. This notation of “245436” has implicit information in it. Making that information explicit:
X = 2 * 10 ^ 5 + 4 * 10 ^ 4 + 5 * 10 ^ 3 + 4 * 10 ^ 2 + 3 * 10 ^ 1 + 6 * 10 ^ 0
Which is the decimal expansion of the number. So, the minimum amount of information we need to represent this number is 6 digits. This is no coincidence, as any number less than 10^d can be represented in d digits.
So how many digits are required to represent X? That's equal to the largest exponent of 10 in X plus 1.
==> 10 ^ d > X
==> log (10 ^ d) > log(X)
==> d * log(10) > log(X)
==> d > log(X) // And log appears again...
==> d = floor(log(X)) + 1
Also note that this is the most concise way to denote the number in this range. Any reduction will lead to information loss, as a missing digit can be mapped to 10 other numbers. For example: 12* can be mapped to 120, 121, 122, …, 129.
2. How do you search for a number in (0, N - 1)?
Taking N = 10^d, we use our most important observation:
The minimum amount of information needed to uniquely identify a value in the range 0 to N - 1 is log(N) digits.
This implies that, when asked to search for a number on the integer line, ranging from 0 to N - 1, we need at least log(N) tries to find it. Why? Any search algorithm will need to choose one digit after another in its search for the number.
The minimum number of digits it needs to choose is log(N). Hence the minimum number of operations taken to search for a number in a space of size N is log(N).
Can you guess the order complexities of binary search, ternary search or deca search? It's O(log(N))!
3. How do you sort a set of numbers?
When asked to sort a set of numbers A into an array B, here’s what it looks like ->
Permute Elements
Every element in the original array has to be mapped to its corresponding index in the sorted array. So, for the first element, we have n positions. To correctly find the corresponding index in this range from 0 to n - 1, we need... log(n) operations.
The next element needs log(n-1) operations, the next log(n-2) and so on. The total comes to be:
==> log(n) + log(n - 1) + log(n - 2) + … + log(1)
Using log(a) + log(b) = log(a * b), this equals
==> log(n!)
This can be approximated to n*log(n) - n (Stirling's approximation), which is O(n*log(n))!
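A quick numeric check of that approximation, using natural logarithms (my own illustration):

from math import log

n = 1000
log_factorial = sum(log(k) for k in range(1, n + 1))   # log(n!)
stirling = n * log(n) - n                              # n*log(n) - n
print(log_factorial, stirling)   # both are about 5.9e3; the gap grows only like O(log n)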
Hence we conclude that no comparison-based sorting algorithm can do better than O(n*log(n)). And some algorithms having this complexity are the popular Merge Sort and Heap Sort!
These are some of the reasons why we see log(n) pop up so often in the complexity analysis of algorithms. The same can be extended to binary numbers. I made a video on that here.
Why does log(n) appear so often during algorithm complexity analysis?
Cheers!
We call the time complexity O(log n) when the solution is based on iterations over n in which the work done in each iteration is a fraction of the work done in the previous iteration, as the algorithm works towards the solution.
Can't comment yet... necro it is!
Avi Cohen's answer is incorrect, try:
X = 1 3 4 5 8
Y = 2 5 6 7 9
None of the conditions are true, so MEDIAN(X, p, q, Y, j, k) will cut both the fives. These are nondecreasing sequences, not all values are distinct.
Also try this even-length example with distinct values:
X = 1 3 4 7
Y = 2 5 6 8
Now MEDIAN(X, p, q, Y, j+1, k) will cut the four.
Instead I offer this algorithm, call it with MEDIAN(1,n,1,n):
MEDIAN(startx, endx, starty, endy){
    if (startx == endx)
        return min(X[startx], Y[starty])
    odd = (startx + endx) % 2      //0 if even, 1 if odd
    m = (startx + endx - odd)/2
    n = (starty + endy - odd)/2
    x = X[m]
    y = Y[n]
    if x == y
        //then there are n-2{+1} total elements smaller than or equal to both x and y
        //so this value is the nth smallest
        //we have found the median.
        return x
    if (x < y)
        //if we remove some numbers smaller than the median,
        //and remove the same amount of numbers bigger than the median,
        //the median will not change
        //we know the elements before x are smaller than the median,
        //and the elements after y are bigger than the median,
        //so we discard these and continue the search:
        return MEDIAN(m, endx, starty, n + 1 - odd)
    else // x > y
        return MEDIAN(startx, m + 1 - odd, n, endy)
}
