Time Complexity of the following code fragment?

I calculated it to be O(N^2), but my instructor marked it incorrect in the exam. The correct answer was O(1). Can anyone help me understand how the time complexity comes out to be O(1)?

The outer loop will run 2N times (it starts at int j = 2 * N and decrements by 1 each time).
Since N does not change and i is reset to N on every outer iteration (int i = N), the inner loop always runs about log N (base 2) times. (Notice the way i changes: i = i div 2.)
Therefore, the complexity is O(N log N).
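For reference, the fragment being described presumably looks something like the following Python sketch (a reconstruction based purely on the description above, not the exam's actual code); counting the operations shows the Θ(N log N) growth directly:

# Hypothetical reconstruction of the fragment described above, with an operation counter.
def count_operations(N):
    ops = 0
    j = 2 * N
    while j > 0:          # outer loop: 2N iterations (j decremented by 1)
        i = N
        while i > 0:      # inner loop: i is halved each time, ~log2(N) iterations
            ops += 1
            i //= 2
        j -= 1
    return ops

print(count_operations(1024))   # 22528 = 2*1024*11, growing like N log N, clearly not O(1)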

Question: What happens when you repeatedly halve the input (or search space)? (Like in binary search.)
Answer: Well, you get log(N) complexity. (Reference : The Algorithm Design Manual by Steven S. Skiena)
See the inner loop in your algorithm: i = i div 2 makes it a log(N)-complexity loop. Therefore the overall complexity will be N log(N).
As a rule of thumb: whenever you divide your input (search space) by 2, 3, 4, or any other constant greater than 1, you get log(N) complexity.
P.S.: the complexity of your algorithm is nowhere near O(1).

Related

What is big O of this pseudocode?

For this pseudocode, what would be big O of this:
BinarySum(A, i, n) {
    if n = 1 then
        return A[i] // base case
    return BinarySum(A, i, n/2) + BinarySum(A, i + [n/2], [n/2])
}
This is really confusing because I think it is O(log n), since you are dividing by 2 each time you run the function, but some people I spoke to think it is O(n) and not O(log n) because the algorithm doesn't halve the problem by picking and choosing one half of the array. Which is it?
TL;DR
The runtime is θ(n).
How to determine runtime
The recurrence relation for the algorithm is
T(n) = 2T(n/2) + O(1)
because every call makes two recursive calls, each on half of the array, plus a constant amount of work of its own. By the way, this is the same recurrence relation that describes a binary tree traversal.
You can use the Master-theorem to determine the runtime here since a >= 1 and b > 1 and the recurrence relation has the form
T(1) = 1; T(n) = aT(n/b) + f(n)
This is case one of the theorem, which requires f(n) = O(n^(logb(a) − ε)) for some ε > 0. Here a = 2, b = 2 and f(n) = O(1), so logb(a) = log2(2) = 1 and f(n) = O(1) = O(n^(1 − ε)) with ε = 1, which satisfies the condition.
When case one applies, the Master theorem says that the runtime of the algorithm is θ(n^(logb(a))) = θ(n¹) = θ(n).
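As a quick sanity check, here is a small Python sketch (assuming n is a power of two, which matches the simplified recurrence) that evaluates T(n) = 2T(n/2) + 1 numerically; the result is always 2n − 1, i.e. θ(n):

# Numeric check of T(n) = 2*T(n/2) + 1 with T(1) = 1 (n assumed to be a power of two).
def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + 1

for n in [1, 2, 4, 8, 1024]:
    print(n, T(n))   # prints 1, 3, 7, 15, 2047: always 2n - 1, i.e. Theta(n)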
First, let me say that I like the other answer that links the recurrence with the traversal of binary trees. For a balanced binary tree, this is indeed the recurrence, and so the complexity must necessarily be the same as a depth-first traversal, which we all know is O(n). I like this analogy because it clearly says that the result doesn't just apply to the recurrence T(n) = 2T(n/2) + O(1) but to anything where you split the input into chunks of sizes m[0], m[1], ... that sum to size n and do T(n) = T(m[0]) + T(m[1]) + T(m[2]) + ... + O(1). You don't have to split the input into two equally sized parts to get O(n); you just have to spend constant time and then recurse on disjoint parts of the input.
Using the Master theorem, I feel, is a bit of overkill for this one. It is a big gun, and it gives us the correct answer, but if you are like me, it doesn't give you much intuition about why the answer is what it is. With this particular recurrence, we can get the correct answer and an intuitive understanding of it with a few drawings.
We can break down what happens at each level of the recursion and maybe draw it like this:
We have the work that we are handling on the left, i.e., the size of the input and the actual time we spend at each function call on the right. We have input size n at the first level, but we only spend one "computation unit" of time on it. We have two chunks of size n/2 that we spend two units of time on at the next level. At the next level, we have four chunks, each of size n/4, and we spend four units on them. This continues until our chunks have size one, and we have n of those, that we spend n units of time on.
The total time we spend is the area of the red blocks on the right. The depth of the recursion is O(log n) but we don't need to worry about that to analyse the time. We will just flip the "time" bit and look at it this way:
The total time we spend must be n for the original bottom layer (now the top layer), n/2 for the next layer, n/4 for the next, and so on. Factor out n, and all we have to worry about is the sum 1 + 1/2 + 1/4 + 1/8 + .... The sum ends at some point, of course; we only have O(log n) terms. But we don't have to worry about that at all, because even if it continued forever it wouldn't sum to more than two.
You might recognise this as a geometric series. Such series have the form sum of x^i for i = 0 to infinity, and when |x| < 1 they converge to 1/(1 − x). Proving this takes a little calculus, but when x = 1/2, as we have here, it is easy to draw the series and get the result from there.
Take the size n layer and then start putting the remaining layers next to each other under it. First, you put down the n/2 layer. It takes half of the space. Then you put the n/4 layer next to it, and it takes half of the remaining space. The n/8 layer will take half of the remaining space, the n/16 layer will take half of the remaining space, and it will continue like this as if it were a reenactment of Zeno's paradox.
If you keep taking half of what is left forever, you can never get more than you started with, so adding up all the layers except the first cannot give you more time than you spent on the very first layer. The total time you would spend if you kept recursing forever (and time worked like real numbers) would be linear. Since you stop before forever, it is still going to be linear. Infinity gives us O(n), so recursion depth O(log n) will as well.
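For the sceptical, here is a quick numeric check (a small Python sketch) that the geometric sum behind this argument really does stay below 2:

# Partial sums of the geometric series with x = 1/2 stay below 1/(1 - x) = 2.
partial = 0.0
for i in range(60):
    partial += 0.5 ** i
print(partial)   # ~2.0: the exact sum of the first 60 terms is 2 - 2**-59, never more than 2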
Of course, getting O(n) from observing that T(n) < T'(n) = O(n), where T'(n) continues subdividing forever, only tells us that T(n) = O(n) and not that T(n) = Omega(n); for that you have to show that you don't spend substantially less time than n. But considering that the largest layer alone has size n, it should be obvious that the recursion also runs in Omega(n).
If you don't cut the data size in half every recursion, but cut the data into some other chunks that add up to n, you still get O(n) of course--think of traversing a tree--but it gets a hell of a lot harder to draw, and I've never managed to come up with a good illustration of that. For splitting the data in half, though, the drawing is simple and the conclusions we draw from it give us the correct running time.
The same drawing also tells us that the recurrence T(n) = T(n/2) + O(n) is in O(n). Here we don't have to flip the first drawing, because we start out with the largest layer on top. We spend time n, then n/2, then n/4, and so on, and we end up spending 2n time units in total. Because 2 isn't special here, we have T(n) = T(f·n) + O(n) = O(n) for any fraction 0 ≤ f < 1; it is just a lot harder to draw when f ≠ 1/2.
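The same claim can be checked numerically as well. A Python sketch for T(n) = T(n/2) + n (with T(1) = 1 assumed as the base case) shows the total always stays below 2n:

# Evaluating T(n) = T(n/2) + n with T(1) = 1: the total stays under 2n, so T(n) = O(n).
def T(n):
    if n <= 1:
        return 1
    return T(n // 2) + n

for n in [16, 1024, 2**20]:
    print(n, T(n), 2 * n)   # e.g. 1024 -> 2047, which is less than 2048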
First a bug:
BinarySum(A, i, n) {
    if n = 1 then
        return A[i] // base case
    return BinarySum(A, i, n/2) + BinarySum(A, i + (n/2), (n - n/2))  // <-- was (n/2): use the remaining length
}
The second half might have an odd length, and with n/2 as its length argument the last value would be dropped.
Recursively, this can add up to several dropped values.
On the complexity: we are computing A[0] + A[1] + A[2] + ... + A[n-1].
The recursion goes down to single elements A[i], and every + above corresponds to exactly one internal call with a left and a right child. So there are n leaves plus n-1 internal calls, 2n-1 calls in total: O(n). Beyond that, the shape of the call tree is irrelevant: BinarySum is not faster than a plain left-to-right sum unless you use multithreading.
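For completeness, here is a runnable Python version of the corrected function (a sketch; the call counter is only there to make the 2n-1 count visible):

calls = 0   # counts every invocation of binary_sum

def binary_sum(A, i, n):
    # Sums the n elements A[i], A[i+1], ..., A[i+n-1] by splitting the range in half.
    global calls
    calls += 1
    if n == 1:
        return A[i]                      # base case: a single element
    half = n // 2
    return binary_sum(A, i, half) + binary_sum(A, i + half, n - half)

A = list(range(1, 11))                   # the numbers 1..10
print(binary_sum(A, 0, len(A)), calls)   # 55 and 19 calls: 2*10 - 1, i.e. O(n)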
There's a difference between making just one recursive function call, like binary search does, and making two recursive function calls, like your code does. In both cases the maximum depth of the recursion is O(log n), but the total costs of the algorithms are different. I think you are confusing the maximum depth of the recursion with the total running time of the algorithm.
The given function does a constant amount of work before making two recursive function calls; let c denote the work done by a single call outside its recursive calls. You can draw a recursion tree where each node is the cost of a function call. The root node has a cost of c. The root node has two children because there are two recursive calls, each again with a cost of c. Each of these children makes two further recursive calls; hence, the root node has 4 grandchildren, each with a cost of c. This continues until we hit the base case.
The total cost of the recursion tree is the cost of the root node (which is c), plus the cost of its children (which is 2c), plus the cost of the grandchildren (which is 4c), and so on, until we hit the n leaves (which have a total cost of nc, where for simplicity we'll assume n is a power of 2). The total cost of all levels of the recursion tree is c+2c+4c+8c+...+nc = O(nc) = O(n). Here, we used the fact that in an increasing geometric series, the total sum is dominated by the last term (the sum is essentially just the last term, up to constant factors, which are subsumed in asymptotic notation). The sum has O(log n) terms, but its value is O(n).
Equivalently, the recurrence describing the running time of your algorithm is T(n) = 2T(n/2)+c, and by the Master theorem, the solution is T(n) = O(n). This is different from binary search, which has the recurrence T(n)=1T(n/2)+c, which has the solution T(n)=O(log n). For binary search, the total cost of all levels of the recursion tree would be c+c+...+c; here, the sum has O(log n) terms and the sum is O(log n).

Time complexity of a recursive and a non-recursive algorithm

I have two pseudo-code algorithms:
RandomAlgorithm(modVec[0 to n − 1])
    b = 0;
    for i = 1 to n do
        b = 2·b + modVec[n − i];
    for i = 1 to b do
        modVec[i mod n] = modVec[(i + 1) mod n];
    return modVec;
Second:
AnotherRecursiveAlgo(multiplyVec[1 to n])
    if n ≤ 2 do
        return multiplyVec[1] × multiplyVec[1];
    return multiplyVec[1] × multiplyVec[n] +
           AnotherRecursiveAlgo(multiplyVec[1 to n/3]) +
           AnotherRecursiveAlgo(multiplyVec[2n/3 to n]);
I need to analyse the time complexity for these algorithms:
For the first algorithm I got that the first loop is O(n). The second loop has a best case and a worst case: in the best case it runs only once, O(1); in the worst case b is huge after the first loop, but I don't know how to write this idea as a time complexity. I usually get b = sum (for i = 0 to n−1) of 2^i · modVec[i] and get stuck there.
For the second algorithm I just don't get how to work out the time complexity; it depends on n, so I think I need a recurrence.
Thanks for the help.
The first problem is a little strange, all right.
If it helps, envision modVec as an array of 1's and 0's.
In this case, the first loop converts this array to a value.
This is O(n)
For instance, (1, 1, 0, 1, 1) will yield b = 27.
Your second loop runs b times. The dominating term in the value of b is 2^(n-1)·modVec[n-1], so b can be as large as Θ(2^n); the assignment you do inside the loop is O(1), so the second loop is O(2^n).
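A quick Python sketch of that first loop (assuming 0/1 entries, as suggested above) makes the size of b explicit:

# The first loop of RandomAlgorithm interprets modVec as a binary number,
# with modVec[n-1] as the most significant bit.
def vector_to_value(modVec):
    n = len(modVec)
    b = 0
    for i in range(1, n + 1):
        b = 2 * b + modVec[n - i]
    return b

print(vector_to_value([1, 1, 0, 1, 1]))   # 27, as in the example above
print(vector_to_value([1] * 20))          # 1048575 = 2**20 - 1: the second loop runs Theta(2**n) times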
The second algorithm does depend on n. Your base case is a simple multiplication, O(1). The recursion step has three terms:
simple multiplication
recur on n/3 elements
recur on n/3 elements (from 2n/3 to the end is n/3 elements)
Just as your binary partitions give a log[2] recursion depth, this one gives a log[3] depth. The base of the logarithm isn't the issue, though; the coefficient (two recursive calls per step) is: the number of calls doubles at every level, so the total work is 2^(log_3(n)) = n^(log_3(2)) ≈ n^0.63, which is sublinear but not O(log N).
Does that push you to a solution?
First Algorithm
To me this boils down to the O(First-For-Loop) + O(Second-For-Loop).
O(First-For-Loop) is simple = O(n).
O(Second-For-Loop) interestingly depends on n. Therefore, to me it can be depicted as O(f(n)), where f(n) is some function of n. I'm not completely sure what f(n) is based on the code presented.
The answer consequently becomes O(n) + O(f(n)). This boils down to O(n) or O(f(n)), whichever is larger and more dominant (since the lower-order terms don't matter in big-O notation).
Second Algorithm
In this case, each call to the function does three things:
The first is a simple multiplication, an O(1) operation, so it doesn't affect the growth rate.
The second and the third are recursive calls, each on roughly a third of the input.
Therefore each function call results in 2 additional recursive calls, but the input shrinks by a factor of 3 each time, so the recursion depth is log_3(n).
Consequently, the total number of calls is 2^(log_3(n)) = n^(log_3(2)) ≈ n^0.63, so the time complexity is Θ(n^(log_3(2))), far below the loose O(2^n) you would get by ignoring how quickly the input shrinks.
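A small numeric check of this (a Python sketch of the call-count recurrence T(n) = 2T(n/3) + 1, with the base case at n ≤ 2 as in the pseudocode) shows the sub-exponential growth:

import math

# Call count for the recursive algorithm: T(n) = 2*T(n/3) + 1, T(n) = 1 for n <= 2.
def T(n):
    if n <= 2:
        return 1
    return 2 * T(n // 3) + 1

for k in [4, 8, 12]:
    n = 3 ** k
    print(n, T(n), round(n ** math.log(2, 3)))   # T(n) is about 2*n**log_3(2), nowhere near 2**n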

How is Summation(n) Theta(n^2) according to its formula, but Theta(n) if we just look at it as a single for loop?

Our prof and various materials say Summation(n) = n(n + 1)/2 and hence is theta(n^2). But intuitively, we just need one loop to find the sum of the first n terms! So it has to be theta(n). I'm wondering what I am missing here.
All of these answers are misunderstanding the problem, just like the original question: the point is not to measure the runtime complexity of an algorithm for summing integers, it's about how to reason about the complexity of an algorithm which takes i steps during pass i, for i in 1..n. Consider insertion sort: on step i, the output list is already i elements long, so inserting one member of the original list takes i steps (on average). What is the complexity of insertion sort? It's the sum of all of those steps, or the sum of i for i in 1..n. That sum is n(n+1)/2, which has an n^2 in it, thus insertion sort is O(n^2).
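To make the insertion-sort example concrete, here is a Python sketch (a hypothetical step counter, not part of the original question) that counts the inner-loop shifts on reverse-sorted input, where inserting the i-th element takes i shifts:

# Sketch: counting the inner-loop work of insertion sort on reverse-sorted input.
def insertion_sort_steps(a):
    a = list(a)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # each shift is one unit of work
            a[j + 1] = a[j]
            j -= 1
            steps += 1
        a[j + 1] = key
    return steps

n = 100
print(insertion_sort_steps(range(n, 0, -1)))   # 4950 = n*(n-1)/2 shifts: Theta(n^2)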
The running time of this code is Θ(1) (assuming addition/subtraction and multiplication are constant-time operations):
result = n*(n + 1)/2 // This statement executes once
The running time of the following pseudocode, which is what you described, is indeed Θ(n):
result = 0
for i from 1 up to n:
    result = result + i // This statement executes exactly n times
Here is another way to compute it which has a running time of Θ(n²):
result = 0
for i from 1 up to n:
    for j from i up to n:
        result = result + 1 // This statement executes exactly n*(n + 1)/2 times
All three of those code blocks compute the natural numbers' sum from 1 to n.
This Θ(n²) loop is probably the type you are being asked to analyse. Whenever you have a loop of the form:
for i from 1 up to n:
    for j from i up to n:
        // Some statements that run in constant time
You have a running time complexity of Θ(n²), because those statements execute exactly summation(n) times.
I think the problem is that you're incorrectly assuming that the summation formula has time complexity theta(n^2).
The formula has an n^2 in it, but it doesn't require a number of computations or amount of time proportional to n^2.
Summing everything up to n in a loop would be theta(n), as you say, because you would have to iterate through the loop n times.
However, calculating the result of the equation n(n+1)/2 would just be theta(1) as it's a single calculation that is performed once regardless of how big n is.
Summation(n) being n(n+1)/2 refers to the sum of the numbers from 1 to n. That is a closed-form mathematical formula and can be evaluated without a loop, in O(1) time. If you instead iterate over an array to sum all its values, that is an O(n) algorithm.

Prove 3-Way Quicksort Big-O Bound

For 3-way Quicksort (dual-pivot quicksort), how would I go about finding the Big-O bound? Could anyone show me how to derive it?
There's a subtle difference between finding the complexity of an algorithm and proving it.
To find the complexity of this algorithm, you can do as amit said in the other answer: you know that on average you split your problem of size n into three smaller problems of size n/3, so you will get, in log_3(n) steps on average, to problems of size 1. With experience, you will start getting a feeling for this approach and be able to deduce the complexity of algorithms just by thinking about them in terms of the subproblems involved.
To prove that this algorithm runs in O(n log n) in the average case, you can use the Master theorem. To use it, you have to write the recurrence giving the time spent sorting your array. As we said, sorting an array of size n can be decomposed into sorting three arrays of size n/3, plus the time spent building them. This can be written as follows:
T(n) = 3T(n/3) + f(n)
Where T(n) is a function giving the resolution "time" for an input of size n (actually the number of elementary operations needed), and f(n) gives the "time" needed to split the problem into subproblems.
For 3-way quicksort, f(n) = c·n, because you go through the array, check where to place each item and eventually make a swap. This places us in case 2 of the Master theorem, which states that if f(n) = Θ(n^(log_b(a)) · log^k(n)) for some k >= 0 (in our case k = 0), then
T(n) = Θ(n^(log_b(a)) · log^(k+1)(n))
As a = 3 and b = 3 (we get these from the recurrence relation, T(n) = aT(n/b)), this simplifies to
T(n) = Θ(n log n)
And that's a proof.
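If it helps to see the splitting in code, here is a minimal dual-pivot (3-way) quicksort sketch in Python: a simplified, non-in-place version that just shows the three-way split the recurrence T(n) = 3T(n/3) + c·n is modelling (function and variable names are mine, not from any particular library):

def dual_pivot_quicksort(a):
    # Simplified sketch: not in-place, pivots taken from the ends of the list.
    if len(a) <= 1:
        return a
    p, q = sorted((a[0], a[-1]))            # two pivots with p <= q
    rest = a[1:-1]
    low  = [x for x in rest if x < p]       # elements below the small pivot
    mid  = [x for x in rest if p <= x <= q] # elements between the pivots
    high = [x for x in rest if x > q]       # elements above the large pivot
    return (dual_pivot_quicksort(low) + [p] +
            dual_pivot_quicksort(mid) + [q] +
            dual_pivot_quicksort(high))

print(dual_pivot_quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]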
Well, the same proof actually holds.
Each iteration splits the array into 3 sublists; on average, each of these sublists has size n/3.
Thus the number of iterations needed is log_3(n), because you need to find how many times you can do (((n/3)/3)/3)... until you get to one. This gives you the formula:
n/(3^i) = 1
which is satisfied for i = log_3(n).
Each iteration still goes over all of the input (just split into different sublists), the same as quicksort, which gives you O(n·log_3(n)).
Since log_3(n) = log(n)/log(3) = log(n) * CONSTANT, you get that the run time is O(nlogn) on average.
Note that even if you take a more pessimistic approach to calculating the sizes of the sublists, using the expected minimum and maximum of the uniform distribution, you still get a first sublist of size n/4, a second sublist of size n/2, and a last sublist of size n/4, which again decays to O(log(n)) levels (with a different constant base) and yields O(n log n) overall.
Formally, the proof will be something like:
Each iteration takes at most c_1·n ops to run, for each n > N_1, for some constants c_1, N_1. (This is the definition of big-O notation, plus the claim that each iteration is O(n) excluding recursion; convince yourself why this is true. Note that here "iteration" means all the work done by the algorithm at a certain level of the recursion, not a single recursive invocation.)
As seen above, you have log_3(n) = log(n)/log(3) iterations in the average case (taking the optimistic version here; the same principle works for the pessimistic one).
Now, we get that the running time T(n) of the algorithm is:
for each n > N_1:
T(n) <= c_1 * n * log(n)/log(3)
T(n) <= c_1 * nlogn
By definition of big O notation, it means T(n) is in O(nlogn) with M = c_1 and N = N_1.
QED

Big O of this equation?

for (int j = 0, k = 0; j < n; j++)
    for (double m = 1; m < n; m *= 2)
        k++;
I think it's O(n^2) but I'm not certain. I'm working on a practice problem and I have the following choices:
O(n^2)
O(2^n)
O(n!)
O(n log(n))
Hmmm... well, break it down.
It seems obvious that the outer loop is O(n): j increases by 1 each iteration.
The inner loop, however, doubles m each time, i.e. it grows by powers of 2, and exponentials are (inversely) related to logarithms.
Why have you come to the O(n^2) solution? Prove it.
It's O(n·log2(n)). The code block runs n*log2(n) times.
Suppose n = 16. Then the first loop runs 16 (= n) times, and the second loop runs 4 (= log2(n)) times (m = 1, 2, 4, 8). So the inner statement k++ runs 64 = n*log2(n) times.
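Translating the loop directly into Python and counting the increments confirms this (a sketch, with n = 16 as in the example):

# Direct translation of the nested loops, counting how many times k++ runs.
def count_increments(n):
    k = 0
    j = 0
    while j < n:            # outer loop: n iterations
        m = 1.0
        while m < n:        # inner loop: m doubles, so about log2(n) iterations
            k += 1
            m *= 2
        j += 1
    return k

print(count_increments(16))   # 64 = 16 * 4 = n * log2(n)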
Let's look at the worst-case behaviour of the second loop. The values go 1, 2, 4, 8, ..., just like the probes of an exponential (galloping) search. Say n is 2^k for some k >= 0: in the worst case of such a search you keep probing until 2^k and realise you overshot the target, so the target must lie between 2^(k-1) and 2^k, a range containing 2^(k-1) elements (think about it for a second), yet the number of positions you have examined so far is only O(k), which is O(log n). The inner loop here behaves the same way: O(log n) iterations. For the first loop it's O(n) (too simple to spell out). So the order of the whole code will be O(n log n).
A generic way to approach these sorts of problems is to consider the order of each loop, and because they are nested, you can multiply the "O" notations.
Some simple rules for big "O":
O(n)O(m) = O(nm)
O(n) + O(m) = O(n + m)
O(kn) = O(n) where 'k' is some constant
The 'j' loop iterates across n elements, so clearly it is O(n).
The 'm' loop iterates across log(n) elements, so it is O(log(n)).
Since the loops are nested, our final result would be O(n) * O(log(n)) = O(n*log(n)).
