Asymptotic analysis: n log n vs. n runtime

[Image: the code being analyzed for asymptotic runtime]
I was given the problem of analyzing the runtime of this code, and I came up with O(n log n), since the outer loop is O(log n) and the inner loop is O(n), so the loops multiply out to O(n log n). However, the solution says this is incorrect and that the actual runtime is O(n), because the inner loop runs in O(n) but for some reason the loops don't multiply, and the log n factor is dropped as lower order.
Can someone help me understand this? Why is it log n + n instead of log n * n?

Okay, let's count how many times the instruction sum++ is executed.
In the first iteration of the while loop it is executed 2^0 = 1 time;
in the second, 2^1 = 2 times; in the third, 2^2 = 4 times; and so on.
In iteration number var it is executed 2^var times.
So the overall count is
2^0 + 2^1 + 2^2 + ... + 2^var, which is equal to 2^(var+1) - 1.
Now our problem is to find the value of var, which is obviously log(N),
so the overall count is 2^(log(N)+1) - 1, which is O(N).
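The code from the image isn't reproduced above, but from the counting argument it presumably has the shape of an outer loop that doubles a counter, with an inner loop running that many times. A minimal Python sketch along those lines (the names are my own, not from the original):

```python
def count_sum_increments(n):
    """Simulate a doubling outer loop whose inner loop runs 'var' times,
    counting how often sum++ would fire."""
    total = 0
    var = 1                    # outer counter: 1, 2, 4, ..., up to n
    while var <= n:
        for _ in range(var):   # inner loop body: sum++
            total += 1
        var *= 2
    return total

# For n a power of two, the total is 1 + 2 + 4 + ... + n = 2n - 1, i.e. O(n):
print(count_sum_increments(64))  # 127 == 2 * 64 - 1
```

The point is that the inner loop's cost is not O(n) on every outer iteration; the geometric series keeps the grand total linear.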

Related

What will be the time complexity of this code fragment?

In this question, the iterator i is advanced by adding its own log value. What will be the big-O time complexity of this fragment?
i = 10
while (i < n) {
    i = i + log(i);
}
Interesting question! Here's an initial step toward working out the runtime.
Let's imagine that we have reached a point in the loop where the value of i has reached 2^k for some natural number k. Then for i to increase to 2^(k+1), we'll need approximately 2^k / k iterations to elapse. Why? That's because
The amount i needs to increase is 2^(k+1) - 2^k = 2^k(2 - 1) = 2^k, and
At each step, increasing i by log i will increase i by roughly log 2^k = k.
We can therefore break the algorithm apart into "stages." In the first stage, we grow from size 2^3 to size 2^4, requiring (roughly) 2^3 / 3 steps. In the second stage, we grow from size 2^4 to size 2^5, requiring (roughly) 2^4 / 4 steps. After repeating this process many times, we eventually grow from size n/2 = 2^(log n - 1) to size n = 2^(log n), requiring (roughly) 2^(log n) / log n steps. The total amount of work done is therefore given by
2^3 / 3 + 2^4 / 4 + 2^5 / 5 + ... + 2^(log n) / log n.
The goal now is to find some bounds on this expression. We can see that the sum is at least equal to its last term, which is 2^(log n) / log n = n / log n, so the work done is Ω(n / log n). We can also see that the work done is less than
2^3 + 2^4 + 2^5 + ... + 2^(log n)
≤ 2^(log n + 1) (sum of a geometric series)
= 2n,
so the work done is O(n). That sandwiches the runtime between Ω(n / log n) and O(n).
It turns out that Θ(n / log n) is indeed a tight bound here, which can be proved by doing a more nuanced analysis of the summation.
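As a sanity check (my own sketch, not part of the answer), one can simulate the loop and compare the iteration count against n / log n:

```python
import math

def iterations(n):
    """Count iterations of: i = 10; while i < n: i += log(i)."""
    i, count = 10.0, 0
    while i < n:
        i += math.log(i)  # i grows by its own (natural) log each step
        count += 1
    return count

n = 10**5
# The count should sit between n / log n and n (up to constant factors):
print(iterations(n), n / math.log(n), n)
```

The base of the logarithm only changes the constant factor, so natural log is used here for convenience.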
Let us look at the definition of g(n) = O(f(n)): saying that g is of order O(f(n)) means there exist a number n0 and a constant c such that g(n) <= c·f(n) for all n > n0.
Looking at the worst case, the while loop runs at most n times (i increases by more than 1 in every iteration), which means we can say your code is of order O(n).
Now let's assume that the code inside the while loop is
(*) while(i<n) {
i = i + i ;
}
which obviously skips ahead faster than the original one, so we can use this code to estimate a lower bound. Examining (*), we see that in each iteration the counter is doubled; thinking about it a little, we see that each doubling throws out half of the remaining input, so the code in (*) has worst-case asymptotic runtime O(log n).
Now, since the original code lies between the two, we can say its asymptotic lower bound is Ω(log n) and its asymptotic upper bound is O(n).
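The doubling loop (*) can be checked directly; its step count grows like log2(n). A small sketch of mine:

```python
def steps_double(n):
    """Count iterations of: i = 10; while i < n: i = i + i."""
    i, count = 10, 0
    while i < n:
        i += i   # i doubles each iteration
        count += 1
    return count

for n in (10**3, 10**6, 10**9):
    print(n, steps_double(n))  # roughly 10 more doublings per 1000x in n
```

Each factor of 1000 in n adds only about log2(1000) ≈ 10 iterations, which is the logarithmic lower-bound behavior the answer describes.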

Big-O complexity in a loop

If you had an algorithm with a loop that executed n steps the first time through, then n − 2 the second time, n − 4 the next time, and kept repeating until the last time through the loop it executed 2 steps, what would be the complexity measure of this loop?
I believe this exhibits O(n^2) complexity, as the total number of steps executed grows quadratically. I am having a hard time visualizing the loop itself, which makes me less confident about my answer.
Any kind of help/second opinion is greatly appreciated :)
You are correct that the complexity is Θ(n^2). This is because what you describe is an arithmetic progression:
n + (n - 2) + (n - 4) + ... + 2 (or an odd number at the end)
(which is, obviously, 2 + 4 + 6 + ... + (n - 2) + n read backwards, or the odd-beginning equivalent, BTW).
Using the formula for the sum, it is the average of the first and last terms, times the number of terms. Each of these factors is Θ(n), and their product is Θ(n^2).
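Summing the progression directly confirms the quadratic growth. A quick sketch (modeling only the step counts, not the loop body itself):

```python
def total_steps(n):
    """Total work: n + (n - 2) + (n - 4) + ... down to 2 (or 1)."""
    total, steps = 0, n
    while steps >= 1:
        total += steps
        steps -= 2   # the loop does 2 fewer steps each time through
    return total

# Average term ~ n/2, number of terms ~ n/2, so the total is ~ n^2 / 4:
for n in (100, 1000):
    print(n, total_steps(n), n * n // 4)
```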

Time complexity of the following algorithm?

I'm learning Big-O notation right now and stumbled across this small algorithm in another thread:
i = n
while (i >= 1)
{
    for j = 1 to i // NOTE: i instead of n here!
    {
        x = x + 1
    }
    i = i / 2
}
According to the author of the post, the complexity is Θ(n), but I can't figure out how. I think the while loop's complexity is Θ(log(n)). I thought the for loop's complexity would also be Θ(log(n)), because its number of iterations is halved each time.
So, wouldn't the complexity of the whole thing be Θ(log(n) * log(n)), or am I doing something wrong?
Edit: the segment is in the best answer of this question: https://stackoverflow.com/questions/9556782/find-theta-notation-of-the-following-while-loop#=
Imagine for simplicity that n = 2^k. How many times does x get incremented? It easily follows that this is a geometric series:
2^k + 2^(k - 1) + 2^(k - 2) + ... + 1 = 2^(k + 1) - 1 = 2 * n - 1
So this part is Θ(n). Also, i gets halved k = log n times, which has no asymptotic effect on the Θ(n).
The value of i for each iteration of the while loop, which is also how many iterations the for loop has, are n, n/2, n/4, ..., and the overall complexity is the sum of those. That puts it at roughly 2n, which gets you your Theta(n).
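Counting the increments directly (my own sketch) confirms the 2n − 1 total for n a power of two:

```python
def count_x(n):
    """Simulate: i = n; while i >= 1: {for j = 1 to i: x += 1}; i = i / 2."""
    x = 0
    i = n
    while i >= 1:
        for _ in range(i):  # the inner for loop runs i times
            x += 1
        i //= 2             # i = i / 2 with integer division
    return x

print(count_x(1024))  # 2047 == 2 * 1024 - 1
```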

Worst case time complexity for this stupid sort?

The code looks like:
for (int i = 1; i < N; i++) {
if (a[i] < a[i-1]) {
swap(i, i-1);
i = 0;
}
}
After trying a few things, I figure the worst case is when the input array is in descending order. Then the number of compares is at a maximum, so we will count only compares. It seems it would be a sum of sums, i.e. {1+2+3+...+(n-1)} + {1+2+3+...+(n-2)} + {1+2+3+...+(n-3)} + ... + 1. If so, what would the big-O be?
If I am not on the right path, can someone point out what the big-O would be and how it can be derived? cheers!
For starters, the summation
(1 + 2 + 3 + ... + n) + (1 + 2 + 3 + ... + (n - 1)) + ... + 1
is not actually O(n). Instead, it's O(n^3). You can see this because the sum 1 + 2 + ... + n = O(n^2), and there are n copies of it. You can more properly show that this summation is Θ(n^3) by looking at the first n / 2 of these terms. Each of those terms is at least 1 + 2 + 3 + ... + n / 2 = Θ(n^2), so we have n / 2 copies of something that's Θ(n^2), giving a tight bound of Θ(n^3).
We can upper-bound the total runtime of this algorithm at O(n^3) by noting that every swap decreases the number of inversions in the array by one (an inversion is a pair of elements out of relative order). There can be at most O(n^2) inversions in an array, and a sorted array has no inversions (do you see why?), so there are at most O(n^2) passes over the array, each taking at most O(n) work. That collectively gives a bound of O(n^3).
Therefore, the Θ(n^3) worst-case runtime you've identified is asymptotically tight: the algorithm runs in time O(n^3) and has worst-case runtime Θ(n^3).
Hope this helps!
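To see the cubic behavior empirically, here is a direct Python translation of the loop (a sketch of mine) that counts comparisons on a reversed array:

```python
def stupid_sort(a):
    """The 'restart on swap' sort; returns the number of comparisons made."""
    comparisons = 0
    i = 1
    while i < len(a):
        comparisons += 1
        if a[i] < a[i - 1]:
            a[i], a[i - 1] = a[i - 1], a[i]
            i = 0   # i = 0 in the original for loop; i++ restarts at 1
        i += 1
    return comparisons

n = 32
a = list(range(n, 0, -1))   # descending order: the suspected worst case
comps = stupid_sort(a)
assert a == sorted(a)       # it does sort correctly
print(comps)                # bounded above by ~(inversions + 1) * n, i.e. O(n^3)
```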
It does one iteration of the list per swap. The maximum number of swaps necessary is O(n * n) for a reversed list. Doing each iteration is O(n).
Therefore the algorithm is O(n * n * n).
This is one half of the infamous Bubble Sort, which has O(N^2) complexity. A single pass of the for loop from 1 to N is O(N): after one pass, you end up with the largest element at the end of the list and the rest of the list in some changed order. To be a proper Bubble Sort, it needs another loop inside this one that iterates j from 1 to N-i and does the same thing; the if goes inside the inner loop. (Note that the i = 0 reset in the code above instead restarts the scan after every swap, which is what pushes the worst case beyond O(N^2).)
Now you have two loops, one inside the other, both going from 1 to N (more or less). You get N * N, or N^2, iterations. Thus O(N^2) for the Bubble Sort.
Now take your next step as a programmer: finish writing the Bubble Sort and make it work correctly. Try it with different lengths of list a and see how long it takes. Then never use it again. ;-)
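For reference, a finished Bubble Sort along the lines the answer describes might look like this (one sketch of it, not the only formulation):

```python
def bubble_sort(a):
    """Classic bubble sort: after pass i, the last i elements are in place."""
    n = len(a)
    for i in range(1, n):              # outer loop: i from 1 to N-1
        for j in range(1, n - i + 1):  # inner loop: j from 1 to N-i
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]  # swap adjacent pair
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```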

time complexity calculation for two for loops with connecting variables

What would be the time complexity of this?
for (k = 1; k <= n; k *= 2)
    for (j = 1; j <= k; j++)
        sum++;
For this I thought:
1. The outer loop will run log n times.
2. The inner loop will also run log n times, because I think the inner loop's j is bounded by k, so however much the outer loop runs, the inner loop runs the same. So total = O(log n * log n).
But in the text they have given total = O(2n - 1).
Can you please explain?
when k is 1, (sum++) runs 1 time
when k is 2, (sum++) runs 2 times
when k is 4, (sum++) runs 4 times
when k is n = 2^m, (sum++) runs 2^m times
so we must calculate
1 + 2 + 4 + ... + 2^m = 2^0 + 2^1 + 2^2 + ... + 2^m = (1 - 2^(m+1))/(1 - 2) = 2^(m+1) - 1
because we put n = 2^m, so:
m = log2(n) and 2^m = n, hence
2^(m+1) - 1 = 2 * 2^m - 1 = 2n - 1
This problem is most easily interpreted by forgetting the inner loop's bound for a moment and counting outer iterations. Suppose that the outer loop runs M times... then the total number of sum++ operations will be
1 + 2 + 4 + ... + 2^(M-1)
This sum reduces to 2^M - 1, as you can see by noticing that it is a binary number composed of all 1's. Now the question is: what is M? You've already figured this out: M = log(n) + 1 (the +1 is because the loop must run at least once). Plugging this into the total leaves us with
2^(log(n)+1) - 1 = 2n - 1.
Thus the entire loop scales as O(2n - 1), i.e. O(n). Hope this helps!
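Both derivations can be checked numerically (my own sketch). For n a power of two, the count is exactly 2n − 1, and the outer loop runs log2(n) + 1 times:

```python
def run(n):
    """Return (sum, outer_iterations) for the nested loop in question."""
    total, outer = 0, 0
    k = 1
    while k <= n:          # for (k = 1; k <= n; k *= 2)
        outer += 1
        total += k         # the inner loop adds k to sum
        k *= 2
    return total, outer

total, outer = run(256)
print(total, outer)  # 511 == 2 * 256 - 1, and 9 == log2(256) + 1
```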
