Determining and analyzing the big-O runtimes of these different loops - algorithm

Here is the simple code; I want to find the time complexity of it.
I have already done an analysis of it, but my teacher told me there is a mistake in it, and I am not able to figure out where I am wrong. I need some help with it. Thanks.
j = 2
while (j < n)
{
    k = j
    while (k < n)
    {
        sum += a[k]*b[k]
        k = k*k
    }
    k = log(n)
    j += log(k)
}
Here is the answer I got:
time complexity = O(n/loglogn)
I just want to know where I am wrong.

You go from 2 to n, adding log log n to j each step, so you do indeed have n / log log n steps.
However, what is done per step? Each step, you go from j to n, squaring k each time. How many operations is that? I'm not 100% sure, but based on messing around a bit and on this answer, it seems to end up being log log (n - j) steps, or log log n for short.
So, n / log log n steps, doing log log n operations each step, gives you an O(n / log log n * log log n), or O(n) algorithm.
Some experimentation seems to more or less bear this out (Python), although n_ops appears to flag a bit as n gets bigger:
import math

def doit(n):
    n_ops = 0
    j = 2
    while j < n:
        k = j
        while k < n:
            # sum += a[k]*b[k]
            k = k*k
            n_ops += 1
        k = math.log(n, 2)
        j += math.log(k, 2)
        n_ops += 1
    return n_ops
Results:
>>> doit(100)
76
>>> doit(1000)
614
>>> doit(10000)
5389
>>> doit(100000)
49418
>>> doit(1000000)
463527

Ok. Let's see. The
k = j
while (k < n)
{
    sum += a[k]*b[k]
    k = k*k
}
bit takes as many steps i as j^(2^i) needs to reach n, i.e. as many as 2^i needs to reach log_j(n), which is log_2(log_j(n)). Now you have
j = 2
while (j < n)
{
    // stuff that takes log_2(log_j(n))
    j += log(log(n))
}
This would be n/log(log(n)) steps, but those steps take different amounts of time. If they took equal time, you would be right. Instead you have to sum log_2(log_j(n)) over the roughly n/log(log(n)) values that j takes on between 2 and n, which is
sum over those j of [log_2(log(n)) - log_2(log(j))]
which is not that simple. Well, at least I think I've pointed out where you are probably wrong, which was the question.
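For what it's worth, here is a rough numerical evaluation of that sum (my own sketch, with base-2 logs throughout), just to see how it actually grows with n:
import math

# Walk j exactly as the outer loop does, accumulating the inner-loop
# cost log_2(log_j(n)) at each step.
def inner_cost_sum(n):
    total = 0.0
    j = 2
    while j < n:
        total += math.log2(math.log(n, j))  # inner-loop cost for this j
        j += math.log2(math.log2(n))        # the outer-loop increment
    return total

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, round(inner_cost_sum(n)))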

Related

Run-Time complexities of the following functions

I need some help with these functions and with whether my run-time complexities for them are correct. I'm currently learning the concepts in class, and I've looked at videos and such, but I can't find any explaining these tougher ones, so I'm hoping I can get some help here to check whether I'm doing it right.
sum = 0
for i = 1 to n*n
    for j = 1 to i * i * i
        sum++
For this one I am thinking the answer is O(n^5), because the outer loop runs n^2 times while the inner loop runs n^3 times, and together that makes n^5.
sum = 0
for i = 1 to n^2    // O(n^2) times
    j = i
    while j > 0     // O(n+1) since the while loop will check one more time if the loop is valid
        sum++
        j = (j div 5)
For this one I'm assuming it's going to run O(n^3 + 1) times, since the outer loop runs n^2 times and the while loop is n+1, and together that's n^3 + 1.
for i = 1 to n          // n times
    for j = 1 to n {    // n^2 times
        C[i,j] = 0
        for k = 1 to n  // n^3 times?
            C[i,j] = C[i,j] + A[i,k]*B[k,j]
    }
So for this one I'm thinking it's O(n^6), but I am really iffy on it. I have seen some examples online where people figure a loop to be O(n log n), but I am totally lost on how that is found. Any help would be greatly appreciated!
Your comment-level counts for the third algorithm are right: the innermost statement executes n * n * n = Θ(n^3) times in total, so it is O(n^3), not O(n^6). The first is off as well: the inner loop runs i^3 times and i goes up to n^2, so the total is sum[i: 1..n^2] i^3 = Θ((n^2)^4) = Θ(n^8), not n^5. The second is also off. The inner loop
while j > 0     // O(n+1) since the while loop will check one more time if the loop is valid
    sum++
    j = (j div 5)
starts with j equal to i and divides j by 5 at each iteration, so it runs log(i) times. In turn, i varies from 1 to n^2, so the total execution time is
sum[i: 1..n^2] log(i)
By the properties of the logarithm this sum is equal to log((n^2)!). Using Stirling's approximation for the factorial, one obtains a time complexity of O(n^2 log(n^2)) = O(2 n^2 log(n)) = O(n^2 log(n)).
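A quick empirical check of this (my own sketch; count_ops is a made-up helper) counts the inner-loop iterations directly and compares them against n^2 * log_5(n^2):
import math

# Count how many times the body of the while loop runs in total,
# then compare against the n^2 * log5(n^2) estimate.
def count_ops(n):
    ops = 0
    for i in range(1, n * n + 1):
        j = i
        while j > 0:
            ops += 1
            j //= 5
    return ops

for n in (10, 50, 100):
    print(n, count_ops(n), round(n * n * math.log(n * n, 5)))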

Is this loop O(nlog(n))?

I have a nested for loop that I am trying to analyze the efficiency of. The loop looks like this:
int n = 1000;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < i; j++) {
        System.out.print("*");
    }
}
I don't believe that this algorithm is O(n^2), because the inner loop does not run n times; it only runs i times. However, it certainly is not O(n). So I hypothesize that it must be between the two efficiencies, which gives O(n log(n)). Is this accurate, or is it really an O(n^2) algorithm and I'm misunderstanding the effect the inner loop has on the efficiency?
Your algorithm will run a triangular number of times:
n * (n + 1) / 2
In the above case, n = 999 because the first j loop doesn't run:
(999 * 1000) / 2 = 499500
It is lower than n**2, but it still is O(n**2), because n * (n + 1) / 2 is n**2 / 2 + n / 2. When n is large, you can ignore n / 2 compared to n**2 / 2, and you can also ignore the constant 1 / 2 factor.
I kind of get your doubts, but try to think of it this way: what value will i have in the worst case? n-1, right? So, since the complexity is evaluated by considering the worst case, it turns out that it is O(n^2), as n * (n-1) ~ n^2.
The number of iterations is sum from i=0 to n-1 (sum from j=0 to i-1 (1)). The inner sum is obviously equal to i. sum from i=0 to n-1 (i) = n * (n-1) / 2 = O(n^2) is well known.
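A short check (my own sketch) confirms the triangular count:
# Count the stars printed and compare with n * (n - 1) / 2.
def stars(n):
    count = 0
    for i in range(n):
        for j in range(i):
            count += 1
    return count

n = 1000
print(stars(n), n * (n - 1) // 2)  # prints: 499500 499500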

Time complexity of this simple code

In pseudo-code:
j = 5;
while (j <= n) {
    j = j * j * j * j;
}
What is the time complexity of this code?
It is way faster than O(log n); is there even any reason to go lower than that?
Let's trace through the execution of the code. Suppose we start with initial value j0:
0. j ← j0
1. j ← j0^4
2. j ← [j0^4]^4 = j0^(4^2)
3. j ← [j0^(4^2)]^4 = j0^(4^3)
4. j ← [j0^(4^3)]^4 = j0^(4^4)
...
m. j ← [j0^(4^(m-1))]^4 = j0^(4^m)
... after m loops.
The loop terminates when the value exceeds n:
j0^(4^m) > n
⇒ m > log_4(log_j0(n))
Thus the time complexity is O(m) = O(log log n).
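A small numerical check (my own sketch) counts the loop's iterations for growing n and compares them with log_4(log_5(n)):
import math

# Count iterations of j = j^4 starting from j = 5, and compare against
# log4(log5(n)) from the derivation above.
def iterations(n):
    j, count = 5, 0
    while j <= n:
        j = j ** 4
        count += 1
    return count

for n in (10**2, 10**10, 10**100, 10**1000):
    print(iterations(n), round(math.log(math.log(n, 5), 4), 2))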
I used help from MathSE to find out how to solve this. The answer is the same as the one by #meowgoesthedog, but I understand it in the following way:
On every iteration, the value of j is raised to its own 4th power. Or, we can look at it from the side of n: on every iteration, n is reduced to its 4th root. Hence, the recurrence will look like:
T(n) = 1 + T(n^(1/4))
For any integer k, with 2^(4^k) + 1 <= n <= 2^(4^(k+1)), the recurrence will become:
T(n) = 1 + k
if we go on to assume that the 4th root will always be an integer. It won't matter if it is not, as the constant of +/- 1 will be ignored in the Big-O calculation.
Now, since the assumption of the 4th root always being an integer simplifies things for us, we can try to solve the following equation:
n = 2^(4^k),
with the equation yielding k = (Log(Log(n)) - Log(Log(2)))/Log(4).
This implies that O(T(n)) = O(Log(Log(n))).

Big Theta complexity for simple algorithm

I have the following code and have to determine the big theta complexity
for i = 1 to n do
    for j = 1 to n do
        k = j
        while k <= n do
            k = k*3
        end while
    end for
end for
It's easy to see that the first two for-loops run n times each, but the while loop is throwing me off. The first time it runs log_3(n) times, but after that I can't really tell.
Anyone who can help?
Let T be the run time. It is clear T is Ω(n^2). We can use Stirling's approximation to expand ln n! and get
T = ∑_i ∑_j ⌈log_3(n/j)⌉ = n * O(∑_j (ln n - ln j + 1)) = n * O(n ln n - ln n! + n) = n * O(n ln n - (n ln n - n + O(ln n)) + n) = O(n^2)
Thus T = Θ(n^2)
Solution without using heavy-weight math:
Turn the problem on its head: instead of thinking about the first time the inner loop runs, think about the last time: it runs only once. In fact, the innermost loop runs only once for most values of j.
It runs once when j > n/3, that is, for 2n/3 values of j
It runs twice when n/9 < j <= n/3, that is, for 2n/9 values of j
It runs 3 times when n/27 < j <= n/9, that is, for 2n/27 values of j
It runs 4 times when n/81 < j <= n/27, that is, for 2n/81 values of j
...
The total number of times the innermost loop runs is going to be
1 * 2n/3 + 2 * 2n/9 + 3 * 2n/27 + 4 * 2n/81 + ...
= 2n(1/3 + 2/9 + 3/27 + ... )
< 2n Sum[k/3^k, for k=1 to infinity]
It's easy to see that the series Sum[k/3^k] converges (ratio test). Therefore the j-loop runs in O(n) time, and the entire thing in O(n²) time.
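As a sanity check (my own sketch), we can count the innermost steps directly; the series sums to 3/4, so the count per i-iteration should approach 2n * 3/4 = 1.5n, i.e. about 1.5 * n^2 overall:
# Count every execution of k = k*3 and compare against 1.5 * n^2.
def count_steps(n):
    total = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            k = j
            while k <= n:
                k = k * 3
                total += 1
    return total

for n in (100, 500, 1000):
    print(n, count_steps(n), 1.5 * n * n)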

O(n log log n) time complexity

I have a short program here:
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k]
        k = k * k;
    }
    i++;
}
The asymptotic running time of this is O(n log log n). Why is this the case? I get that the entire program will run at least n times. But I'm not sure how to find the log log n part. The inner loop depends on k * k, so it's obviously going to run fewer than n times. And it would just be n log n if it were k * 2 each time. But how would you figure out that the answer is log log n?
For a mathematical proof, the inner loop can be written as:
T(n) = T(sqrt(n)) + 1
W.l.o.g. assume 2^(2^(t-1)) <= n <= 2^(2^t). We know that
2^(2^t) = 2^(2^(t-1)) * 2^(2^(t-1)), so
T(2^(2^t)) = T(2^(2^(t-1))) + 1 = T(2^(2^(t-2))) + 2 = ... = T(2^(2^0)) + t
T(2^(2^(t-1))) <= T(n) <= T(2^(2^t)) = T(2^(2^0)) + log log 2^(2^t) = O(1) + log log n
⇒ O(1) + log log n - 1 <= T(n) <= O(1) + log log n ⇒ T(n) = Θ(log log n),
and the total time is then O(n log log n).
Why is the inner loop T(n) = T(sqrt(n)) + 1?
First, see when the inner loop breaks: when k > n. That means that one step earlier k was at least sqrt(n), and two steps earlier it was at most sqrt(n), so the running time satisfies T(sqrt(n)) + 2 ≥ T(n) ≥ T(sqrt(n)) + 1.
The time complexity of a loop is O(log log n) if the loop variable is raised to a constant power each iteration (e.g. squared). If the loop variable is multiplied or divided by a constant factor each iteration, the complexity is O(log n).
E.g., in your case the value of k evolves as follows; the number in parentheses denotes how many times the loop has executed:
2 (0), 2^2 (1), 2^4 (2), 2^8 (3), 2^16 (4), 2^32 (5), 2^64 (6), ... until n is reached.
The number of iterations here will be O(log log n).
For the sake of illustration, let's assume that n is 2^64. Now log(2^64) = 64, and log 64 = log(2^6) = 6. Hence the inner loop runs 6 times when n is 2^64.
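That is easy to check empirically (my own sketch):
import math

# Count the squaring steps of the inner loop and compare with log2(log2(n)).
def inner_steps(n):
    k, steps = 2, 0
    while k < n:
        k = k * k
        steps += 1
    return steps

for e in (4, 16, 64, 256):
    n = 2 ** e
    print(e, inner_steps(n), math.log2(math.log2(n)))  # e = 64 gives 6 steps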
I think if the code were like this instead, it would be n log n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k]
        k *= c; // c is a constant bigger than 1 and less than k
    }
    i++;
}
Okay, so let's break this down first -
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k]
        k = k * k;
    }
    i++;
}
while (i < n) will check its condition n+1 times, but we'll round that off to n iterations of the body.
Now here comes the fun part: the k < n loop will not run n times; it will run log log n times, because instead of incrementing k by 1 we square it on each pass. That means the inner loop takes only log log n time. You'll see this again when you study design and analysis of algorithms.
Combining the two, we get n * log log n time. I hope you get it now.
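Putting it together (my own sketch), the total inner-loop work can be counted and compared against n * log2(log2(n)):
import math

# Total number of inner-loop squarings across all n outer iterations,
# compared with n * log2(log2(n)).
def total_ops(n):
    ops = 0
    i = 0
    while i < n:
        k = 2
        while k < n:
            k = k * k
            ops += 1
        i += 1
    return ops

for n in (2**8, 2**16, 2**20):
    print(n, total_ops(n), round(n * math.log2(math.log2(n))))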
