Time complexity of this simple code - algorithm

In pseudo-code:
j = 5;
while (j <= n) {
    j = j * j * j * j;
}
What is the time complexity of this code?
It is clearly much faster than O(log n); is there even any reason to go lower than that?

Let's trace through the execution of the code. Suppose we start with initial value j0:
0. j ← j0
1. j ← j0^4
2. j ← [j0^4]^4 = j0^(4^2)
3. j ← [j0^(4^2)]^4 = j0^(4^3)
4. j ← [j0^(4^3)]^4 = j0^(4^4)
...
m. j ← [j0^(4^(m-1))]^4 = j0^(4^m)
... after m loops.
The loop terminates when the value exceeds n:
j0^(4^m) > n
→ m > log_4(log_{j0}(n))
Thus the time complexity is O(m) = O(log log n).
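A quick empirical check (my own sketch in Python, not part of the original answer) that the iteration count grows like log_4(log n):

import math

def count_iterations(n, j0=5):
    # Count how many times the body j = j**4 executes before j exceeds n.
    j, steps = j0, 0
    while j <= n:
        j = j ** 4
        steps += 1
    return steps

for exp in (10, 100, 1000, 10000):
    n = 10 ** exp
    predicted = math.log(math.log(n, 5), 4)   # log_4(log_j0 n) from the derivation above
    print(exp, count_iterations(n), round(predicted, 2))

The counts track the predicted values to within a unit, which is all the big-O statement requires.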

I used help from MathSE to figure out how to solve this. The answer is the same as the one by @meowgoesthedog, but I understand it the following way:
On every iteration, the value of j is raised to its own 4th power. Or, looking at it from the side of n, on every iteration the remaining problem size shrinks to its 4th root. Hence, the recurrence looks like:
T(n) = 1 + T(n^(1/4))
For any integer k, with 2^(4^k) + 1 <= n <= 2^(4^(k+1)), the recurrence becomes:
T(n) = 1 + k
if we go on to assume that the 4th root is always an integer. It won't matter if it is not, as the +/- 1 constant is ignored in the Big-O calculation.
Now, since the assumption of 4th root being an integer simplifies things for us, we can try to solve the following equation:
n = 2^(4^k),
with the equation yielding k = (Log(Log(n)) - Log(Log(2))) / Log(4).
This implies that T(n) = O(Log(Log(n))).
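For illustration, here is a direct unrolling of the recurrence T(n) = 1 + T(n^(1/4)) (my own sketch, using n > 2 as an arbitrary base case):

import math

def T(n):
    # Unroll T(n) = 1 + T(n ** 0.25) until the problem size reaches the base case.
    steps = 0
    while n > 2:
        n = n ** 0.25
        steps += 1
    return steps

for exp in (8, 64, 256, 1000):
    n = 2.0 ** exp
    print(exp, T(n), round(math.log(math.log(n, 2), 4), 2))   # compare with log_4(log_2 n)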

Is this algorithm O(log log n) complexity?

In particular, I'm interested in finding the Theta complexity. I can see the algorithm is bounded by log(n) but I'm not sure how to proceed considering the problem size decreases exponentially.
i = n
j = 2
while (i >= 1)
    i = i / j
    j = 2 * j
The simplest way to answer your question is to look at the algorithm through the eyes of the logarithm (in my case the binary logarithm):
log i_0 = log n
log j_0 = 1
k = 0
while (log i_k >= 0)   # as log increases monotonically
    log i_{k+1} = log i_k - log j_k
    log j_{k+1} = (log j_k) + 1
    k++
This way we see that log i decreases by log j = k + 1 during every step.
Now when will we exit the loop?
This happens once log i_k < 0, i.e. once the total amount subtracted, 1 + 2 + ... + k = k(k+1)/2, exceeds log n.
The maximum number of steps is thus the smallest integer k such that
k(k+1)/2 > log n
holds.
Asymptotically, this is equivalent to k ~ sqrt(2 log n), so your algorithm is in Θ(sqrt(log n)).
Let us denote by i(k) and j(k) the values of i and j at iteration k (so assume that i(1) = n and j(1) = 2). We can easily prove by induction that j(k) = 2^k and that
i(k) = n / 2^(1 + 2 + ... + (k-1)) = n / 2^(k(k-1)/2).
Knowing the above formula on i(k), you can compute an upper bound on the value of k that is needed in order to have i(k) <= 1, and you will obtain that the complexity is O(sqrt(log n)).
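Here is a small simulation (mine, not from either answer) that counts the loop's iterations and compares them with sqrt(2 * log2 n):

import math

def count_steps(n):
    # i starts at n, j starts at 2; each step divides i by j and doubles j.
    i, j, steps = float(n), 2, 0
    while i >= 1:
        i = i / j
        j = 2 * j
        steps += 1
    return steps

for exp in (10, 100, 300, 1000):
    n = 2.0 ** exp
    print(exp, count_steps(n), round(math.sqrt(2 * exp), 2))   # exp = log2 n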

Write an algorithm to efficiently find all i and j for any given N such that N=i^j

I am looking for an efficient algorithm for the following problem: for any N, find all i and j such that N = i^j.
I can solve it in O(N^2) as follows:
for i = 1 to N
{
    for j = 1 to N
    {
        if (Power(i, j) == N)
            print(i, j)
    }
}
I am looking for a better algorithm (or a program in any language) if possible.
Given that i^j=N, you can solve the equation for j by taking the log of both sides:
j log(i) = log(N) or j = log(N) / log(i). So the algorithm becomes
for i = 2 to N
{
    j = log(N) / log(i)
    if (Power(i, j) == N)
        print(i, j)
}
Note that due to rounding errors with floating point calculations, you might want to check j-1 and j+1 as well, but even so, this is an O(n) solution.
Also, you need to skip i=1 since log(1) = 0 and that would result in a divide-by-zero error. In other words, N=1 needs to be treated as a special case. Or not allowed, since the solution for N=1 is i=1 and j=any value.
As M Oehm pointed out in the comments, an even better solution is to iterate over j, and compute i with pow(n,1.0/j). That reduces the time complexity to O(logN), since the maximum value for j is log2(N).
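A sketch of that idea in Python (my own; the neighbour check around the rounded root is an assumption to guard against the floating-point error mentioned above):

def power_pairs(N):
    # For each exponent j up to log2(N), guess the base i = N**(1/j) and verify exactly.
    pairs = []
    for j in range(2, max(2, N.bit_length()) + 1):
        guess = round(N ** (1.0 / j))
        for i in (guess - 1, guess, guess + 1):   # guard against rounding error
            if i >= 2 and i ** j == N:
                pairs.append((i, j))
    return pairs

print(power_pairs(64))    # [(8, 2), (4, 3), (2, 6)]
print(power_pairs(1000))  # [(10, 3)]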
Here is a method you can use.
Let's say you have to solve the equation:
a^b = n   // b and n are known
You can find a using binary search. If you reach a point where
x^b < n and (x+1)^b > n,
then no integer a exists such that a^b = n for that b.
If you apply this method for every b in the range 1..log(n), you get all possible pairs.
So the complexity of this method is O(log n * log n).
Follow these steps:
function ifPower(n, b)
    min = 1, max = n
    while (min <= max)
        mid = min + (max - min) / 2
        k = mid^b, l = (mid + 1)^b
        if (k == n)
            return mid
        if (l == n)
            return mid + 1
        if (k < n && l > n)
            return -1
        if (k > n)
            max = mid - 1
        else
            min = mid + 2   // 2, as we are already checking mid + 1
    return -1

function findAll(n)
    s = log2(n)
    for i in range 2 to s   // starting from 2 to ignore base cases, powers 0, 1... you can handle them if required
        p = ifPower(n, i)
        if (p != -1)
            print(p, i)
Here, in the algorithm, a^b means a raised to the power of b, not a XOR b (it's obvious, but just saying).
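A runnable Python version of the same binary-search idea (my own sketch, slightly simplified: the early check on mid + 1 is dropped, and exact integer arithmetic is used throughout):

def if_power(n, b):
    # Binary search for an integer a with a**b == n; return -1 if none exists.
    lo, hi = 1, n
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        k = mid ** b
        if k == n:
            return mid
        if k < n:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def find_all(n):
    # Try every exponent b up to log2(n); one O(log n) binary search per exponent.
    for b in range(2, max(2, n.bit_length()) + 1):
        a = if_power(n, b)
        if a != -1:
            print(a, b)

find_all(4096)   # prints 64 2, 16 3, 8 4, 4 6, 2 12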

Time complexity analysis for a while loop with a cubed iterator

I am trying to analyze the time complexity of the following algorithm:
Algorithm WhileLoop2(n):
    x = 0
    j = 3
    while (j <= n)
        x = x + 1
        j = j * j * j
    return x
I think that while it's looping, j will be 3j, 9j, 27j,... but I'm not sure what complexity that would correlate to. I have heard that it is O(loglogn) but I am not sure how to check it or how to get to that point. Any help would be greatly appreciated.
The algorithm will iterate as long as the condition is fulfilled: j <= n. So it stops when j > n. So it will iterate with j =
3
27 (3^3)
19683 (27^3 = 3^(3^2))
...
so after i iterations j is equal to 3^(3^i). When the algorithm stops we have 3^(3^i) > n. Taking log base 3 of both sides we get 3^i > log_3(n). Taking log base 3 again we get i > log_3(log_3(n)). So indeed it's O(log log n).
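A quick check of that count (my own sketch): simulate the loop and compare the number of iterations with log_3(log_3 n).

import math

def while_loop_2(n):
    # Count iterations of: j = 3; while j <= n: j = j * j * j
    x, j = 0, 3
    while j <= n:
        x += 1
        j = j * j * j
    return x

for exp in (2, 9, 27, 81):
    n = 3 ** exp
    print(exp, while_loop_2(n), round(math.log(math.log(n, 3), 3), 2))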

Running time T(n) of algorithm

I have analyzed the running time of the following algorithm. I found the Θ bound, but could its running time also be expressed in Big O?
                                 Cost   Time
1. for i ← 1 to n                c1     n
2.     do for j ← i to n         c2     n
3.         do k ← k + j          c3     n-1

T(n) = c1*n + c2*n + c3*(n-1)
     = n(c1 + c2) + n - 1
     = n + n - 1
Or T(n) = Θ(n)
So the running time is Θ(n)
Your loop will run n + (n-1) + ... + 1 = n(n+1)/2 times (well-known arithmetic progression formula), which can also be estimated as O(n^2), since big-O gives an upper-bound (majorizing) estimation.
1. for i ← 1 to n                c1     n
2.     do for j ← i to n         c2     n
3.         do k ← k + j          c3     1
T(n) = n * n * 1 = O(n^2)   (@Giulio Franco)
It's a nested loop that does a constant-time operation inside.
do k ← k + j is constant time because the operation k + j takes a fixed amount of time no matter what the inputs are.
loop(n)
    loop(n)
        constant time (1)
When it's a loop inside a loop, you multiply: n * n * 1.
loop(n)
loop(n)
These loops aren't nested, so this would be n + n:
O(n + n), which reduces to O(n).
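A tiny illustration (my own) of why nesting multiplies while putting loops one after another adds, counting the constant-time operations directly:

def nested(n):
    # Loop inside a loop: the body runs n * n times.
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1          # the constant-time operation
    return ops

def sequential(n):
    # Two separate loops: the bodies run n + n times in total.
    ops = 0
    for i in range(n):
        ops += 1
    for j in range(n):
        ops += 1
    return ops

print(nested(100), sequential(100))   # 10000 200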

O(n log log n) time complexity

I have a short program here:
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k = k * k;
    }
    i++;
}
The asymptotic running time of this is O(n log log n). Why is this the case? I get that the entire program will run at least n times. But I'm not sure how to find the log log n part. The inner loop depends on k * k, so it's obviously going to run fewer than n times. It would just be n log n if k were multiplied by 2 each time. But how would you figure out the answer to be log log n?
For a mathematical proof, the inner loop can be written as the recurrence
T(n) = T(sqrt(n)) + 1.
W.l.o.g. assume 2^(2^(t-1)) <= n <= 2^(2^t). We know that
2^(2^t) = 2^(2^(t-1)) * 2^(2^(t-1)), i.e. sqrt(2^(2^t)) = 2^(2^(t-1)), so
T(2^(2^t)) = T(2^(2^(t-1))) + 1 = T(2^(2^(t-2))) + 2 = ... = T(2^(2^0)) + t.
Since t = log log 2^(2^t), this gives
T(2^(2^(t-1))) <= T(n) <= T(2^(2^t)) = T(2^(2^0)) + log log 2^(2^t) = O(1) + log log n,
so O(1) + log log n - 1 <= T(n) <= O(1) + log log n, hence T(n) = Theta(log log n),
and then the total time is O(n log log n).
Why is the inner loop T(n) = T(sqrt(n)) + 1?
First see when the inner loop breaks: when k > n. That means one step earlier k was at least sqrt(n), or two steps earlier it was at most sqrt(n), so the running time satisfies T(sqrt(n)) + 2 ≥ T(n) ≥ T(sqrt(n)) + 1.
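To see the recurrence in action, here is a small sketch (mine, not from the answer) that counts inner-loop iterations and compares them with log2(log2 n):

import math

def inner_iterations(n):
    # Count iterations of: k = 2; while k < n: k = k * k
    k, count = 2, 0
    while k < n:
        k = k * k
        count += 1
    return count

for exp in (4, 16, 64, 1024):
    n = 2 ** exp
    print(exp, inner_iterations(n), round(math.log2(math.log2(n)), 2))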
The time complexity of a loop is O(log log n) if the loop variable is reduced/increased exponentially (e.g. squared) on each iteration. If the loop variable is divided/multiplied by a constant amount on each iteration, then the complexity is O(log n).
E.g., in your case the value of k is as follows. Let the number in parentheses denote the number of times the loop has executed:
2 (0), 2^2 (1), 2^4 (2), 2^8 (3), 2^16 (4), 2^32 (5), 2^64 (6) ... until n is reached.
The number of iterations here is O(log log n).
For the sake of illustration, let's assume that n is 2^64. Now log(2^64) = 64 and log 64 = log(2^6) = 6. Hence the inner loop runs 6 times when n is 2^64.
I think if the code were like this, it would be O(n log n):
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k *= c;  // c is a constant bigger than 1 and less than k
    }
    i++;
}
Okay, so let's break this down first.
Given any n:
i = 0;
while (i < n) {
    k = 2;
    while (k < n) {
        sum += a[j] * b[k];
        k = k * k;
    }
    i++;
}
The condition while (i < n) will be evaluated n + 1 times, but we'll round that off to n iterations.
Now here comes the fun part: the inner loop k < n will not run n times; instead it will run log log n times, because instead of incrementing k by 1 we square it on each pass. This means the inner loop takes only log log n time. You'll understand this better when you study design and analysis of algorithms.
Now we combine the two and get O(n log log n) time. I hope you get it now.
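Putting the two factors together, here is a small sketch (my own) that counts the total work and compares it against n * log2(log2 n):

import math

def total_work(n):
    # Count every execution of the inner-loop body across all n outer iterations.
    work, i = 0, 0
    while i < n:
        k = 2
        while k < n:
            work += 1     # stands in for sum += a[j] * b[k]
            k = k * k
        i += 1
    return work

for exp in (4, 8, 16):
    n = 2 ** exp
    print(n, total_work(n), n * int(math.log2(math.log2(n))))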
