Running time T(n) of algorithm

I have analyzed the running time of the following algorithm. I worked out its Θ bound, but could its running time also be given in big-O?
                              Cost    Times
1. for i ← 1 to n             c1      n
2.   do for j ← i to n        c2      n
3.     do k ← k + j           c3      n - 1

T(n) = c1*n + c2*n + c3*(n - 1)
     = n(c1 + c2) + c3(n - 1)
     = Θ(n)

So the running time is Θ(n)

Your loop will iterate as follows: the inner loop runs n, n - 1, ..., 1 times, so by the well-known arithmetic progression formula the body executes
n + (n - 1) + ... + 1 = n(n + 1)/2
times, which can also be estimated as O(n^2), since big-O gives an upper (majorizing) estimate.

                              Cost    Times
1. for i ← 1 to n             c1      n
2.   do for j ← i to n        c2      n
3.     do k ← k + j           c3      1

T(n) = n * n * 1 = O(n^2) (@Giulio Franco)
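To see why the corrected count gives a quadratic bound, here is a small Java sketch (my own, not from the answer) that counts exactly how often line 3 executes; the total is n(n + 1)/2, which is Θ(n^2):

public class TriangularCount {
    public static void main(String[] args) {
        int n = 1000;
        long count = 0;
        for (int i = 1; i <= n; i++)        // 1. for i ← 1 to n
            for (int j = i; j <= n; j++)    // 2.   do for j ← i to n
                count++;                    // 3.     stands in for k ← k + j
        System.out.println(count);                  // 500500
        System.out.println((long) n * (n + 1) / 2); // 500500 = n(n+1)/2
    }
}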
It's a nested loop that performs a constant-time operation inside. do k ← k + j is constant time because the operation takes a fixed amount of time no matter what inputs you give it.

loop (n)
  loop (n)
    constant time (1)

When it's a loop inside a loop, you multiply: n * n * 1 = O(n^2).
loop (n)
loop (n)

These loops aren't nested; this would be n + n, i.e. O(n + n), which reduces to O(n).
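To make the multiply-versus-add rule concrete, this Java sketch (mine, for illustration) counts the body executions of both shapes; the nested pair runs n * n times, the sequential pair only n + n times:

public class NestedVsSequential {
    public static void main(String[] args) {
        int n = 1000;

        long nested = 0;
        for (int i = 0; i < n; i++)      // loop (n)
            for (int j = 0; j < n; j++)  //   loop (n), nested: multiply
                nested++;

        long sequential = 0;
        for (int i = 0; i < n; i++)      // loop (n)
            sequential++;
        for (int j = 0; j < n; j++)      // loop (n), not nested: add
            sequential++;

        System.out.println(nested);      // 1000000 = n * n
        System.out.println(sequential);  // 2000 = n + n
    }
}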


Solving Running time Big Theta Notation

Consider finding the total running time, as a function of n, of these two loops:
(1)
q <- 1
while q <= n
    p <- 1
    while p <= q
        p <- p + 1
    q <- 2 * q

(2)
q, s <- 1, 1
while s < n
    for j <- 1 to s
        k <- 1
        while k <= j
            k <- 2 * k
    q <- q + 1
    s <- q * q
Going off of what I know, I believe:
(1) is Θ(n * lg(n)), where n represents the time for the inner loop and lg(n) the number of outer-loop iterations.
(2) is Θ(n * lg(n) * sqrt(n)), where n represents the time for the for-loop, sqrt(n) the outer loop, and lg(n) the inner while-loop.
I am not sure if this is correct. Any advice would be appreciated.
(1):
This is not the correct way to view it. In fact, the inner while-loop does not do "n cycles, lg n times"; it does q cycles, whatever that number may be on each iteration!
The correct way to analyze this is to say that the inner while-loop runs q times, and q takes the values 1, 2, 4, ..., n (and yes, the outer while-loop runs Θ(lg n) times).
Thus the whole running time is:
1 + 2 + 4 + ... + L, where L is the largest power of 2 not exceeding n (beware that if n is not a perfect power of 2, the sum stops at the largest power below n); either way, L = Θ(n).
Computing this gives a geometric progression with lg(n) terms:
1 + 2 + 4 + ... + L = 2L - 1 = Θ(n)
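A quick empirical check of that geometric-sum argument (a Java sketch of mine, not part of the original answer): instrument snippet (1) and count the inner-loop steps; the total stays below 2n.

public class CheckOne {
    public static void main(String[] args) {
        int n = 1 << 20;
        long steps = 0;
        int q = 1;
        while (q <= n) {
            int p = 1;
            while (p <= q) {   // inner loop: q steps this round
                p++;
                steps++;
            }
            q *= 2;
        }
        System.out.println(steps);  // 2097151 = 2n - 1, i.e. Θ(n)
    }
}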
(2):
Not a final solution, but a hint/kickstart:
Your analysis goes wrong in saying the for-loop runs ~n times; this loop simply runs s times, and s changes with each and every iteration of the outer loop. On iteration t we have s = t^2.
The analysis goes as follows:
The for-loop and its inner while-loop are correlated: j runs from 1 to s, and the while-loop runs lg(j) times; they are correlated because j changes on each iteration of the for-loop. But we need to keep in mind that s changes as well, so the bound of the for-loop takes the values s ∈ {1, 4, 9, ..., n}.
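Since the answer stops at a hint, here is an instrumented Java sketch (mine) that just counts the total inner-loop steps of snippet (2), so a candidate bound can be checked against real numbers before committing to it:

public class CheckTwo {
    public static void main(String[] args) {
        int n = 4096;
        long steps = 0;
        int q = 1, s = 1;
        while (s < n) {
            for (int j = 1; j <= s; j++) {
                int k = 1;
                while (k <= j) {   // runs about lg(j) times
                    k *= 2;
                    steps++;
                }
            }
            q++;
            s = q * q;             // s takes the values 1, 4, 9, ...
        }
        System.out.println(steps);
    }
}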

Confused about a nested loop having linear complexity (big-O = O(n)) when I worked it out to be logarithmic

Computing the complexity and big-O of an algorithm:
T(n) = 5n log n + 3 log n + 2 // logs to base 2; big-O = O(n log n)
for (int x = 0, i = 1; i <= N; i *= 2)
{
    for (int j = 1; j <= i; j++)
    {
        x++;
    }
}
The expected big-O was linear, whereas mine is logarithmic.
Your big-O analysis is not correct. While it is true that the outer loop executes log n times, the inner loop is linear in i at each iteration.
If you count the total number of iterations of the inner loop, you will see that the whole thing is linear:
The inner loop will do 1 + 2 + 4 + 8 + 16 + ... + (the last power of 2 <= N) iterations. This sum will be between N and 2*N, which makes the whole loop linear.
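You can confirm the "between N and 2*N" claim directly; this Java sketch (mine, just instrumenting the question's loop) prints the final value of x:

public class PowerSum {
    public static void main(String[] args) {
        int N = 1000;
        int x = 0;
        for (int i = 1; i <= N; i *= 2)     // outer loop: about log N rounds
            for (int j = 1; j <= i; j++)    // inner loop: i iterations
                x++;
        System.out.println(x);  // 1023 = 1 + 2 + ... + 512, between N and 2*N
    }
}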
Let me explain why your analysis is wrong.
It is clear that the inner loop will execute 1 + 2 + 4 + ... + 2^k times, where k is the biggest integer satisfying 2^k <= N. This implies that the upper bound for k is log2(N).
Without loss of generality we can take that upper bound and assume k = log2(N) is an integer; the complexity then equals 1 + 2 + 4 + ... + 2^(log2 N), which is a geometric series, so it is equal to 2^(log2(N) + 1) - 1 = 2N - 1.
Therefore in O notation it is O(N).
First, you should notice that your analysis is not logarithmic! N log N is not logarithmic.
Also, the time complexity is T(N) = sum_{j=0}^{log(N)} 2^j (as the value of i doubles each time). Hence, T(N) = 2^(log(N) + 1) - 1 = 2N - 1 = Θ(N).

Time complexity of this simple code

In pseudo-code:
j = 5;
while (j <= n) {
    j = j * j * j * j;
}
What is the time complexity of this code?
It seems way faster than O(log n); is there even anything lower than that?
Let's trace through the execution of the code. Suppose we start with initial value j0:
0. j ← j0
1. j ← j0^4
2. j ← [j0^4]^4 = j0^(4^2)
3. j ← [j0^(4^2)]^4 = j0^(4^3)
4. j ← [j0^(4^3)]^4 = j0^(4^4)
...
m. j ← [j0^(4^(m-1))]^4 = j0^(4^m)
... after m loops.
The loop terminates when the value exceeds n:
j0^(4^m) > n
→ m > log4(log_j0(n))
Thus the time complexity is O(m) = O(log log n).
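As a sanity check on the O(log log n) bound, here is a Java sketch of mine that runs the loop with BigInteger (to avoid overflow, since j explodes very fast) and counts iterations for a huge n:

import java.math.BigInteger;

public class LogLogCheck {
    public static void main(String[] args) {
        BigInteger n = BigInteger.TEN.pow(100);  // n = 10^100
        BigInteger j = BigInteger.valueOf(5);
        int m = 0;
        while (j.compareTo(n) <= 0) {  // while (j <= n)
            j = j.pow(4);              // j = j * j * j * j
            m++;
        }
        // log4(log5(10^100)) ≈ 3.6, and the loop runs m = 4 times
        System.out.println(m);
    }
}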
I used help from MathSE to find out how to solve this. The answer is the same as the one by @meowgoesthedog, but I understand it the following way:
On every iteration, the value of j is raised to its own 4th power. Or, looking at it from the side of n: on every iteration, n is effectively reduced to its 4th root. Hence the recurrence looks like:
T(n) = 1 + T(n^(1/4))
For any integer k with 2^(4^k) < n <= 2^(4^(k+1)), the recurrence becomes
T(n) = 1 + k
if we go on to assume that the 4th root is always an integer. It won't matter if it is not, as a constant of +/- 1 is ignored in the big-O calculation.
Now, since the assumption that the 4th root is an integer simplifies things for us, we can solve the equation
n = 2^(4^k),
which yields k = (log(log(n)) - log(log(2))) / log(4).
This implies that T(n) = O(log(log(n))).

Big-O (O(n)) and Big-Omega (Ω(n)) time complexity of the Count(A, B, n) algorithm

What is the big-O (O(n)) and big-Omega (Ω(n)) time complexity of the Count(A, B, n) algorithm below, in terms of n?
Algorithm Count (A, B, n)
I did the following:
Algorithm Count(A, B, n)
Input: Arrays A and B of size n, where n is even; A and B store integers.

                                        # of operations
i <- 0                                  1
sum <- 0                                1
while i < n/2 do                        n/2 + 1
    if A[i + n/2] < 0 then              n/2 + 1
        for j <- i + n/2 to n do        n/2 + (n/2 + 1) + ... + n = n(n+1)/2
            sum <- sum + B[j]           n/2 + ... + n = n(n+1)/2
    i <- i + 1                          n/2 + 1
return sum                              1
The algorithm Count runs in big-O and big-Omega of O(n^2) and Ω(n^2). The algorithm involves a nested for-loop, and the maximum number of primitive operations it executes is 0.5n^2 + n + 1, in both the worst case and the best case.
I think the worst case is O(n^2), while the best case is Ω(1). The best case is when you have an array of size 2 and A[1] >= 0, in which case the algorithm only has to pass through the loop once. If n can be 0 (i.e. empty arrays), that's even better.
To illustrate the best case (n = 2, assuming 0 is not acceptable, and A[1] >= 0), let's assume that an assignment operation takes constant time C1.
i <- 0
sum <- 0
takes constant time 2*C1.
while 0 < 2/2 do
takes constant time C2.
if A[0 + 2/2] < 0 then //evaluates to false when A[1] >= 0
takes constant time C3, and the nested for loop will never get executed.
i <- i + 1
takes constant time C4. The loop then re-checks its condition:
while 1 < 2/2 do
which takes C2. Then the algorithm returns:
return sum
which takes C5.
So the operation in the best case scenario where n = 2 and A[1] >= 0 is:
2*C1 + 2*C2 + C3 + C4 + C5 = O(1)
Now, you can argue that it should have been:
2*C1 + (n/2 + 1)*C2 + C3 + C4 + C5 = O(n)
but we already know that at best n = 2, which is constant.
In the case where n = 0 is allowed, the best scenario is:
2*C1 + C2 + C5 = O(1)
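For anyone who wants to measure rather than count: here is a direct Java transcription of Count (mine; I read "for j <- i + n/2 to n" as j < n to stay within the array bounds) that tallies how often the innermost statement runs in the worst case (all of A's second half negative) versus the best case (none negative):

public class CountOps {
    static long innerSteps;

    static int count(int[] A, int[] B, int n) {
        int i = 0, sum = 0;
        while (i < n / 2) {
            if (A[i + n / 2] < 0) {
                for (int j = i + n / 2; j < n; j++) {  // assumed exclusive upper bound
                    sum += B[j];
                    innerSteps++;
                }
            }
            i++;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] neg = new int[n], pos = new int[n], B = new int[n];
        java.util.Arrays.fill(neg, -1);  // worst case: every if-test succeeds

        innerSteps = 0;
        count(neg, B, n);
        System.out.println(innerSteps);  // 125250, about n^2/8: quadratic

        innerSteps = 0;
        count(pos, B, n);                // best case: A's second half is all >= 0
        System.out.println(innerSteps);  // 0: the for-loop never runs
    }
}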

3D Matrix traversal Big-O

My attempt at the big-O of each of these two algorithms:
1) Algorithm threeD(matrix, n)
   // a 3D matrix of size n x n x n
   layer ← 0
   while (layer < n)
       row ← 0
       while (row < layer)
           col ← 0
           while (col < row)
               print matrix[layer][row][col]
               col ← col + 1
           done
           row ← row + 1
       done
       layer ← layer * 2
   done
O((n^2) log(n)), because the two outer loops are each O(n) and the innermost one seems to be O(log n).
2) Algorithm Magic(n)
   // Integer, n > 0
   i ← 0
   while (i < n)
       j ← 0
       while (j < power(2, i))
           j ← j + 1
       done
       i ← i + 1
   done
O(n) for the outer loop, O(2^n) for the inner? = O(n * 2^n)?
1. Algorithm
First of all: this algorithm never terminates, because layer is initialized to zero. layer is only ever multiplied by 2, so it never gets bigger than zero, let alone bigger than n.
To make this work, you have to start with layer > 0.
So let's start with layer = 1.
The time complexity can then be written as T(n) = T(n/2) + n^2.
You can see this as follows: in the last round of the outer loop, layer is at most n, and the two inner loops then do on the order of n^2 steps. Before that, layer was only half as big, so all the earlier rounds together can be written as T(n/2).
The master theorem gives Θ(n^2).
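To check the Θ(n^2) result empirically, here is a Java sketch of mine running the corrected algorithm (layer starting at 1) and counting the innermost steps; doubling n roughly quadruples the count:

public class ThreeDCount {
    static long steps(int n) {
        long steps = 0;
        for (int layer = 1; layer < n; layer *= 2)
            for (int row = 0; row < layer; row++)
                for (int col = 0; col < row; col++)
                    steps++;  // stands in for the print statement
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(steps(1 << 10));  // ≈ c * n^2
        System.out.println(steps(1 << 11));  // ≈ 4x the previous value
    }
}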
2. Algorithm
You can just count the steps:
2^0 + 2^1 + 2^2 + ... + 2^(n-1) = sum_{i=0}^{n-1} 2^i = 2^n - 1
To see this simplification, just look at binary numbers: the sum of the steps corresponds to a binary number consisting only of 1's (like 1111 1111), and such a number equals 2^n - 1.
So the time complexity is Θ(2^n).
Note: neither of your big-O bounds is wrong; there are just tighter bounds.
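And the same kind of check for Magic(n), in a Java sketch of mine; the step count comes out to exactly 2^n - 1:

public class MagicCount {
    public static void main(String[] args) {
        int n = 20;
        long steps = 0;
        for (int i = 0; i < n; i++)
            for (long j = 0; j < (1L << i); j++)  // power(2, i) iterations
                steps++;
        System.out.println(steps);            // 1048575
        System.out.println((1L << n) - 1);    // 2^n - 1 = 1048575
    }
}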
