Big-Oh and Theta notation of a specific function's running time / performance

If evaluating f(n) is Theta(n):
i = 1;
sum = 0;
while (i <= n) do
    if (f(i) > k) then
        sum += f(i);
    i = 2*i;
Would the running time of this be O(n^3), because of the n times the function might possibly be called, or would it be O(n)? Or should the answer be stated in terms of Theta, since that's the information we know? I am very lost on this...

The variable i doubles each time through the loop, so it reaches n after about log2(n) iterations.
The evaluation of f is therefore done log2(n) times; if each evaluation were simply bounded by O(n), that would give O(n log n).
In fact, since computing f(i) has complexity Theta(i), the total time is:
1 + 2 + 4 + ... + 2^(log2(n)) = 2n - 1 (a geometric series with about log2(n) terms) => O(n)
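As a quick sanity check (my own sketch, not part of the answer above), here is a small Python snippet that counts the work, charging each call f(i) roughly i steps as the question's Theta assumption suggests; the number of calls grows like log2(n) and the total cost stays below 2n:
def doubling_loop_cost(n):
    """Simulate the loop, charging i 'steps' for each evaluation of f(i)."""
    i, calls, steps = 1, 0, 0
    while i <= n:
        calls += 1
        steps += i        # stand-in for the Theta(i) cost of evaluating f(i)
        i *= 2
    return calls, steps

for n in (10, 1000, 10**6):
    calls, steps = doubling_loop_cost(n)
    print(n, calls, steps, steps / n)   # calls ~ log2(n) + 1, steps < 2n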


Time Complexity in asymptotic analysis log n and log (n+m)

Just some interesting discussion inspired by a conversation in my class.
There are two algorithms: one has time complexity log n and the other log(n + m).
Am I correct to argue that for average cases the log(n + m) one will perform faster, while asymptotically they make no difference in running time? Since taking the limit of the ratio of the two (f1'/f2') results in a constant, they have the same order of growth.
Thanks!
As far as I can see from the question, n and m are independent variables. So a statement such as
O(log(m + n)) = O(log(n))
would have to hold for any m, which it does not: the counterexample is
m = exp(n)
O(log(m + n)) = O(log(n + exp(n))) = O(log(exp(n))) = O(n) > O(log(n))
That's why, in the general case, we can only say that
O(log(m + n)) >= O(log(n))
The interesting question is when O(log(m + n)) = O(log(n)) does hold. If m grows no faster than some polynomial in n of degree k, i.e. O(m) <= O(P(n)), then
O(log(m + n)) = O(log(P(n) + n)) = O(log(P(n))) = k * O(log(n)) = O(log(n))
In the case of (multi)graphs we seldom have that many edges, i.e. m growing faster than P(n): even the complete graph Kn contains only m = n * (n - 1) / 2 = P(n) edges, which is why
O(log(m + n)) = O(log(n))
holds for an ordinary graph (no parallel/multiple edges, no loops).
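A small numeric illustration of the same point (my own sketch in Python; m = n**3 and m = exp(n) are just example choices):
import math

# If m is bounded by a polynomial in n (here m = n**3), then log(m + n) / log(n)
# approaches a constant (the degree), so log(m + n) = Theta(log n).
# If m = exp(n), then log(m + n) is essentially n, and n / log(n) keeps growing.
for n in (10, 100, 1000, 10000):
    poly_ratio = math.log(n ** 3 + n) / math.log(n)
    exp_ratio = n / math.log(n)
    print(n, round(poly_ratio, 3), round(exp_ratio, 1))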

proving the big-oh etc for an algorithm

I'm learning how to prove/disprove big-Oh, big-Omega, and little-oh, and I have the following algorithm f(n). However, I'm unsure how to analyse this f(n), as it has an if statement, which I've never come across before. How can I prove, for example, that this f(n) is O(n^2)?
f(n) =
    if n is even
        4 * sum(n/2, n)
    else
        (2n-1) * sum(n-3, n)
where sum(j,k) is a ‘partial arithmetic sum’ of the integers from j up to k, that is
sum(j,k) =
    if j > k
        0
    else
        j + (j+1) + (j+2) + ... + k
e.g. sum(3,4) = 3 + 4 = 7, etc.
Note that sum(j,k) = sum(1,k) - sum(1,j-1).
OK, got it, no worries. I'll try to help you understand this.
Big O notation is used to define an upper bound on how much time a program will take in terms of its input size.
Let's try to see how much time each statement will take in this function
f(n) {
    if n is even                 // O(1) .....#1
        4 * sum(n/2, n)          // O(n) .....#2
    else                         // O(1) .....#3
        (2n-1) * sum(n-3, n)     // O(n) .....#4
}
if n is even
This can be done by a check like if ((n % 2) == 0). As you can see, this is a constant-time operation: no loop, just one computation.
The sum(j, k) function is computed by iterating from j to k whenever j <= k, so it runs (k - j + 1) times, which is linear time. (For sum(n/2, n) that is about n/2 iterations, i.e. O(n); sum(n-3, n) adds only four terms, so it is really O(1), but O(n) is still a valid upper bound.)
So, the total complexity is the sum of the complexities inside whichever branch runs, the if block or the else block.
For an upper bound, one takes the worse of the two branches.
Complexity of the if block = #1 + #2 = O(1) + O(n) = O(n)
Similarly, for the else block = #3 + #4 = O(1) + O(n) = O(n)
Max of both = max(O(n), O(n)) = O(n)
Thus, the overall complexity = O(n). In particular f(n) is also O(n^2), since anything that is O(n) is automatically O(n^2).
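To make the counting concrete, here is a small Python sketch of the function as this answer reads it (the names f and partial_sum and the step counter are mine, not from the question):
def partial_sum(j, k, counter):
    """sum(j, k): add the integers from j up to k, counting loop iterations."""
    if j > k:
        return 0
    total = 0
    for v in range(j, k + 1):
        counter[0] += 1        # one unit of work per term added
        total += v
    return total

def f(n, counter):
    if n % 2 == 0:                                        # O(1) parity check
        return 4 * partial_sum(n // 2, n, counter)
    else:
        return (2 * n - 1) * partial_sum(n - 3, n, counter)

for n in (10, 11, 1000, 1001):
    counter = [0]
    f(n, counter)
    print(n, counter[0])   # about n/2 iterations when n is even, 4 when n is odd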

What is the time complexity of this pseudocode?

This is pseudocode. I tried to calculate the time complexity of this function as this answer said. It should be like:
n + n/3 + n/9 + ...
Maybe the time complexity is something like O(n log(n)), I guess? Or should the log(n) be log base 3 of n? Someone said the time complexity is O(n), which I find very hard to accept.
j = n
while j >= 1 {
    for i = 1 to j {
        x += 1
    }
    j /= 3
}
The algorithm will run in:
n + n/3 + n/9 + ... <= (3/2) * n = O(n)
since this geometric series sums to at most 3/2 times its first term, and 3/2 is a constant. Here the k-th pass of the outer loop runs the inner loop about n/3^k times.
Please notice the crucial difference from the linked question, where the outer loop runs n times and that is fixed.
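A quick empirical check (my own Python sketch, not from the original answer) that the total count indeed stays below 1.5 * n:
def count_inner_steps(n):
    """Count how many times x += 1 executes in the j /= 3 loop."""
    j, steps = n, 0
    while j >= 1:
        steps += j          # the inner loop increments x exactly j times
        j //= 3             # integer division stands in for j /= 3
    return steps

for n in (9, 100, 10**6):
    s = count_inner_steps(n)
    print(n, s, s / n)      # the ratio stays below 1.5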

Calculating Theta(n) of an algorithm

I am trying to calculate Theta(n) of the following algorithm
for i = 1 -> n
    for j = 1 -> n
        B[i,j] = findMax(A, i, j)

findMax(A, i, j)
    if j < i
        return 0
    else
        max = A[i]
        for k = i + 1 -> j
            if max < A[k]
                max = A[k]
        return max
I know that O, theta, and omega roughly translate to
O ≈ ≤
Ω ≈ ≥
Θ ≈ =
For this algorithm I think Omega = n^2 and O = n^3, but I'm not sure what Theta would be. Any ideas?
If you calculate the number of times the code line
if max < A[k]
is executed as a function of n, you get Theta(n^3) executions. Thus the running time of your algorithm is Theta(n^3) as well.
Theta(n^3). There are n^2 iterations of the nested for loops. Approximately half of these iterations (those with j < i) run in O(1) time. In the other half, the difference j - i is about n/2 on average, so those iterations take Theta(n) time each on average. Since roughly n^2/2 iterations take about n/2 time each, they contribute n^2/2 * n/2 = n^3/4 = Theta(n^3) time. The remaining n^2/2 constant-time iterations contribute only Theta(n^2). Thus the total runtime is Theta(n^3).
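As a cross-check (my own Python sketch, not from either answer), counting the comparison directly matches a cubic closed form:
def count_comparisons(n):
    """Count how often 'if max < A[k]' runs across all findMax(A, i, j) calls."""
    count = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if j >= i:
                count += j - i    # the k loop runs from i+1 to j
            # when j < i, findMax returns 0 without reaching the comparison
    return count

for n in (2, 10, 50):
    # (n**3 - n) // 6 is the exact count (my own arithmetic), clearly Theta(n^3)
    print(n, count_comparisons(n), (n**3 - n) // 6)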

Big O of this code

I am doing the exercise of Skiena's book on algorithms and I am stuck in this question:
I need to calculate the big O of the following algorithm:
function mystery()
    r = 0
    for i = 1 to n-1 do
        for j = i+1 to n do
            for k = 1 to j do
                r = r + 1
Here, the big O of the outermost loop will be O(n-1) and the middle loop will be O(n!). Please tell me if I am wrong here.
I am not able to calculate the big O of the innermost loop.
Can anyone please help me with this?
Here's a more rigorous way to approach solving this problem:
Define the run-time of the algorithm to be f(n) since n is our only input. The outer loop tells us this
f(n) = Sum(g(i,n), 1, n-1)
where Sum(expression, min, max) is the sum of expression from i = min to i = max. Notice that the expression in this case is an evaluation of g(i, n) with a fixed i (the summation index) and n (the input to f(n)). We can peel off another layer and define g(i, n):
g(i, n) = Sum(h(j), i+1, n), where i < n
which is the sum of h(j) where j ranges from i+1 to n. Finally, we can just define
h(j) = Sum(O(1), 1, j)
since we've assumed that r = r+1 takes time O(1).
Notice at this point that we haven't done any hand-waving, saying stuff like "oh you can just multiply the loops together. The 'innermost operation' is the only one that counts." That statement isn't even true for all algorithms. Here's an example:
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        Solve_An_NP_Complete_Problem(n);
        for (k = 0; k < n; k++)
        {
            count++;
        }
    }
}
The above algorithm isn't O(n^3)... it's not even polynomial.
Anyways, now that I've established the superiority of a rigorous evaluation (:D) we need to work our way upwards so we can figure out what the upper bound is for f(n). First, it's easy to see that h(j) is O(j) (just use the definition of Big-Oh). From that we can now rewrite g(i, n) as:
g(i, n) = Sum(O(j), i+1, n)
=> g(i, n) = O(i+1) + O(i+2) + ... + O(n-1) + O(n)
=> g(i, n) = O((n(n+1) - i(i+1)) / 2)   (because we can sum Big-Oh functions in
                                         this way, using lemmas based on the
                                         definition of Big-Oh)
=> g(i, n) = O(n^2)                     (because g(i, n) is only defined for i < n;
                                         also note that before this step we had a
                                         Theta bound, which is stronger!)
And so we can rewrite f(n) as:
f(n) = Sum(O(n^2), 1, n-1)
=> f(n) = (n-1)*O(n^2)
=> f(n) = O(n^3)
You might consider proving the lower bound to show that f(n) = Theta(n^3). The trick there is not simplifying g(i, n) to O(n^2), but keeping the tight bound when computing f(n). It requires some ugly algebra, but I'm pretty sure (i.e. I haven't actually done it) that you will be able to prove f(n) = Omega(n^3) as well (or just Theta(n^3) directly if you're really meticulous).
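If you want a quick numeric sanity check before attempting that proof, this Python sketch (mine, not the answerer's) counts r directly; the exact value works out to (n^3 - n)/3, which is consistent with Theta(n^3):
def mystery(n):
    """Count r exactly, following the three nested loops from the question."""
    r = 0
    for i in range(1, n):               # i = 1 .. n-1
        for j in range(i + 1, n + 1):   # j = i+1 .. n
            for k in range(1, j + 1):   # k = 1 .. j
                r += 1
    return r

for n in (2, 3, 10, 30):
    print(n, mystery(n), (n**3 - n) // 3)   # the two columns match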
The analysis takes place as follows:
the outer loop runs over i = [1, n - 1], so the program would have linear complexity, O(n), if only constant-time operations were performed inside this loop;
however, inside the first loop, for each i, some operation is executed up to n times -- the middle loop runs over j = [i + 1, n]. This gives a total complexity of O(i * j) = O(n * n) = O(n^2);
finally, the innermost loop is executed, for each i and each j, k times, with k ranging over k = [1, j]. Since j can be as large as n, the innermost loop is asymptotically bounded by n iterations, which gives you O(i * j * k) = O(n * n * n) = O(n^3).
The complexity of the program corresponds to the number of iterations of your innermost operations -- or, more precisely, to the sum of the complexities of the innermost operations. If you had:
for i = 1:n
    for j = 1:n
        --> innermost operation (executed n^2 times)
        for k = 1:n
            --> innermost operation (executed n^3 times)
        endfor
        for l = 1:n
            --> innermost operation (executed n^3 times)
        endfor
    endfor
endfor
The total complexity would be given as O(n^2) + O(n^3) + O(n^3), which is equal to max(O(n^2), O(n^3), O(n^3)), or O(n^3).
It cannot be factorial. For example, if you have two nested loops, the big O will be n^2, not n^n (and certainly not n!). So three loops, each running at most n times, cannot give more than n^3. Keep digging ;)

Resources