Big O Notation: The sum of a function body

If a function body invokes 3 different functions, all of order O(n), how do I calculate the order of the outer (containing) function? Yes, this is homework, and I've surprisingly failed to find a relevant example in the textbook or in the slides of our recent lectures.
private void bigFunction() {
    smallFunction1(); // O(n)
    smallFunction2(); // O(n)
    smallFunction3(); // O(n)
}   // Now what does this result in?
My initial thought is O(n), but I want to be certain.

Yes, that's correct. The cost of doing any constant number of O(n) operations is O(n).
Specifically, O(n) × O(1) = O(n).
Hope this helps!

3 x O(n) = O(n). Since we are looking for the time complexity, the biggest term dominates, and O(n) is the biggest term in this algorithm.

You need to keep in mind the definition of big-oh:
A function f(n) is said to be O(g(n)) if there are numbers K and T (which you can of course choose freely) such that for all n > T, f(n) < K * g(n).
Of particular importance is the fact that you are free to select any K that fits the bill, not just the smallest K that does. It is this property that leads to g(n) always being shown as not having any constant factors: the following two scenarios are completely equivalent:
f(n) = n, g(n) = 2n, K = 1
f(n) = n, g(n) = 4n, K = 1/2
Since you can give g any constant factor you like simply by selecting K appropriately, in practice we do not bother, and treat g as having no constant factor.
At this point it should be clear that O(g(n)) + O(g(n)) is still O(g(n)), because for the sum you can simply choose "double the usual" value for K and still have the same form for g(n). Therefore the sum of any constant number of O(n) functions is still O(n).

The best way of being really sure of something like this is to construct a proof based on the definition, quoted in @Jon's answer:
A function f(n) is said to be O(g(n)) if there are numbers
K and T such that for all n > T, f(n) < K * g(n).
Let f_1(n) be the time for smallFunction1(), f_2(n) for smallFunction2(), and f_3(n) for smallFunction3(), all in a size n problem.
Because f_1(n) is O(n), there exist K_1 and T_1 such that, for all n > T_1, f_1(n) < K_1 * n.
Similarly, there exist K_2, T_2, K_3, and T_3 such that, for all n > T_2, f_2(n) < K_2 * n, and for all n > T_3, f_3(n) < K_3 * n.
Let K equal K_1 + K_2 + K_3 and let T equal max(T_1, T_2, T_3). Then for all n > T, f_1(n) < K_1 * n, f_2(n) < K_2 * n, f_3(n) < K_3 * n.
The time to run the three functions consecutively, f_1(n) + f_2(n) + f_3(n), is less than K_1 * n + K_2 * n + K_3 * n = (K_1 + K_2 + K_3) * n = K * n, so the total time is O(n).
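To see the same argument play out concretely, here is a small self-contained sketch (my own, with hypothetical helpers named after the ones in the question) in which each helper does one linear pass and an operation counter shows the total staying at about 3n, i.e. O(n):

// Illustrative sketch: three helpers that each do O(n) work.
// Calling all three in sequence costs about 3n basic operations, which is O(n).
public class SumOfLinearWork {
    static long steps = 0; // counts basic operations

    static void smallFunction1(int n) { for (int i = 0; i < n; i++) steps++; } // O(n)
    static void smallFunction2(int n) { for (int i = 0; i < n; i++) steps++; } // O(n)
    static void smallFunction3(int n) { for (int i = 0; i < n; i++) steps++; } // O(n)

    static void bigFunction(int n) {
        smallFunction1(n); // O(n)
        smallFunction2(n); // O(n)
        smallFunction3(n); // O(n)
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            steps = 0;
            bigFunction(n);
            System.out.println("n = " + n + ", steps = " + steps + " (about 3n)");
        }
    }
}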

Related

Complexity of f(k) when f(n) = O(n!) and k=n*(n-1)

I have the following problem. Let's suppose we have a function f(n). The complexity of f(n) is O(n!). However, there is also a parameter k = n*(n-1). My question is: what is the complexity of f(k)? Is it f(k) = O(k!/k^2) or something like that, taking into consideration that there is a quadratic relation between k and n?
Computational complexity is interpreted based on the size of the input. Hence, if f(n) = O(n!) when your input is n, then f(k) = O(k!) when your input is k.
Therefore, you don't need to recompute the complexity for each particular value of the input to f. For example, since 10 = 5 * 2, you would not write the complexity of f(10) as O((5*2)!) and then try to simplify it in terms of 2!; you would simply say f(10) = O(10!).
Anyhow, if you do want to express it in terms of n: (n*(n-1))! = (n^2 - n)! = (n^2)! / ((n^2 - n + 1)(n^2 - n + 2)...(n^2)). The denominator is a product of n factors, each between n^2 - n + 1 and n^2, so it is Theta(n^(2n)), and therefore (n*(n-1))! = Theta((n^2)! / n^(2n)).
Did you consider that there is an m such that the n you used in your f(n) is equal to m * (m - 1)?
Does that change the complexity?
The n in f(n) = O(n!) represents all the valid inputs.
You are trying to pass a variable k whose actual value, in terms of another variable, is n * (n - 1). That does not change the complexity; it is still O(k!).
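As a short worked restatement of this point (my own, simply echoing the answers above): stating the bound in terms of k and then substituting k = n(n-1) merely re-expresses the same bound in terms of n; it does not change it.

\[
f(k) = O(k!), \qquad k = n(n-1) = n^2 - n
\quad\Longrightarrow\quad
f(k) = O\!\big((n^2 - n)!\big).
\]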

Runtime analysis clarification

Just for clarification: if you have an algorithm that calls 3 different functions, and each of these functions has a runtime of log n, is the runtime of the algorithm O(log n)? The definition of big O is that f(n) = O(g(n)) means there are positive constants c and k such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ k; the values of c and k must be fixed for the function f and must not depend on n. For this situation, could we take c to be 3, for the 3 functions, and g(n) to be log n?
It depends on how the algorithm calls these functions. If the algorithm looks like
function algorithm(input) {
    f(input');   // size of input'   = O(size of input)
    g(input'');  // size of input''  = O(size of input)
    h(input'''); // size of input''' = O(size of input)
}
then the running time is the sum of running times of the functions the algorithm calls. Thus if f, g, and h run in time O(log n) then the algorithm also runs in time O(log n).
Let's say that your function is f(n) and the 3 functions it calls are f_1(n), f_2(n) and f_3(n). Let also T(f(n)) be the running time of f(n).
If for any i, function f_i(n) has running time O(log(n)), then it means by the definition, that there exist c_i > 0 and n_i >= 0, such that for all n >= n_i, T(f_i(n)) <= c_i * log(n).
From the above fact, in order to prove that T(f(n)) is also O(log(n)), you just need to find constants n_0 >= 0 and c > 0 such that for all n >= n_0, T(f(n)) <= c * log(n).
It turns out that if you pick n_0 = max(n_1, n_2, n_3) and c = 3 * max(c_1, c_2, c_3), the condition is fulfilled, so indeed T(f(n)) = O(log(n)). This is sufficient because we know that the only thing f(n) does is call f_1(n), f_2(n), and f_3(n), and each of these functions is called exactly once.
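For a concrete illustration (my own sketch, not from the question), here the three called functions are each a binary search on a sorted array, so each call is O(log n) and the calling method is also O(log n):

import java.util.Arrays;

// Hypothetical example: three O(log n) calls in sequence are still O(log n).
public class ThreeLogCalls {
    static int algorithm(int[] sorted, int a, int b, int c) {
        int i = Arrays.binarySearch(sorted, a); // O(log n)
        int j = Arrays.binarySearch(sorted, b); // O(log n)
        int k = Arrays.binarySearch(sorted, c); // O(log n)
        return i + j + k;                       // total: 3 * O(log n) = O(log n)
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = 2 * i; // sorted input
        System.out.println(algorithm(data, 0, 500_000, 999_998));
    }
}

Any other O(log n) bodies for f_1, f_2, and f_3 would work the same way; binary search is just a familiar example.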

∑ Log(i) = big theta(f(n))?

Given S(n) = ∑ log(i), where the sum runs from i = 1 to n.
What is a simple function f(n) so that the sum S(n) is in big Theta of f(n)?
I am thinking of f(n) = log log n, because I believe it is within the boundaries of the first term of the summation, which is log 1 = 0, and the last term, which is log n.
Hence it would satisfy the definition of Big Theta.
Is this right? Otherwise, please help.
Draw a picture and convince yourself of the following.
The integral from 1 to N of log(x) is less than ∑ log(i), which in turn is less than the integral from 1 to N+1 of log(x).
Therefore N log(N) - N < ∑ log(i) < (N+1) log(N+1) - N.
Both bounds are Big Theta of N log(N), so S(N) = Θ(N log(N)) and you can take f(N) = N log(N).
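A quick numerical sanity check (my own sketch, using natural logarithms) shows S(N) sitting between those two bounds:

// Compare S(N) = sum of log(i), i = 1..N, with the bounds
// N*log(N) - N  <  S(N)  <  (N+1)*log(N+1) - N   (natural logarithms).
public class LogSumCheck {
    public static void main(String[] args) {
        for (int n = 10; n <= 100_000; n *= 10) {
            double sum = 0;
            for (int i = 1; i <= n; i++) sum += Math.log(i);
            double lower = n * Math.log(n) - n;
            double upper = (n + 1) * Math.log(n + 1) - n;
            System.out.printf("N=%d  lower=%.1f  S(N)=%.1f  upper=%.1f%n",
                              n, lower, sum, upper);
        }
    }
}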

Algorithm Analysis (Big O and Big Omega)

I got this question wrong on an exam : Name a function that is neither O(n) nor Omega(n).
After attempting to learn this stuff on my own through YouTube, I'm thinking this may be a correct answer:
(n^3 (1 + sin n)) is neither O(n) nor Omega(n).
Would that be accurate?
Name a function that is neither O(n) nor Omega(n)
Saying f ∈ O(g) means the quotient
f(x)/g(x)
is bounded from above for all sufficiently large x.
f ∈ Ω(g) on the other hand means the quotient
f(x)/g(x)
is bounded below away from zero for all sufficiently large x.
So to find a function that is neither O(n) nor Ω(n) means finding a function f such that the quotient
f(x)/x
becomes arbitrarily large, and arbitrarily close to zero on every interval [y, ∞).
I'm thinking this may be a correct answer: (n^3 (1 + sin n)) is neither O(n) nor Omega(n).
Let's plug it in our quotient:
(n^3*(1 + sin n))/n = n^2*(1 + sin n)
The n^2 grows to infinity, and the factor 1 + sin n is larger than 1 for roughly three out of every six values of n. So on every interval [y, ∞) the quotient becomes arbitrarily large: given an arbitrary K > 0, let N_0 = y + K + 1 and let N_1 be the smallest of N_0 + i, i = 0, 1, ..., 4, such that sin(N_0 + i) > 0. Then f(N_1)/N_1 > (y + K + 1)² > K² + K > K.
For the Ω(n) part (showing that the quotient also gets arbitrarily close to zero), it's not so easy to prove, although I believe it also holds.
But, we can modify the function a bit, retaining the idea of multiplying a growing function with an oscillating one in such a way that the proof becomes simple.
Instead of sin n, let us choose cos (π*n), and, to offset the zeros, add a fast decreasing function to it.
f'(n) = n^3*(1 + cos (π*n) + 1/n^4)
now,
f'(n) = n^3*(2 + 1/n^4)   if n is even
f'(n) = 1/n               if n is odd
and it is clear that f' is bounded neither from above nor from below by any positive constant multiple of n.
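To make the oscillation visible, here is a small sketch (my own) that tabulates f'(n)/n using the even/odd cases above; the quotient blows up on even n and shrinks toward zero on odd n:

// f'(n) = n^3 * (1 + cos(pi*n) + 1/n^4), evaluated via its even/odd cases:
//   even n: cos(pi*n) = +1, so f'(n) = n^3 * (2 + 1/n^4)
//   odd n:  cos(pi*n) = -1, so f'(n) = 1/n
public class OscillatingExample {
    static double fPrime(long n) {
        double n3 = (double) n * n * n;
        double n4 = n3 * n;
        return (n % 2 == 0) ? n3 * (2.0 + 1.0 / n4) : 1.0 / n;
    }

    public static void main(String[] args) {
        // f'(n)/n is huge for even n and tiny for odd n,
        // so f' is neither O(n) nor Omega(n).
        for (long n = 1; n <= 10; n++) {
            System.out.printf("n=%d  f'(n)/n = %.6g%n", n, fPrime(n) / n);
        }
    }
}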
I would consider something like a binary search, which is both O(log N) and Ω(log N). Since Ω is a lower bound, it is not allowed to exceed the function itself, so a function that is Θ(log N) is definitely not Ω(N).
I think some of the comments on the deleted answer deserve some...clarification -- perhaps even outright correction. To quote from CLRS, "Ω-notation gives a lower bound for a function to within a constant factor."
Since N^2 differs from N by more than a constant factor, N^2 is not Ω(N).

(log n)^k = O(n)? For k greater or equal to 1

(log n)^k = O(n)? For k greater or equal to 1.
My professor presented us with this statement in class; however, I am not sure what it means for a function to have a time complexity of O(n). Even for something like n^2 = O(n^2): how can a function f(x) have a run-time complexity?
As for the statement, how does it equal O(n) rather than O((log n)^k)?
(log n)^k = O(n)?
Yes. The definition of big-Oh is that a function f is in O(g(n)) if there exist positive constants N and c, such that for all n > N: f(n) <= c*g(n). In this case f(n) is (log n)^k and g(n) is n, so if we insert that into the definition we get: "there exist constants N and c, such that for all n > N: (log n)^k <= c*n". This is true so (log n)^k is in O(n).
how can a function f(x) have a run time complexity
It doesn't. Nothing about big-Oh notation is specific to run-time complexity. Big-Oh is a notation to classify the growth of functions. Often the functions we're talking about measure the run-time of certain algorithms, but we can use big-Oh to talk about arbitrary functions.
f(x) = O(g(x)) means f(x) grows slower or comparably to g(x).
Technically this is interpreted as "We can find an x value, x_0, and a scale factor, M, such that the size of f(x) past x_0 is less than the scaled size of g(x)." Or in math:
|f(x)| < M |g(x)| for all x > x_0.
So for your question:
log(x)^k = O(x)? is asking : is there an x_0 and M such that
log(x)^k < M x for all x>x_0.
The existence of such an M and x_0 can be established using various limit results, and is relatively simple using L'Hopital's rule; however, it can also be done without calculus.
The simplest proof I can come up with that doesn't rely on L'Hopitals rule uses the Taylor series
e^z = 1 + z + z^2/2 + ... = sum z^m / m!
Using z = (N! x)^(1/N) we can see that
e^((N! x)^(1/N)) = 1 + (N! x)^(1/N) + (N! x)^(2/N)/2 + ... + (N! x)^(N/N)/N! + ...
For x > 0 all terms are positive, so isolating the Nth term we get
e^((N! x)^(1/N)) = N! x / N! + (...)
                 = x + (...)
                 > x for x > 0
Taking logarithms of both sides (since log is monotonically increasing), and then raising to the Nth power (also monotonically increasing for non-negative arguments, since N > 0):
(N! x)^(1/N) > log x for x > 0
N! x > (log x)^N for x >= 1
This is exactly the result we need: (log x)^N < M x for some M and all x > x_0, with M = N! and x_0 = 1.
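As an informal numerical check (my own sketch, with k = 3), the quotient (log n)^k / n shrinks as n grows, which is consistent with (log n)^k = O(n):

// (log n)^k / n tends toward 0 as n grows (shown here for k = 3),
// consistent with (log n)^k = O(n).
public class PolyLogVsLinear {
    public static void main(String[] args) {
        int k = 3;
        for (long n = 10; n <= 1_000_000_000L; n *= 10) {
            double ratio = Math.pow(Math.log(n), k) / n;
            System.out.printf("n=%d  (log n)^%d / n = %.6g%n", n, k, ratio);
        }
    }
}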
