Given S(n) = ∑ log(i), where the sum runs from i = 1 to n:
What is a simple function f(n) so that the sum S(n) is in big Theta of f(n)?
I am thinking of f(n) = log log n, because I believe it lies between the first term of the summation, which is log 1 = 0, and the last term, which is log n.
Hence it would satisfy the definition of big Theta.
Is this right? If not, please help.
Draw a picture and convince yourself of the following.
The integral of log(x) from 1 to N is less than ∑ log(i), which in turn is less than the integral of log(x) from 1 to N+1.
Therefore N log(N) - N + 1 < ∑ log(i) < (N+1) log(N+1) - N.
Both bounds are big Theta of N log(N), so S(N) = Θ(N log(N)).
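If you want to sanity-check those bounds numerically, here is a small Java sketch (my own, not part of the answer; the class name LogSumBounds is arbitrary) that prints the sum against both integral bounds and the ratio S(n)/(n log n), which should approach 1:

public class LogSumBounds {
    public static void main(String[] args) {
        for (int n = 2; n <= 200_000; n *= 10) {
            double sum = 0.0;
            for (int i = 1; i <= n; i++) {
                sum += Math.log(i);                        // S(n) = log(1) + log(2) + ... + log(n)
            }
            double lower = n * Math.log(n) - n + 1;        // integral of log(x) from 1 to n
            double upper = (n + 1) * Math.log(n + 1) - n;  // integral of log(x) from 1 to n+1
            System.out.printf("n=%-7d lower=%.1f S(n)=%.1f upper=%.1f S(n)/(n log n)=%.4f%n",
                    n, lower, sum, upper, sum / (n * Math.log(n)));
        }
    }
}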
The problem is that I need to know if log(n-f(n)) is big theta of log(n), where f(n) is a lower order function than n, e.g., log(n) or sqrt(n).
I tried to use some log rules, and plotting seems to confirm the bound, but I can't prove it exactly.
As f(n) is a lower-order function than n, f(n) = o(n). Hence n - f(n) <= n, so n - f(n) = O(n). Also, because f(n) = o(n), for large enough n we have f(n) < 0.01 n (the constant 0.01 is arbitrary), so n - f(n) > 0.99 n. Therefore n - f(n) = Omega(n), and so n - f(n) = Theta(n).
Since n - f(n) = Theta(n) and log is an increasing function, for large n we have log(0.99 n) <= log(n - f(n)) <= log(n); and log(c n) = log(n) + log(c) = Theta(log(n)), so log(n - f(n)) = Theta(log(n)).
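As a quick numerical illustration (my own sketch, with an arbitrary class name, not something from the answer above): the ratio log(n - f(n)) / log(n) should tend to 1 for f(n) = sqrt(n) and f(n) = log(n).

public class LogOfNMinusLittleO {
    public static void main(String[] args) {
        for (long n = 100; n <= 10_000_000_000L; n *= 10) {
            double withSqrt = Math.log(n - Math.sqrt(n)) / Math.log(n);  // f(n) = sqrt(n)
            double withLog  = Math.log(n - Math.log(n)) / Math.log(n);   // f(n) = log(n)
            System.out.printf("n=%-12d ratio with sqrt(n)=%.6f ratio with log(n)=%.6f%n",
                    n, withSqrt, withLog);
        }
    }
}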
log(n^2) is equivalent to 2 log n, which grows at the same rate as log n, since I disregard factors and constants. But if I instead square the whole term, so that I end up with (log n)^2, is it also big Theta of log n?
No. If f is any unbounded function, then f(n)^2 is not O(f(n)).
This is because f(n)^2 = O(f(n)) would mean there are a c and an N such that n > N implies f(n)^2 <= c f(n). Dividing both sides by f(n) (assuming f(n) > 0) gives f(n) <= c, so f would be bounded.
log(n) is unbounded, so log(n)^2 is not O(log(n)).
log (n^2) = 2 log(n)
and, as you know, x^2 is not in Theta(x).
Think of it this way: let N = log(n), f1(N) = N^2, and f2(N) = N. Then, obviously,
N = o(N^2), so N is not Theta(N^2); i.e., log(n) = o((log(n))^2), so log(n) is not Theta((log(n))^2).
Also, lim_{n->inf} log(n) / (log(n))^2 = lim_{n->inf} 1 / log(n) = 0; by the limit definition of little-o (https://en.wikipedia.org/wiki/Big_O_notation), this implies log(n) = o((log(n))^2).
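If it helps, here is a tiny Java check of that limit argument (my own sketch; class name arbitrary): the ratio (log n)^2 / log n is just log n, which keeps growing, so no constant c can bound it.

public class LogSquaredVsLog {
    public static void main(String[] args) {
        for (long n = 10; n <= 1_000_000_000_000L; n *= 1000) {
            double logN  = Math.log(n);
            double ratio = (logN * logN) / logN;   // (log n)^2 / log n, i.e. just log n
            System.out.printf("n=%-14d (log n)^2 / log n = %.3f%n", n, ratio);
        }
    }
}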
If a function body invokes 3 different functions, all of the order O(n), how do I calculate the order of the outer (containing) function? Yes, this is homework, and I've surprisingly failed to find a relevant example in the textbook or in the slides from our recent lectures.
private void bigFunction() {
    smallFunction1(); // O(n)
    smallFunction2(); // O(n)
    smallFunction3(); // O(n)
} // Now what does this result in?
My initial thought is O(n), but I want to be certain.
Yes, that's correct. The cost of doing any constant number of O(n) operations is O(n).
Specifically, O(n) × O(1) = O(n).
Hope this helps!
3 × O(n) = O(n). Since we are looking for the overall time complexity, the largest term dominates, and O(n) is the largest complexity in this algorithm.
You need to keep in mind the definition of big-oh:
A function f(n) is said to be O(g(n)) if there are numbers K and T
(which you can of course choose freely) such that for all n > T, f(n)
< K * g(n).
Of particular importance is the fact that you are free to select any K that fits the bill, not just the smallest K that does. It is this property that leads to g(n) always being shown as not having any constant factors: the following two scenarios are completely equivalent:
f(n) = n, g(n) = 2n, K = 1
f(n) = n, g(n) = 4n, K = 1/2
Since you can make g have any constant factor you like simply by selecting K appropriately, in practice we do not bother and treat g as always having no constant factor.
At this point it should be clear that O(g(n)) + O(g(n)) is still O(g(n)), because for the sum you can simply choose "double the usual" value for K and still have the same form for g(n). Therefore the sum of any constant number of O(n) functions is still O(n).
The best way of being really sure of something like this is to construct a proof based on the definition, quoted in @Jon's answer:
A function f(n) is said to be O(g(n)) if there are numbers
K and T such that for all n > T, f(n) < K * g(n).
Let f_1(n) be the time for smallFunction1(), f_2(n) for smallFunction2(), and f_3(n) for smallFunction3(), all in a size n problem.
Because f_1(n) is O(n), there exist K_1 and T_1 such that, for all n > T_1, f_1(n) < K_1 * n.
Similarly, there exist K_2, T_2, K_3, and T_3 such that, for all n > T_2, f_2(n) < K_2 * n, and for all n > T_3, f_3(n) < K_3 * n.
Let K equal K_1 + K_2 + K_3 and let T equal max(T_1, T_2, T_3). Then for all n > T, f_1(n) < K_1 * n, f_2(n) < K_2 * n, f_3(n) < K_3 * n.
The time to run the three functions consecutively, f_1(n) + f_2(n) + f_3(n), is less than K_1 * n + K_2 * n + K_3 * n = (K_1 + K_2 + K_3) * n = K * n, so the total time is O(n).
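To make the proof concrete, here is a rough Java sketch (my own; the helper functions and the operation counter are hypothetical stand-ins, not the question's actual smallFunction1/2/3): each function does a linear number of "operations" with constants K_1 = 1, K_2 = 2, K_3 = 3, and the total stays below K * n with K = 6.

public class ConstantNumberOfLinearCalls {
    static long operations = 0;                 // global operation counter

    static void smallFunction1(int n) { for (int i = 0; i < n; i++) operations++; }      // ~1*n ops
    static void smallFunction2(int n) { for (int i = 0; i < 2 * n; i++) operations++; }  // ~2*n ops
    static void smallFunction3(int n) { for (int i = 0; i < 3 * n; i++) operations++; }  // ~3*n ops

    static void bigFunction(int n) {
        smallFunction1(n);
        smallFunction2(n);
        smallFunction3(n);
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            operations = 0;
            bigFunction(n);
            // K = K_1 + K_2 + K_3 = 6, so we expect operations <= 6 * n
            System.out.printf("n=%-8d operations=%-9d operations/n=%.2f%n",
                    n, operations, (double) operations / n);
        }
    }
}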
(log n)^k = O(n), for k greater than or equal to 1?
My professor presented us with this statement in class; however, I am not sure what it means for a function to have a time complexity of O(n). Even for something like n^2 = O(n^2), how can a function f(x) have a run-time complexity?
As for the statement, how is it O(n) rather than O((log n)^k)?
(log n)^k = O(n)?
Yes. The definition of big-Oh is that a function f is in O(g(n)) if there exist positive constants N and c such that for all n > N: f(n) <= c*g(n). In this case f(n) is (log n)^k and g(n) is n, so if we insert that into the definition we get: "there exist constants N and c such that for all n > N: (log n)^k <= c*n". This is true because log n grows more slowly than n^(1/k) for any fixed k >= 1, so (log n)^k grows more slowly than n; hence (log n)^k is in O(n).
how can a function f(x) have a run time complexity
It doesn't. Nothing about big-Oh notation is specific to run-time complexity. Big-Oh is a notation to classify the growth of functions. Often the functions we're talking about measure the run-time of certain algorithms, but we can use big-Oh to talk about arbitrary functions.
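As a concrete (non-proof) illustration of the first point, here is a short Java sketch (mine, with an arbitrary class name) showing that (log n)^k / n shrinks toward 0 for several fixed k, so a constant c satisfying (log n)^k <= c*n certainly exists once n is large enough:

public class PolyLogVsLinear {
    public static void main(String[] args) {
        int[] ks = {1, 2, 3, 5};
        for (long n = 10; n <= 1_000_000_000L; n *= 100) {
            StringBuilder row = new StringBuilder(String.format("n=%-11d", n));
            for (int k : ks) {
                double ratio = Math.pow(Math.log(n), k) / n;   // (log n)^k / n
                row.append(String.format("  k=%d: %.6f", k, ratio));
            }
            System.out.println(row);
        }
    }
}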
f(x) = O(g(x)) means f(x) grows more slowly than, or comparably to, g(x).
Technically this is interpreted as "we can find an x value, x_0, and a scale factor, M, such that the size of f(x) past x_0 is less than the scaled size of g(x)." Or in math:
|f(x)| < M |g(x)| for all x > x_0.
So for your question:
log(x)^k = O(x)? is asking: are there an x_0 and M such that
log(x)^k < M x for all x > x_0?
The existence of such an M and x_0 can be shown using various limit results, and it is relatively simple using L'Hôpital's rule; however, it can also be done without calculus.
The simplest proof I can come up with that doesn't rely on L'Hôpital's rule uses the Taylor series
e^z = 1 + z + z^2/2 + ... = sum z^m / m!
Using z = (N! x)^(1/N) we can see that
e^((N! x)^(1/N)) = 1 + (N! x)^(1/N) + (N! x)^(2/N)/2! + ... + (N! x)^(N/N)/N! + ...
For x > 0 all terms are positive, so keeping only the N-th term we get that
e^((N! x)^(1/N)) = N! x / N! + (...)
= x + (...)
> x for x > 0
Taking logarithms of both sides (since log is monotonically increasing) gives
(N! x)^(1/N) > log x for x > 0.
For x > 1 both sides are positive, so raising both sides to the N-th power (also monotonically increasing on positive values since N > 0) gives
N! x > (log x)^N for x > 1.
Which is exactly the result we need: (log x)^N < M x for some M and all x > x_0, with M = N! and x_0 = 1.
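If you want to check the constants from this derivation numerically, here is a small Java sketch (my own; class name arbitrary) that verifies (log x)^N < N! * x for N = 4 over a range of x > 1:

public class LogPowerBound {
    static double factorial(int n) {
        double f = 1;
        for (int i = 2; i <= n; i++) f *= i;
        return f;
    }

    public static void main(String[] args) {
        int N = 4;
        double M = factorial(N);                       // M = N! = 24
        for (double x = 2; x <= 1e12; x *= 100) {
            double lhs = Math.pow(Math.log(x), N);     // (log x)^N
            double rhs = M * x;                        // N! * x
            System.out.printf("x=%-14.0f (log x)^%d=%.2f  N!*x=%.2f  ok=%b%n",
                    x, N, lhs, rhs, lhs < rhs);
        }
    }
}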
I want to prove the following statement
2^(⌊lg n⌋+⌈lg n⌉)∕n ∈ Θ(n)
I know that to prove it, we have to find the constants c1>0, c2>0, and n0>0 such that
c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
In other words, we have to prove f(n) <= c2*g(n) and f(n) >= c1*g(n).
The problem is how to prove the lower bound, i.e., that 2^(⌊lg n⌋+⌈lg n⌉)/n >= c1*n.
Thank you
You can start by expanding the exponential: it equals n1*n2/n, where n1 = 2^⌊lg n⌋ and n2 = 2^⌈lg n⌉, so n1 <= n <= n2, 2*n1 > n, and 2*n > n2. The rest should be easy.
Here's a derivation for the upper bound:
2^(⌊lg n⌋+⌈lg n⌉)/n
<= 2^(2⌊lg n⌋+1)/n        (since ⌈lg n⌉ <= ⌊lg n⌋ + 1)
<= 2^(2 lg n + 1)/n
= 2^(2 lg n) 2^(1) / n
= 2 n^2 / n
= 2 n
= O(n)
So we know your function can be bounded above by 2*n. Now we do the lower bound:
2^(⌊lg n⌋+⌈lg n⌉)/n
>= 2^(2⌈lg n⌉ - 1) / n    (since ⌊lg n⌋ >= ⌈lg n⌉ - 1)
>= 2^(2 lg n - 1)/n
= 2^(2 lg n) 2^(-1) / n
= 1/2 n^2 / n
= 1/2 n
= Ω(n)
We now know that your function can be bounded below by n/2, so together with the upper bound it is Θ(n).
Checked on gnuplot; these bounds look good and tight. This is a purely algebraic solution using the definitions of the floor() and ceiling() functions.
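For anyone who prefers code to gnuplot, here is a small Java sketch (my own; class name arbitrary, using Long.numberOfLeadingZeros to compute the floor and ceiling of lg n) that checks n/2 <= 2^(⌊lg n⌋+⌈lg n⌉)/n <= 2n for a mix of values:

public class FloorCeilLgBounds {
    public static void main(String[] args) {
        for (long n = 1; n <= 1_000_000; n = n * 3 + 1) {   // a mix of powers of two and other values
            int floorLg = 63 - Long.numberOfLeadingZeros(n);                        // floor(lg n)
            int ceilLg  = (n == 1) ? 0 : 64 - Long.numberOfLeadingZeros(n - 1);     // ceil(lg n)
            double value = Math.pow(2, floorLg + ceilLg) / n;                       // 2^(floor+ceil) / n
            boolean ok = n / 2.0 <= value && value <= 2.0 * n;
            System.out.printf("n=%-8d floor(lg n)=%-2d ceil(lg n)=%-2d value=%-12.1f within [n/2, 2n]: %b%n",
                    n, floorLg, ceilLg, value, ok);
        }
    }
}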