What is the exact runtime in Big-O of those functions - algorithm

I wanted to ask if someone could answer these questions and also check my solutions. I don't really get the Big-O of these functions.
e(n)= n! + 2^n
f(n)= log10(n) * n/2 + n
g(n) = n^2 + n * log(n)
h(n) = 2^30n * 4^log2(n)
I thought that:
e(n): n!, because n! is the fastest-rising term.
f(n): n log n, but I don't really know why.
g(n): n^2
h(n): n
I would appreciate any answer. Thanks.

Big O notation allows us to evaluate the growth of a function with respect to some quantity as it tends towards infinity.
The Big O notation of a function is that of its fastest-growing constituent; Big O notation bounds the growth of a function.
For example, if a function f is said to be O(n), there exists some constant k such that f(n) <= k * n for all sufficiently large n.
We ignore all other constituents of the expression because we are looking at what happens as n tends towards infinity: there, the slower-growing constituents are drowned out.
We ignore constant factors because we are describing the general relationship of the function as it grows, and constants only make the analysis noisier.
e(n) = n! + 2^n. A factorial is the fastest growing constituent in the equation so the Big O notation is O(n!).
f(n) = log10(n) * n/2 + n. The fastest-growing constituent is log10(n) * n/2, so the Big O notation is O(n log n). It does not matter what base of log we use, as we can convert between log bases with a constant factor.
g(n) = n^2 + n * log(n). The Big O notation is O(n^2), because n^2 grows faster than n * log(n).
h(n) = (2^30) * n * 4^log2(n). The constant factor 2^30 can be ignored, but 4^log2(n) is not a constant: 4^log2(n) = 2^(2 * log2(n)) = n^2. So h(n) = 2^30 * n * n^2, and the Big O notation is O(n^3).
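If it helps to see this numerically, here is a small Python sketch (purely illustrative, not part of the original question) that prints the ratio of the slower-growing term to the faster-growing one for e, f and g; the ratios head towards zero, which is what "drowned out" means. It also evaluates 4^log2(n) next to n^2 to show they coincide.

import math

# ratio of the slower term to the faster term; all three head towards 0 as n grows
def term_ratios(n):
    e_ratio = math.exp(n * math.log(2) - math.lgamma(n + 1))  # 2^n / n!
    f_ratio = n / ((n / 2) * math.log10(n))                   # n / (n/2 * log10 n)
    g_ratio = (n * math.log(n)) / n ** 2                      # n*log(n) / n^2
    return e_ratio, f_ratio, g_ratio

for n in (10, 1000, 10 ** 6):
    print(n, term_ratios(n))

# 4^(log2 n) is the same thing as n^2, so h(n) grows like n^3
for n in (8, 1024):
    print(n, 4 ** math.log2(n), n ** 2)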

Related

Explanation for Big-O notation comparison in different complexity class

Why can Big-O notation not compare algorithms in the same complexity class? Please explain; I cannot find any detailed explanation.
So, O(n^2) says that the algorithm requires at most a number of operations proportional to n^2. Now, suppose you have algorithm A which requires f(n) = 1000n^2 + 2000n + 3000 operations and algorithm B which requires g(n) = n^2 + 10^20 operations. They're both O(n^2).
For small n the first algorithm performs better than the second one. And for big n the second algorithm looks better, since it has 1 * n^2 while the first has 1000 * n^2.
Also, h(n) = n is also O(n^2), and k(n) = 5 is O(n^2). So, I can say that k(n) is better than h(n) because I know what these functions look like.
Consider the case when I don't know what the functions k(n) and h(n) look like. The only thing I'm given is k(n) ~ O(n^2), h(n) ~ O(n^2). Can I say which function is better? No.
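As a sketch of that crossover (my own illustration; the operation counts are the ones quoted above), the following Python compares the two counts directly and shows where B starts to win:

def f(n): return 1000 * n * n + 2000 * n + 3000   # algorithm A, Theta(n^2)
def g(n): return n * n + 10 ** 20                 # algorithm B, also Theta(n^2)

for n in (10, 10 ** 6, 3 * 10 ** 8, 4 * 10 ** 8):
    print(n, "A wins" if f(n) < g(n) else "B wins")

Both are O(n^2), yet which one is actually faster only flips somewhere in the hundreds of millions; Big O alone cannot tell you that.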
Summary
You can't say which function is better, because Big O notation stands for "less than or equal". The following is true:
O(1) is O(n^2)
O(n) is O(n^2)
How to compare functions?
There is Big Omega notation, which stands for "greater than or equal". For example, f(n) = n^2 + n + 1 is Omega(n^2), Omega(n) and Omega(1). When a function's complexity matches an asymptotic exactly, Big Theta is used, so for the f(n) described above we can say that:
f(n) is O(n^3)
f(n) is O(n^2)
f(n) is Omega(n^2)
f(n) is Omega(n)
f(n) is Theta(n^2) // this is the only way we can describe f(n) using Theta notation
So, to compare asymptotics of functions you need to use Theta instead of Big O or Omega.
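A quick numeric sanity check of the Theta(n^2) claim for f(n) = n^2 + n + 1, using witness constants chosen here purely for illustration (c1 = 1, c2 = 3, n0 = 1):

def f(n): return n * n + n + 1

c1, c2, n0 = 1, 3, 1   # illustrative witnesses: c1*n^2 <= f(n) <= c2*n^2 for n >= n0
assert all(c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 10000))
print("f(n) = n^2 + n + 1 is sandwiched between n^2 and 3*n^2, i.e. Theta(n^2)")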

Asymptotic complexity of logarithmic functions

I know that in terms of complexity, O(log n) is faster than O(n), which is faster than O(n log n), which is faster than O(n^2).
But what about O(n^2) and O(n^2 log n), or O(n^2.001) and O(n^2 log n):
T1(n)=n^2 + n^2logn
What is the big Oh and omega of this function? Also, what's little oh?
versus:
T2(n)=n^2.001 + n^2logn
Is there any difference in big Oh now?
I'm having trouble understanding how to compare logn with powers of n. As in, is logn approximately n^0.000000...1 or n^1.000000...1?
O(n^k) is faster than O(n^k') for all k, k' >= 0 and k' > k
O(n^2) would be faster than O(n^2*logn)
Note that you can only ignore constants, nothing involving the input size can be ignored.
Thus, complexity of T(n)=n^2 + n^2logn would be the worse of the two, which is O(n^2logn).
Little-oh
Little-oh, in loose terms, is a strict upper bound: the function grows strictly slower than the bound. Yes, it is called little, yet it is more restrictive.
n^2 = O(n^k) for k >= 2 but n^2 = o(n^k) for k > 2
Practically, it is Big-Oh which takes most of the limelight.
What about T(n)= n^2.001 + n^2logn?
We have n^2.001 = n^2 * n^0.001 versus n^2 * log(n).
To settle the question, we need to figure out what would eventually be bigger, n^0.001 or log(n).
It turns out that a function of the form n^k with k > 0 will eventually overtake log(n) for a sufficiently large n.
Same is the case here, and thus T(n) = O(n^2.001).
Practically though, log(n) will be larger than n^0.001 for any realistic input size.
(10^3300)^0.001 < log(10^3300) (about 1995 < 3300), and the sufficiently large n in this case would be just around 10^3650, an astronomical number.
Worth mentioning again, 10^3650. There are about 10^82 atoms in the universe.
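To see those numbers without building astronomically large integers, compare the two sides in terms of the exponent x where n = 10^x: then n^0.001 = 10^(0.001*x) and log10(n) = x. A small Python sketch (my own illustration) reproduces the comparison above:

for x in (3300, 3650):
    n_pow = 10 ** (0.001 * x)   # value of n^0.001 when n = 10^x
    log_n = x                   # value of log10(n) when n = 10^x
    print("n = 10^%d: n^0.001 = %.1f, log10(n) = %d" % (x, n_pow, log_n))

At n = 10^3300 the log term is still ahead (about 1995 vs 3300), and by n = 10^3650 the power term has taken over, matching the figures quoted above.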
T(n)=n^2 + n^2logn
What is the big Oh and omega of this function? Also, what's little oh?
Quoting a previous answer:
Don't forget that big O notation represents a set. O(g(n)) is the set of all functions f such that f does not grow faster than g; formally, this is the same as saying that there exist C and n0 such that |f(n)| <= C|g(n)| for every n >= n0. The expression f(n) = O(g(n)) is shorthand for saying that f(n) is in the set O(g(n)).
Also, you can think of big O as ≤ and of small o as < (reference). So you care more about finding a relevant big O bound than a small o one. In your case it's even appropriate to use big Theta, which is =. Since n^2 log n dominates n^2, it's true that
T1(n) = n^2 + n^2 log n = Θ(n^2 log n)
Now the second part. log n grows so slowly that even n^e with e > 0 dominates it. Interestingly, you can even prove that lim n^e/(log n)^k = inf as n goes to infinity, for any e > 0 and k. From this you have that n^0.001 dominates log n, so
T2(n) = n^2.001 + n^2 log n = Θ(n^2.001).
If f(n) = Θ(g(n)), it's also true that f(n) = O(g(n)), so to answer your question:
T1(n)=O(n^2 logn)
T2(n)=O(n^2.001)
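If you want to see the Θ(n^2 log n) claim for T1 numerically, the ratio T1(n) / (n^2 log n) settles towards 1 (slowly, because the extra n^2 term only differs by a log factor). A small, purely illustrative sketch:

import math

def T1(n): return n ** 2 + n ** 2 * math.log(n)

for n in (10, 10 ** 3, 10 ** 6, 10 ** 9):
    print(n, T1(n) / (n ** 2 * math.log(n)))   # ratio is 1 + 1/log(n), tending to 1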

How do we find a tight Big O expression

for (i : 1 to n^2)
    x = x + 1;
return x + 1;
N is the number of inputs. N>1 and tends to infinity
I understand that the worst (and the best) case running time is n^2 + 1. Hence, it'll be O(n^2). However, how do I find out whether this is a tight Big O expression? How do we find a tight Big O expression, and what does that mean?
Let your function be f(n)
You already worked out that the best case is n^2 (omit the +1, since constants independent of the input are ignored).
More formally: Ω(n^2)
--> means you can find a function u(n) and a constant k such that k * u(n) <= f(n) for all sufficiently large n.
As you said the worst case is n^2 too.
Again formally: O(n^2)
--> means you can find a function v(n) and a constant k such that k * v(n) >= f(n) for all sufficiently large n.
Since Ω and O are sets of functions, Theta is defined as the intersection of Ω and O: "Theta is both an upper and a lower bound."
The intersection here is n^2 --> Theta(n^2)
In a descriptive manner:
f(n) grows at least as fast as u(n) (the lower bound).
f(n) grows no faster than v(n) (the upper bound).
--> f(n) grows exactly like n^2
Keep in mind that quite often Ω and O fall into different classes of functions.
E.g. Ω(log n) (searching in a tree) and O(n) (searching in a string). When dealing with such a case you are unable to specify an exact Theta value.
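A quick way to convince yourself of the n^2 + 1 count (and therefore the Θ(n^2) bound) is to instrument the loop from the question and count the operations directly; a minimal Python sketch:

def count_ops(n):
    x = 0
    ops = 0
    for i in range(1, n * n + 1):   # the loop runs exactly n^2 times
        x = x + 1
        ops += 1
    return ops + 1                  # +1 for the final "return x + 1"

for n in (1, 10, 100):
    print(n, count_ops(n), n * n + 1)   # the two counts always match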

Difference between Big-Theta and Big O notation in simple language

While trying to understand the difference between Theta and O notation I came across the following statement :
The Theta-notation asymptotically bounds a function from above and below. When
we have only an asymptotic upper bound, we use O-notation.
But I do not understand this. The book explains it mathematically, but it's too complex and gets really boring to read when I am really not understanding.
Can anyone explain the difference between the two using simple, yet powerful examples.
Big O is giving only upper asymptotic bound, while big Theta is also giving a lower bound.
Everything that is Theta(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be Theta(f(n)), if it is both O(f(n)) and Omega(f(n))
For this reason big-Theta is more informative than big-O notation, so if we can say something is big-Theta, it's usually preferred. However, it is harder to prove something is big Theta, than to prove it is big-O.
For example, merge sort is both O(n*log(n)) and Theta(n*log(n)), but it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is NOT Theta(n^2), since the algorithm is NOT Omega(n^2).
Omega is the asymptotic lower bound. If T(n) is Omega(f(n)), it means that from a certain n0 there is a constant C1 such that T(n) >= C1 * f(n). Whereas big-O says there is a constant C2 such that T(n) <= C2 * f(n).
All three (Omega, O, Theta) give only asymptotic information ("for large input"):
Big O gives upper bound
Big Omega gives lower bound and
Big Theta gives both lower and upper bounds
Note that this notation is not related to the best, worst and average cases analysis of algorithms. Each one of these can be applied to each analysis.
I will just quote from Knuth's TAOCP Volume 1 - page 110 (I have the Indian edition). I recommend reading pages 107-110 (section 1.2.11 Asymptotic representations)
People often confuse O-notation by assuming that it gives an exact order of Growth; they use it as if it specifies a lower bound as well as an upper bound. For example, an algorithm might be called inefficient because its running time is O(n^2). But a running time of O(n^2) does not necessarily mean that running time is not also O(n)
On page 107,
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^4) and
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^3) and
1^2 + 2^2 + 3^2 + ... + n^2 = (1/3) n^3 + O(n^2)
Big-Oh is for approximations. It allows you to replace ~ with an = sign. In the example above, for very large n, we can be sure that the quantity will stay below n^4, below n^3, and below (1/3)n^3 plus some multiple of n^2 [but not simply below n^2].
Big Omega is for lower bounds - an algorithm that is Omega(n^2) will not be as efficient as one that is O(n log n) for large n. However, we do not know from which n onwards (in that sense, we only know it approximately).
Big Theta is for exact order of Growth, both lower and upper bound.
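Knuth's statements above are easy to check numerically; this small sketch (my own, purely illustrative) computes the sum of squares and the remainder after subtracting (1/3)n^3, which indeed stays within a constant multiple of n^2 (it is n^2/2 + n/6):

n = 1000
s = sum(k * k for k in range(1, n + 1))   # 1^2 + 2^2 + ... + n^2
remainder = s - n ** 3 / 3
print(s, remainder, n * n)                # remainder (~500167) stays below n^2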
I am going to use an example to illustrate the difference.
Let the function f(n) be defined as
if n is odd f(n) = n^3
if n is even f(n) = n^2
From CLRS
A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be "sandwiched" between c1g(n) and c2g(n), for sufficiently large n.
AND
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}.
AND
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}.
The upper bound on f(n) is n^3. So our function f(n) is clearly O(n^3).
But is it Θ(n^3)?
For f(n) to be in Θ(n^3) it has to be sandwiched between two functions, one forming the lower bound and the other the upper bound, both of which grow as n^3. While the upper bound is obvious, the lower bound cannot be n^3. The lower bound is in fact n^2; f(n) is Ω(n^2).
From CLRS
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Hence f(n) is not in Θ(n^3) while it is in O(n^3) and Ω(n^2)
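A tiny Python illustration (my own) of why this f(n) is O(n^3) but not Θ(n^3): on even n the ratio f(n)/n^3 keeps dropping towards 0, so no positive constant can hold it up from below.

def f(n):
    return n ** 3 if n % 2 == 1 else n ** 2

for n in (1000, 1001, 10 ** 6, 10 ** 6 + 1):
    print(n, f(n) / n ** 3)   # 1.0 on odd n, 1/n on even n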
If the running time is expressed in big-O notation, you know that the running time will not grow faster than the given expression; it expresses the worst-case scenario.
But with Theta notation you also know that it will not grow slower. That is, there is no case where the algorithm finishes asymptotically faster.
This gives a more exact bound on the expected running time. However, for most purposes it is simpler to ignore the lower bound (the possibility of faster execution), since you are generally only concerned with the worst-case scenario.
Here's my attempt:
A function f(n) is O(g(n)) if and only if there exists a constant c such that f(n) <= c*g(n) for all sufficiently large n.
Using this definition, could we say that the function f(n) = 2^(n+1) is O(2^n)?
In other words, does a constant 'c' exist such that 2^(n+1) <= c*(2^n)? Note the second function (2^n) is the function after the Big O in the above problem. This confused me at first.
So, then use your basic algebra skills to simplify that equation. 2^(n+1) breaks down to 2 * 2^n. Doing so, we're left with:
2 * 2^n <= c(2^n)
Now it's easy: the inequality holds for any value of c where c >= 2. So, yes, we can say that f(n) = 2^(n+1) is O(2^n).
Big Omega works the same way, except it evaluates f(n) >= c*g(n) for some constant 'c'.
So, simplifying the above functions the same way, we're left with (note the >= now):
2 * 2^n >= c(2^n)
So, the inequality holds for any c in the range 0 < c <= 2. So, we can say that f(n) = 2^(n+1) is Big Omega of 2^n.
Now, since BOTH of those hold, we can say the function is Big Theta of 2^n. If one of them did not hold for any constant c, then it would not be Big Theta.
The above example was taken from the Algorithm Design Manual by Skiena, which is a fantastic book.
Hope that helps. This really is a hard concept to simplify. Don't get hung up so much on what 'c' is, just break it down into simpler terms and use your basic algebra skills.
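The same check can be done mechanically; a short sketch (my own, purely illustrative) that tests both inequalities over a range of n with the constants found above:

f = lambda n: 2 ** (n + 1)
g = lambda n: 2 ** n

# upper bound with c = 2, lower bound with c = 1, as derived above
assert all(f(n) <= 2 * g(n) for n in range(1, 200))
assert all(f(n) >= 1 * g(n) for n in range(1, 200))
print("2^(n+1) is Theta(2^n)")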

How to calculate big-theta

Can someone provide me a real example of how to calculate big theta?
Is big theta something like the average case, (min-max)/2?
I mean (minimum time - big O)/2
Please correct me if I am wrong, thanks
Big-theta notation represents the following rule:
For any two functions f(n), g(n), if f(n)/g(n) and g(n)/f(n) are both bounded as n grows to infinity, then f = Θ(g) and g = Θ(f). In that case, g is both an upper bound and a lower bound on the growth of f.
Here's an example algorithm:
def find_minimum(values):
    minimum = float('inf')   # start above every possible value
    for value in values:
        if value < minimum:
            minimum = value
    return minimum
We wish to evaluate the cost function c(n) where n is the size of the input list. This algorithm will perform one comparison for every item in the list, so c(n) = n.
c(n)/n = 1, which remains bounded as n goes to infinity, so c(n) grows no faster than n. This is what is meant by big-O notation: c(n) = O(n). Conversely, n/c(n) = 1 also remains bounded, so c(n) grows no slower than n. Since it grows neither slower nor faster, it must grow at the same speed. This is what is meant by theta notation: c(n) = Θ(n).
Note that c(n)/n² is also bounded, so c(n) = O(n²) as well — big-O notation is merely an upper bound on the complexity, so any O(n) function is also O(n²), O(n³)...
However, since n²/c(n) = n is not bounded, then c(n) ≠ Θ(n²). This is the interesting property of big-theta notation: it's both an upper bound and a lower bound on the complexity.
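To tie the Θ(n) claim back to the code, here is the same minimum search instrumented to count comparisons (my own variation of the snippet above); the count comes out to exactly n:

def find_minimum_counted(values):
    minimum = float('inf')
    comparisons = 0
    for value in values:
        comparisons += 1          # one comparison per list item
        if value < minimum:
            minimum = value
    return minimum, comparisons

for size in (1, 10, 1000):
    _, c = find_minimum_counted(list(range(size, 0, -1)))
    print(size, c)                # c(n) = n exactly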
Big theta is a tight bound for a function T(n): if T(n) is both Omega(f(n)) and O(f(n)), then Theta(f(n)) is the tight bound for T(n).
In other words, Theta(f(n)) 'describes' a function T(n) if both O [big O] and Omega 'describe' the same T with the same f.
For example, quicksort [with correct median choices] always takes at most O(n log n) and at least Omega(n log n), so quicksort [with good median choices] is Theta(n log n).
EDIT:
added discussion in comments:
Searching an array is still Theta(n): the Theta notation does not indicate the worst or best case, but bounds whichever case you are analysing. E.g., for searching an array, let T(n) = the number of ops in the worst case. Here, obviously T(n) <= c*n, so T(n) is O(n); but in the worst case you also need to iterate the whole array, so T(n) >= n, hence T(n) is Omega(n), and therefore Theta(n) is the asymptotic bound.
From http://en.wikipedia.org/wiki/Big_O_notation#Related_asymptotic_notations, we learn that "Big O" denotes an upper bound, whereas "Big Theta" denotes an upper and lower bound, i.e. in the limit as n goes to infinity:
f(n) = O(g(n)) --> |f(n)| <= k*g(n) for some constant k
f(n) = Theta(g(n)) --> k1*g(n) <= f(n) <= k2*g(n) for some constants k1, k2
So you cannot infer Big Theta from Big O.
Big-Theta (Θ) notation provides an asymptotic upper and lower bound on the growth rate of an algorithm's running time. To calculate the big-Theta notation of a function f(n), you need to find a non-negative function g(n) such that:
There exist positive constants c1, c2 and n0 such that 0 <= c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.
f(n) and g(n) have the same asymptotic growth rate.
The big-Theta notation for the function f(n) is then written as Θ(g(n)). The purpose of this notation is to provide a rough estimate of the running time, ignoring lower order terms and constant factors.
For example, consider the function f(n) = 2n^2 + 3n + 1. To calculate its big-Theta notation, we can choose g(n) = n^2. Then we need c1, c2 and n0 such that 0 <= c1 * n^2 <= 2n^2 + 3n + 1 <= c2 * n^2 for all n >= n0; for example, c1 = 1/2, c2 = 6 and n0 = 1 work. So, f(n) = Θ(n^2).
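A short check of that sandwich with the constants chosen above (purely illustrative):

def f(n): return 2 * n * n + 3 * n + 1

c1, c2, n0 = 0.5, 6, 1   # the witnesses used in the example above
assert all(c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 100000))
print("2n^2 + 3n + 1 is Theta(n^2)")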

Resources