How to evaluate the below expression involving asymptotic notations?

If
f(n)=ϴ(n),g(n)=ϴ(n)
and
h(n)=Ω(n)
Then how do we evaluate f(n)g(n) + h(n)?
I approached it as f(n)g(n) = ϴ(n^2); now, what will Ω(n) + ϴ(n^2) be? In my view, the lower bound of this expression should be Ω(n) and the upper bound O(n^2), but what is the tightest bound for this expression?

For some constants k1, k2, l1, l2 and m > 0, we have:
f(n) is ϴ(n)
=> k1*n < f(n) < k2*n, for n sufficiently large
g(n) is ϴ(n)
=> l1*n < g(n) < l2*n, for n sufficiently large
h(n) is Ω(n)
=> m*n < h(n), for n sufficiently large
Then, for f(n)*g(n):
k1*l1*n^2 < f(n)*g(n) < k2*l2*n^2, for n sufficiently large
So we can just write p(n) = f(n)*g(n) and use constants c1=k1*l1 and c2=k2*l2, and we have:
p(n) (= f(n)*g(n)) is in ϴ(n^2), since
c1*n^2 < p(n) < c2*n^2
Then, finally, what complexity does p(n) + h(n) have? We have:
c1*n^2 + m*n < p(n) + h(n), for n sufficiently large
Since we never got an upper bound on h(n), we can't say anything about the upper bound on p(n) + h(n). This is the crucial point: h(n) in Ω(n) only says that h(n) grows at least as fast as n (linearly) asymptotically, but we don't know whether this lower bound is tight. It might be a very sloppy lower bound for an exponential-time function.
Subsequently, we can only state something regarding the lower bound:
p(n) + h(n) = f(n)*g(n) + h(n) is in Ω(n^2)
I.e., f(n)*g(n) + h(n) grows at least as n^2 (i.e., in Ω(n^2)) asymptotically.
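To see concretely why only the lower bound survives, here is a small Python sketch; the specific f, g, and the two candidate h's below are hypothetical choices that satisfy the given bounds, not anything implied by the question:

f = lambda n: 2 * n              # Theta(n)
g = lambda n: 3 * n              # Theta(n)
h_tight = lambda n: 5 * n        # Omega(n), and in fact Theta(n)
h_sloppy = lambda n: 2 ** n      # Omega(n), but actually exponential

for n in (10, 20, 40, 80):
    print(n, f(n) * g(n) + h_tight(n), f(n) * g(n) + h_sloppy(n))
# Both columns stay >= 6*n^2 (the Omega(n^2) lower bound), but the second
# one blows up, so no upper bound follows from the given information.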
A note on your approach: you are right (as shown above) that f(n)g(n) is in ϴ(n^2), but note that this implies that a tight lower bound of f(n)g(n) + h(n) can never be less than k*n^2: i.e., f(n)g(n) + h(n) in Ω(n^2) is a given, and a better (tighter) lower bound than the one you suggested, Ω(n). Recall that the fastest-growing terms dominate asymptotic behavior.
For reference, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation

Related

Complexity analysis after multiplication of two functions

Given F(n) = θ(n)
H(n) = O(n)
G(n) = Ω(n)
then what will the order of F(n) + [G(n) . H(n)] be?
edit: F(n) = θ(n) not Q(n)
There isn't enough information to say anything about the function P(n) = G(n)*H(n). All we know is that G grows at least linearly; it could be growing quadratically, cubically, even exponentially. Likewise, we only know that H grows at most linearly; it could only be growing logarithmically, or be constant, or even be decreasing. As a result, P(n) itself could be decreasing or increasing without bound, which means the sum F(n) + P(n) could also be decreasing or increasing without bound.
Suppose, though, that we could assume that H(n) = Ω(1) (i.e., it is eventually bounded below by a positive constant, so at least not decreasing toward zero). Now we can say the following about P(n):
P(n) = H(n) * G(n)
>= C1 * G(n)
= Ω(G(n)) = Ω(n)
P(n) <= C2*n * G(n)
= O(n*G(n))
Thus F(n) + P(n) = Ω(n) and F(n) + P(n) = O(n*G(n)), but nothing more can be said; both bounds are as tight as we can make them without more information about H or G.
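A small sketch of how loose those bounds are; the concrete H's and G's below are hypothetical functions satisfying H(n) = O(n), H(n) = Ω(1), and G(n) = Ω(n):

H_small = lambda n: 1            # O(n) and Omega(1)
H_big = lambda n: n              # O(n) and Omega(1)
G_small = lambda n: n            # Omega(n)
G_big = lambda n: 2 ** n         # Omega(n)

for n in (4, 8, 16):
    # P(n) = H(n)*G(n) can be as small as Theta(n)...
    print(n, H_small(n) * G_small(n), end=' ')
    # ...or as large as Theta(n * 2^n), matching the Omega(n) / O(n*G(n)) bounds.
    print(H_big(n) * G_big(n))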

Big O complexity with n^2 log(n)

Two questions:
First, if f(n) = n(3n + n*log(n)), then why is f(n) Ω(n^2)?
Second, why is n^2*log(n) not O(n^2)?
These are both consequences of the fact that log(n) tends to infinity as n tends to infinity.
1) n(3n + n*log(n)) is Ω(n^2) because, for large n, the 3n term is negligible, and n^2*log(n) is bounded below by n^2 (since log(n) >= 1 once n >= e).
2) n^2*log(n) is not O(n^2) since, for any constant K > 0, every n > e^K gives n^2*log(n) > K*n^2; so no K satisfies n^2*log(n) <= K*n^2 for all but finitely many n.
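A numeric illustration of point 2 (a sketch with an arbitrary candidate constant K):

import math

K = 10.0   # candidate constant for the claim n^2*log(n) <= K*n^2
for n in (10, 1000, 30_000, 1_000_000):
    print(n, n**2 * math.log(n) <= K * n**2)
# Once n > e^K (about 22026 here), log(n) > K, the inequality fails for
# good, and the same thing happens for every other choice of K.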
Ω( ) is for putting a bound under the function: it means the function's running-time complexity is at least what is mentioned between the parentheses of Ω( ). Now, for the function f(n) = n(3n + n*log(n)), the dominating one of the two terms involved (3n^2 and n^2*log(n)) is n^2*log(n).
Therefore, any function smaller than the dominating term can act as a lower bound, even if it may not be the tightest possible lower bound. So, f(n) is Ω(n^2).
Next, the growth rate of n^2*log(n) is higher than that of n^2 (as already mentioned, it is the dominating term). Therefore, n^2*log(n) is a tighter bound and thus a better characterization of the big-O of f(n). Hence, f(n) is O(n^2*log(n)) instead of O(n^2).

How can an algorithm that is O(n) also be O(n^2), O(n^1000000), O(2^n)?

So the answer to this question, "What is the difference between Θ(n) and O(n)?",
states that "basically when we say an algorithm is O(n), it's also O(n^2), O(n^1000000), O(2^n), ... but a Θ(n) algorithm is not Θ(n^2)."
I understand Big O to represent an upper bound or the worst case; with that, I don't understand how O(n) is also O(n^2) and the other cases worse than O(n).
Perhaps I have some fundamental misunderstandings. Please help me understand this as I have been struggling for a while.
Thanks.
It's helpful to think of what big-O means: if a function is O(n), then c*n, for some positive constant c, is an upper bound. If c*n is an upper bound, it's clear that, for n >= 1, c*n^2 is also an upper bound, as are c*n^3, c*n^4, c*n^1000, etc.
[Graph omitted: growth rates of common functions; each function is an upper bound of the functions to its right, i.e., it grows faster for small n.]
Suppose the running time of your algorithm is T(n) = 3n + 6 (an arbitrary polynomial of degree 1).
It's true that T(n) = O(n), because 3n + 6 < 4n for all n > 6 (using the definition of big-O notation). It's also true that T(n) = O(n^2), because 3n + 6 < n^2 for all n >= 5 (using the definition again).
It's also true that T(n) = Θ(n) because, in addition to being O(n), it satisfies 3n + 6 > n for all n > 1. However, you cannot show that 3n + 6 > c*n^2 holds for any constant c > 0 and arbitrarily large n. (Proof sketch: c*n^2 - (3n + 6) → ∞ as n → ∞, so c*n^2 eventually exceeds 3n + 6.)
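A quick finite check of those constants for the hypothetical T(n) = 3n + 6 above (the tested ranges are arbitrary cutoffs; an assertion over a range is evidence, not a proof):

T = lambda n: 3 * n + 6

assert all(T(n) < 4 * n for n in range(7, 10_000))    # T(n) = O(n): c = 4, n0 = 7
assert all(T(n) < n * n for n in range(5, 10_000))    # T(n) = O(n^2): n0 = 5
assert all(T(n) > n for n in range(2, 10_000))        # T(n) = Omega(n)
print("All three bounds hold on the tested range.")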
I understand Big O to represent an upper bound or the worst case; with that, I don't understand how O(n) is also O(n^2) and the other cases worse than O(n).
Intuitively, an "upper bound of x" means that something will always be less than or equal to x. If something is less than or equal to x, it is also less than or equal to x^2 and x^1000, for large enough values of x. So x^2 and x^1000 can also be upper bounds.
This is what Big-oh represents: upper bounds.
When we say that f(n) = O(g(n)), we mean only that there exists a constant c such that, for all sufficiently large n, f(n) <= c*g(n). Note that if f(n) = O(g(n)), we can always choose a function h(n) bigger than g(n), and since g(n) is eventually less than h(n), we have f(n) <= c*g(n) <= c*h(n), so f(n) = O(h(n)) as well.
Note that the O bound need not be tight. The theta bound is the intersection of O(g(n)) and Ω(g(n)), where Ω gives the lower bound (it's like O, the upper bound, but bounds from below instead). If f(n) is bounded below by g(n), and h(n) is bigger than g(n), then it follows that f(n) is not necessarily bounded below by h(n).
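A tiny sketch of the chain f(n) <= c*g(n) <= c*h(n) from the paragraph above; the concrete functions here are hypothetical choices:

# f(n) = O(g(n)) with c = 3; h grows faster than g, so f(n) = O(h(n)) too.
f = lambda n: 2 * n + 1
g = lambda n: n
h = lambda n: n * n

c = 3
assert all(f(n) <= c * g(n) <= c * h(n) for n in range(1, 10_000))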

Interview questions

This is an interview question:
Given: f(n) = O(n)
g(n) = O(n²)
find f(n) + g(n) and f(n)⋅g(n)?
What would be the answer for this question?
When this answer was prepared, f(n) was shown as ω(n) and g(n) as Θ(n²). [Note: ω is a little-omega, a strict lower bound, and Θ is a big-theta.]
From f(n) = ω(n) and g(n) = Θ(n²) you get a lower bound of Ω(n²) for f(n) + g(n), but you don't get an upper bound on f(n) + g(n), because no upper bound was given on f(n).
For f(n)·g(n), you get a lower bound of ω(n³), because Θ(n²) implies lower and upper bounds of Ω(n²) and O(n²) for g(n). Again, no upper bound on f(n)·g(n) is available, because f(n) can be arbitrarily large; for f(n), we only have the ω(n) lower bound.
With the question modified to give only upper bounds on f and g, as f(n) = O(n) and g(n) = O(n²), we have that f(n)+g(n) is O(n²) and f(n)·g(n) is O(n³).
To show this rigorously is a bit tedious, but quite straightforward. E.g., for the f(n)·g(n) case: suppose that, by the definitions of O(n) and O(n²), we are given C, X, K, Y such that n>X ⇒ C·n > f(n) and n>Y ⇒ K·n² > g(n). Let J=C·K and Z=max(X,Y). Then n>Z ⇒ J·n³ > f(n)·g(n), which proves that f(n)·g(n) is O(n³).
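A finite sanity check of that argument, using hypothetical witnesses f(n) = 2n+1 and g(n) = n^2+n:

f = lambda n: 2 * n + 1          # O(n):   f(n) < C*n   with C = 3 for n > X = 1
g = lambda n: n * n + n          # O(n^2): g(n) < K*n^2 with K = 2 for n > Y = 1

C, K, X, Y = 3, 2, 1, 1
J, Z = C * K, max(X, Y)
assert all(f(n) < C * n and g(n) < K * n**2 for n in range(Z + 1, 10_000))
assert all(f(n) * g(n) < J * n**3 for n in range(Z + 1, 10_000))
print("f(n)*g(n) < J*n^3 on the tested range, consistent with O(n^3).")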
O(f(n) + g(n)) = O(max{f(n), g(n)})
So, for the first:
f(n) + g(n) = O(max{n, n^2}) = O(n^2)
For f(n) ⋅ g(n), we will have
O(f(n) ⋅ g(n)) = O(n ⋅ n^2) = O(n^3)
Think about it this way.
f(n) = c.n + d
g(n) = a.n^2 + b.n + p
Then,
f(n) + g(n) = a.n^2 + (lower powers of n)
And,
f(n).g(n) = (c.a).n^3 + (lower powers of n)
It follows that O(f(n) + g(n)) = O(n^2)
and O(f(n).g(n)) = O(n^3)
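A quick numeric check of that intuition (the coefficient values below are hypothetical; the ratios settle toward the leading coefficients):

c, d = 3, 7             # f(n) = c*n + d
a, b, p = 2, 5, 1       # g(n) = a*n^2 + b*n + p
f = lambda n: c * n + d
g = lambda n: a * n**2 + b * n + p

for n in (10, 100, 1000, 10_000):
    print(n, (f(n) + g(n)) / n**2, (f(n) * g(n)) / n**3)
# First ratio -> a = 2, second -> c*a = 6: the n^2 and n^3 terms dominate.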
This question can be understood like this:
f(n) = O(n) means it takes O(n) time to compute f(n).
Similarly, g(n) requires O(n^2) time.
So, P(n) = f(n) + g(n) would take O(n) + O(n^2) + O(1) time (the O(1) is for the addition, once you know the values of both f and g). Hence, this new function P(n) would require O(n^2) time.
The same is the case for Q(n) = f(n) * g(n), which also requires O(n^2) time under this reading.

How to calculate big-theta

Can someone provide me a real-time example of how to calculate big theta?
Is big theta something like the average case, (min-max)/2?
I mean (minimum time - big O)/2?
Please correct me if I am wrong, thanks.
Big-theta notation represents the following rule:
For any two functions f(n), g(n), if f(n)/g(n) and g(n)/f(n) are both bounded as n grows to infinity, then f = Θ(g) and g = Θ(f). In that case, g is both an upper bound and a lower bound on the growth of f.
Here's an example algorithm:
def find_minimum(values):
    minimum = float('inf')      # start above any possible value
    for value in values:
        if value < minimum:     # one comparison per item
            minimum = value
    return minimum
We wish to evaluate the cost function c(n) where n is the size of the input list. This algorithm will perform one comparison for every item in the list, so c(n) = n.
c(n)/n = 1, which remains bounded as n goes to infinity, so c(n) grows no faster than n. This is what is meant by big-O notation: c(n) = O(n). Conversely, n/c(n) = 1 also remains bounded, so c(n) grows no slower than n. Since it grows neither slower nor faster, it must grow at the same speed. This is what is meant by theta notation: c(n) = Θ(n).
Note that c(n)/n² is also bounded, so c(n) = O(n²) as well — big-O notation is merely an upper bound on the complexity, so any O(n) function is also O(n²), O(n³)...
However, since n²/c(n) = n is not bounded, c(n) ≠ Θ(n²). This is the interesting property of big-theta notation: it's both an upper bound and a lower bound on the complexity.
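To see c(n) = n concretely, here is the same loop instrumented to count the comparisons it performs (a minimal sketch of the cost count):

def count_comparisons(values):
    # Identical to find_minimum above, but returns how many comparisons it made.
    comparisons = 0
    minimum = float('inf')
    for value in values:
        comparisons += 1
        if value < minimum:
            minimum = value
    return comparisons

for n in (10, 100, 1000):
    print(n, count_comparisons(list(range(n))))   # prints c(n) = n exactly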
Big theta is a tight bound for a function T(n): if T(n) is both O(f(n)) and Ω(f(n)), then Θ(f(n)) is the tight bound for T(n).
In other words, Θ(f(n)) 'describes' a function T(n) if both O (big O) and Ω (big omega) 'describe' the same T with the same f.
For example, quicksort with good median (pivot) choices always takes at most O(n log n) time and at least Ω(n log n) time, so quicksort with good median choices is Θ(n log n).
EDIT:
added discussion in comments:
Searching an array is still Θ(n). The Θ notation does not indicate worst/best case by itself, but the behavior of the case being analyzed. E.g., when searching an array, let T(n) = the number of operations in the worst case. Here, obviously T(n) = O(n), but also T(n) >= n/2, because in the worst case you need to iterate over the whole array; so T(n) = Ω(n), and therefore Θ(n) is the asymptotic bound.
From http://en.wikipedia.org/wiki/Big_O_notation#Related_asymptotic_notations, we learn that "Big O" denotes an upper bound, whereas "Big Theta" denotes an upper and lower bound, i.e. in the limit as n goes to infinity:
f(n) = O(g(n))     --> |f(n)| < k.g(n) for some constant k > 0
f(n) = Theta(g(n)) --> k1.g(n) < f(n) < k2.g(n) for some constants k1, k2 > 0
So you cannot infer Big Theta from Big O.
Big-Theta (Θ) notation provides an asymptotic upper and lower bound on the growth rate of an algorithm's running time. To calculate the big-Theta notation of a function f(n), you need to find a non-negative function g(n) such that:
There exist positive constants c1, c2 and n0 such that 0 <= c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.
Then f(n) and g(n) have the same asymptotic growth rate.
The big-Theta notation for the function f(n) is then written as Θ(g(n)). The purpose of this notation is to provide a rough estimate of the running time, ignoring lower order terms and constant factors.
For example, consider the function f(n) = 2n^2 + 3n + 1. To calculate its big-Theta notation, we can choose g(n) = n^2. Then, we need c1 and c2 such that 0 <= c1 * n^2 <= 2n^2 + 3n + 1 <= c2 * n^2 for all n >= n0. For example, c1 = 2, c2 = 6 and n0 = 1 work, since 2n^2 <= 2n^2 + 3n + 1 and 3n + 1 <= 4n^2 for all n >= 1. So, f(n) = Θ(n^2).
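A finite sanity check of those constants (a sketch; an assertion over a range is evidence, not a proof):

f = lambda n: 2 * n**2 + 3 * n + 1
g = lambda n: n**2

c1, c2, n0 = 2, 6, 1
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("c1*g(n) <= f(n) <= c2*g(n) holds on the tested range.")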
