Here's the equation: F(n) = 2*log(3n + n^2)
Upper Bound:
Without the log I understand the upper bound on this would be O(n^2), but with the log will the upper bound be O(log(n^2))? Or is the log negated?
Lower Bound:
If we assume that this is only run once, then shouldn't this be lower bounded by O(1)?
log(n^2) = 2*log(n). That means O(log n^2) = O(log n).
First of all, a lower bound is written with Ω, not O.
Also, Ω(1) is a valid lower bound, but it's not a tight one, since for n >= 3:
2*log(3n + n^2) > log(n), so F(n) = Ω(log(n))
and for the upper bound (again for n >= 3):
2*log(3n + n^2) < 2*log(n^3) = 6*log(n), so F(n) = O(log(n))
And since F(n) = O(log(n)) and F(n) = Ω(log(n)),
the bound is tight, and it's written as: F(n) = Θ(log(n))
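As a quick numerical sanity check (just a sketch, assuming the function is F(n) = 2*log(3n + n^2) and using natural logarithms; the base only changes the constants), the ratio F(n)/log(n) stays bounded and tends to 4, which is exactly what Θ(log n) means:

import math

def F(n):
    # the function from the question: 2 * log(3n + n^2)
    return 2 * math.log(3 * n + n ** 2)

for n in [10, 100, 10_000, 1_000_000]:
    # the ratio stays bounded (it tends to 4), consistent with Theta(log n)
    print(n, F(n) / math.log(n))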
Related
If f(n) = ϴ(n), g(n) = ϴ(n), and h(n) = Ω(n), then how do we evaluate f(n)g(n) + h(n)?
I approached it as f(n)g(n) = ϴ(n^2); now what will Ω(n) + ϴ(n^2) be? In my view the lower bound of this expression should be Ω(n) and the upper bound should be O(n^2), but what is the tightest bound for this expression?
For some positive constants k1, k2, l1, l2 and m, we have:
f(n) is ϴ(n)
=> k1*n < f(n) < k2*n, for n sufficiently large
g(n) is ϴ(n)
=> l1*n < g(n) < l2*n, for n sufficiently large
h(n) is Ω(n)
=> m*n < h(n), for n sufficiently large
Then, for f(n)*g(n):
k1*l1*n^2 < f(n)*g(n) < k2*l2*n^2, for n sufficiently large
So we can just write p(n) = f(n)*g(n) and use constants c1=k1*l1 and c2=k2*l2, and we have:
p(n) (= f(n)*g(n)) is in ϴ(n^2), since
c1*n^2 < p(n) < c2*n^2
Then, finally, what complexity does p(n) + h(n) have? We have:
c1*n^2 + m*n < p(n) + h(n), for n sufficiently large
Since we never got an upper bound on h(n), we can't really say anything regarding the upper bound on p(n) + h(n). This is important: h(n) in Ω(n) only says that h(n) grows at least as fast as n (linearly) asymptotically, but we don't know whether this lower bound is tight. It might be a very loose lower bound for an exponential-time function.
Consequently, we can only state something about the lower bound:
p(n) + h(n) = f(n)*g(n) + h(n) is in Ω(n^2)
I.e., f(n)*g(n) + h(n) grows at least as fast as n^2 (i.e., is in Ω(n^2)) asymptotically.
A note on your approach: you are right (as shown above) that f(n)g(n) is in ϴ(n^2), but note that this implies that a tight lower bound of f(n)g(n) + h(n) can never be less than k*n^2: i.e., f(n)g(n) + h(n) in Ω(n^2) is a given, and a better (tighter) lower bound than the Ω(n) you suggested. Recall that the fastest-growing terms dominate asymptotic behavior.
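To make the "no upper bound" point concrete, here is a small sketch with made-up functions: f(n) = n and g(n) = 2n are both ϴ(n), while h(n) = n*2^n is Ω(n) but nowhere near O(n^2). The ratio (f(n)g(n) + h(n)) / n^2 keeps growing, so only the Ω(n^2) lower bound survives:

def f(n): return n             # Theta(n)
def g(n): return 2 * n         # Theta(n)
def h(n): return n * 2 ** n    # Omega(n), but far larger than n^2

for n in [10, 20, 40]:
    total = f(n) * g(n) + h(n)
    # the ratio to n^2 keeps growing, so no O(n^2) upper bound holds;
    # only the Omega(n^2) lower bound survives
    print(n, total / n ** 2)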
For reference, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation
So the answer to the question "What is the difference between Θ(n) and O(n)?"
states that "Basically when we say an algorithm is of O(n), it's also O(n^2), O(n^1000000), O(2^n), ... but a Θ(n) algorithm is not Θ(n^2)."
I understand Big O to represent an upper bound or worst case; with that, I don't understand how O(n) is also O(n^2) and the other cases worse than O(n).
Perhaps I have some fundamental misunderstandings. Please help me understand this as I have been struggling for a while.
Thanks.
It's helpful to think of what big-Oh means: if a function is O(n), then c*n, where c is some positive number, is an upper bound. If c*n is an upper bound, it's clear that for n >= 1, c*n^2 is also an upper bound, as are c*n^3, c*n^4, c*n^1000, etc.
(The original answer included a graph of these growth functions, not reproduced here; each function in it is an upper bound of the slower-growing functions plotted after it.)
Suppose the running time of your algorithm is T(n) = 3n + 6 (i.e., an arbitrary polynomial of order 1).
It's true that T(n) = O(n) because 3n + 6 < 4n for all n > 6 (to use the definition of big-oh notation). It's also true that T(n) = O(n^2) because 3n + 6 < n^2 for all n > 5 (to use the definition again).
It's also true that T(n) = Θ(n) because, in addition to the proof that it is O(n), it is true that 3n + 6 > n for all n > 1. However, you cannot prove that 3n + 6 > c*n^2 for any value of c for arbitrarily large n. (Proof sketch: for any c > 0, c*n^2 - 3n - 6 goes to infinity as n goes to infinity, so c*n^2 eventually exceeds 3n + 6.)
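A small sketch checking those three inequalities over a range of n:

def T(n):
    return 3 * n + 6

# 3n + 6 < 4n for all n > 6, so T(n) = O(n)
assert all(T(n) < 4 * n for n in range(7, 1000))
# 3n + 6 < n^2 for all n > 5, so T(n) = O(n^2) as well
assert all(T(n) < n ** 2 for n in range(6, 1000))
# 3n + 6 > n for all n > 1, giving the matching lower bound for Theta(n)
assert all(T(n) > n for n in range(2, 1000))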
I understand Big O to represent an upper bound or worst case; with that, I don't understand how O(n) is also O(n^2) and the other cases worse than O(n).
Intuitively, an "upper bound of x" means that something will always be less than or equal to x. If something is less than or equal to x, it is also less than or equal to x^2 and x^1000, for large enough values of x. So x^2 and x^1000 can also be upper bounds.
This is what Big-oh represents: upper bounds.
When we say that f(n) = O(g(n)), we mean only that there exists a constant c such that, for all sufficiently large n, f(n) <= c*g(n). Note that if f(n) = O(g(n)), we can always choose a function h(n) bigger than g(n), and since g(n) is eventually less than h(n), we have f(n) <= c*g(n) <= c*h(n), so f(n) = O(h(n)) as well.
Note that the O bound is not necessarily tight. The theta bound is the intersection of O(g(n)) and Omega(g(n)), where Omega gives the lower bound (it's like O, the upper bound, but bounds from below instead). If f(n) is bounded below by g(n), and h(n) is bigger than g(n), then it follows that f(n) is not (necessarily) bounded below by h(n).
This is an interview question:
Given: f(n) = O(n)
g(n) = O(n²)
find f(n) + g(n) and f(n)⋅g(n)?
What would be the answer for this question?
When this answer was prepared, f(n) was shown as Ω(n) and g(n) as Θ(n²).
From f(n) = Ω(n) and g(n) = Θ(n²) you get a lower bound of Ω(n²) for f(n) + g(n), but you don't get an upper bound on f(n) + g(n), because no upper bound was given on f(n).
For f(n)·g(n), you get a lower bound of Ω(n³), because Θ(n²) implies lower and upper bounds of Ω(n²) and O(n²) for g(n). Again, no upper bound on f(n)·g(n) is available, because f(n) can be arbitrarily large; for f(n), we only have the Ω(n) lower bound.
With the question modified to give only upper bounds on f and g, as f(n) = O(n) and g(n) = O(n²), we have that f(n)+g(n) is O(n²) and f(n)·g(n) is O(n³).
To show this rigorously is a bit tedious, but quite straightforward. E.g., for the f(n)·g(n) case, suppose that by the definitions of O(n) and O(n²) we are given C, X, K, Y such that n>X ⇒ C·n > f(n) and n>Y ⇒ K·n² > g(n). Let J=C·K and Z=max(X,Y). Then, assuming f and g are nonnegative, n>Z ⇒ J·n³ > f(n)·g(n), which proves that f(n)·g(n) is O(n³).
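A sketch of that argument with made-up functions and constants (C = 5 and K = 2, so J = 10; the functions below are hypothetical, chosen only to satisfy the assumed bounds):

def f(n): return 5 * n - 3         # satisfies f(n) < 5*n, so f is O(n) with C = 5
def g(n): return 2 * n ** 2 - n    # satisfies g(n) < 2*n^2, so g is O(n^2) with K = 2

J = 5 * 2                          # J = C*K, as in the proof

# past some threshold Z, the product stays below J*n^3
assert all(f(n) * g(n) < J * n ** 3 for n in range(1, 10_000))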
O(f(n) + g(n)) = O(max{f(n), g(n)}),
so for the first one:
f(n) + g(n) = O(max{n, n^2}) = O(n^2)
and for f(n) ⋅ g(n) we will have:
O(f(n) ⋅ g(n)) = O(n ⋅ n^2) = O(n^3)
Think about it this way, taking f(n) and g(n) to be, say, polynomials of degree 1 and 2:
f(n) = c.n + d
g(n) = a.n^2 + b.n + p
Then,
f(n) + g(n) = a.n^2 + (lower powers of n)
And,
f(n).g(n) = (a.c).n^3 + (lower powers of n)
It follows that O(f(n) + g(n)) = O(n^2)
and O(f(n).g(n)) = O(n^3)
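A sketch with made-up coefficients (c = 3, d = 7, a = 2, b = 5, p = 1 below are arbitrary) shows the leading terms taking over; the ratios settle to a and a*c respectively:

def f(n): return 3 * n + 7                 # c = 3, d = 7
def g(n): return 2 * n ** 2 + 5 * n + 1    # a = 2, b = 5, p = 1

for n in [10, 1_000, 100_000]:
    # (f + g)/n^2 tends to a = 2 and (f * g)/n^3 tends to a*c = 6
    print(n, (f(n) + g(n)) / n ** 2, (f(n) * g(n)) / n ** 3)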
This question can be understood like this:
f(n) = O(n) means it takes O(n) time to compute f(n). Similarly, g(n) requires O(n^2) time.
So P(n) = f(n) + g(n) would take O(n) + O(n^2) + O(1) time (the O(1) is for the addition, once you know the values of both f and g). Hence, this new function P(n) would require O(n^2) time.
The same goes for Q(n) = f(n)*g(n), which under this interpretation also requires O(n^2) time.
We use Ө-notation to write the worst-case running time of insertion sort. But I'm not able to relate the properties of Ө-notation to insertion sort: why is Ө-notation suitable for insertion sort? How does the insertion sort function f(n) lie between c1*n^2 and c2*n^2 for all n >= n0?
Describing the running time of insertion sort as Ө(n^2) implies that it has upper bound O(n^2) and lower bound Ω(n^2). I'm confused about whether insertion sort's lower bound is Ω(n^2) or Ω(n).
The use of Ө-notation:
If a function has the same upper bound and lower bound, we can use Ө-notation to describe its time complexity. Both the upper bound and the lower bound are then specified with a single notation, which tells us more about the characteristics of the function.
For example, suppose we have the function
f(n) = 4logn + loglogn
Its upper bound and lower bound are O(logn) and Ω(logn) respectively, which are the same, so it is legal to write this function as
f(n) = Ө(logn)
Proof:
Finding the upper bound:
f(n) = 4logn + loglogn
For all n >= 2:
4logn <= 4logn
loglogn <= logn
Thus,
f(n) = 4logn + loglogn <= 4logn + logn = 5logn
so f(n) = O(logn)   // where c1 can be 5 and n0 = 2
Finding the lower bound:
f(n) = 4logn + loglogn
For all n >= 2:
f(n) = 4logn + loglogn >= logn
Thus, f(n) = Ω(logn)   // where c2 can be 1 and n0 = 2
So,
f(n) = Ө(logn)
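A quick numerical sketch of that sandwich (assuming base-2 logarithms, so that loglogn is defined and non-negative from n = 2 on):

from math import log2

def f(n):
    return 4 * log2(n) + log2(log2(n))

# the sandwich 1*logn <= f(n) <= 5*logn from the proof, with n0 = 2
assert all(log2(n) <= f(n) <= 5 * log2(n) for n in range(2, 10_000))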
Similarly, in the case of insertion sort, suppose the running time of insertion sort is described by a simple function f(n); in particular, suppose f(n) = 2n^2 + n + 1. Then:
Finding the upper bound:
For all n >= 1:
2n^2 <= 2n^2 -------------------(1)
n    <= n^2  -------------------(2)
1    <= n^2  -------------------(3)
Adding (1), (2), and (3), we get
2n^2 + n + 1 <= 2n^2 + n^2 + n^2
that is,
f(n) <= 4n^2
so f(n) = O(n^2), where c = 4 and n0 = 1
Finding the lower bound:
For all n >= 1:
2n^2 + n + 1 >= 2n^2
that is,
f(n) >= 2n^2
so f(n) = Ω(n^2), where c = 2 and n0 = 1
Because the upper bound and the lower bound are the same,
f(n) = Ө(n^2)
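The same sandwich can be checked numerically; a quick sketch using the constants c = 2 and c = 4 derived above:

def f(n):
    return 2 * n ** 2 + n + 1

# lower bound with c = 2 and upper bound with c = 4, both with n0 = 1
assert all(2 * n ** 2 <= f(n) <= 4 * n ** 2 for n in range(1, 10_000))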
(The original answer included a diagram, not reproduced here, showing f(n) = 2n^2 + n + 1 lying between c1*g(n) = 2n^2 and c2*g(n) = 4n^2.)
In the worst case, insertion sort's upper bound and lower bound are O(n^2) and Ω(n^2), therefore in the worst case it is legal to write the running time of insertion sort as Ө(n^2).
In the best case, it would be Ө(n).
To be precise, the best-case running time of insertion sort is Ө(n) and the worst case is Ө(n^2). So the running time of insertion sort as a whole is O(n^2), not Ө(n^2): O(n^2) means the running time grows no faster than n^2, whereas Ө(n^2) means it grows exactly as fast as n^2 (up to constant factors).
The worst-case running time, however, is never asymptotically less than n^2, so for the worst case we use Ө(n^2) because it is more precise.
Insertion Sort Time "Computational" Complexity: O(n^2), Ω(n)
O(SUM{1..n}) = O(1/2 n(n+1)) = O(1/2 n^2 + 1/2 n) ~ O(n^2)
Ө(SUM{1..(n/2)}) = Ө(1/8 n(n+2)) = Ө(1/8 n^2 + 1/4 n) ~ Ө(n^2)
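To tie those sums back to the algorithm, here is a small sketch that counts key comparisons in a plain insertion sort; on a reverse-sorted array (the worst case) the count comes out to exactly n(n-1)/2, i.e. the SUM{1..n} behaviour up to lower-order terms, which is Ө(n^2):

def insertion_sort_comparisons(items):
    # sorts a copy of items and returns the number of key comparisons made
    items = list(items)
    comparisons = 0
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if items[j] > key:
                items[j + 1] = items[j]
                j -= 1
            else:
                break
        items[j + 1] = key
    return comparisons

for n in [10, 100, 1000]:
    worst = insertion_sort_comparisons(range(n, 0, -1))   # reverse-sorted input
    print(n, worst, n * (n - 1) // 2)                     # the two counts match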
Here is a paper that shows that Gapped Insertion Sort is O(n log n), an optimal version of insertion sort: Gapped Insertion Sort
But if you are looking for a faster sorting algorithm, there's Counting Sort, which runs in O(3n) time in the worst case, when k = n (all keys distinct), with O(n) space.
Can someone provide me a real example of how to calculate big theta?
Is big theta something like the average case, (min - max)/2?
I mean (minimum time - big O)/2.
Please correct me if I am wrong, thanks.
Big-theta notation represents the following rule:
For any two functions f(n), g(n), if f(n)/g(n) and g(n)/f(n) are both bounded as n grows to infinity, then f = Θ(g) and g = Θ(f). In that case, g is both an upper bound and a lower bound on the growth of f.
Here's an example algorithm:
import math

def find_minimum(values):
    minimum = math.inf
    for value in values:
        if value < minimum:
            minimum = value
    return minimum
We wish to evaluate the cost function c(n) where n is the size of the input list. This algorithm will perform one comparison for every item in the list, so c(n) = n.
c(n)/n = 1, which remains bounded as n goes to infinity, so c(n) grows no faster than n. This is what is meant by big-O notation: c(n) = O(n). Conversely, n/c(n) = 1 also remains bounded, so c(n) grows no slower than n. Since it grows neither slower nor faster, it must grow at the same speed. This is what is meant by theta notation: c(n) = Θ(n).
Note that c(n)/n² is also bounded, so c(n) = O(n²) as well — big-O notation is merely an upper bound on the complexity, so any O(n) function is also O(n²), O(n³)...
However, since n²/c(n) = n is not bounded, then c(n) ≠ Θ(n²). This is the interesting property of big-theta notation: it's both an upper bound and a lower bound on the complexity.
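Here is a quick instrumented sketch of the same loop that counts the comparisons, confirming that c(n) = n (so c(n)/n = 1) for any input size:

import math
import random

def count_comparisons(values):
    # instrumented version of find_minimum that counts the comparisons made
    count = 0
    minimum = math.inf
    for value in values:
        count += 1
        if value < minimum:
            minimum = value
    return count

for n in [10, 1_000, 100_000]:
    data = [random.random() for _ in range(n)]
    print(n, count_comparisons(data) / n)   # always exactly 1.0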
Big theta is a tight bound: for a function T(n), if T(n) is both Omega(f(n)) and O(f(n)), then Theta(f(n)) is the tight bound for T(n).
In other words, Theta(f(n)) 'describes' a function T(n) if both O [big O] and Omega 'describe' the same T with the same f.
For example, a quicksort [with correct median choices] always takes at most O(nlogn) time and at least Omega(nlogn) time, so quicksort [with good median choices] is Theta(nlogn).
EDIT: added discussion from the comments:
Searching an array is still Theta(n). The Theta notation does not indicate worst/best case, but the behavior of the case being analyzed. E.g., when searching an array, let T(n) = number of operations in the worst case. Here, obviously T(n) = O(n), but also T(n) >= n/2, because in the worst case you need to iterate over the whole array, so T(n) = Omega(n), and therefore Theta(n) is the asymptotic bound.
From http://en.wikipedia.org/wiki/Big_O_notation#Related_asymptotic_notations, we learn that "Big O" denotes an upper bound, whereas "Big Theta" denotes an upper and lower bound, i.e. in the limit as n goes to infinity:
f(n) = O(g(n)) --> |f(n)| < k.g(n)
f(n) = Theta(g(n)) --> k1.g(n) < f(n) < k2.g(n)
So you cannot infer Big Theta from Big O.
Big-Theta (Θ) notation provides an asymptotic upper and lower bound on the growth rate of an algorithm's running time. To calculate the big-Theta notation of a function, you need to find two non-negative functions, f(n) and g(n), such that:
There exist positive constants c1, c2 and n0 such that 0 <= c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.
f(n) and g(n) have the same asymptotic growth rate.
The big-Theta notation for the function f(n) is then written as Θ(g(n)). The purpose of this notation is to provide a rough estimate of the running time, ignoring lower order terms and constant factors.
For example, consider the function f(n) = 2n^2 + 3n + 1. To calculate its big-Theta notation, we can choose g(n) = n^2. Then, we can find c1 and c2 such that 0 <= c1 * n^2 <= 2n^2 + 3n + 1 <= c2 * n^2 for all n >= n0. For example, c1 = 1/2 and c2 = 3 work for all n >= 4. So, f(n) = Θ(n^2).
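A quick sketch verifying those constants numerically:

def f(n):
    return 2 * n ** 2 + 3 * n + 1

# 0 <= (1/2)*n^2 <= f(n) <= 3*n^2 holds for all n >= 4
assert all(0.5 * n ** 2 <= f(n) <= 3 * n ** 2 for n in range(4, 10_000))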