I have used the Master Theorem to solve recurrence relations. I have gotten one down to Θ(3n^2 - 9n). Does this equal Θ(n^2)? I have another recurrence for which the solution is Θ(2n^3 - 100n^2). In Big-Theta notation do you always keep only the largest term? So my second one would be Θ(n^3)? It just seems like 100n^2 would be more important in the second case. So will it matter if I discard it?
Any suggestions?
Yes, your assumptions are correct. The first one is Θ(n^2) and the second one is Θ(n^3). When you are using Θ notation you only keep the largest term.
For your second recurrence, consider n = 1000: then n^3 = 1000000000, whereas 100n^2 is just 100000000. As the value of n increases, n^3 becomes more and more dominant over 100n^2.
For theoretical purposes you don't need to consider the constant, however large it might be. But in practice you might prefer an algorithm with a small constant even if its complexity is higher. For example, it might be better to use an algorithm with complexity 0.01n^3 over an algorithm with complexity 10000n^2 if the value of n is not very large.
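A rough sketch of my own (not from the answer above) that makes this concrete: setting 0.01n^3 = 10000n^2 gives a break-even point of n = 1,000,000, and below that the cubic algorithm does less work.

def cheaper_algorithm(n):
    # compare the two cost functions from the example above
    return "0.01*n^3" if 0.01 * n**3 < 10000 * n**2 else "10000*n^2"

for n in (10, 1000, 999_999, 1_000_001):
    print(n, cheaper_algorithm(n))
# the cubic cost is lower for every n below 1,000,000, and higher after that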
If we have the function f(n) = 3n^2 - 9n, lower-order terms and constants can be ignored; we consider the higher-order term because it plays the major role in the growth of the function.
By considering only the higher-order term we can easily find the upper bound. Here is the example.
f(n) = 3n^2 - 9n
For all sufficiently large values of n >= 1,
3n^2 <= 3n^2
and -9n <= n^2,
thus f(n) = 3n^2 - 9n <= 3n^2 + n^2
                      <= 4n^2
*The upper bound of f(n) is 4n^2; that means that for all sufficiently large values of n >= 1, the value of f(n) will not be greater than 4n^2.*
Therefore f(n) = O(n^2) with c = 4 and n0 = 1; and since 3n^2 - 9n >= n^2 for all n >= 5, a matching lower bound holds as well, so f(n) = Θ(n^2).
We can directly find the upper bound by ignoring the lower-order terms and constants in the equation f(n) = 3n^2 - 9n; the result will be the same, Θ(n^2).
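A quick numeric sanity check of my own (not part of the original argument) for the constants used above: the upper bound 4n^2 holds from n = 1, and the lower bound n^2 holds from n = 5.

def f(n):
    return 3 * n**2 - 9 * n

assert all(f(n) <= 4 * n**2 for n in range(1, 10_000))   # c = 4, n0 = 1
assert all(n**2 <= f(n) for n in range(5, 10_000))       # c = 1, n0 = 5
print("both bounds hold on the tested range")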
I have some confusion regarding the Asymptotic Analysis of Algorithms.
I have been trying to understand this upper-bound case and have seen a couple of YouTube videos. In one of them, there was an example where we have to find the upper bound of the equation 2n+3. So, by looking at this, one can say that it is going to be O(n).
My first question:
In algorithmic complexity, we have learned to drop the constants and find the dominant term, so is this asymptotic analysis meant to prove that theory? Or does it have other significance? Otherwise, what is the point of this analysis when the bound is always going to be the biggest term in the equation? For example, if it were n+n^2+3, then the upper bound would always be n^2 for some c and n0.
My second question:
As per the rule, the upper-bound formula in asymptotic analysis must satisfy this condition: f(n) = O(g(n)) iff f(n) <= c·g(n) where n > n0, c > 0, n0 >= 1.
i) Is n the number of inputs, or does n represent the number of steps we perform? And does f(n) represent the algorithm?
ii) In the video, to prove that an upper bound of the equation 2n+3 could be n^2, the presenter chose c = 1, and that is why, to satisfy the inequality, n had to be >= 3, whereas one could have chosen c = 5 and n0 = 1 as well, right? So then why, in most cases in the video, was the presenter changing the value of n0 and not c to satisfy the conditions? Is there a rule, or is it arbitrary? Can I change either c or n0 to satisfy the condition?
My third question:
In the same video, the presenter mentioned that n0 (n-naught) is the number of steps. Is that correct? I thought n0 is the limit after which the graph becomes the upper bound (after n0, the condition is satisfied for all values of n); hence n0 also represents the input.
Would you please help me understand because people come up with different ideas in different explanations, and I want to understand them correctly?
Edit
The accepted answer clarified all of the questions except the first one. I have gone through many articles on the web, and here I am documenting my conclusion if anyone else has the same question. This will help them.
My first question was:
In algorithmic complexity, we have learned to drop the constants and find the dominant term, so is this asymptotic analysis meant to prove that theory?
No. Asymptotic analysis describes algorithmic complexity, which is all about understanding or visualizing the asymptotic (tail) behavior of a function or a group of functions, for example by plotting the mathematical expression.
In computer science, we use it to evaluate (note: evaluating is not the same as measuring) the performance of an algorithm in terms of the input size.
For example, these two functions belong to the same group:
mySet = set()

def addToMySet(n):
    # one set insertion per outer iteration: about n operations in total
    for i in range(n):
        mySet.add(i*i)

mySet2 = set()

def addToMySet2(n):
    # 500 set insertions per outer iteration: about 500*n operations in total
    for i in range(n):
        for j in range(500):
            mySet2.add(i*j)
Even though the execution time of addToMySet2(n) is always greater than the execution time of addToMySet(n), the tail behavior of both functions is the same with respect to large n: if one plots them on a graph, both curves grow linearly, so they belong to the same group. Using asymptotic analysis, we get to see that behavior and group them.
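A small sketch of my own (not part of the original post): instead of timing the two functions, count the set insertions each one performs. The second function always does 500 times more work, but that ratio is a constant, so both counts grow linearly with n and the functions end up in the same Θ(n) group.

def insertions_1(n):
    return sum(1 for i in range(n))                       # one add per i

def insertions_2(n):
    return sum(1 for i in range(n) for j in range(500))   # 500 adds per i

for n in (10, 100, 1000):
    c1, c2 = insertions_1(n), insertions_2(n)
    print(n, c1, c2, c2 / c1)   # the ratio stays at 500.0 for every n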
A mistake that I made was assuming that the upper bound represents the worst case. In reality, an upper bound can be stated for each of the best, average, and worst cases, so the correct way of putting it would be:
the upper/lower bound in the best/average/worst case of an algorithm.
We can't equate the upper bound of an algorithm with the worst-case time complexity, or the lower bound with the best-case complexity. Moreover, an upper bound can be higher than the worst case, because upper bounds are usually asymptotic formulae that have merely been proven to hold.
I have seen this kind of question, like "find the worst-case time complexity of such and such algorithm", and the answer is either O(n) or O(n^2) or O(log n), etc.
For example, if we consider the function addToMySet2(n), one would say the algorithmic time complexity of that function is O(n), which is technically incomplete, because three factors are involved in determining algorithmic time complexity: the bound (upper or lower), the bound type (inclusive or strict), and the case (best, average, or worst).
When one writes O(n), it is derived from this asymptotic analysis: f(n) = O(g(n)) iff there exist c > 0 and n0 > 0 such that f(n) <= c·g(n) for all n > n0, so we are stating an upper bound on the best/average/worst case. In the statement above, the case is missing.
I think we can consider that, when not indicated, big O notation generally describes an asymptotic upper bound on the worst-case time complexity. Otherwise, one can also use it to express asymptotic upper bounds on the average- or best-case time complexities.
The whole point of asymptotic analysis is to compare how the performance of algorithms scales. For example, if I write two versions of the same algorithm, one with O(n^2) time complexity and the other with O(n*log(n)) time complexity, I know for sure that the O(n*log(n)) one will be faster when n is "big". How big? It depends. You actually can't know unless you benchmark it. What you do know is that at some point, the O(n*log(n)) version will always be better.
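A toy benchmark of my own (the problem and functions are made up for illustration, not taken from the answer): the same question, "does this list contain a duplicate?", answered by an O(n^2) algorithm and an O(n*log(n)) one. Where exactly the O(n*log(n)) version starts winning depends on the machine and the constants, which is why you have to benchmark.

import random
import time

def has_duplicate_quadratic(xs):          # compares every pair: O(n^2)
    return any(xs[i] == xs[j] for i in range(len(xs)) for j in range(i + 1, len(xs)))

def has_duplicate_sorting(xs):            # sort, then scan neighbours: O(n*log(n))
    ys = sorted(xs)
    return any(a == b for a, b in zip(ys, ys[1:]))

for n in (100, 1000, 5000):
    xs = random.sample(range(10 * n), n)  # distinct values: worst case for both
    t0 = time.perf_counter(); has_duplicate_quadratic(xs)
    t1 = time.perf_counter(); has_duplicate_sorting(xs)
    t2 = time.perf_counter()
    print(n, f"quadratic: {t1 - t0:.4f}s", f"n*log(n): {t2 - t1:.4f}s")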
Now with your questions:
the "lower" n in n+n^2+3 is "dropped" because it is negligible when n scales up compared to the "dominant" one. That means that n+n^2+3 and n^2 behave the same asymptotically. It is important to note that even though 2 algorithms have the same time complexity, it does not mean they are as fast. For example, one could be always 100 times faster than the other and yet have the exact same complexity.
(i) n can be anything. It may be the size of the input (e.g. an algorithm that sorts a list), but it may also be the input itself (e.g. an algorithm that gives the n-th prime number), or a number of iterations, etc.
(ii) For this example he could have taken any c > 0, provided n0 is chosen large enough; he picked c = 1, just as he could have picked c = 1.618. The standard formulation is:
f(n) = O(g(n)) iff there exist constants c > 0 and n0 > 0 such that f(n) <= c·g(n) for all n > n0.
The n0 from the formula is a pure mathematical construct. For a given c > 0, it is the value of n from which the function f is bounded above by c·g. Since n can represent anything (size of a list, input value, etc.), the same goes for n0.
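As a small check of my own (the specific (c, n0) pairs below are just examples), several different pairs all witness 2n+3 = O(n^2); you can trade a larger c for a smaller n0 and vice versa.

witnesses = [(1, 3), (5, 1), (2, 2)]   # (c, n0) pairs
for c, n0 in witnesses:
    assert all(2 * n + 3 <= c * n**2 for n in range(n0, 10_000))
print("every (c, n0) pair works")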
I finally thought I understood what it means when a function f(n) is sandwiched between a lower and an upper bound of the same class, and so can be described as Θ(n).
As an example:
f(n) = 2n + 3
1n <= 2n + 3 <= 5n for all values of n >= 1
In the example above it made perfect sense: the same order of n is on both sides, so f(n) is sandwiched between 1 * g(n) and 5 * g(n) with g(n) = n.
It was also a lot clearer when I stopped using the notations to think about the best or worst case, and instead thought of them as an upper, lower, or average bound.
So, now thinking I finally understood this and the maths around it, I went back to visit this page: https://www.bigocheatsheet.com/ to look at the run times of various algorithms, and was suddenly confused again about how many of the algorithms there, for example bubble sort, do not have the same order on both sides (upper and lower bound), yet theta is used to describe them.
Bubble sort has Ω(n) and O(n^2), but the theta value is given as Θ(n^2). How is it that it can have Θ(n^2) if the upper bound of the function is on the order of n^2 but the lower bound of the function is on the order of n?
Actually, the page you referred to is highly misleading - even if not completely wrong. If you analyze the complexity of an algorithm, you first have to specify the scenario: i.e. whether you are talking about worst-case (the default case), average case or best-case. For each of the three scenarios, you can then give a lower bound (Ω), upper bound (O) or a tight bound (Θ).
Take insertion sort as an example. While the page is, strictly speaking, correct in that the best case is Ω(n), it could just as well (and more precisely) have said that the best case is Θ(n). Similarly, the worst case is indeed O(n²) as stated on that page (as well as Ω(n²) or Ω(n) or O(n³)), but more precisely it's Θ(n²).
Using Ω to always denote the best case and O to always denote the worst-case is, unfortunately, an often made mistake. Takeaway message: the scenario (worst, average, best) and the type of the bound (upper, lower, tight) are two independent dimensions.
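To make the two dimensions concrete, here is a rough sketch of my own (not from the answer): counting the comparisons made by insertion sort shows that the best case (already sorted input) grows like n, i.e. Θ(n), while the worst case (reverse-sorted input) grows like n²/2, i.e. Θ(n²).

def insertion_sort_comparisons(values):
    xs = list(values)
    comparisons = 0
    for i in range(1, len(xs)):
        key, j = xs[i], i - 1
        while j >= 0:
            comparisons += 1
            if xs[j] <= key:        # already in place: stop scanning
                break
            xs[j + 1] = xs[j]       # shift the larger element to the right
            j -= 1
        xs[j + 1] = key
    return comparisons

n = 1000
print(insertion_sort_comparisons(range(n)))         # best case:  n - 1 comparisons
print(insertion_sort_comparisons(range(n, 0, -1)))  # worst case: n*(n-1)/2 comparisons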
For example, I have f(N) = 5N + 3 from a program. I want to know what the big-oh of this function is. Taking the higher-order term, we say it is O(N).
Is dropping lower-order terms and constants the correct method to find the big-oh of any program?
If we got O(N) simply by looking at the complexity function 5N + 3, then what is the purpose of this formula F(N) <= C*G(N)?
I got to know that this formula is just for comparing two functions. My question is:
In this formula, F(N) <= C*G(N), I have F(N) = 5N + 3, but what is this upper bound G(N)? Where does it come from? Where do we take it from?
I have studied many books and many posts, but I am still confused.
Q: Is dropping lower-order terms and constants the correct method to find the big-oh of any program?
Yes, most people who have at least some experience with examining time complexities use this method.
Q: If we got O(N) simply by looking at the complexity function 5N + 3, then what is the purpose of this formula F(N) <= C*G(N)?
To formally prove that you correctly estimated the big-oh for a certain algorithm. Imagine that you have F(N) = 5N^2 + 10 and (incorrectly) conclude that the big-oh complexity for this example is O(N). By using this formula you can quickly see that this is not true, because there does not exist a constant C such that 5N^2 + 10 <= C*N holds for all large values of N. That inequality would imply C >= 5N + 10/N, but no matter how large a constant C you choose, there is always an N for which 5N + 10/N exceeds it, so the inequality cannot hold.
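A small sketch of my own showing this failure numerically: whatever constant C you pick, there is an N past which 5*N^2 + 10 exceeds C*N, so F(N) = 5N^2 + 10 is not O(N).

def smallest_failing_n(C):
    n = 1
    while 5 * n**2 + 10 <= C * n:
        n += 1
    return n   # the smallest n where the inequality 5n^2 + 10 <= C*n fails

for C in (10, 1000, 1_000_000):
    print(C, smallest_failing_n(C))   # a failing N exists for every C we try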
Q: In this formula, F(N) <= C*G(N), I have F(N) = 5N + 3, but what is this upper bound G(N)? Where does it come from? Where do we take it from?
It comes from examining F(N), specifically from finding its highest-order term. You need some math knowledge to estimate which function grows faster than another; for a start, check this useful link. There are several classes of complexity - constant, logarithmic, polynomial, exponential, etc. However, in most cases it is easy to find the highest-order term of any function. If you are not sure, you can always plot a graph of the function or formally prove that one function grows faster than the other. For example, if F(N) = log(N^3) + sqrt(N), maybe it is not clear at first glance what the highest-order term is, but if you calculate or plot log(N^3) for N = 1, 10, and 1000 and sqrt(N) for the same values, it becomes clear that sqrt(N) grows faster, so the big-oh for this function is O(sqrt(N)).
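A quick numeric comparison of my own for those two terms: sqrt(N) overtakes log(N^3) somewhere between N = 10 and N = 1000 and then grows far faster, so F(N) = log(N^3) + sqrt(N) is O(sqrt(N)). The base of the logarithm only changes a constant factor, so it does not affect the conclusion.

import math

for N in (1, 10, 1000, 1_000_000):
    print(N, round(math.log(N**3), 1), round(math.sqrt(N), 1))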
I know that if I have a for loop and a nested for loop, which both iterate 1 to n times, I can multiply the run times of both loops to get O(n^2). This is a clean and simple calculation. However, suppose you had iterations like so,
n = 2, k = 5
n = 3, k = 9
n = 4, k = 14
where k is the number of times the inner for loop iterates. At one point it is larger than n^2, then it is exactly n^2, then it becomes less than n^2. Assuming you cannot determine k based on n, and these values of n may even be very far apart, how do you calculate Big-O?
I tried graphing the points. At one point, I could say it was O(n^3) since some points exceed n^2, and further on, I could say it was O(n^2). Which one should I choose?
You state in your question that k is:
"... At one point, it is larger than n^2"
This is the uncertainty (or non-specificity) in your question that makes it hard to answer rigorously. Anyway, for the remainder of this answer, we shall assume that what you mean by the quote above is that:
For all values of n, the value of k(n) is bounded from above by C·n^2, for some constant C > 0.
From here on, let's refer to this statement as (+).
Now, since you're mentioning Big-O notation, we'll proceed to somewhat loosely define what this actually means:
f(n) = O(g(n)) means c·g(n) is an upper bound on f(n). Thus there exists some constant c such that f(n) is always ≤ c·g(n), for sufficiently large n (i.e., n ≥ n0 for some constant n0).
I.e., Big-O notation is a way to describe an upper bound here for the asymptotic (limiting) behaviour of our algorithm. You write in your question, however, that:
"And at one point, I could say it was O(n^3) since some points exceed n^2, and further down, it would be O(n^2)"
Now this is a very specific analysis of how the inner loop of your algorithm behaves for particular values of n, and really not something that is related to asymptotic analysis (or Big-O notation). We're not interested in specifics about how the algorithm behaves for particular values of n, but in whether we can find some general upper bound for the algorithm given that n is "sufficiently large" (n ≥ n0 for some constant n0).
Now, with these comments above, we can proceed to analysing the asymptotic behaviour of your algorithm.
We can approach this using Sigma notation, making use of statement (+) above, k(n) ≤ C·n^2. The total number of inner-loop iterations is
Σ_{i=1}^{n} k(i) ≤ Σ_{i=1}^{n} C·i^2 ≤ C·n·n^2 = C·n^3 = O(n^3).   (++)
The last step (++) follows from the definition of Big-O notation that we loosely stated above.
Hence, given that we interpret your information regarding k as (+), your algorithm runs in O(n^3) (which is an upper bound, not necessarily a tight one).
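As an illustrative simulation of my own (under assumption (+), with a made-up inner iteration count k(i) = 2*i^2 + 3*i, which satisfies k(i) ≤ C·i^2 for C = 5 and i ≥ 1), the total work of the nested loops stays below a constant multiple of n^3.

def total_inner_iterations(n, k=lambda i: 2 * i**2 + 3 * i):
    return sum(k(i) for i in range(1, n + 1))

for n in (10, 100, 1000):
    total = total_inner_iterations(n)
    print(n, total, total / n**3)   # the ratio stays bounded (it tends to about 2/3)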
To compare the asymptotic order of the two functions, I calculated the limit of the first function over the second function as n goes to infinity.
The answer was 2 (I had to use l'Hôpital's rule), which means that for really high values of n, log(n^2) is larger than log(5n).
My question is: is it incorrect to say that log(n^2) is asymptotically larger than log(5n)?
My friend told me that when the limit of first function over the second function is a constant, that means that their asymptotic order is equal. Can someone confirm?
Actually, log(5n) = log 5 + log n and log(n^2) = 2 log n, so for large n (specifically n > 5), log(n^2) is larger than log(5n). Whether we can also call it asymptotically larger depends on the definition we use. One definition of asymptotic is as follows.
The term asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken). A line or curve A that is asymptotic to a given curve C is called the asymptote of C.
Depending on the context we may ignore constant factors and say the two functions are of the same order. We may express this using the usual notations O, Θ and Ω. According to the widely accepted definition from the algorithmic standpoint, these two particular functions are asymptotically equivalent:
We say A(n) is asymptotically larger than B(n) if lim_{n→∞} A(n)/B(n) = ∞.
In this case the limit converges to 2 (or 1/2 for the reciprocal), not to ∞, so the two functions are asymptotically equal.
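A numeric check of my own of the limit discussed above: the ratio log(n^2) / log(5n) creeps up towards 2 but never towards infinity, so under the "limit = ∞" definition log(n^2) is not asymptotically larger than log(5n).

import math

for n in (10, 10**3, 10**6, 10**12, 10**24):
    print(n, math.log(n**2) / math.log(5 * n))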
log(n^2) = 2 * log(n) and log(5n) = log(5) + log(n). So both are asymptotically equal when speaking about algorithms.
log(n^2) = 2*log(n); this identity holds regardless of the base of the logarithm.
If the limit is 2, then it means that log(5n) belongs to O(log(n^2)), doesn't it...