What is the best case time complexity of the following algorithm?

I am given the following function, where g is some other function that runs in Θ(n²). What is the best case time complexity for this function?
void f(int n) {
    if (n % 2 == 0) {
        return;
    } else {
        g(n);
    }
}
Clearly, the function runs in constant time if n is even, which tempts me to say Θ(1), but I don't think that's the correct answer because I don't think that's how an asymptotic tight bound is defined.
I've looked at a lot of similar questions on SO regarding big theta notation and best case analysis, but they all pertain to inputs that are arrays of length n, rather than just an integer. I think best case analysis makes sense in those cases because it depends on the elements in the array.
However, for this question there is no analog to "elements in the array" that seems to matter in determining the complexity, other than g, whose complexity is fixed.
This leads me to conclude that the actual best case time complexity is Θ(n²). Is my understanding correct? Is it Θ(n²) or Θ(1)?

You seem quite confused about the usage of Θ notation.
I don't think that's the correct answer because I don't think that's
how an asymptotic tight bound is defined.
Θ notation is an asymptotic tight bound and it may be defined as:
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if
f(n) = O(g(n)) and f(n) = Ω(g(n)).
O notation gives an asymptotic upper bound and Ω notation gives an asymptotic lower bound.
In the algorithm you provided,
void f(int n)
{
    if (n % 2 == 0) return;
    else g(n);
}
where g(n) = Θ(n²).
The running time of the algorithm belongs to both O(n²) and Ω(1). We cannot use the Θ notation to describe the running time of your algorithm if we are considering all the possible values of n.
However, if we look at the running time of the algorithm when the only value that n can take is even, which is the best case, then we can say that in the best case, the running time of the algorithm belongs to both O(1) and Ω(1). Hence, we can say that the best case complexity of the algorithm is Θ(1).
Do notice the difference between saying,
The running time of the algorithm is Θ(1). //Wrong in this case
and
The best case running time of the algorithm is Θ(1). //Correct in this case
Similarly, if we look at the running time of the algorithm when the only value that n can take is odd, then we find that the worst case complexity of the algorithm is Θ(n²).
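To make this concrete, here is a minimal step-counting sketch in Python (g_steps is a hypothetical stand-in for g that performs on the order of n² basic operations; the counting is mine, for illustration):

def g_steps(n):
    # Hypothetical stand-in for g: performs ~n^2 basic operations.
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

def f_steps(n):
    # Mirrors f: even n returns immediately; odd n pays for g(n).
    if n % 2 == 0:
        return 1           # constant work
    return 1 + g_steps(n)  # quadratic work

print(f_steps(10))  # 1   -> best case: Θ(1)
print(f_steps(11))  # 122 -> worst case: Θ(n²)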
I hope I have helped you.

I don't agree with aksingh's answer.
From the given,
when n is even, all of the best case, worst case and plain running times are Θ(1), and
when n is odd, all of the best case, worst case and plain running times are Θ(n²).
In fact, there is no difference between the three times (in the asymptotic sense), because of the Θ.
It is important to notice that the function that describes these running times need not be "a single expression".
If you want to obtain results that do not depend on parity, you can write that the best, worst and plain case running times are Ω(1) and O(n²).
To illustrate, the plot (omitted here) shows some hypothetical running times (red dots); in green and magenta, the best and worst cases for odd n; in blue, even n; and in gray, the best case function.

Related

Asymptotic analysis, Upper bound

I have some confusion regarding the Asymptotic Analysis of Algorithms.
I have been trying to understand this upper bound case and have watched a couple of YouTube videos. In one of them, there was an example where we have to find the upper bound of the function 2n+3. By looking at this, one can say that it is going to be O(n).
My first question:
In algorithmic complexity, we have learned to drop the constants and find the dominant term, so is asymptotic analysis there to prove that theory, or does it have other significance? Otherwise, what is the point of this analysis when the answer is always the dominant term in the expression? For example, if it were n + n^2 + 3, then the upper bound would always be n^2, for some c and n0.
My second question:
As per the rule, the upper bound in asymptotic analysis must satisfy this condition: f(n) = O(g(n)) iff f(n) <= c·g(n) for all n > n0, with c > 0 and n0 >= 1.
i) Is n the number of inputs, or does n represent the number of steps we perform? And does f(n) represent the algorithm?
ii) In the video, to prove that the upper bound of 2n+3 could be n^2, the presenter chose c = 1, which is why n had to be >= 3 to satisfy the inequality; one could have chosen c = 5 and n0 = 1 as well, right? So why, in most cases in the video, was the presenter changing the value of n0 and not c to satisfy the condition? Is there a rule, or is it arbitrary? Can I change either c or n0 to satisfy the condition?
My third question:
In the same video, the presenter mentioned that n0 ("n naught") is the number of steps. Is that correct? I thought n0 is the limit after which the graph of c·g(n) becomes the upper bound (after n0, the condition is satisfied for all values of n); hence n0 also represents the input.
Would you please help me understand because people come up with different ideas in different explanations, and I want to understand them correctly?
Edit
The accepted answer clarified all of the questions except the first one. I have gone through many articles on the web, and I am documenting my conclusion here in case anyone else has the same question; it may help them.
My first question was
In algorithmic complexity, we have learned to drop the constants and
find the dominant term, so is this Asymptotic Analysis to prove that
theory?
No, asymptotic analysis does not prove that theory; it describes algorithmic complexity, which is all about understanding or visualizing the asymptotic, or tail, behavior of a function or a group of functions by plotting their mathematical expressions.
In computer science, we use it to evaluate (note: evaluate, not measure) the performance of an algorithm in terms of input size. For example, these two functions belong to the same group:
mySet = set()

def addToMySet(n):
    for i in range(n):
        mySet.add(i*i)

mySet2 = set()

def addToMySet2(n):
    for i in range(n):
        for j in range(500):
            mySet2.add(i*j)
Even though the execution time of addToMySet2(n) is always greater than the execution time of addToMySet(n), the tail behavior of both functions is the same with respect to large n. If one plots them on a graph, the tendency of the graph for both functions is linear; thus they belong to the same group. Using asymptotic analysis, we get to see this behavior and group them.
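As a rough sanity check, here is a minimal sketch (the step-counting helpers are mine, obtained by simply reading off the loop structure) showing that the two functions differ only by a constant factor, which asymptotic analysis ignores:

def steps_addToMySet(n):
    # One set insertion per iteration of the single loop: n steps.
    return n

def steps_addToMySet2(n):
    # 500 inner iterations per outer iteration: 500 * n steps.
    return 500 * n

for n in (1000, 2000, 4000):
    # Doubling n doubles both counts: both functions are linear in n.
    print(n, steps_addToMySet(n), steps_addToMySet2(n))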
A mistake that I made was assuming that the upper bound represents the worst case. In reality, an upper bound is associated with each of the best, average, and worst cases, so the correct way of putting it would be:
the upper/lower bound in the best/average/worst case of an algorithm.
We can't equate the upper bound of an algorithm with the worst-case time complexity, nor the lower bound with the best-case complexity. Moreover, an upper bound can be higher than the worst case, because upper bounds are often simply asymptotic formulae that have been proven to hold.
I have seen questions like "find the worst-case time complexity of such and such algorithm", where the answer is O(n) or O(n^2) or O(log n), etc.
For example, if we consider the function addToMySet2(n), one would say its time complexity is O(n), which is technically incomplete, because three factors are involved in determining algorithmic time complexity: the bound, the bound type (inclusive upper bound vs. strict upper bound), and the case.
When one writes O(n), it derives from this asymptotic analysis: f(n) = O(g(n)) iff there exist c > 0 and n0 > 0 such that f(n) <= c·g(n) for all n > n0. So we are considering an upper bound on the best/average/worst case, and in the statement above the case is missing.
I think we can consider that, when not indicated, big O notation generally describes an asymptotic upper bound on the worst-case time complexity; otherwise, one can also use it to express asymptotic upper bounds on the average or best-case time complexities.
The whole point of asymptotic analysis is to compare how algorithms' performance scales. For example, if I write two versions of the same algorithm, one with O(n^2) time complexity and the other with O(n*log(n)) time complexity, I know for sure that the O(n*log(n)) one will be faster when n is "big". How big? It depends. You actually can't know unless you benchmark it. What you do know is that at some point, the O(n*log(n)) one will always be better.
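Here is a small sketch of that "how big? it depends" point; the constant factors 2 and 100 below are made-up assumptions, not measurements:

import math

def t_quadratic(n):
    # Hypothetical cost model: 2 * n^2 basic operations.
    return 2 * n * n

def t_nlogn(n):
    # Hypothetical cost model: 100 * n * log2(n) basic operations.
    return 100 * n * math.log2(n)

# Find the first n where the n*log(n) algorithm wins.
n = 2
while t_nlogn(n) >= t_quadratic(n):
    n += 1
print(n)  # about 439 with these constants; different constants move the crossover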
Now with your questions:
the "lower" n in n+n^2+3 is "dropped" because it is negligible when n scales up compared to the "dominant" one. That means that n+n^2+3 and n^2 behave the same asymptotically. It is important to note that even though 2 algorithms have the same time complexity, it does not mean they are as fast. For example, one could be always 100 times faster than the other and yet have the exact same complexity.
(i) n can be anything. It may be the size of the input (e.g. an algorithm that sorts a list), but it may also be the input itself (e.g. an algorithm that gives the n-th prime number), or a number of iterations, etc.
(ii) He could have taken any sufficiently large c; he chose c = 1 as an example, but he could just as well have chosen c = 1.618. Actually, the correct formulation would be:
f(n) = O(g(n)) iff there exist c > 0 and n0 > 0 such that f(n) <= c·g(n) for all n > n0
The n0 from the formula is a pure mathematical construct. For a given c > 0, it is the value of n from which the function f is bounded by c·g. Since n can represent anything (size of a list, input value, etc.), the same goes for n0.
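As a worked illustration of how c and n0 trade off for f(n) = 2n+3 and g(n) = n (both pairs below are just examples; many others work):

with c = 5:  2n + 3 <= 5n  holds for all n >= 1, so pick n0 = 1
with c = 3:  2n + 3 <= 3n  holds for all n >= 3, so pick n0 = 3

Either pair (c, n0) witnesses 2n + 3 = O(n); choosing a smaller c simply forces a larger n0.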

Asymptotic Notation for algorithms

I finally thought I understood what it means when a function f(n) is sandwiched between a lower and an upper bound which are of the same class, and so can be described as Θ(n).
As an example:
f(n) = 2n + 3
1n <= 2n + 3 <= 5n for all values of n >= 1
In the example above it made perfect sense, the order of n is on both sides, so f(n) is sandwiched between 1 * g(n) and 5 * g(n).
It was also a lot clearer when I tried not to use the notations to think about best or worst case, and instead thought of them as an upper, lower, or tight bound.
So now, thinking I finally understood this and the maths around it, I went back to this page: https://www.bigocheatsheet.com/ to look at the running times of various algorithms, and was suddenly confused again about how many of the algorithms there, for example bubble sort, do not have the same order on both sides (upper and lower bound), yet theta is used to describe them.
Bubble sort has Ω(n) and O(n^2), but the theta value is given as Θ(n^2). How can it have Θ(n^2) if the upper bound of the function is of the order n^2 but the lower bound of the function is of the order n?
Actually, the page you referred to is highly misleading - even if not completely wrong. If you analyze the complexity of an algorithm, you first have to specify the scenario: i.e. whether you are talking about worst-case (the default case), average case or best-case. For each of the three scenarios, you can then give a lower bound (Ω), upper bound (O) or a tight bound (Θ).
Take insertion sort as an example. While the page is, strictly speaking, correct in that the best case is Ω(n), it could just as well (and more precisely) have said that the best case is Θ(n). Similarly, the worst case is indeed O(n²) as stated on that page (as well as Ω(n²) or Ω(n) or O(n³)), but more precisely it's Θ(n²).
Using Ω to always denote the best case and O to always denote the worst case is, unfortunately, a common mistake. Takeaway message: the scenario (worst, average, best) and the type of the bound (upper, lower, tight) are two independent dimensions.
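To see both dimensions on one concrete algorithm, here is a minimal sketch of bubble sort with an early exit, counting comparisons (the counting wrapper is mine, for illustration):

def bubble_sort_comparisons(items):
    # Sorts a copy of items; returns the number of comparisons made.
    a = list(items)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break  # already sorted: exit after a single pass
    return comparisons

n = 100
print(bubble_sort_comparisons(range(n)))         # 99   = n-1      -> best case Θ(n)
print(bubble_sort_comparisons(range(n, 0, -1)))  # 4950 = n(n-1)/2 -> worst case Θ(n²)

The early exit is what makes the best case Θ(n); without it, bubble sort would be Θ(n²) in every case.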

BigO Notation, understanding

I saw in one of the videos (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n+3, then Big O is O(n).
Now my question is: if I am a developer and I was given O(n) as the upper bound of f(n), how will I understand what exact value the upper bound is? Because in 2n+3, we remove 2 (as it is a constant) and 3 (because it is also a constant). So, if my function is f(n) with n = 1, I can't say g(n) is an upper bound at n = 1.
g(1) = 1 cannot be an upper bound for f(1) = 5. I find this hard to understand.
I know this is a partial (and probably wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n+3 has the same growth rate as f(n) = n
If you plot the functions, you will see that both have the same linear growth; and as n -> infinity, the relative difference between the two becomes minimal.
In Big O notation, evaluating f(n) = 2n+3 at n = 1 means nothing; you need to look at the trend, not discrete values.
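A quick numeric illustration of that trend: the ratio (2n+3)/n settles toward the constant 2, which is exactly the factor Big O discards.

for n in (1, 10, 100, 1000, 10000):
    f = 2 * n + 3
    print(n, f, f / n)  # ratios: 5.0, 2.3, 2.03, 2.003, 2.0003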
As a developer, you will consider big-O as a first indication when deciding which algorithm to use. If you have an algorithm which is, say, O(n^2), you will try to find out whether there is another one which is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need to use other criteria for your decision. However, if the problem is not inherently O(n^2), but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So, the big-O notation helps you to better classify the problem and then try to solve it with an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with this complexity, then you will need to weigh them using different criteria.

What is difference between different asymptotic notations?

I am really very confused about asymptotic notations. As far as I know, Big-O notation is for the worst case, omega is for the best case, and theta is for the average case. However, I have always seen Big O being used everywhere, even for the best case. For example, in the following link, see the table where the time complexities of different sorting algorithms are mentioned:
https://en.wikipedia.org/wiki/Best,_worst_and_average_case
Everywhere in the table, big O notation is used, irrespective of whether it is the best case, worst case, or average case. Then what is the use of the other two notations, and where do we use them?
As far as I know, Big-O notation is for worst case, omega is for best case and theta is for average case.
They aren't. Omicron is for (asymptotic) upper bound, omega is for lower bound and theta is for tight bound, which is both an upper and a lower bound. If the lower and upper bound of an algorithm are different, then the complexity cannot be expressed with theta notation.
The concepts of upper, lower, and tight bounds are orthogonal to the concepts of best, average, and worst case. You can analyze the upper bound of each case, and you can analyze different bounds of the worst case (and also any other combination of the above).
Asymptotic bounds are always in relation to the set of variables in the expression. For example, O(n) is in relation to n. The best, average and worst cases emerge from everything else but n. For example, if n is the number of elements, then the different cases might emerge from the order of the elements, or the number of unique elements, or the distribution of values.
However, I have always seen Big O being used everywhere, even for best case.
That's because the upper bound is almost always the most important and interesting one when describing an algorithm. We rarely care about the lower bound, just like we rarely care about the best case.
The lower bound is sometimes useful for describing a problem that has been proven to have a particular complexity. For example, it is proven that the worst-case complexity of every general comparison-based sorting algorithm is Ω(n log n). If such a sorting algorithm is also O(n log n), then by definition it is also Θ(n log n).
Big O is for an upper bound, not for the worst case! There is no notation specifically for the worst or best case. The examples you are talking about all use Big O because they are all upper-bounded by the given value. I suggest you take another look at the book from which you learned the basics, because this is immensely important to understand :)
EDIT: Answering your doubt: usually we care about our at-most performance. When we say our algorithm performs in O(log n) in the best-case scenario, we know that its performance will not be worse than logarithmic time in that scenario. It is usually the upper bound that we seek to reduce, and hence we usually mention Big O to compare algorithms (not to say that we never mention the other two).
O(...) basically means "not (much) slower than ...".
It can be used for all three cases ("the worst case is not slower than", "the best case is not slower than", and so on).
Omega is the opposite: you can say that something can't be much faster than ... . Again, it can be used with all three cases. Compared to O(...), it's not that important, because telling someone "I'm certain my program is not faster than yours" is nothing to be proud of.
Theta is a combination: It's "(more or less) as fast as" ..., not just slower/faster.
The Big-O notation is something like <= in terms of asymptotic comparison.
For example if you see this :
x = O(x^2) says that x <= x^2 (in asymptotic terms).
And it means "x is at most as complex as x^2", which is usually what you are interested in.
Even when you compare best/average cases, you can say: "for the best possible input, I will have AT MOST this complexity".
There are two things mixed up here: Big O, Omega, and Theta are purely mathematical constructions. For example, O(f(n)) is the set of functions which are at most c * f(n), for some c > 0 and for all n >= some minimum value n0. With that definition, n = O(n^4), because n <= n^4 for n >= 1; and 100 = O(n), because 100 <= n for n >= 100, or 100 <= 100 * n for n >= 1.
For an algorithm, you usually want to give the worst-case speed, the average-case speed, rarely the best-case speed, and sometimes the amortised average speed (that's when running an algorithm once does work that can be reused when it's run again, like calculating n! for n = 1, 2, 3, ..., where each calculation can take advantage of the previous one). And whichever speed you analyse, you can state the result in any of the notations.
For example, you might have an algorithm where you can prove that the worst case is O (n^2), but you cannot prove whether there are faster special cases or not, and you also cannot prove that the algorithm isn't actually faster, like O (n^1.9). So O (n^2) is the only thing that you can prove.

Still sort of confused about Big O notation

So I've been trying to understand Big O notation as well as I can, but there are still some things I'm confused about. I keep reading that if something is O(n), it usually refers to the worst case of an algorithm, but that it doesn't necessarily have to, which is why we can say, for example, that the best case of insertion sort is O(n). However, I can't really make sense of what that means. I know that if the worst case is O(n^2), it means that the function that represents the algorithm in its worst case grows no faster than n^2 (there is an upper bound). But if you have O(n) as the best case, how should I read that? In the best case, the algorithm grows no faster than n? What I picture is a graph with n as the upper bound.
If the best case scenario of an algorithm is O(n), then n is an upper bound on how fast the operations of the algorithm grow in the best case, so they cannot grow faster than n... but wouldn't that mean they could grow as slowly as O(log n) or O(1), since those are below the upper bound? That wouldn't make sense though, because O(log n) or O(1) would be a better scenario than O(n), so O(n) WOULDN'T be the best case? I'm so lost, lol.
Big-O, Big-Θ, Big-Ω are independent from worst-case, average-case, and best-case.
The notation f(n) = O(g(n)) means f(n) grows no more quickly than some constant multiple of g(n).
The notation f(n) = Ω(g(n)) means f(n) grows no more slowly than some constant multiple of g(n).
The notation f(n) = Θ(g(n)) means both of the above are true.
Note that f(n) here may represent the best-case, worst-case, or "average"-case running time of a program with input size n.
Furthermore, "average" can have many meanings: it can mean the average input or the average input size ("expected" time), or it can mean in the long run (amortized time), or both, or something else.
Often, people are interested in the worst-case running time of a program, amortized over the running time of the entire program (so if something costs n initially but only costs 1 time for the next n elements, it averages out to a cost of 2 per element). The most useful thing to measure here is the least upper bound on the worst-case time; so, typically, when you see someone asking for the Big-O of a program, this is what they're looking for.
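A classic instance of that amortised accounting is a dynamic array that doubles its capacity when full. The sketch below uses an assumed cost model (one unit per element written or copied) and shows the average cost per append staying bounded by a small constant:

def total_append_cost(num_appends):
    # Assumed cost model: writing one element costs 1, and growing
    # the array copies every existing element at cost 1 each.
    capacity, size, cost = 1, 0, 0
    for _ in range(num_appends):
        if size == capacity:
            cost += size   # copy everything into the doubled array
            capacity *= 2
        cost += 1          # write the new element
        size += 1
    return cost

for n in (100, 1000, 10000):
    print(n, total_append_cost(n) / n)  # stays below 3 per append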
Similarly, to prove a problem is inherently difficult, people might try to show that the worst-case (or perhaps average-case) running time is at least a certain amount (for example, exponential).
You'd use Big-Ω notation for these, because you're looking for lower bounds on these.
However, there is no special relationship between worst-case and Big-O, or best-case and Big-Ω.
Both can be used for either, it's just that one of them is more typical than the other.
So, upper-bounding the best case isn't terribly useful. Yes, if the algorithm always takes O(n) time, then you can say it's O(n) in the best case, as well as on average, as well as the worst case. That's a perfectly fine statement, except the best case is usually very trivial and hence not interesting in itself.
Furthermore, note that f(n) = n = O(n^2) -- this is technically correct, because f grows more slowly than n^2, but it is not useful, because it is not the least upper bound: there's a very obvious upper bound that's more useful, namely O(n). So yes, you're perfectly welcome to say the best/worst/average-case running time of a program is O(n!). That's mathematically perfectly correct. It's just useless, because when people ask for Big-O they're interested in the least upper bound, not just any random upper bound.
It's also worth noting that it may simply be insufficient to describe the running-time of a program as f(n). The running time often depends on the input itself, not just its size. For example, it may be that even queries are trivially easy to answer, whereas odd queries take a long time to answer.
In that case, you can't just give f as a function of n -- it would depend on other variables as well. In the end, remember that this is just a set of mathematical tools; it's your job to figure out how to apply it to your program and to figure out what's an interesting thing to measure. Using tools in a useful manner needs some creativity, and math is no exception.
Informally speaking, "the best case has O(n) complexity" means that when the input meets certain conditions (i.e. is best for the algorithm), the count of operations performed in that best case is linear with respect to n (e.g. is 1n or 1.5n or 5n). So if the best case is O(n), usually this means that in the best case it is exactly linear with respect to n (i.e. asymptotically no smaller and no bigger than that) - see (1). Of course, if in the best case that same algorithm can be proven to perform at most c * log N operations (where c is some constant), then this algorithm's best case complexity would informally be denoted as O(log N) and not as O(N), and people would say it is O(log N) in its best case.
Formally speaking, "the algorithm's best case complexity is O(f(n))" is an informal and wrong way of saying that "the algorithm's complexity is Ω(f(n))" (in the sense of the Knuth definition - see (2)).
See also:
(1) Wikipedia, "Family of Bachmann-Landau notations"
(2) Knuth, "Big Omicron and Big Omega and Big Theta"
(3) Big Omega notation - what is f = Ω(g)?
(4) What is the difference between Θ(n) and O(n)?
(5) What is a plain English explanation of "Big O" notation?
I find it easier to think of O() in terms of ratios than in terms of bounds. It is defined via bounds, so that is a valid way to think of it, but it seems more useful to ask: "if I double the number/size of inputs to my algorithm, does my processing time double (O(n)), quadruple (O(n^2)), etc.?" Thinking about it that way makes it a little less abstract - at least to me...
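That doubling heuristic can be tried empirically. A rough sketch, with a made-up O(n^2) workload (timings are noisy, so treat the ratios as approximate):

import time

def work(n):
    # Made-up quadratic workload for demonstration.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

for n in (500, 1000, 2000):
    start = time.perf_counter()
    work(n)
    elapsed = time.perf_counter() - start
    print(n, round(elapsed, 3))  # doubling n roughly quadruples the time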
