If big-theta of a function implies another one of the same function, but with different big-theta notation, what does it mean?

Does it mean that we use the relation big-theta(f+g) = big-theta(max(f,g))?
I tried it with an exponential function and a linear one: if I use the max, the relation holds, since it is always the max that wins (the exponential here).
However, if the reverse is done, is it always true?
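As a quick numeric sanity check of that relation (an illustrative sketch, not from the original thread; the particular f and g are arbitrary choices): for positive functions, max(f,g) <= f+g <= 2*max(f,g), which is why the two big-theta classes coincide in both directions.

    # Sketch: check max(f, g) <= f + g <= 2 * max(f, g) for a positive
    # exponential f and a linear g; this inequality is what makes
    # big-theta(f + g) = big-theta(max(f, g)) hold in both directions.

    def f(n):          # exponential term
        return 2 ** n

    def g(n):          # linear term
        return 5 * n

    for n in [1, 5, 10, 20, 40]:
        s = f(n) + g(n)
        m = max(f(n), g(n))
        assert m <= s <= 2 * m
        print(n, s, m, s / m)   # the ratio stays between 1 and 2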

Is the Big-Omega time complexity of all search algorithms O(1)?

I understand that Big Omega defines the lower bound of a function (or best-case runtime).
Considering that almost every search algorithm could "luck out" and find the target element on the first iteration, would it be fair to say that its Big-Omega time complexity is O(1)?
I also understand that defining O(1) as the big Omega may not be useful (other lower bounds may be tighter, or closer to the evaluated function), but the question is: is it correct?
I've found multiple sources claiming that linear search is Big-Omega O(n), even though some cases could complete in a single step, which is different from the best-case scenario as I understand it.
The lower bound (𝛺) is not the fastest answer a given algorithm can give.
The lower bound of a given problem is equal to the worst case scenario of the best algorithm that solves the problem. When doing complexity analysis, you should never forget that "luck" is always in the hands of the input (the instance the algorithm is trying to solve).
When trying to find a lower bound, you will imagine the "perfect algorithm" and you will try to "trap" it in a very hard case. Usually the algorithm is not defined and is only described based on its (hypothetical) performance. You would use arguments such as "If the ideal algorithm is that fast, it will not have this particular knowledge and will therefore fail on this particular instance, i.e. the ideal algorithm doesn't exist". Replace ideal with the lower bound you are trying to prove.
For example, the lower bound for the min-search problem in an unsorted array is Ω(n). The proof for this is quite trivial, and like most of the time, is made by contradiction. Basically, an algorithm A in o(n) will not see at least one item from the input array; if the item it did not see was the minimum, A will fail. The contradiction proves that the problem is in Ω(n).
Maybe you can have a look at that answer I gave on a similar question.
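To illustrate the adversary argument for the min-search example above, here is a minimal sketch (the function names and setup are mine, purely for illustration): any "algorithm" that skips even one index can be beaten by an input that hides the minimum at exactly that index.

    # Sketch of the adversary argument behind the Ω(n) lower bound for
    # min-search: a hypothetical algorithm that reads only n - 1 of the
    # n items can always be fooled by placing the minimum at the index
    # it never reads.

    def lazy_min(arr, skipped_index):
        """A 'too fast' min-search that never reads arr[skipped_index]."""
        return min(x for i, x in enumerate(arr) if i != skipped_index)

    n = 10
    skipped = 7                      # the one index the algorithm ignores
    adversary_input = [100] * n
    adversary_input[skipped] = 1     # the adversary hides the true minimum there

    print(lazy_min(adversary_input, skipped))   # prints 100, not the real minimum 1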
The notations O, o, Θ, Ω, and ω are used in characterizing mathematical functions; for example, f(n) = n^3 log n is in O(n^4) and in Ω(n^3).
So, the question is what mathematical functions we apply them to.
The mathematical functions that we tend to be interested in are things like "the worst-case time complexity of such-and-such algorithm, as a function of the size of its input", or "the average-case space complexity of such-and-such procedure, as a function of the largest element in its input". (Note: when we just say "the complexity of such-and-such algorithm", that's usually shorthand for its worst-case time complexity, as a function of some characteristic of its input that's hopefully obvious in context. Either way, it's still a mathematical function.)
We can use any of these notations in characterizing those functions. In particular, it's fine to use Ω in characterizing the worst case or average case.
We can also use any of these notations in characterizing things like "the best-case […]" — that's unusual, but there are times when it may be relevant. But, notably, we're not limited to Ω for that; just as we can use Ω in characterizing the worst case, we can also use O in characterizing the best case. It's all about what characterizations we're interested in.
You are confusing two different topics: Lower/upper bound, and worst-case/best-case time complexity.
The short answer to your question is: Yes, all search algorithms have a lower bound of Ω(1). Linear search (in the worst case, and on average) also has a lower bound of Ω(n), which is a stronger and more useful claim. The analogy is that 1 < π but also 3 < π, the latter being more useful. So in this sense, you are right.
However, your confusion seems to be between the notations for complexity classes (big-O, big-Ω, big-θ etc), and the concepts of best-case, worst-case, average case. The point is that the best case and the worst case time complexities of an algorithm are completely different functions, and you can use any of the notations above to describe any of them. (NB: Some claim that big-Ω automatically and exclusively describes best case time complexity and that big-O describes worst case, but this is a common misconception. They just describe complexity classes and you can use them with any mathematical functions.)
It is correct to say that the average time complexity of linear search is Ω(n), because we are just talking about the function that describes its average time complexity. Its best-case complexity is a different function, which happens not to be Ω(n), because as you say it can be constant-time.
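A small sketch to make that distinction concrete (the comparison counter is just for illustration): the same linear search performs 1 comparison in its best case and n in its worst case; those are two different functions of n, and each of them can be described with O, Ω, or Θ.

    # Sketch: best case vs. worst case of the same linear search,
    # measured by counting comparisons.

    def linear_search(arr, target):
        comparisons = 0
        for i, x in enumerate(arr):
            comparisons += 1
            if x == target:
                return i, comparisons
        return -1, comparisons

    data = list(range(1, 1001))          # n = 1000
    print(linear_search(data, 1))        # best case: found immediately, 1 comparison
    print(linear_search(data, 1000))     # worst case: n comparisons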

How can you tell if one function is faster than another function according to O-notation?

I have an issue with how to determine whether one function is faster or slower than another. If the professor uses an example of O(1) and O(n), I know O(1) is faster, but I really only know that from memorizing the running-time order of the simple functions... But if more complex examples are given, I don't understand how to find the faster function.
For example, let's say I want to compare n^logn and n^(logn)^2 and n^(sqrt(n)). How can I compare these functions and tell which has the fastest and slowest running time (in big-O notation)? Is there a step-by-step process that I can follow each time when comparing functions' running times?
Here's my thought about the above example. I know n^2 is faster than n^3. So I want to compare the n^____ part of each function. So if I plug in n=1000000 into each, logn will have the smallest value, (logn)^2 the second, and sqrt(n) the biggest. Does this mean that the smallest value (n^logn) will be the fastest and the biggest value (n^sqrt(n)) will be the slowest?
1. n^logn (fastest)
2. n^(logn)^2
3. n^sqrt(n) (slowest)
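One way to make the comparison in the question concrete (a rough numeric sketch, not a proof): take the logarithm of each function, so n^logn becomes (logn)^2, n^(logn)^2 becomes (logn)^3, and n^sqrt(n) becomes sqrt(n)*logn, then compare those values at a large n. The smaller the logarithm, the slower the growth.

    # Sketch: compare n^(log n), n^((log n)^2) and n^(sqrt n) by comparing
    # the logarithms of the three functions at a large n.  A smaller log
    # means slower growth, i.e. the "faster" function in big-O terms.

    import math

    n = 10 ** 6
    log_n = math.log2(n)

    log_f1 = log_n ** 2              # log of n^(log n)
    log_f2 = log_n ** 3              # log of n^((log n)^2)
    log_f3 = math.sqrt(n) * log_n    # log of n^(sqrt n)

    for name, value in sorted([("n^(log n)", log_f1),
                               ("n^((log n)^2)", log_f2),
                               ("n^(sqrt n)", log_f3)], key=lambda t: t[1]):
        print(name, value)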
Usually Big O is written as a function of N (except in the case of a constant, O(1)).
So it is simply a matter of plugging a few values of N (3 or 4, or preferably enough values to see the curve) into the functions you are comparing and computing the results. Graph them if you can.
But you shouldn't need to do that; you should have a basic understanding of the classes of functions for Big O. If you can't calculate it, you should still know that O(log N) is larger than O(1), etc. O notation is about worst case. So usually the comparisons are easy if you are familiar with the most common functions.
Does this mean that the smallest value (n^logn) will be the fastest and the biggest value (n^sqrt(n)) will be the slowest?
1. n^logn (fastest)
2. n^(logn)^2
3. n^sqrt(n) (slowest)
For the purpose of your comparison, yes. O notation is used to compare the worst case, complexity, or class of an algorithm, so you just assume the worst case for all candidates in the comparison. You can't tell from O notation what the best, typical, or average performance will be.
Comparing O notations is basically a matter of comparing the curves. I recommend drawing the curves; that will help your understanding.
If you use Python, I'd recommend trying matplotlib.pyplot. It's very convenient.
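For example, a minimal matplotlib sketch along those lines (the functions plotted are the ones from the question; plotting their logarithms keeps the values on screen):

    # Sketch: plot log2 of the three functions from the question.
    import numpy as np
    import matplotlib.pyplot as plt

    n = np.arange(2, 200)
    log_n = np.log2(n)

    plt.plot(n, log_n ** 2, label="log2 of n^(log n)")
    plt.plot(n, log_n ** 3, label="log2 of n^((log n)^2)")
    plt.plot(n, np.sqrt(n) * log_n, label="log2 of n^(sqrt n)")
    plt.xlabel("n")
    plt.ylabel("log2 of f(n)")
    plt.legend()
    plt.show()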

Relation between worst case and average case running time of an algorithm

Let's say A(n) is the average running time of an algorithm and W(n) is the worst. Is it correct to say that
A(n) = O(W(n))
is always true?
The Big O notation is kind of tricky, since it only defines an upper bound on the execution time of a given algorithm.
What this means is: if f(x) = O(g(x)), then for every other function h(x) such that g(x) < h(x) you'll also have f(x) = O(h(x)). The problem is, are those overestimated execution times useful? The clear answer is: not at all. What you usually want is the "smallest" upper bound you can get, but this is not strictly required by the definition, so you can play around with it.
You can get a stricter bound using the other notations, such as the Big Theta, as you can read here.
So, the answer to your question is yes, A(n) = O(W(n)), but that doesn't give any useful information about the algorithm.
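The reason the statement always holds can be sketched in a few lines (a toy illustration, with linear search standing in for an arbitrary algorithm): the average running time over inputs of a given size can never exceed the maximum over those same inputs, so A(n) <= 1 * W(n) for every n, which is exactly the big-O definition with c = 1.

    # Sketch: the average cost over inputs of size n never exceeds the
    # maximum cost, so A(n) <= 1 * W(n), i.e. A(n) = O(W(n)) with c = 1.
    # Toy example: comparisons made by linear search for every possible target.

    def comparisons(n, target):
        """Comparisons linear search needs to find `target` in [0, 1, ..., n-1]."""
        return target + 1

    for n in [1, 10, 100, 1000]:
        costs = [comparisons(n, t) for t in range(n)]
        A = sum(costs) / len(costs)   # average case: (n + 1) / 2
        W = max(costs)                # worst case: n
        assert A <= W
        print(n, A, W)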
If you mean that A(n) and W(n) are functions, then yes, you can make such a statement in the general case; it follows from the formal definition of big-O.
Note that in terms of big-O there is little point in stating it that way, since it makes the real complexity harder to understand. (In general, the three cases, worst, average, and best, exist exactly to show the complexity more clearly.)
Yes, it is not a mistake to say so.
People use asymptotic notation to convey the growth of running time in specific cases in terms of input size. Comparing the average-case complexity with the worst-case complexity doesn't provide much insight into the function's growth in either case.
While it is not wrong, it fails to provide more information than what we already know.
I'm unsure of exactly what you're trying to ask, but bear in mind the following.
The typical algorithm used to show the difference between average and worst case running time complexities is Quick Sort with poorly chosen pivots.
On average with a random sample of unsorted data, the runtime complexity is n log(n). However, with an already sorted set of data where pivots are taken from either the front/end of the list, the runtime complexity is n^2.
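A small empirical sketch of that difference (first-element pivot; the comparison counting is mine, purely for illustration): random input gives on the order of n log(n) comparisons, while already sorted input gives roughly n^2/2.

    # Sketch: count comparisons of quicksort with a first-element pivot
    # on random vs. already sorted input.
    import random

    def quicksort_comparisons(arr):
        """Number of comparisons made, using the first element as the pivot."""
        if len(arr) <= 1:
            return 0
        pivot, rest = arr[0], arr[1:]
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

    n = 300
    random_input = random.sample(range(n), n)
    sorted_input = list(range(n))

    print("random:", quicksort_comparisons(random_input))   # on the order of n * log2(n)
    print("sorted:", quicksort_comparisons(sorted_input))   # exactly n * (n - 1) / 2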

Max and min of functions in order-notation

Order notation question, big-o notation and the like:
What does the max and min of a function mean in terms of order notation?
for example:
DEFINITION:
"Maximum" rules: Suppose that f(n) and g(n) are positive functions for all n > n0.
Then:
O[f(n) + g(n)] = O[max (f(n),g(n)) ]
etc...
I need to use these definitions to prove something for homework.. thanks for the help!
EDIT: f(n) and g(n) are supposed to represent running times of algorithms with respect to input size
With Big-O notation, you're talking about upper bounds on computation. This means that you are only interested in the largest term of the combined function as n (the variable) tends to infinity. What's more, you drop any constant multipliers as well since the formal definition of the notation allows you to discard those parts, which is important as it allows you to focus on the algorithm's behavior and not on the implementation of the algorithm.
So, we are combining two functions through summation. Well, there are two cases (strictly three, but it's symmetric):
One function is of higher order than the other. At that point, the higher order function dominates the lesser; you can pretend that the lesser doesn't exist.
Both functions are of the same order. In that case it's like taking a weighted sum of the two (since we've already thrown away the scaling factors), but you end up with a function of the same order and have only changed the scaling factor a bit.
The net result looks very much like the max() function, even though it's really not (it's actually a generalization of max over the space of functions), so it's very convenient to use the notation.
It is a regular max between natural numbers. f is a function mapped to numbers [f:N->N], and so is g.
Thus, f(n) is in N, and so max(f(n),g(n)) is just standard max: f(n) > g(n) ? f(n) : g(n)
O[max(f(n),g(n))] means: whichever of f or g is more 'expensive' is the upper bound.
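A tiny numeric check of the "maximum" rule and of the two cases described above (a sketch; the example functions are arbitrary): for positive functions, the ratio (f+g)/max(f,g) always stays between 1 and 2, so the sum never grows faster than a constant times the max.

    # Sketch: the ratio (f(n) + g(n)) / max(f(n), g(n)) stays between 1 and 2,
    # both when one function dominates and when the two have the same order.

    cases = {
        "different order (n^2 vs n)": (lambda n: n ** 2, lambda n: n),
        "same order (3n^2 vs 5n^2)":  (lambda n: 3 * n ** 2, lambda n: 5 * n ** 2),
    }

    for name, (f, g) in cases.items():
        for n in [10, 1000, 100000]:
            ratio = (f(n) + g(n)) / max(f(n), g(n))
            assert 1 <= ratio <= 2
        print(name, "-> ratio at n = 100000:", ratio)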

O-notation, O(∞) = O(1)?

So a quick thought; Could one argue that O(∞) is actually O(1)?
I mean, it doesn't depend on the input size?
So in some way it's constant, even though it's infinite.
Or is the only 'correct' way to express it O(∞)?
Infinity is not a number, or at least not a real number, so the expression is malformed. The correct way to express this is to simply state that a program doesn't terminate. Note: program, not algorithm, as an algorithm is guaranteed to terminate.
(If you wanted, you might be able to define big-O notation on transfinite numbers. I'm not sure if that would be of any use, though.)
Your argument is not quite correct.
Big O notation disregards constant multiples; there's no difference between O(1) and O(42), or between O(log(n)) and O(3π log(n)).
Standard convention is to not use any constant multiples.
However, O(∞) would mean an “algorithm” that never terminates, as opposed to O(1) which will terminate at some point.
To answer the question :
O-notation, O(∞) = O(1)?
No
The main difference is that O(1) will end at some point, while O(∞) never ends.
Neither of them includes a variable, but they have different meanings:
O(1) (or O(121), or O(whatever, but not infinity)): independent of the function's arguments, but ending
O(∞): independent of the function's arguments, and never ending
As pointed out in another answer, infinity isn't really in the domain of big-O notation, but the simple 'no' remains of course: O(1) and O(∞) are not the same.
Big-Oh is a measure of how the resources required scale as N increases. O(5 hours) and O(5 seconds) are both O(1), since no extra resources are needed as N increases.
