The sum of Theta notation and Big O notation

I was wondering: suppose I have an algorithm which has two parts with known runtimes of Θ(n log n) and O(n).
So the total runtime comes to Θ(n log n) + O(n).
To my knowledge, for the sum of two Big O terms, as for the sum of two Theta terms, we always keep the dominant term.
In this case, since the worst runtime of the O(n) part is still smaller than the Θ(n log n) part, can I assume the runtime of this algorithm is Θ(n log n)?
Thanks!

Yep, that’s correct. Whether or not the O(n) bound is tight, it is still a lower-order term compared with Θ(n log n), so the sum is Θ(n log n).
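To spell that out from the definitions (a standard argument; the constants c1, c2, c3 below are arbitrary witnesses, not from the question):

    % f(n) = Theta(n log n) gives c_1 n log n <= f(n) <= c_2 n log n,
    % and g(n) = O(n) gives 0 <= g(n) <= c_3 n, for all n beyond some n_0. Then:
    \[
      c_1\, n \log n \;\le\; f(n) + g(n) \;\le\; c_2\, n \log n + c_3\, n
      \;\le\; (c_2 + c_3)\, n \log n \qquad (n \ge 2),
    \]
    % so f(n) + g(n) is Theta(n log n).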

Related

Is "best case performance Θ(1) -> running time ≠ Θ(log n)" valid?

This is an argument for justifying that the running time of an algorithm can't be considered Θ(f(n)) but should be O(f(n)) instead.
E.g. this question about binary search: Is binary search theta log (n) or big O log(n)
MartinStettner's response is even more confusing.
Consider *-case performances:
Best-case performance: Θ(1)
Average-case performance: Θ(log n)
Worst-case performance: Θ(log n)
He then quotes Cormen, Leiserson, Rivest: "Introduction to Algorithms":
What we mean when we say "the running time is O(n^2)" is that the worst-case running time (which is a function of n) is O(n^2) ...
Doesn't this suggest the terms running time and worst-case running time are synonymous?
Also, if running time refers to a function of the input size, say f(n), then there has to be a Θ class which contains it, namely Θ(f(n)), right? This suggests that you are obligated to use O notation only when the running time is not known very precisely (i.e. only an upper bound is known).
When you write O(f(n)) that means that the running time of your algorithm is bounded above by a function c*f(n), where c is a constant. That also means that your algorithm can complete in far fewer steps than c*f(n). We often use the Big-O notation because we want to include the possibility that the algorithm completes faster than we indicate. On the other hand, Θ(f(n)) means that the running time is bounded both above and below by constant multiples of f(n), so it can be neither asymptotically faster nor slower. Binary search is O(log(n)) because usually it will complete in about log(n) steps, but it can complete in one step if you get lucky (best-case performance).
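To make the binary search example concrete, here is a minimal sketch (my own illustration, not code from the thread) with a probe counter; hitting the middle element takes one probe, while an absent target forces about log2(n) probes:

    def binary_search(a, target):
        """Return (index or None, number of probes) for a sorted list a."""
        lo, hi = 0, len(a) - 1
        probes = 0
        while lo <= hi:
            mid = (lo + hi) // 2
            probes += 1
            if a[mid] == target:
                return mid, probes       # lucky middle hit: one probe (best case)
            elif a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None, probes              # absent target: ~log2(n) probes (worst case)

    a = list(range(1024))
    print(binary_search(a, a[(len(a) - 1) // 2]))  # middle element: 1 probe
    print(binary_search(a, -1))                    # absent: 10 probes for n = 1024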
I always get confused when I read about running times.
For me, a running time is the time an implementation of an algorithm needs to execute on a computer. This can differ in many ways, and so it is a complicated thing.
So I think complexity of an algorithm is a better term.
Now, the complexity is (in most cases) a worst-case complexity. If you know an upper bound for the worst case, you also know that it can only get better in other cases.
So, if you know that there exist some (maybe trivial) cases where your algorithm does only a few (a constant number of) steps and stops, you don't have to care about a lower bound, and so you (normally) use an upper bound in Big-O or little-o notation.
If you do your calculations carefully, you can also use the Θ notation.
But notice: all complexities only hold for the cases they are attached to. This means: if you make assumptions like "the input is a best case", this affects your calculation and also the resulting complexity. In the binary search case you posted, the complexity is given under three different assumptions.
You can generalize it by saying: "The complexity of Binary Search is in O(log n)", since Θ(log n) means "Ω(log n) and O(log n)" and O(1) is a subset of O(log n).
To summarize (the formal definitions are sketched below):
- If you know the precise function for the complexity, you can give the complexity in Θ-notation
- If you want an overall upper bound, you have to use O-notation, unless the lower bound over all input cases matches the upper bound
- In most cases you have to use the O-notation, since the algorithms are too complex to get matching upper and lower bounds
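For reference, here are the standard definitions this summary leans on, which make "upper bound", "lower bound", and "both" precise (c, c1, c2 are positive constants, and each inequality must hold for all n beyond some n0):

    \begin{align*}
      f(n) \in O(g(n))      &\iff \exists c > 0:\; f(n) \le c \cdot g(n) \\
      f(n) \in \Omega(g(n)) &\iff \exists c > 0:\; f(n) \ge c \cdot g(n) \\
      f(n) \in \Theta(g(n)) &\iff \exists c_1, c_2 > 0:\; c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n)
    \end{align*}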

When can big O, Omega, or Theta be an element of a set?

I'm trying to figure out the efficiency of my algorithms, and I have a little bit of confusion.
I just need some expert input to confirm my answers, or a pointer to somewhere that explains what "being an element of" means in the asymptotic setting. (There are many resources, but I couldn't find anything about being an element of a set.)
When we say O(n^2), which is what we get for two nested loops, is it right to say:
n^2 is an element of O(n^3)
To my understanding, big O is the worst case and omega is the best case. If we put them on a graph, all the cases of n^2 fall within O(n^3), so isn't the first one right?
n^3 is an element of omega(n^2)
About the second one, I think it is not right, because some of the best cases of omega(n^2) are not among the cases of n^3!
Finally, is
2^(n+1) an element of theta(2^n)
I have no idea how to measure that!
Big O, omega, theta in this context are all complexities. It's the functions with those complexities which form the sets you're thinking of.
Indeed, the set of functions with complexity O(n*n) is a subset of those with complexity O(n*n*n). Simply said, that's because O(n*n*n) means that the complexity is less than c*n*n*n as n goes to infinity, for some constant c. If a function has actual complexity 3*n*n + 7*n, then its complexity as n goes to infinity is obviously less than c*n*n*n, for any c.
Therefore, O(n*n*n) isn't just "three loops", it's "three loops or less".
Ω is the reverse. It's a lower bound for complexity, and c*n*n is a trivial lower bound for n*n*n as n goes to infinity.
The set of functions with complexity Θ(n*n) is the intersection of those with complexities O(n*n) and Ω(n*n). E.g. 3*n doesn't have complexity Θ(n*n) because it doesn't have complexity Ω(n*n), and 7*n*n*n doesn't have complexity Θ(n*n) because it doesn't have complexity O(n*n).
I will list the answers one by one.
1.) n^2 is an element of O(n^3)
True
2.) n^3 is an element of omega(n^2)
True
3.) 2^(n+1) element of theta(2^n)
True
By now you should know why this is right. (Hint: 2^(n+1) = 2 · 2^n, a constant factor.)
Please ask if you have any more questions.
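A quick, non-rigorous way to convince yourself of all three answers is to watch the ratio f(n)/g(n) as n grows: it shrinks toward 0 (or stays bounded) for O, stays bounded away from zero for Ω, and does both for Θ. A minimal Python sketch (my own illustration; the helper name ratios is made up):

    def ratios(f, g, ns=(10, 100, 1000)):
        """Return f(n)/g(n) for growing n, to eyeball the asymptotic behaviour."""
        return [f(n) / g(n) for n in ns]

    print(ratios(lambda n: n**2, lambda n: n**3))        # -> 0:        n^2 in O(n^3)
    print(ratios(lambda n: n**3, lambda n: n**2))        # -> infinity: n^3 in Omega(n^2)
    print(ratios(lambda n: 2**(n + 1), lambda n: 2**n))  # -> 2:        2^(n+1) in Theta(2^n)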

Analysis of an algorithm (N^2)

I need to run an algorithm with worst-case runtime Θ(n^2).
After that, I need to run another algorithm 5 times, with a runtime of Θ(n^2) each time it runs.
What is the combined worst-case runtime of these algorithms?
In my head, the formula will look something like this:
( N^2 + (N^2 * 5) )
But when I have to express it in Theta notation, my guess is that it runs in Θ(n^2) time.
Am I right?
Two times O(N^2) is still O(N^2), five times O(N^2) is still O(N^2), ten times O(N^2) is still O(N^2); any number of times O(N^2) is still O(N^2), as long as 'any' is a constant.
The same answer holds for Θ instead of O.
It is O(n^2) regardless because what you have is basically O(6n^2), which is still O(n^2) because you can ignore the constant. What you're looking at is something that belongs to a set of functions and not the function itself.
Essentially, 6n^2 ∈ O(n^2).
EDIT
You asked about Θ as well. Θ gives you both the lower and the upper bound, whereas O gives you the upper bound only; you get the lower bound alone with Ω. Θ is the intersection of the two.
Anything that is Θ(f(n)) is also O(f(n)), but not the other way round.
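For the sum in the question above, the witnessing constants for Θ can be written down directly (a one-line check; c1 = c2 = 6 works):

    \[
      T(n) \;=\; n^2 + 5 n^2 \;=\; 6 n^2,
      \qquad 6 \cdot n^2 \;\le\; 6 n^2 \;\le\; 6 \cdot n^2 \quad (n \ge 1),
    \]
    % hence T(n) is in Theta(n^2), and a fortiori in O(n^2).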

Big O is for worst-case running time and Ω is for the best case, so why is Ω sometimes used for the worst case?

I'm confused. I thought that you use Big O for the worst-case running time and Ω for the best case? Can someone please explain?
And isn't (lg n) the best case and (n lg n) the worst case? Or am I misunderstanding something?
Show that the worst-case running time of Max-Heapify on a heap of size n is Ω(lg n). (Hint: For a heap with n nodes, give node values that cause Max-Heapify to be called recursively at every node on a path from the root down to a leaf.)
Edit: no, this is not homework. I'm practicing, and this has an answer key, but I'm confused.
http://www-scf.usc.edu/~csci303/cs303hw4solutions.pdf Problem 4(6.2 - 6)
Edit 2: So I misunderstood the question, and it's not about Big O and Ω?
It is important to distinguish between the case and the bound.
Best, average, and worst are common cases of interest when analyzing algorithms.
Upper (O, o) and lower (Omega, omega), along with Theta, are common bounds on functions.
When we say "Algorithm X's worst-case time complexity is O(n)", we're saying that the function which represents Algorithm X's performance, when we restrict inputs to worst-case inputs, is asymptotically bounded from above by some linear function. You could speak of a lower bound on the worst-case input; or an upper or lower bound on the average, or best, case behavior.
Case != Bound. That said, "upper on the worst" and "lower on the best" are pretty sensible sorts of metrics... they provide absolute bounds on the performance of an algorithm. It doesn't mean we can't talk about other metrics.
Edit to respond to your updated question:
The question asks you to show that Omega(lg n) is a lower bound on the worst case behavior. In other words, when this algorithm does as much work as it can do for a class of inputs, the amount of work it does grows at least as fast as (lg n), asymptotically. So your steps are the following: (1) identify the worst case for the algorithm; (2) find a lower bound for the runtime of the algorithm on inputs belonging to the worst case.
Here's an illustration of the way this would look for linear search:
In the worst case of linear search, the target item is not in the list, and all items in the list must be examined to determine this. Therefore, a lower bound on the worst-case complexity of this algorithm is Ω(n).
Important to note: for lots of algorithms, the complexity for most cases will be bounded from above and below by a common set of functions. It's very common for the Theta bound to apply. So it might very well be the case that you won't get a different answer for Omega than you do for O, in any event.
Actually, you use Big O for a function which grows at least as fast as your worst-case complexity, and Ω for a function which grows no faster than your worst-case complexity.
So here you are asked to prove that your worst-case complexity is at least lg(n).
O is the upper limit (i.e., worst case)
Ω is the lower limit (i.e., best case)
The example is saying that on the worst input for Max-Heapify (I guess the worst input is reverse-ordered input), the running time complexity must be at least lg n. Hence the Ω(lg n), since it is the lower limit on the execution complexity.
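To see where the Ω(lg n) comes from, here is a standard array-based Max-Heapify sketch (0-indexed; my own illustration, not from the linked answer key). When the root holds the smallest value while both subtrees are valid max-heaps, the swap cascades down an entire root-to-leaf path, which has length about lg n:

    def max_heapify(a, i, n):
        """Sift a[i] down within a[0:n]; recursion depth tracks the path length."""
        left, right = 2 * i + 1, 2 * i + 2
        largest = i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest != i:
            a[i], a[largest] = a[largest], a[i]
            max_heapify(a, largest, n)  # worst case: recurse all the way to a leaf

    # Root value 0 is smaller than everything below it, and both subtrees are
    # valid max-heaps, so the recursion follows a full root-to-leaf path:
    # about lg(n) calls, hence the Omega(lg n) lower bound on the worst case.
    a = [0] + list(range(14, 0, -1))   # [0, 14, 13, ..., 1], n = 15
    max_heapify(a, 0, len(a))
    print(a)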

Plain English explanation of Theta notation?

What is a plain English explanation of Theta notation? With as little formal definition as possible and simple mathematics.
How is Theta notation different from Big O notation? Could anyone explain in plain English?
How are they used in algorithm analysis? I am confused.
If an algorithm's run time is Big Theta(f(n)), it is asymptotically bounded above and below by f(n). Big O is the same except that the bound is only above.
Intuitively, Big O(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time never exceeds f(n)." In rough words, if you think of run time as "bad", then Big O is a worst case. Big Theta(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time always varies as f(n)." In other words, Big Theta is a known tight bound: it's both worst case and best case.
A final try at intuition: Big O is "one-sided." An O(n) run time is also O(n^2) and O(2^n). This is not true with Big Theta. If you have an algorithm run time that's O(n), then you already have a proof that it's not Big Theta(n^2). It may or may not be Big Theta(n).
An example is comparison sorting. Information theory tells us sorting requires at least ceiling(lg(n!)) comparisons, which is Ω(n log n) (where n is the number of elements), and we have actually invented O(n log n) algorithms, so comparison sorting is Big Theta(n log n).
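The information-theoretic bound can be loosened into the familiar Ω(n log n) form with one line of algebra (a standard argument, not from the answer itself): a comparison sort must distinguish all n! input orderings, and each comparison at best halves the remaining possibilities, so at least lg(n!) comparisons are needed, where

    \[
      \lg(n!) \;\ge\; \lg\!\bigl((n/2)^{n/2}\bigr) \;=\; \tfrac{n}{2}\, \lg \tfrac{n}{2} \;=\; \Omega(n \log n),
    \]
    % since the largest n/2 factors of n! are each at least n/2.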
I have always wanted to put this down in simple words. Here is my try.
If an algorithm's time or space complexity is expressed in
Big O : e.g. O(n) - means n is the upper limit. The final value could be less than or equal to (a constant times) n.
Big Omega : e.g. Ω(n) - means n is the lower limit. The final value could be equal to or greater than (a constant times) n.
Theta : e.g. Θ(n) - means the growth is exactly n (both the upper limit and the lower limit).
