Analysis of an algorithm with runtime Θ(N^2)

I need to run an algorithm with worst-case runtime Θ(n^2).
After that, I need to run another algorithm 5 times, with a runtime of Θ(n^2) for each run.
What is the combined worst-case runtime of these algorithms?
In my head, the formula will look something like this:
( N^2 + (N^2 * 5) )
But when I have to express it in Theta notation, my guess is that it still runs in Θ(n^2) time.
Am I right?

Two times O(N^2) is still O(N^2), five times O(N^2) is still O(N^2), ten times O(N^2) is still O(N^2); any number of times O(N^2) is still O(N^2), as long as that number is a constant.
The same answer holds for Θ in place of O.
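As a quick worked sketch (using the standard definition of Θ, with witness constants c1, c2 and a threshold n0):

\[
n^2 + 5 \cdot n^2 = 6n^2,
\qquad
1 \cdot n^2 \;\le\; 6n^2 \;\le\; 6 \cdot n^2 \quad \text{for all } n \ge 1,
\]

so taking c1 = 1, c2 = 6, n0 = 1 shows that 6n^2 ∈ Θ(n^2).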

It is O(n^2) regardless, because what you have is basically O(6n^2), which is still O(n^2): you can ignore the constant factor. What you're looking at is membership in a set of functions, not the function itself.
Essentially, 6n^2 ∈ O(n^2).
EDIT
You asked about Θ as well. Θ gives you both a lower and an upper bound, whereas O gives you only the upper bound; you get the lower bound alone with Ω. Θ(f(n)) is the intersection of O(f(n)) and Ω(f(n)).
Anything that is Θ(f(n)) is also O(f(n)), but not the other way round.
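For reference, a sketch of the standard definitions behind this (f and g are non-negative functions, c is a positive constant, n0 a threshold):

\[
\begin{aligned}
f(n) \in O(g(n)) &\iff \exists\, c, n_0:\; f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0,\\
f(n) \in \Omega(g(n)) &\iff \exists\, c, n_0:\; f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0,\\
f(n) \in \Theta(g(n)) &\iff f(n) \in O(g(n)) \text{ and } f(n) \in \Omega(g(n)).
\end{aligned}
\]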

Related

What is Big O of n^2 x logn?

Is it n^2 log n or n^3? I know both of these act as upper bounds; I'm just torn between choosing a tighter but more complex bound (option 1), or a "worse" yet simpler bound (option 2).
Are there general rules for big-O bounds, such as that a bound can never be too complex, or can never be a product of two functions?
You already seem to have an excellent understanding of the fact that big-O notation is an upper bound, and also that a function with runtime complexity n^2 log n falls in both O(n^2 log n) and O(n^3), so I'll spare you the mathematics of that. It's immediately clear (from the fact that n^2 log n is in O(n^3)) that O(n^2 log n) is a subset of O(n^3), so the former is at least as good a bound. It turns out to be a strictly tighter bound (as can be seen with some basic algebra), which is a definite point in its favor. I do understand your concern about the complexity of bounds, but I wouldn't worry about that. Mathematically, it's best to favor accuracy over simplicity when the two are at odds, and n^2 log n is not that complex an expression. So in my mind, O(n^2 log n) is a much better bound to state.
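One short way to see that the bound is strictly tighter is a limit comparison (the base of the logarithm does not matter):

\[
\lim_{n \to \infty} \frac{n^2 \log n}{n^3}
= \lim_{n \to \infty} \frac{\log n}{n}
= 0,
\]

so n^2 log n ∈ o(n^3): everything in O(n^2 log n) is in O(n^3), but not the other way around.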
Other examples of similar or greater complexity:
As indicated in the comments, merge sort and quicksort have average time complexity O(n log n).
Interpolation search has an average time complexity of O(log log n).
The average case of Dijkstra's algorithm is stated on Wikipedia to be the absolute mouthful O(E + V log(E/V) log(V)).

The sum of Theta notation and Big-O notation

I was wondering: suppose I have an algorithm which has two parts, with known runtimes of Θ(n log n) and O(n).
So the total runtime comes to Θ(n log n) + O(n).
To my knowledge, for the sum of two Big-O terms, or the sum of two Theta terms, we always keep the larger of the two.
In this case, since the worst-case runtime of the O(n) part is smaller than that of the Θ(n log n) part anyway, can I assume the runtime of this algorithm is Θ(n log n)?
Thanks!
Yep, that’s correct. Regardless of whether the O(n) term is tight or not, it’s still a lower-order term compared with Θ(n log n).
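A sketch of why the lower-order term is absorbed, assuming the two parts take f(n) ∈ Θ(n log n) and g(n) ∈ O(n) time (runtimes are non-negative, and log n ≥ 1 once n ≥ 2, taking logs base 2):

\[
c_1\, n \log n \;\le\; f(n) \;\le\; f(n) + g(n) \;\le\; c_2\, n \log n + c_3\, n \;\le\; (c_2 + c_3)\, n \log n
\quad \text{for all sufficiently large } n,
\]

so f(n) + g(n) ∈ Θ(n log n).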

Why is an algorithm complexity given in the Big O notation instead of Theta?

I know what the Big O, Theta and Omega notations are, but, for example, if my algorithm is a for loop inside a for loop, each looping n times, my complexity would be O(n²). Why O(n²) instead of Θ(n²)? Since the complexity IS in fact O(n²) and Ω(n²), it would also be Θ(n²), and I just can't see any reason not to use Θ(n²) instead of O(n²), since Θ(n²) bounds my complexity from above and below, not only from above as in the case of O(n²).
If f(n) = Θ(g(n)) then f(n) = O(g(n)). This is because Θ(g(n)) ⊆ O(g(n)).
In your specific case, if the loop body runs exactly n^2 times, the time complexity is in both O(n^2) and Θ(n^2).
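As a minimal sketch (hypothetical code, just to make the counting concrete), a doubly nested loop whose constant-time body executes exactly n^2 times, so its running time is Θ(n^2), and therefore also O(n^2) and Ω(n^2):

```python
def count_pairs(n):
    """Doubly nested loop: the body runs exactly n * n times."""
    steps = 0
    for i in range(n):          # outer loop: n iterations
        for j in range(n):      # inner loop: n iterations per outer iteration
            steps += 1          # constant-time body
    return steps                # always n * n, for every input of size n


print(count_pairs(10))  # 100
```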
The main reason why big-O is typically enough is that we are more interested in the worst-case time complexity when analyzing an algorithm's performance, and knowing the worst case is usually enough.
Also, it is not always possible to find a tight bound.

Time complexity: smallest asymptotic running time

What would the time complexity of f(n) = n^3 / log n be? And for a double series (two nested sums)? I know it is either polynomial or polylogarithmic.
f(n) = n^3 / log(n) is obviously Θ(n^3 / log(n)), which is also o(n^3) but ω(n^2). Therefore the function does not grow faster than polynomially, but it does grow faster than polylogarithmically (because all polylogarithmic functions grow slower than quadratic, and in fact slower than any positive power of n).
In fact, for all ε > 0: f(n) ∈ ω(n^(3-ε)).
Proofs for all of these statements should be pretty simple.
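Sketches of the limits behind those claims:

\[
\lim_{n \to \infty} \frac{n^3 / \log n}{n^3} = \lim_{n \to \infty} \frac{1}{\log n} = 0
\;\Rightarrow\; f(n) \in o(n^3),
\qquad
\lim_{n \to \infty} \frac{n^3 / \log n}{n^{3-\varepsilon}} = \lim_{n \to \infty} \frac{n^{\varepsilon}}{\log n} = \infty
\;\Rightarrow\; f(n) \in \omega(n^{3-\varepsilon}) \text{ for every } \varepsilon > 0.
\]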
The second question depends on the sum limits and inner term.

When can big O, Omega, or Theta be an element of a set?

I'm trying to figure out the efficiency of my algorithms and I'm a bit confused.
I just need an expert opinion to check my answers, or a pointer to something that explains the "is an element of" idea in the context of asymptotic notation. (There are many resources, but I haven't found anything about being an element of these sets.)
When we say O(n^2), which is what we get for two nested loops, is it right to say:
n^2 is an element of O(n^3)
To my understanding, big O is the worst case and Omega is the best case. If we put them on a graph, all the cases of n^2 are part of O(n^3), so the first one is not right?
n^3 is an element of omega(n^2)
And also, the second one is not right, because some of the best cases of omega(n^2) are not in all the cases of n^3!
Finally, is
2^(n+1) an element of theta(2^n)?
I have no idea how to work that out!
Big O, Omega, and Theta in this context are all complexities. It's the functions with those complexities which form the sets you're thinking of.
Indeed, the set of functions with complexity O(n*n) is a subset of those with complexity O(n*n*n). Simply put, that's because O(n*n*n) means that the complexity is less than c*n*n*n as n goes to infinity, for some constant c. If a function has actual complexity 3*n*n + 7*n, then its complexity as n goes to infinity is obviously less than c*n*n*n, for any c.
Therefore, O(n*n*n) isn't just "three loops", it's "three loops or less".
Ω is the reverse. It's a lower bound for complexity, and c*n*n is a trivial lower bound for n*n*n as n goes to infinity.
The set of functions with complexity Θ(n*n) is the intersection of those with complexities O(n*n) and Ω(n*n). E.g. 3*n doesn't have complexity Θ(n*n) because it doesn't have complexity Ω(n*n), and 7*n*n*n doesn't have complexity Θ(n*n) because it doesn't have complexity O(n*n).
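To make the membership claims concrete with the example function 3*n*n + 7*n from above (the constants below are just one possible choice):

\[
3n^2 \;\le\; 3n^2 + 7n \;\le\; 10\, n^2 \quad \text{for all } n \ge 1,
\]

so 3n^2 + 7n is in Θ(n^2), and hence also in O(n^3) (an upper bound that is not tight) and in Ω(n) (a lower bound that is not tight).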
I will list the answers one by one.
1.) n^2 is an element of O(n^3)
True
2.) n^3 is an element of omega(n^2)
True
3.) 2^(n+1) element of theta(2^n)
True
By now you would know why this is right. (Hint: constant factor.)
Please ask if you have any more questions.
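To spell out the hint for the third statement, it comes down to a single constant factor:

\[
2^{\,n+1} = 2 \cdot 2^{\,n},
\qquad
1 \cdot 2^{\,n} \;\le\; 2^{\,n+1} \;\le\; 2 \cdot 2^{\,n} \quad \text{for all } n \ge 0,
\]

so 2^(n+1) ∈ Θ(2^n). (By contrast, 2^(2n) = 4^n is not in Θ(2^n), because the ratio 2^(2n) / 2^n = 2^n is not bounded by any constant.)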

Resources