A function that is in O(n^2) but not in Ω(n^2), and in Ω(n) but not in O(n) - big-o

I am working through asymptotic relations in my book to prepare for an interview, but I do not understand one question.
Give an example of a function that is in O(n^2) but not in Ω(n^2), and also in Ω(n) but not in O(n).
Is there a quick way to find a suitable function?
I have tried many examples but still can't find a function that satisfies all the criteria.
Here is the approach I tried:
cn < f(n) < cn^2

For the first part:
Give an example of a function that is in O(n^2) but not in Ω(n^2)
you can say for example f(n) = n
and for the second part:
Give an example of a function that is in Ω(n) but not in O(n)
you can say g(n) = n^2
and you can come up with many more examples.
If you want both conditions to hold at the same time:
Give an example of a function that is in O(n^2) but not in Ω(n^2), and also in Ω(n) but not in O(n).
you can say: h(n) = n^(1.5), since it grows strictly faster than n but strictly slower than n^2.
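A quick numeric check (a sketch, not a proof) makes it easy to convince yourself that h(n) = n^(1.5) sits strictly between the two bounds:

    # Numeric sanity check (not a proof): h(n) = n**1.5 should grow without
    # bound relative to n, but vanish relative to n**2.
    for n in [10, 100, 1_000, 10_000, 100_000]:
        h = n ** 1.5
        print(f"n={n:>7}   h(n)/n = {h / n:>8.1f}   h(n)/n^2 = {h / n**2:.5f}")
    # h(n)/n grows without bound  -> h is in omega(n), so not in O(n)
    # h(n)/n^2 shrinks toward 0   -> h is in o(n^2), so not in Omega(n^2)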

Related

T(n) >= O(n^2): does it mean T(n) should be greater than every function in the set O(n^2), or is it enough to be greater than just one member of that set?

If we say the running time of algorithm A is at least O(n^2), or T(n) >= O(n^2), does this mean that T(n) should be greater than every member of the set O(n^2), or is it sufficient for it to be greater than at least one member of the set O(n^2)?
In other words, does it mean that T(n) >= f(n) for every member f(n) of O(n^2), or that there is some function f(n) in the set O(n^2) with T(n) >= f(n)?
T(n) >= O(n^2)
If a very pedantic mathematician writes T(n) >= O(n^2) and they guarantee you that they did not make a mistake, then it means that T(n) is greater than at least one member of the set O(n^2), which is almost an empty statement. In most contexts you can guess that it was a mistake.
The only context in which I might expect such a statement not to be a mistake is an exercise where the teacher gives a function T(n) and asks the students "Which of the following statements are true? a) T(n) = O(n^2) b) T(n) <= O(n^2) c) T(n) >= O(n^2)". However I don't think that would be a very enlightening exercise.
If a computer scientist writes T(n) >= O(n^2), it either means they made a typo, or they don't understand what O(n^2) means.
It is possible that that computer scientist meant T(n) = Omega(n^2) instead. But it's hard to guess what someone meant when they made a nonsensical statement.
"T(n) is at least O(n^2)"
I would interpret the statement "T(n) is at least O(n^2)" to mean "T(n) = O(n^2), but we can probably find a better bound". In other words, we already know that the algorithm does not take more than about n^2 time, and we suspect it actually runs in much less than n^2.
However, if I have explicit reasons to believe the speaker is misusing words and O(), or if I know this speaker frequently makes ambiguous and misleading statements, or if I am a teacher and this statement was made by a student in class, then I would ask them to clarify exactly what they meant, since this mix of "at least" (which is often used to speak about lower bounds) and O() (which explicitly denotes an upper bound) can be confusing.
By the definition of O, no, it cannot mean greater than every member. Suppose algorithm A has complexity T(n) = n^2 / 2. Then T satisfies the Ω(n^2) lower bound (with constant 1/2), yet for f(n) = n^2, which is in O(n^2), we have f(n) > T(n) for every n > 1.
Moreover, what the statement T(n) >= O(n^2) is presumably trying to say is exactly T(n) = Ω(n^2).
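Here is the same counterexample checked numerically (purely illustrative; the constant 1/2 is the one from the argument above):

    # T(n) = n^2 / 2 meets the Omega(n^2) lower bound with constant 1/2,
    # yet it is below f(n) = n^2 (a member of O(n^2)) for every n > 1.
    for n in [2, 10, 100, 1000]:
        T = n * n / 2
        print(f"n={n:>5}   T(n)={T:>10.0f}   n^2={n * n:>10}   T(n) < n^2: {T < n * n}")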

Functions in o(n) and ω(1)

I was solving some questions and came across this one.
Give a function which is both in o(n) (little-oh) and in ω(1) (little-omega), or state that none exists.
I thought of functions like log n or sqrt(n).
However, I'm still not sure whether such a function exists. Does a constant function make any difference?
You are correct.
The proof is based on set theory:
o(n) = O(n) \ Θ(n)
ω(1) = Ω(1) \ Θ(1)
You are looking for something that is in the intersection of o(n) and ω(1).
log(n) is in O(n) and in Ω(1), and it is in neither Θ(n) nor Θ(1), so it is in the intersection and thus fits.
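If it helps, here is a small numeric illustration (not a proof) of why log n fits: the ratio log(n)/n heads to 0, while log(n) itself keeps growing.

    import math

    # log(n)/n -> 0 suggests log n is in o(n); log(n) -> infinity suggests omega(1).
    for n in [10, 10**3, 10**6, 10**9]:
        print(f"n={n:>12}   log(n) = {math.log(n):7.2f}   log(n)/n = {math.log(n) / n:.2e}")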

If algorithm time complexity is theta(n^2), is it possible that for one input it will run in O(n)?

If algorithm time complexity is theta(n^2), is it possible that for one input it will run in O(n)?
By the definition of Θ it seems that no input can run in O(n); however, some say that it's possible.
I really can't think of a scenario in which an algorithm that runs in Θ(n^2) could have one input that runs in O(n).
If it's true, can you please explain it to me and give me an example?
Thanks a lot!
I think your terminology is tripping you up.
An algorithm cannot be "Θ(n^2)." Theta notation describes the growth rates of functions. You can say that an algorithm's runtime is Θ(n^2), in which case the algorithm cannot run in time O(n) on any inputs, or you could say that an algorithm's worst-case runtime is Θ(n^2), in which case it could conceivably be possible that the algorithm will run in time O(n) for some inputs (take, for example, insertion sort).
Hope this helps!
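To make the insertion sort example above concrete, here is a minimal sketch that counts comparisons (the function name and the counter are just for illustration):

    # Insertion sort: worst-case runtime is Theta(n^2), but an already-sorted
    # input makes the inner loop exit immediately, so that run costs only O(n).
    def insertion_sort(a):
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1
                if a[j] <= key:
                    break
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return comparisons

    n = 1000
    print(insertion_sort(list(range(n))))          # sorted input: n - 1 comparisons
    print(insertion_sort(list(range(n, 0, -1))))   # reversed input: ~n^2 / 2 comparisons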
If algorithm time complexity is theta(n^2), is it possible that for one input it will run in O(n)?
No. Here's why. Let's say that the running time of your algorithm is f(n). Since f(n) = Θ(n^2), there are constants c0 > 0 and n0 > 0 such that c0*n^2 <= f(n) for every n >= n0. Suppose, for contradiction, that f(n) = O(n). This would mean that for some constants c1 > 0, n1 > 0 we would have f(n) <= c1*n for every n >= n1. Then for n >= max(n0, n1) we would have
c0*n^2 <= f(n) <= c1*n, hence c0*n <= c1, which is false for n > c1/c0. Contradiction.
Informally, you can always think of O as <= and Θ as = (and of Ω as >=). So you can reformulate your problem as:
if something is equal to n^2, is it at most n?
My understanding is that while Big-O only asserts an upper bound, Big-Theta asserts both an upper bound and a lower bound. By definition, if something performs in Θ(n^2), there is no input for which the performance is Θ(n).
Note: these all refer to asymptotic complexity. Algorithms can perform differently on smaller inputs, i.e., an algorithm that runs in Θ(n^2) might outperform (on smaller inputs) something that runs in Θ(n) because of the hidden constant factors.
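As a toy illustration of that last point (the constants 2 and 100 below are made up purely for the example):

    # A "slow" Theta(n^2) algorithm with a small constant can beat a "fast"
    # Theta(n) algorithm with a large constant until n gets big enough.
    for n in [10, 25, 50, 100, 200]:
        print(f"n={n:>3}   2*n^2 = {2 * n**2:>6}   100*n = {100 * n:>6}")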

Analysis of an algorithm (N^2)

I need to run an algorithm with worst-case runtime Θ(n^2).
After that, I need to run an algorithm 5 times, with a runtime of Θ(n^2) each time it runs.
What is the combined worst-case runtime of these algorithms?
In my head, the formula will look something like this:
( N^2 + (N^2 * 5) )
But when I've to analyse it in theta notation my guess is that it runs in Θ(n^2) time.
Am I right?
Two times O(N^2) is still O(N^2), five times O(N^2) is still O(N^2), ten times O(N^2) is still O(N^2), any number of times O(N^2) is still O(N^2), as long as that number is a constant.
The same answer holds for Θ instead of O.
It is O(n^2) regardless because what you have is basically O(6n^2), which is still O(n^2) because you can ignore the constant. What you're looking at is something that belongs to a set of functions and not the function itself.
Essentially, 6n^2 ∈ O(n^2).
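A trivial check of that statement (illustrative only):

    # n^2 + 5*n^2 is always exactly 6*n^2, so its ratio to n^2 is the constant 6,
    # which is precisely what lets the total stay inside O(n^2) / Theta(n^2).
    for n in [10, 100, 1_000, 10_000]:
        total = n**2 + 5 * n**2
        print(n, total / n**2)   # always 6.0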
EDIT
You asked about Θ as well. Θ gives you both a lower and an upper bound, whereas O gives you the upper bound only; the lower bound alone is Ω. Θ(f(n)) is the intersection of O(f(n)) and Ω(f(n)).
Anything that is Θ(f(n)) is also O(f(n)), but not the other way round.

Big O when adding together different routines

Let's say I have a routine that scans an entire list of n items 3 times, does a sort based on the size, and then bsearches that sorted list n times. The scans are O(n) time, the sort I will call O(n log(n)), and the n bsearches are O(n log(n)) in total. If we add all 3 together, does it just give us the worst case of the 3 - the n log(n) value(s) - or do the semantics allow the times to be added?
Pretty sure, now that I type this out, that the answer is n log(n), but I might as well confirm now that I have it typed out :)
The sum is indeed the worst of the three for Big-O.
The reason is that your function's time complexity is
(n) => 3n + nlogn + nlogn
which is
(n) => 3n + 2nlogn
This function is bounded above by 3nlogn for sufficiently large n, so it is in O(n log n).
You can choose any constant. I just happened to choose 3 because it was a good asymptotic upper bound.
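If you want to see the constant at work, here is a quick numeric check of that bound (natural log used here; the base of the logarithm does not affect the O-classification):

    import math

    # 3n + 2*n*log(n) <= 3*n*log(n) as soon as log(n) >= 3, i.e. n >= e^3 (about 21).
    for n in [10, 21, 100, 10_000]:
        lhs = 3 * n + 2 * n * math.log(n)
        rhs = 3 * n * math.log(n)
        print(f"n={n:>6}   3n + 2n log n = {lhs:>11.1f}   3n log n = {rhs:>11.1f}   holds: {lhs <= rhs}")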
You are correct. When n gets really big, the n log(n) dominates 3n.
Yes, it will just be the worst case, since O-notation is only about asymptotic performance.
This should of course not be taken to mean that adding these extra steps will have no effect on your program's performance. One of the O(n) steps could easily consume a huge portion of your execution time for the given range of n where your program operates.
As Ray said, the answer is indeed O(n log(n)). The interesting part of this question is that it doesn't matter which way you look at it: whether adding means "actual addition" or "the worst case". Let's prove that these two ways of looking at it produce the same result.
Let f(n) and g(n) be functions, and without loss of generality suppose f is O(g). (Informally, that g is "worse" than f.) Then by definition, there exist constants M and k such that f(n) < M*g(n) whenever n > k. If we look at it in the "worst case" way, we expect that f(n) + g(n) is O(g(n)). Now looking at it in the "actual addition" way, and specializing to the case n > k, we have f(n) + g(n) < M*g(n) + g(n) = (M+1)*g(n), and so by definition f(n) + g(n) is O(g(n)), as desired.
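To tie the general proof back to this question, here is a small sketch that instantiates it with concrete functions; f(n) = 3n, g(n) = 2n*log2(n), M = 1 and k = 2 are illustrative choices, not the only valid ones.

    import math

    # Instantiate the lemma: with f(n) = 3n, g(n) = 2n*log2(n), M = 1 and k = 2,
    # we have f(n) < M*g(n) for n > k, and therefore f(n) + g(n) < (M + 1)*g(n).
    def f(n): return 3 * n
    def g(n): return 2 * n * math.log2(n)

    M, k = 1, 2
    for n in [3, 10, 100, 1_000]:
        assert f(n) < M * g(n)                  # f is O(g) beyond n = k
        assert f(n) + g(n) < (M + 1) * g(n)     # so f + g is O(g) as well
        print(n, f(n) + g(n), (M + 1) * g(n))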
