Big O when adding together different routines - algorithm

Let's say I have a routine that scans an entire list of n items 3 times, does a sort based on the size, and then binary-searches that sorted list n times. The scans are O(n) time, the sort I will call O(n log n), and the n binary searches are O(n log n) in total. If we add all 3 together, does it just give us the worst case of the 3 (the n log n terms), or do the semantics allow the times to be added?
Pretty sure, now that I type this out, that the answer is n log n, but I might as well confirm now that I have it typed out :)
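For reference, here is a minimal Python sketch of the routine described above (the helper names and what each scan computes are illustrative assumptions, not taken from any real code):

import bisect

def process(items):
    # Three full scans of the list: 3 * O(n), which is O(n).
    total = sum(items)
    largest = max(items)
    smallest = min(items)

    # One comparison sort: O(n log n).
    ordered = sorted(items)

    # n binary searches over the sorted list: n * O(log n) = O(n log n).
    positions = [bisect.bisect_left(ordered, x) for x in items]

    return total, largest, smallest, positions

Adding the three pieces gives O(n) + O(n log n) + O(n log n), which, as the answers below explain, is O(n log n) overall.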

The sum is indeed the worst of the three for Big-O.
The reason is that your function's time complexity is
f(n) = 3n + n log n + n log n
which is
f(n) = 3n + 2n log n
This function is bounded above by 3n log n once n is large enough, so it is in O(n log n).
You can choose any constant that makes the bound hold eventually; I just happened to choose 3 because it gives a clean asymptotic upper bound.
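To see where the constant 3 starts working, here is a quick throwaway check (base-2 logs assumed) that 3n + 2n log n falls below 3n log n once n ≥ 8:

import math

for n in [4, 8, 16, 1024, 10**6]:
    f = 3 * n + 2 * n * math.log2(n)
    bound = 3 * n * math.log2(n)
    print(n, f <= bound)   # False at n = 4, True from n = 8 onward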

You are correct. When n gets really big, the n log(n) dominates 3n.

Yes, it will just be the worst case, since O-notation is only about asymptotic performance.
This should of course not be taken to mean that adding these extra steps will have no effect on your program's performance. One of the O(n) steps could easily consume a huge portion of your execution time for the range of n where your program actually operates.

As Ray said, the answer is indeed O(n log n). The interesting part of this question is that it doesn't matter which way you look at it: whether adding means "actual addition" or "take the worst case". Let's prove that these two ways of looking at it produce the same result.
Let f(n) and g(n) be functions, and without loss of generality suppose f is O(g). (Informally, g is "worse" than f.) Then by definition, there exist constants M and k such that f(n) < M*g(n) whenever n > k. If we look at it in the "worst case" way, we expect that f(n) + g(n) is O(g(n)). Now looking at it in the "actual addition" way, and specializing to the case where n > k, we have f(n) + g(n) < M*g(n) + g(n) = (M+1)*g(n), and so by definition f(n) + g(n) is O(g(n)), as desired.
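A quick numeric check of that argument, with the illustrative choices f(n) = 3n, g(n) = 2n log n, M = 2, and k = 1 (none of these constants come from the original post):

import math

f = lambda n: 3 * n
g = lambda n: 2 * n * math.log2(n)
M, k = 2, 1

for n in [2, 10, 10**6]:                    # all n > k
    assert f(n) < M * g(n)                  # f is O(g) with these constants
    assert f(n) + g(n) < (M + 1) * g(n)     # hence f + g is O(g)
print("both inequalities hold for the sampled n")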

Related

How is O(N) algorithm also an O(N^2) algorithm?

I was reading about Big-O Notation
So, any algorithm that is O(N) is also an O(N^2) algorithm?
It seems confusing to me; I know that Big-O gives an upper bound only.
But how can an O(N) algorithm also be an O(N^2) algorithm?
Is there any examples where it is the case?
I can't think of any.
Can anyone explain it to me?
"Upper bound" means the algorithm takes no longer than (i.e. <=) that long (as the input size tends to infinity, with relevant constant factors considered).
It does not mean it will ever actually take that long.
Something that's O(n) is also O(n log n), O(n^2), O(n^3), O(2^n) and also anything else that's asymptotically bigger than n.
If you're comfortable with the relevant mathematics, you can also see this from the formal definition.
O notation can be naively read as "less than".
With numbers, if I tell you x < 4, then obviously x < 5 and x < 6 and so on.
O(n) means that, if the input size of an algorithm is n (n could be the number of elements, or the size of an element or anything else that mathematically describes the size of the input) then the algorithm runs "about n iterations".
More formally it means that the number of steps x in the algorithm satisfies that:
x < k*n + C where k and C are real positive numbers
In other words, for all possible inputs, if the size of the input is n, then the algorithm executes no more than k*n + C steps.
O(n^2) is similar, except the bound is k*n^2 + C. Since n is a natural number, n^2 >= n, so the definition still holds: because x < k*n + C, it is also true that x < k*n^2 + C.
So an O(n) algorithm is an O(n^2) algorithm, and an O(n^3) algorithm, and an O(n^n) algorithm, and so on.
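As a small illustration (the linear_search function and the constants k = 1, C = 1 are made up for this example), the step count of a plain linear scan satisfies the k*n + C bound and therefore also the k*n^2 + C bound:

def linear_search(items, target):
    steps = 0
    for value in items:        # at most n iterations
        steps += 1
        if value == target:
            break
    return steps

k, C = 1, 1
for n in [1, 10, 100, 1000]:
    steps = linear_search(list(range(n)), -1)              # worst case: target is absent
    print(n, steps <= k * n + C, steps <= k * n * n + C)   # both bounds hold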
For something to be O(N), it means that for large N, it is less than the function f(N)=k*N for some fixed k. But it's also less than k*N^2. So O(N) implies O(N^2), or more generally, O(N^m) for all m>1.
*I assumed that N>=1, which is indeed the case for large N.
Big-O notation describes an upper bound, so it is not wrong to say that O(n) is also O(n^2). O(n) algorithms are a subset of O(n^2) algorithms. It's the same as squares being a subset of rectangles: every square is a rectangle, but not every rectangle is a square. So technically it is correct to say that an O(n) algorithm is an O(n^2) algorithm, even if it is not precise.
Definition of big-O:
Some function f(x) is O(g(x)) iff |f(x)| <= M|g(x)| for all x >= x0.
Clearly if g1(x) <= g2(x) then |f(x)| <= M|g1(x)| <= M|g2(x)|.
An algorithm with just a single loop will be O(n), and an algorithm with a nested loop will be O(n^2).
Now consider the bubble sort algorithm, which uses a nested loop.
If we give an already sorted set of inputs to a flag-optimized bubble sort, no swaps happen in the first pass and it stops right away, so for a scenario like this it takes O(n), and for the other cases it takes O(n^2).
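For concreteness, here is a sketch of the flag-optimized bubble sort described above, counting comparisons so the two scenarios are visible:

def bubble_sort(items):
    a = list(items)
    comparisons = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:        # no swaps in a full pass: the input is already sorted
            break
    return a, comparisons

print(bubble_sort(list(range(10)))[1])         # already sorted: 9 comparisons, about n
print(bubble_sort(list(range(10, 0, -1)))[1])  # reversed: 45 comparisons, about n^2/2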

Big-O notation confusion

I have come across some difficulties while doing this question. The question is: sort the following functions by order of growth from slowest to fastest:
7n^3 − 10n, 4n^2, n, n^8621909, 3n, 2^(log log n), n log n, 6n log n, n!, 1.1^n
And my answer to the question is
n,3n
nlogn, 6nlogn
4n^2 (which is of the same order as n^2)
7n^3 – 10n (which is of the same order as n^3)
n^8621909
2^(log log n)
1.1^n (which is about 2^(0.1376n))
n!
Just wondering: can I assume that 2^(loglogn) has the same growth as 2^n? Should I take 1.1^n as a constant?
Just wondering: can I assume that 2^(log log n) has the same growth as 2^n?
No. Assuming that the logs are in base 2, 2^log(n) mathematically equals n, so 2^(log(log(n))) = log(n). And of course that does not have the same growth as 2^n.
Should I take 1.1^n as a constant?
No. 1.1^n is still exponential in n and cannot be ignored - of course it is not a constant.
The correct order is:
2^loglogn = log(n)
n,3n
nlogn, 6nlogn
4n^2
7n^3 – 10n
n^8621909
1.1^n
n!
I suggest taking a look at the formal definition of Big-O notation, but, simply put, the functions at the top of the list go to infinity more slowly than the ones lower down.
If, for example, you plot two of those functions on a graph, you will see that one function eventually passes the other and goes to infinity faster.
Take a look at this graph comparing n^2 with 2^n. You will notice that 2^n eventually passes n^2 and goes to infinity faster.
You might also want to check the graph in this page.
2^log(x) = x, so 2^(log(log(n))) = log(n), which grows far more slowly than 2^n.
1.1^n is definitely not a constant. 1.1^n tends to infinity, and a constant obviously does not.
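A quick numeric sanity check of that identity, with base-2 logs assumed (as in the answers above):

import math

for n in [2**4, 2**16, 2**64]:
    lhs = 2 ** math.log2(math.log2(n))
    print(n, lhs, math.log2(n))   # 2^(log log n) equals log n, nowhere near 2^n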

How can an algorithm have two worst case complexities?

Steven Skiena's The Algorithm design manual's chapter 1 exercise has this question:
Let P be a problem. The worst-case time complexity of P is O(n^2) .
The worst-case time complexity of P is also Ω(n log n) . Let A be an
algorithm that solves P. Which subset of the following statements are
consistent with this information about the complexity of P?
A has worst-case time complexity O(n^2) .
A has worst-case time complexity O(n^3/2).
A has worst-case time complexity O(n).
A has worst-case time complexity Θ(n^2).
A has worst-case time complexity Θ(n^3).
How can an algorithm have two worst-case time complexities?
Is the author trying to say that for some value of n (say e.g. 300) the upper bound for an algorithm solving P is of the order of O(n^2), while for another value of n (say e.g. 3000) the same algorithm's worst case is Ω(n log n)?
The answer to your specific question
is the author trying to say that for some value of n (say e.g. 300) the upper bound for an algorithm solving P is of the order of O(n^2), while for another value of n (say e.g. 3000) the same algorithm's worst case is Ω(n log n)?
is no. That is not how complexity functions work. :) We don't talk about different complexity classes for different values of n. The complexity refers to the entire algorithm, not to the algorithm at specific sizes. An algorithm has a single time complexity function T(n), which computes how many steps are required to carry out the computation for an input size of n.
In the problem, you are given two pieces of information:
The worst case complexity is O(n^2)
The worst case complexity is Ω(n log n)
All this means is that we can pick constants c1, c2, N1, and N2, such that, for our algorithm's function T(n), we have
T(n) ≤ c1*n^2 for all n ≥ N1
T(n) ≥ c2*n log n for all n ≥ N2
In other words, our T(n) is "asymptotically bounded below" by some constant times n log n and "asymptotically bounded above" by some constant times n^2. It can itself be anything "between" an n log n style function and an n^2 style function. It can even be n log n (since that is bounded above by n^2), or it can be n^2 (since that is bounded below by n log n). It can be something in between, like n(log n)(log n).
It's not so much that an algorithm has "multiple worst-case complexities" in the sense of having different behaviors. What you are seeing is an upper bound and a lower bound! And these can, of course, be different.
Now it is possible that you have some "weird" function like this:
import math

def p(n):
    if n % 2 == 0:
        print('*' * int(n * math.log2(n)))   # even n: prints about n log n stars
    else:
        print('*' * (n * n))                 # odd n: prints n^2 stars
This crazy algorithm does have the bounds specified in the problem from the Skiena book. And it has no Θ complexity. That might have been what you were thinking about, but do note that it is not necessary for a complexity function to be this weird in order for us to say the upper and lower bounds differ. The thing to remember is that upper and lower bounds are not tight unless explicitly stated to be so.
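To see the two stated bounds in action on that function, here is a quick throwaway check of its step count (counting one step per star printed, with the illustrative constants c1 = c2 = 1):

import math

def star_count(n):
    # stars printed by p(n): n log n for even n, n^2 for odd n
    return n * math.log2(n) if n % 2 == 0 else n * n

for n in [10, 11, 1000, 1001]:
    t = star_count(n)
    print(n, t <= 1 * n * n, t >= 1 * n * math.log2(n))   # upper and lower bounds both hold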

Why is the worst case time complexity of this simple algorithm T(n/2) +1 as opposed to n^2+T(n-1)?

The following question was on a recent assignment in University. I would have thought the answer would be n^2 + T(n-1), as I thought the n^2 term would make its asymptotic time complexity O(n^2), whereas with T(n/2) + 1 its asymptotic time complexity would be O(log2(n)).
The answers were returned and it turns out the correct answer is T(n/2) + 1, but I can't get my head around why this is the case.
Could someone possibly explain to me why that's the worst case time complexity of this algorithm? It's possible my understanding of time complexity is just wrong.
The asymptotic time complexity is about what happens as n gets large. In the case of your example, since the question specifies that k is fixed, the only case that is relevant is the last one (see the formal definition of Big-O on Wikipedia).
As n grows to infinity, the recurrence that dominates is T(n) = T(n/2) + 1. You can prove this as well using the formal definition, basically picking x_0 = 10*k and showing that a finite M can be found using the first two cases. It should be clear that both log(n) and n^2 satisfy the definition, so the tighter bound is the asymptotic complexity.
What does O(f(n)) mean? It means the time is at most c * f(n), for some unknown and possibly large c.
kevmo claimed a complexity of O(log2 n). Well, you can check all the values n ≤ 10k, and let the largest value of T(n) be X. X might be quite large (about 167 k^3 in this case, I think, but it doesn't actually matter). For larger n, the time needed is at most X + log2(n). Choose c = X, and this is always less than c * log2(n).
Of course people usually assume that a O (log n) algorithm would be quick, and this one most certainly isn't if say k = 10,000. So you learned as well that O notation must be handled with care.
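For intuition, here is a throwaway sketch of a step counter following the recurrence T(n) = T(n/2) + 1, with everything at or below the 10*k threshold collapsed to a single step (the value of k and that base cost are made-up placeholders):

import math

def T(n, k=100):
    if n <= 10 * k:           # below the threshold: some fixed cost (here just 1 step)
        return 1
    return T(n // 2, k) + 1   # the dominant case: T(n) = T(n/2) + 1

for n in [10**4, 10**6, 10**9]:
    print(n, T(n), round(math.log2(n)))   # T(n) tracks log2(n) up to an additive constant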

Which algorithm is faster O(N) or O(2N)?

Talking about Big O notations, if one algorithm time complexity is O(N) and other's is O(2N), which one is faster?
The definition of big O is:
O(f(n)) = { g | there exist N and c > 0 such that g(n) < c * f(n) for all n > N }
In English, O(f(n)) is the set of all functions that have an eventual growth rate less than or equal to that of f.
So O(n) = O(2n). Neither is "faster" than the other in terms of asymptotic complexity. They represent the same growth rates - namely, the "linear" growth rate.
Proof:
O(n) is a subset of O(2n): Let g be a function in O(n). Then there are N and c > 0 such that g(n) < c * n for all n > N. So g(n) < (c / 2) * 2n for all n > N. Thus g is in O(2n).
O(2n) is a subset of O(n): Let g be a function in O(2n). Then there are N and c > 0 such that g(n) < c * 2n for all n > N. So g(n) < 2c * n for all n > N. Thus g is in O(n).
Typically, when people refer to an asymptotic complexity ("big O"), they refer to the canonical forms. For example:
logarithmic: O(log n)
linear: O(n)
linearithmic: O(n log n)
quadratic: O(n^2)
exponential: O(c^n) for some fixed c > 1
(Here's a fuller list: Table of common time complexities)
So usually you would write O(n), not O(2n); O(n log n), not O(3 n log n + 15 n + 5 log n).
Timothy Shield's answer is absolutely correct, that O(n) and O(2n) refer to the same set of functions, and so one is not "faster" than the other. It's important to note, though, that faster isn't a great term to apply here.
Wikipedia's article on "Big O notation" uses the term "slower-growing" where you might have used "faster", which is better practice. These algorithms are defined by how they grow as n increases.
One could easily imagine an O(n^2) function that is faster than an O(n) function in practice, particularly when n is small or if the O(n) function requires a complex transformation. The notation indicates that for twice as much input, one can expect the O(n^2) function to take roughly 4 times as long as it did before, whereas the O(n) function would take roughly twice as long as it did before.
It depends on the constants hidden by the asymptotic notation. For example, an algorithm that takes 3n + 5 steps is in the class O(n). So is an algorithm that takes 2 + n/1000 steps. But 2n is less than 3n + 5 and more than 2 + n/1000...
It's a bit like asking if 5 is less than some unspecified number between 1 and 10. It depends on the unspecified number. Just knowing that an algorithm runs in O(n) steps is not enough information to decide if an algorithm that takes 2n steps will complete faster or not.
Actually, it's even worse than that: you're asking if some unspecified number between 1 and 10 is larger than some other unspecified number between 1 and 10. The sets you pick from being the same doesn't mean the numbers you happen to pick will be equal! O(n) and O(2n) are sets of algorithms, and because the definition of Big-O cancels out multiplicative factors they are the same set. Individual members of the sets may be faster or slower than other members, but the sets are the same.
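A tiny numeric version of that point, using the step counts mentioned above (all three are members of O(n)):

def steps_a(n): return 3 * n + 5        # in O(n)
def steps_b(n): return 2 + n / 1000     # also in O(n)
def steps_c(n): return 2 * n            # the "O(2n)" algorithm, also in O(n)

for n in [10, 1000, 10**6]:
    print(n, steps_b(n) < steps_c(n) < steps_a(n))   # 2n sits between the other two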
Theoretically O(N) and O(2N) are the same.
But practically, an algorithm that takes N steps will finish sooner than one that takes 2N steps; the difference is only a constant factor, though, not a difference in growth rate, so both belong to the same class.
N and 2N show a noticeable difference for small values of N, but as N increases the linear term dominates the growth and the coefficient 2 becomes insignificant, so we state the algorithm's complexity as O(N).
Example:
Let's take this function
T(n) = 3n^2 + 8n + 2089
For n = 1 or 2, the constant 2089 seems to be the dominant part of the function, but for larger values of n we can ignore the constant and the 8n term and concentrate on 3n^2, as it contributes the most to the growth. As n increases further, the coefficient 3 also becomes insignificant, and we can say the complexity is O(n^2).
For detailed explanation refer here
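Plugging a few values of n into that function makes the point visible (a quick throwaway check of how much of the total comes from the 3n^2 term):

for n in [1, 10, 100, 10000]:
    total = 3 * n * n + 8 * n + 2089
    print(n, total, round(3 * n * n / total, 3))   # the 3n^2 share grows toward 1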
O(n) is faster; however, you need to understand that when we talk about Big O, we are measuring the complexity of a function/algorithm, not its speed, and we measure this complexity asymptotically. In layman's terms, when we talk about asymptotic analysis, we take immensely large values for n. If you plot the graphs of n and 2n, the two curves stay within a constant factor of each other for every value of n. They are much closer to each other than to the other canonical forms like O(n log n) or O(1), so by convention we reduce the complexity to the canonical form O(n).

Resources