If I have an algorithm with the running time T(n) = 5n^4/100000 + n^3/100, I know that I get Θ(n^4).
Now, if I have something like T(n) = (10n^2 + 20n^4 + 100n^3)/(n^4), does this yield Θ(n^3)?
I am trying to eliminate low-order terms to use the Substitution method to prove this.
Big-Theta means that the growth is both big-O and big-Omega.
So the first case in your question is Θ(n^4), not Θ(n^3), since 5n^4/100000 + n^3/100 belongs to O(n^4) but not to O(n^3).
The second case: dividing through by n^4 gives T(n) = 10/n^2 + 20 + 100/n.
Thus it's Θ(1), because the result is both O(1) and Ω(1): every term except the constant 20 tends to zero as n grows.
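A quick numeric check makes this visible; here is a minimal Python sketch (my illustration, not part of the original answer) that evaluates the expression for growing n:

def T(n):
    return (10 * n**2 + 20 * n**4 + 100 * n**3) / n**4

for n in [10, 100, 1000, 10**6]:
    print(n, T(n))   # 30.1, 21.001, 20.10001, ... flattening out at the constant 20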
Why can Big-O notation not compare algorithms in the same complexity class? Please explain; I cannot find any detailed explanation.
So, O(n^2) says that the algorithm requires at most that order of operations. Suppose you have algorithm A, which requires f(n) = 1000n^2 + 2000n + 3000 operations, and algorithm B, which requires g(n) = n^2 + 10^20 operations. They're both O(n^2).
For small n the first algorithm performs better than the second one. For big n the second algorithm looks better, since its leading term is 1 * n^2 while the first has 1000 * n^2.
Also, h(n) = n is O(n^2), and k(n) = 5 is O(n^2) too. So I can say that k(n) is better than h(n), because I know what these functions look like.
Now consider the case where I don't know what k(n) and h(n) look like. The only thing I'm given is that k(n) is O(n^2) and h(n) is O(n^2). Can I say which function is better? No.
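To make the A-versus-B comparison concrete, here is a small Python sketch (my addition; the names f and g just mirror the operation counts above) that finds where B starts to beat A:

def f(n): return 1000 * n**2 + 2000 * n + 3000   # algorithm A
def g(n): return n**2 + 10**20                   # algorithm B

n = 1
while f(n) <= g(n):   # double n until B needs fewer operations than A
    n *= 2
print(n)   # 536870912: both are O(n^2), yet B only wins past n of roughly 5 * 10^8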
Summary
You can't say which function is better, because Big-O stands for "less than or equal". And the following is true:
O(1) is O(n^2)
O(n) is O(n^2)
How to compare functions?
There is also Big-Omega notation, which stands for "greater than or equal": for example, f(n) = n^2 + n + 1 is Omega(n^2), Omega(n) and Omega(1). When a function grows exactly as fast as some asymptotic bound, Big-Theta is used, so for the f(n) described above we can say that:
f(n) is O(n^3)
f(n) is O(n^2)
f(n) is Omega(n^2)
f(n) is Omega(n)
f(n) is Theta(n^2) // this is the only way we can describe f(n) using Theta notation
So, to compare asymptotics of functions you need to use Theta instead of Big O or Omega.
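As a sanity check, this tiny Python sketch (my addition) shows why Theta(n^2) is the tight description of f(n) = n^2 + n + 1: the ratio to n^2 settles at a constant, while the ratios to n^3 and n go to zero and to infinity respectively:

def f(n): return n**2 + n + 1

for n in [10, 1000, 10**6]:
    print(f(n) / n**3, f(n) / n**2, f(n) / n)
# f(n)/n^3 -> 0        (n^3 is only an upper bound)
# f(n)/n^2 -> 1        (n^2 is tight: Theta)
# f(n)/n   -> infinity (n is only a lower bound)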
I know very well that the recursive Fib(n) has a time complexity of O(2^n). I am also able to arrive at that result by solving the following recurrence:
T(n) = T(n-1)+T(n-2).
But when I work through an example I get stuck. E.g., for n = 4, according to the recursive solution:
T(1) = u # some constant
T(2) = u # some constant
Now, T(4) = T(3) + T(2)
          = T(2) + T(1) + u
          = u + u + u
          = 3u
I was expecting the result to be 16u.
Big-O notation is related to asymptotic complexity, so we are interested in how the complexity grows for large n.
Big-O actually refers to an upper bound on the growth of a function. Formally, O(g) is the set of functions that grow no faster than k*g for some constant k > 0.
Let me give you a few examples of functions that are in O(2^n):
2^n
2^n - 1000000000000n
2^n - 100
2^n + 1.5^n + n^100
The fact that T(n) is in O(2^n) doesn't mean that the number of operations will be exactly 2^n.
It only means that the limit superior of the sequence |T(n)/2^n| as n -> inf is finite.
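This also resolves the 3u-versus-16u puzzle above: the true call count of the naive recursion sits well below 2^n, and O(2^n) only promises the upper bound. A small Python sketch (my addition) counts the actual calls:

def fib(n, counter):
    counter[0] += 1          # count every call
    if n <= 2:
        return 1
    return fib(n - 1, counter) + fib(n - 2, counter)

for n in [4, 10, 20, 30]:
    calls = [0]
    fib(n, calls)
    print(n, calls[0], 2**n)   # e.g. n=4 makes 5 calls, far fewer than 2^4 = 16
# The call count grows roughly like 1.618^n: exponential, but bounded by 2^n.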
We know that for some algorithm with a time complexity of, let's say, T(n) = n^2 + n + 1, we can drop the less significant terms and say that it has a worst case of O(n^2).
What about when we're in the middle of calculating the time complexity of an algorithm such as T(n) = 2T(n/2) + n + log(n)? Can we simply drop the less significant terms and say T(n) = 2T(n/2) + n = O(n log(n))?
In this case, yes, you can safely discard the dominated (log n) term. In general, you can do this any time you only need the asymptotic behaviour rather than the exact formula.
When you apply the Master theorem to solve a recurrence relation like
T(n) = a T(n/b) + f(n)
asymptotically, then you don't need an exact formula for f(n), just the asymptotic behaviour, because that's how the Master theorem works.
In your example, a = 2, b = 2, so the critical exponent is c = log_b(a) = 1. Then the Master theorem tells us that T(n) is in Θ(n log n), because f(n) = n + log n is in Θ(n^c) = Θ(n).
We would have reached the same conclusion using f(n) = n, because that's also in Θ(n). Applying the theorem only requires knowing the asymptotic behaviour of f(n), so in this context it's safe to discard dominated terms which don't affect f(n)'s asymptotic behaviour.
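A quick numeric check (my sketch, not from the original answer) unrolls the recurrence and compares it to n log n; the ratio stays bounded and creeps toward a constant, consistent with Θ(n log n):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1                             # base case: assumed T(1) = 1
    return 2 * T(n // 2) + n + math.log2(n)  # the recurrence from the question

for n in [2**10, 2**15, 2**20]:
    print(n, T(n) / (n * math.log2(n)))      # ratios slowly approach a constant
# Dropping the log2(n) term changes none of these ratios noticeably.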
First of all, you need to understand that T(n) = n^2 + n + 1 is a closed-form expression; in simple terms, that means you can plug in a value for n and get the value of the whole expression.
On the other hand, T(n) = 2T(n/2) + n + log(n) is a recurrence relation: the expression is defined recursively, and to get a closed-form expression you have to solve the recurrence.
Now, to answer your question: in general we drop lower-order terms and coefficients when we can clearly see the highest-order term; in T(n) = n^2 + n + 1 it's n^2. But in a recurrence relation there is no such highest-order term, because it's not a closed-form expression.
One thing to observe, though, is that the highest-order term in the closed form of a recurrence relation is the depth of the recursion tree multiplied by the highest-order term inside the recurrence. So in your case it would be depthOf(2T(n/2)) * n, which results in something like log(n) * n, and you can say that in terms of Big-O notation it's O(n log n).
Are these two equal? I read somewhere that O(2^(lg n)) = O(n). Going by this observation, I'm guessing the answer would be no, but I'm not entirely sure. I'd appreciate any help.
Firstly, O(2*log(n)) isn't equal to O(n).
To use big O notation, you would find a function that represents the complexity of your algorithm, then you would find the term in that function with the largest growth rate. Finally, you would eliminate any constant factors you could.
e.g. say your algorithm iterates 4n^2 + 5n + 1 times, where n is the size of the input. First, you would take the term with the highest growth rate, in this case 4n^2, then remove any constant factors, leaving O(n^2) complexity.
In your example, O(2*log(n)) can be simplified to O(log(n)).
Now on to your question.
In computer science, unless specified otherwise, you can generally assume that log(n) actually means the log of n, base 2.
This means, using log laws, that 2^log(n) can be simplified to O(n).
Proof:
y = 2^log(n)
log(y) = log(2^log(n))
log(y) = log(n) * log(2)   [log(2) = 1, since we are working in base 2]
log(y) = log(n)
y = n
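A two-line Python spot-check (my addition) confirms the identity numerically:

import math

for n in [1, 8, 100, 4096]:
    print(n, 2 ** math.log2(n))   # prints n back (up to floating-point rounding)
# So 2^log(n) grows exactly like n, i.e. it is Theta(n), hence also O(n).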
I'm studying for an exam which is mostly about time complexity. I've encountered a problem while solving these four questions.
1) If we prove that an algorithm has a time complexity of Theta(n^2), is it possible that it runs in O(n) time for ALL inputs?
2) If we prove that an algorithm has a time complexity of Theta(n^2), is it possible that it runs in O(n) time for SOME inputs?
3) If we prove that an algorithm has a time complexity of O(n^2), is it possible that it runs in O(n) time for SOME inputs?
4) If we prove that an algorithm has a time complexity of O(n^2), is it possible that it runs in O(n) time for ALL inputs?
Can anyone tell me how to answer such questions? I'm mostly confused when they ask about "all" or "some" inputs.
Thanks.
gkovacs90's answer provides a good link: WIKI
T(n) = O(n^3) means T(n) grows asymptotically no faster than n^3: a constant k > 0 exists such that for all n > N, T(n) < k*n^3.
T(n) = Θ(n^3) means T(n) grows asymptotically exactly as fast as n^3: two constants k1, k2 > 0 exist such that for all n > N, k1*n^3 < T(n) < k2*n^3.
So if T(n) = n^3 + 2n + 3,
then T(n) = Θ(n^3) is more appropriate than T(n) = O(n^3), since it gives us more information about the way T(n) behaves asymptotically.
T(n) = Θ(n^3) means that for n > N the curve of T(n) will "approach" and "stay close to" the curve of k*n^3, with k > 0.
T(n) = O(n^3) means that for n > N the curve of T(n) will always stay below the curve of k*n^3, with k > 0.
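For T(n) = n^3 + 2n + 3, one valid choice of witnesses is k1 = 1, k2 = 2 and N = 2; this Python sketch (my addition, with those constants as assumptions) checks the sandwich over a range of n:

def T(n): return n**3 + 2 * n + 3

# k1*n^3 < T(n) < k2*n^3 must hold for all n > N = 2
assert all(n**3 < T(n) < 2 * n**3 for n in range(3, 10**5))
print("1*n^3 < T(n) < 2*n^3 holds for 2 < n < 100000")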
1: No.
2: Yes. As gkovacs90 says, for small values of n you can have an O(n) time calculation, but for big enough inputs I would say no: the notations Theta and Big-O only mean something asymptotically.
3: Yes.
4: Yes.
Example for number 4 (dumb but still true): for an array A : Int[], compute the sum of the values. Your algorithm will certainly be:
def array_sum(A):           # A is an Int array
    total = 0
    for a in A:             # one addition per element
        total = total + a
    return total
If n is the length of the array A, the time complexity is T(n) = n. So T(n) = O(n^2), since T(n) will not grow faster than n^2. And still, for every array, the running time is O(n).
If you find such a result for a time (or memory) complexity, then you can (and certainly should) refine the Big-O / Theta bound of your function (here, obviously, we have Θ(n)).
Some last points:
T(n)=Θ(g(n)) implies T(n)=O(g(n)).
In computational complexity theory, the complexity is sometimes computed for best, worst and average cases.
A "barfoot" explanation:
Big-O notation is for setting an upper bound. By definition, there is always an index (or input length) from which the notation holds, and below this index anything can happen.
For example, sorting a one-element array (O(n^2)) takes less time than writing the elements to the output (O(n)): we don't actually sort, since we know the array is already in the right order, so it takes 0 time.
So the answers:
1: No
2: Yes
3: Yes
4: Yes
You can find a detailed, understandable description at WIKI.
And HERE you can find a simpler explanation.