I am working on a project for my class and would like someone to check whether my simplification of Big O notation is right:
n*O(log(n)) + n*O(log(n)) = 2n*O(log(n)) = n*O(log(n))
n*O(1) + n * O(n) = n*O(n)
Is my simplification correct, and can these be shortened any further?
I would really appreciate any help.
Since n is O(n), the first one is O(nlogn) and the second one is O(n^2).
The proof for n being O(n) can be done using the definition of O(n).
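For instance, spelling out that definition: f(n) = O(g(n)) means there are constants c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0. Taking f(n) = g(n) = n, the choice c = 1, n0 = 1 gives n <= 1*n for every n >= 1, so n = O(n). The same definition is what lets you drop the constant 2 in 2n*O(log(n)).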
Related
Assume that the worst-case runtime of an algorithm can be described as:
T(n) = O(n) + O(r^2) + O(n-r)
With n being the input size and r being the index at which a partition was created per the algorithm.
Can this equation be simplified further? If the variables were all n then it would be O(n^2) but can the same idea be applied when r is involved?
As O(n-r) is dominated by O(n), you can write T(n) = O(n) + O(r^2). Also, since you know that r is between 0 and n, you can write T(n) = O(n + r^2). Strictly speaking, though, the bound is a function of both variables: T(n,r) = O(n + r^2).
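To see why the O(n-r) term disappears, write out the hidden constants: if the three terms are at most c1*n, c2*r^2 and c3*(n-r), then, since 0 <= r <= n implies n - r <= n, the sum is at most (c1 + c3)*n + c2*r^2 <= (c1 + c2 + c3)*(n + r^2), which is O(n + r^2).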
I'm studying for an exam, and I've come across the following question:
Provide a precise (Θ notation) bound for the running time as a
function of n for the following function
for i = 1 to n {
    j = i
    while j < n {
        j = j + 4
    }
}
I believe the answer would be O(n^2). I'm certainly an amateur at the subject, but my reasoning is that the outer loop takes O(n) and the inner loop takes O(n/4), resulting in O(n^2/4); since the constant factor can be dropped, that simplifies to O(n^2).
Any clarification would be appreciated.
If you work it out using Sigma notation and obtain an exact expression that T(n) equals, then you get Big Theta.
If you only show that T(n) is less than or equal to the expression, it's Big O.
If you only show that T(n) is greater than or equal to it, it's Big Omega.
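Concretely, for the loop in the question: for each i the inner loop body runs about (n - i)/4 times, so
T(n) = sum over i = 1..n of ceil((n - i)/4), which is roughly (1/4)*(0 + 1 + ... + (n - 1)) = n(n - 1)/8 = Theta(n^2).
If you want a quick empirical check, here is a small Python sketch (not part of the exam question; the helper name is just for illustration):
def count_inner_steps(n):
    # Count how many times the body of the inner loop runs.
    ops = 0
    for i in range(1, n + 1):
        j = i
        while j < n:
            j = j + 4
            ops += 1
    return ops

for n in (100, 1000, 10000):
    print(n, count_inner_steps(n) / (n * n))   # the ratio settles near 1/8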
I'm taking Data Structures and Algorithm course and I'm stuck at this recursive equation:
T(n) = log(n)*T(log(n)) + n
Obviously this can't be handled with the Master Theorem, so I was wondering if anybody has any ideas for solving this recurrence. I'm pretty sure it should be solved with a change of variable, like considering n to be 2^m, but I couldn't manage to find a good substitution.
The answer is Theta(n). To prove something is Theta(n), you have to show it is both Omega(n) and O(n). Omega(n) is obvious here because T(n) >= n. To show that T(n) = O(n), first make two choices:
Pick a large finite value N such that log(n)^2 < n/100 for all n > N. This is possible because log(n)^2 = o(n).
Pick a constant C > 100 such that T(n) < Cn for all n <= N. This is possible because N is finite.
We will show inductively that T(n) < Cn for all n > N. Since log(n) < n, we get T(log(n)) < C*log(n) (from the choice of C if log(n) <= N, and from the induction hypothesis otherwise), so:
T(n) = n + log(n) * T(log(n))
     < n + log(n) * C * log(n)
     = n + C * log(n)^2
     < n + (C/100) * n
     = C * (1/100 + 1/C) * n
     < (C/50) * n
     < C * n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
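As a numerical sanity check on that claim, you can iterate the recurrence directly. Note this is only a sketch: the base case T(n) = 1 for n <= 2 is an assumption (the question doesn't specify one), and any constant base case gives the same asymptotics.
import math

def T(n):
    # Assumed base case: constant work for tiny inputs.
    if n <= 2:
        return 1.0
    return math.log(n) * T(math.log(n)) + n

for n in (1e3, 1e6, 1e9, 1e12):
    print(int(n), T(n) / n)   # the ratio approaches 1, consistent with T(n) = n + o(n)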
This is by no means an official proof but I think it goes like this.
The key is the + n part. Because of it, T(n) >= n, so T is bounded below by n; that's the Big Omega direction. So let's assume that T(n) = O(n) and have a go at that.
Substitute into the original relation
T(n) = (log n)O(log n) + n
= O(log^2(n)) + O(n)
= O(n)
So it still holds.
Can anyone help me verify the following complexities:
10^12 = O(1)?
2^(n+3) + log(n) = O(2^n)?
f(n) = Omega(n) and f(n) = Theta(n) <=> f(n) = O(n)
thanks
The first two are right, the last is wrong.
In particular, any value with no variable attached is a constant and therefore O(1). As for why you're correct on the second: 2^n strictly beats log(n) asymptotically, and 2^(n+3) is equivalent to 8*2^n, i.e. O(1)*O(2^n) = O(2^n). It's generally best to reduce big-O expressions to the simplest-looking correct form.
The third statement is wrong because f(n) = O(n) does not imply either of the first two (Omega(n) or Theta(n)): for example, f(n) = 1 is O(n) but neither Omega(n) nor Theta(n).
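For the second one you can even exhibit the constants explicitly: 2^(n+3) + log(n) = 8*2^n + log(n) <= 8*2^n + 2^n = 9*2^n for all n >= 1, so c = 9 and n0 = 1 witness that the expression is O(2^n).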
Please help me with the following two functions; I need to simplify them.
O(nlogn + n^1.01)
O(log (n^2))
My current idea is
O(nlogn + n^1.01) = O(nlogn)
O(log (n^2)) = O (log (n^2))
Please kindly help me on these two simplification problems and briefly give an explanation, thanks.
For the second, you have O(lg(n²)) = O(2lg(n)) = O(lg(n)).
For the first, you have O(n*lg(n) + n^(1.01)) = O(n*(lg(n) + n^(0.01))); you have to decide whether lg(n) or n^(0.01) grows faster.
For that purpose, you can look at the derivative of n^(0.01) - lg(n), which is 0.01/x^(0.99) - 1/x, and check its sign for large x: the ratio of the two terms is 0.01*x^(0.01), which goes to infinity, so the difference is eventually positive. Thus n^(0.01) grows asymptotically faster than lg(n), and the complexity is O(n^1.01).
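If you want to see that crossover numerically, here is a small Python sketch (it uses base-2 logs and works with the exponent k, writing n = 2^k, so the numbers stay representable; the exact crossover point depends on the log base, but the conclusion doesn't):
for k in (100, 500, 1000, 2000, 5000):
    log2_n = k                  # log2(2**k) = k
    n_pow = 2.0 ** (0.01 * k)   # (2**k)**0.01 = 2**(0.01 * k)
    print(k, log2_n, n_pow)     # n**0.01 overtakes log2(n) around k = 1000, i.e. n = 2**1000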
Remember:
log (x * y) = log x + log y
and n^k always grows faster than log n for any k>0.
Putting things together, for the first question O(n*log(n) + n^1.01): by the rule above, n^(0.01) eventually grows faster than log(n), so n^1.01 dominates n*log(n) for large enough n (even though n*log(n) is bigger for small n), and the whole expression is O(n^1.01).
In the second case use the formula mentioned by KennyTM, so we get
O(log(n^2)) = O(log(n*n)) = O(log(n)+log(n)) = O(2*log(n)) = O(log(n))
because constant terms can be ignored.