This may be a naive question, but I am new to the concept of Big-O notation and complexity and could not find an answer for this. I am dealing with a problem for which the algorithm checks a condition (2n + 1)! times. Can I say that the complexity of the problem is O(n!), or is the complexity O((2n + 1)!)?
Use Stirling's approximation:
n! ~ (n / e)^n * sqrt(2 * pi * n)
Then
(2n + 1)! ~ ((2n + 1) / e)^(2n + 1) * sqrt(2 * pi * (2n + 1))
>= (2n / e)^(2n) * sqrt(2 * pi * 2n)
= 2^(2n) * (n / e)^(2n) * sqrt(2) * sqrt(2 * pi * n)
= sqrt(2) * (2^n)^2 * ((n / e)^n)^2 * sqrt(2 * pi * n)
And now it's pretty clear why there's no hope of O((2n + 1)!) being O(n!): the exponential factors are way worse. It's more like O((2n + 1)!) is O((n!)^2).
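A quick numeric check (a Python sketch, not part of the argument) makes the gap concrete: the ratio (2n + 1)!/n! blows up super-exponentially, while (2n + 1)!/(n!)^2 grows far more slowly (roughly 4^n times a polynomial).

    from math import factorial

    # Compare (2n+1)! against n! and (n!)^2 for a few small n.
    for n in range(1, 8):
        f = factorial(n)
        big = factorial(2 * n + 1)
        print(n, big // f, big // (f * f))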
Let (N,c) be any ordered pair of positive constants. Let n be any integer such that n>N and n>c.
Then (2n+1)! > (2n+1)*n! > cn!
Thus for any pair of positive constants (N,c) there exists n>N such that (2n+1)! > cn!, so (2n+1)! is not O(n!).
O((2n+1)!) contains a function, (2n+1)!, that is not in O(n!), so O((2n+1)!) and O(n!) are not the same.
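The proof is constructive, so it is easy to exercise numerically. Below is a small Python sketch (the helper name witness is made up for the illustration): for any candidate constant c, choosing n = c + 1 already gives (2n + 1)! > c * n!.

    from math import factorial

    def witness(c):
        # For n > c we have (2n+1)! > (2n+1) * n! > c * n!.
        n = int(c) + 1
        assert factorial(2 * n + 1) > c * factorial(n)
        return n

    for c in (10, 1000, 10**6):
        print(c, witness(c))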
Here is the definition: https://en.wikipedia.org/wiki/Big_O_notation . So we need to check whether there exist constants c and n0 such that:
(2n+1)! <= c * n! for all n > n0
Intuitively, from observing how (2n+1)! and n! behave:
http://www.wolframalpha.com/input/?i=n!+%3E%282n+%2B1%29!
(2n+1)! grows far faster (its argument advances twice as fast), so regardless of "c" it will always overtake c * n!. So you can't simplify it to O(n!).
(2n+1)! = n! * (n+1)(n+2)...(2n+1)
The extra factor (n+1)(n+2)...(2n+1) grows without bound, so
1 = o((n+1)(n+2)...(2n+1))
and therefore
n! = o((2n+1)!),
so (2n+1)! is not O(n!).
How can I prove that
T(n) = T(n-1) + log n = Θ(n log(n))
My thinking:
We can get T(n) = ∑ log(n). To prove the result, we have to prove both ∑ log(n) = Ω(N log(N)) and ∑ log(n) = O(N log(N)). The second is easy; I want to know how to prove ∑ log(n) = Ω(N log(N)).
Assume N > 10. (You only need to prove the bound for "large enough" N.)
Suppose we have sum_n log(n) but we ignore the terms where n < N/10.
We have 9/10 * N terms left, and each term is at least log(N/10). Then:
sum_n log(n) >= (9/10 * N) * log(N/10)
= (9/10 * N) * (log(N) - log(10))
= (constant) * N * log(N) - (constant) * N
which is clearly Omega(N log N).
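For what it's worth, a small Python check (a sketch, using natural logs and the answer's threshold N/10) shows the derived lower bound sitting below the actual sum, which in turn tracks N log N:

    from math import log

    def log_sum(N):
        # sum of log(n) for n = 1 .. N
        return sum(log(n) for n in range(1, N + 1))

    for N in (100, 1000, 10000):
        lower = (9 / 10) * N * log(N / 10)   # the bound derived above
        print(N, round(lower), round(log_sum(N)), round(N * log(N)))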
Is (n+1)! in O(n!)? What about (n-1)!?
Also, if you could show me a proof, that would help me understand better.
I'm stuck on this one.
To show that (n+1)! is in O(n!) you have to show that there is a constant c so that for all big enough n (n > n0) the inequality
(n+1)! < c n!
holds. However since (n+1)! = (n+1) n! this simplifies to
n+1 < c
which clearly does not hold since c is a constant and n can be arbitrarily large.
On the other hand, (n-1)! is in O(n!). The proof is left as an exercise.
(n+1)! = n! * (n+1)
O((n+1) * n!) = O(n * n! + n!) = O(2 * n * n!) = O(n * n!) > O(n!)
(n-1)! = n! / n
O((n-1)!) = O(n! / n) < O(n!)
I wasn't formally introduced to algorithmic complexity, so take what I write with a grain of salt.
That said, we know n^3 is way worse than n, right?
Well, since (n + 1)! = (n - 1)! * n * (n + 1)
Comparing (n + 1)! to (n - 1)! is like comparing n to n^3
Sorry, I don't have a proof, but expanding the factorial as above should lead to it.
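A simple ratio check in Python (a sketch, not a proof) shows all three behaviours at once: (n + 1)!/n! = n + 1 grows without bound, while n!/(n - 1)! = n, i.e. (n - 1)! = n!/n, which is at most n! for n >= 1.

    from math import factorial

    # Exact integer ratios against n!.
    for n in (5, 50, 500):
        f = factorial(n)
        print(n,
              factorial(n + 1) // f,   # = n + 1, unbounded, so (n+1)! is not O(n!)
              f // factorial(n - 1))   # = n, so (n-1)! = n!/n <= n!, hence O(n!)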
As homework, I was asked to write an algorithm in O(log(n)) and I could calculate the complexity of the one I wrote as O(log(n) + log(n/2) + log(n/4) + log(n/8) + ... + log(2)).
I think it's rather O(n) since it's roughly log(n)*O(log(n)) = O(n). But I was not sure about it. I know homework questions are not welcome here, but I really don't know another way to find out what complexity it is. Googling didn't help me.
To specify: n in N and n = pow(2, c), c in N
No, it's not O(log n). It is O((log n)^2).
O(log(n) + log(n/2) + log(n/4) + log(n/8) + ... + log(2))
= O(log(n * n/2 * n/4 * n/8 * ... * 2))  using the log multiplication rule
= O(log(n * n * n * ... * n / 2 / 4 / 8 / ... / n))
= O(log(n^(log n) / 2 / 4 / 8 / ... / n))
= O(log(n^(log n) / (1 * 2 * 4 * 8 * ... * n/4 * n/2 * n)))
= O(log(n^(log n) / ((1 * n) * (2 * n/2) * (4 * n/4) * ...)))  group first and last terms
= O(log(n^(log n) / (n * n * n * ...)))  since we grouped terms,
= O(log(n^(log n) / n^((log n)/2)))  we halved the number of terms
= O(log(n^(log n)) - log(n^((log n)/2)))  log division rule
= O(log n * log n - ((log n)/2) * log n)  log power rule, twice
= O(log n * log n - (log n * log n)/2)
= O((log n * log n)/2)
= O((log n)^2 / 2)
= O((log n)^2)  big-O doesn't care about constants
Start with some basic arithmetic:
O(log n/2) = O( (log n) + log(1/2))
The constant can be ignored. Hence:
O(log n / 2) = O(log n)
So, you are adding a bunch of terms that are each O(log n). How many? About log(n) of them. So, that means that the algorithm is:
O( (log n)^2)
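An empirical check agrees with both answers. The Python sketch below (n restricted to powers of two, as in the question, and base-2 logs) shows the sum tracking (log n)^2 / 2 rather than log n or n:

    from math import log2

    def s(n):
        # log(n) + log(n/2) + log(n/4) + ... + log(2), base-2 logs
        total = 0.0
        while n >= 2:
            total += log2(n)
            n //= 2
        return total

    for c in (4, 10, 20):
        n = 2 ** c
        print(n, s(n), log2(n) ** 2 / 2, log2(n))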
I can't understand the basic math behind algorithms. For example, here's a question:
If
f(n) = O(g(n))
is
f(n) * log(f(n)^c) = O(g(n) * log(g(n)))
?
How do I go about answering this question? From what I understand so far, f(n) = O(g(n)) only when f(n) <= c * g(n) and c and n are non-negative. So I need to start plugging values into the above based on that, but how do I do that? Say I chose c = 5 and n = 2; would I plug in the values like so: f(2) * log(f(2)^5) = 5 * (g(2) * log(g(2)))? Would that mean that the answer to the original question is false?
By f(n) = O(g(n)), you mean:
there exists a k such that f(n) <= k * g(n) for all n >= N_1. ----- (1)
This implies log(f(n)^c) <= log(k^c) + log(g(n)^c) <= K * log(g(n)^c) for all n >= N_2, where K = max{log(k^c), 2}. ----- (2)
Multiplying (1) and (2) gives the required answer:
f(n) * log(f(n)^c) = O(g(n) * log(g(n))).
You can do it like that:
f(n) * log(f(n)^c) = c * f(n) * log(f(n)) = O(1) * O(g(n)) * log(O(g(n))) = O(1) * O(g(n)) * O(log(g(n))) = O(g(n) * log(g(n)).
So the answer to the question is true. All what you need here is properties of logarithm function.
There is one step that can be not clear here: why is log(O(g(n))) = O(log(g(n)))?
Proof: if f(n) = O(g(n)), there is a constant C such that for sufficiently large n
f(n) <= C * g(n). Thus, log(f(n)) <= log(C * g(n)) = log(g(n)) + log(C). Assuming g(n) grows without bound, eventually log(g(n)) >= log(C), so we can take C2 = 2 and obtain log(f(n)) <= C2 * log(g(n)) for sufficiently large n.
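To see that step numerically, here is a Python sketch with an arbitrary illustrative pair (f(n) = 5n and g(n) = n, so C = 5; these choices are mine, not the question's): once log(g(n)) >= log(C), we get log(f(n)) <= 2 * log(g(n)).

    from math import log

    C = 5
    def f(n): return C * n   # illustrative choice with f(n) <= C * g(n)
    def g(n): return n

    for n in (10, 100, 10**6):
        print(n, log(f(n)) <= 2 * log(g(n)))   # True once log(g(n)) >= log(C)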
You're given f(n) = O(g(n)) which means that beyond some point f(n) is at most a fixed positive multiple of g(n). You don't know where that point is, and you don't know the positive multiple. Still you know that
(⋆) f(n) ≤ k * g(n) for n > n₀ for some k > 0
even though k and n₀ are unknown. (I used k to avoid a name collision with c in the second part of the problem.)
Now you're asked to show that f(n) * log(f(n)^c) = O(g(n) * log(g(n))).
Logarithms turn multiplication into addition: log(x * y) = log(x) + log(y). Consequently, they turn repeated multiplication (which is exponentiation) into repeated addition (which is multiplication): log(x^y) = y * log(x).
First notice that log(f(n)^c) = c log(f(n)) because log(x^y) = y * log(x). So you may rewrite the problem: show c * f(n) * log(f(n)) = O(g(n) * log(g(n))).
What's more you can abandon the c on the left hand side: if something is at most a constant times something else (big-O), then any multiple of it is at most some other constant multiplied by the something else. Again you can rewrite the problem: show f(n) * log(f(n)) = O(g(n) * log(g(n))).
Now take the logarithm of (⋆):
(⋆) f(n) ≤ k * g(n) for n > n₀ for some k > 0
(⋆⋆) log(f(n)) ≤ log(k) + log(g(n)) for n > n₀ for some k > 0
The second only follows because log is an increasing function, so if numbers increase their logarithms increase --- taking the logarithm preserves order.
Now you need to multiply these two together to get something like the answer. Again this multiplication preserves order for non-negative quantities: if 0 ≤ a ≤ A and 0 ≤ b ≤ B then ab ≤ AB.
(⋆⋆⋆) f(n) * log(f(n)) ≤ k * g(n) * log(g(n)) + k * log(k) * g(n)
--- for n > n₀ for some k > 0
Now the left hand side is as in the problem, so you need to show the right hand side is at most some multiple of g(n) * log(g(n)) to finish up.
The first term on the right is exactly the sort of thing you need, but the second term is not a multiple of g(n) * log(g(n)), yet it increases the right hand side if log(k)>0, so you can't just throw it away: doing that would decrease the right hand side, and the left hand side might no longer be at most the right hand side.
(Of course if log(k)≤0 you can just throw the second term away which increases the right hand side and you're done: there's a multiple of g(n) * log(g(n)) there which is what you want.)
Instead what you need to do is increase k * log(k) * g(n) into another multiple of g(n) * log(g(n)). Increasing it means the left hand side is still at most the right hand side, and this makes the whole of the right hand side be a multiple of g(n) * log(g(n)) and the proof will be complete.
To do this, you need to multiply k * log(k) * g(n) by log(g(n)). Provided log(g(n)) ≥ 1 (again beyond some point), which seems reasonable in the theory of algorithms, this will make k * log(k) * g(n) bigger. (I could fuss here, but I won't!) So you may say that
(⋆⋆⋆) f(n) * log(f(n)) ≤ k * g(n) * log(g(n)) + k * log(k) * g(n)
--- for n > n₀ for some k > 0
But log(g(n)) ≥ 1 for n > n₁ and so multiplying the second term by log(g(n)) does not decrease it, so
f(n) * log(f(n)) ≤ k * g(n) * log(g(n)) + k * log(k) * g(n) * log(g(n))
--- for n > max(n₀,n₁) for some k > 0
and simplifying the right hand side
f(n) * log(f(n)) ≤ k(1+log(k)) * g(n) * log(g(n))
--- for n > max(n₀,n₁) for some k > 0
just gives a multiple of g(n) * log(g(n)) because k is a constant, i.e. take k(1+log(k)) to be the positive constant you need. (Earlier you eliminated the case where log(k) ≤ 0.) So
f(n) * log(f(n)) = O(g(n) * log(g(n)))
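To make the constant concrete, here is a Python sketch with an illustrative pair (f(n) = 3n, g(n) = n, so k = 3; the choice is mine, not part of the proof): the derived constant k(1 + log(k)) really does bound f(n) * log(f(n)) against g(n) * log(g(n)) once log(g(n)) >= 1.

    from math import log

    k = 3
    def f(n): return k * n   # illustrative choice with f(n) <= k * g(n)
    def g(n): return n

    bound = k * (1 + log(k))            # the constant k(1 + log(k)) from the proof
    for n in (3, 10, 1000, 10**6):
        lhs = f(n) * log(f(n))
        rhs = bound * g(n) * log(g(n))
        print(n, lhs <= rhs)            # True once log(g(n)) >= 1, i.e. n >= e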
If an algorithm requires
C(n+r-1, r-1) steps
to solve a problem, where n is the size of the input
and r is a constant,
does the number of steps grow exponentially?
Assuming that C is the binomial coefficient function: C(n + r - 1, r - 1) = (n + r - 1)! / ((r - 1)! * n!). Since r is a constant, we can disregard (r - 1)! when using the big-O notation, so we get O((n + r - 1)! / n!). I assume that this might be homework, so try to take it further from here by yourself. It is possible to reduce (n + r - 1)! / n! to a quite simple expression since it is inside of an O(), and you'll then easily see whether or not it is exponential. (Hint: how many factors are there in (n + r - 1)! / n!?)
No. The complexity would be O(n^(r-1)), which is polynomial growth instead of (and better than) exponential growth.
Let g(n) = C(n+r-1, r-1)
= (n+r-1)! / ((r-1)! * n!)
= (n+1)(n+2)...(n+r-2)(n+r-1) / (r-1)!
= (n^(r-1) + k*n^(r-2) + k'*n^(r-3) + ... + k''*n + k''') / (r-1)!
It is easy to see that k, k', ..., k'', k''' and (r-1)! are all constants,
so g(n) = O(n^(r-1)).
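A quick Python check (a sketch with r fixed at 4, an arbitrary example value) shows the ratio C(n + r - 1, r - 1) / n^(r - 1) settling toward the constant 1/(r - 1)!:

    from math import comb

    r = 4                                # any fixed constant; 4 is just an example
    for n in (10, 100, 1000, 10**5):
        steps = comb(n + r - 1, r - 1)
        print(n, steps / n ** (r - 1))   # approaches 1/(r-1)! = 1/6 ~ 0.1667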