I have this as a homework question and don't remember learning it in class. Can someone point me in the right direction or have documentation on how to solve these types of problems?
More formally...
First, we prove that if f(n) = 5n, then f ∈ O(n). To show this, we must show that for some constant c, some sufficiently large k, and all i ≥ k, f(i) ≤ ci. Fortunately, c = 5 makes this trivial.
Next, I'll prove that every f ∈ O(n) is also in O(n * log n). That is, we must show that for some constant c, some sufficiently large k, and all i ≥ k, f(i) ≤ ci * log i. If we let k be large enough that f(i) ≤ ci, and also require k ≥ 2, the result is immediate, since ci ≤ ci * log i whenever log i ≥ 1.
QED.
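Spelling out the witnesses explicitly (taking log base 2 here, which is my own choice; the question does not fix a base):
5n ≤ 5 * n            for all n ≥ 1, so c = 5 and k = 1 witness f ∈ O(n)
5n ≤ 5 * n * log n    for all n ≥ 2, since log n ≥ 1 there, so c = 5 and k = 2 witness f ∈ O(n log n)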
Look into the definition of big-O notation. Saying 5n = O(n log n) means that 5n grows no faster than n log n (up to a constant factor), which is true: n log n is an upper bound on the number of operations to be performed.
You can prove it by applying L'Hôpital's rule to lim n→∞ of 5n / (n log n):
g(n) = 5n and f(n) = n log n
Differentiate g(n) and f(n), and the ratio becomes
5 / (ln n + 1)    (using the natural log for the derivative of the denominator)
5 / infinity = 0, so 5n = O(n log n) is true.
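In fact the limit can be evaluated directly, without L'Hôpital, since the n cancels:
lim n→∞ 5n / (n log n) = lim n→∞ 5 / log n = 0
A finite limit here is exactly what is needed to conclude 5n = O(n log n).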
I don't remember the wording of the formal definition, but what you have to show is:
c1 * 5 * n < c2 * n * log n, for n > c3
where c1 and c2 are arbitrary constants, for some number c3. Define c3 in terms of c1 and c2, and you're done.
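For example (one possible choice, assuming c1, c2 > 0 and base-2 logs): take c3 = 2^(5 * c1 / c2). Then for n > c3 we have log n > 5 * c1 / c2, and multiplying both sides by c2 * n gives c2 * n * log n > 5 * c1 * n, which is exactly the required inequality.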
It's been three years since I touched big-O stuff. But I think you can try to show this:
O(5n) = O(n) ⊆ O(nlogn)
O(5n) = O(n) is very easy to show, so all you have to do now is to show O(n) ⊆ O(nlogn), which shouldn't be too hard either.
When using the substitution method to find the time complexity of recursive functions, why do we have to prove the exact form, and why can't we use the asymptotic notations as they are defined? An example from the book "Introduction to Algorithms" at page 86:
T(n) ≤ 2(c⌊n/2⌋) + n
≤ cn + n
= O(n) wrong!!
Why is this wrong?
From the definition of Big-O: O(g(n)) = {f(n): there exist positive constants c2 and n0 such that 0 ≤ f(n) ≤ c2g(n) for all n > n0}, the solution seems right.
Let's say f(n) = cn + n = n(c + 1), and g(n) = n. Then n(c + 1) ≤ c2n if c2 ≥ c + 1. => f(n) = O(n).
Is it wrong because f(n) actually is T(n) and not cn + n, and instead g(n) = cn + n??
I would really appreciate a helpful answer.
On page 2 of this paper the problem is described. The reason this solution is wrong is that the extra n in cn + n adds up at every level of the recursion. As the paper says, "over many recursive calls, those 'plus ones' add up".
Edit (now that I have more time to answer properly):
What I tried to do in the question was to solve the problem by induction: I show that the bound still holds at the next level of the recursion, which means it also holds for the level after that, and so on. However, this only works if the expression I derive is at most my guess, in this case cn. Here cn + n is larger than cn, so my proof fails: if I let the recursion run one more level starting from cn + n, I get c(cn + n) + n = c^2 * n + cn + n. This keeps growing at every recursive level and does not grow as O(n), so the argument fails. If, however, my derivation had come out to cn or lower (imagine it had), then the next level would also be cn or lower, as would the following one, and so on. That would give O(n). In short, the derived expression needs to be less than or equal to the guess, in its exact form.
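To make the exact-form requirement concrete (assuming the recurrence is the book's T(n) = 2T(⌊n/2⌋) + n): to prove T(n) = O(n) by substitution, the inductive step has to end in exactly the guessed form cn, i.e.
T(n) ≤ 2(c⌊n/2⌋) + n ≤ cn + n ≤ cn,
and the last inequality would require n ≤ 0, so it fails for every constant c. That is the formal reason no choice of c closes the induction; the recurrence actually solves to Θ(n log n).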
So I am currently taking an algorithms class and have been asked to prove
Prove: ((n^2 / log n) + 10^5 n * sqrt(n)) / n^2 = O(n^2 / log n)
I have come up with n0 = 1 and c = 5; when solving it I end up with 1 <= 5. I just wanted to see if I could get someone to verify this for me.
I'm not sure if this is the right forum to post in, if it's wrong I apologize and if you could point me in the right direction to go to that would be wonderful.
If I am not wrong, you have to prove that the upper bound of the given function is n^2 / log n.
This is the case if, for very large values of n,
n^2/logn >= n * sqrt(n)
n >= sqrt(n) * log(n)
Since log(n) < sqrt(n) for large n, we get sqrt(n) * log(n) < sqrt(n) * sqrt(n) = n, so the inequality holds. Hence, the upper bound is O(n^2 / log n).
You can also use the limit method:
For positive functions f and g, if the limit as x → ∞ of f(x)/g(x) exists and is finite, then f(x) = O(g(x)).
Thus, for your case: substitute your functions for f and g, simplify the expression, and you will see that the limit trivially comes out to be 0, which is certainly finite.
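A minimal worked version, taking the expression exactly as written in the question:
f(n) = (n^2 / log n + 10^5 * n * sqrt(n)) / n^2 = 1/log n + 10^5 / sqrt(n)
f(n) / (n^2 / log n) = (1/log n + 10^5 / sqrt(n)) * log n / n^2 = 1/n^2 + 10^5 * log n / n^(5/2)
Both terms go to 0 as n → ∞, so the limit is 0 and f(n) = O(n^2 / log n) holds.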
I'm taking Data Structures and Algorithm course and I'm stuck at this recursive equation:
T(n) = logn*T(logn) + n
obviously this can't be handled with the use of the Master Theorem, so I was wondering if anybody has any ideas for solving this recursive equation. I'm pretty sure that it should be solved with a change in the parameters, like considering n to be 2^m , but I couldn't manage to find any good fix.
The answer is Theta(n). To prove something is Theta(n), you have to show it is Omega(n) and O(n). Omega(n) in this case is obvious because T(n)>=n. To show that T(n)=O(n), first
Pick a large finite value N such that log(n)^2 < n/100 for all n>N. This is possible because log(n)^2=o(n).
Pick a constant C>100 such that T(n)<Cn for all n<=N. This is possible due to the fact that N is finite.
We will show inductively that T(n) < Cn for all n > N. Since log(n) < n, the bound T(log(n)) < C log(n) holds, either by the induction hypothesis or (when log(n) ≤ N) by the choice of C. So we have:
T(n) < n + C * log(n) * log(n)
= n + C log(n)^2
< n + (C/100) n
= C * (1/100 + 1/C) * n
< C/50 * n
< C*n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
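A quick numeric sanity check of the T(n) = n + o(n) claim, in R. The base case T(n) = 1 for n <= 2 is my own assumption, since the question does not give one; the argument above does not depend on it.
# Evaluate T(n) = log(n) * T(log(n)) + n numerically (natural log),
# with the assumed base case T(n) = 1 for n <= 2.
T_rec = function(n) {
  if (n <= 2) return(1)
  log(n) * T_rec(log(n)) + n
}
for (n in c(1e2, 1e4, 1e6, 1e8)) {
  cat(sprintf("n = %.0e   T(n)/n = %.6f\n", n, T_rec(n) / n))
}
The printed ratio T(n)/n drops toward 1 as n grows, consistent with T(n) = n + o(n) and hence Theta(n).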
This is by no means an official proof but I think it goes like this.
The key is the + n part. Because of this, T is bounded below by n, so T(n) = Ω(n). So let's guess that T(n) = O(n) and have a go at that.
Substitute into the original relation
T(n) = (log n)O(log n) + n
= O(log^2(n)) + O(n)
= O(n)
So it still holds.
In this example: http://www.wolframalpha.com/input/?i=2%5E%281000000000%2F2%29+%3C+%283%2F2%29%5E1000000000
I noticed that those two quantities are pretty similar no matter how high you go in n. Do all algorithms with a constant raised to the power n fall in the same time complexity category, such as 2^n, 3^n, 4^n, etc.?
They are in the same category in the sense that both are exponential running time algorithms, but this does not mean their complexity is the same. Obviously 2^n < 4^n.
We can see that 4^n / 2^n = 2^(2n) / 2^n = 2^n.
This means the 4^n algorithm is exponentially slower (by a factor of 2^n) than the 2^n algorithm.
The same thing happens with 3^n, which is slower than 2^n by a factor of 3^n / 2^n = 1.5^n.
But this does not mean 2^n is practical just because it is far less than 4^n; it is still exponential and will not be feasible when n > 50.
Note that this happens because n is in the exponent, not the base. If the constants appeared as multiplicative factors instead, like 4 * n^k vs n^k (for a fixed k), then the two algorithms would be asymptotically the same: they would differ only by a constant factor, just like O(n) vs c * O(n).
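A quick illustration in R of how the ratios themselves blow up (the variable names here are mine):
# The ratios 4^n / 2^n and 3^n / 2^n grow exponentially themselves,
# so the three running times are not within a constant factor of each other.
n = c(10, 20, 30, 40)
print(data.frame(n = n,
                 ratio_4_to_2 = 4^n / 2^n,   # equals 2^n
                 ratio_3_to_2 = 3^n / 2^n))  # equals 1.5^n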
The time complexities O(a^n) and O(b^n) are not the same if 1 < a < b. As a quick proof, we can use the formal definition of big-O notation to show that b^n ≠ O(a^n).
This works by contradiction. Suppose that b^n = O(a^n) and that 1 < a < b. Then there must be some c and n0 such that for any n ≥ n0, we have b^n ≤ c · a^n. This means that b^n / a^n ≤ c for any n ≥ n0. Since b > a, it should start to become clear that this is impossible: as n grows larger, b^n / a^n = (b / a)^n will get larger and larger. In particular, if we pick any n ≥ n0 such that n > log_(b/a) c, then we will have that
(b / a)^n > (b / a)^(log_(b/a) c) = c
So, if we pick n = max{n0, ⌈log_(b/a) c⌉ + 1}, then it's not true that b^n ≤ c · a^n, contradicting our assumption that b^n = O(a^n).
This means, in particular, that O(2^n) ≠ O(1.5^n) and that O(3^n) ≠ O(2^n). This is why, when using big-O notation, it's still necessary to specify the base of any exponents that end up getting used.
One more thing to notice: although 2^(1000000000/2) ≈ 1.41^1000000000 might look comparable to 1.5^1000000000, these are totally different numbers. The first is of the form 10^(10^8.1)-ish and the second of the form 10^(10^8.2)-ish. That might not seem like a big difference, but it's absolutely colossal. Take, for example, 10^(10^1) and 10^(10^2). The first number is 10^10, which is 10 billion (a 1 followed by ten zeros). The second is 10^100, one googol (a 1 followed by a hundred zeros). There's a huge difference between them: the first is close to the world population, while the second is far more than the estimated number of atoms in the observable universe!
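Since both numbers are far too large to compute directly, here is a quick R check of that size gap using base-10 logarithms (with n = 10^9, as in the Wolfram Alpha query above):
n = 1e9
log10_a = (n / 2) * log10(2)   # log10 of 2^(n/2)
log10_b = n * log10(1.5)       # log10 of (3/2)^n
cat("2^(n/2)  is about 10^", log10_a, "\n")
cat("(3/2)^n  is about 10^", log10_b, "\n")
cat("their ratio is about 10^", log10_b - log10_a, "\n")
The second number comes out larger by a factor of roughly 10^25,000,000, which is the colossal gap described above.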
Hope this helps!
I learned that, using Big O notation,
O(f(n)) + O(g(n)) -> O(max(f(n), g(n)))
O(f(n)) * O(g(n)) -> O(f(n) * g(n))
but now, I have this equation for running time T for input size N
T(N) = O(N^2) // O of N square
I need to find the ratio T(2N) / T(N)
I tried this
T(2N) / T(N) --> O((2N)^2) /O( N^2) --> 4
Is this correct? Or is the above division invalid?
I would also say this is incorrect. My intuition was that if T(N) is O(N^2), then T(2N)/T(N) is O(1), consistent with your suggestion that T(2N)/T(N) = 4. I think, however, that this intuition is wrong.
Consider a counter-example.
Let T(N) be 1 if N is odd, and N^2 if N is even, as below.
This is clearly O(N^2), because we can choose a constant p=1, such that T(N) ≤ pN^2 for sufficiently large N.
What is T(2N)? This is (2N)^2 = 4N^2, as below, because 2N is always even.
Hence T(2N)/T(N) is 4N^2/1 = 4N^2 when N is odd, and 4N^2/N^2=4 when N is even, as below.
Clearly T(2N)/T(N) is not 4. It is, however, O(N^2), because we can choose a constant q=4 such that T(2N)/T(N) ≤ qN^2 for sufficiently large N.
R code for the plots:
# T(N): 1 when N is odd, N^2 when N is even
x = 1:50
t1 = ifelse(x %% 2, 1, x^2)
plot(t1 ~ x, type = "l")

# T(2N): 2N is always even, so this is (2N)^2 = 4N^2
x2 = 2 * x
t2 = ifelse(x2 %% 2, 1, x2^2)
plot(t2 ~ x, type = "l")

# The ratio T(2N)/T(N): 4N^2 for odd N, 4 for even N
ratio = t2 / t1
plot(ratio ~ x, type = "l")
This problem is an interesting one and strikes me as belonging in the realm of pure mathematics, i.e. limits of sequences and the like. I am not trained in pure mathematics and I would be nervous of claiming that T(2N)/T(N) is always O(N^2), as it might be possible to invent some rather tortuous counter-examples.
Even if T(N) = Θ(N²) (big-theta) this doesn't work. (I'm not even going to talk about big-O.)
c1 * N² <= T(N) <= c2 * N²
c1 * 4 * N² <= T(2N) <= c2 * 4 * N²
So the most you can conclude is something of the form
T(N) = c_a * N² + f(N)
T(2N) = c_b * 4 * N² + g(N)
Here c_a and c_b are somewhere between c1 and c2, and f(N) and g(N) are small-o of N². (Thanks G. Bach!) There is nothing to guarantee that the quotient will be equal to 4, since c_a, c_b, f(N) and g(N) can be all sorts of things. For example, take c_a = 1, c_b = 2 and f(N) = g(N) = 0. Divide them and you get
T(2N)/T(N) = (2 * 4 * N²)/N² = 8
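As a concrete (made-up) illustration of the same point, here is one Θ(N²) function for which the ratio is not identically 4:
T(N) = (3 + (-1)^N) * N²            (so c1 = 2 and c2 = 4 work in the Θ bound)
T(2N) = (3 + (-1)^(2N)) * (2N)² = 4 * 4N² = 16N²
T(2N)/T(N) = 8 when N is odd, and 4 when N is even.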
Good question!
This is incorrect.
Running time is not the same as time complexity (here, Big O). The time complexity only says that the running time can't be worse than some constant times N^2. The actual running time can be quite different: maybe very low, maybe close to the asymptotic limit. That depends purely on the hidden constants. If someone asked you this question, it's a trick question.
Hidden constants refer to the actual number of primitive instructions carried out. So in this case the total number of operations could be:
5*N^2
or
1000*N^2.
or maybe
100*N^2+90N
or maybe just
100*N (Recall this is also O(N^2))
The factor depends on the implementation and the actual instructions carried out; it is determined by analysing the algorithm and its implementation.
So whichever is the case, we simply say the Big-O is O(N^2).
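For instance, checking the last case in the list above against the definition directly:
100 * N ≤ 100 * N^2   for all N ≥ 1,
so 100 * N = O(N^2) with c = 100 and n0 = 1, even though it is also (more tightly) O(N).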