Sorry for the simple question but I'm going through some algorithms homework and for true/false problems like...
n^n = O(2^n)
Is it always possible to just graph the two and see which one is bigger? In this case n^n is bigger, so I think the answer is false. Thanks in advance!
Look at the graph of 1000000n and n^2 for n up to some really large number, say n = 10,000. What looks bigger on the graph?
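To see the trap concretely, here is a small sketch (Python; the sample values are arbitrary) that prints both functions at the plotted range and well beyond it. The linear function dominates the picture at n = 10,000 but loses for good once n passes 1,000,000:

    # Compare 1000000*n and n**2 at the plotted range and far beyond it.
    # The crossover happens at n = 1,000,000, well past n = 10,000.
    for n in (10_000, 1_000_000, 100_000_000):
        print(f"n={n:>11,}  1000000n={1_000_000 * n:.3e}  n^2={n ** 2:.3e}")

The picture at n = 10,000 points one way; the asymptotics point the other. That is why graphing alone is not a proof.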
Please advise if this question should be moved to the maths forum.
I'm quite confused about how we simplify complexity theory equations.
For example, suppose we have this small Fibonacci algorithm:
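The code isn't reproduced here, but presumably it is the naive recursive version; a minimal sketch, assuming that is what the book shows:

    def fib(n):
        # Naive recursion: the same subproblems are recomputed over and
        # over, which is what drives the exponential running time T(n).
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)
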
And we're given the following information:
What I struggle to understand is how the formula T(n) is expanded and simplified, especially this:
What am I really missing here?
Thanks
Edit
This was taken from this book, on page 775.
Let me rephrase the claim:
There exist constants a and b such that T(n) < a F_n - b, where F_n is the n-th Fibonacci number.
Now start the proof with
Choose b large enough to dominate the constant term.
Now the last inequality should be clear.
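For completeness, here is the induction step as I read it, assuming the recurrence is T(n) = T(n-1) + T(n-2) + c for some constant c:

    \begin{align*}
    T(n) &= T(n-1) + T(n-2) + c \\
         &< \bigl(a F_{n-1} - b\bigr) + \bigl(a F_{n-2} - b\bigr) + c
            && \text{(induction hypothesis)} \\
         &= a F_n - 2b + c \\
         &= a F_n - b - (b - c) \\
         &\le a F_n - b && \text{whenever } b \ge c.
    \end{align*}

So choosing b >= c (and a large enough for the base cases) makes the claim go through.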
I've made a backtracking algorithm.
I've been asked to state the complexity of this algorithm.
I know that the recurrence is T(n) = 2T(n-1) + 3*n_hat, where n_hat is the initial n, meaning it doesn't decrease at each step.
The thing is, I'm getting quite lost calculating it. I believe it's around 2^n times something, but my calculations are a bit muddled. Can you help me, please? Thanks!
Let's expand this formula repeatedly by substituting into itself:
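Writing n_hat for the initial n and assuming T(0) is a constant:

    \begin{align*}
    T(n) &= 2T(n-1) + 3\hat{n} \\
         &= 4T(n-2) + 3\hat{n}(1 + 2) \\
         &= 8T(n-3) + 3\hat{n}(1 + 2 + 4) \\
         &\;\;\vdots \\
         &= 2^k \, T(n-k) + 3\hat{n}\,(2^k - 1).
    \end{align*}

Taking k = n gives T(n) = 2^n T(0) + 3 n_hat (2^n - 1). Since n_hat is just the original n, this is Theta(n * 2^n), which matches the "2^n times something" intuition.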
I have a question regarding my data structures and algorithms lecture.
I have trouble understanding how an algorithm grows. I don't understand the difference between the O notations, for example between O(lg n) and O(n lg n).
I hope someone can help me. Thank you.
To compare time complexities you should be able to make some mathematical proofs. In your example:
For every n > 1, multiplying both sides of n > 1 by log n gives n log n > log n, so n log n is worse than log n. An easy way to build intuition is to compare the graphs of the functions, as suggested in the comments, or to try some big inputs and look at the asymptotic behavior. For example, for n = 1000000:
log(1000000) = 6 (base 10), while 1000000 * log(1000000) = 6000000, which is far greater.
Also notice that you don't count constants in big O notation: for example, 4n is O(n), n is O(n), and cn + w is O(n) for constants c and w.
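A quick sketch (Python, base-10 logarithm to match the numbers above; the base only changes a constant factor) that prints both functions for a few inputs:

    import math

    # log n grows very slowly; n*log n is bigger by a factor of n.
    for n in (100, 10_000, 1_000_000):
        lg = math.log10(n)
        print(f"n={n:>9,}  log n = {lg:g}  n*log n = {n * lg:g}")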
I am working on an exercise (note: this is not a homework question) where the number of steps a computer can execute is given, and one is asked to compute N for certain time intervals and for several functions.
I have no problem doing this for functions such as f(n) = n, n^2, n^3 and the like.
But when it comes to f(n) = lg n, sqrt(n), n log n, 2^n, and n!, I run into problems.
It is clear to me that I have to set up an equation of the form func(n) = interval and then solve for n.
But how to do this with the functions above?
Can somebody please give me an example, or name the inverse functions, so that I can look them up on Wikipedia or somewhere else?
Your question isn't so much about algorithms, or complexity, but about inversions of math formulas.
It's easy to solve n^k = N for n in closed form. Unfortunately, for most other functions the inverse is either unknown or known not to have a closed form. In particular, for n log(n) the solution involves the Lambert W function, which doesn't help you much in practice.
In most cases, you will have to solve this kind of stuff numerically.
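For a monotonically increasing f, a plain binary search on n does the job; a minimal sketch (Python; the function name and the step budget are made up for illustration):

    import math

    def largest_n(f, budget):
        """Largest integer n with f(n) <= budget, assuming f is increasing."""
        # Grow an upper bound exponentially, then binary-search below it.
        hi = 1
        while f(hi) <= budget:
            hi *= 2
        lo = 0
        while lo < hi - 1:
            mid = (lo + hi) // 2
            if f(mid) <= budget:
                lo = mid
            else:
                hi = mid
        return lo

    # Example: how large may n be if the machine can afford 1e9 steps?
    print(largest_n(lambda n: n * math.log2(n) if n > 1 else 1, 1e9))  # n log n
    print(largest_n(lambda n: 2 ** n, 1e9))                            # 2^n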
I am asked to find the simplest exact answer and the best big-O expression for the expression:
sum of n for n = j to k, i.e. j + (j+1) + ... + k.
I have computed what I think the simplest exact answer is as:
-1/2(j-k-1)(j+k)
Now when I go to take the best possible big-O expression I am stuck.
From my understanding, big-O describes the running time of an algorithm in the worst case by keeping only the term that dominates the rest for large inputs. So, for example, I know:
n^2+n+1 = O(n^2)
Because in the long run, n^2 is the only term that matters for big n.
My confusion with the original formula in question:
-1/2(j-k-1)(j+k)
is which term is the strongest. To work it out, I tried expanding it to get:
-1/2(j^2 + jk - jk - k^2 - j - k) = -1/2(j^2 - k^2 - j - k)
This still isn't clear to me, since we now have j^2 - k^2. Is the answer I'm looking for O(k^2), since k is the endpoint of my summation?
Any help is appreciated, thanks.
EDIT: It is unspecified as to which variable (j or k) is larger.
If you know k > j, then you have O(k^2). Intuitively, that's because as numbers get bigger, squares get farther apart.
It's a little unclear from your question which variable is the larger of the two, but I've assumed that it's k.
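A quick way to see the bound, assuming 1 <= j <= k, so that both factors in the closed form are at most on the order of k:

    \[
    \sum_{n=j}^{k} n \;=\; \frac{(k - j + 1)(j + k)}{2}
    \;\le\; \frac{k \cdot 2k}{2} \;=\; k^2,
    \]

so the sum is O(k^2). Taking j = 1 shows the bound is tight, so over all choices of j it is in fact Theta(k^2).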