I am asked to find the simplest exact answer and the best big-O expression for the expression:
sum of n for n = j to k, i.e. j + (j+1) + ... + k.
I have computed what I think the simplest exact answer is as:
-1/2(j-k-1)(j+k)
Now when I go to take the best possible big-O expression I am stuck.
From my understanding, big-O is just finding the operation time of the worst case for an algorithm by taking the term that overpowers the rest. So, like, I know:
n^2+n+1 = O(n^2)
Because in the long run, n^2 is the only term that matters for big n.
My confusion with the original formula in question:
-1/2(j-k-1)(j+k)
is as to what the strongest term is. To try and make it clearer, I expand to get:
-1/2(j^2 + jk - jk - k^2 - j - k) = -1/2(j^2 - k^2 - j - k)
which still does not make it clear to me, since we now have j^2 - k^2. Is the answer I am looking for O(k^2), since k is the end point of my summation?
Any help is appreciated, thanks.
EDIT: It is unspecified as to which variable (j or k) is larger.
If you know k > j, then you have O(k^2): the closed form expands to (k^2 + k - j^2 + j)/2, and the k^2 term dominates. Intuitively, that's because as numbers get bigger, squares get farther apart.
It's a little unclear from your question which variable is the larger of the two, but I've assumed that it's k.
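To convince yourself the closed form is right, you can check it numerically; here is a minimal Python sketch (the loop ranges are just illustrative):

    # Sanity check: compare the closed form -(1/2)(j-k-1)(j+k)
    # against a brute-force sum for small j <= k.
    for j in range(0, 6):
        for k in range(j, 9):
            brute = sum(range(j, k + 1))          # j + (j+1) + ... + k
            closed = -(j - k - 1) * (j + k) // 2  # = (k - j + 1)(j + k) / 2
            assert brute == closed, (j, k, brute, closed)
    print("closed form matches the brute-force sum")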
Related
So far understanding Big-O notation and how it's calculated is ok...most of the situations are easy to understand. However, I just came across this one problem that I cannot for the life of me figure out.
Directions: select the best big-O notation for the expression.
(n^2 + lg(n))(n-1) / (n + n^2)
The answer is O(n). That's all fine and dandy, but how is that rationalized, given the n^3 term in the numerator? n^3 isn't the best answer, but I thought g(n) was supposed to be a "minimum" upper bound with f(n) <= c*g(n)?
The book has not explained any of the mathematical inner workings; everything has sort of been handled by jumping to a possible solution (taking f(n) and generating a g(n) that's slightly greater than f(n)).
Kinda stumped. Go crazy on the math, or math referencing, if you must.
Also, given a piece of code, how does one determine the time units per line? How do you determine logarithmic times based on a line of code (or multiple lines of code)? I understand that declaring and setting a variable is considered 1 unit of time, but when things get nasty, how would I approach a solution?
If you throw this expression into Wolfram Alpha, it simplifies to something that grows linearly.
If you expand (FOIL) the numerator, you get (roughly) a cubic divided by a quadratic. With Big-O, constants don't matter and the largest power wins, so you end up with roughly n^3 / n^2 = n.
The rest from here is limit arithmetic rather than induction: the ratio of the whole expression to n tends to 1 as n grows, so the expression grows linearly for large n. It is in fact Theta(n), hence both O(n) and Omega(n).
Alternatively, you could annoy mathematicians everywhere and say: since Big-O only cares about dominant terms, keep n^3 in the numerator and n^2 in the denominator, and n^3 / n^2 = n gives O(n) by simple division. Just keep in mind that the expression is not exactly n, only asymptotically proportional to it.
Mind, this is a less rigorous explanation than might be satisfactory for your class, but it gives you some math-based perspective on the runtime.
Non-rigorous answer:
Distributing the numerator product, we find that the numerator is n^3 + n lg(n) - n^2 - lg(n).
We note that the numerator grows as n^3 for large n, and the denominator grows as n^2 for large n.
We interpret that as growth as n^{3 - 2} for large n, or O(n).
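A quick numeric check in Python (a sketch, not a proof) shows the ratio of the expression to n settling toward 1, which is exactly what Theta(n) growth looks like here:

    import math

    # The expression from the question: (n^2 + lg(n))(n - 1) / (n + n^2).
    def f(n):
        return (n**2 + math.log2(n)) * (n - 1) / (n + n**2)

    for n in [10, 100, 1_000, 10_000, 100_000]:
        print(n, f(n) / n)   # the ratio approaches 1 as n grows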
I am working on an exercise (note: not a homework question) where the number of steps a computer can execute per second is given, and one is asked to compute the largest N solvable within certain time intervals for several functions.
I have no problem doing this for functions such as f(n) = n, n^2, n^3 and the like.
But when it comes to f(n) = lg n, sqrt(n), n log n, 2^n, and n!, I run into problems.
It is clear to me that I have to set up an equation of the form func(n) = interval and then solve for n.
But how to do this with the functions above?
Can somebody please give me an example, or name the inverse functions, so that I can look them up on Wikipedia or somewhere else?
Your question isn't so much about algorithms, or complexity, but about inversions of math formulas.
It's easy to solve n^k = N for n in closed form. Unfortunately, for most other functions a closed-form inverse is either not known or known not to exist. In particular, for n log(n) = N the solution involves the Lambert W function, which doesn't help you much in practice.
In most cases, you will have to solve this kind of stuff numerically.
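For instance, since all of these functions are increasing, a simple bisection works; here is a Python sketch (the step budget of 1e9 and the helper name invert are just illustrative assumptions):

    import math

    # Solve f(n) = budget numerically for an increasing function f.
    def invert(f, budget):
        lo, hi = 1.0, 2.0
        while f(hi) < budget:      # grow the bracket until it contains the answer
            hi *= 2
        for _ in range(100):       # bisect; 100 rounds gives ample precision
            mid = (lo + hi) / 2
            if f(mid) < budget:
                lo = mid
            else:
                hi = mid
        return hi

    budget = 1e9   # e.g. steps available in one second
    print(invert(lambda n: math.sqrt(n), budget))                  # ~1e18
    print(invert(lambda n: n * math.log2(n), budget))              # ~4e7
    print(invert(lambda n: 2.0**n, budget))                        # ~30
    print(invert(lambda n: math.lgamma(n + 1), math.log(budget)))  # n!: ~12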
I have seen this problem and I couldn't solve it.
The problem is finding the complexity of C(m,n) = C(m-1, n-1) + C(m, n-1) (Pascal's formula).
It's a recurrence, but with two variables, and I have no idea how to solve it.
I would be happy for your help... :)
If you consider the 2D representation of this formula, you end up summing numbers that cover the "area" of a triangle of a given "height", so the complexity is O(n^2) if you fill in the triangle cell by cell from the formula.
I don't know if what I just said makes sense at all to you, but you can also think of it per row: computing one row from the previous one has linear complexity, and there are linearly many rows, so you still get O(n^2).
This line of thought seems to match what they demonstrate here:
http://www.geeksforgeeks.org/pascal-triangle/
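Here is a short Python sketch of that row-by-row approach (each of the roughly n^2/2 entries is computed once, hence O(n^2)):

    # Build Pascal's triangle row by row: each entry comes from the row above.
    def pascal_triangle(n):
        triangle = [[1]]
        for _ in range(n - 1):
            prev = triangle[-1]
            row = [1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1]
            triangle.append(row)
        return triangle

    for row in pascal_triangle(5):
        print(row)   # [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]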
I'm at a loss as to how to calculate the best-case running time, Omega(f(n)), for this algorithm. In particular, I don't understand how there could be a "best case"; this seems like a very straightforward algorithm. Can someone give me some hints or a general approach for solving these types of questions?
for i = 0 to n do
    for j = n to 0 do
        for k = 1 to j-i do
            print (k)
Have a look at this question and its answers: Still sort of confused about Big O notation. It may help you sort out some of the confusion between worst case, best case, Big O, Big Omega, etc.
As to your specific problem, you are asked to find some (asymptotic) lower bound on the time complexity of the algorithm (you should also clarify whether you mean Big Omega or little omega).
Otherwise you are right, this problem is quite simple. You just have to think about how many times print (k) will be executed for a given n.
You can see that the first loop runs (n+1) times, and so does the second.
For the third one, note that k runs from 1 to j-i, so the innermost loop executes max(j-i, 0) times: close to n when i is small and j is large, and not at all once i >= j.
In particular, whenever i <= n/4 and j >= 3n/4 you have j - i >= n/2, and there are about (n/4)^2 such pairs.
Therefore print (k) executes at least (n/4)^2 * (n/2) = n^3/32 times, which is Omega(n^3), as you can easily verify from the definition. (The exact count comes out to roughly n^3/6.)
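If you want to double-check that constant, a brute-force count in Python (a quick sketch) agrees with the n^3/6 estimate:

    # Count how many times print(k) would execute for a given n.
    def count_prints(n):
        total = 0
        for i in range(n + 1):
            for j in range(n, -1, -1):
                total += max(j - i, 0)   # the k-loop runs j-i times, or not at all
        return total

    for n in [10, 100, 1000]:
        print(n, count_prints(n), count_prints(n) / n**3)   # ratio tends to 1/6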
Just beware, as @Yves points out in his answer, that this all assumes print(k) runs in constant time, which is probably what was meant in your exercise.
In the real world it wouldn't be true, because printing the number also takes time, and that time grows with log(n) if print(k) prints in base 2 or base 10 (it would be n for base 1).
Otherwise, in this case the best case is the same as the worst case: there is just one input of size n, so you cannot find some "best instance" and some "worst instance" of size n. There is just the one instance of size n.
I do not see anything in that function that varies between different executions except for n.
Aside from that, as far as I can tell, there is only one case, and it is both the best case and the worst case at the same time.
It's O(n^3), in case you're wondering.
If this is a sadistic exercise, the complexity is probably Theta(n^3 log(n)), because of the triple loop, but also because the number of digits to be printed grows with n.
As n is the only factor in the behavior of your algorithm, the best case is n being 0: one initialization, one test, and then done...
Edit:
The best case describes the algorithm's behavior under optimal conditions. You provided a code snippet whose work depends on n. What is the best case? n being zero.
Another example: what is the best case of performing a linear search on an array? Either the key matches the first element, or the array is empty.
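In Python, for instance (a minimal sketch):

    # Linear search: returns the index of key, or -1 if absent.
    def linear_search(arr, key):
        for i, x in enumerate(arr):
            if x == key:
                return i    # best case: key is arr[0], one comparison, O(1)
        return -1           # worst case: key absent, n comparisons, O(n)

    print(linear_search([7, 3, 9], 7))   # best case: found immediately
    print(linear_search([7, 3, 9], 5))   # worst case: scans the whole array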
I was reading this post on SO: p-versus-np
And saw this:
"The creation of all permutations for a given range with the length n is not bounded, as you have n! different permutations. This means that the problem could be solved in n^(100) log n, which will take a very long time, but this is still considered fast."
Can someone explain how n! is solvable in n^(100) log n?
I carefully read the statement, which comes from a longer explanation that I found by googling. I think the correct wording would be:
"This means that a problem could be solved in n 100 log n, which would take a very long time, but this is still considered fast. On the other hand, one of the first algorithms for TSP was O(n!) and another one was O(n2 2n). And compared to polynomial functions these things grow really, really fast."
Notice the word "a" instead of "the": the corrected sentence says that some problem running in n^100 log n is still considered fast, not that the n! permutation problem can be solved in that time.
For estimating n!, the correct tool is Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n, which grows faster than any polynomial.
That contradicts what that guy wrote. Honestly, I have no idea what he meant by it...
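To see how good Stirling's formula is, and why n! outruns any polynomial bound, here is a small Python sketch:

    import math

    # Compare ln(n!) (via lgamma) with Stirling: ln(n!) ~ 0.5*ln(2*pi*n) + n*ln(n) - n.
    for n in [10, 20, 50]:
        exact = math.lgamma(n + 1)
        stirling = 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)
        print(n, round(exact, 3), round(stirling, 3))
    # ln(n!) grows like n*ln(n), so n! grows roughly like (n/e)^n: no fixed
    # polynomial such as n^100 log n can bound it for large n.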