Well, I came across a question that I could not solve. Can anyone tell me how to solve this problem? Not the definition, but the mathematical problem itself.
1. Express each time in microseconds, for example 1 sec = 10^6 microsec; call this value t. (It is unclear what to use for a month; perhaps it is taken to be 30 days.)
2. Find the inverse function, i.e. solve the equation f(n) = t for n. For example, if sqrt(n) = t, then n = t^2.
3. Substitute t, rounding down if the result is not an integer (all of these functions are increasing).
For n! there is no simple inverse function; you can compute it numerically, or partially help yourself with the inverse of Stirling's approximation.
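Since all of these functions are increasing, you can also invert any of them numerically by searching for the largest n with f(n) <= t. A minimal sketch in Python (the helper name and the example budgets are mine, just for illustration):

    import math

    def max_n(f, t):
        """Largest integer n with f(n) <= t, assuming f is increasing.

        Doubles n until f(n) exceeds t, then binary-searches the gap.
        """
        hi = 1
        while f(hi) <= t:
            hi *= 2
        lo = hi // 2          # now f(lo) <= t < f(hi)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if f(mid) <= t:
                lo = mid
            else:
                hi = mid
        return lo

    t = 10**6  # one second, expressed in microseconds
    print(max_n(lambda n: n * math.log2(n), t))  # n log n
    print(max_n(lambda n: 2**n, t))              # 2^n
    print(max_n(math.factorial, t))              # n!

Doubling first keeps the whole search down to O(log n) evaluations of f, even though you have no upper bound on n in advance.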
I'm a beginner in algorithms trying to understand time complexity. I read online that if an algorithm's time complexity is O(n), that is an upper bound, meaning there is no way the time taken by that particular algorithm could be expressed as anything above O(n).
But I also understood that O(n) algorithms can be called O(n^2). If so, why do we say "O(n) is the upper bound", and that Big-O gives an upper bound? How is that technically possible? Can someone explain it for beginners?
Note: kindly do not mark this as a duplicate; we were unable to understand the mathematical relations and other examples available.
Thanks in advance.
Maybe it is better explained through pictures. However, you will have to try to understand the mathematical definition of Big O.
A function f is Big O of another function g, or f(x) = O(g(x)), if we can find a point on the x-axis so that after that point, some "stretched version" of g(x) (like 3g(x) or 5g(x)) is always greater than f(x).
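Written out in symbols (this is just the standard definition, restating the sentence above):

    f(x) = O(g(x))  ⇔  there exist C > 0 and x₀ such that f(x) ≤ C·g(x) for all x > x₀

The "stretched version" is C·g(x), and the point on the x-axis is x₀.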
So g(x) is like a "measuring function" which tries to give you a feel for how fast f grows. What does this mean with pictures?
Suppose f is the function 2x + 3sin(x) + 10. Then, in the following picture, we see that a stretched version of g(x) = x (specifically, 4g(x)) is above f after x = 3.933:
Therefore, we can say that our chosen function f is O(x).
Now let's add another function k(x) = x² into the mix. Let's take its stretched version 0.2x² and plot it:
Here, we not only see that 0.2x² is greater than 4x after some point, it is also greater than f(x) (much earlier, in fact). So we can say that 4x = O(x²), but also f(x) = O(x²).
You can play around with the graphs here: https://www.desmos.com/calculator/okeagehzbh
If an algorithm is described as running "in O(n) time", then for sufficiently large inputs its running time never exceeds some fixed multiple of the input size.
Every algorithm that is O(n) is also O(n^2), and O(n^3), and O(2^n) - in the same way that every number smaller than 3 is also smaller than 5, and 7, and 1,000.
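A quick numeric example of why that containment holds: take f(n) = 5n. For every n ≥ 1 we have

    f(n) = 5n ≤ 5n^2 ≤ 5n^3, and also 5n ≤ 5·2^n,

so the single constant C = 5 witnesses f(n) = O(n^2), O(n^3) and O(2^n) all at once.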
I am working on an exercise (note: not a homework question) where the number of steps a computer can execute is given, and one is asked to compute the largest input size N that can be handled within certain time intervals, for several functions.
I have no problem doing this for functions such as f(n) = n, n^2, n^3 and the like.
But when it comes to f(n) = lg n, sqrt(n), n log n, 2^n, and n!, I run into problems.
It is clear to me that I have to set up an equation of the form func(n) = interval and then solve for n.
But how do I do this for the functions above?
Can somebody please give me an example, or name the inverse functions so that I can look them up on Wikipedia or somewhere else?
Your question isn't so much about algorithms, or complexity, but about inversions of math formulas.
It's easy to solve n^k = N for n in closed form. Unfortunately, for most other functions a closed-form inverse is either not known, or known not to exist. In particular, for n log(n), the solution involves the Lambert W function, which doesn't help you much.
In most cases, you will have to solve this kind of stuff numerically.
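For n log n specifically, there is still a semi-closed form: n·ln(n) = N rearranges to ln(n)·e^(ln n) = N, so ln(n) = W(N) and n = e^(W(N)), where W is the Lambert W function mentioned above. SciPy exposes W as scipy.special.lambertw; the small wrapper below is my own sketch, written for a base-2 logarithm:

    import math
    from scipy.special import lambertw

    def invert_n_log2_n(N):
        """Real n solving n * log2(n) = N, via the Lambert W function.

        n*log2(n) = N  <=>  n*ln(n) = N*ln(2)  <=>  ln(n)*e^(ln n) = N*ln(2),
        so ln(n) = W(N*ln(2)) and n = e^(W(N*ln(2))).
        """
        return math.exp(lambertw(N * math.log(2)).real)

    n = invert_n_log2_n(1e6)
    print(n, n * math.log2(n))  # the second value should come back as ~1e6

For everything else (n! in particular), a numeric search for the largest n with f(n) <= t works fine.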
I have seen this problem and I couldn't solve it.
The problem is finding the complexity of C(m,n) = C(m-1, n-1) + C(m, n-1) (Pascal's formula).
It's a recursive formula, but with two variables, and I have no idea how to solve this.
I would be happy for your help... :)
If you consider the 2D representation of this formula, you are summing numbers that cover the "area" of a triangle of a given "height", so the complexity is O(n^2) when the values are computed directly from the formula.
Another way to see it: computing each row of the triangle (for a fixed n) takes linear time, and there are linearly many rows, so multiplying the two you again get O(n^2).
This line of thought seems to match what they demonstrate here:
http://www.geeksforgeeks.org/pascal-triangle/
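To make the row-by-row argument concrete, here is a short sketch that builds the triangle directly from the recurrence (the function name is mine, not from the linked page):

    def pascal_triangle(height):
        """Build Pascal's triangle using C(m, n) = C(m-1, n-1) + C(m, n-1).

        Row k has k+1 entries, so the total work is 1 + 2 + ... + height,
        i.e. O(height^2) -- the "area of the triangle" argument above.
        """
        rows = [[1]]
        for _ in range(height - 1):
            prev = rows[-1]
            # each interior entry is the sum of the two entries above it
            rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
        return rows

    for row in pascal_triangle(5):
        print(row)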
This is the graph that I am expected to analyze: I have to find the gradient (slope) and from that deduce the time complexity.
I have found that the slope is equal to 1.91. If that is true, what else should I do?
The quotient of the logarithms is approximately 2. What does that mean once the logarithms are removed?
log(T(n)) / log(n) = 2
log(T(n)) = 2 * log(n)
log(T(n)) = log(n²)
T(n) = n²
T(n) denotes algorithm’s time complexity. Of course we are talking in asymptotic terms, i.e. using Big O notation we say that
T(n) ∈ O(n²).
You measured a value of about 2 for large inputs, and you are assuming it will remain the same for all larger ones.
You can read more at a page by one of the tutors at the University of Toronto. It uses basic calculus to explain how this works. Still, the idea behind all of it is that taking logarithms turns constant exponents into multiplicative constants and multiplicative constants into additive constants: log(c · n^k) = k · log(n) + log(c).
Also regarding interpretation of the plot, a similar question popped up here on Stack Overflow recently: Log-log plot/graph of algorithm time complexity
But note that this is really just an estimation of time complexity. You cannot prove time complexity of an algorithm by just running it on a finite set of inputs. This method can give you a good guess on what to try to prove using analysis of the algorithm, though.
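If you want to reproduce this kind of measurement yourself, here is a sketch of the log-log fit; the quadratic workload below is a made-up stand-in for whatever algorithm you actually measured:

    import time
    import numpy as np

    def slope_estimate(f, sizes):
        """Fit log(T(n)) = a*log(n) + b over measured running times.

        The fitted slope a is the empirical exponent: a close to 2
        suggests T(n) ∈ O(n^2), as in the derivation above.
        """
        times = []
        for n in sizes:
            start = time.perf_counter()
            f(n)
            times.append(time.perf_counter() - start)
        a, b = np.polyfit(np.log(sizes), np.log(times), 1)
        return a

    def quadratic(n):  # hypothetical Θ(n^2) workload, for illustration only
        s = 0
        for i in range(n):
            for j in range(n):
                s += i ^ j
        return s

    print(slope_estimate(quadratic, [200, 400, 800, 1600]))  # prints ≈ 2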
Looking for some help with an upcoming exam; this is a question from the review. I'm hoping someone could restate a) so I might better understand what it is asking.
So it wants me, instead of using extra multiplications, to obtain some of the terms in the answer (PQ) by adding and subtracting already-multiplied terms, the way Strassen's algorithm computes the product of 2x2 matrices in 7 multiplications instead of 8?
a) Suppose P(x) and Q(x) are two polynomials of (even) size n.
Let P1(x) and P2(x) denote the polynomials of size n/2 determined by the first n/2 and last n/2 coefficients of P(x). Similarly define Q1(x) and Q2(x),
i.e., P = P1 + x^(n/2) P2 and Q = Q1 + x^(n/2) Q2.
Show how the product PQ can be computed using only 3 distinct multiplications of polynomials of size n/2.
b) Briefly explain how the result in a) can be used to design a divide-and-conquer algorithm for multiplying two polynomials of size n (explain what the recursive calls are and what the bootstrap condition is).
c) Analyze the worst-case complexity of the algorithm you have given in part b). In particular, derive a recurrence formula for W(n) and solve it. As usual, to simplify the math, you may assume that n is a power of 2.
Here is a link I found which does polynomial multiplication.
http://algorithm.cs.nthu.edu.tw/~course/Extra_Info/Divide%20and%20Conquer_supplement.pdf
Notice that if we multiply polynomials the way we learned in high school, it takes Θ(n^2) time. The question wants you to see that there is a more efficient algorithm, obtained by first splitting each polynomial into two halves. This lecture gives a pretty detailed explanation of how to do this.
Especially, look at page 12 of the link. It shows you explicitly how a 4 multiplication process can be done in 3 when multiplying polynomials.
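Concretely, the trick is: compute A = P1·Q1, B = P2·Q2, and C = (P1 + P2)(Q1 + Q2). The middle term P1·Q2 + P2·Q1 then equals C - A - B, so only three half-size multiplications are needed, and the recurrence W(n) = 3W(n/2) + O(n) solves to O(n^(log2 3)) ≈ O(n^1.585). A minimal sketch under the assumption that both inputs are coefficient lists of the same power-of-two length (the function names are mine, not from the lecture):

    def poly_add(a, b):
        """Coefficient-wise sum; coefficients are listed lowest degree first."""
        return [x + y for x, y in zip(a, b)]

    def poly_mul(p, q):
        """Product of two polynomials using 3 recursive multiplications.

        Split P = P1 + x^(n/2)*P2 and Q = Q1 + x^(n/2)*Q2; the middle
        term P1*Q2 + P2*Q1 is recovered as (P1+P2)(Q1+Q2) - P1*Q1 - P2*Q2.
        """
        n = len(p)  # assume len(p) == len(q) == a power of 2
        if n == 1:  # bootstrap condition
            return [p[0] * q[0]]
        h = n // 2
        p1, p2 = p[:h], p[h:]
        q1, q2 = q[:h], q[h:]
        a = poly_mul(p1, q1)                               # multiplication 1
        b = poly_mul(p2, q2)                               # multiplication 2
        c = poly_mul(poly_add(p1, p2), poly_add(q1, q2))   # multiplication 3
        mid = [ci - ai - bi for ci, ai, bi in zip(c, a, b)]
        out = [0] * (2 * n - 1)                            # result: a + x^h * mid + x^n * b
        for i, v in enumerate(a):
            out[i] += v
        for i, v in enumerate(mid):
            out[h + i] += v
        for i, v in enumerate(b):
            out[n + i] += v
        return out

    # (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
    print(poly_mul([1, 2], [3, 4]))  # [3, 10, 8]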