I'm looking for some help with an upcoming exam; this is a question from the review. Could someone restate part a) so I can better understand what it is asking?
So, instead of using extra multiplications, it seems to want me to obtain some of the terms in the answer (PQ) by adding and subtracting products that have already been computed, much as Strassen does in his algorithm for multiplying 2x2 matrices with 7 multiplications instead of 8.
a) Suppose P(x) and Q(x) are two polynomials of (even) size n.
Let P1(x) and P2(x) denote the polynomials of size n/2 determined by the first n/2 and last n/2 coefficients of P(x). Similarly define Q1(x) and Q2(x),
i.e., P = P1 + x^(n/2) P2 and Q = Q1 + x^(n/2) Q2.
Show how the product PQ can be computed using only 3 distinct multiplications of polynomials of size n/2.
b) Briefly explain how the result in a) can be used to design a divide-and-conquer algorithm for multiplying two polynomials of size n (explain what the recursive calls are and what the bootstrap condition is).
c) Analyze the worst-case complexity of the algorithm you have given in part b). In particular, derive a recurrence formula for W(n) and solve it. As usual, to simplify the math, you may assume that n is a power of 2.
Here is a link I found which does polynomial multiplication.
http://algorithm.cs.nthu.edu.tw/~course/Extra_Info/Divide%20and%20Conquer_supplement.pdf
Notice that if we do polynomial multiplication the way we learned in high school, it takes Θ(n^2) time. The question wants you to see that there is a more efficient algorithm, obtained by first splitting each polynomial into two pieces. This lecture gives a pretty detailed explanation of how to do this.
In particular, look at page 12 of the link: it shows explicitly how the four multiplications can be reduced to three when multiplying polynomials.
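To make part a) concrete, here is a rough Python sketch of the 3-multiplication (Karatsuba-style) scheme. The function names are mine, coefficients are stored lowest-degree first, and n is assumed to be a power of 2 (as part c) permits):

def poly_add(a, b):
    # Pad to equal length and add coefficient-wise.
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul_3(p, q):
    # Multiply two polynomials of size n (a power of 2) using only
    # 3 distinct multiplications of size-n/2 polynomials per level.
    n = len(p)
    if n == 1:
        return [p[0] * q[0]]          # bootstrap: a single coefficient
    half = n // 2
    p1, p2 = p[:half], p[half:]       # P = P1 + x^(n/2) * P2
    q1, q2 = q[:half], q[half:]       # Q = Q1 + x^(n/2) * Q2
    a = poly_mul_3(p1, q1)                               # P1*Q1
    b = poly_mul_3(p2, q2)                               # P2*Q2
    c = poly_mul_3(poly_add(p1, p2), poly_add(q1, q2))   # (P1+P2)*(Q1+Q2)
    mid = [x - y - z for x, y, z in zip(c, a, b)]        # P1*Q2 + P2*Q1
    # PQ = P1*Q1 + x^(n/2)*(P1*Q2 + P2*Q1) + x^n*(P2*Q2)
    result = [0] * (2 * n - 1)
    for i, v in enumerate(a):
        result[i] += v
    for i, v in enumerate(mid):
        result[i + half] += v
    for i, v in enumerate(b):
        result[i + n] += v
    return result

For part b), the recursive calls are exactly the three half-size products, and the bootstrap case is a polynomial with a single coefficient. For part c), each call does three half-size recursive calls plus Theta(n) work for the additions, so W(n) = 3W(n/2) + cn, which solves to Theta(n^(log2 3)), roughly Theta(n^1.585).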
Related
Say I need to calculate the time complexity of a function
1⁶+2⁶+3⁶+...+n⁶. I am pretty sure this would be O(n⁷), but I only figure that because I know that Σi from i=0 to n is in O(n²). I cannot find a simple closed-form formula for a summation of iᵏ. Can anyone provide more detail on how to actually calculate the time complexity?
Thanks!
An easy proof that it's Θ(n⁷) is to observe that:
1⁶+2⁶+3⁶+...+n⁶ <= n⁶+n⁶+...+n⁶ = n⁷
(replacing all numbers with n makes the sum larger).
and
1⁶+2⁶+3⁶+...+n⁶ >= (n/2+1)⁶+...+n⁶ >= (n/2)⁶+(n/2)⁶+...+(n/2)⁶ = n⁷/2⁷
(in the first step, we drop the terms whose base is at most n/2, and in the second step we replace every remaining base with n/2; both steps can only decrease the sum). (Note: I've assumed n is even, but you can extend to odd n with a bit of minor fiddling.)
Thus 1⁶+2⁶+3⁶+...+n⁶ is bounded above and below by a constant factor of n⁷ and so by definition is Θ(n⁷).
As David Eisenstat suggests in the comments, another proof is to consider the (continuous) graphs y=x⁶ and y=(x+1)⁶ from 0 to n. The areas under these curves bound the sum from below and above, and are readily calculated via integrals: the first is n⁷/7 and the second is ((n+1)⁷-1)/7. This shows that the sum is n⁷/7 + o(n⁷).
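If you want to convince yourself numerically, here is a tiny Python check (illustrative only) that the ratio of the sum to n⁷ settles toward 1/7:

# The sum of i^6 for i = 1..n lies between n^7 / 2^7 and n^7,
# and is asymptotically n^7 / 7.
for n in (10, 100, 1000):
    s = sum(i**6 for i in range(1, n + 1))
    print(n, s / n**7)   # the ratio approaches 1/7 ~ 0.142857 as n grows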
I've been wondering whether one can compute the nth Fibonacci number in O(n) or even O(1) time, and why.
Can someone explain please?
Yes. It is called Binet's formula, or sometimes, incorrectly, De Moivre's formula (the real De Moivre's formula is a different one, but De Moivre did discover Binet's formula before Binet), and it involves the golden ratio φ: F(n) = (φ^n - ψ^n)/√5, where φ = (1+√5)/2 and ψ = (1-√5)/2. The mathematical reasoning behind it (see link) is a bit involved, but doable.
While it is an approximate formula, Fibonacci numbers are integers -- so, once you achieve a high enough precision (depends on n), you can just approximate the number from Binet's formula to the closest integer.
Precision, however, depends on constants, so you basically have two versions: one with single-precision floats and one with double-precision numbers, the second also running in constant time but slightly slower. For large n you will need an arbitrary-precision number library, and those have processing times that do depend on the numbers involved; as observed by @MattTimmermans, you'll then probably end up with an O(log² n) algorithm. This should only happen for values of n large enough that you'd be stuck with a big-number library no matter what (but I'd need to test this to be sure).
Otherwise, Binet's formula mainly involves two exponentiations and one division (the three sums and the divisions by 2 are probably negligible), while the recursive formula mainly involves function calls and the iterative formula uses a loop. While the first is O(1) and the other two are O(n), the actual running times look more like a, b*n + c, and d*n + e, with values of a, b, c, d and e that depend on the hardware, compiler, implementation, etc. With a modern CPU it is very likely that a is not much larger than b or d, which means that the O(1) formula should be faster for almost every n. But most implementations of the iterative algorithm start with
if (n < 2) {
return n;
}
which is very likely to be faster for n = 0 and n = 1. I feel confident that Binet's formula is faster for any n beyond the single digits.
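For what it's worth, here is a minimal Python sketch of the double-precision version (my own code, not a reference implementation); it is only exact while double precision suffices, roughly up to n around 70, after which you need the arbitrary-precision route discussed above:

import math

def fib_binet(n):
    # Binet's formula with doubles, rounded to the nearest integer.
    sqrt5 = math.sqrt(5.0)
    phi = (1.0 + sqrt5) / 2.0
    psi = (1.0 - sqrt5) / 2.0
    return round((phi**n - psi**n) / sqrt5)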
Instead of thinking about the recursive method, think of building the sequence from the bottom up, starting at 1+1.
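A minimal sketch of that bottom-up idea in Python (my own wording of it); each step only needs the previous two values, so it runs in O(n) time and O(1) extra space:

def fib_iterative(n):
    # Build the sequence from the base cases F(0) = 0, F(1) = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a   # F(n)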
You can also use a matrix m like this:
1 1
1 0
and calculate the nth power of it, then output m^n[0][0].
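Here is one way that matrix idea might look in Python (a sketch of mine), using exponentiation by squaring so only O(log n) matrix multiplications are needed, ignoring the growing cost of big-integer arithmetic. Note that m^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so the [0][0] entry mentioned above is F(n+1) under the convention F(0) = 0, F(1) = 1:

def mat_mul(x, y):
    # Product of two 2x2 matrices given as ((a, b), (c, d)).
    return ((x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]),
            (x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]))

def fib_matrix(n):
    # Return F(n) as the [0][1] entry of m^n, computed by repeated squaring.
    result = ((1, 0), (0, 1))   # identity matrix
    m = ((1, 1), (1, 0))
    while n > 0:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result[0][1]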
I have trouble understanding time complexity. People can look at an algorithm and directly say what its time complexity is, but I can't do that well.
Consider two n * n matrices (A and B). Their multiplication result is C.
So, the value of C11 takes n multiplications and n-1 additions to compute. Why is the overall time complexity O(n^3)? I would say O(n^2).
Can someone explain it in plain language? I know what Theta is and what big O is, but I just can't apply them to examples like this.
And if you provide another simple example similar to above, that would be greatly appreciated.
Simply put, your matrix C has n x n = n^2 cells, and calculating each cell alone (like c11) takes n operations. So in total that is n^2 * n operations, i.e. O(n^3) time complexity.
Saying that the whole computation is O(n^2) is not really correct. What it takes to compute c11 is a loop from 1 to n (write it down on paper and you will see), which is O(n), and that work has to be repeated for each of the n^2 cells.
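If it helps, here is the schoolbook algorithm written out in Python (a plain, unoptimized sketch) so the three nested loops, and hence the n * n * n scalar operations, are visible:

def matmul(A, B):
    # Multiply two n x n matrices the schoolbook way: Theta(n^3) multiply-adds.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):            # n choices of row
        for j in range(n):        # n choices of column -> n^2 cells in C
            for k in range(n):    # n multiply-adds per cell, e.g. for c11
                C[i][j] += A[i][k] * B[k][j]
    return C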
Practice makes perfect. Just try more problems and you will get good at it. Also, Facebook has an interview preparation tool called codelab that you can use to practice this kind of problem.
Hope this helps!
I feel stupid for asking this question, but...
For the "closest pair of points" problem (see this if unfamiliar with it), why is the worst-case running time of the brute-force algorithm O(n^2)?
If, say, n = 4, then there would only be 12 possible pairs of points to compare in the search space, if we count comparing two points in either direction. If we don't compare a pair twice, it's only 6.
O(n^2) doesn't add up to me.
The actual number of comparisons is:
n(n - 1)/2, or (n² - n)/2 = n²/2 - n/2.
But, in big-O notation, you are only concerned about the dominant term. At very large values of n, the n/2 term becomes less important, as does the 1/2 coefficient on the n² term. So, we just say it's O(n²).
Big-O notation isn't meant to give you the exact formula for the time taken or number of steps. It only gives you the order of the complexity/time so you can get a sense of how it scales for large inputs.
Applying brute force, we are forced to check all the possible pairs. Assuming N points, for each point there are N-1 other points to which we need to calculate the distance, so the total number of distances calculated is N * (N-1). But in the process we double-counted: the distance between A and B is the same whether it is computed from A to B or from B to A. Hence N*(N-1)/2, which is O(N^2).
In big-O notation, you can factor out multiplied constants, so:
O(k*(n^2)) = O(n^2)
The idea is that the constant (1/2 in the OP's example, since distance is symmetric) doesn't really tell us anything new about the complexity: the count still grows with the square of the input size.
In the brute-force version of the algorithm you compare all possible pairs of points. For each of n points you have (n - 1) other points to compare against, and if we count every pair once we end up with (n * (n - 1)) / 2 comparisons. The worst-case complexity of O(n^2) means that the number of operations is bounded by k * n^2 for some constant k. Big O notation doesn't give you the exact number of operations, only a function the count is proportional to (at most) as the size of the data (n) increases.
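A bare-bones Python sketch of that brute-force version (my own, just to make the pair loop explicit); it assumes at least two points:

import math

def closest_pair_brute_force(points):
    # Check every unordered pair once: n*(n-1)/2 distance computations.
    best = math.inf
    best_pair = None
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):   # each pair is considered exactly once
            (x1, y1), (x2, y2) = points[i], points[j]
            d = math.hypot(x1 - x2, y1 - y2)
            if d < best:
                best, best_pair = d, (points[i], points[j])
    return best_pair, best

For n = 4 the inner loop runs 3 + 2 + 1 = 6 times, matching the 6 unordered pairs from the question.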
I am confused about calculating the time complexity of one problem; please help me with it.
Problem statement:-
A 2-D matrix is given; you are at the bottom-left block and have to reach the top-right block. The constraint is that from every block you can move only one step at a time, either upwards or to the right.
How many such paths are there? Prove it mathematically.
Is the time complexity polynomial or exponential?
My effort :-
If the matrix is of size N*N, then you have to move exactly 2N steps, of which N steps are R (right) and the remaining N steps are U (up).
So, if we simplify this, it's a permutation-and-combination problem: given a string that contains only the letters R and U, in how many ways can you arrange it?
Answer is
( (2N) C (N) )*( (N) C (N) )*2
Question
Is my above logic correct? If not, please correct me.
Is the above formula polynomial or exponential?
Your idea is correct; however, the answer is a bit inaccurate.
The string only has letters R and U but its length is 2N-2.
The problem is to arrange 2N-2 objects such that N-1 are of one type and N-1 are of the other type.
The number of possibilities = factorial(2N-2) / ( factorial(N-1) * factorial(N-1) )
If you treat the product of two numbers as O(k), then calculating the above has a time complexity of O(N*k).
To get an idea of the order of multiplication for various algorithms, you can visit http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
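As a small illustration (my own sketch, assuming Python 3.8+ for math.comb), the corrected count can be computed directly:

from math import comb

def count_paths(N):
    # Paths in an N x N grid of blocks moving only up or right:
    # choose which N-1 of the 2N-2 steps go right.
    return comb(2 * N - 2, N - 1)

print(count_paths(3))   # a 3x3 grid has comb(4, 2) = 6 such paths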
EDIT:
This number is one of the binomial coefficients in the expansion of (1+1)^(2N-2) = 2^(2N-2), so it is at most 2^(2N-2).
Hence we can safely say that "counting up to this number" is bounded by an exponential rate of growth, not a factorial one.