When I was learning the divide and conquer approach, I came across this example (https://www.geeksforgeeks.org/multiply-two-polynomials-2/) about polynomial multiplication. I cannot understand why the time required to add the four results (subproblems) is Theta(n). I thought addition only takes constant time. Why linear time? Thanks in advance!
You're right that adding two numbers is constant time. But here "to add all results" means summing, coefficient by coefficient, the partial products to form the final polynomial, i.e., combining the terms for x, x^2, ..., x^n. Since each of the four partial polynomials has O(n) coefficients, adding them takes Theta(n).
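Here's a toy sketch (my own illustration, not from the linked article) of why the addition is linear: with polynomials stored as coefficient lists indexed by power of x, adding two of them has to touch every coefficient.

```python
def poly_add(a, b):
    """Add two polynomials given as coefficient lists (index = power of x)."""
    n = max(len(a), len(b))
    # One pass over all ~n coefficients: Theta(n), not constant time.
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]
```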
In two algorithms I've been working with, I use the two functions:
pi(n) := number of primes <= n, and
R(n) := r, where p_1 * p_2 * ... * p_r <= n but n < p_1 * p_2 * ... * p_(r+1), where p_i is the i-th prime.
Basically pi(n) is the famous prime-counting function, and R(n) multiplies consecutive primes together until the product would exceed the bound n, then returns the number of primes used, so for example:
R(12) = 2, because 2*3 <= 12 but 2*3*5 > 12, and
R(100) = 3, because 2*3*5 <= 100 but 2*3*5*7 > 100.
With my professor I have been discussing the running time of calculating these functions. I know that pi(n) is approximately n/ln(n) for large n (the prime number theorem), but I have my doubts about some things:
Can R(n) be calculated in polynomial time? From my point of view, by using dynamic programming we can calculate the product 2*3*5*...*p_i from 2*3*5*...*p_(i-1), so the problem reduces to getting the next prime, which as far as I know can be done in polynomial time (PRIMES is in P).
Also, since we can determine whether a number is prime in polynomial time, shouldn't that mean that pi(n) can be calculated in polynomial time? (Dynamic programming could help here too.)
If anyone can help me clarify these questions or point me in the right direction, I would really appreciate it! Thank you!
There are methods to compute pi(n) in sub-linear time. Google for "legendre phi" or for "lehmer prime counting function", or for more recent work "lagarias miller odlyzko prime counting function". Lehmer's method isn't difficult to program; I discuss it at my blog.
For any n, you can easily determine whether it's prime in O(n^(1/2)) time (check for divisibility by 2, 3, 4, ..., sqrt(n)), so you could just iterate m from 2 to n and keep a counter as you go. If you store the primes found so far in a list, you can even speed up each check (test divisibility only by the primes 2, 3, 5, ..., up to sqrt(m)). Either way, this algorithm for finding pi(n) is about O(n^(3/2)).
So let's say you run that algorithm and store the primes in a list. Then for R(n), you just iterate through them, keeping a cumulative product, and return as soon as the product would exceed n. That step is cheap: the primorial grows so fast that only something like O(log n) primes ever get multiplied, certainly far fewer than n. Put both of those together and the total stays O(n^(3/2)).
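A rough sketch of that plan in Python (the function names are mine, and the trial division is deliberately naive, matching the O(n^(3/2)) analysis above):

```python
def primes_up_to(n):
    """All primes <= n, by trial division against previously found primes."""
    primes = []
    for m in range(2, n + 1):
        # Only need to test divisibility by primes up to sqrt(m).
        if all(m % p for p in primes if p * p <= m):
            primes.append(m)
    return primes

def pi(n):
    """The prime-counting function pi(n)."""
    return len(primes_up_to(n))

def R(n):
    """Largest r such that p_1 * p_2 * ... * p_r <= n."""
    prod, r = 1, 0
    for p in primes_up_to(n):   # far more primes than R(n) ever needs
        if prod * p > n:
            break
        prod, r = prod * p, r + 1
    return r

assert R(12) == 2   # 2*3 <= 12 < 2*3*5
assert R(100) == 3  # 2*3*5 <= 100 < 2*3*5*7
```

Since the primorial grows so fast, the loop in R only ever multiplies a handful of primes; essentially all the work is in building the prime list.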
When dividing two long polynomials of degree n - 1, the remainder (with its coefficients) can obviously be computed in O(n * n) time by long division. I want to know if there is any faster algorithm to obtain the remainder, e.g. O(n log n).
P.S. (updated to clarify): if the polynomials p and q have different degrees, with deg(p) > deg(q), is it possible to find the remainder of p / q faster than O((deg(p) - deg(q)) * deg(p)) without losing accuracy?
If the polynomials are of the same degree, you can determine with one operation the number of times one goes into the other (if the coefficients are integers, this will in general be a rational number). Then you can multiply that fraction times one polynomial and subtract it from the other. This takes O(n) field operations, which is optimal since the answer is of length Θ(n) in the average and worst case.
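A small sketch of that equal-degree step, assuming integer coefficients stored lowest degree first (divmod_same_degree is my name for it, not a standard routine):

```python
from fractions import Fraction

def divmod_same_degree(p, q):
    """deg(p) == deg(q): the quotient is the single ratio of leading
    coefficients; the remainder is one O(n) multiply-and-subtract away."""
    c = Fraction(p[-1], q[-1])            # how many times q "goes into" p
    rem = [pi - c * qi for pi, qi in zip(p, q)]
    while len(rem) > 1 and rem[-1] == 0:  # the leading term always cancels
        rem.pop()
    return c, rem

# x^2 + 1 = (1/2) * (2x^2) + 1
assert divmod_same_degree([1, 0, 1], [0, 0, 2]) == (Fraction(1, 2), [1])
```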
Yes, there is a faster algorithm, however only for very large degrees. First reverse the polynomials, then use fast (Newton-based) Taylor series division to compute the reversed quotient, reverse the quotient and compute the remainder using fast multiplication.
The first operation has the runtime of a multiplication of polynomials with the degree of the quotient, the second of a multiplication of polynomials of the degree of the divisor.
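Here is a minimal sketch of that reverse-and-Newton scheme. To keep it short I use a naive convolution where a real implementation would plug in FFT-based multiplication (that substitution is exactly where the speedup comes from), and exact Fraction arithmetic stands in for a proper coefficient field.

```python
from fractions import Fraction

def mul(a, b):
    """Naive convolution; swap in FFT multiplication for the speedup."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def inverse_mod_xk(f, k):
    """Newton iteration for g with f*g = 1 (mod x^k); needs f[0] != 0."""
    g = [Fraction(1, 1) / f[0]]
    prec = 1
    while prec < k:
        prec = min(2 * prec, k)
        fg = mul(f[:prec], g)[:prec]      # f*g mod x^prec
        t = [-c for c in fg]
        t[0] += 2
        g = mul(g, t)[:prec]              # g <- g*(2 - f*g) mod x^prec
    return g

def divmod_poly(p, q):
    """Quotient and remainder of p/q via reversal + Newton division.
    Coefficient lists are lowest degree first; q's leading coeff != 0."""
    n, m = len(p) - 1, len(q) - 1
    if n < m:
        return [0], p
    k = n - m + 1                         # the quotient has k coefficients
    rquot = mul(p[::-1][:k], inverse_mod_xk(q[::-1], k))[:k]
    quot = rquot[::-1]                    # un-reverse the quotient
    diff = [a - b for a, b in zip(p, mul(q, quot))]
    rem = diff[:m] if m > 0 else [0]      # remainder has degree < m
    return quot, rem

# (x^3 + 1) / (x + 1) = x^2 - x + 1, remainder 0
quot, rem = divmod_poly([1, 0, 0, 1], [1, 1])
assert quot == [1, -1, 1] and rem == [0]
```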
I have seen this problem and I couldn't solve it.
The problem is finding the complexity of C(m, n) = C(m-1, n-1) + C(m, n-1) (Pascal's formula).
It's a recursive formula, but with two variables, and I have no idea how to solve it.
I would be happy for your help... :)
If you consider the 2D representation of this formula, you end up summing numbers that cover the "area" of a triangle of a given "height", so the complexity is O(n^2) if you fill in the values directly from the formula.
I don't know if that makes sense to you, but you can also think of it this way: computing one row of the formula (for a fixed n) takes linear time, and there are linearly many rows in n, which again gives O(n^2).
This line of thought seems to match what they demonstrate here:
http://www.geeksforgeeks.org/pascal-triangle/
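That row-by-row build is essentially what the code in the link does; a minimal sketch of the same idea (my own, not copied from the article):

```python
def pascal_triangle(n):
    """Rows 0..n of Pascal's triangle. Row i costs O(i) to fill,
    so the total work is 0 + 1 + ... + n = O(n^2)."""
    rows = [[1]]
    for i in range(1, n + 1):
        prev = rows[-1]
        # Interior entries: C(i, k) = C(i-1, k-1) + C(i-1, k).
        rows.append([1] + [prev[k - 1] + prev[k] for k in range(1, i)] + [1])
    return rows
```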
Looking for some help with an upcoming exam; this is a question from the review. I'm hoping someone can restate a) so I can better understand what it is asking.
So it wants me, instead of using extra multiplications, to obtain some of the terms in the answer (PQ) by adding and subtracting already-computed products, much as Strassen does in his algorithm to compute the product of 2x2 matrices in 7 multiplications instead of 8.
a) Suppose P(x) and Q(x) are two polynomials of (even) size n.
Let P1(x) and P2(x) denote the polynomials of size n/2 determined by the first n/2 and last n/2 coefficients of P(x). Similarly define Q1(x) and Q2(x),
i.e., P = P1 + x^(n/2) P2 and Q = Q1 + x^(n/2) Q2.
Show how the product PQ can be computed using only 3 distinct multiplications of polynomials of size n/2.
b) Briefly explain how the result in a) can be used to design a divide-and-conquer algorithm for multiplying two polynomials of size n (explain what the recursive calls are and what the bootstrap condition is).
c) Analyze the worst-case complexity of the algorithm you gave in part b). In particular, derive a recurrence formula for W(n) and solve it. As usual, to simplify the math, you may assume that n is a power of 2.
Here is a link I found which does polynomial multiplication.
http://algorithm.cs.nthu.edu.tw/~course/Extra_Info/Divide%20and%20Conquer_supplement.pdf
Notice here that if we do polynomial multiplication the way we learned in high school, it takes Theta(n^2) time. The question wants you to see that there is a more efficient algorithm, obtained by first preprocessing the polynomials: splitting each one into two pieces. This lecture gives a pretty detailed explanation of how to do it.
In particular, look at page 12 of the link. It shows explicitly how a four-multiplication process can be done with three multiplications when multiplying polynomials.
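For what it's worth, here is one way part a) usually plays out (this is Karatsuba's trick applied to polynomials; my own sketch, not the lecture's exact code): compute P1*Q1, P2*Q2, and (P1+P2)*(Q1+Q2), then recover the middle term P1*Q2 + P2*Q1 by subtraction.

```python
def poly_add(a, b):
    """Coefficient-wise sum of two coefficient lists."""
    if len(a) < len(b):
        a, b = b, a
    return [x + (b[i] if i < len(b) else 0) for i, x in enumerate(a)]

def poly_sub(a, b):
    return poly_add(a, [-x for x in b])

def karatsuba(p, q):
    """Multiply same-size polynomials (size n a power of 2) with 3 recursive calls."""
    n = len(p)
    if n == 1:                            # bootstrap: one scalar multiply
        return [p[0] * q[0]]
    h = n // 2
    p1, p2 = p[:h], p[h:]                 # P = P1 + x^h * P2
    q1, q2 = q[:h], q[h:]                 # Q = Q1 + x^h * Q2
    a = karatsuba(p1, q1)                 # multiplication 1: P1*Q1
    b = karatsuba(p2, q2)                 # multiplication 2: P2*Q2
    c = karatsuba(poly_add(p1, p2), poly_add(q1, q2))  # multiplication 3
    mid = poly_sub(poly_sub(c, a), b)     # = P1*Q2 + P2*Q1, no 4th multiply
    res = [0] * (2 * n - 1)
    for i, x in enumerate(a):             # contributes at x^0
        res[i] += x
    for i, x in enumerate(mid):           # contributes at x^h
        res[i + h] += x
    for i, x in enumerate(b):             # contributes at x^(2h)
        res[i + 2 * h] += x
    return res

# (1 + 2x + 3x^2 + 4x^3) * (5 + 6x + 7x^2 + 8x^3)
assert karatsuba([1, 2, 3, 4], [5, 6, 7, 8]) == [5, 16, 34, 60, 61, 52, 32]
```

For parts b) and c): the bootstrap condition is size 1 (a single scalar multiply), the recursive calls are the three products on polynomials of size n/2, and the additions and subtractions cost Theta(n), giving W(n) = 3W(n/2) + Theta(n), which solves to Theta(n^(log2 3)), roughly Theta(n^1.585).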
This is a solved problem in "Introduction to Algorithms" by Cormen et al.
Ch. 15, Section 15.2: Matrix Chain Multiplication. Pg. 373.
The objective is to parenthesize the matrix chain product A1.A2.A3.....An so that the number of scalar multiplications is minimized.
For the chain A_i . A_{i+1} ... A_k . A_{k+1} ... A_j,
matrix A_i has dimensions p_{i-1} x p_i.
The author comes up with the recursion
m[i,j] = 0, if i = j
m[i,j] = min over k from i to (j-1) of { m[i,k] + m[k+1,j] + p_{i-1} * p_k * p_j }, if i < j
(m[i,j] is the minimum number of scalar multiplications required for the product A_i ... A_j)
So far I understand, but then he says the time complexity is O(n^3).
When I look at the pseudocode, there are 3 for loops, so that checks out. But I don't understand it intuitively by looking at the recursion.
Can anyone please help?
The final answer is m[1,N], the whole chain, but every m[i,j] value must be calculated before m[1,N] can be. That is O(N^2) table entries.
From the recursion formula you can see that each m[i,j] calculation takes O(N) work (the minimization over the split point k).
So the complete solution is O(N^3).
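A minimal sketch of the book's bottom-up version, assuming matrix A_i has dimensions dims[i-1] x dims[i] (so n matrices need a dims list of length n+1); the three nested loops make the O(n^3) explicit:

```python
def matrix_chain_order(dims):
    """Minimum scalar multiplications to compute A_1 ... A_n."""
    n = len(dims) - 1
    # m[i][j]: min scalar multiplies for the sub-chain A_i ... A_j.
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # O(n) chain lengths ...
        for i in range(1, n - length + 2):   # ... times O(n) start points ...
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):            # ... times O(n) split points = O(n^3)
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]

# A1: 10x30, A2: 30x5, A3: 5x60 -> best is (A1 A2) A3 = 1500 + 3000 = 4500
assert matrix_chain_order([10, 30, 5, 60]) == 4500
```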
There are O(n^2) unique subproblems in any given MCM instance, and for each such subproblem there are O(n) possible splits.
So it is O(n^3).