I am going through this topic in CLRS (Cormen et al., page 834) and I got stuck at this point.
Can anybody please explain how the following expression,
A(x) = A^{[0]}(x^2) + x A^{[1]}(x^2)
follows from,
A(x) = \sum_{j=0}^{n-1} a_j x^j
Where,
A^{[0]}(x) = a_0 + a_2 x + a_4 x^2 + ... + a_{n-2} x^{n/2 - 1}
A^{[1]}(x) = a_1 + a_3 x + a_5 x^2 + ... + a_{n-1} x^{n/2 - 1}
The polynomial A(x) is defined as
A(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...
To start the divide-and-conquer strategy of polynomial multiplication by the FFT, CLRS introduces two new polynomials: one built from the coefficients of the even powers of x, called A[0], and one built from the coefficients of the odd powers of x, called A[1]:
A[0](x) = a_0 + a_2 x + a_4 x^2 + ...
A[1](x) = a_1 + a_3 x + a_5 x^2 + ...
Now if we substitute x^2 into A[0] and A[1], we have
A[0](x^2) = a_0 + a_2 x^2 + a_4 x^4 + ...
A[1](x^2) = a_1 + a_3 x^2 + a_5 x^4 + ...
and if we multiply A[1](x^2) by x, we have
x A[1](x^2) = a_1 x + a_3 x^3 + a_5 x^5 + ...
Now if we add A[0](x^2) and x A[1](x^2), we have
A[0](x^2) + x A[1](x^2) = (a_0 + a_2 x^2 + a_4 x^4 + ...) + (a_1 x + a_3 x^3 + a_5 x^5 + ...)
= a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...
= A(x)
Q.E.D.
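To see the identity numerically as well, here is a minimal Python check (the coefficient list is an arbitrary example, not from CLRS):

# Check A(x) = A0(x^2) + x*A1(x^2) for an arbitrary coefficient list.
def evaluate(coeffs, x):
    # Horner's rule; coeffs = [a0, a1, a2, ...], lowest degree first.
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

a = [3, 1, 4, 1, 5, 9]     # A(x) = 3 + x + 4x^2 + x^3 + 5x^4 + 9x^5
a_even = a[0::2]           # A0 coefficients: [3, 4, 5]
a_odd  = a[1::2]           # A1 coefficients: [1, 1, 9]

for x in [-3, -0.5, 1, 2.5]:
    lhs = evaluate(a, x)
    rhs = evaluate(a_even, x * x) + x * evaluate(a_odd, x * x)
    assert abs(lhs - rhs) < 1e-9, (x, lhs, rhs)
print("identity verified at all test points")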
If you divvy the polynomial up into "odd exponents" and "even exponents", you'll find the annoying fact that the A[1] polynomial (the one with odd exponents) has, well, odd exponents! Even exponents are easier to work with for the FFT. So one can simply factor out a single "x" from all of the terms in A[1] and move it outside of the expression.
FFT likes working with even-exponented polynomials only. Thus, when you're dividing-and-conquering, you want to turn your A[1] expression into an "even-exponented" polynomial, and recurse on that, and then multiply-back-in that x. You will see that occur in the inner loop of the actual algorithm.
Edit: I realize that your confusion may stem from the fact that they're "passing in" (x^2) as the value in the polynomial. The "x" in A[1] and A[0] is different from the x in the (x^2) expression. You'll see how that must be: while the original polynomial A goes up to exponent N, A[1] and A[0] both only go up to exponent (N/2).
I'm not going to answer your question because I feel that previous people have answered it. What I will do is try to explain the purpose of the FFT.
First, the FFT is a way to compute the convolution between two vectors. That is, suppose x = (x0, ..., xn) and y = (y0, ..., yn) are vectors; then the convolution of x and y is
\sum_{i=0}^{n} x_i y_{n-i}.
You will have to accept the fact that computing that value is EXTREMELY useful in a wide range of applications.
Now consider the following.
Suppose we construct two polynomials
A(z) = x0 + x1*z + x2*z^2 + ... + xn*z^n
B(z) = y0 + y1*z + y2*z^2 + ... + yn*z^n
then the multiplication is
AB(z) = A(z)B(z) = \sum_{i=0}^{2n} (\sum_{k=0}^{i} x_k y_{i-k}) z^i, taking x_k = y_k = 0 for k > n,
where the inside sum is clearly a convolution of different sizes for different values of i.
Now we can clearly compute the coefficients (convolutions) of AB in n^2 time by a brute force method.
However, we can also be much more clever. Consider the fact that any polynomial of degree n can be described uniquely by n+1 points. That is, given n+1 points we can construct the unique polynomial of degree n that goes through all n+1 points. Furthermore, consider 2 polynomials in the form of n+1 points. You can compute their product by simply multiplying the y-values pointwise and keeping the x-values, to get their product in point-form. Now given a polynomial in point-form you can find the unique polynomial that describes it; at the FFT's carefully chosen points this interpolation takes O(n log n) time, via the inverse FFT.
This is exactly what the FFT does. However, the points that it picks to get the n+1 points to describe the polynomials A and B are VERY carefully chosen. Some of the points are indeed complex, because it just so happens that you can save time in evaluating a polynomial by considering such points. That is, if you were to choose arbitrary real points instead of the carefully chosen points that the FFT uses, you would need O(n^2) time to evaluate the n+1 points. If you use the FFT's points you only need O(n log n) time. And that's all there is to the FFT. Oh, and there is a unique side effect to the way that the FFT chooses points: given an n-th degree polynomial, you must choose 2^m points, where m is chosen such that 2^m is the smallest power of 2 greater than or equal to n+1.
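Here is a sketch of that evaluate-multiply-interpolate pipeline in Python using NumPy (np.fft.fft and np.fft.ifft are the standard library calls; the input polynomials are arbitrary small examples):

# Multiply two polynomials by evaluating at complex roots of unity (FFT),
# multiplying pointwise, then interpolating back (inverse FFT).
import numpy as np

def poly_multiply_fft(a, b):
    # a, b: coefficient lists, lowest degree first. Returns coefficients of a*b.
    n = 1
    while n < len(a) + len(b) - 1:   # need >= deg(a)+deg(b)+1 points; round up to a power of 2
        n *= 2
    fa = np.fft.fft(a, n)            # evaluate a at the n-th roots of unity (zero-padded)
    fb = np.fft.fft(b, n)            # evaluate b at the same points
    product = np.fft.ifft(fa * fb)   # pointwise multiply, then interpolate
    return np.round(product.real).astype(int)[: len(a) + len(b) - 1]

# (1 + 2z)(3 + 4z) = 3 + 10z + 8z^2
print(poly_multiply_fft([1, 2], [3, 4]))   # -> [ 3 10  8]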
A(x) is broken into even (x^2) and odd (x) parts;
for example if A(x) = 21 x^5 + 17 x^4 + 33 x^3 + 4 x^2 + 8 x + 7
then A0 = 17 y^2 + 4 y + 7
so that A0(x^2) = 17 x^4 + 4 x^2 + 7
and A1 = 21 y^2 + 33 y + 8
so that A1(x^2) = 21 x^4 + 33 x^2 + 8
or x * A1(x^2) = 21 x^5 + 33 x^3 + 8 x
clearly, in this case, A(x) = A0(x^2) + x A1(x^2) = even + odd parts
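The same split, checked in Python with the polynomials exactly as given above:

# A(x) = 21x^5 + 17x^4 + 33x^3 + 4x^2 + 8x + 7, split as above.
A  = lambda x: 21*x**5 + 17*x**4 + 33*x**3 + 4*x**2 + 8*x + 7
A0 = lambda y: 17*y**2 + 4*y + 7     # even part, in y = x^2
A1 = lambda y: 21*y**2 + 33*y + 8    # odd part, in y = x^2

assert all(A(x) == A0(x**2) + x * A1(x**2) for x in range(-5, 6))
print("A(x) = A0(x^2) + x*A1(x^2) holds")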
The multiplication algorithm is for multiplying two radix r numbers:
0 <= x,y < r^n
x = x1 * r^(n/2) + x0
y = y1 * r^(n/2) + y0
where x0 is the half of x that contains the least significant digits, and x1 is the half with the most significant digits, and similarly for y.
So if r = 10 and n = 4, we have that x = 9723 = 97 * 10^2 + 23, where x1 = 97 and x0 = 23.
The multiplication can be done as:
z = x*y = x1*y1*r^n + (x0*y1 + x1*y0)*r^(n/2) + x0*y0
So we have now four multiplications of half-sized numbers (we initially had a multiplication of n digit numbers, and now we have four multiplications of n/2 digit numbers).
As I see it the recurrence for this algorithm is:
T(n) = O(1) + 4*T(n/2)
But apparently it is T(n) = O(n) + 3T(n/2)
Either way, the solution is T(n) = O(n^2), and I can see this, but I am wondering why there is an O(n) term instead of an O(1) term?
You are right: if you compute the term x0*y1 + x1*y0 naively, with two products, the time complexity is quadratic. This is because we do four products and the recurrence is T(n) = O(n) + 4T(n/2) (your recurrence, but with O(n) rather than O(1) for the additions; see below), which solves to O(n^2).
However, Karatsuba observed that x*y = z2*r^n + z1*r^(n/2) + z0, where we let z2 = x1*y1, z0 = x0*y0, and z1 = x0*y1 + x1*y0, and that one can express the last term as z1 = (x1+x0)(y1+y0) - z2 - z0, which involves only one product. Using this trick, the recurrence does become T(n) = O(n) + 3T(n/2), because we do three products altogether (as opposed to four if we don't use the trick); it solves to O(n^(log2 3)) ≈ O(n^1.585).
Because the numbers are of order r^n we will need n digits to represent the numbers (in general, for a fixed r>=2, we need O(log N) digits to represent the number N). To add two numbers of that order, you need to "touch" all the digits. Since there are n digits, you need O(n) (formally I'd say Omega(n), meaning "at least order of n time", but let's leave the details aside) time to compute their sum.
For example, when computing the product N*M, the number of bits n will be max(log N, log M) (assuming the base r>=2 is constant).
The algebraic trick is explained in more detail on the Wiki page for the Karatsuba algorithm.
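To make the trick concrete, here is a one-level sketch in Python (r = 10, n = 4; x = 9723 is the example from the question, y = 4568 is an arbitrary second operand):

# One level of Karatsuba: multiply two 4-digit numbers with 3 half-size products.
def karatsuba_step(x, y, half=100):        # half = r^(n/2) for r=10, n=4
    x1, x0 = divmod(x, half)
    y1, y0 = divmod(y, half)
    z2 = x1 * y1
    z0 = x0 * y0
    z1 = (x1 + x0) * (y1 + y0) - z2 - z0   # = x0*y1 + x1*y0, with only one new product
    return z2 * half * half + z1 * half + z0

print(karatsuba_step(9723, 4568))   # -> 44414664
print(9723 * 4568)                  # same value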
I am studying big O notation from this book.
The definition of big O notation is:
We say that f (x) is O(g(x)) if there are constants C and k such that |f (x)| ≤ C|g(x)| whenever x > k.
Now here is the first example:
EXAMPLE 1 Show that f (x) = x^2 + 2x + 1 is O(x^2).
Solution: We observe that we can readily estimate the size of f (x) when x > 1 because x < x^2 and 1 < x^2 when x > 1. It follows that
0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2
whenever x > 1. Consequently, we can take C = 4 and k = 1 as witnesses to show that f (x) is O(x^2). That is, f (x) = x^2 + 2x + 1 < 4x^2 whenever x > 1. (Note that it is not necessary to use absolute values here because all functions in these equalities are positive when x is positive.)
I honestly don't know how they got C = 4; it looks like they jump straight to the equation manipulation, and my algebra skills are pretty weak. However, I found another way through the accepted answer to the question "What is an easy way for finding C and N when proving the Big-Oh of an Algorithm?", which says to add all the coefficients to find C if k = 1. So for x^2 + 2x + 1, C = 1 + 2 + 1 = 4.
Now for k = 2, I'm completely lost:
Alternatively, we can estimate the size of f (x) when x > 2. When x > 2, we have 2x ≤ x^2 and 1 ≤ x^2. Consequently, if x > 2, we have
0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2.
It follows that C = 3 and k = 2 are also witnesses to the relation f (x) is O(x^2).
Can anyone explain what is happening? What method are they using?
First alternative:
C=4?
The C=4 comes from the inequalities
0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2 = C*x^2, with C=4 (+)
The second inequality in (+) is true for all x greater than 1, since, term by term
2x < 2x^2, given x>1
1 < x^2, given x>1
k = 1?
From above, we've shown that (+) holds as long as x is larger than 1, i.e.
(+) is true given x > k, with k=1
Second alternative:
k=2?
By the statement, we want to study f(x) for x larger than 2, i.e.
Study f(x) for x > k, k=2
Given x > 2, it's apparent that
0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2 = C*x^2, with C=3 (++)
since, for x>2, we have
2x = x^2 given x=2 ==> 2x < x^2 given x>2
for x=2, 1 < x^2 = 4, so 1 < x^2 for all x>2
Both examples show that f(x) is O(x^2). Using your constants C and k, recall that the Big-O notation for f(x) can be summarized as something along the lines of:
... we can say that f(x) is O(g(x)) if we can find a constant C such
that |f(x)| is less than C|g(x)| for all x larger than k, i.e., for all
x>k. (*)
This, by no means, implies that we need to find a unique set of (C, k) to prove that some f(x) is some O(g(x)), just some set (C, k) such that (*) above holds.
See e.g. the following link for some reference on how to specify the asymptotic behaviour of a function:
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
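And if you want to convince yourself numerically that both witness pairs work, a quick sketch (the sampling step and range are arbitrary choices):

# Verify f(x) = x^2 + 2x + 1 <= C*x^2 for (C=4, k=1) and (C=3, k=2).
def witness_holds(C, k, step=0.001, upper=1000.0):
    x = k + step
    while x <= upper:
        if x*x + 2*x + 1 > C * x*x:
            return False
        x += step
    return True

print(witness_holds(4, 1))   # True
print(witness_holds(3, 2))   # True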
I was asked to analyze the time complexity of the following recursive equation using the iterative method:
T(n)=T(n/3)+T(2n/3)+n^2.
T(1)=1
when I try to expand the equation it blows up and I can't really keep track of all the recursive "calls" and constants.
This is caused by the uneven division of the data (1/3 - 2/3).
Is there an easier way to solve this using the iterative method?
Many thanks.
Here is a paper that shows the analysis of a similar formula: T(n) = T(n/3) + T(2n/3) + n
One way to make it iterative will require using a method similar to how parsers/compilers work
Applying your formula T(n) = T(n/3) + T(2n/3) + n^2 with n = 0..9 yields
T(0) = 0
T(1) = T(1/3) + T(2/3) + 1
T(2) = T(2/3) + T(4/3) + 4
T(3) = T(1) + T(2) + 9
T(4) = T(4/3) + T(8/3) + 16
T(5) = T(5/3) + T(10/3) + 25
T(6) = T(2) + T(4) + 36
T(7) = T(7/3) + T(14/3) + 49
T(8) = T(8/3) + T(16/3) + 64
T(9) = T(3) + T(6) + 81
T(3m) = T(m) + T(2m) + 9m^2
Maybe this can give you some hints.
What helps here is to not multiply out any of the numbers, but write everything in terms of powers. Doing that all by hand, I got the following for the first few expansions:
T(n) = T((1/3)n) + T((2/3)n) + n^2
= T((1/3^2)n)
+ 2T((2/3^2)n)
+ T((2^2/3^2)n)
+ [n^2] #constants from the first expansion
+ [((1/3)n)^2 + ((2/3)n)^2] #constants from the second expansion
= T((1/3^3)n)
+ 3T((2/3^3)n)
+ 3T((2^2/3^3)n)
+ T((2^3/3^3)n)
+ [n^2]
+ [((1/3)n)^2 + ((2/3)n)^2]
+ [((1/3^2)n)^2 + ((2/3^2)n)^2 + ((2^2/3^2)n)^2] #constants from 3rd expansion
It's a bit hard to tell, but what seems to happen is that you get the binomial coefficients going for the Ts, where the xth expansion looks like this:
T(n) = sum((x choose i) * T(((2^i)/(3^x))n), i from 0 to x)
+ constants
At each step, the additional constants that are added at expansion x are the arguments to T from expansion x-1, squared, since they all end up getting squared thanks to the n^2. So all the new constants at a given expansion y are equal to:
NewConsts(y) = sum(((y - 1) choose i) * (((2^i)/(3^(y-1)))*n)^2, i from 0 to y - 1)
And all the constants at expansion x are equal to
n^2 + sum(NewConsts(y), y from 1 to x)
So, assuming all the above is correct, which I'm not 100% sure on, I guess you have to figure out when the constants stop mattering - that is, for what x the term (((2^x)/(3^x))*n)^2 effectively reaches 0 - and your answer is the sum of all of those constants...
It seems to be O(n^2) if I haven't missed anything...
First of all, T grows monotonically (for the first several values you can check this manually; for the rest it's by induction - if a function is monotonic on [1..10], then it will be monotonic on [1..15], and so on).
T(n) = T(n/3) + T(2n/3) + n^2 <= 2T(2n/3) + n^2
T(n) <= n^2 + 2*(2n/3)^2 + 4*(4n/9)^2 + ...
= sum[k=0..log1.5(n)] ((8/9)^k * n^2)
= n^2 * sum[k=0..log1.5(n)] (8/9)^k
<= n^2 * sum[k=0..inf] (8/9)^k
<= C*n^2, with C = 9, since the geometric series sums to 1/(1 - 8/9) = 9
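A quick empirical check of that bound (memoized evaluation; the base case T(n) = 1 for n <= 1 and the integer division are assumptions for illustration):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return T(n // 3) + T(2 * n // 3) + n * n

for n in [10, 100, 1000, 10000]:
    print(n, T(n) / (n * n))   # the ratio stays bounded, approaching roughly 9/4 = 2.25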
I can clearly see that N^2 is bounded by c*2^N, but how do I prove it using the formal definition of big-O? I can prove it easily enough by mathematical induction.
Here is my attempt..
By definition, there exist a constant C and a threshold n0 such that for all n > n0,
f(n) <= Cg(n)
where
f(n) = n^2
and
g(n) = 2^n
Should I take the log of both sides and solve for C?
And one more question, about the Fibonacci sequence: I want to solve the recurrence relation.
int fib(int n){
    if(n <= 1) return n;
    else return fib(n-1) + fib(n-2);
}
The recurrence is:
T(n) = T(n-1) + T(n-2) + c // where c accounts for the addition
so
T(n) = T(n-2) + 2T(n-3) + T(n-4) + 3c
and one more
T(n) = T(n-3) + 3T(n-4) + 3T(n-5) + T(n-6) + 7c
then I started to get lost trying to form the general equation.
The pattern is somehow like Pascal's triangle?
T(n) = T(n-i) + a*T(n-i-1) + b*T(n-i-2) + ... + k*T(n-2i) + C
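For the Fibonacci part, one way to see the growth without tracking the Pascal-like coefficients: setting S(n) = T(n) + c folds the constant away, giving S(n) = S(n-1) + S(n-2), the plain Fibonacci recurrence, so T(n) grows as Theta(phi^n) with phi ≈ 1.618. A quick check (base cases T(0) = T(1) = 1 and c = 1 are assumptions for illustration):

from functools import lru_cache

c = 1   # assumed cost of the addition

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return T(n - 1) + T(n - 2) + c

# S(n) = T(n) + c satisfies S(n) = S(n-1) + S(n-2), the Fibonacci recurrence.
for n in range(2, 20):
    assert T(n) + c == (T(n - 1) + c) + (T(n - 2) + c)
print("T(n) + c follows the Fibonacci recurrence")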
As you point out, to see if f(x) ∈ O(g(x)) you need to find...
...some c > 0 and
...some x0
such that f(x) < c·g(x) for all x > x0.
In this case, you can pick c = 1 and x0 = 2. What you need to prove is that
x^2 < 2^x for all x > 2
At this point you can log both sides (since if log(x) > log(y), then x > y.) Assuming you're using base-2 log you get the following
log(x^2) < log(2^x)
and by standard laws of logarithms, you get
2·log(x) < x·log(2)
Since log(2) = 1 this can be written as
2·log(x) < x
If we set x = 2, we get
2·log(2) = 2
and since x grows faster than log(x) we know that 2·log(x) < x holds for all x > 2.
For the most part, the accepted answer (from aioobe) is correct, but there is an important correction that needs to be made.
Yes, for x=2, 2×log(x) = x or 2×log(2) = 2 is correct, but then he incorrectly implies that 2×log(x) < x is true for ALL x>2, which is not true.
Let's take x=3, so the equation becomes: 2×log(3) < 3 (an invalid equation).
If you calculate this, you get: 2×log(3) ≈ 3.16993, which is greater than 3.
You can clearly see this if you plot f(x) = x^2 and g(x) = 2^x, or if you plot f(x) = 2×log(x) and g(x) = x (if c=1).
Between x=2 and x=4, you can see that g(x) will dip below f(x). It is only when x ≥ 4, that f(x) will remain ≤ c×g(x).
So to get the correct answer, you follow the steps described in aioobe's answer, but you plot the functions to find the last intersection where f(x) = c×g(x). The x at that intersection is your x0 (together with the chosen c) for which the following is true: f(x) ≤ c×g(x) for all x ≥ x0.
So for c=1 it should be: for all x≥4, or x0=4
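A quick numeric scan confirms that crossover (the step size is an arbitrary choice):

# Scan for the last x where x^2 > 2^x; beyond it, x^2 <= 2^x holds for good.
x, last_violation = 0.0, None
while x <= 50:
    if x * x > 2 ** x:
        last_violation = x
    x += 0.001
print(last_violation)   # just under 4.0: x^2 > 2^x on (2, 4), so with c=1 we need x0 = 4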
To improve upon the accepted answer:
You have to prove that x^2 < 2^x for all x > 4 (as shown above, the inequality actually fails between x = 2 and x = 4).
Taking log (base 2) on both sides, we have to prove that:
2·log(x) < x for all x > 4
Thus we have to show the function h(x) = x - 2·log(x) > 0 for all x > 4.
h(4) = 4 - 2·log(4) = 0
Differentiating h(x) with respect to x, we get h'(x) = 1 - 2/(x·ln(2)).
For all x > 4 (indeed for all x > 2/ln(2) ≈ 2.89), h'(x) is greater than 0, thus h(x) keeps increasing, and since h(4) = 0,
it is hence proved that h(x) > 0 for all x > 4,
or x^2 < 2^x for all x > 4.
The question is how to solve 1/x + 1/y = 1/N! (N factorial): find the number of pairs (x, y) that satisfy the equation, for large values of N.
I've solved the problem for relatively small values of N (any N! that'll fit into a long). So I know I can solve the problem by getting all the divisors of (N!)^2. But that starts failing when (N!)^2 no longer fits into a long. I also know I can find all the divisors of N! by adding up the exponents of the prime factors of each number multiplied into N!. What I am missing is how to use all the numbers in the factorial to find the x and y values.
EDIT: Not looking for the "answer" just a hint or two.
Problem : To find the count of factors of (N!)^2.
Hints :
1) You don't really need to compute (N!)^2 to find its prime factors.
Why?
Say you find the prime factorization of N! as (p1^k1) x (p2^k2) .... (pi^ki)
where pj's are primes and kj's are exponents.
Now the number of factors of N! is as obvious as
(k1 + 1) x (k2 + 1) x ... x (ki + 1).
2) For (N!)^2, the above expression would be,
(2*k1 + 1) * (2*k2 + 1) * .... * (2*ki + 1)
which is essentially what we are looking for.
For example, let's take N=4, N! = 24 and (N!)^2 = 576;
24 = 2^3 * 3^1;
Hence no of factors = (3+1) * (1+1) = 8, viz {1,2,3,4,6,8,12,24}
For 576 = 2^6 * 3^2, it is (2*3 + 1) * (2*1 + 1) = 21;
3) Basically you need to find the multiplicity of each prime <= N here.
Please correct me if I'm wrong anywhere up to here.
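A brute-force check of the N = 4 example above (plain trial division, nothing clever):

def count_divisors(m):
    return sum(1 for d in range(1, m + 1) if m % d == 0)

print(count_divisors(24))    # 8  divisors of 4!
print(count_divisors(576))   # 21 divisors of (4!)^2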
Here is your hint. Suppose that m = p1^k1 · p2^k2 · ... · pj^kj. Every factor of m will have from 0 to k1 factors of p1, 0 to k2 factors of p2, and so on. Thus there are (1 + k1) · (1 + k2) · ... · (1 + kj) possible divisors.
So you need to figure out the prime factorization of (n!)^2.
Note, this will count, for instance, 1⁄6 = 1⁄8 + 1⁄24 as being a different pair from 1⁄6 = 1⁄24 + 1⁄8. If order does not matter, add 1 and divide by 2. (The divide by 2 is because typically 2 divisors will lead to the same answer, with the add 1 for the exception that the divisor n! leads to a pair that pairs with itself.)
It's more math than programming.
Your equation implies xy = n!(x+y).
Let c = gcd(x,y), so x = cx', y= cy', and gcd(x', y')=1.
Then c^2 x' y'=n! c (x'+y'), so cx'y' = n!(x' + y').
Now, since x' and y' are coprime, x'y' is coprime to x' + y', so x' + y' must divide c.
So c = a(x'+y'), which gives ax'y'=n!.
To solve your problem, you should find all pairs of coprime divisors x', y' of n!; every such pair gives a solution (n!(x'+y')/y', n!(x'+y')/x').
Let F(N) be the number of (x,y) combinations that satisfy your requirements.
F(N+1) = F(N) + #(x,y) that satisfy the condition for N+1 where at least one of them (x or y) is not divisible by N+1.
The intuition here is that for all combinations (x,y) that work for N, (x*(N+1), y*(N+1)) would work for N+1. Also, if (x,y) is a solution for N+1 and both are divisible by N+1, then (x/(N+1), y/(N+1)) is a solution for N.
Now, I am not sure how difficult it is to find #(x,y) that work for (N+1) and at least one of them not divisible by N+1, but should be easier than solving the original problem.
The multiplicity (exponent) of a prime p in N! is given by Legendre's formula:
Exponent of p in N! = [N/p] + [N/p^2] + [N/p^3] + [N/p^4] + ...
where [x] is the floor function, e.g. [1.23] = integer part of 1.23 = 1.
E.g. exponent of 3 in 24! = [24/3] + [24/9] + [24/27] + ... = 8 + 2 + 0 + 0 + ... = 10
Now the whole problem reduces to identifying the primes <= N and finding the exponent of each in N!.
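Putting the hints together, a sketch that counts the divisors of (N!)^2 (and hence the ordered solutions (x, y)) without ever computing N!; the sieve and the function name solutions_count are my own illustrative choices:

def solutions_count(N):
    # Number of ordered pairs (x, y) with 1/x + 1/y = 1/N!, i.e. d((N!)^2). Assumes N >= 2.
    # Sieve of Eratosthenes for primes <= N.
    is_prime = [True] * (N + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(N ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, N + 1, p):
                is_prime[q] = False

    count = 1
    for p in (i for i in range(2, N + 1) if is_prime[i]):
        # Legendre's formula: exponent of p in N! is sum of floor(N / p^k).
        exp, power = 0, p
        while power <= N:
            exp += N // power
            power *= p
        count *= 2 * exp + 1    # the exponent doubles in (N!)^2
    return count

print(solutions_count(4))   # 21, matching the worked example above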