Given a positive integer n, find the largest integer a such that a*a divides n.
If you know the factorization of n, it's fairly trivial. The question is: can it be done asymptotically faster than factoring n with the best known method? Is there any polynomial-time algorithm (polynomial in the bit-length of n)?
For example, with Pollard's rho algorithm for factorization it would run in O(n^(1/4)), but I'm looking for something better.
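To illustrate the "trivial once you have the factorization" remark: if n = p1^e1 * ... * pk^ek, the largest a with a*a dividing n is the product of pi^floor(ei/2). A minimal Python sketch, using sympy's factorint as a stand-in factorizer (which, of course, is not polynomial-time itself):

```python
from sympy import factorint  # stand-in factorizer; any factorization method works here

def largest_square_root_divisor(n):
    """Largest a such that a*a divides n, given n's prime factorization.
    For n = p1^e1 * ... * pk^ek, take a = prod(pi^(ei // 2))."""
    a = 1
    for p, e in factorint(n).items():
        a *= p ** (e // 2)
    return a

# Example: 360 = 2^3 * 3^2 * 5, so the largest square divisor is 36 and a = 6.
assert largest_square_root_divisor(360) == 6
```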
Related
The security of RSA hinges upon a simple assumption:
Given N, e, and y = (x ^e) mod N, it is computationally intractable to
determine x.
This assumption is quite plausible. How might Eve try to guess x?
She could experiment with all possible values of x, each time checking whether x^e is equal to y mod N, but this would take exponential time. Or she could try to factor N to retrieve p and q, and then figure out d by inverting e modulo (p-1)(q-1), but we believe factoring to be hard. Intractability is normally a source of dismay; the insight of RSA lies in using it to advantage.
My question on the above text:
How do we get exponential time for checking each value of x in the above context?
https://en.wikipedia.org/wiki/Time_complexity
...the time complexity is generally expressed as a function of the
size of the input
The size of the input of this task is proportional to b, the number of bits in N. Thus iterating through all possible values of x has time complexity O(2^b), which is exponential.
Algorithms whose running time is polynomial in the numeric value of the input are called pseudo-polynomial algorithms. For example, it is well known that the integer factorization problem has no known polynomial-time algorithm. But we can simply iterate from 2 to sqrt(N) and find all prime factors of N in O(sqrt(N)) time. This algorithm has polynomial complexity in terms of N, but the length of the input of this problem is not N, it is approximately log(N). As a result, this iteration is only a pseudo-polynomial solution.
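For concreteness, here is what that pseudo-polynomial iteration looks like; it takes O(sqrt(N)) steps, which is O(2^(b/2)) in the bit-length b of N:

```python
def trial_division(N):
    """Factor N by trying every divisor up to sqrt(N).
    Polynomial in the value of N, exponential in the bit-length of N."""
    factors = []
    d = 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:          # whatever is left is itself a prime factor
        factors.append(N)
    return factors

# trial_division(360) -> [2, 2, 2, 3, 3, 5]
```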
I am wondering if there are fast algorithms to do polynomial multiplication modulo x^(N+1), i.e. given two polynomials of degree N,
a(x) = a_0 + a_1*x + ... + a_N*x^N and b(x) = b_0 + b_1*x + ... + b_N*x^N,
I am interested in their product but only up to degree N, i.e.
c_k = a_0*b_k + a_1*b_(k-1) + ... + a_k*b_0 for k = 0, 1, ..., N.
Note that k only goes to N (not to 2N as it would in normal polynomial multiplication); all higher-degree terms of the product are simply discarded.
I am aware that there are fast algorithms to do polynomial multiplication, but I wonder if they can be applied to the restricted polynomial multiplication I am interested in.
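One way to see that the fast methods do apply, at least in the simplest sense: perform a fast full multiplication and throw away the terms above degree N; the truncation itself costs nothing. A rough NumPy sketch (FFT-based, so the coefficients come back as floats and would need rounding for exact integer input):

```python
import numpy as np

def truncated_multiply(a, b, N):
    """Product of two coefficient arrays (index i = coefficient of x^i),
    keeping only c_0 .. c_N of the result."""
    size = len(a) + len(b) - 1                 # length of the untruncated product
    fa = np.fft.rfft(np.asarray(a, dtype=float), size)
    fb = np.fft.rfft(np.asarray(b, dtype=float), size)
    full = np.fft.irfft(fa * fb, size)         # full product via FFT
    return full[:N + 1]                        # discard degrees above N

# Example: (1 + x)^2 truncated at degree 1 gives 1 + 2x.
# truncated_multiply([1, 1], [1, 1], 1) -> approximately [1., 2.]
```

Whether the truncated product can be computed with strictly fewer operations than a full fast multiplication is a separate question; this only shows the truncation adds no extra cost.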
When dividing two long polynomials of degree n - 1, the remainder's coefficients can obviously be computed in O(n * n) time. I want to know if there is any faster algorithm to obtain the remainder, e.g. O(n log n).
P.S. (updated for clarity): if the polynomials p, q have different degrees, with deg(p) > deg(q), is it possible to find the remainder of p / q faster than O((deg(p) - deg(q)) * deg(p)) without losing accuracy?
If the polynomials are of the same degree, you can determine with one operation the number of times one goes into the other (if the coefficients are integers, this will in general be a rational number). Then you can multiply that fraction times one polynomial and subtract it from the other. This takes O(n) field operations, which is optimal since the answer is of length Θ(n) in the average and worst case.
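A small sketch of that single reduction step, assuming integer coefficient lists stored lowest degree first (the function name is illustrative):

```python
from fractions import Fraction

def subtract_leading_multiple(p, q):
    """p and q have the same degree n and q[-1] != 0.
    c = p[-1] / q[-1] is 'how many times q goes into p' (a rational number
    in general); subtracting c*q from p cancels the leading term and leaves
    the remainder of p divided by q, in O(n) field operations."""
    c = Fraction(p[-1], q[-1])
    return [Fraction(pi) - c * qi for pi, qi in zip(p, q)]

# Example: p = 2 + 4x + 6x^2, q = 1 + x + 2x^2.
# subtract_leading_multiple([2, 4, 6], [1, 1, 2]) -> [-1, 1, 0], i.e. remainder x - 1.
```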
Yes, there is a faster algorithm, though it pays off only for very large degrees. First reverse both polynomials, then use fast (Newton-based) Taylor series division to compute the reversed quotient, reverse the quotient back, and compute the remainder using a fast multiplication.
The first step costs as much as a multiplication of polynomials with the degree of the quotient; the second, as much as a multiplication of polynomials with the degree of the divisor.
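A rough sketch of that reverse-and-invert approach, assuming coefficient arrays stored lowest degree first. For brevity the multiplications here use NumPy's plain convolution; to actually get the fast runtime, each np.convolve would be replaced by an FFT-based (or otherwise fast) multiplication:

```python
import numpy as np

def series_inverse(f, k):
    """Inverse of the power series f (f[0] != 0) modulo x^k, via the Newton
    iteration g <- g * (2 - f*g), doubling the precision each step."""
    g = np.array([1.0 / f[0]])
    prec = 1
    while prec < k:
        prec = min(2 * prec, k)
        e = -np.convolve(f[:prec], g)[:prec]   # -f*g  (mod x^prec)
        e[0] += 2.0                            # 2 - f*g
        g = np.convolve(g, e)[:prec]
    return g

def poly_divmod(p, q):
    """Quotient and remainder of p / q (lowest-degree-first coefficient arrays),
    using the reversal trick:
    rev(quotient) = rev(p) * inverse(rev(q))  mod x^(deg p - deg q + 1)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m, n = len(p) - 1, len(q) - 1
    if m < n:
        return np.zeros(1), p
    k = m - n + 1                              # number of quotient coefficients
    inv_revq = series_inverse(q[::-1], k)
    quot = np.convolve(p[::-1][:k], inv_revq)[:k][::-1]
    rem = (p - np.convolve(q, quot))[:n] if n > 0 else np.zeros(1)
    return quot, rem

# Example: (x^2 + 3x + 2) / (x + 1) -> quotient x + 2, remainder 0.
# poly_divmod([2, 3, 1], [1, 1]) -> (array([2., 1.]), array([0.]))
```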
Strassen's algorithm is polynomially faster than n-cubed regular matrix multiplication. What does "polynomially faster" mean?
Your question has to do with the theoretical concept of "complexity".
As an example, regular matrix multiplication is said to have complexity O(n^3). This means that as the dimension n grows, the time it takes to run the algorithm, T(n), is guaranteed not to exceed the function n^3 (the cubic function) up to a positive constant factor.
Formally, this means:
There exists a positive threshold n_t such that for every n >= n_t, T(n) <= c * n^3, where c > 0 is some constant.
In your case, the Strassen algorithm has been shown to have complexity O(n^(log2 7)). Since log2 7 ≈ 2.81 < 3, it follows that the Strassen algorithm is guaranteed to run faster than the classical multiplication algorithm as n grows.
As a side-note, keep in mind that for very small values of n (i.e. when n < n_t above) this statement might not hold.
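For reference, a compact, unoptimized sketch of Strassen's recursion; the power-of-two size restriction and the cutoff value are simplifying assumptions of this sketch. Seven recursive multiplications per level is what gives T(n) = 7·T(n/2) + O(n^2) = O(n^(log2 7)):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen matrix multiplication for n x n matrices, n a power of 2.
    Below the cutoff it falls back to the classical product, which is
    typically faster for small n (the 'small n' caveat above)."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven half-size products instead of eight:
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n), dtype=M1.dtype)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```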
Algorithms with complexity O(n^3) and O(n^2) are both polynomial, but the second is polynomially faster.
In this case, I assume it means that both algorithms have a polynomial running time, but the Strassen algorithm is faster.
That's only because the standard algorithm (even though it is cubic) is polynomial.
Anyway, I don't think the term "polynomially faster" is a standard term.
So I'm computing the Fibonacci numbers using Binet's formula with the GNU MP library. I'm trying to work out the asymptotic runtime of the algorithm.
For Fib(n) I'm setting the variables to n bits of precision; thus I believe multiplying two numbers is n Log(n). The exponentiation is, I believe, n Log(n) multiplications, so I believe I have n Log(n) Log(n Log(n)). Is this correct, both in the assumptions (the cost of multiplying floating-point numbers and the number of multiplications in exponentiation with an integer exponent) and in the conclusion?
If my precision is high and I use precision g(n), then I think this reduces to g(n) Log(g(n)); however, I think g(n) should be g(n) = n Log(phi) + 1, which shouldn't have a real impact on the asymptotics.
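For reference, here is roughly the setup being described, sketched with mpmath rather than GNU MP directly; the precision of about n*log2(phi) bits plus a few guard bits is my assumption, matching the g(n) above:

```python
import mpmath

def fib_binet(n):
    """Fib(n) via Binet's formula at roughly n*log2(phi) bits of precision."""
    mpmath.mp.prec = int(n * 0.695) + 16       # log2(phi) ~ 0.694, plus guard bits
    sqrt5 = mpmath.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return int(mpmath.nint((phi**n - psi**n) / sqrt5))

# fib_binet(10) -> 55, fib_binet(100) -> 354224848179261915075
```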
I don't agree with your evaluation.
The cost of long multiplies depends on the algorithm in use. It can be O(n^1.585) [Karatsuba], O(n Log(n) Log(Log(n))) [Schönhage–Strassen], or others.
The cost of exponentiation can be made O(Log(n)) multiplies for exponent n.
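That is the standard square-and-multiply idea, sketched here generically; the `multiply` argument is only to emphasize that it works for any associative multiplication, including big-float multiplication as in the question:

```python
def power(x, n, multiply=lambda a, b: a * b, one=1):
    """Compute x**n with O(log n) multiplications (exponentiation by squaring)."""
    result = one
    while n > 0:
        if n & 1:                    # odd exponent: fold the current square into the result
            result = multiply(result, x)
        x = multiply(x, x)           # square
        n >>= 1
    return result

# power(2, 10) -> 1024, using ~log2(10) squarings plus a couple of extra
# multiplies instead of 9 successive multiplications.
```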