best complexity to evaluate coefficients of polynomial - algorithm

I want to find the coefficients of the degree-n polynomial with roots 0, 1, 2, ..., n-1. Can anybody suggest a good algorithm? I tried using FFT, but it wasn't fast enough.

The simple solution that I would use is to write a function like this:
def poly_with_root_sequence(start, end, gap):
    if end < start + gap:
        return Polynomial([1, -start])   # the linear factor (x - start)
    else:
        p1 = poly_with_root_sequence(start, end, gap*2)
        p2 = poly_with_root_sequence(start+gap, end, gap*2)
        return p1 * p2

answer = poly_with_root_sequence(0, n - 1, 1)   # roots 0, 1, ..., n-1
With a naive algorithm this will take O(n^2) arithmetic operations. However some of the operations will involve very large numbers. (Note that n! has more than n digits for large n.) But we have arranged that very few of the operations will involve very large numbers.
There is still no chance of producing answers as quickly as you want unless you are using a polynomial implementation with a very fast multiplication algorithm.
https://gist.github.com/ksenobojca/dc492206f8a8c7e9c75b155b5bd7a099 advertises itself as an implementation of the FFT algorithm for multiplying polynomials in Python. I can't verify that. But it gives you a shot at going fairly fast.
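
For concreteness, here is a runnable version of this sketch using numpy's Polynomial class as a stand-in for the unspecified Polynomial type (my substitution; numpy stores coefficients in ascending order and works in floating point, so it demonstrates the structure rather than exact big-integer arithmetic):

from numpy.polynomial import Polynomial

def poly_with_root_sequence(start, end, gap):
    if end < start + gap:
        return Polynomial([-start, 1])  # the factor (x - start), ascending order
    p1 = poly_with_root_sequence(start, end, gap * 2)
    p2 = poly_with_root_sequence(start + gap, end, gap * 2)
    return p1 * p2

n = 5
answer = poly_with_root_sequence(0, n - 1, 1)
print(answer.coef)  # coefficients of x(x-1)(x-2)(x-3)(x-4), ascending order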

As answered on Evaluating Polynomial coefficients, you can do it in a simple way:

def poly(lst, x):
    # evaluate a0 + a1*x + a2*x^2 + ... with coefficients in ascending order
    n, tmp = 0, 0
    for a in lst:
        tmp = tmp + (a * (x**n))
        n += 1
    return tmp

print(poly([1, 2, 3], 2))  # 1 + 2*2 + 3*4 = 17
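
As an aside (my addition, not from the linked answer), Horner's rule evaluates the same polynomial without recomputing x**n at every step:

def poly_horner(lst, x):
    # a0 + x*(a1 + x*(a2 + ...)), coefficients in ascending order
    result = 0
    for a in reversed(lst):
        result = result * x + a
    return result

print(poly_horner([1, 2, 3], 2))  # also 17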


How do I prove that this algorithm is O(loglogn)
i <-- 2
while i < n
    i <-- i*i
Well, I believe we should first start with n / 2^k < 1, but that will yield O(log n). Any ideas?
I want to look at this in a simple way: what happens after one iteration, after two iterations, and after k iterations? I think this way I'll be able to understand better how to compute this correctly. What do you think about this approach? I'm new to this, so excuse me.
Let us use the name A for the presented algorithm, and let us further assume that the input variable is n.
Then, strictly speaking, A is not in the runtime complexity class O(log log n). A must be in Ω(n), i.e., in terms of runtime complexity it is at least linear. Why? There is i*i, a multiplication that depends on i, which depends on n. A naive multiplication approach might require quadratic runtime complexity. More sophisticated approaches will reduce the exponent, but not below linear in terms of n.
For the sake of completeness, the comparison < is also a linear operation.
For the purpose of the question, we can assume that multiplication and comparison are done in constant time. Then we can formulate the question: how often do we have to apply the constant-time operations < and * until A terminates for a given n?
Simply speaking, each squaring reduces the remaining effort logarithmically, and applying it iteratively leads to a further logarithmic reduction. How can we show this? Thanks to the simple structure of A, we can transform A into an equation that we can solve directly.
A squares i repeatedly, so after k iterations it has computed 2^(2^k). When is 2^(2^k) = n? To solve this for k, we apply the logarithm (base 2) twice; ignoring the bases, we get k = log log n. The < comparison can be ignored thanks to the O notation.
To answer the very last part of the original question, we can also look at examples. We note the state of i at the end of the loop body for each iteration of the while loop:
1: i = 4 = 2^2 = 2^(2^1)
2: i = 16 = 4*4 = (2^2)*(2^2) = 2^(2^2)
3: i = 256 = 16*16 = (4*4)*(4*4) = (2^2)*(2^2)*(2^2)*(2^2) = 2^(2^3)
4: i = 65536 = 256*256 = 16*16*16*16 = ... = 2^(2^4)
...
k: i = ... = 2^(2^k)
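
A quick empirical check of this pattern (my sketch, not part of the original answer): count the loop iterations for n = 2^e and compare with log2(log2(n)) = log2(e).

import math

def iterations(n):
    # run algorithm A and count how often the loop body executes
    i, k = 2, 0
    while i < n:
        i = i * i
        k += 1
    return k

for e in [4, 16, 256, 65536]:
    n = 2 ** e
    print(e, iterations(n), math.log2(e))  # the last two columns agree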

Fastest algorithm for computing the determinant of a matrix?

For a research paper, I have been assigned to research the fastest algorithm for computing the determinant of a matrix.
I already know about LU decomposition and Bareiss algorithm which both run in O(n^3), but after doing some digging, it seems there are some algorithms that run somewhere between n^2 and n^3.
This source (see page 113-114) and this source (see page 198) say that an algorithm exists that runs in O(n^2.376) because it is based on the Coppersmith-Winograd's algorithm for multiplying matrices. However, I have not been able to find any details on such an algorithm.
My questions are:
What is the fastest existing (non-theoretical) algorithm for computing the determinant of a matrix?
Where can I find information about this fastest algorithm?
Thanks so much.
I believe the fastest algorithm in practice (and the one commonly used) is Strassen's algorithm. You can find an explanation on Wikipedia along with sample C code.
Algorithms based on the Coppersmith-Winograd multiplication algorithm are too complex to be practical, even though they have the best asymptotic complexity so far.
I know this is not a direct answer for my question, but for the purposes of completing my research paper, it is enough.
I just ended up asking my professor and I will summarize what he said:
Summary:
The fastest matrix-multiplication algorithms (e.g., Coppersmith-Winograd and more recent improvements) yield determinant algorithms that use O(n^~2.376) arithmetic operations, but they rely on heavy mathematical tools and are often impractical.
LU decomposition and Bareiss do use O(n^3) operations, but are more practical.
In short, even though LU decomposition and Bareiss are not as fast as the most efficient algorithms, they are more practical, and I should focus my research paper on these two.
Thanks for all who commented and helped!
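
For reference, the LU-based approach is only a few lines in Python as well (a minimal sketch assuming numpy and scipy are available; this is my illustration, not from the answers above):

import numpy as np
from scipy.linalg import lu

def det_via_lu(A):
    # scipy returns A = P @ L @ U with L unit lower triangular, so
    # det(A) = det(P) * prod(diag(U)), where det(P) is exactly +1 or -1.
    P, L, U = lu(A)
    return np.linalg.det(P) * np.prod(np.diag(U))

A = np.random.randn(5, 5)
print(det_via_lu(A), np.linalg.det(A))  # agree up to rounding error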
See the following Matlab test script, which computes determinants of arbitrary square matrices (a comparison to Matlab's built-in function is also included):

nMin = 2;       % Start with 2-by-2 matrices
nMax = 50;      % Quit with 50-by-50 matrices
nTests = 10000;
detsOfL = NaN*zeros(nTests, nMax - nMin + 1);
detsOfA = NaN*zeros(nTests, nMax - nMin + 1);
disp(' ');
for n = nMin:nMax
    tStart = tic;
    for j = 1:nTests
        A = randn(n, n);
        detA1 = det(A); % Matlab's built-in function
        if n == 1
            detsOfL(j, 1) = 1;
            detsOfA(j, 1) = A;
            continue; % Trivial case => Quick return (unreachable with nMin = 2)
        end
        [L, U, P] = lu(A);
        PtL = P'*L;
        realEigenvaluesOfPtL = real(eig(PtL));
        if prod(realEigenvaluesOfPtL) < 0 % det(P'*L) is always +1 or -1
            detL = -1;
        else
            detL = 1;
        end
        detU = prod(diag(U));
        detA2 = detL * detU; % Determinant of A via the LU decomposition
        % Compare with a relative tolerance; an exact floating-point test
        % (detA1 ~= detA2) would raise false alarms due to rounding.
        if abs(detA1 - detA2) > 1e-10 * max(1, abs(detA1))
            error(['Determinant computation failed at n = ' num2str(n) ', j = ' num2str(j)]);
        end
        detsOfL(j, n - nMin + 1) = detL;
        detsOfA(j, n - nMin + 1) = detA2;
    end
    tElapsed = toc(tStart);
    fprintf('Determinant computation was successful for n = %d! Elapsed time was %.3f seconds\n', n, tElapsed);
end
disp(' ');

I have a new algorithm to find factors or primes in linear time - need verification for this

I have developed an algorithm to find the factors of a given number; it therefore also tells you whether the given number is prime. I feel this is the fastest algorithm for finding factors or prime numbers.
This algorithm determines whether a given number is prime within a time frame of 5*N (where N is the input number itself), so I hope I can call this a linear-time algorithm.
How can I verify whether this is the fastest algorithm available (faster than GNFS and the other known methods)? Can anybody help with this?
The algorithm is given below.
Input: a number N (whose factors are to be found)
Output: two factors of N. If one of the factors found is 1, it can be concluded that N is prime.

Integer N, mL, mR, r;
Integer temp1; // used for temporary data storage

mR = mL = square root of (N);

/* Check if perfect square */
temp1 = mL * mR;
if temp1 equals N then
{
    r = 0; // answer is found
    End;
}

mR = N/mL; // keep the value of mL less than mR
r = N%mL;

while r not equals 0 do
{
    mL = mL - 1;
    r = r + mR;
    temp1 = r/mL;
    mR = mR + temp1;
    r = r%mL;
}
End; // mL and mR hold the answer
Please provide your comments. Don't hesitate to contact me for more information.
Thanks,
Harish
http://randomoneness.blogspot.com/2011/09/algorithm-to-find-factors-or-primes-in.html
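
For anyone who wants to experiment, here is a direct Python transcription of the pseudocode (my sketch, not the OP's code). The loop maintains the invariant N == mL*mR + r with 0 <= r < mL, decrementing mL until r reaches 0:

from math import isqrt

def factor(N):
    mL = mR = isqrt(N)
    if mL * mR == N:
        return mL, mR        # perfect square
    mR = N // mL             # keep mL less than mR
    r = N % mL
    while r != 0:
        mL -= 1
        r += mR
        mR += r // mL
        r %= mL
    return mL, mR            # mL == 1 means N is prime

print(factor(15))  # (3, 5)
print(factor(13))  # (1, 13), so 13 is prime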
"Linear time" means time proportional to the length of the input data: the number of bits in the number you're trying to factorize, in this case. Your algorithm does not run in linear time, or anything close to it, and I'm afraid it's much slower than many existing factoring algorithms. (Including, e.g., GNFS.)
The size of the input in this case is not n, but the number of bits in n, so the running time of your algorithm is exponential in the size of the input. This is known as pseudo-polynomial time.
I haven't looked closely at your algorithm, but primality tests are usually faster than O(n) (where n is the input number). Take for example this simple one:

from math import sqrt

def isprime(n):
    # trial division by every candidate factor up to and including sqrt(n)
    for f in range(2, int(sqrt(n)) + 1):
        if n % f == 0:
            return "not prime"
    return "prime"

Here it is determined in O(sqrt(n)) steps whether n is prime or not, simply by checking all possible factors up to sqrt(n). (Note the + 1 in the range, so that perfect squares such as 9 are classified correctly.)

Better ways to implement a modulo operation (algorithm question)

I've been trying to implement a modular exponentiator recently. I'm writing the code in VHDL, but I'm looking for advice of a more algorithmic nature. The main component of the modular exponentiator is a modular multiplier which I also have to implement myself. I haven't had any problems with the multiplication algorithm- it's just adding and shifting and I've done a good job of figuring out what all of my variables mean so that I can multiply in a pretty reasonable amount of time.
The problem that I'm having is with implementing the modulus operation in the multiplier. I know that performing repeated subtractions will work, but it will also be slow. I found out that I could shift the modulus to effectively subtract large multiples of the modulus but I think there might still be better ways to do this. The algorithm that I'm using works something like this (weird pseudocode follows):
result, modulus : integer (n bits) (previously defined)
shiftcount : integer (initialized to zero)

while( (modulus < result) and (modulus(n-1) != 1) ){
    modulus = modulus << 1
    shiftcount++
}
for(i = shiftcount; i >= 0; i--){
    if(modulus < result){ result = result - modulus }
    if(i != 0){ modulus = modulus >> 1 }
}
So...is this a good algorithm, or at least a good place to start? Wikipedia doesn't really discuss algorithms for implementing the modulo operation, and whenever I try to search elsewhere I find really interesting but incredibly complicated (and often unrelated) research papers and publications. If there's an obvious way to implement this that I'm not seeing, I'd really appreciate some feedback.
I'm not sure what you're calculating there, to be honest. You talk about a modulo operation, but usually a modulo operation is between two numbers a and b, and its result is the remainder of dividing a by b. Where are the a and b in your pseudocode?
Anyway, maybe this'll help: a mod b = a - floor(a / b) * b.
I don't know if this is faster or not; it depends on whether you can do division and multiplication faster than a lot of subtractions.
Another way to speed up the subtraction approach is to use binary search. If you want a mod b, you need to subtract b from a until a is smaller than b. So basically you need to find k such that:
a - k*b < b, k is min
One way to find this k is a linear search:
k = 0;
while ( a - k*b >= b )
    ++k;
return a - k*b;
But you can also binary-search for it (I only ran a few tests, but it worked on all of them):
k = 0;
left = 0, right = a;
while ( left < right )
{
    m = (left + right) / 2;
    if ( a - m*b >= b )
        left = m + 1;
    else
        right = m;
}
return a - left*b;
I'm guessing the binary search solution will be the fastest when dealing with big numbers.
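
A runnable Python version of that binary search (my transcription of the pseudocode above):

def mod_binary_search(a, b):
    # find the smallest k with a - k*b < b, then return a - k*b
    left, right = 0, a
    while left < right:
        m = (left + right) // 2
        if a - m * b >= b:
            left = m + 1
        else:
            right = m
    return a - left * b

print(mod_binary_search(100, 7), 100 % 7)  # both 2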
If you want to calculate a mod b and only a is a big number (you can store b on a primitive data type), you can do it even faster:
for each digit p of a do
    mod = (mod * 10 + p) % b
return mod
This works because we can write a as a_n*10^n + a_(n-1)*10^(n-1) + ... + a_0*10^0 = (((a_n * 10 + a_(n-1)) * 10 + a_(n-2)) * 10 + ...
I think the binary search is what you're looking for though.
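
The digit-by-digit method above is also easy to run in Python (my sketch), with a arriving as a decimal string and b small enough for a machine word:

def mod_big(a_str, b):
    # the running value stays below 10*b, so it never grows large
    mod = 0
    for ch in a_str:
        mod = (mod * 10 + int(ch)) % b
    return mod

print(mod_big("123456789012345678901234567890", 97))  # equals int(...) % 97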
There are many ways to do it in O(log n) time for n bits; you can do it with multiplication and you don't have to iterate 1 bit at a time. For example,
a mod b = a - floor((a * r)/2^n) * b
where
r = 2^n / b
is precomputed, because typically you're using the same b many times. If not, use the standard quadratically converging iteration for the reciprocal (Newton's method: iterate x <- 2x - bx^2 in fixed point).
Choose n according to the range you need the result (for many algorithms like modulo exponentiation it doesn't have to be 0..b).
(Many decades ago I thought I saw a trick to avoid 2 multiplications in a row... Update: I think it's Montgomery Multiplication (see REDC algorithm). I take it back, REDC does the same work as the simpler algorithm above. Not sure why REDC was ever invented... Maybe slightly lower latency due to using the low-order result into the chained multiplication, instead of the higher-order result?)
Of course if you have a lot of memory, you can just precompute all the 2^n mod b partial sums for n = log2(b)..log2(a). Many software implementations do this.
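
Here is a small Python sketch of the precomputed-reciprocal idea above (essentially Barrett reduction; the names and the correction loop are my additions):

def barrett_mod(a, b, n):
    # r approximates 2^n / b, so q = (a*r) >> n approximates floor(a / b)
    r = (1 << n) // b      # precomputed once per modulus b
    q = (a * r) >> n
    t = a - q * b
    while t >= b:          # q slightly underestimates, so correct upward
        t -= b
    return t

print(barrett_mod(100, 7, 10), 100 % 7)  # both print 2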
If you're using shift-and-add for the multiplication (which is by no means the fastest way) you can do the modulo operation after each addition step. If the sum is greater than the modulus you then subtract the modulus. If you can predict the overflow, you can do the addition and subtraction at the same time. Doing the modulo at each step will also reduce the overall size of your multiplier (same length as input rather than double).
The shifting of the modulus you're doing is getting you most of the way towards a full division algorithm (modulo is just taking the remainder).
EDIT: Here is my implementation in Python:

def mod_mul(a, b, m):
    # shift-and-add multiplication, reducing modulo m after every step
    result = 0
    a = a % m
    b = b % m
    while b > 0:
        if (b & 1) != 0:        # this bit of b contributes the current a
            result += a
            if result >= m: result -= m
        a = a << 1              # double a for the next bit of b
        if a >= m: a -= m
        b = b >> 1
    return result
This is just modular multiplication (result = a*b mod m). The modulo operations at the top are not needed, but serve as a reminder that the algorithm assumes a and b are less than m.
Of course for modular exponentiation you'll have an outer loop that does this entire operation at each step doing either squaring or multiplication. But I think you knew that.
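
For completeness, that outer loop is ordinary square-and-multiply; here is a minimal sketch built on mod_mul above (my addition, not part of the answer):

def mod_pow(base, exp, m):
    # right-to-left binary exponentiation using the modular multiplier
    result = 1 % m
    base = base % m
    while exp > 0:
        if exp & 1:
            result = mod_mul(result, base, m)
        base = mod_mul(base, base, m)
        exp >>= 1
    return result

print(mod_pow(5, 117, 19), pow(5, 117, 19))  # should match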
For modulo itself, I'm not sure. For modulo as part of the larger modular exponential operation, did you look up Montgomery multiplication as mentioned in the wikipedia page on modular exponentiation? It's been a while since I've looked into this type of algorithm, but from what I recall, it's commonly used in fast modular exponentiation.
edit: for what it's worth, your modulo algorithm seems ok at first glance. You're basically doing division which is a repeated subtraction algorithm.
That test (modulus(n-1) != 1) (a bit test?) seems redundant combined with (modulus < result).
Designing for a hardware implementation, I would be conscious that smaller/greater-than tests imply more logic (a subtraction) than bitwise operations and branching on zero.
If we can do bitwise tests easily, this could be quick:

m = msb_of(modulus)
while( result > 0 )
{
    r = msb_of(result)            // countdown from prev msb onto result
    shift = r - m                 // countdown from r onto modulus, or
                                  // unroll the small subtraction
    takeoff = (modulus << shift)  // or integrate this into the count of shift
    result = result - takeoff;    // necessary subtraction
    if(shift != 0 && result < 0)
    { result = result + (takeoff >> 1); }
} // endwhile

if(result == 0) { return result }
else { return result + takeoff }

(code untested, may contain gotchas)
result is repeatedly decremented by the modulus, shifted so that their most significant bits line up.
After each subtraction, result has a ~50/50 chance of losing more than 1 msb, and a ~50/50 chance of going negative; adding back half of what was subtracted always makes it positive again, so it is put back into the positive range whenever shift was not 0.
The working loop exits when result has underrun and shift was 0.

Calculate discrete logarithm

Given positive integers b, c, m where (b < m) is True, the task is to find a positive integer e such that
(b**e % m == c) is True
where ** is exponentiation (as in Ruby and Python; ^ in some other languages) and % is the modulo operation. What is the most effective algorithm (with the lowest big-O complexity) to solve it?
Example:
Given b=5; c=8; m=13 this algorithm must find e=7 because 5**7%13 = 8
From the % operator I'm assuming that you are working with integers.
You are trying to solve the Discrete Logarithm problem. A reasonable algorithm is Baby step, giant step, although there are many others, none of which are particularly fast.
The difficulty of finding a fast solution to the discrete logarithm problem is a fundamental part of some popular cryptographic algorithms, so if you find a better solution than any of those on Wikipedia please let me know!
This isn't a simple problem at all. It is called calculating the discrete logarithm, and it is the inverse operation to modular exponentiation.
There is no efficient algorithm known. That is, if N denotes the number of bits in m, all known algorithms run in O(2^(N^C)) where C>0.
Python 3 Solution:
Thankfully, SymPy has implemented this for you!
SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.
This is the documentation on the discrete_log function. Use this to import it:
from sympy.ntheory import discrete_log
Their example computes log_7(15) (mod 41):
>>> discrete_log(41, 15, 7)
3
Because of the (state-of-the-art, mind you) algorithms it employs, you'll get O(sqrt(n)) running time on most inputs you try. It's considerably faster when your prime modulus has the property that p - 1 factors into a lot of small primes.
Consider a prime on the order of 100 bits (~2^100). With sqrt(n) complexity, that's still 2^50 iterations. That being said, don't reinvent the wheel: this does a pretty good job. I might also add that it was almost 4x as memory-efficient as Mathematica's MultiplicativeOrder function when I ran it with large-ish inputs (44 MiB vs. 173 MiB).
Since a duplicate of this question was asked under the Python tag, here is a Python implementation of baby-step giant-step which, as @MarkBeyers points out, is a reasonable approach (as long as the modulus isn't too large):

import math

def baby_steps_giant_steps(a, b, p, N=None):
    if not N: N = 1 + int(math.sqrt(p))
    # build the table of baby steps: a^r mod p -> r
    baby_steps = {}
    baby_step = 1
    for r in range(N + 1):
        baby_steps[baby_step] = r
        baby_step = baby_step * a % p
    # now take the giant steps, multiplying by a^(-N) mod p each time
    giant_stride = pow(a, (p - 2) * N, p)   # a^(-N) via Fermat's little theorem
    giant_step = b
    for q in range(N + 1):
        if giant_step in baby_steps:
            return q * N + baby_steps[giant_step]
        else:
            giant_step = giant_step * giant_stride % p
    return "No Match"
In the above implementation, an explicit N can be passed to fish for a small exponent even if p is cryptographically large. It will find the exponent as long as the exponent is smaller than N**2. When N is omitted, the exponent will always be found, but not necessarily in your lifetime or with your machine's memory if p is too large.
For example, if
p = 70606432933607
a = 100001
b = 54696545758787
then 'pow(a,b,p)' evaluates to 67385023448517
and
>>> baby_steps_giant_steps(a,67385023448517,p)
54696545758787
This took about 5 seconds on my machine. For the exponent and the modulus of those sizes, I estimate (based on timing experiments) that brute force would have taken several months.
Discrete logarithm is a hard problem
Computing discrete logarithms is believed to be difficult. No efficient general method for computing discrete logarithms on conventional computers is known.
I will add here a simple brute-force algorithm which tries every possible value from 1 to m and outputs a solution if one is found. Note that there may be more than one solution to the problem, or no solution at all. This algorithm will return the smallest possible value, or -1 if none exists.

def bruteLog(b, c, m):
    # try e = 1, 2, ..., m, maintaining s = b**e % m incrementally
    s = 1
    for i in range(m):
        s = (s * b) % m
        if s == c:
            return i + 1
    return -1

print(bruteLog(5, 8, 13))
and here you can see that 3 is in fact a solution:
print(5**3 % 13)
There is a better algorithm, but because it is often asked to be implemented in programming competitions, I will just give you a link to the explanation.
As said, the general problem is hard. However, a practical way to find e, if (and only if) you know e is going to be small (like in your example), is just to try each e starting from 1.
By the way, e == 3 is the first solution to your example, and you can obviously find it that way in 3 steps. Compare that to solving the non-discrete version by naively looking for integer solutions, i.e.
e = log(c + n*m) / log(b), where n is a non-negative integer
which finds e == 3 in 9 steps.
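
A small sketch of that non-discrete search in Python (my code; float precision is only adequate for small inputs like this one):

from math import log

def log_search(b, c, m, max_n=10**6):
    # try e = log(c + n*m) / log(b) for n = 0, 1, 2, ... and return the
    # first value that rounds to an exact integer solution
    for n in range(max_n):
        e = round(log(c + n * m) / log(b))
        if b ** e == c + n * m:
            return e
    return None

print(log_search(5, 8, 13))  # 3, found at n = 9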
