The task is to find the number of distinct pairs {x, y} that satisfy the equation 1/x + 1/y = 1/n, where n is the input given by the user. Different orderings of x and y do not count as a new pair.
For example, n = 2 means 1/n = 1/2. 1/2 can be formed by two pairs {x, y}, which are {6, 3} and {4, 4}.
The value n = 3 means 1/n = 1/3. 1/3 can be formed by two pairs {x, y}, which are {4, 12} and {6, 6}.
The equation 1/x + 1/y = 1/n can be rearranged to y = nx/(x - n); whenever x and y in this rearranged equation are both whole numbers, they count as a pair {x, y}. Using this formula, I iterate n times, starting from x = n + 1 and increasing x by 1 per iteration, and test whether nx % (x - n) == 0; if that is true, x and y form a new distinct pair.
I found the limit of n iterations by manually computing answers and spotting the pattern of repetitions. x starts at n + 1 because otherwise division by zero happens or y comes out negative. The modulo operation checks that the resulting y is whole.
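For concreteness, here is a minimal Python sketch of the iteration described above (the function name is mine):

def count_pairs(n):
    count = 0
    for x in range(n + 1, 2 * n + 1):   # n iterations: x = n+1, ..., 2n
        if (n * x) % (x - n) == 0:      # y = n*x / (x - n) is a whole number
            count += 1
    return count

print(count_pairs(2))  # 2: the pairs {3, 6} and {4, 4}
print(count_pairs(3))  # 2: the pairs {4, 12} and {6, 6}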
Questions:
Is there a mathematical explanation for why the iteration can be limited to n times? I only found this limit by manual computation and pattern-finding: iterating n times is enough to find the number of distinct pairs.
Is there another way to find the number of distinct pairs {x, y}, other than my method above of finding the VALUES of the distinct pairs and then counting them? Is there a quick mathematical formula I'm not aware of?
For reference, my code can be seen here: https://gist.github.com/TakeNoteIAmHere/596eaa2ccf5815fe9bbc20172dce7a63
Assuming that x,y,n > 0 we have
Observation 1: both x and y must be greater than n.
Observation 2: since (x,y) and (y,x) do not count as distinct, we can assume that x <= y.
Observation 3: x = y = 2n is always a solution and if x > 2n then y < x (thus no new solution)
This means the possible values for x are from n+1 up to 2n.
A little algebra converts the equation
1/x + 1/y = 1/n
into
(x-n)*(y-n) = n*n
(multiply both sides by n*x*y to get n*y + n*x = x*y, then rearrange and add n*n to both sides so the left side factors).
Since we want a solution in integers, we seek integers f, g so that
f*g = n*n
and then the solution for x and y is
x = f+n, y = g+n
I think the easiest way to proceed is to factorise n, ie write
n = (P[1]^k[1]) * ... * (P[m]^k[m])
where the Ps are distinct primes, the ks positive integers and ^ denotes exponentiation.
Then the possibilities for f and g are
f = (P[1]^a[1]) * ... * (P[m]^a[m])
g = (P[1]^b[1]) * ... * (P[m]^b[m])
where the as and bs satisfy, for each i=1..m
0<=a[i]<=2*k[i]
b[i] = 2*k[i] - a[i]
If we just wanted to count the number of solutions, we would just need to count the number of fs, ie the number of distinct sequences a[]. But this is just
Nall = (2*k[1]+1) * ... * (2*k[m]+1)
However we want to count the solution (f,g) and (g,f) as being the same. There is only one case where f = g (because the factorisation into primes is unique, we can only have f=g if the a[] equal the b[]) and so the number we seek is
1 + (Nall-1)/2
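For illustration, here is a minimal Python sketch of this count (the function name is mine; factorisation by trial division):

def count_solutions(n):
    nall = 1                      # number of divisors of n*n
    m, d = n, 2
    while d * d <= m:
        k = 0
        while m % d == 0:         # extract the exponent k of the prime d
            m //= d
            k += 1
        nall *= 2 * k + 1
        d += 1
    if m > 1:                     # one remaining prime factor, exponent 1
        nall *= 3
    return 1 + (nall - 1) // 2    # fold (f, g) together with (g, f)

print(count_solutions(2))  # 2
print(count_solutions(3))  # 2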
Consider a binary sequence b of length N. Initially, all the bits are set to 0. We define a flip operation with 2 arguments, flip(L,R), such that:
All bits with indices between L and R are "flipped", meaning a bit with value 1 becomes a bit with value 0 and vice-versa. More exactly, for all i in range [L,R]: b[i] = !b[i].
Nothing happens to bits outside the specified range.
You are asked to determine the number of possible different sequences that can be obtained using exactly K flip operations modulo an arbitrary given number, let's call it MOD.
More specifically, each test contains on the first line a number T, the number of queries to be given. Then there are T queries, each one being of the form N, K, MOD with the meaning from above.
1 ≤ N, K ≤ 300 000
T ≤ 250
2 ≤ MOD ≤ 1 000 000 007
Sum of all N-s in a test is ≤ 600 000
time limit: 2 seconds
memory limit: 65536 kbytes
Example :
Input :
1
2 1 1000
Output :
3
Explanation :
There is a single query. The initial sequence is 00. We can do the following operations :
flip(1,1) ⇒ 10
flip(2,2) ⇒ 01
flip(1,2) ⇒ 11
So there are 3 possible sequences that can be generated using exactly 1 flip.
Some quick observations that I've made, although I'm not sure they are totally correct :
If K is big enough, that is, if we have a big enough number of flips at our disposal, we should be able to obtain 2^n sequences.
If K=1, then the result we're looking for is N(N+1)/2. It's also C(n,1)+C(n,2), where C is the binomial coefficient.
Currently trying a brute force approach to see if I can spot a rule of some kind. I think this is a sum of some binomial coefficients, but I'm not sure.
I've also come across a somewhat simpler variant of this problem, where the flip operation only flips a single specified bit. In that case, the result is
C(n,k)+C(n,k-2)+C(n,k-4)+...+C(n,(1 or 0)). Of course, there's the special case where k > n, but it's not a huge difference. Anyway, it's pretty easy to understand why that happens. I guess it's worth noting.
Here are a few ideas:
We may assume that no flip operation occurs twice (otherwise, we can assume that it did not happen). It does affect the number of operations, but I'll talk about it later.
We may assume that no two segments intersect. Indeed, if L1 < L2 < R1 < R2, we can just do the (L1, L2 - 1) and (R1 + 1, R2) flips instead. The case when one segment is inside the other is handled similarly.
We may also assume that no two segments touch each other. Otherwise, we can glue them together and reduce the number of operations.
These observations give the following formula for the number of different sequences one can obtain by flipping exactly k segments without "redundant" flips: C(n + 1, 2 * k) (we choose 2 * k ends of segments. They are always different. The left end is exclusive).
If we were only required to perform no more than K flips, the answer would be the sum over k = 0...K of C(n + 1, 2 * k).
Intuitively, it seems that it's possible to transform a sequence of no more than K flips into a sequence of exactly K flips (for instance, we can flip the same segment two more times and add 2 operations; we can also split a segment of more than two elements into two segments and add one operation).
Running the brute force search (I know that it's not a real proof, but it looks correct combined with the observations mentioned above) suggests that the answer is this sum minus 1 if n or k is equal to 1, and exactly the sum otherwise.
That is, the result is C(n + 1, 0) + C(n + 1, 2) + ... + C(n + 1, 2 * K) - d, where d = 1 if n = 1 or k = 1 and 0 otherwise.
Here is code I used to look for patterns running a brute force search and to verify that the formula is correct for small n and k:
reachable = set()
was = set()

def other(c):
    """
    returns '1' if c == '0' and '0' otherwise
    """
    return '0' if c == '1' else '1'

def flipped(s, l, r):
    """
    Flips the [l, r] segment of the string s and returns the result
    """
    res = s[:l]
    for i in range(l, r + 1):
        res += other(s[i])
    res += s[r + 1:]
    return res

def go(xs, k):
    """
    Exhaustive search. was is used to speed up the search to avoid checking the
    same string with the same number of remaining operations twice.
    """
    p = (xs, k)
    if p in was:
        return
    was.add(p)
    if k == 0:
        reachable.add(xs)
        return
    for l in range(len(xs)):
        for r in range(l, len(xs)):
            go(flipped(xs, l, r), k - 1)

def calc_naive(n, k):
    """
    Counts the number of reachable sequences by running an exhaustive search
    """
    xs = '0' * n
    global reachable
    global was
    was = set()
    reachable = set()
    go(xs, k)
    return len(reachable)

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

def cnk(n, k):
    if k > n:
        return 0
    return fact(n) // fact(k) // fact(n - k)

def solve(n, k):
    """
    Uses the formula shown above to compute the answer
    """
    res = 0
    for i in range(k + 1):
        res += cnk(n + 1, 2 * i)
    if k == 1 or n == 1:
        res -= 1
    return res

if __name__ == '__main__':
    # Checks that the formula gives the right answer for small values of n and k
    for n in range(1, 11):
        for k in range(1, 11):
            assert calc_naive(n, k) == solve(n, k)
This solution is much better than the exhaustive search. For instance, it can run in O(N * K) time per test case if we compute the coefficients using Pascal's triangle. Unfortunately, it is not fast enough. I know how to solve it more efficiently for prime MOD (using Lucas' theorem), but I do not have a solution for the general case.
Multiplicative modular inverses can't solve this problem immediately as k! or (n - k)! may not have an inverse modulo MOD.
Note: I assumed that C(n, m) is defined for all non-negative n and m and is equal to 0 if n < m.
I think I know how to solve it for an arbitrary MOD now.
Let's factorize MOD into prime factors p1^a1 * p2^a2 * ... * pn^an. Now we can solve this problem for each prime power independently and combine the results using the Chinese remainder theorem.
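As an aside, here is a minimal sketch of that combining step (the function name is mine; the moduli must be pairwise coprime, as the prime powers of MOD are; pow(m, -1, mi) needs Python 3.8+):

def crt(residues, moduli):
    # Incrementally solve x = r (mod mi) for each pair, keeping x modulo the
    # product of the moduli processed so far.
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        t = (r - x) * pow(m, -1, mi) % mi
        x += m * t
        m *= mi
    return x % m

print(crt([2, 3], [4, 25]))  # 78, since 78 % 4 == 2 and 78 % 25 == 3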
Let's fix a prime p. Let's assume that p^a|MOD (that is, we need to get the result modulo p^a). We can precompute all p-free parts of the factorial and the maximum power of p that divides the factorial for all 0 <= n <= N in linear time using something like this:
powers = [0] * (N + 1)
p_free = list(range(N + 1))
p_free[0] = 1
cur_p = p  # iterate over the powers p, p^2, p^3, ... up to N
while cur_p <= N:
    for i in range(cur_p, N + 1, cur_p):
        powers[i] += 1      # i gains one more factor of p
        p_free[i] //= p     # strip that factor from the p-free part
    cur_p *= p
Now the p-free part of the factorial is the product of p_free[i] for all i <= n and the power of p that divides n! is the prefix sum of the powers.
Now we can divide two factorials: the p-free part is coprime with p^a so it always has an inverse. The powers of p are just subtracted.
We're almost there. One more observation: we can precompute the inverses of the p-free parts in linear time. First compute the inverse of the p-free part of N! using Euclid's algorithm. Then iterate over all i from N down to 0: the inverse of the p-free part of i! is the inverse for (i + 1)! times p_free[i + 1] (it's easy to prove this if we rewrite the inverse of the p-free part as a product, using the fact that the elements coprime with p^a form an abelian group under multiplication).
This algorithm runs in O(N * number_of_prime_factors + the time to solve the system using the Chinese remainder theorem + sqrt(MOD)) time per test case. Now it looks good enough.
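To make the factorial bookkeeping concrete, here is a self-contained sketch of C(n, m) mod p^a along these lines (all names are mine; Python 3.8+ for pow(x, -1, m)):

def binom_mod_prime_power(n, m, p, a):
    # C(n, m) mod p^a, with C(n, m) = 0 when m > n.
    if m < 0 or m > n:
        return 0
    pa = p ** a
    powers = [0] * (n + 1)          # powers[i] = multiplicity of p in i
    p_free = list(range(n + 1))     # p_free[i] = i with all factors p removed
    p_free[0] = 1
    cur = p
    while cur <= n:
        for i in range(cur, n + 1, cur):
            powers[i] += 1
            p_free[i] //= p
        cur *= p
    fact_free = [1] * (n + 1)       # p-free part of i! modulo p^a
    fact_pow = [0] * (n + 1)        # multiplicity of p in i!
    for i in range(1, n + 1):
        fact_free[i] = fact_free[i - 1] * p_free[i] % pa
        fact_pow[i] = fact_pow[i - 1] + powers[i]
    e = fact_pow[n] - fact_pow[m] - fact_pow[n - m]
    if e >= a:
        return 0
    res = fact_free[n] * pow(fact_free[m], -1, pa) % pa
    res = res * pow(fact_free[n - m], -1, pa) % pa
    return res * pow(p, e, pa) % pa

print(binom_mod_prime_power(10, 5, 2, 3))  # 4, since C(10, 5) = 252 and 252 % 8 == 4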
You're on a good path with binomial-coefficients already. There are several factors to consider:
Think of your number as a binary-string of length n. Now we can create another array counting the number of times a bit will be flipped:
[0, 1, 0, 0, 1] number
[a, b, c, d, e] number of flips.
But any even number of flips of a bit leads to the same result, and so does any odd number. So basically only the distribution mod 2 is relevant.
Logical next question: how many different combinations of even and odd values are available? We'll take care of the ordering later on; for now, just assume the flipping-array is sorted in descending order for simplicity. We start off with k as the only flipping-number in the array. Now we want to add a flip. Since the whole flipping-array is used mod 2, we need to remove two from the value of k and insert them into the array separately, e.g.:
[5, 0, 0, 0] mod 2 [1, 0, 0, 0]
[3, 1, 1, 0] [1, 1, 1, 0]
[4, 1, 0, 0] [0, 1, 0, 0]
As the last example shows (remember we're operating modulo 2 in the final result), moving a single 1 doesn't change the number of flips in the final outcome. Thus we always have to flip an even number of bits in the flipping-array. If k is even, so will the number of flipped bits be, and vice versa, no matter what the value of n is.
So now the question is of course how many different ways of filling the array are available? For simplicity we'll start with mod 2 right away.
Obviously we start with 1 flipped bit if k is odd, and with 0 otherwise. We then always add 2 flipped bits at a time. We can continue with this until we have either flipped all n bits (or at least as many as we can flip)
v = (k % 2 == n % 2) ? n : n - 1
or we can't spread k further over the array.
v = k
Putting this together:
def noOfAvailableFlips(n, k):
    if k < n:
        return k
    else:
        return n if k % 2 == n % 2 else n - 1
So far so good: there are always about v / 2 flipping-arrays (mod 2) that differ in the number of flipped bits. Now we come to the next part: permuting these arrays. This is just a simple permutation function (permutation with repetition, to be precise):
from math import factorial

def flipArrayNo(n, flippedbits):
    return factorial(n) // (factorial(flippedbits) * factorial(n - flippedbits))
Putting it all together:
def solutionsByFlipping(n, k):
    res = 0
    for i in range(k % 2, noOfAvailableFlips(n, k) + 1, 2):
        res += flipArrayNo(n, i)
    return res
This also shows that for sufficiently large numbers we can't obtain 2^n sequences, for the simple reason that we cannot arrange operations as we please. The number of flips that actually affect the outcome will always be either even or odd, depending on k. There's no way around this. The best result one can get is 2^(n-1) sequences.
For completeness, here's a dynamic program. It can deal easily with arbitrary modulo since it is based on sums, but unfortunately I haven't found a way to speed it beyond O(n * k).
Let a[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 1. Let b[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 0.
Then:
# we can append 1 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in 1, or to any arrangement of (k-1) non-adjacent blocks of contiguous
# 1's that ends in 0:
a[n][k] = a[n - 1][k] + b[n - 1][k - 1]
# we can append 0 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in either 0 or 1:
b[n][k] = b[n - 1][k] + a[n - 1][k]
# complete answer would be sum (a[n][i] + b[n][i]) for i = 0 to k
I wonder if the following observations might be useful: (1) a[n][k] and b[n][k] are zero when n < 2*k - 1, and (2) on the flip side, for values of k greater than ⌊(n + 1) / 2⌋ the overall answer seems to be identical.
Python code (full matrices are defined for simplicity, but I think only one row of each would actually be needed, space-wise, for a bottom-up method):
a = [[0] * 11 for i in range(0, 11)]
b = [([1] + [0] * 10) for i in range(0, 11)]

def f(n, k):
    return fa(n, k) + fb(n, k)

def fa(n, k):
    global a
    if a[n][k] or n == 0 or k == 0:
        return a[n][k]
    elif n == 2 * k - 1:
        a[n][k] = 1
        return 1
    else:
        a[n][k] = fb(n - 1, k - 1) + fa(n - 1, k)
        return a[n][k]

def fb(n, k):
    global b
    if b[n][k] or n == 0 or n == 2 * k - 1:
        return b[n][k]
    else:
        b[n][k] = fb(n - 1, k) + fa(n - 1, k)
        return b[n][k]

def g(n, k):
    return sum([f(n, i) for i in range(0, k + 1)])

# example
print(g(10, 10))
for i in range(0, 11):
    print(a[i])
print()
for i in range(0, 11):
    print(b[i])
You are given N items in total and P groups into which you have to divide the N items.
The condition is that the product of the numbers of items held by the groups should be maximal.
Example: N = 10 and P = 3. You can divide the 10 items into {3, 4, 3}, since 3x3x4 = 36 is the maximum possible product.
You will want to form P groups of roughly N / P elements. However, this will not always be possible, as N might not be divisible by P, as is the case for your example.
So form groups of floor(N / P) elements initially. For your example, you'd form:
floor(10 / 3) = 3
=> groups = {3, 3, 3}
Now, take the remainder of the division of N by P:
10 mod 3 = 1
This means you have to distribute 1 more item to your groups (you can have up to P - 1 items left to distribute in general):
for i = 0 up to (N mod P) - 1:
groups[i]++
=> groups = {4, 3, 3} for your example
Which is also a valid solution.
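As a quick illustration, the whole procedure fits in a small Python function (the name is mine):

def max_product_groups(n, p):
    base, rem = divmod(n, p)                 # groups of floor(N / P), plus the remainder
    return [base + 1] * rem + [base] * (p - rem)

print(max_product_groups(10, 3))  # [4, 3, 3]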
For fun I worked out a proof of the fact that in an optimal solution either all numbers equal N/P, or the numbers are some combination of floor(N/P) and ceiling(N/P). The proof is somewhat long, but proving optimality in a discrete context is seldom trivial. I would be interested if anybody can shorten the proof.
Lemma: For P = 2 the optimal way to divide N is into {N/2, N/2} if N is even and {floor(N/2), ceiling(N/2)} if N is odd.
This follows since the constraint that the two numbers sum to N means that the two numbers are of the form x, N-x.
The resulting product is (N-x)x = Nx - x^2. This is a parabola that opens down. Its max is at its vertex at x = N/2. If N is even this max is an integer. If N is odd, then x = N/2 is a fraction, but such parabolas are strictly unimodal, so the closer x gets to N/2 the larger the product. x = floor(N/2) (or ceiling, it doesn't matter by symmetry) is the closest an integer can get to N/2, hence {floor(N/2),ceiling(N/2)} is optimal for integers.
General case: First of all, a global max exists, since there are only finitely many integer partitions and a finite list of numbers always has a max. Suppose that {x_1, x_2, ..., x_P} is globally optimal. Claim: given any i, j we have
|x_i - x_ j| <= 1
In other words: any two numbers in an optimal solution differ by at most 1. This follows immediately from the P = 2 lemma (applied to N = x_i + x_j).
From this claim it follows that there are at most two distinct numbers among the x_i. If there is only 1 number, that number is clearly N/P. If there are two numbers, they are of the form a and a+1. Let k = the number of x_i which equal a+1, hence P-k of the x_i = a. Hence
(P-k)a + k(a+1) = N, where k is an integer with 1 <= k < P
But simple algebra yields that a = (N-k)/P = N/P - k/P.
Hence a is an integer less than N/P which differs from N/P by less than 1 (since k/P < 1).
Thus a = floor(N/P) and a+1 = ceiling(N/P).
QED
To test whether a number is prime or not, why do we have to test whether it is divisible only up to the square root of that number?
If a number n is not a prime, it can be factored into two factors a and b:
n = a * b
Now a and b can't both be greater than the square root of n, since then the product a * b would be greater than sqrt(n) * sqrt(n) = n. So in any factorization of n, at least one of the factors must be less than or equal to the square root of n, and if we can't find any factors less than or equal to the square root, n must be a prime.
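For illustration, here is that test as a small Python function (the name is mine); comparing i * i <= n avoids floating-point square roots:

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:       # i runs up to sqrt(n)
        if n % i == 0:      # found a factor <= sqrt(n)
            return False
        i += 1
    return True

print([x for x in range(2, 30) if is_prime(x)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]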
Let's say m = sqrt(n) then m × m = n. Now if n is not a prime then n can be written as n = a × b, so m × m = a × b. Notice that m is a real number whereas n, a and b are natural numbers.
Now there can be 3 cases:
a > m ⇒ b < m
a = m ⇒ b = m
a < m ⇒ b > m
In all 3 cases, min(a, b) ≤ m. Hence if we search till m, we are bound to find at least one factor of n, which is enough to show that n is not prime.
Because if a factor is greater than the square root of n, the other factor that would multiply with it to equal n is necessarily less than the square root of n.
Suppose n is not a prime number (greater than 1). So there are numbers a and b such that
n = ab (1 < a <= b < n)
By multiplying the relation a<=b by a and b we get:
a^2 <= ab
ab <= b^2
Therefore: (note that n=ab)
a^2 <= n <= b^2
Hence: (Note that a and b are positive)
a <= sqrt(n) <= b
So if a number (greater than 1) is not prime and we test divisibility up to square root of the number, we will find one of the factors.
It's all really just basic uses of Factorization and Square Roots.
It may appear to be abstract, but in reality it simply lies with the fact that a non-prime number's smallest possible factor (other than 1) has to be at most its square root, because:
sqrroot(n) * sqrroot(n) = n.
Given that, if any whole number above 1 and below or up to sqrroot(n) divides evenly into n, then n cannot be a prime number.
Pseudo-code example:
i = 2;
is_prime = true;
while (i <= sqrroot(n))
{
    if (n % i == 0)
    {
        is_prime = false;
        exit while;
    }
    ++i;
}
Let's suppose that the given integer N is not prime,
Then N can be factorized into two factors a and b , 2 <= a, b < N such that N = a*b.
Clearly, both of them can't be greater than sqrt(N) simultaneously.
Let us assume without loss of generality that a is smaller.
Now, if you could not find any divisor of N belonging in the range [2, sqrt(N)], what does that mean?
This means that N does not have any divisor in [2, a] as a <= sqrt(N).
Therefore, a = 1 and b = N, and hence, by definition, N is prime.
...
Further reading if you are not satisfied:
Many different combinations of (a, b) may be possible. Let's say they are:
(a1, b1), (a2, b2), (a3, b3), ..... , (ak, bk). Without loss of generality, assume ai < bi, 1<= i <=k.
Now, to be able to show that N is not prime it is sufficient to show that none of ai can be factorized further. And we also know that ai <= sqrt(N) and thus you need to check till sqrt(N) which will cover all ai. And hence you will be able to conclude whether or not N is prime.
...
So to check whether a number N is Prime or not.
We need only check whether N is divisible by numbers <= SQROOT(N). This is because, if we factor N into any 2 factors, say X and Y, i.e. N = X*Y:
X and Y cannot both be less than SQROOT(N), because then X*Y < N;
X and Y cannot both be greater than SQROOT(N), because then X*Y > N.
Therefore one factor must be less than or equal to SQROOT(N) ( while the other factor is greater than or equal to SQROOT(N) ).
So to check if N is Prime we need only check those numbers <= SQROOT(N).
Let's say we have a number "a" which is not prime [not prime/composite means: a number which can be divided evenly by numbers other than 1 or itself; for example, 6 can be divided evenly by 2, or by 3, as well as by 1 or 6].
6 = 1 × 6 or 6 = 2 × 3
So now if "a" is not prime then it can be divided by two other numbers and let's say those numbers are "b" and "c". Which means
a=b*c.
Now if "b" or "c" , any of them is greater than square root of "a "than multiplication of "b" & "c" will be greater than "a".
So, "b" or "c" is always <= square root of "a" to prove the equation "a=b*c".
Because of the above reason, when we test if a number is prime or not, we only check until square root of that number.
Given any number n, then one way to find its factors is to get its square root p:
sqrt(n) = p
Of course, if we multiply p by itself, then we get back n:
p*p = n
It can be re-written as:
a*b = n
Where p = a = b. If a increases, then b decreases to maintain a*b = n. Therefore, p is the upper limit.
Update: I am re-reading this answer again today and it has become clearer to me. The value p does not necessarily mean an integer, because if it were, then n would not be a prime. So p could be a real number (i.e., with fractions). And instead of going through the whole range up to n, now we only need to go through the whole range up to p. The other p is a mirror copy, so in effect we halve the range. And now I am seeing that we could actually continue re-doing the square root, applying it to p, to further halve the range.
Let n be non-prime. Therefore, it has at least two integer factors greater than 1. Let f be the smallest of n's such factors. Suppose f > sqrt n. Then n/f is an integer ≤ sqrt n, thus smaller than f. Therefore, f cannot be n's smallest factor. Reductio ad absurdum; n's smallest factor must be ≤ sqrt n.
Any composite number is a product of primes.
Let's say n = p1 * p2, where p2 > p1 and both are primes.
If n % p1 === 0, then n is a composite number.
If n % p2 === 0, then guess what: n % p1 === 0 as well!
So there is no way that n % p2 === 0 while n % p1 !== 0 at the same time.
In other words, if a composite number n can be divided evenly by
p2, p3, ..., pi (one of its greater factors), it must be divisible by its lowest factor p1 too.
It turns out that the lowest factor always satisfies p1 <= Math.sqrt(n).
Yes, as was properly explained above, it's enough to iterate up to Math.floor of a number's square root to check its primality (because sqrt covers all possible cases of division, and Math.floor, because any integer above sqrt will already be beyond its range).
Here is a runnable JavaScript code snippet that represents a simple implementation of this approach – and its "runtime-friendliness" is good enough for handling pretty big numbers (I tried checking both prime and not prime numbers up to 10**12, i.e. 1 trillion, compared results with the online database of prime numbers and encountered no errors or lags even on my cheap phone):
function isPrime(num) {
    if (num % 2 === 0 || num < 3 || !Number.isSafeInteger(num)) {
        return num === 2;
    } else {
        const sqrt = Math.floor(Math.sqrt(num));
        for (let i = 3; i <= sqrt; i += 2) {
            if (num % i === 0) return false;
        }
        return true;
    }
}
<label for="inp">Enter a number and click "Check!":</label><br>
<input type="number" id="inp"></input>
<button onclick="alert(isPrime(+document.getElementById('inp').value) ? 'Prime' : 'Not prime')" type="button">Check!</button>
To test the primality of a number, n, one would expect a loop such as the following in the first place:
bool isPrime = true;
for (int i = 2; i < n; i++) {
    if (n % i == 0) {
        isPrime = false;
        break;
    }
}
What the above loop does is this: for a given 1 < i < n, it checks whether n/i is an integer (leaves remainder 0). If there exists an i for which n/i is an integer, then we can be sure that n is not a prime number, at which point the loop terminates. If for no i, n/i is an integer, then n is prime.
As with every algorithm, we ask : Can we do better ?
Let us see what is going on in the above loop.
The sequence of i goes : i = 2, 3, 4, ... , n-1
And the sequence of integer-checks goes : j = n/i, which is n/2, n/3, n/4, ... , n/(n-1)
If for some i = a, n/a is an integer, then n/a = k (integer)
or n = ak, and clearly n > k > 1 (if k = 1, then a = n, but i never reaches n; and if k = n, then a = 1, but i starts from 2).
Also, n/k = a, and as stated above, a is a value of i so n > a > 1.
So, a and k are both integers between 1 and n (exclusive). Since i reaches every integer in that range, at some iteration i = a, and at some other iteration i = k. If the primality test of n fails for min(a,k), it will also fail for max(a,k). So we need to check only one of these two cases, unless min(a,k) = max(a,k) (where two checks reduce to one), i.e. a = k, at which point a*a = n, which implies a = sqrt(n).
In other words, if the primality test of n were to fail for some i >= sqrt(n) (i.e. max(a,k)), then it would also fail for some i <= sqrt(n) (i.e. min(a,k)). So it suffices to run the test for i = 2 to sqrt(n).
I already have prime factorization working for ordinary integers, but now I want to implement it for Gaussian integers. How should I do it? Thanks!
This turned out to be a bit verbose, but I hope it fully answers your question...
A Gaussian integer is a complex number of the form
G = a+bi
where i^2 = -1, and a and b are integers.
The Gaussian integers form a unique factorization domain. Some of them act as units (e.g. 1, -1, i, and -i), some as primes (e.g. 1 + i), and the rest composite, that can be decomposed as a product of primes and units that is unique, aside from the order of factors and the presence of a set of units whose product is 1.
The norm of such a number G is defined as an integer: norm(G) = a^2 + b^2.
It can be shown that the norm is a multiplicative property, that is:
norm(I*J) = norm(I)*norm(J)
So if you want to factor a Gaussian integer G, you could take advantage of the fact that any Gaussian integer I that divides G must satisfy the property that norm(I) divides norm(G), and you know how to find the factors of norm(G).
The primes of the Gaussian integers fall into three categories:
1 +/- i, with norm 2,
a +/- bi, with prime norm a^2 + b^2 congruent to 1 mod 4,
a, where a is a prime congruent to 3 mod 4, with norm a^2
Now to turn this into an algorithm...if you want to factor a Gaussian integer G,
you can find its norm N, and then factor that into prime integers. Then
we work our way down this list, peeling off prime factors p of N that correspond
to prime Gaussian factors q of our original number G.
There are only three cases to consider, and two of them are trivial.
If p = 2, let q = (1+i). (Note that q = (1-i) would work equally well, since they only differ by a unit factor.)
If p = 3 mod 4, q = p. But the norm of q is p^2, so we can strike
another factor of p from the list of remaining factors of norm(G).
The p = 1 mod 4 case is the only one that's a little tricky to deal with.
It's equivalent to the problem of expressing p as the sum of two squares:
if p = a^2 + b^2, then a+bi and a-bi form a conjugate pair of Gaussian
primes with norm p, and one of them will be the factor we're looking for.
But with a little number theory, it turns out not to be too difficult.
Consider the integers mod p. Suppose we can find an integer k such
that k^2 = -1 mod p. Then k^2 + 1 = 0 mod p, which is equivalent to
saying that p divides k^2 + 1 in the integers (and therefore also the
Gaussian integers). In the Gaussian integers, k^2 + 1 factors into
(k+i)(k-i). p divides the product, but does not divide either
factor. Therefore, p has a nontrivial GCD with each of the
factors (k+i) and (k-i), and that GCD or its conjugate is the
factor we're looking for!
But how do we find such an integer k? Let n be some integer between
2 and p-1 inclusive. Calculate n^((p-1)/2) mod p; this value will be either
1 or -1. If -1, then k = n^((p-1)/4), otherwise try a different n.
Nearly half the possible values of n will give us a square root of
-1 mod p, so it won't take many guesses to find a value of k that
works.
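As a sketch, that search for k might look like this in Python (the function name is mine; p is assumed to be a prime congruent to 1 mod 4):

def sqrt_of_minus_one(p):
    for n in range(2, p):
        if pow(n, (p - 1) // 2, p) == p - 1:    # n^((p-1)/2) = -1 (mod p)
            return pow(n, (p - 1) // 4, p)      # k = n^((p-1)/4) mod p

print(sqrt_of_minus_one(5))   # 2, and 2^2 = 4 = -1 (mod 5)
print(sqrt_of_minus_one(17))  # 13, and 13^2 = 169 = -1 (mod 17)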
To find the Gaussian primes with norm p, just use Euclid's algorithm
(slightly modified to work with Gaussian integers) to compute the GCD
of (p, k+i). That gives one trial divisor. If it evenly divides the
Gaussian integer we're trying to factor (remainder = 0), we're done.
Otherwise, its conjugate is the desired factor.
Euclid's GCD algorithm for Gaussian integers is almost identical to
that for normal integers. Each iteration consists of a trial division
with remainder. If we're looking for gcd(a,b),
q = floor(a/b), remainder = a - q*b, and if the remainder is nonzero
we return gcd(b,remainder).
In the integers, if we get a fraction as the quotient, we round it toward zero.
In the Gaussian integers, if the real or imaginary parts of the quotient are fractions, they get rounded to the nearest integer. Besides that, the
algorithms are identical.
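Here is a sketch of that GCD in Python, using exact integer pairs (re, im) rather than floating point (all names are mine):

def gmul(a, b):
    # (a0 + a1*i) * (b0 + b1*i)
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def gsub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def gdiv_round(a, b):
    # Divide a/b, rounding the real and imaginary parts to the nearest integer.
    n = b[0] * b[0] + b[1] * b[1]             # norm(b) > 0
    qr = a[0] * b[0] + a[1] * b[1]            # numerator of the real part
    qi = a[1] * b[0] - a[0] * b[1]            # numerator of the imaginary part
    rnd = lambda x: (2 * x + n) // (2 * n)    # round x/n to the nearest integer
    return (rnd(qr), rnd(qi))

def ggcd(a, b):
    # Euclid's algorithm; rounding keeps norm(remainder) < norm(b), so it terminates.
    while b != (0, 0):
        q = gdiv_round(a, b)
        a, b = b, gsub(a, gmul(q, b))
    return a

print(ggcd((5, 0), (2, 1)))  # (2, 1): a Gaussian prime of norm 5 (up to units)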
So the algorithm for factoring a Gaussian integer G looks something like
this:
Calculate norm(G), then factor norm(G) into primes p1, p2 ... pn.
For each remaining factor p:
    if p = 2, q = (1 + i),
        and strike p from the list of remaining primes.
    else if p mod 4 = 3, q = p,
        and strike 2 copies of p from the list of primes.
    else find k such that k^2 = -1 mod p, then u = gcd(p, k+i);
        if G/u has remainder 0, q = u,
        else q = conjugate(u),
        and strike p from the list of remaining primes.
    Add q to the list of Gaussian factors.
    Replace G with G/q.
endfor
At the end of this procedure, G is a unit with norm 1. But it's not necessarily
1 -- it could be -1, i, or -i, in which case add G to the list of factors,
to make the signs come out right when you multiply all the factors to
see if the product matches the original value of G.
Here's a worked example: factor G = 361 - 1767i over the Gaussian integers.
norm(G) = 3252610 = 2 * 5 * 17 * 19 * 19 * 53
Considering 2, we try q = (1+i), and find G/q = -703 - 1064i with remainder 0.
G <= G/q = -703 - 1064i
Considering 5, we see it is congruent to 1 mod 4. We need to find a good k.
Trying n=2, n^((p-1)/2) mod p = 2^2 mod 5 = 4. 4 is congruent to -1 mod 5. Success! k = 2^1 = 2. u = gcd(5, 2+i) which works out to be 2+i.
G/u = -494 - 285i, with remainder 0, so we find q = 2+i.
G <= G/q = -494 - 285i
Considering 17, it is also congruent to 1 mod 4, so we need to find another
k mod 17. Trying n=2, 2^8 = 1 mod 17, no good. Try n=3 instead.
3^8 = 16 mod 17 = -1 mod 17. Success! So k = 3^4 = 81 = 13 mod 17.
gcd(17, 13+i) = u = 4-i, G/u = -99 -96i with remainder -2. No good,
so try conjugate(u) = 4+i. G/u = -133 - 38i with remainder 0. Another factor!
G <= G/(4+i) = -133 - 38i
Considering 19, it is congruent to 3 mod 4. So our next factor is 19, and
we strike the second copy of 19 from the list.
G <= G/19 = -7 - 2i
Considering 53, it is congruent to 1 mod 4. Again with the k process...
Trying n=2, 2^26 = 52 mod 53 = -1 mod 53. Then k = 2^13 mod 53 = 30.
gcd(53,30+i) = u = -7 - 2i. That's identical to G, so the final
quotient G/(-7-2i) = 1, and there are no factors of -1, i, or -i to worry
about.
We have found factors (1+i)(2+i)(4+i)(19+0i)(-7-2i). And if you multiply
that out (left as an exercise for the reader...), lo and behold, the
product is 361-1767i, which is the number we started with.
Ain't number theory sweet?
Use floating point for the real and imaginary components if you want full single-cell integer accuracy, and define gsub, gmul and a special division gdivr with rounded (not floored) coefficients. That's because the Pollard rho factorization method needs a gcd via Euclid's algorithm, with a slightly modified gmodulo:
gmodulo((x,y),(x',y'))=gsub((x,y),gmul((x',y'),gdivr((x,y),(x',y'))))
Pollard rho
def poly((a,b),(x,y)) = gmodulo(gsub(gmul((a,b),(a,b)),(1,0)),(x,y))

input (x,y),(a,b)    % (x,y) is the Gaussian number to be factorized
(c,d) <- (a,b)
do
    (a,b) = poly((a,b),(x,y))
    (c,d) = poly(poly((c,d),(x,y)),(x,y))
    (e,f) = ggcd((x,y),gsub((a,b),(c,d)))
    if (e,f) = (x,y) then return (x,y)    % failure, try another (a,b)
until e^2+f^2 > 1
return (e,f)
A normal start value is a=1, b=0.
I have used this method programmed in Forth on my blog http://forthmath.blogspot.se
For safety, use rounded values in all calculations when representing integers with floating point.