Given x, y, how to find whether x! is divisible by y or not? - algorithm

Computing x! can be very costly and will often overflow. Is there a way to find out whether x! is divisible by y or not without computing x!?
For y <= x it's trivial, since y is then itself one of the factors of x!.
But for y > x, e.g. x = 5 and y = 60, I am struggling to find a way without computing x!.

Compute the prime factorizations of x! and y. You can do this without computing x! by factorizing every number from 2 to x and collecting all of the factors together. If the prime factors of y (with multiplicities) are a subset of the prime factors of x!, then x! is divisible by y.

If x and y are really large, so that it's not viable to iterate through all the numbers from 2 to x, you can instead just factorize y and check, for every prime factor, whether its maximal power in y also divides x!.
I've written about the algorithm in more detail in another answer.
Basically the check goes like this:
// computes the maximum q such that p^q divides n!
int max_power_of_p_in_fac(int p, int n) {
    int mu = 0;
    while (n / p > 0) {
        mu += n / p;
        n /= p;
    }
    return mu;
}
// checks whether y divides x!
bool y_divides_x_fac(int y, int x) {
    for each prime factor p^q of y:
        if (max_power_of_p_in_fac(p, x) < q)
            return false;
    return true;
}
This results in an algorithm for the case x < y of complexity O(time to factorize y + log x * number of prime factors of y).
Obviously y can have at most O(log y) prime factors, so with Pollard's rho factorization this would be something like O(y^(1/4) + log x * log y).
The correctness can be proven using Legendre's formula: the largest q such that p^q divides n! is floor(n/p) + floor(n/p^2) + floor(n/p^3) + ..., which is exactly what max_power_of_p_in_fac computes.

An alternative that avoids factorization entirely: for every i from 1 to x, update y /= gcd(y, i). The divisibility check at the end is y == 1.
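This one-pass gcd idea can be sketched in Python like so (math.gcd is standard library; y is assumed to be a positive integer):

```python
from math import gcd

def y_divides_x_factorial(y, x):
    # For each i = 2..x, strip from y the part of i that still divides y.
    # y shrinks to 1 exactly when x! covers all prime factors of y.
    for i in range(2, x + 1):
        y //= gcd(y, i)
        if y == 1:
            return True
    return y == 1
```

For example, y_divides_x_factorial(60, 5) is True, while y_divides_x_factorial(7, 5) is False.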

Related

Find a number for minimum sum of nth power of absolute difference in an array

My question is similar to this one, but instead the absolute difference is raised to a power c (which will be given as input); is there an algorithm to find the answer?
For example, given A = {a1, a2, ..., an} and c, it should find an x that minimises |a1 − x|^c + |a2 − x|^c + ··· + |an − x|^c.
If c = 1 it's the median of the sorted array, and if c = 2 it's the average of the array, but I can't find a connection between the median and the average that extends to any value of c.
I assume that c is a positive integer.
If it is not an integer, then the fractional powers are hard to calculate. If it is negative, then as x goes to infinity (either way) the result goes to 0, so there is no global minimum. If it is 0, then x does not matter. So a positive integer is the only thing that makes sense.
Now each term is a convex function. The sum of convex functions is itself convex. Convex functions have the following properties. Suppose that x < y < z. If f(x) = f(z) then the global minimum is between them. If f(x) = f(y) = f(z), that's a straight line segment. And finally, if f(y) < min(f(x), f(z)) then the global minimum is between x and z.
This is sufficient for a variation on binary search.
while z - x > some tolerance:
    if z - y > y - x:
        y1 = (y + z) / 2
        if f(y1) < f(y):
            (x, y, z) = (y, y1, z)
        elif f(y1) == f(y):
            (x, y, z) = (y, (2*y + y1)/3, y1)
        else:
            (x, y, z) = (x, y, y1)
    else:
        y1 = (x + y) / 2
        if f(y1) < f(y):
            (x, y, z) = (x, y1, y)
        elif f(y1) == f(y):
            (x, y, z) = (y1, (2*y1 + y)/3, y)
        else:
            (x, y, z) = (y1, y, z)
As this runs, each iteration reduces the size of the interval to at most 3/4 of what it previously was. And therefore you will narrow in on the answer.
If you special-case c = 1 (where the answer is simply the median), you can do even better for larger c. The derivative is then defined everywhere and is non-decreasing, so you can binary-search for its zero, and additionally guess where in the interval the minimum is expected to be. If you land close, you know which way you're wrong, and can put a much tighter bound on it.
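Since the sum is convex, a plain ternary search also works; here is a sketch in Python (the bracket [min(A), max(A)] always contains the minimizer, and the tolerance is arbitrary):

```python
def minimize_convex(f, lo, hi, tol=1e-9):
    # Ternary search: for convex f, comparing two interior points
    # lets us discard a third of the bracket each iteration.
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def power_sum(a, c):
    return lambda x: sum(abs(ai - x) ** c for ai in a)

# for c = 2 the minimizer is the mean of the data
best = minimize_convex(power_sum([1.0, 2.0, 4.0, 9.0], 2), 1.0, 9.0)
```

For c = 2 on [1, 2, 4, 9] this converges to the mean, 4.0.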

Does Pollard Rho not work for certain numbers?

I'm trying to implement Pollard Rho based on pseudocode I found on Wikipedia, but it doesn't appear to work for the numbers 4, 8, and 25, and I have no clue why.
Here's my code:
long long x = initXY;
long long y = initXY;
long long d = 1;
while (d == 1) {
    x = polynomialModN(x, n);
    y = polynomialModN(polynomialModN(y, n), n);
    d = gcd(labs(x - y), n);
}
if (d == n)
    return getFactor(n, initXY + 1);
return d;
This is my polynomial function:
long long polynomialModN(long long x, long long n) {
    return (x * x + 1) % n;
}
And this is example pseudocode from Wikipedia:
x ← 2; y ← 2; d ← 1
while d = 1:
    x ← g(x)
    y ← g(g(y))
    d ← gcd(|x - y|, n)
if d = n:
    return failure
else:
    return d
Only difference: I don't return failure but instead retry with different initializing values, as Wikipedia also notes:

Here x and y correspond to x_i and x_j in the section about the core idea. Note that this algorithm may fail to find a nontrivial factor even when n is composite. In that case, the method can be tried again, using a starting value other than 2 or a different g(x).
Does Pollard-Rho just not work for certain numbers? What are their characteristics? Or am I doing something wrong?
Pollard Rho does not work on even numbers. If you have an even number, first remove all factors of 2 before applying Pollard Rho to find the odd factors.
Pollard Rho properly factors 25, but it finds both factors of 5 at the same time, so it returns 25 itself as the "factor". That's correct, but not useful. In the same way, Pollard Rho will tend to fail on prime powers (squares, cubes, and so on).
Although I didn't run it, your Pollard Rho function looks okay. Wikipedia's advice to change the starting point might work, but generally doesn't. It is better, as Wikipedia also suggests, to change the polynomial g. The easiest way to do that is to change the addend: instead of x² + 1, use x² + c, where c is initially 1 and increases to 2, 3, … after each failure.
Also note that, since x can be as big as n − 1, the product x * x in your polynomialModN function can overflow long long.
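Putting the suggested fix (varying the addend c after each failure) together, a Python sketch; Python's arbitrary-precision integers sidestep the overflow issue, and n is assumed to be composite (for a prime n this would loop forever):

```python
from math import gcd

def pollard_rho(n):
    # Pollard's rho does not work on even numbers:
    # strip the factor of 2 first.
    if n % 2 == 0:
        return 2
    c = 1
    while True:
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n   # tortoise: one step of g(v) = v^2 + c
            y = (y * y + c) % n   # hare: two steps of g
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        if d != n:
            return d
        c += 1  # failure (d == n): retry with the next polynomial x^2 + c
```

With this retry scheme, 25 (which fails for c = 1) is split on the second attempt: pollard_rho(25) returns 5.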

compute number of pairs in a given array with a specific condition

How can we compute the number of pairs (P, Q) in a given array, with Q > P, such that C[P] * C[Q] ≥ C[P] + C[Q], with complexity O(N)?
I believe this is impossible in the general case (for real numbers), but under some assumptions on the numbers, it is possible.
For example, consider the case of non-negative integers:
Let X and Y be non-negative integers:
If X=0 and Y=0: X + Y = X * Y
If X=0 or X=1, for any Y>0: X + Y > X * Y
If Y=0 or Y=1, for any X>0: X + Y > X * Y
In any other case: X + Y <= X * Y
So we can run across the array and count the number of 0's, 1's, and values greater than 1 (this takes O(n) time):
We're only interested in pairs where both numbers come from the group of values greater than 1, or both come from the group of 0's (no other combination of numbers satisfies the condition).
Let's say the number of elements in the first group is n and in the second group is m; then the total number of pairs satisfying the condition X * Y >= X + Y is:
n(n-1)/2 + m(m-1)/2 (the number of possible pairs within each group).
This method can probably be extended to other classes of numbers (e.g. signed integers).
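A sketch of this counting scheme in Python (non-negative integers assumed; the two group sizes are called zeros and big here):

```python
def count_pairs(arr):
    # Count pairs (P, Q), Q > P, with arr[P]*arr[Q] >= arr[P]+arr[Q].
    # Only pairs of zeros (0 >= 0) and pairs of values > 1 qualify.
    zeros = sum(1 for v in arr if v == 0)
    big = sum(1 for v in arr if v > 1)
    return zeros * (zeros - 1) // 2 + big * (big - 1) // 2
```

For example, count_pairs([0, 0, 1, 2, 3]) is 2: the pair of zeros and the pair (2, 3).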
You can't do this as a straightforward brute-force approach with O(N) complexity, as there are O(N^2) pairs to try.
Just use nested for loops performing the comparison and accumulate the result.
i.e.
int count = 0;
for (int i = 0; i < len; i++) {
    for (int j = i + 1; j < len; j++) {
        if (arr[i] * arr[j] >= arr[i] + arr[j]) {
            count++;
        }
    }
}
Note that the inner loop starts from i + 1, so each pair only gets scanned once.
It sounds like there is some "trick" algorithm involved that will allow you to get linear, but that's a math/algorithms problem, not a programming one, and I don't see anything obvious.
I am not a math brain myself, but I see a pattern here.
You didn't include it here, but the original task comes with assumptions that really help:

array is sorted in non-decreasing order
if the result exceeds 1,000,000,000, return 1,000,000,000

Tip: if a pair (x, y) is good, every next pair in that row is good too: (x, y+1), ..., (x, y+n); you just need to find the first one. Why? Look at how the plots of x + y and x * y behave: there is a crossing point after which every next pair works.
Look at an example with natural numbers 1, 2, 3, 4, 5, 6, 7, 8, ... The good pairs (x, y), i.e. those with A[x] * A[y] >= A[x] + A[y], are:

none for A[x] == 1 (the plots of a + b and a * b never cross for a = 1 or a = 0, no matter what b is),
A[y] == 3 or more for A[x] == 2,
A[y] == 4 or more for A[x] == 3, and so on.

Here is my code; it scored 45%, so it's not so bad.
I hope someone will pick up the idea and somehow improve it. Good luck :).
inline double real(std::vector<int> &A, std::vector<int> &B, int i)
{
    return (double)A[i] + ((double)B[i] / 1000000);
}

int solution(std::vector<int> &A, std::vector<int> &B)
{
    int size = A.size();
    int pairs = 0;
    if (size < 2) return pairs;
    for (int x = 0; x < size; ++x)
    {
        for (int y = x + 1; y < size; ++y)
        {
            double lx = real(A, B, x);
            double ly = real(A, B, y);
            double m = lx * ly;
            double a = lx + ly;
            if (m < a) continue;
            // first good y for this x: every later y is good too
            pairs += (size - y);
            if (pairs >= 1000000000) return 1000000000;
            break;
        }
    }
    return pairs;
}

exponential multiplication algorithm that runs in O(n) time?

I am reading an algorithms textbook and I am stumped by this question:
Suppose we want to compute the value x^y, where x and y are positive
integers with m and n bits, respectively. One way to solve the problem is to perform y - 1 multiplications by x. Can you give a more efficient algorithm that uses only O(n) multiplication steps?
Would this be a divide and conquer algorithm? Performing y − 1 multiplications by x would take Θ(2^n) multiplications, since an n-bit y can be as large as 2^n − 1, right? I don't know where to start with this question.
I understand this better in an iterative way:
You can compute x^z for all powers of two z = 2^0, 2^1, 2^2, ..., 2^(n-1),
simply by going from i = 0 to n − 2 and applying x^(2^(i+1)) = x^(2^i) * x^(2^i).
Now you can use these n values to compute x^y:
result = 1
for i = 0 to n-1:
    if the i-th bit of y is set:
        result *= x^(2^i)
return result
All is done in O(n)
Apply a simple recursion for divide and conquer.
Here is a rough pseudocode sketch:
x^y :=
    base case: if y == 1, return x
    if y is even:  (x^2)^(y/2)
    else:          x * (x^2)^((y-1)/2)
The y-1 multiplications solution is based on the identity x^y = x * x^(y-1). By repeated application of the identity, you know that you will decrease y down to 1 in y-1 steps.
A better idea is to decrease y more energetically. Assuming an even y, we have x^y = x^(2*(y/2)) = (x^2)^(y/2). Assuming an odd y, we have x^y = x^(2*(y/2)+1) = x * (x^2)^(y/2), where y/2 denotes integer division.
You see that you can halve y, provided you continue the power computation with x^2 instead of x.
Recursively:
Power(x, y) =
    1                        if y = 0
    x                        if y = 1
    Power(x * x, y / 2)      if y even
    x * Power(x * x, y / 2)  if y odd
Another way to view it is to read y as a sum of weighted bits. y = b0 + 2.b1 + 4.b2 + 8.b3...
The properties of exponentiation imply:
x^y = x^b0 . x^(2.b1) . x^(4.b2) . x^(8.b3)...
    = x^b0 . (x^2)^b1 . (x^4)^b2 . (x^8)^b3...
You can obtain the desired powers of x by squaring, and the binary decomposition of y tells you which powers to multiply.
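The bit-decomposition view translates directly into the iterative square-and-multiply routine (Python, for non-negative integer y):

```python
def power(x, y):
    # Scan the bits of y from least significant to most significant;
    # square the running base each step and multiply it in when the bit is set.
    result = 1
    base = x
    while y > 0:
        if y & 1:
            result *= base
        base *= base
        y >>= 1
    return result
```

The loop runs once per bit of y, so the number of multiplications is O(n) for an n-bit exponent.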

Number of different solutions of xy+yz+ xz = N

I have been trying to solve a problem on spoj.
Here is the link to the problem.
http://www.spoj.pl/problems/TAP2012B/
From what I have interpreted, I need to find the number of solutions of the equation xy + yz + xz = N,
where N is given to us, subject to x >= y >= z;
z can be zero, but x and y cannot.
I tried solving this via 3 nested for loops (a bad approach).
It gives the right answer but is too slow.
Also, other people have solved it in almost no time (0.00).
So I am sure there is a very different approach to this problem.
For N = 20,
the number of different solutions is 5 :
(6,2,1)
(5,4,0)
(10,2,0)
(4,2,2)
(20,1,0)
Maybe there is some brilliant solution built on number theory. But simply rethinking the task can reduce the algorithm's complexity as well.
For instance, we don't need a third loop, as we can calculate z as (N - x*y)/(x+y). And we don't have to run y all the way up to x every time: since z is non-negative, we know that N >= x*y.
N = 9747
for x in range(1, N+1):
    max_y = min(N / x, x)
    for y in range(1, max_y+1):
        if (N - x*y) % (x+y) == 0:
            z = (N - x*y) / (x+y)
            if z <= y:
                print x, y, z
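The same search as a Python 3 function (// for integer division); for N = 20 it reproduces the five solutions listed in the question:

```python
def solutions(N):
    # Enumerate (x, y, z) with x >= y >= 1, 0 <= z <= y and x*y + y*z + x*z == N.
    # z is determined by x and y: z = (N - x*y) / (x + y),
    # and the bound y <= N // x keeps z non-negative.
    res = []
    for x in range(1, N + 1):
        for y in range(1, min(N // x, x) + 1):
            if (N - x * y) % (x + y) == 0:
                z = (N - x * y) // (x + y)
                if z <= y:
                    res.append((x, y, z))
    return res
```

solutions(20) yields (20,1,0), (10,2,0), (6,2,1), (5,4,0), and (4,2,2).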
You are heading in the right direction: there will be 3 nested loops, but try to reduce the number of times each loop runs. Follow the question and its conditions carefully.
You are obviously learning, so it would have been better to do everything yourself, but you now have a great solution from akalenuk, and I hope that you will learn a few things from it as well.
If you are learning Python at the same time, I will give you an equivalent of akalenuk's solution, but this time with list comprehension, which is a very useful mechanism:
N = 10000
print [(x, y, z)
       for x in range(1, N+1)
       for y in range(1, min(N/x, x) + 1)
       for z in [(N - x*y) / (x+y)]
       if (N - x*y) % (x+y) == 0
       if z <= y]
The point is in pruning the solution space. The code above is already quite optimised. You might start with something like:
N = 10000
print [(x, y, z)
       for x in range(1, N+1)
       for y in range(1, x+1)
       for z in range(y+1)
       if N == x*y + y*z + x*z]
This would run quite long. So, the first point of optimization may be adding the condition on y:
N = 10000
print [(x, y, z)
       for x in range(1, N+1)
       for y in range(1, x+1)
       if x*y <= N
       for z in range(y+1)
       if N == x*y + y*z + x*z]
This already cuts down the time considerably, as for non-promising y the z-loop is not run at all. Then, you notice that you may actually replace that if-statement by explicit computation of maximum y, as akalenuk did:
N = 10000
print [(x, y, z)
       for x in range(1, N+1)
       for y in range(1, min(x, N/x) + 1)
       for z in range(y+1)
       if N == x*y + y*z + x*z]
This will again speed it up.
As you are learning, I recommend you try all of these, and your own, time them, and learn from the results.
I also recommend trying and timing other, similar solutions.
