Number of different solutions of xy + yz + xz = N - number-theory

I have been trying to solve a problem on spoj.
Here is the link to the problem.
http://www.spoj.pl/problems/TAP2012B/
From what I have interpreted, I need to find the number of solutions of the equation xy+yz+xz = N
where N is given to us.
x>=y>=z
z can be zero.
But x and y cannot.
I tried solving this by implementing 3 nested for loops (a bad approach).
It is giving the right answer but it is too slow.
Also, other people have solved it in almost no time (0.00)
So I am sure there is a very different approach to this problem.
For N = 20,
the number of different solutions is 5 :
(6,2,1)
(5,4,0)
(10,2,0)
(4,2,2)
(20,1,0)

Maybe there is some brilliant solution built on number theory, but simply rethinking the task can reduce the algorithm's complexity as well.
For instance, we don't need a third loop: the equation gives z*(x + y) = N - x*y, so z = (N - x*y)/(x + y). And we don't have to run y all the way up to x every time, because z is not negative, so we need N >= x*y.
N = 9747
for x in range(1, N + 1):
    max_y = min(N // x, x)
    for y in range(1, max_y + 1):
        if (N - x*y) % (x + y) == 0:
            z = (N - x*y) // (x + y)
            if z <= y:
                print(x, y, z)

You are heading in the right direction: there will be 3 nested loops, but try to reduce the number of times each loop runs. Follow the question and its conditions carefully.

You are obviously learning, so it would have been better to work everything out yourself, but you now have a great solution from akalenuk, and I hope that you will learn a few things from it as well.
If you are learning Python at the same time, here is a solution equivalent to akalenuk's, but this time with a list comprehension, which is a very useful mechanism:
N = 10000
print([(x, y, z)
       for x in range(1, N + 1)
       for y in range(1, min(N // x, x) + 1)
       for z in [(N - x*y) // (x + y)]
       if (N - x*y) % (x + y) == 0
       if z <= y])
The point is in pruning the solution space. The code above is already quite optimised. You might start with something like:
N = 10000
print([(x, y, z)
       for x in range(1, N + 1)
       for y in range(1, x + 1)
       for z in range(y + 1)
       if N == x*y + y*z + x*z])
This would run for quite a long time, so the first optimization may be to add a condition on y:
N = 10000
print([(x, y, z)
       for x in range(1, N + 1)
       for y in range(1, x + 1)
       if x*y <= N
       for z in range(y + 1)
       if N == x*y + y*z + x*z])
This already cuts down the time considerably, as for non-promising y the z-loop is not run at all. Then, you notice that you may actually replace that if-statement by explicit computation of maximum y, as akalenuk did:
N = 10000
print([(x, y, z)
       for x in range(1, N + 1)
       for y in range(1, min(x, N // x) + 1)
       for z in range(y + 1)
       if N == x*y + y*z + x*z])
This will again speed it up.
As you are learning, I recommend you try all of these and your own, time them, and learn from the comparison.
I also recommend trying and timing other, similar solutions.
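If you want to time the variants, one way is to wrap them in functions and use timeit. A minimal sketch (count_solutions is just an illustrative name, wrapping the pruned loops from above and counting instead of printing):
import timeit

def count_solutions(N):
    # count triples x >= y >= z >= 0 (with x, y >= 1) such that x*y + y*z + x*z == N,
    # using the same pruning as above
    count = 0
    for x in range(1, N + 1):
        for y in range(1, min(N // x, x) + 1):
            if (N - x*y) % (x + y) == 0 and (N - x*y) // (x + y) <= y:
                count += 1
    return count

print(count_solutions(20))                                      # 5, matching the example above
print(timeit.timeit(lambda: count_solutions(9747), number=10))  # seconds for 10 runs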

Related

Find a number for minimum sum of nth power of absolute difference in an array

My question is similar to this one, but instead the absolute difference is raised to a power c (which is given as input); is there an algorithm to find the answer?
For example, given A = {a1, a2, ..., an} and c, it should find an x that minimises |a1 − x|^c + |a2 − x|^c + ··· + |an − x|^c.
If c = 1 it's the median of the sorted array and if c = 2 it's the average of the array, but I can't find connection between median and average which we can extend to any value of c.
I assume that c is a positive integer.
If it is not an integer, then the fractional powers are hard to calculate. If it is negative, then as x goes to infinity (either way) the result goes to 0, so there is no global minimum. If it is 0, then x does not matter. So a positive integer is the only thing that makes sense.
Now each term is a convex function. The sum of convex functions is itself convex. Convex functions have the following properties. Suppose that x < y < z. If f(x) = f(z) then the global minimum is between them. If f(x) = f(y) = f(z), that's a straight line segment. And finally, if f(y) < min(f(x), f(z)) then the global minimum is between x and z.
This is sufficient for a variation on binary search.
def minimize_convex(f, x, z, tol=1e-9):
    # Narrow the bracket [x, z] around the minimum of a convex function f.
    y = (x + z) / 2
    while z - x > tol:
        if z - y > y - x:
            y1 = (y + z) / 2
            if f(y1) < f(y):
                x, y, z = y, y1, z
            elif f(y1) == f(y):
                # the minimum lies between y and y1
                x, y, z = y, (2*y + y1) / 3, y1
            else:
                x, y, z = x, y, y1
        else:
            y1 = (x + y) / 2
            if f(y1) < f(y):
                x, y, z = x, y1, y
            elif f(y1) == f(y):
                # the minimum lies between y1 and y
                x, y, z = y1, (2*y1 + y) / 3, y
            else:
                x, y, z = y1, y, z
    return y
As this runs, each iteration reduces the size of the interval to at most 3/4 of what it previously was. And therefore you will narrow in on the answer.
If you special-case c = 1 (where the answer is simply the median), you can do even better for the remaining cases: the derivative is then defined everywhere and is a non-decreasing function. This allows you to do a binary search on where it changes sign, but guess where in the interval the minimum is expected to be. If you land close, you know which way you're wrong, and can put a much tighter bound on it.
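As a quick illustration, here is how the search above could be applied to the original objective; the data is made up, and minimize_convex is the sketch from the answer above:
a = [1, 3, 7, 20]   # hypothetical sample data
c = 3

def f(x):
    # objective: sum of |a_i - x|^c
    return sum(abs(ai - x) ** c for ai in a)

# the minimizer must lie within [min(a), max(a)], so bracket it there
best_x = minimize_convex(f, min(a), max(a))
print(best_x, f(best_x))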

calculate x ^ (1 / y) mod m fast (modular root)

How can I solve x ^ ( 1 / y ) mod m fast, where x, y, m are all positive integers?
This is to reverse the calculation for x ^ y mod m. For example
party A and party B agree on positive integers y and m ahead of time
party A generates a number x1 (0 < x1 < m), and hands party B the result of x1 ^ y mod m, call it x2
party B calculates x2 ^ ( 1 / y ) mod m, so that it gets back x1
I know how to calculate x1 ^ y mod m fast, but I don't know how to calculate x2 ^ (1 / y) mod m fast. Any suggestions?
I don't know how to call this question. Given x ^ y mod m is called modular exponentiation, is this called modular root?
I think you're asking this question: Given y, m, and the result of x^y (mod m) find x (assuming 0 <= x < m).
In general, this doesn't have a solution -- for example, for y=2, m=4, 0^2, 1^2, 2^2, 3^2 = 0, 1, 0, 1 (mod 4), so if you're given the square of a number mod 4, you can't get back the original number.
However, in some cases you can do it. For example, when m is prime and y is coprime to m-1. Then one can find y' such that for all 0 <= x < m, (x^y)^y' = x (mod m).
Note that (x^y)^y' = x^(yy'). Ignoring the trivial case when x=0, if m is prime Fermat's Little Theorem tells us that x^(m-1) = 1 (mod m). Thus we can solve yy' = 1 (mod m-1). This has a solution (which can be found using the extended Euclidean algorithm) assuming y and m-1 are coprime.
Here's working code, with an example with y=5, m=17. It uses the modular inverse code from https://en.wikibooks.org/wiki/Algorithm_Implementation/Mathematics/Extended_Euclidean_algorithm
def egcd(a, b):
    # extended Euclid: returns (g, s, t) with g = gcd(a, b) = s*a + t*b
    if a == 0:
        return b, 0, 1
    g, x, y = egcd(b % a, a)
    return g, y - (b // a) * x, x

def modinv(a, m):
    # modular inverse of a mod m; raises if gcd(a, m) != 1
    g, x, y = egcd(a, m)
    if g != 1:
        raise AssertionError('no inverse')
    return x % m

def encrypt(xs, y, m):
    return [pow(x, y, m) for x in xs]

def decrypt(xs, y, m):
    y2 = modinv(y, m - 1)
    return encrypt(xs, y2, m)

y = 5
m = 17
e = encrypt(range(m), y, m)
print(decrypt(e, y, m))
RSA is based on the case when m is the product of two distinct primes p, q. The same ideas as above apply, but one needs to find y' such that yy' = 1 (mod lcm(p-1, q-1)). Unlike above, one can't do this easily given only y and m, because there are no known efficient methods for finding p and q.
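If the factorization m = p*q is known (which is exactly what an attacker lacks), the same idea works. A toy sketch with made-up primes, using Python 3.8's pow(..., -1, n) in place of the modinv above:
from math import gcd

# toy example only: p and q are tiny made-up primes; real RSA moduli are enormous
p, q = 11, 13
m = p * q
carmichael = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
y = 7                                                 # must be coprime to lcm(p-1, q-1)
y2 = pow(y, -1, carmichael)                           # modular inverse (Python 3.8+)

x1 = 42
x2 = pow(x1, y, m)        # forward direction: x1^y mod m
print(pow(x2, y2, m))     # prints 42, recovering x1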

Given x,y, How to find whether x! is divisible by y or not?

Computing x! can be very costly and might often result in overflow. Is there a way to find out whether x! is divisible by y or not, without computing x!?
For y <= x it's trivial.
But for y > x, e.g. x = 5 and y = 60, I am struggling to find a way that avoids computing x!
Compute the prime factorization of x! and y. You can do this without computing x! by factorizing every number from 2 to x and collecting all of the factors together. If the factors of y is a subset of the factors of x! then it is divisible.
If x and y are really large, so that it's not viable to iterate through all the numbers 1 to x, you can instead just factorize y and compute for every prime factor whether its maximum power in y also divides x!.
I've written about the algorithm in more detail in another answer.
Basically the check goes like this:
// computes the maximum q such that p^q divides n!
int max_power_of_p_in_fac(int p, int n) {
    int mu = 0;
    while (n / p > 0) {
        mu += n / p;
        n /= p;
    }
    return mu;
}

// checks whether y divides x!
bool y_divides_x_fac(int y, int x) {
    for each prime factor p^q of y:    // pseudocode: factorize y first
        if (max_power_of_p_in_fac(p, x) < q)
            return false;
    return true;
}
This results in an algorithm for the case x < y of complexity O(time to factorize y + log x * number of prime factors of y).
Obviously y can have at most O(log y) prime factors, so with Pollard's rho factorization this would be something like O(y^(1/4) + log x * log y).
The correctness can be proven using Legendre's formula: the exponent of a prime p in n! is sum_{i >= 1} floor(n / p^i), which is exactly what max_power_of_p_in_fac computes.
A different approach avoids factorizing y entirely: for every i from 1 to x, update y /= gcd(y, i). Then x! is divisible by y exactly when y == 1 at the end.
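A minimal Python sketch of that gcd-based check (no factorization of y needed; the function name is just illustrative):
from math import gcd

def factorial_divisible(x, y):
    # True iff x! is divisible by y, without ever computing x!
    for i in range(2, x + 1):
        y //= gcd(y, i)
        if y == 1:
            return True
    return y == 1

print(factorial_divisible(5, 60))   # True: 120 is divisible by 60
print(factorial_divisible(5, 7))    # False: 120 is not divisible by 7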

exponential multiplication algorithm that runs in O(n) time?

I am reading an algorithms textbook and I am stumped by this question:
Suppose we want to compute the value x^y, where x and y are positive
integers with m and n bits, respectively. One way to solve the problem is to perform y - 1 multiplications by x. Can you give a more efficient algorithm that uses only O(n) multiplication steps?
Would this be a divide and conquer algorithm? Would y - 1 multiplications by x run in Theta(n)? I don't know where to start with this question.
I understand this better in an iterative way:
You can compute x^z for all powers of two: z = (2^0, 2^1, 2^2, ... ,2^(n-1))
Simply by going from 1 to n and applying x^(2^(i+1)) = x^(2^i) * x^(2^i).
Now you can use these n values to compute x^y:
result = 1
for i = 0 to n-1:
    if the i'th bit in y is on:
        result *= x^(2^i)
return result
All of this is done in O(n) multiplications.
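A small runnable Python version of that loop (the name is illustrative; Python's built-in pow already does the same thing):
def pow_by_squaring(x, y):
    # O(n) multiplications, where n is the number of bits of y
    result = 1
    power = x                  # invariant: power == x^(2^i) for the current bit i
    while y > 0:
        if y & 1:              # the i'th bit of y is on
            result *= power
        power *= power         # x^(2^i) -> x^(2^(i+1))
        y >>= 1
    return result

print(pow_by_squaring(3, 13))  # 1594323, same as 3**13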
Apply a simple recursion for divide and conquer.
Here I am posting something more like pseudocode.
x^y :=
    base case: if y == 1, return x
    if y % 2 == 0:
        return (x^2)^(y/2)
    else:
        return x * (x^2)^((y-1)/2)
The y-1 multiplications solution is based on the identity x^y = x * x^(y-1). By repeated application of the identity, you know that you will decrease y down to 1 in y-1 steps.
A better idea is to decrease y more "energetically". Assuming an even y, we have x^y = x^(2*(y/2)) = (x^2)^(y/2). Assuming an odd y, we have x^y = x^(2*(y/2)+1) = x * (x^2)^(y/2), where y/2 is integer division.
You see that you can halve y, provided you continue the power computation with x^2 instead of x.
Recursively:
Power(x, y) =
    1                        if y = 0
    x                        if y = 1
    Power(x * x, y / 2)      if y even
    x * Power(x * x, y / 2)  if y odd
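Translated directly into Python, the recurrence above could look like this (a sketch; the y == 0 case matches the first line of the definition):
def power(x, y):
    if y == 0:
        return 1
    if y == 1:
        return x
    if y % 2 == 0:
        return power(x * x, y // 2)
    return x * power(x * x, y // 2)

print(power(2, 10))  # 1024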
Another way to view it is to read y as a sum of weighted bits: y = b0 + 2·b1 + 4·b2 + 8·b3 + ...
The properties of exponentiation imply:
x^y = x^b0 · x^(2·b1) · x^(4·b2) · x^(8·b3) · ...
    = x^b0 · (x^2)^b1 · (x^4)^b2 · (x^8)^b3 · ...
You can obtain the desired powers of x by squaring, and the binary decomposition of y tells you which powers to multiply.

Algorithm to partition a number

Given a positive integer X, how can one partition it into N parts, each between A and B where A <= B are also positive integers? That is, write
X = X_1 + X_2 + ... + X_N
where A <= X_i <= B and the order of the X_is doesn't matter?
If you want to know the number of ways to do this, then you can use generating functions.
Essentially, you are interested in integer partitions. An integer partition of X is a way to write X as a sum of positive integers. Let p(n) be the number of integer partitions of n. For example, if n=5 then p(n)=7 corresponding to the partitions:
5
4,1
3,2
3,1,1
2,2,1
2,1,1,1
1,1,1,1,1
The generating function for p(n) is
sum_{n >= 0} p(n) z^n = Prod_{i >= 1} ( 1 / (1 - z^i) )
What does this do for you? By expanding the right hand side and taking the coefficient of z^n you can recover p(n). Don't worry that the product is infinite since you'll only ever be taking finitely many terms to compute p(n). In fact, if that's all you want, then just truncate the product and stop at i=n.
Why does this work? Remember that
1 / (1 - z^i) = 1 + z^i + z^{2i} + z^{3i} + ...
So the coefficient of z^n is the number of ways to write
n = 1*a_1 + 2*a_2 + 3*a_3 +...
where now I'm thinking of a_i as the number of times i appears in the partition of n.
How does this generalize? Easily, as it turns out. From the description above, if you only want the parts of the partition to be in a given set A, then instead of taking the product over all i >= 1, take the product over only i in A. Let p_A(n) be the number of integer partitions of n whose parts come from the set A. Then
sum_{n >= 0} p_A(n) z^n = Prod_{i in A} ( 1 / (1 - z^i) )
Again, taking the coefficient of z^n in this expansion solves your problem. But we can go further and track the number of parts of the partition. To do this, add in another place holder q to keep track of how many parts we're using. Let p_A(n,k) be the number of integer partitions of n into k parts where the parts come from the set A. Then
sum_{n >= 0} sum_{k >= 0} p_A(n,k) q^k z^n = Prod_{i in A} ( 1 / (1 - q*z^i) )
so taking the coefficient of q^k z^n gives the number of integer partitions of n into k parts where the parts come from the set A.
How can you code this? The generating function approach actually gives you an algorithm for generating all of the solutions to the problem as well as a way to uniformly sample from the set of solutions. Once n and k are chosen, the product on the right is finite.
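As a concrete sketch of that, here is a small dynamic program that computes the same coefficient p_A(n, k) directly, i.e. the number of partitions of n into k parts with all parts drawn from A (count_partitions is just an illustrative name):
def count_partitions(n, k, A):
    # dp[j][t] = number of partitions of j into t parts, using only the
    # allowed part sizes processed so far; each part may repeat
    dp = [[0] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 1
    for part in sorted(set(A)):
        for j in range(part, n + 1):
            for t in range(1, k + 1):
                dp[j][t] += dp[j - part][t - 1]   # use one more copy of `part`
    return dp[n][k]

# p(5) = 7: sum over all possible numbers of parts, parts from {1..5}
print(sum(count_partitions(5, k, range(1, 6)) for k in range(6)))
# partitions of 10 into 3 parts, each between 2 and 5: {5,3,2}, {4,4,2}, {4,3,3}
print(count_partitions(10, 3, range(2, 6)))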
Here is a Python solution to this problem. It is quite unoptimised, but I have tried to keep it as simple as I can to demonstrate an iterative method of solving the problem.
The results of this method will commonly be a list of max values and min values with maybe 1 or 2 values in between. Because of this, there is a slight optimisation in there (using abs) which prevents the iterator from constantly trying to find min values by counting down from max, and vice versa.
There are recursive ways of doing this that look far more elegant, but this will get the job done and hopefully give you an insight into a better solution.
SCRIPT:
# iterative approach, in case the number of partitions is particularly large
def splitter(value, partitians, min_range, max_range, part_values):
    # lower bound used to determine if the solution is within reach
    lower_bound = 0
    # upper bound used to determine if the solution is within reach
    upper_bound = 0
    # upper_range used as upper limit for the iterator
    upper_range = 0
    # lower_range used as lower limit for the iterator
    lower_range = 0
    # interval will be + or -
    interval = 0
    while value > 0:
        partitians -= 1
        lower_bound = min_range * partitians
        upper_bound = max_range * partitians
        # if the value is more likely near the upper bound, start from there
        if abs(lower_bound - value) < abs(upper_bound - value):
            upper_range = max_range
            lower_range = min_range - 1
            interval = -1
        # if the value is more likely near the lower bound, start from there
        else:
            upper_range = min_range
            lower_range = max_range + 1
            interval = 1
        for i in range(upper_range, lower_range, interval):
            # make sure what we are doing won't break the solution
            if lower_bound <= value - i and upper_bound >= value - i:
                part_values.append(i)
                value -= i
                break
    return part_values

def partitioner(value, partitians, min_range, max_range):
    if min_range * partitians <= value and max_range * partitians >= value:
        return splitter(value, partitians, min_range, max_range, [])
    else:
        print("this is impossible to solve")

def main():
    print(partitioner(9800, 1000, 2, 100))

main()
The basic idea behind this script is that, at each step, the remaining value needs to fall between min*parts and max*parts for the parts still left. If we always maintain this invariant, we will eventually end up at min <= value <= max for parts == 1; so if we keep taking away from the value while staying within this range, we will always find a result if one is possible.
For this code's example, it will basically always take away either max or min, depending on which bound the value is closer to, until some value that is neither min nor max is left over as the remainder.
A simple realization you can make is that the average of the X_i must be between A and B, so we can simply divide X by N and then do some small adjustments to distribute the remainder evenly to get a valid partition.
Here's one way to do it:
X_i = ceil (X / N) if i <= X mod N,
floor (X / N) otherwise.
This gives a valid solution if A <= floor (X / N) and ceil (X / N) <= B. Otherwise, there is no solution. See proofs below.
sum(X_i) == X
Proof:
Use the division algorithm to write X = q*N + r with 0 <= r < N.
If r == 0, then ceil (X / N) == floor (X / N) == q so the algorithm sets all X_i = q. Their sum is q*N == X.
If r > 0, then floor (X / N) == q and ceil (X / N) == q+1. The algorithm sets X_i = q+1 for 1 <= i <= r (i.e. r copies), and X_i = q for the remaining N - r pieces. The sum is therefore (q+1)*r + (N-r)*q == q*r + r + N*q - r*q == q*N + r == X.
If floor (X / N) < A or ceil (X / N) > B, then there is no solution.
Proof:
If floor(X / N) < A, then floor(X / N) <= A - 1, so X < (floor(X / N) + 1) * N <= A * N. Any valid solution has every piece at least A, so its sum is at least A*N; even using only the smallest pieces possible, the sum would be larger than X.
Similarly, if ceil(X / N) > B, then ceil(X / N) >= B + 1, so X > (ceil(X / N) - 1) * N >= B * N. Any valid solution has every piece at most B, so its sum is at most B*N; even using only the largest pieces possible, the sum would be smaller than X.
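A tiny Python sketch of that construction (the function name is illustrative):
from collections import Counter

def partition_even(X, N, A, B):
    # X mod N parts equal to ceil(X/N), the remaining parts equal to floor(X/N)
    q, r = divmod(X, N)
    if q < A or q + (1 if r else 0) > B:
        return None                       # no solution, per the argument above
    return [q + 1] * r + [q] * (N - r)

print(partition_even(20, 6, 2, 5))                   # [4, 4, 3, 3, 3, 3]
print(Counter(partition_even(9800, 1000, 2, 100)))   # Counter({10: 800, 9: 200})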

Resources