Given X, p, a, b, we need to find out how many positive integers n (1 <= n <= X) satisfy the following condition:
n * a^n ≡ b (mod p)
Constraints:
2 <= p <= 10^6,
1 <= a, b < p,
1 <= X <= 10^12
I have no idea how to solve this question; any approach or proof would be highly helpful.
Thanks.
This assumes p is prime.
For each i from 1 upwards, compute a^i mod p. At some point (call it q) you'll reach 1, and then you can stop: q is the multiplicative order of a mod p. Then finding all n <= X such that n * a^n ≡ b (mod p) and a^n ≡ a^i (mod p) is a question of counting all solutions to n ≡ b * (a^i)^(-1) (mod p) and n ≡ i (mod q), which you can do using the Chinese remainder theorem (p and q are coprime, because q divides p - 1).
This process enumerates every solution exactly once, and if you're careful it runs in O(p) time. (The care is needed to avoid O(p log p): update a^i (mod p) and (a^i)^(-1) (mod p) incrementally rather than computing them from scratch each iteration.)
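Here is a minimal Python sketch of this approach (assuming p is prime; pow(p, -1, q) needs Python 3.8+):

def count_solutions(X, p, a, b):
    # Count n in [1, X] with n * a^n ≡ b (mod p); assumes p is prime.
    # Step 1: q = multiplicative order of a mod p (q divides p - 1).
    q, ai = 1, a % p
    while ai != 1:
        ai = ai * a % p
        q += 1
    inv_a = pow(a, p - 2, p)       # a^(-1) mod p by Fermat's little theorem
    inv_p = pow(p, -1, q)          # p^(-1) mod q; exists since gcd(p, q) = 1
    total = 0
    ai, inv_ai = 1, 1              # a^i and (a^i)^(-1) mod p, updated incrementally
    for i in range(q):
        r = b * inv_ai % p         # need n ≡ r (mod p) ...
        t = (i - r) * inv_p % q    # ... and n ≡ i (mod q): CRT lift
        n0 = r + p * t             # smallest positive solution (r >= 1 since b != 0 mod p)
        if n0 <= X:
            total += (X - n0) // (p * q) + 1
        ai = ai * a % p
        inv_ai = inv_ai * inv_a % p
    return total

Each loop iteration does a constant number of modular multiplications, so the whole thing is O(p), as described above.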
Input: Two n-bit integers x and y, where x ≥ 0, y ≥ 1.
Output: The quotient and remainder of x divided by y.
if x = 0, then return (q, r) := (0, 0);
q := 0; r := x;
while (r >= y) do
{   q := q + 1;
    r := r - y };
return (q, r);
I have obtained the Big O complexity as O(n^2), but a friend says it is O(2^n), where n is the number of bits of the input.
Please provide an explanation.
The number of iterations of the while-loop is exactly floor(x/y). Each iteration takes O(n) bit operations, because that is the cost of the subtraction r - y.
Hence the complexity of the algorithm is n * floor(x/y). However, we want to express the complexity as a function of n, not as a function of x and y.
Thus the question becomes: how does floor(x/y) relate to n, in the worst case?
The biggest value that can be obtained for x/y, when x and y are two nonnegative n-bit numbers and y >= 1, is obtained by taking the biggest possible value for x and the smallest possible value for y.
The biggest possible value for x is x = 2**n - 1 (all bits of x are 1 in its binary representation);
The smallest possible value for y is y = 1.
Hence the biggest possible value for x/y is x/y = 2**n - 1.
The time-complexity of your division algorithm is O(n * 2**n), and this upper-bound is achieved when x = 2**n - 1 and y = 1.
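To see this concretely, here is a small Python version of the algorithm with an iteration counter (illustrative only): for n = 4 bits, the worst case x = 15, y = 1 already takes 2^4 - 1 = 15 iterations.

def slow_divide(x, y):
    # Division by repeated subtraction; also returns the iteration count.
    q, r, steps = 0, x, 0
    while r >= y:
        q, r = q + 1, r - y
        steps += 1
    return q, r, steps

print(slow_divide(15, 1))   # (15, 0, 15): 2**4 - 1 iterations for a 4-bit x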
My proposed solution:
When calculating Big O complexity we need to take n -> infinity, where n is the input size. We have 3 possibilities:
1. x and y both become infinity as n -> infinity
2. y becomes infinity as n -> infinity
3. x becomes infinity as n -> infinity
We are interested only in case 3 here:
(x - i * y) < y, where i is the number of iterations,
which can also be written as x/y < i + 1.
When x becomes infinity, the LHS (left-hand side) of this inequality is infinity, which implies the RHS is infinity as well.
So as n -> infinity, the number of iterations becomes equal to n.
Hence, the complexity is O(n^2)
I came across this question: prove whether the following statement is true or false. Let f(n) = n + log n; then f(n) = O(log^2 n). I'm unsure how to go about proving or disproving whether log^2 n is an upper bound for n. Could someone help me construct a proof?
Consider the function
g(x) = x(ln x)^2 ; x > 0
This function is positive and increasing for 0 < x < e^(-2).
To see why this is true, let's calculate its derivative:
g'(x) = 1*(ln x)^2 + x*2(ln x)/x
basically because the derivative of ln x is 1/x. Then
g'(x) = (ln x)((ln x) + 2)
which is positive for 0 < x < e^(-2), since both factors are negative in that interval.
This proves that g(x) is positive and increasing in the interval (0, e^(-2)). Moreover, g(x) -> 0 as x -> 0+ (the standard limit x(ln x)^2 -> 0). Therefore, for every positive constant c,
g(x) < c ; if x is small enough
which implies that
g(1/n) < c ; if n is large enough
or
(1/n)(ln n)^2 < c
or
n > (1/c)(ln n)^2
Since this holds for every c > 0, no constant multiple of (ln n)^2 eventually dominates n, so n is not O((ln n)^2). And since n is the dominant term of n + ln n, we get
n + (ln n) ≠ O((ln n)^2)
so the statement is false. (Changing the base of the logarithm only changes constant factors, so the conclusion is the same for log in any base.)
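As a quick numeric sanity check, the ratio n / (ln n)^2 grows without bound, so no constant c can ever satisfy n <= c * (ln n)^2:

import math

for n in (10, 10**3, 10**6, 10**9):
    print(n, n / math.log(n) ** 2)   # the ratio keeps growing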
I have a question about Euclid's algorithm for finding greatest common divisors:
gcd(p, q), where p > q and q is an n-bit integer.
I'm trying to follow a time complexity analysis of the algorithm (the input is n bits, as above):
gcd(p, q):
    if p == q:
        return q
    if p < q:
        return gcd(q, p)
    while q != 0:
        temp = p % q
        p = q
        q = temp
    return p
I already understand that the sum of the two numbers, u + v where u and v stand for initial values of p and q , reduces by a factor of at least 1/2.
Now let m be the number of iterations for this algorithm.
We want to find the smallest integer m such that (1/2)^m(u + v) <= 1
Here is my question.
I get that the sum of the two numbers after m iterations is upper-bounded by (1/2)^m (u + v). But I don't really see why the maximum m is reached when this quantity is <= 1.
The answer is O(n) for an n-bit q, but this is where I'm getting stuck.
Please help!
Imagine that we have p and q where p > q. Now, there are two cases:
1) p >= 2*q: in this case, p will be reduced to something less than q by the mod, so the new sum will be at most 2/3 of what it was before.
2) q < p < 2*q: in this case, the mod operation just subtracts q from p, so again the new sum will be at most 2/3 of what it was before.
Therefore, each step reduces the sum to at most 2/3 of the previous sum. Since your numbers are n bits, the initial sum is at most 2^(n+1); so after log base 3/2 of 2^(n+1) steps, which is O(n), the upper bound on the sum drops below 1. Since the sum is a positive integer as long as the algorithm is running, it must have terminated by then.
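To see the 2/3 bound in action, here is a tiny trace (consecutive Fibonacci numbers are the classic worst case for Euclid's algorithm):

def gcd_trace(p, q):
    # Print how the sum p + q shrinks at each step of Euclid's algorithm.
    while q != 0:
        s = p + q
        p, q = q, p % q
        print(s, "->", p + q, "(ratio %.3f)" % ((p + q) / s))
    return p

gcd_trace(233, 144)   # consecutive Fibonacci numbers; every ratio is < 2/3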
I want to find a fast algorithm to evaluate an expression like the following, where P is prime.
A ^ B ^ C ^ D ^ E mod P
Example:
(9 ^ (3 ^ (15 ^ (3 ^ 15)))) mod 65537 = 16134
The problem is the intermediate results can grow much too large to handle.
Basically the problem reduces to computing a^T mod m for given a, m and a term T that is ridiculously huge. However, we are able to evaluate T mod n for a given modulus n much faster than T itself. So we ask: "Is there an integer n, such that a^(T mod n) mod m = a^T mod m?"
Now if a and m are coprime, we know that n = phi(m) fulfills our condition according to Euler's theorem:
a^T (mod m)
= a^((T mod phi(m)) + k * phi(m)) (mod m) (for some k)
= a^(T mod phi(m)) * a^(k * phi(m)) (mod m)
= a^(T mod phi(m)) * (a^phi(m))^k (mod m)
= a^(T mod phi(m)) * 1^k (mod m)
= a^(T mod phi(m)) (mod m)
If we can compute phi(m) (which is easy to do for example in O(m^(1/2)) or if we know the prime factorization of m), we have reduced the problem to computing T mod phi(m) and a simple modular exponentiation.
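A quick numeric illustration of the reduction, with made-up values (phi(100) = 40):

a, m = 7, 100    # gcd(7, 100) = 1, so Euler's theorem applies
T = 10**18       # stand-in for a huge exponent
ph = 40          # phi(100)
assert pow(a, T, m) == pow(a, T % ph, m)   # both are 1 here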
What if a and m are not coprime? The situation is not as pleasant as before, since there might not be a valid n with the property a^T mod m = a^(T mod n) mod m for all T. However, we can show that the sequence a^k mod m for k = 0, 1, 2, ... enters a cycle after some point; that is, there exist x and C with x, C < m such that a^y ≡ a^(y + C) (mod m) for all y >= x.
Example: For a = 2, m = 12, we get the sequence 2^0, 2^1, ... = 1, 2, 4, 8, 4, 8, ... (mod 12). We can see the cycle with parameters x = 2 and C = 2.
We can find the cycle parameters via brute force, by computing the sequence elements a^0, a^1, ... until we find two indices X < Y with a^X = a^Y. Now we set x = X and C = Y - X. This gives us an algorithm with O(m) multiplications per recursion level.
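A brute-force cycle finder along these lines might look like this (a sketch, not optimized):

def find_cycle(a, m):
    # Walk a^0, a^1, ... (mod m) until a value repeats; the first repeat
    # gives the cycle entry point x and the cycle length C.
    seen = {}                  # value -> first exponent at which it appeared
    val, k = 1 % m, 0
    while val not in seen:
        seen[val] = k
        val = val * a % m
        k += 1
    x = seen[val]
    C = k - x
    return x, C

print(find_cycle(2, 12))   # (2, 2), matching the example above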
What if we want to do better? Thanks to Jyrki Lahtonen from Math Exchange for providing the essentials for the following algorithm!
Let's evaluate the sequence d_k = gcd(a^k, m) until we find an x with d_x = d_{x+1}. This will take at most log(m) GCD computations, because x is bounded by the highest exponent in the prime factorization of m. Let C = phi(m / d_x). We can now prove that a^(k + C) ≡ a^k (mod m) for all k >= x, so we have found the cycle parameters in O(m^(1/2)) time.
Let's assume we have found x and C and want to compute a^T mod m now.
If T < x, the task is trivial to perform with simple modular exponentiation. Otherwise, we have T >= x and can thus make use of the cycle:
a^T (mod m)
= a^(x + ((T - x) mod C)) (mod m)
= a^(x + (-x mod C) + (T mod C) + k*C) (mod m) (for some k)
= a^(x + (-x mod C) + k*C) * a^(T mod C) (mod m)
= a^(x + (-x mod C)) * a^(T mod C) (mod m)
Again, we have reduced the problem to a subproblem of the same form ("compute T mod C") and two simple modular exponentiations.
Since the modulus is reduced by at least 1 in every recursion step, we get a pretty weak bound of O(P^(1/2) * min(P, n)) for the runtime of this algorithm, where n is the height of the power tower (the depth of the recursion). In practice we should do a lot better, since the moduli are expected to decrease exponentially. Of course this argument is a bit hand-wavy; maybe some more mathematically-inclined person can improve on it.
There are a few edge cases to consider that actually make your life a bit easier: you can stop immediately if m = 1 (the result is 0 in this case) or if a is a multiple of m (the result is 0 as well in this case).
EDIT: It can be shown that x = C = phi(m) is valid, so as a quick and dirty solution we can use the formula
a^T = a^(phi(m) + T mod phi(m)) (mod m)
for T >= phi(m) or even T >= log_2(m).
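Putting the quick-and-dirty formula together, here is a Python sketch. It assumes the exponent at every level of the tower satisfies T >= log_2(m), which holds for the example above; a fully robust version would detect when the tail of the tower is small enough to evaluate exactly.

def phi(m):
    # Euler's totient via trial division, O(m^(1/2)).
    result, d = m, 2
    while d * d <= m:
        if m % d == 0:
            while m % d == 0:
                m //= d
            result -= result // d
        d += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(exps, m):
    # Evaluate exps[0] ^ (exps[1] ^ (exps[2] ^ ...)) mod m.
    if m == 1:
        return 0
    if len(exps) == 1:
        return exps[0] % m
    ph = phi(m)
    t = tower_mod(exps[1:], ph)   # the tail of the tower, reduced mod phi(m)
    # a^T ≡ a^(phi(m) + (T mod phi(m))) (mod m), valid for T >= log_2(m)
    return pow(exps[0], ph + t, m)

print(tower_mod([9, 3, 15, 3, 15], 65537))   # 16134, matching the example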
if x:
    for i in range(a):
        for z in range(a):
            for k in range(z):
                for p in range(i):
                    c = (i * z) + (k * p)
else:
    for i in range(a):
        for z in range(a):
            for k in range(z):
                c = (i * z) + (k * p)
Would this be O(n^4)? Also, how many multiplications would occur?
EDIT: updated the code. Also, since the lower bound captures the maximum number of steps a valid input will force, wouldn't big omega be n^4 as well?
Yes, the complexity is still O(n^4). To make things simple, here is a trick to rearrange your code:
for i in range(a):
    for p in range(i):
        f(i, p)
where f(i, p) is
for z in range(a):
    for k in range(z):
        c = (i * z) + (k * p)
In the first part, f(i, p) is executed about n^2/2 times to leading order (because of the summation sum_i i = a(a - 1)/2; do the math yourself). Similarly, each call f(i, p) itself costs about n^2/2, by the same summation over z.
So the combined order is O(n^4) (about n^4/4 executions of the innermost statement), and each execution performs two multiplications, so the number of multiplications is about n^4/2.
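An empirical check of the ~n^4/4 count for the innermost statement (a quick test, treating a as the input size n):

def count_inner(a):
    # Count executions of the innermost statement of the if-branch.
    count = 0
    for i in range(a):
        for z in range(a):
            for k in range(z):
                for p in range(i):
                    count += 1
    return count

for a in (8, 16, 32):
    print(a, count_inner(a), a**4 // 4)   # the exact count is (a*(a-1)//2)**2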
The following code would only be O(n^4) if all of the numbers a, z, and i were O(n).
for i in range(a):
    for z in range(a):
        for k in range(z):
            for p in range(i):
                c = (i * z) + (k * p)
As you've written it, all we know is that the code block is O(a^2 * z * i). Similarly, the total number of multiplications that would occur is 2 * a^2 * z * i. And, again, if a, z, and i are all O(n), the number of multiplications would be O(n^4).
I'm not sure what you want to know about the second block of code.