I have a question about Euclid's algorithm for finding greatest common divisors.
gcd(p, q) where p > q and q is an n-bit integer.
I'm trying to follow a time complexity analysis of the algorithm (the input is n bits, as above):
gcd(p, q)
    if (p == q)
        return q
    if (p < q)
        return gcd(q, p)
    while (q != 0)
        temp = p % q
        p = q
        q = temp
    return p
I already understand that the sum of the two numbers, u + v (where u and v stand for the initial values of p and q), shrinks to at most 1/2 of its value at each iteration.
Now let m be the number of iterations for this algorithm.
We want to find the smallest integer m such that (1/2)^m (u + v) <= 1.
Here is my question.
I get that the sum of the two numbers after m iterations is upper-bounded by (1/2)^m (u + v). But I don't really see why the maximum m is reached when this quantity is <= 1.
The answer is O(n) for an n-bit q, but this is where I'm getting stuck.
Please help!!
Imagine that we have p and q where p > q. Now, there are two cases:
1) p >= 2*q: in this case, p will be reduced to something less than q by the mod, so the new sum q + (p mod q) is less than 2*q while the old sum p + q is at least 3*q; hence the sum becomes at most 2/3 of what it was before.
2) q < p < 2*q: in this case, the mod operation amounts to subtracting q from p, so the new sum is q + (p - q) = p while the old sum p + q is more than 3*p/2; so again the overall sum becomes at most 2/3 of what it was before.
Therefore, at each step the sum shrinks to at most 2/3 of the previous sum. Since your numbers are n bits, the initial sum is at most 2^{n+1}; so after log base 3/2 of 2^{n+1} steps, which is O(n), the sum drops below 1 and therefore reaches 0.
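If you want to see the bound in action, here is a quick Python sketch (my own illustration, not part of the original argument) that counts the loop iterations for random n-bit inputs:

import random

def gcd_iterations(p, q):
    # Returns (gcd, number of while-loop iterations).
    if p < q:
        p, q = q, p
    count = 0
    while q != 0:
        p, q = q, p % q
        count += 1
    return p, count

for n in (8, 16, 32, 64, 128):
    worst = 0
    for _ in range(1000):
        q = random.getrandbits(n) | 1     # random nonzero n-bit q
        p = q + random.getrandbits(n)     # ensure p >= q
        worst = max(worst, gcd_iterations(p, q)[1])
    print("n =", n, "bits; max iterations observed:", worst)

The observed worst case grows linearly in n, consistent with the O(n) bound.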
Input: Two n-bit integers x and y, where x ≥ 0, y ≥ 1.
Output: The quotient and remainder of x divided by y.
if x = 0, then return (q, r) := (0, 0);
q := 0; r := x;
while (r ≥ y) do
{ q := q + 1;
r := r - y};
return (q, r);
I have obtained the Big O complexity as O(n^2), but a friend says it is O(2^n), where n is the number of bits of the input.
Please provide an explanation.
The number of iterations of the while-loop is exactly floor(x/y). Each iteration takes O(n) operations, because that is the complexity of the subtraction r - y on n-bit numbers.
Hence the complexity of the algorithm is O(n * floor(x/y)). However, we want to express the complexity as a function of n, not as a function of x and y.
Thus the question becomes: how does floor(x/y) relate to n, in the worst case?
The biggest value that x/y can take, when x and y are two nonnegative n-bit numbers and y >= 1, is obtained by taking the biggest possible value for x and the smallest possible value for y:
The biggest possible value for x is x = 2**n - 1 (all bits of x are 1 in its binary representation);
The smallest possible value for y is y = 1.
Hence the biggest possible value for x/y is x/y = 2**n - 1.
The time complexity of your division algorithm is thus O(n * 2**n), and this upper bound is achieved when x = 2**n - 1 and y = 1.
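To see that worst case concretely, here is a small Python sketch (my own, mirroring the pseudocode above) that counts the loop iterations:

def divide(x, y):
    # Repeated-subtraction division; returns (quotient, remainder, iterations).
    q, r = 0, x
    iterations = 0
    while r >= y:
        q += 1
        r -= y
        iterations += 1
    return q, r, iterations

for n in (4, 8, 16):
    x, y = 2**n - 1, 1                  # worst case: largest x, smallest y
    print("n =", n, "-> iterations =", divide(x, y)[2], "= 2**n - 1")

The iteration count doubles with every extra bit, and multiplying by the O(n) cost of each subtraction gives the O(n * 2**n) bound.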
My proposed solution:
When calculating Big O complexity we need to take n -> infinity, where n is the input size. We have 3 possibilities:
1) x and y both become infinity as n -> infinity
2) y becomes infinity as n -> infinity
3) x becomes infinity as n -> infinity
We are interested only in case 3 here.
(x - i*y) < y, where i is the number of iterations,
which can also be written as x/y < i + 1.
When x becomes infinity, the LHS (left-hand side) of this inequality is infinity, which implies the RHS is infinity as well.
So as n -> infinity, the number of iterations becomes equal to n.
Hence, the complexity is O(n^2)
I have recently stumbled upon an algorithmic problem and I can't get to the end of it. You're given a positive integer N < 10^13, and you need to choose a nonnegative integer M such that the sum M*N + N*(N-1)/2 has the least number of divisors that lie between 1 and N, inclusive.
Can someone point me to the right direction for solving this problem?
Thank you for your time.
Find a prime P greater than N. There are a number of ways to do this.
If N is odd, then M*N + N*(N-1)/2 is a multiple of N. It must be divisible by any factor of N, but if we choose M = P - (N-1)/2, then M*N + N*(N-1)/2 = P*N, so it isn't divisible by any other integers between 1 and N.
If N is even, then M*N + N*(N-1)/2 is a multiple of N/2. It must be divisible by any factor of N/2, but if we choose M = (P - N + 1)/2 (which must be an integer), then M*N + N*(N-1)/2 = (P - N + 1)*N/2 + (N-1)*N/2 = P*N/2, so it isn't divisible by any other integers between 1 and N.
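Here is a hedged Python sketch of this construction. The is_prime helper is naive trial division that I'm adding purely for illustration; for N near the 10^13 limit you would substitute a Miller-Rabin test:

def is_prime(v):
    # Naive trial division; fine for small N, too slow near 10**13.
    if v < 2:
        return False
    d = 2
    while d * d <= v:
        if v % d == 0:
            return False
        d += 1
    return True

def choose_m(n):
    p = n + 1
    while not is_prime(p):              # find a prime P > N
        p += 1
    if n % 2 == 1:
        return p - (n - 1) // 2         # then M*N + N*(N-1)/2 = P*N
    return (p - n + 1) // 2             # then M*N + N*(N-1)/2 = P*N/2

for n in (7, 10, 100):
    m = choose_m(n)
    value = m * n + n * (n - 1) // 2
    divisors = sum(1 for d in range(1, n + 1) if value % d == 0)
    print("N =", n, "M =", m, "value =", value, "divisors in [1, N]:", divisors)

The printed divisor counts match the divisors of N (or of N/2 when N is even) exactly, as argued above.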
According to Wikipedia, a linear congruential generator is defined by the recurrence relation below:
X(n) = (a * X(n-1) + c) mod m
where 0 < m, 0 <= a < m, 0 <= c < m, 0 <= X(0) < m are integer constants that specify the generator.
If the values of a, c, m, X(0), and n are given, can I determine the k-th smallest value (1 <= k <= n) of the set {X(0), X(1), ..., X(n)} very fast (faster than O(n), i.e., without a scan- or sort-based algorithm)?
Assuming you're not storing the k lowest items during generation ...
If (n >= m) and the constants meet the criteria for a full period (the Hull-Dobell conditions), then the k-th smallest item will be k-1.
If (n >= m) but the constants do not meet the criteria, or if (n < m), then you need to do a linear search, which can terminate early once the k-th lowest value seen so far is k-1 (nothing smaller can appear).
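Below is a Python sketch of this shortcut (my own code; the constants in the final line are just illustrative):

from math import gcd

def prime_factors(m):
    # Distinct prime factors of m by trial division.
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

def has_full_period(a, c, m):
    # Hull-Dobell: full period iff gcd(c, m) = 1, every prime factor
    # of m divides a - 1, and (4 | m) implies (4 | a - 1).
    return (gcd(c, m) == 1
            and all((a - 1) % p == 0 for p in prime_factors(m))
            and (m % 4 != 0 or (a - 1) % 4 == 0))

def kth_smallest(a, c, m, x0, n, k):
    if n >= m and has_full_period(a, c, m):
        return k - 1                    # all residues 0..m-1 occur
    xs, x = {x0}, x0                    # fallback: linear O(n) scan
    for _ in range(n):
        x = (a * x + c) % m
        xs.add(x)
    return sorted(xs)[k - 1]

print(kth_smallest(5, 3, 8, 0, 10, 3))  # full period, so this prints 2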
The algorithm is taken from the great "Algorithms and Programming: Problems and Solutions" by Alexander Shen (namely exercise 1.1.28).
Following is my translation from Russian, so excuse any mistakes or ambiguity; please correct me if you feel so inclined.
What the algorithm should do
Given a natural number n, the algorithm calculates the number of solutions of the inequality
x*x + y*y < n
in natural (non-negative) numbers, without using operations on real numbers.
In Pascal
k := 0; s := 0;
{at this moment of execution
 (s) = number of solutions of the inequality
 x*x + y*y < n with x < k}
while k*k < n do begin
  l := 0; t := 0;
  while k*k + l*l < n do begin
    l := l + 1;
    t := t + 1;
  end;
  {at this line
   (t) = number of solutions of k*k + y*y < n
   for the given (k) with y >= 0}
  k := k + 1;
  s := s + t;
end;
{k*k >= n, so s = number of solutions of the inequality}
Further in the text, Shen says briefly that the number of operations performed by this algorithm is "proportional to n, as one can calculate". So I ask: how can one calculate that with strict mathematics?
You have two loops, one inside the other.
The external loop has this condition: k*k < n, so k goes from 0 up to SQRT(n),
and the internal loop has this condition: k*k + l*l < n, so l goes from 0 up to SQRT(n - k^2), which is smaller than SQRT(n).
So the maximum number of iterations is less than SQRT(n) * SQRT(n), which is n, and in every iteration a constant number of operations is done.
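To double-check this, here is a direct Python transcription of the Pascal code (my own sketch; the steps counter is added instrumentation):

def count_solutions(n):
    # Counts pairs (x, y) of non-negative integers with x*x + y*y < n,
    # along with the total number of loop iterations performed.
    k = s = steps = 0
    while k * k < n:
        steps += 1                  # one outer-loop iteration
        l = t = 0
        while k * k + l * l < n:
            l += 1
            t += 1
            steps += 1              # one inner-loop iteration
        k += 1
        s += t
    return s, steps

for n in (100, 10000, 1000000):
    s, steps = count_solutions(n)
    print("n =", n, "solutions =", s, "loop steps =", steps)

The step count hovers around (pi/4) * n, since the solutions fill a quarter-disc of radius sqrt(n); in any case it is bounded by a constant times n.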
The number of operations done by nested loops is the product of the two loop lengths,
for example:
for i = 1 to 5
    for j = 1 to 10
        print j + i
    end
end
will print 5*10 = 50 times
In your example the outer loop runs sqrt(n) times, that is, until k^2 = n, i.e., k = sqrt(n).
The inner loop runs at most sqrt(n) times as well: k is constant within it, and it stops when k^2 + l^2 >= n; the most iterations occur at k = 0, where the loop runs until l^2 >= n, i.e., l = sqrt(n).
So the total number of iterations is at most sqrt(n) * sqrt(n) = n, i.e., O(n).
The time taken by your algorithm is proportional to the number of operations it performs. Therefore, you just have to check that the time taken by your algorithm grows proportionally with the input size n. You can do so by timing the algorithm's completion for a wide range of n's and plotting n vs. time. Doing so should give you a linear graph.
/* Recursive Euclidean algorithm; assumes m != 0. */
int gcd(int n, int m)
{
    if (n % m == 0) return m;
    n = n % m;
    return gcd(m, n);
}
I solved this and I got:
T(n, m) = 1 + T(m, n%m) if n > m
= 1 + T(m, n) if n < m
= m if n%m == 0
I am confused about how to proceed further to get the final result. Please help me solve this.
The problem here is that the size of the next values of m and n depend on exactly what the previous values were, not just their size. Knuth goes into this in detail in "The Art of Computer Programming" Vol 2: Seminumerical algorithms, section 4.5.3. After about five pages he proves what you might have guessed, which is that the worst case is when m and n are consecutive fibonacci numbers. From this (or otherwise!) it turns out that in the worst case the number of divisions required is linear in the logarithm of the larger of the two arguments.
After a great deal more heavy-duty math, Knuth proves that the average case is also linear in the logarithm of the arguments.
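As a quick illustration of the Fibonacci worst case (a sketch I'm adding, not Knuth's analysis), one can count the divisions directly:

def divisions(n, m):
    # Number of modulo operations Euclid's algorithm performs on (n, m).
    count = 0
    while m != 0:
        n, m = m, n % m
        count += 1
    return count

a, b = 1, 1
for i in range(2, 26):
    a, b = b, a + b                 # now b = F(i+1) and a = F(i)
    if i % 5 == 0:
        print("F =", b, "divisions =", divisions(b, a))

The division count grows by one per Fibonacci index while the numbers themselves grow exponentially, so the count is linear in the logarithm of the inputs.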
mcdowella has given a perfect answer to this.
For an intuitive explanation you can think of it this way:
if n >= m, then n mod m < n/2.
This can be shown as follows:
if m <= n/2, then n mod m < m <= n/2;
if m > n/2, then n mod m = n - m < n/2.
So effectively you are halving the larger input, and within two calls both arguments will be halved.
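The halving claim is easy to sanity-check with a short Python snippet (my own check, not part of the answer above):

import random

# Check the claim: for n >= m >= 1, n mod m < n/2.
for _ in range(100000):
    n = random.randint(1, 10**12)
    m = random.randint(1, n)
    assert 2 * (n % m) < n
print("n mod m < n/2 held for 100000 random pairs")

Since both arguments at least halve every two calls, the recursion depth is O(log n), matching the logarithmic bound from the accepted answer.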