Time complexity of gcd algorithm in terms of big theta notation. - algorithm

Here n>m.
I have analyzed the worst case, when n is the Nth Fibonacci number and m is the (N-1)th Fibonacci number. In this case the total work is proportional to N, so the time complexity is O(N). But I am interested in finding the time complexity (theta notation) in terms of n, and I am not able to find the relation between n and N, i.e. the upper and lower bounds in terms of n.
int gcd(int n, int m) {
    if (n % m == 0) return m;
    if (n < m) swap(n, m);
    while (m > 0) {
        n = n % m;
        swap(n, m);
    }
    return n;
}
Please help.

I would try to analyse how m changes after two iterations of the loop. One iteration might change n very little, for example (1001, 1000) -> (1000, 1). m might also change very little, for example (1999, 1000) -> (1000, 999). So analysing how either changes in a single iteration gives you very little. However, if you have two iterations then you will always have a big change.
So analyse this: if r = n/m, how does m change after two iterations of the algorithm, depending on r? By what factor, at least, does it decrease? (By the way, what is r for Fibonacci numbers? Does that explain the worst case?)
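For illustration, here is a minimal sketch (the gcdSteps helper and the sample pairs are my own, not from the question) that runs the loop from the question and prints m after every iteration, so you can see that over any two consecutive iterations the smaller argument drops substantially:

#include <algorithm>
#include <cstdio>
using std::swap;

// Run the gcd loop from the question, printing m after each iteration
// so the shrink rate is visible, and return the number of iterations.
int gcdSteps(int n, int m) {
    if (n < m) swap(n, m);
    int steps = 0;
    while (m > 0) {
        n = n % m;
        swap(n, m);
        ++steps;
        printf("  after step %d: m = %d\n", steps, m);
    }
    return steps;
}

int main() {
    printf("(1001, 1000):\n"); gcdSteps(1001, 1000); // n barely changes in the first step
    printf("(1999, 1000):\n"); gcdSteps(1999, 1000); // m barely changes in the first step
    printf("(987, 610):\n");   gcdSteps(987, 610);   // consecutive Fibonacci numbers: worst case
    return 0;
}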

You might find this interesting.
It tries to explain Lamé's theorem for the number of steps of the GCD algorithm. The worst case is when the two numbers are consecutive Fibonacci numbers.
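As a quick check of that worst case, here is a small sketch (mine, not from the linked page) that counts the division steps for consecutive Fibonacci pairs; consistent with Lamé's theorem, the count grows linearly in the index N, i.e. logarithmically in the values themselves:

#include <cstdio>

// Number of modulo steps Euclid's algorithm takes on the pair (n, m).
int euclidSteps(long long n, long long m) {
    int steps = 0;
    while (m > 0) {
        long long r = n % m;
        n = m;
        m = r;
        ++steps;
    }
    return steps;
}

int main() {
    // Consecutive Fibonacci pairs F(N), F(N-1): the step count grows with N,
    // while the values themselves grow exponentially in N.
    long long a = 1, b = 1;   // F(2), F(1)
    for (int N = 2; N <= 40; ++N) {
        printf("N = %2d  F(N) = %12lld  steps = %d\n", N, a, euclidSteps(a, b));
        long long next = a + b;
        b = a;
        a = next;
    }
    return 0;
}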

Related

Running time of this algorithm for calculating square root

I have this algorithm
int f(int n) {
    int k = 0;
    while (true) {
        if (k == n*n) return k;
        k++;
    }
}
My friend says that it costs O(2^n). I don't understand why.
The input is n; the while loop iterates n*n times, which is n^2, hence the complexity is O(n^2).
This is based on your source code, not on the title.
For the title, this link may help: complexity of finding the square root.
From the answer by Emil Jeřábek I quote:
The square root of an n-digit number can be computed in time O(M(n)) using e.g. Newton’s iteration, where M(n) is the time needed to multiply two n-digit integers. The current best bound on M(n) is n log n 2^{O(log∗n)}, provided by Fürer’s algorithm.
You may look at the interesting entry for sqrt on wikipedia
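As an illustration of that idea, here is a minimal sketch of an integer square root via Newton's iteration (my own example, not taken from the quoted answer); each step roughly doubles the number of correct digits, so only O(log n) iterations are needed on top of the multiplications:

#include <cstdio>

// Integer square root by Newton's iteration: returns floor(sqrt(n)).
// Each iteration refines x via x = (x + n/x) / 2 until it stops decreasing.
long long isqrt(long long n) {
    if (n < 2) return n;
    long long x = n;
    long long y = (x + 1) / 2;
    while (y < x) {          // the sequence decreases until it reaches floor(sqrt(n))
        x = y;
        y = (x + n / x) / 2;
    }
    return x;
}

int main() {
    printf("%lld\n", isqrt(1000000));  // 1000
    printf("%lld\n", isqrt(999999));   // 999
    return 0;
}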
In my opinion the time cost is O(n^2): the function returns the value k = n^2 after n^2 iterations of the while loop.
I'm Manuel's friend,
what you don't consider is that the input n has length log(n)... The time complexity would be n^2 if we took the input size to be n itself, but it isn't.
So let x = log(n) be the length of the input; then n = 2^x.
Now if we calculate the cost as a function of n we get n^2, but n is equal to 2^x and we need to calculate the cost as a function of x (because time complexity is measured in the length of the input, not in its value), so:
cost = n^2 = (2^x)^2 = 2^(2x), which is exponential in the input length x
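Here is a small sketch (the bit-length loop and the operation counter are mine, purely for illustration) that counts the loop iterations of f as a function of the bit length of n, making the exponential growth in the input length visible:

#include <cstdio>

// Count how many times the loop in f(n) runs before k reaches n*n.
long long iterationsOfF(long long n) {
    long long k = 0;
    while (true) {
        if (k == n * n) return k;  // f returns k = n^2 after n^2 increments
        k++;
    }
}

int main() {
    // For each input bit length x, take n = 2^x - 1 and count the work:
    // the count grows like n^2 = (2^x)^2, i.e. exponentially in x.
    for (int x = 1; x <= 12; ++x) {
        long long n = (1LL << x) - 1;
        printf("bit length x = %2d  n = %5lld  iterations = %lld\n", x, n, iterationsOfF(n));
    }
    return 0;
}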
"In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows." (https://en.wikipedia.org/wiki/Big_O_notation)
Here's another explanation where the algorithm in question is the primality test: Why naive primality test algorithm is not polynomial

what is the time complexity of this method which checks if a number k can be represented as n^p

What is the time complexity of the method below? I'm calculating it as log(n)*log(n) = log(n).
public int isPower(int A) {
    if (A == 1)
        return 1;
    for (int i = (int)Math.sqrt(A); i > 1; i--) {
        int p = A;
        while (p % i == 0) {
            p = p / i;
        }
        if (p == 1)
            return 1;
    }
    return 0;
}
Worst-case complexity:
for(..) runs sqrt(A) times
Then while(..) depends on the prime factorization of A = p_1^e_1 * p_2^e_2 * .. * p_n^e_n, so in the worst case it runs max(e_1, e_2, .., e_n) times, which is at most roughly max(log_p_1(A), log_p_2(A), ..).
At most, while(..) will execute roughly log(A) times.
So the total rough worst-case complexity = sqrt(A)*log(A), leaving out constant factors.
Worst-case complexity happens for numbers A which are products of different integers, i.e. A = n_1^e_1 * n_2^e_2 * ..
Average-case complexity:
Given that, in a given range, numbers which are products of different integers are more numerous than numbers which are simply powers of a single integer, a number chosen at random is more likely to be a product of different integers, i.e. A = n_1^e_1 * n_2^e_2 * ... Thus the average-case complexity is roughly the same as the worst-case complexity, i.e. sqrt(A)*log(A).
Best-case complexity:
Best-case complexity happens when the number A is indeed a power of a single integer/prime, i.e. A = n^e. In this case the algorithm takes less time. I leave it as an exercise to compute the best-case complexity.
PS. Another way to see this is to understand that checking if a number is a power of a prime/integer, effectively one has to factor the number to its prime factorisation (which is what is done in this algorithm), which is effectively of the same complexity (see for example complexity of factoring by trial division).
SO should have mathjax support as cs.stackexchange has :p !
You iterate from sqrt(A) down to 2 and try to factorize. For a prime number your code iterates sqrt(A) times; that is its best case. If the number is 2^30, your code executes about
sqrt(2^30) * 30, i.e. sqrt(n) * log(n), times.
So your code's complexity is sqrt(n) * log(n).
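To get a feel for these counts, here is a rough instrumented sketch (a C++ translation of the method with an operation counter added by me; the sample inputs are also mine) that tallies how many times the loop bodies run:

#include <cmath>
#include <cstdio>

// Same logic as isPower(A), but counting every iteration of the for and while loops.
long long countWork(long long A) {
    long long work = 0;
    if (A == 1) return work;
    for (long long i = (long long)std::sqrt((double)A); i > 1; i--) {
        ++work;
        long long p = A;
        while (p % i == 0) {
            ++work;
            p = p / i;
        }
        if (p == 1) return work;
    }
    return work;
}

int main() {
    printf("A = 1000003 (prime):             work = %lld\n", countWork(1000003));
    printf("A = 510510 (= 2*3*5*7*11*13*17): work = %lld\n", countWork(510510));
    printf("A = 1048576 (= 2^20):            work = %lld\n", countWork(1048576));
    return 0;
}

Note that pure powers such as 2^20 can finish very quickly, because the very first candidate i = sqrt(A) already divides A down to 1, which matches the best-case remark above.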

Algorithm Analysis: Big-O explanation

I'm currently taking a class in algorithms. The following is a question I got wrong from a quiz: Basically, we have to indicate the worst case running time in Big O notation:
int foo(int n)
{
    int m = 0;
    while (n >= 2)
    {
        n = n / 4;
        m = m + 1;
    }
    return m;
}
I don't understand how the worst case time for this just isn't O(n). Would appreciate an explanation. Thanks.
foo calculates log4(n) by repeatedly dividing n by 4, using m as a counter. At the end, m is the number of times n can be divided by 4, i.e. roughly log base 4 of n. The running time is linear in the final value of m, so the algorithm is O(log n), which is also O(n).
Let's suppose that the worst case really were on the order of n, i.e. that the function took about n steps.
Now look at the loop: n is divided by 4 (i.e. 2^2) at each step. So in the first iteration n is reduced to n/4, in the second to n/16, and so on. It isn't being reduced linearly; it's being reduced by a constant factor each time, so in the worst case its running time is O(log n).
The computation can be expressed as a recurrence formula:
f(r) = 4*f(r+1)
The solution is
f(r) = k * 4^(-r)
where ^ means exponentiation and k is a constant. In our case f(0) = n, so k = n and
f(r) = n * 4^(-r)
Solving for r on the end condition we have: 2 = n * 4^(-r)
Using log in both sides, log(2) = log(n) - r* log(4) we can see
r = P * log(n);
Not having more branches or inner loops, and assuming division and addition are O(1), we can confidently say the algorithm runs P * log(n) steps and therefore is O(log(n)).
http://www.wolframalpha.com/input/?i=f%28r%2B1%29+%3D+f%28r%29%2F4%2C+f%280%29+%3D+n
Nitpicker's corner: a C int usually means the largest value is 2^31 - 1, so in practice that means at most 15 iterations, which is of course O(1). But I think your teacher really means O(log(n)).
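A quick sketch (the sample values are mine) confirming that the loop count grows like log4(n): m goes up by 1 only when n goes up by a factor of 4.

#include <cstdio>

// The function from the question: counts how many times n can be divided by 4.
int foo(int n)
{
    int m = 0;
    while (n >= 2)
    {
        n = n / 4;
        m = m + 1;
    }
    return m;
}

int main()
{
    // Print foo(n) for n = 1, 4, 16, ..., 4^10: the result increases by exactly 1
    // each time n is multiplied by 4.
    for (int n = 1; n <= (1 << 20); n *= 4)
        printf("n = %8d  foo(n) = %d\n", n, foo(n));
    return 0;
}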

How to perform the big O analysis

Below is my code.
How can you perform the big O analysis for intDiv(m-n, n), where m & n are two random inputs?
int intDiv(int m, int n) {
    if (n > m)
        return 0;
    else
        return 1 + intDiv(m - n, n);
} // this code finds the number of times n goes into m
It depends on the values of m and n. In general, the number of steps for any pair (m, n), with m >= 0 and n >= 1, is about ceil(m/n).
Therefore, the time complexity of the above algorithm is O([m/n]), where [] represents the ceiling of the number.
I think it totally depends on n and m. For example, if n = 1 then it is O(m), and when n = m/2 it takes only a constant number of calls.
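A tiny sketch (the call counter and sample pairs are mine) showing that the number of recursive calls tracks m/n:

#include <cstdio>

// intDiv from the question, with a counter for the number of calls made.
int intDiv(int m, int n, int &calls) {
    ++calls;
    if (n > m)
        return 0;
    else
        return 1 + intDiv(m - n, n, calls);
}

int main() {
    int pairs[][2] = {{1000, 1}, {1000, 10}, {1000, 500}, {1000, 999}};
    for (auto &p : pairs) {
        int calls = 0;
        int q = intDiv(p[0], p[1], calls);
        printf("intDiv(%d, %d) = %d, calls = %d\n", p[0], p[1], q, calls);
    }
    return 0;
}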

how to solve recursion of given algorithm?

int gcd(int n, int m)
{
    if (n % m == 0) return m;
    n = n % m;
    return gcd(m, n);
}
I solved this and I got
T(n, m) = 1 + T(m, n%m) if n > m
= 1 + T(m, n) if n < m
= m if n%m == 0
I am confused how to proceed further to get the final result. Please help me to solve this.
The problem here is that the size of the next values of m and n depends on exactly what the previous values were, not just on their size. Knuth goes into this in detail in "The Art of Computer Programming" Vol. 2: Seminumerical Algorithms, section 4.5.3. After about five pages he proves what you might have guessed, which is that the worst case is when m and n are consecutive Fibonacci numbers. From this (or otherwise!) it turns out that in the worst case the number of divisions required is linear in the logarithm of the larger of the two arguments.
After a great deal more heavy-duty math, Knuth proves that the average case is also linear in the logarithm of the arguments.
mcdowella has given a perfect answer to this.
For an intuitive explanation you can think of it this way:
if n >= m, then n mod m < n/2.
This can be shown as follows:
if m <= n/2, then: n mod m < m <= n/2
if m > n/2, then: n mod m = n - m < n/2
So effectively you are halving the larger input, and within two calls both arguments are at least halved.
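A small sketch (my own check, not part of the answer) that verifies the n mod m < n/2 claim numerically over a range of pairs:

#include <cstdio>

int main() {
    // Check that for all 1 <= m <= n <= 2000, n mod m < n/2 (so the larger
    // argument at least halves every time it becomes the second argument).
    bool ok = true;
    for (int n = 1; n <= 2000; ++n)
        for (int m = 1; m <= n; ++m)
            if (n % m >= (n + 1) / 2)   // (n + 1) / 2 is n/2 rounded up, to handle odd n
                ok = false;
    printf("claim holds for all 1 <= m <= n <= 2000: %s\n", ok ? "yes" : "no");
    return 0;
}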
