How to determine the time complexity of this algorithm?

The following function calculates a^b.
Assume that we already have a prime_list which contains all needed primes, sorted from small to large.
The code is written in Python.
from math import sqrt

def power(a, b):
    if b == 0:
        return 1
    if b == 2:
        return a * a  # base case added: without it, power(a, 2) recurses on itself forever
    prime_range = int(sqrt(b)) + 1
    for prime in prime_list:
        if prime > prime_range:
            break
        if b % prime == 0:
            return power(power(a, prime), b // prime)  # // for integer division in Python 3
    return a * power(a, b - 1)
How do I determine its time complexity?
P.S. The code isn't perfect, but as you can see, the idea is to use primes to reduce the number of arithmetic operations.
I am still looking for an ideal implementation, so please help if you come up with something. Thanks!

The worst case is when the for loop is exhausted. But in that case, b is divided by 2 in the next recursive call.
So in the worst case we divide b by a factor of 2 after approximately sqrt(b) operations, repeating until b reaches 1.
So if we set up the recurrence
f(1) = 1 and f(n) = f(n/2) + sqrt(n)
we get, using Wolfram Alpha,
f(n) = (1 + sqrt(2)) (sqrt(2) sqrt(n) - 1)
and that is
O(sqrt(b))
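The bound can be sanity-checked empirically. Below is a minimal instrumented sketch of the function (a base case for b == 2 is added so the recursion terminates, and a small hard-coded prime_list stands in as an assumption for the question's prime list); it counts recursive calls:

```python
from math import sqrt

# Assumption: a small hard-coded prime list stands in for the question's prime_list.
prime_list = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

calls = 0

def power(a, b):
    global calls
    calls += 1
    if b == 0:
        return 1
    if b == 2:
        return a * a  # added base case: without it, power(a, 2) recurses forever
    prime_range = int(sqrt(b)) + 1
    for prime in prime_list:
        if prime > prime_range:
            break
        if b % prime == 0:
            return power(power(a, prime), b // prime)
    return a * power(a, b - 1)

result = power(3, 1000)
```

For b = 1000 the naive recursion would make about 1000 calls; the prime-splitting version makes only a few dozen, consistent with growth well below linear in b.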

Related

How do I find the space and time complexities for this code

fun root(n) =
    if n > 0 then
        let
            val x = root(n div 4);
        in
            if (2*x+1)*(2*x+1) > n then 2*x
            else 2*x+1
        end
    else 0;

fun isPrime(n,c) =
    if c <= root(n) then
        if n mod c = 0 then false
        else isPrime(n,c+1)
    else true;
The time complexity of the root(n) function here is O(log(n)): the number is divided by 4 at every step, and the work in the function body itself is O(1). The time complexity of the isPrime function is O(sqrt(n)), as it runs iteratively from 1 to sqrt(n). The issue I face now is: what would be the order of both functions together? Would it just be O(sqrt(n)), or O(sqrt(n)*log(n)), or something else altogether?
I'm new to big-O notation in general. I have gone through multiple websites and YouTube videos trying to understand the concept, but I can't seem to calculate it with any confidence... If you could point me towards a few resources to help me practice calculating, it would be a great help.
root(n) is O(log₄(n)), yes.
isPrime(n,c) is O((√n - c) · log₄(n)):
You recompute root(n) in every step even though it never changes, causing the "... · log₄(n)" factor.
You iterate c from some starting value up to root(n); while it is bounded above by root(n), it is not bounded below: c could start at 0, at an arbitrarily large negative number, at a positive number less than or equal to √n, or at a number greater than √n. If you assume that c starts at 0, then isPrime(n,c) is O(√n · log₄(n)).
You probably want to prove this using either induction or by reference to the Master Theorem. You may want to simplify isPrime so that it does not take c as an argument in its outer signature, and so that it does not recompute root(n) unnecessarily on every iteration.
For example:
fun isPrime n =
    let
        val sq = root n
        fun check c = c > sq orelse (n mod c <> 0 andalso check (c + 1))
    in
        check 2
    end
This isPrime(n) is O(√n + log₄(n)), or just O(√n) if we omit lower-order terms.
First it computes root n once at O(log₄(n)).
Then it loops from 2 up to root n once at O(√n).
Note that neither of us have proven anything formally at this point.
(Edit: Changed check (n, 0) to check (n, 2), since duh.)
(Edit: Removed n as argument from check since it never varies.)
(Edit: As you point out, Aryan, looping from 2 to root n is indeed O(√n) even though computing root n takes only O(log₄(n))!)
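For illustration, here is a rough Python equivalent of the rewritten version (a sketch, using math.isqrt in place of the SML root function): the root is computed once, so the loop dominates the running time:

```python
from math import isqrt

def is_prime(n):
    """Trial division up to floor(sqrt(n)); the root is computed only once."""
    if n < 2:
        return False
    sq = isqrt(n)  # computed once, O(log n), instead of on every iteration
    for c in range(2, sq + 1):
        if n % c == 0:
            return False
    return True
```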

Time complexity of recursive function dividing the input value by 2/3 everytime

I know that the time complexity of a recursive function dividing its input by 2 is O(log₂ n). I have come across some interesting scenarios in
https://stackoverflow.com/a/42038565/8169857
Kindly help me to understand the logic behind the scenarios in the answer regarding the derivation of the formula
It's back to the recursion tree. Why is dividing by 2 O(log₂(n))? Because if n = 2^k, you must divide k times to reach 1. Hence the number of computations is k = log₂(n) comparisons at most. Now suppose each step keeps (c-1)/c of the input. Then if n = (c/(c-1))^k, we need log_{c/(c-1)}(n) operations to reach 1.
Now, for any constant c > 1, the limit of log₂(n) / log_{c/(c-1)}(n) as n → ∞ is a constant greater than zero, so log_{c/(c-1)}(n) = Θ(log₂(n)). Indeed, you can say this for any constants a, b > 1: log_a(n) = Θ(log_b(n)). Now the proof is complete.
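The constant ratio between the two logarithms can be checked numerically. A quick sketch with c = 3 (each step keeps 2/3 of the input):

```python
from math import log

c = 3
base = c / (c - 1)  # steps to reach 1 from n: log base 1.5 of n

# The ratio log2(n) / log_{1.5}(n) is the same constant for every n,
# which is exactly why the two bounds differ only by a constant factor.
ratios = [log(n, 2) / log(n, base) for n in (10**3, 10**6, 10**9)]
```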

Efficiently generate primes in Python and calculate complexity

Generating prime numbers from 1 to n Python 3. How to improve efficiency and what is the complexity?
Input: A number, max (a large number)
Output: All the primes from 1 to max
Output is in the form of a list and will be [2,3,5,7,11,13,.......]
The code attempts to perform this task in an efficient way (least time complexity).
from math import sqrt

max = (10**6)*3
print("\nThis code prints all primes till: ", max, "\n")
list_primes = [2]

def am_i_prime(num):
    """
    Input/Parameter the function takes: An integer number
    Output: returns True, if the number is prime and False if not
    """
    decision = True
    i = 0
    while list_primes[i] <= sqrt(num):  # till sqrt(n) to save comparisons
        if num % list_primes[i] == 0:
            decision = False
            break
            # break is inserted so that we get out of comparisons faster
            # e.g. for 1568, we should break from the loop as soon as we know that 1568 % 2 == 0
        i += 1
    return decision

for i in range(3, max, 2):  # starts from 3 as our list contains 2 from the beginning
    if am_i_prime(i) == True:
        list_primes.append(i)  # if a number is found to be prime, we append it to our list of primes
print(list_primes)
How can I make this faster? Where can I improve?
What is the time complexity of this code? Which steps are inefficient?
In what ways is the Sieve of Eratosthenes more efficient than this?
Working for the first few iterations:-
We have a list_primes which contains prime numbers. It initially contains only 2.
We go to the next number, 3. Is 3 divisible by any of the numbers in list_primes? No! We append 3 to list_primes. Right now, list_primes=[2,3]
We go to the next number 4. Is 4 divisible by any of the numbers in list_primes? Yes (4 is divisible by 2). So, we don't do anything. Right now list_primes=[2,3]
We go to the next number, 5. Is 5 divisible by any of the numbers in list_primes? No! We append 5 to list_primes. Right now, list_primes=[2,3,5]
We go to the next number, 6. Is 6 divisible by any of the numbers in list_primes? Yes (6 is divisible by 2 and also divisible by 3). So, we don't do anything. Right now list_primes=[2,3,5]
And so on...
Interestingly, it takes a rather deep mathematical theorem to prove that your algorithm is correct at all. The theorem is: "For every n ≥ 2, there is a prime number between n and n^2". I know it has been proven, and much stricter bounds are proven, but I must admit I wouldn't know how to prove it myself. And if this theorem is not correct, then the loop in am_i_prime can go past the bounds of the array.
The number of primes ≤ k is O (k / log k) - this is again a very deep mathematical theorem. Again, beyond me to prove.
But anyway, there are about n / log n primes up to n, and for these primes the loop will iterate through all primes up to n^(1/2), and there are O (n^(1/2) / log n) of them.
So for the primes alone, the runtime is therefore O (n^1.5 / log^2 n), so that is a lower bound. With some effort it should be possible to prove that for all numbers, the runtime is asymptotically the same.
O (n^1.5 / log n) is obviously an upper bound, but experimentally the number of divisions to find all primes ≤ n seems to be ≤ 2 n^1.5 / log^2 n, where log is the natural logarithm.
The following rearrangement and optimization of your code will reach your maximum in nearly 1/2 the time of your original code. It combines your top level loop and predicate function into a single function to eliminate overhead and manages squares (square roots) more efficiently:
def get_primes(maximum):
    primes = []
    if maximum > 1:
        primes.append(2)
        squares = [4]
        for number in range(3, maximum, 2):
            i = 0
            while squares[i] <= number:
                if number % primes[i] == 0:
                    break
                i += 1
            else:  # no break
                primes.append(number)
                squares.append(number * number)
    return primes

maximum = 10 ** 6 * 3
print(get_primes(maximum))
However, a sieve-based algorithm will easily beat this (as it avoids division and/or multiplication). Your code has a bug: setting max = 1 will create the list [2] instead of the correct answer of an empty list. Always test both ends of your limits.
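For comparison, a minimal Sieve of Eratosthenes sketch (an illustration, not a drop-in replacement for the code above); it marks composites by stepping through indices, with no trial division at all:

```python
def sieve(limit):
    """Return all primes <= limit using the Sieve of Eratosthenes."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Start at p*p: smaller multiples were already marked by smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]
```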
O(N**2)
Approximately speaking, the first call to am_i_prime does 1 comparison, the second does 2, ..., so the total count is 1 + 2 + ... + N, which is (N · (N+1)) / 2, which has order N squared.

How to solve the recurrence relation for this Multiplication algorithm

How do I establish a big-O upper bound for the number of times the function calls itself, as a function of b, for the following:
function multiply(a,b)
    if b = 0 then return 0
    else if b is even then
        t := multiply(a, b/2);
        return t+t;
    else if b is odd then
        t := multiply(a, b-1);
        return a+t;
This is a function to multiply two integer numbers. I'm confused about how to handle the if/else conditions in the recurrence relation. I was thinking that the answer is T(n) = T(n/2) + T(n-1). Is that correct?
function multiply(a,b)
    if b = 0 then return 0
    else if b is even then
        t := multiply(a, b/2);
        return t+t;
    else if b is odd then
        t := multiply(a, b-1);
        return a+t;
Therefore:
F(0) = 0
If even: F(N) = F(N/2) + 1
If odd: F(N) = F(N-1) + 1 = F((N-1)/2) + 2 <- the next number is definitely even
Solving the odd-even-odd-even case (the worst scenario):
F(N) = F((N-1)/2) + 2 = O(log N)
Another way to think of the problem: the odd-even-odd-even case has at most twice the depth of the even-even-even-even case. The even-only case has log N depth, thus the odd-even-odd-even case has at most 2·log N depth.
Appreciate the following two points:
Calling multiply with an odd input will trigger a call to the same input minus one, which is an even number. This takes one additional call to reach an even number.
Calling multiply with an even input will trigger another call with the input halved. The resulting number will be either even or odd, q.v. the above point.
In the worst-case scenario, it takes two calls to halve the input passed to multiply: one to make it even and one to halve it. This behavior is consistent with 2·O(lg N) running time (where lg is log base 2), which is the same as just O(lg N).
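A quick instrumented sketch in Python (a check of the bound, not part of the original answer) returns both the product and the number of calls, which stays within 2·lg(b) + 2 even for the all-odd-bits worst case:

```python
from math import floor, log2

def multiply(a, b):
    """Recursive multiply from the question; returns (product, number_of_calls)."""
    if b == 0:
        return 0, 1
    if b % 2 == 0:
        t, calls = multiply(a, b // 2)
        return t + t, calls + 1
    t, calls = multiply(a, b - 1)
    return a + t, calls + 1

# b = 1023 is all-ones in binary: the odd-even-odd-even worst case.
product, calls = multiply(6, 1023)
```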

Is there a polynomial time algorithm to test whether a number is a power of some other number?

I've just studied the famous paper PRIMES is in P and got confused.
The first step of the proposed algorithm is: if (n = a^b for natural numbers a and b > 1), output COMPOSITE. Since the whole algorithm runs in polynomial time, this step must also complete in O((log n)^c) (given that the input size is O(log n)). However, I can't figure out any algorithm that hits this target, even after some googling.
QUESTION:
Is there any algorithm available to test whether a number is a power of some other number in polynomial time?
Thanks and best regards!
If n = a^b (for a > 1) then b ≤ log₂ n, so to test this we can check all values of b from 2 up to log n; for each candidate b we find a by binary search between 1..sqrt(n). The binary search takes O(log n) iterations, and at each step of the search (for any candidate a being checked) we must test whether a^b == n, which takes O(log n), so the total search time will be O(log³ n). Maybe there is a faster way, but knowing that AKS is O(log⁶ n), this O(log³ n) doesn't harm anything.
A number n is a perfect power if there exist b and e for which b^e = n. For instance 216 = 6^3 = 2^3 · 3^3 is a perfect power, but 72 = 2^3 · 3^2 is not. The trick to determining whether a number is a perfect power is to know that, if the number is a perfect power, then the exponent e must be less than log₂ n, because if e is greater then 2^e will be greater than n. Further, it is only necessary to test prime e, because if a number is a perfect power with a composite exponent it will also be a perfect power with each prime factor of that composite exponent; for instance, 2^15 = 32768 = 32^3 = 8^5 is both a perfect cube and a perfect fifth power. Thus, the algorithm is to make a list of primes less than log₂ n and test each one. Since log₂ n is small, and the list of primes is even smaller, this isn't much work, even for large n.
You can see an implementation here.
public boolean isPerfectPower(int a) {
    if (a == 1) return true;
    for (int i = 2; i <= (int) Math.sqrt(a); i++) {
        double pow = Math.log10(a) / Math.log10(i);
        // Caveat: comparing a floating-point ratio of logarithms to its floor
        // can misfire due to rounding for some inputs.
        if (pow == Math.floor(pow) && pow > 1) return true;
    }
    return false;
}
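The prime-exponent approach described above can also be sketched with exact integer arithmetic, avoiding the floating-point caveat (a hedged illustration in Python; small_primes, int_root, and is_perfect_power are names invented here):

```python
from math import isqrt

def small_primes(limit):
    """Primes <= limit by trial division (limit is tiny here: about log2 n)."""
    return [p for p in range(2, limit + 1)
            if all(p % d for d in range(2, isqrt(p) + 1))]

def int_root(n, e):
    """Largest a with a**e <= n, found by binary search."""
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** e <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_perfect_power(n):
    """True if n = a**e for integers a > 1, e > 1; only prime e need be tried."""
    if n < 4:
        return False
    for e in small_primes(n.bit_length()):
        a = int_root(n, e)
        if a > 1 and a ** e == n:
            return True
    return False
```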
