How do I establish a big-O upper bound on the number of times the following function calls itself, as a function of b?
function multiply(a, b)
    if b = 0 then return 0
    else if b is even then
        t := multiply(a, b/2);
        return t + t;
    else if b is odd then
        t := multiply(a, b-1);
        return a + t;
This is a function that multiplies two integers. I'm confused about how to handle the if/else branches when setting up the recurrence relation. I was thinking the answer is T(n) = T(n/2) + T(n-1). Is that correct?
Letting F(N) be the number of recursive calls made for b = N, the code above gives:
F(0) = 0
If Even: F(N) = F(N/2) + 1
If Odd: F(N) = F(N-1) + 1 = F((N-1)/2) + 2 <- the next number, N-1, is definitely even
Solving the odd-even-odd-even case (the worst scenario):
F(N) = F((N-1)/2) + 2, which gives F(N) = O(log N)
Another way to think of the problem: the odd-even-odd-even case has at most twice the depth of the even-even-even-even case. The even-only case has depth log N, so the odd-even-odd-even case has depth at most 2·log N.
Appreciate the following two points:
Calling multiply with an odd input will trigger a call with the same input minus one, which is an even number. So it takes one additional call to reach an even number.
Calling multiply with an even input will trigger another call with the input halved. The resulting number will be either even or odd; in the latter case, see the point above.
In the worst case, then, it takes two calls to halve the input being passed to multiply. This behavior is consistent with a call count of 2·lg N (where lg is the log base 2), which is the same as O(lg N).
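To see this concretely, here is a hypothetical Python port of the pseudocode, instrumented to count its own recursive calls (the instrumentation is mine, not part of the original function):

def multiply(a, b):
    # Returns (a*b, number of recursive calls made).
    if b == 0:
        return (0, 0)
    if b % 2 == 0:
        t, calls = multiply(a, b // 2)
        return (t + t, calls + 1)
    t, calls = multiply(a, b - 1)
    return (a + t, calls + 1)

# b = 2**k - 1 alternates odd/even all the way down, the worst case:
# multiply(5, 1023) returns (5115, 19), within the 2*log2(b) ~ 20 bound.
print(multiply(5, 1023))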
fun root(n) =
    if n > 0 then
        let
            val x = root(n div 4);
        in
            if (2*x+1)*(2*x+1) > n then 2*x
            else 2*x+1
        end
    else 0;

fun isPrime(n,c) =
    if c <= root(n) then
        if n mod c = 0 then false
        else isPrime(n,c+1)
    else true;
The time complexity of the root(n) function here is O(log(n)): the number is divided by 4 at every step, and the work in the function body itself is O(1). The time complexity of the isPrime function is O(sqrt(n)), as it runs iteratively from 1 up to sqrt(n). The issue I face now is: what would be the order of both functions together? Would it just be O(sqrt(n)), or would it be O(sqrt(n)*log(n)), or something else altogether?
I'm new to big-O notation in general. I have gone through multiple websites and YouTube videos trying to understand the concept, but I can't seem to calculate it with any confidence. If you could point me towards a few resources to help me practice calculating it, that would be a great help.
root(n) is O(log₄(n)), yes.
isPrime(n,c) is O((√n - c) · log₄(n)):
You recompute root(n) at every step even though it never changes, which causes the "... · log₄(n)" factor.
You iterate c from some starting value up to root(n); while c is bounded above by root(n), it is not bounded below: c could start at 0, at an arbitrarily large negative number, at a positive number less than or equal to √n, or at a number greater than √n. If you assume that c starts at 0, then isPrime(n,c) is O(√n · log₄(n)).
You probably want to prove this using either induction or the Master Theorem. You may also want to simplify isPrime so that it does not take c as an argument in its outer signature, and so that it does not recompute root(n) unnecessarily on every iteration.
For example:
fun isPrime n =
    let
        val sq = root n
        fun check c = c > sq orelse (n mod c <> 0 andalso check (c + 1))
    in
        check 2
    end
This isPrime(n) is O(√n + log₄(n)), or just O(√n) if we omit lower-order terms.
First it computes root n once, at a cost of O(log₄(n)).
Then it loops from 2 up to root n, at a cost of O(√n).
Note that neither of us has proven anything formally at this point.
(Edit: Changed check (n, 0) to check (n, 2), since duh.)
(Edit: Removed n as argument from check since it never varies.)
(Edit: As you point out, Aryan, looping from 2 to root n is indeed O(√n) even though computing root n takes only O(log₄(n))!)
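For readers more comfortable with Python, here is a rough sketch of the same fix; math.isqrt stands in for root, so this mirrors only the loop structure, not the O(log₄(n)) square-root computation:

from math import isqrt

def is_prime(n):
    # Compute the square-root bound once, before the loop,
    # instead of recomputing it on every iteration. Assumes n >= 2.
    sq = isqrt(n)          # plays the role of root n
    c = 2
    while c <= sq:         # O(sqrt(n)) iterations
        if n % c == 0:
            return False
        c += 1
    return True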
I have a doubt about the expected and worst-case execution time of algorithms. I was thinking about a simple algorithm, but it contains an infinite loop and I don't know how to reason about its expected execution time; I can always determine the running time of algorithms that have explicit stopping conditions.
This is my example (all I know is that decision(i) returns true for more than n/2 of the possible values of i):
while (true)
{
    int i = random(0, n-1);
    bool e = decision(i); // Θ(n)
    if (e == true)
        return i;
}
The expected execution time is O(n).
With probability p >= 1/2, the first i will give decision(i) == true, so the loop will terminate after one call to decision.
Let q = 1 - p be the probability that it did not happen.
Then, with probability q * p, the second i will give decision(i) == true, so the loop will terminate after two calls to decision.
Similarly, with probability q^2 * p, the third i will give decision(i) == true, so the loop will terminate after three calls to decision, and so on.
Taking the expectation, the expected number of calls to decision is the sum of (k+1)·q^k·p over all k >= 0, which simplifies (e.g., via the tail-sum formula E[X] = P(X > 0) + P(X > 1) + ...) to
1 + q + q^2 + q^3 + ...
As q <= 1/2, this sum is at most 1 + 1/2 + 1/4 + 1/8 + ..., which converges to 2.
So the expected number of calls is at most 2.
In total, as each call to decision takes O(n) time, the expected running time is O(2·n), which is still O(n) since 2 is a constant.
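As a sanity check, here is a small Python simulation; decision is replaced by a coin flip that comes up true with probability p, an assumption standing in for the real predicate:

import random

def average_calls(p, trials=100_000):
    # Average number of calls to the stand-in decision across many runs.
    total = 0
    for _ in range(trials):
        calls = 0
        while True:
            calls += 1
            if random.random() < p:   # stand-in for decision(i) == true
                break
        total += calls
    return total / trials

print(average_calls(0.5))   # hovers around 2.0, matching the series above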
It depends on the probability that decision ends up being true.
Each call to decision takes n steps, so one iteration of the loop costs O(n).
Now take the probability of decision being true; assume it is 50%. Then on average we need 2n steps in total (the sum of prob^x · n for x = 0..infinity, with prob = 0.5).
Even though the constant factor grows as the success probability shrinks, the total cost is still linear in the cost of decision, and therefore still O(n).
The following function calculates a^b.
Assume that we already have a prime_list which contains all the needed primes, sorted from small to large.
The code is written in Python.
from math import isqrt

def power(a, b):
    if b == 0:
        return 1
    # Only try primes <= sqrt(b); isqrt(b) avoids the off-by-one of
    # int(sqrt(b)) + 1, which allowed prime == b for b = 2 and caused
    # infinite recursion.
    prime_range = isqrt(b)
    for prime in prime_list:
        if prime > prime_range:
            break
        if b % prime == 0:
            # Use integer division: b / prime would be a float in Python 3.
            return power(power(a, prime), b // prime)
    return a * power(a, b - 1)
How do I determine its time complexity?
P.S. The code isn't perfect, but as you can see, the idea is to use primes to reduce the number of arithmetic operations.
I am still looking for an ideal implementation, so please help if you come up with something. Thanks!
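Incidentally, a quick way to gain confidence in the fixed code is to brute-force it against Python's built-in exponentiation; the hard-coded prime_list below is my assumption, since the question leaves it abstract:

prime_list = [2, 3, 5]   # every prime <= sqrt(b) for the b tested below

for a in range(1, 6):
    for b in range(31):
        assert power(a, b) == a ** b, (a, b)
print("all cases agree")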
The worst case is when the for loop is exhausted. That only happens when b has no prime factor at most sqrt(b), i.e. b is prime, so the function recurses on b-1, which is even; b therefore gets divided by 2 in the next recursive call.
So in the worst case we divide b by a factor of 2 at a cost of approximately sqrt(b) loop iterations per step, until b reaches 1.
Setting up the recurrence
f(1) = 1 and f(n) = f(n/2) + sqrt(n)
and solving it with Wolfram Alpha, we get
f(n) = (1 + sqrt(2)) * (sqrt(2)*sqrt(n) - 1)
and that is
O(sqrt(b))
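Unrolling the recurrence shows where that closed form comes from; the per-level costs form a geometric series:

$$f(n) = \sqrt{n} + \sqrt{\tfrac{n}{2}} + \sqrt{\tfrac{n}{4}} + \dots = \sqrt{n}\sum_{i \ge 0}\left(\tfrac{1}{\sqrt{2}}\right)^{i} \le \frac{\sqrt{n}}{1 - 1/\sqrt{2}} = \left(2 + \sqrt{2}\right)\sqrt{n} = O(\sqrt{n})$$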
I have a question concerning a past exam paper (my school does not provide solutions for them). I was wondering if I've done the question correctly:
Let X be an array with the elements X(1),...,X(n) such that each X(i) is an integer satisfying 0≤X(i)≤n^2 , let Y be an array with the elements Y(1),...,Y(n) such that each Y(i) is an integer satisfying 1≤Y(i)≤n^4 , and let Z be an integer variable with the value satisfying 1≤Z≤n. Then consider the following fragment of code:
for i := 1 to Z do
    for j := n*n - X(i) to Y(i)*Y(i) + n*n do
        k := 0
Analyse the complexity of this code in the best case and worst case, counting 1 unit for each basic operation, and expressing the total count as a function of n.
Here is my approach:
For the worst case, I let Z = n, every Y(i) = n^4 and every X(i) = 0, trying to make the loops iterate as many times as possible. I think these are the values needed for the worst case, and my analysis gives T(n) = 2n^5 + 2n + 1: the first line contributes n+1, the second (n^4 + 1)·n, and the final line of pseudocode n^5.
For the best case, I let Z = 1, every X(i) = 0 and every Y(i) = 1.
That gives T(n) = 7, as the first line executes twice, the second three times, and the third twice. I am unsure whether this is right; I am counting the for loop's check that i has exceeded its bound (hence the extra +1 in the counts for the loop lines).
Is my complexity analysis right, or am I approaching this completely incorrectly / do I have a major misconception or error in my working? Thanks!
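If it helps, one way to check a hand count is to instrument the fragment in code and count operations for small n. A rough Python sketch (the unit-counting convention here, one unit per loop test and one per assignment, is my assumption and may differ from your course's):

def count_ops(n, Z, X, Y):
    # X and Y are dicts indexed from 1, matching the pseudocode.
    ops = 0
    i = 1
    while True:
        ops += 1                          # outer loop test
        if i > Z:
            break
        j = n * n - X[i]
        while True:
            ops += 1                      # inner loop test
            if j > Y[i] * Y[i] + n * n:
                break
            ops += 1                      # k := 0
            j += 1
        i += 1
    return ops

# Worst-case instantiation from above: Z = n, X(i) = 0, Y(i) = n**4.
n = 2
X = {i: 0 for i in range(1, n + 1)}
Y = {i: n ** 4 for i in range(1, n + 1)}
print(count_ops(n, n, X, Y))

Comparing the printed count with your formula for a few small n will tell you quickly whether the hand count holds up.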
I'm having trouble understanding the following property of divide-and-conquer algorithms.
A recursive method that divides a problem of size N into two independent
(nonempty) parts that it solves recursively calls itself less than N times.
The proof is:
If the parts are one of size k and one of size N-k, then the total number of recursive calls that we use is T(N) = T(k) + T(N-k) + 1, for N >= 1, with T(1) = 0.
The solution T(N) = N-1 is immediate by induction. If the sizes sum to a value less than N, the proof that the number of calls is less than N-1 follows from the same inductive argument.
I understand the formal proof above perfectly. What I don't understand is how this property is connected to the examples usually used to demonstrate the divide-and-conquer idea, in particular to the find-the-maximum problem:
static double max(double a[], int l, int r)
{
    if (l == r) return a[l];
    int m = (l+r)/2;
    double u = max(a, l, m);
    double v = max(a, m+1, r);
    if (u > v) return u; else return v;
}
In this case, when a consists of N = 2 elements, max(0, 1) will call itself 2 more times, namely max(0, 0) and max(1, 1), which equals N. If N = 4, max(0, 3) will call itself 2 times, and then each of the subsequent calls will also call max 2 times, so the total number of calls is 6 > N. What am I missing?
You're not missing anything. The theorem and its proof are wrong. The error is here:
T(N) = T(k) + T(N-k) + 1
The constant term of 1 should be 2, as the function makes one recursive call for each of the two pieces into which it divides the problem. With T(1) = 0 the corrected recurrence solves to T(N) = 2N - 2, so the correct statement is that the function calls itself fewer than 2N - 1 times, rather than fewer than N. Hopefully, this error will be fixed in the next edition of your textbook, or at least in the errata.
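For what it's worth, a hypothetical Python port of max, instrumented to count invocations, agrees with that corrected count:

def max_rec(a, l, r, counter):
    # counter[0] accumulates the total number of invocations.
    counter[0] += 1
    if l == r:
        return a[l]
    m = (l + r) // 2
    u = max_rec(a, l, m, counter)
    v = max_rec(a, m + 1, r, counter)
    return u if u > v else v

counter = [0]
max_rec([3.0, 1.0, 4.0, 1.5], 0, 3, counter)   # N = 4
print(counter[0] - 1)   # 6 recursive calls = 2N - 2, matching the question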