Runtime complexity of this algorithm?

Given a and b, I'm asked to compute a^b with a runtime faster than O(b). I came up with:
if(b == 1) return a;
if(b % 2 == 0)
    return findExp(a,b/2) * findExp(a,b/2);
else
    return findExp(a,(b/2)+1) * findExp(a,b/2);
My question is, is the runtime complexity of this algorithm logarithmic time or polynomial time?

Your algorithm is O(b), not logarithmic.
In the line return findExp(a,b/2) * findExp(a,b/2); you call the same function with the same arguments twice, so the second call spends time recomputing a value you already have.
Mathematically, if T(b) is the time your algorithm needs to compute a^b, then T(b) = T(b/2) + T(b/2) + O(1), with one T(b/2) term per recursive call, i.e. T(b) = 2·T(b/2) + O(1). Solve this recurrence and you will get T(b) in the order of b.
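Unrolling the recurrence makes that explicit (I am ignoring the constant work per call here, which only contributes another O(b) in total):
T(b) = 2·T(b/2) = 4·T(b/4) = 8·T(b/8) = ... = b·T(1), so T(b) = Θ(b).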
To make it logarithmic, just prevent that redundant call by storing the value in a variable.
The edited code:
int findExp(int a, int b) {
    if (b == 1) return a;
    if (b % 2 == 0) {
        int x = findExp(a, b/2);   // compute the half power once and reuse it
        return x * x;
    } else {
        int x = findExp(a, b/2);   // compute the half power once and reuse it
        return a * x * x;
    }
}
Now T(b) = T(b/2) + O(1), since findExp(a,b/2) is called only once (in either the if branch or the else branch).
This gives you an O(log b) algorithm.
If you want to test it, run your code and the edited version for some large b, say b = 1000000000, and compare the times taken.
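A rough harness for that comparison could look like the following sketch; the class and method names are just for illustration, and a = 1 is used so the result never overflows:
public class FindExpBenchmark {
    // Original version: two recursive calls per level, about 2*b calls in total.
    static long slowExp(long a, long b) {
        if (b == 1) return a;
        if (b % 2 == 0) return slowExp(a, b / 2) * slowExp(a, b / 2);
        return slowExp(a, b / 2 + 1) * slowExp(a, b / 2);
    }

    // Edited version: the half power is computed once, about log2(b) calls in total.
    static long fastExp(long a, long b) {
        if (b == 1) return a;
        long x = fastExp(a, b / 2);
        return (b % 2 == 0) ? x * x : a * x * x;
    }

    public static void main(String[] args) {
        long b = 1000000000L;
        long t0 = System.nanoTime();
        fastExp(1, b);
        System.out.println("fast: " + (System.nanoTime() - t0) / 1e6 + " ms");
        t0 = System.nanoTime();
        slowExp(1, b);   // noticeably slower: on the order of 2*b recursive calls
        System.out.println("slow: " + (System.nanoTime() - t0) / 1e6 + " ms");
    }
}
The fast version should return almost instantly, while the slow one has to work through roughly 2·b calls.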

With the redundant call removed it is O(log(b)), so it is logarithmic; as originally written, with two recursive calls per level, it is O(b).

Related

Time and Space Complexity of This Algorithm

Despite reading some previous questions here on stackoverflow and watching a few videos including this
one, time and space complexity are going straight over my head. I need to find the time and space complexity of this algorithm
public static int aPowB(int a, int b){
    if(b == 0){
        return 1;
    }
    int halfResult = aPowB(a, b/2);
    if(b%2 == 0){
        return halfResult * halfResult;
    }
    return a * halfResult * halfResult;
}
An explanation of the answer would be appreciated so I can try to understand. Thank you.
First of all, the inputs are a and b, so we can expect the time/space complexity to depend on these two parameters.
With recursive algorithms, always try to write down the recurrence relation for the time complexity T first. Here it's
T(a, 0) = O(1) // base case
T(a, b) = T(a, b/2) + O(1) // recursive call + some O(1) stuff at the end
This equation is one of the standard ones that you should just know by heart, so we can immediately give the solution
T(a, b) = O(log b)
(If you don't know the solution by heart, just ask yourself how many times you can divide b by 2 until you hit 0.)
The space complexity is also O(log b) because that's the depth of the recursion stack.
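If it helps to make both bounds concrete, here is a small sketch that tracks the recursion depth; the depth parameter, the maxDepth counter, the class name, and the main method are additions for illustration:
public class APowBDepth {
    static int maxDepth = 0;

    static int aPowB(int a, int b, int depth) {
        maxDepth = Math.max(maxDepth, depth);
        if (b == 0) {
            return 1;
        }
        int halfResult = aPowB(a, b / 2, depth + 1);
        if (b % 2 == 0) {
            return halfResult * halfResult;
        }
        return a * halfResult * halfResult;
    }

    public static void main(String[] args) {
        aPowB(1, 1 << 20, 1);            // b = 2^20, a = 1 to avoid overflow
        System.out.println(maxDepth);    // prints 22: the depth grows like log2(b)
    }
}
Both the number of calls and the number of live stack frames grow like log2(b), which is where the O(log b) time and space bounds come from.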

How to solve the recurrence relation for this Multiplication algorithm

How do I establish a big-O upper bound on the number of times the function calls itself, as a function of b, for the following:
function multiply(a,b)
    if b = 0 then return 0
    else if b is even then
        t := multiply(a, b/2);
        return t+t;
    else if b is odd then
        t := multiply(a, b-1);
        return a+t;
This is a function to multiply two integers. I'm confused about how to handle the if/else conditions when writing the recurrence relation. I was thinking the answer is T(n) = T(n/2) + T(n-1). Is that correct?
Let F(N) be the number of recursive calls multiply makes for input N. Reading the cases off the code above:
F(0) = 0
If N is even: F(N) = F(N/2) + 1
If N is odd: F(N) = F(N-1) + 1 = F((N-1)/2) + 2 (N-1 is even, so the very next call is guaranteed to halve)
Solving the odd-even-odd-even case (the worst-case scenario):
F(N) = F((N-1)/2) + 2, which solves to F(N) = O(log N).
Another way to think about it: the odd-even-odd-even case has at most twice the depth of the even-even-even-even case. The even-only case has log N depth, so the odd-even-odd-even case has at most 2·log N depth.
Appreciate the following two points:
Calling multiply with an odd input triggers a call with the same input minus one, which is even; it takes one extra call to reach an even number.
Calling multiply with an even input triggers another call with the input halved. The result of that halving is either even or odd; see the point above.
In the worst case it therefore takes two calls for the argument of multiply to be halved (one to make it even, one to halve it). That behavior is consistent with a 2·O(lg N) running time (where lg is the base-2 logarithm), which is the same as O(lg N).
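To see that bound empirically, here is a small Java sketch of the same pseudocode with a call counter; the counter, the class name, and the main method are additions for illustration:
public class MultiplyCalls {
    static int calls = 0;

    static long multiply(long a, long b) {
        calls++;
        if (b == 0) return 0;
        if (b % 2 == 0) {
            long t = multiply(a, b / 2);
            return t + t;
        }
        long t = multiply(a, b - 1);
        return a + t;
    }

    public static void main(String[] args) {
        long b = (1L << 20) - 1;       // worst case: every halving produces an odd number
        multiply(3, b);
        System.out.println(calls);     // prints 40, roughly 2*log2(b) calls
    }
}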

“Operations to consider” (e.g. if, return, assign) when calculating time complexity

I'm studying algorithms: time complexity and recursion.
I'm actually OK with solving the recurrences, because that's simple math, but the code part is the problem.
For example, this is the problem I've brought:
https://brilliant.org/practice/big-o-notation/?problem=complexityrun-time-analysis-2-2
public int P(int x , int n){
    if (n == 0){
        return 1;
    }
    if (n % 2 == 1){
        int y = P(x, (n - 1) / 2);
        return x * y * y;
    }
    else{
        int y = P(x, n / 2);
        return y * y;
    }
}
It is a simple power function. T(n) = O(g(n)) is the running time of this function for large n, and I have to find g(n).
The solution says,
“When the power is odd an extra multiplication operation is performed. To work out time complexity, let us first look at the worst scenario, meaning let us assume that one additional multiplication operation is needed.”
However, I do not understand the next part. The solution says that:
The recurrence relation is
T(n) = T(n/2) + 3, T(1)=1
1) Why is the constant part 3?
if (n % 2 == 1){
    int y = P(x, (n - 1) / 2);
    return x * y * y;
}
2) I also don't get exactly why T(1) = 1.
I'm puzzled: which operations should we consider while calculating time complexity?
For example, the T(1) = 1 part must be related to this code:
if (n == 0){
    return 1;
}
if (n % 2 == 1){
    int y = P(x, (n - 1) / 2);
    return x * y * y;
}
and I want to ask whether T(1) = 1 comes from the if statement, the assignment, or the return statement.
I understand what comes afterwards, solving the recurrence relation above, but I'm stuck on setting up the recurrence relation itself.
Please help me algo gurus..
which operations should we consider while calculating time complexity?
The answer will disappoint you a bit: it doesn't matter which operations you count. That's why we use big-O when analysing algorithms and expressing their time/memory requirements. It is an asymptotic notation that describes what happens to the algorithm for large values of n. By the definition of big-O, both (1/2)n^2 and 10n^2 + 6n + 100 are O(n^2), even though they are not the same function. Counting all the operations would just change some constant factors, and that's why it doesn't really matter which ones you count.
By the above, the constants are simply O(1). This disregards details, since both 10 and 10000 are O(1), for example.
One could argue that specifying the exact number of operations in the expression T(n) = T(n/2) + 3 is not very correct, since there is no definition for what an operation is, and moreover the same operation might take a different amount of time on different computers, so exactly counting the number of operations is a bit meaningless at best and simply wrong at worst. A better way of saying it is T(n) = T(n/2) + O(1).
T(1)=1 represents the base case, which is solved in constant time (read: a constant number of operations at each time). Again, a better (more formal) way of saying that is T(1)=O(1).
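To see where the logarithm comes from once the recurrence is written this way, you can unroll it (writing the per-level constant as c rather than 3):
T(n) = T(n/2) + c = T(n/4) + 2c = T(n/8) + 3c = ... = T(1) + c·log2(n) = O(log n)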

Finding the temporal complexity of an exponential algorithm

Problem: Find best way to cut a rod of length n.
Each cut is integer length.
Assume that each length i rod has a price p(i).
Given: a rod of length n, and a list of prices p, which provides the price of each possible integer length between 0 and n.
Find best set of cuts to get maximum price.
Can use any number of cuts, from 0 to n−1.
There is no cost for a cut.
Below is a naive algorithm for this problem.
CUT-ROD(p,n)
    if(n == 0)
        return 0
    q = -infinity
    for i = 1 to n
        q = max(q, p[i]+CUT-ROD(p,n-1))
    return q
How can I prove that this algorithm is exponential? Step-by-step.
I can see that it is exponential. However, I'm not able to prove it.
Let's translate the code to C++ for clarity:
#include <algorithm>
#include <climits>
#include <vector>

std::vector<int> prices;   // prices[i] = price of a piece of length i+1

int cut_rod(int n) {
    if (n == 0) {
        return 0;
    }
    int q = INT_MIN;
    int res = cut_rod(n - 1);           // recursive result computed (and cached) once
    for (int i = 0; i < n; i++) {
        q = std::max(q, prices[i] + res);
    }
    return q;
}
Note: we are caching the result of cut_rod(n-1) so it is computed only once per call, to avoid unnecessarily increasing the complexity of the algorithm. Here we can see that cut_rod(n) calls cut_rod(n-1), which calls cut_rod(n-2), and so on down to cut_rod(0). For cut_rod(n), the function also iterates over the array n times. Therefore the time complexity of this cached version is O(n + (n-1) + (n-2) + ... + 1) = O(n(n+1)/2) = O(n^2).
EDIT:
If we use the exact algorithm from the question (with no caching), its time complexity is O(n!), since CUT-ROD(n) calls CUT-ROD(n-1) n times, CUT-ROD(n-1) calls CUT-ROD(n-2) n-1 times, and so on. Therefore the time complexity is O(n·(n-1)·(n-2)·...·1) = O(n!).
I am unsure if this counts as a step-by-step solution, but it can be shown easily by induction/substitution: assume T(i) ≥ 2^i for all i < n, then show that the bound also holds for n, as sketched below.
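One way to finish that induction for the code as written is to take T(n) to be the number of calls CUT-ROD makes on input n, so T(0) = 1 and T(n) = 1 + n·T(n-1), because the loop body calls CUT-ROD(p, n-1) n times. Then:
T(1) = 1 + 1·T(0) = 2 = 2^1
T(n) = 1 + n·T(n-1) ≥ 2·T(n-1) ≥ 2·2^(n-1) = 2^n   (for n ≥ 2, using the induction hypothesis)
So T(n) ≥ 2^n, which shows the running time is at least exponential in n.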

How to do recurrence relations?

So we were taught about recurrence relations a day ago and we were given some code to practice with:
int pow(int base, int n){
    if (n == 0)
        return 1;
    else if (n == 1)
        return base;
    else if(n%2 == 0)
        return pow(base*base, n/2);
    else
        return base * pow(base*base, n/2);
}
The farthest I've gotten toward its closed form is T(n) = T(n/2^k) + 7k.
I'm not sure how to go any further, as the examples given to us were simple and do not help that much.
How do you actually solve for the recurrence relation of this code?
Let us count only the multiplications in a call to pow, denoted M(N), assuming they dominate the cost (an assumption that is strongly invalid nowadays).
By inspection of the code we see that:
M(0) = 0 (no multiply for N=0)
M(1) = 0 (no multiply for N=1)
M(N) = M(N/2) + 1 for even N > 1 (recursive call after one multiply)
M(N) = M(N/2) + 2 for odd N > 1 (recursive call after one multiply, followed by a second multiply).
This recurrence is a bit complicated by the fact that it handles differently the even and odd integers. We will work around this by considering sequences of even or odd numbers only.
Let us first handle the case of N being a power of 2. If we iterate the formula, we get M(N) = M(N/2) + 1 = M(N/4) + 2 = M(N/8) + 3 = M(N/16) + 4. We easily spot the pattern M(N) = M(N/2^k) + k, so that the solution M(2^n) = n follows. We can write this as M(N) = Lg(N) (base 2 logarithm).
Similarly, N = 2^n-1 will always yield odd numbers after divisions by 2. We have M(2^n-1) = M(2^(n-1)-1) + 2 = M(2^(n-2)-1) + 4... = 2(n-1). Or M(N) = 2 Lg(N+1) - 2.
The exact solution for general N can be fairly involved but we can see that Lg(N) <= M(N) <= 2 Lg(N+1) - 2. Thus M(N) is O(Log(N)).
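If you want to sanity-check those bounds numerically, here is a small sketch that instruments the function with a multiplication counter; the counter, the class name, and the main method are additions for illustration, and base = 1 keeps the intermediate values from overflowing:
public class PowMultiplies {
    static int multiplies = 0;

    static long pow(long base, long n) {
        if (n == 0) return 1;
        if (n == 1) return base;
        if (n % 2 == 0) {
            multiplies += 1;                   // the base*base in the argument
            return pow(base * base, n / 2);
        }
        multiplies += 2;                       // base*base plus the outer multiply
        return base * pow(base * base, n / 2);
    }

    public static void main(String[] args) {
        long n = (1L << 10) - 1;               // 1023: every value on the way down is odd
        pow(1, n);
        System.out.println(multiplies);        // prints 18 = 2*Lg(n+1) - 2, the upper bound above
    }
}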
