Time complexity of powering a number - algorithm

In MIT OpenCourseWare's algorithms course, the professor talks about powering a number and its time complexity.
x^n is simply computed as x*x*...*x, n times (imagine a simple for loop with a multiplication performed inside it).
He states that the time complexity of this approach is theta(n).
Here is my analysis:
Let N(x) be a function that gives the number of digits in x. Then the complexity of:
x*1 = N(x)
x*x = N(x)*N(x)
x*x*x = N(x^2) * N(x)
x*x*x*x = N(x^3) * N(x)
and so on...
To sum up, T(x^n) = N(x) + N(x)*N(x) + N(x^2)*N(x) + N(x^3)*N(x) + ... + N(x^(n-1))*N(x)
T(x^n) = N(x)[1 + N(x) + N(x^2) + N(x^3) + ... + N(x^(n-1))]
However, I can't solve this any further. How does it ultimately yield theta(n)?

Think of it like this.
If you consider multiplication between two numbers to be an operation that takes unit time, then the complexity of multiplying two numbers is theta(1). This is the model the professor is using: he counts multiplications, not digit operations, which is why your digit-count analysis gives a different expression.
Now, in a for loop which runs n-1 times for n numbers, you apply this operation n-1 times. So the theta(1)-cost operation happens n-1 times, which makes the overall cost theta(n-1), which in asymptotic terms is theta(n).
The multiplication happens like this:
x=x
x^2 = x*x
x^3 = (x^2)*x
x^4 = (x^3)*x
...
x^(n-1) = (x^(n-2))*x
x^n = (x^(n-1))*x
It's theta(1) for each step, as you can use the result of the previous step to calculate the next product. For example, when you calculate x^2, you can store that value and use it while calculating x^3. Similarly, when you calculate x^4 you can use the stored value of x^3.
Now all the individual operations take theta(1) time. If you do this n times, the total time is theta(n). Now, to calculate the complexity of x^n by induction:
For x^2, T(2) = theta(1).
This is the base case for our induction.
Assume that for x^k, T(k) = theta(k) holds.
Then x^(k+1) = (x^k)*x, so T(k+1) = theta(k) + theta(1) = theta(k+1).
Hence, for x^n, the time complexity is T(n) = theta(n).
And if you want to sum up the complexity, you are summing it up wrong.
We know that T(2) = theta(1), time complexity of multiplying two numbers.
T(n) = T(n-1)+T(2) (time complexity of multiplying two numbers and time complexity of multiplying (n-1) numbers)
T(n) = T(n-2)+T(2)+T(2)
T(n) = T(n-3)+T(2)+T(2)+T(2)
...
T(n) = T(3) + (n-3)*T(2)
T(n) = T(2) + (n-2)*T(2)
T(n) = (n-1)*T(2)
T(n) = (n-1)*theta(1)
T(n) = theta(n)
Since you know C, here is an example of how you might write a naive power function:
int power(int x, int n)
{
    int powerVal = 1;
    for (int i = 1; i <= n; ++i)  /* n iterations, one multiplication each */
    {
        powerVal = powerVal * x;
    }
    return powerVal;
}
Now, as you can see, each time a multiplication of two integers takes place, and that takes only theta(1) time. You run this loop n times, so the total complexity is theta(n).
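For example, power(2, 10) runs the loop 10 times (10 multiplications) and returns 1024. (For larger x and n the int result will overflow, but that does not change the count of multiplications.)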

You're waaaaaay off-track.
Multiplication is a single operation.
You are applying this operation n times.
Therefore, O(1*n), which is O(n).
Done.

If you're looking for the best algorithm to compute the power of a given number, this is not it. Indeed, the power of a number is not computed that way in practice; the method you describe has complexity O(n) because you apply the same operation n times: x*x*...*x. The algorithm below (binary exponentiation) has complexity O(log n):
long pow_fast(long x, long n)  /* named pow_fast to avoid the C library's pow */
{
    long R = 1, X = x, N = n;
    while (N > 0)
    {
        if (N % 2 == 1)  /* if the lowest bit of N is set, */
            R = R * X;   /* fold the current power of x into the result */
        N = N / 2;       /* move on to the next bit of n */
        X = X * X;       /* square: X runs through x, x^2, x^4, x^8, ... */
    }
    return R;
}
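For example, with x = 3 and n = 13 (binary 1101), the loop runs only four times instead of thirteen: R picks up the factors 3, 3^4 = 81 and 3^8 = 6561 (one for each set bit of 13), giving 3 * 81 * 6561 = 1594323 = 3^13.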

Related

How do you find the complexity of an algorithm given the number of computations performed each iteration?

Say there is an algorithm with input of size n. On the first iteration, it performs n computations, then is left with a problem instance of size floor(n/2) - for the worst case. Now it performs floor(n/2) computations. So, for example, an input of n=25 would see it perform 25+12+6+3+1 computations until an answer is reached, which is 47 total computations. How do you put this into Big O form to find worst case complexity?
You just need to write the corresponding recurrence in a formal manner:
T(n) = T(n/2) + n = n + n/2 + n/4 + ... + 1 = n(1 + 1/2 + 1/4 + ... + 1/n) < 2n
=> T(n) = O(n)
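To see the bound concretely, here is a minimal sketch in C (the helper name total_work and the test value n = 25 are mine, chosen to match the question's example):
#include <stdio.h>

/* Sums the work done by the recurrence T(n) = T(floor(n/2)) + n. */
long total_work(long n)
{
    long total = 0;
    while (n > 0) {
        total += n;  /* n computations at this level */
        n /= 2;      /* the next instance has size floor(n/2) */
    }
    return total;
}

int main(void)
{
    /* For n = 25 this prints 47 (= 25+12+6+3+1), which is below 2*25 = 50. */
    printf("n=25 -> %ld computations\n", total_work(25));
    return 0;
}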

Complexity of f(k) when f(n) = O(n!) and k=n*(n-1)

I have the following problem. Let's suppose we have function f(n). Complexity of f(n) is O(n!). However, there is also parameter k=n*(n-1). My question is - what is the complexity of f(k)? Is it f(k)=O(k!/k^2) or something like that, taking into consideration that there is a quadratic relation between k and n?
Computational complexity is interpreted based on the size of the input. Hence, if f(n) = O(n!) when your input is n, then f(k) = O(k!) when your input is k.
Therefore, you don't need to re-derive the complexity for each particular value of the input. For example, you don't need to write the complexity of f(10) as O((5*2)!) just because 10 = 5 * 2 and then try to simplify it in terms of 2!; we can simply say f(10) = O(10!).
Anyhow, if you do want to expand (n*(n-1))! = (n^2 - n)!, note that (n^2)! = (n^2 - n)! * (n^2 - n + 1) * (n^2 - n + 2) * ... * (n^2). Rearranging, (n^2 - n)! = (n^2)! / [(n^2 - n + 1) * ... * (n^2)], where the denominator is a product of n factors, each of order n^2, so (n^2 - n)! = Theta((n^2)! / n^(2n)).
Did you consider that there may be an m such that the n you used in your f(n) is equal to m * (m - 1)? Does that change the complexity?
The n in f(n) = O(n!) represents all the valid inputs.
You are trying to pass a variable k whose actual value in terms of another variable is n * (n - 1). That does not change the complexity. It will be O(k!) only.
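A quick worked instance (the numbers are my own, for illustration): if f(n) = O(n!) and k = n*(n-1), then for n = 4 we get k = 12, and the bound on f(12) is O(12!) whether you arrive at 12 as an input in its own right or as 4*3. Rewriting O(k!) as O((n^2 - n)!) changes the variable the bound is written in, not the bound itself.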

Big-Oh and theta notation of a specific function... Running time

If evaluating f(n) is theta(n):
i = 1;
sum = 0;
while (i <= n)
do if (f(i) > k)
then sum += f(i);
i = 2*i;
Would the running time of this be O(n^3), because of the number of times the function is possibly being called, or would it be O(n)? Or is it something in terms of theta, since that's the information we know? I am very lost on this...
The i variable doubles each time, so it will reach n after Log2(n) iterations.
The evaluation of f will therefore be done Log2(n) times; bounding each evaluation by O(n) would give the coarse bound O(n x Log n).
In fact, since computing f(i) has complexity O(i), the time complexity is:
1 + 2 + 4 + ... + 2^(Log2(n)) <= 2n (a geometric series with about Log2(n) terms) => O(n)
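A minimal sketch of this counting argument (the test value n = 1000 and the assumption that evaluating f(i) costs about i are mine, for illustration):
#include <stdio.h>

int main(void)
{
    long n = 1000;
    long i = 1, iterations = 0, total_cost = 0;
    while (i <= n) {
        iterations++;     /* one evaluation of f(i) per loop iteration */
        total_cost += i;  /* assume evaluating f(i) costs about i */
        i *= 2;           /* i doubles: 1, 2, 4, ..., so ~log2(n) iterations */
    }
    /* For n = 1000: 10 iterations; total cost 1+2+...+512 = 1023 < 2n. */
    printf("iterations=%ld, total cost=%ld\n", iterations, total_cost);
    return 0;
}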

Is the big-O complexity of these functions correct?

I am learning about algorithm complexity, and I just want to verify my understanding is correct.
1) T(n) = 2n + 1 = O(n)
This is because we drop the coefficient 2 and the constant 1, and we are left with n. Therefore, we have O(n).
2) T(n) = n * n - 100 = O(n^2)
This is because we drop the constant -100, and are left with n * n, which is n^2. Therefore, we have O(n^2).
Am I correct?
Basically, you have different levels determined by the "dominant" part of your function, starting from the lowest complexity:
O(1) if your function only contains constants
O(log(n)) if the dominant part is in log, ln...
O(n^p) if the dominant part is polynomial and the highest power is p (e.g. O(n^3) for T(n) = n*(3n^2 + 1) -3 )
O(p^n) if the dominant part is a fixed number to n-th power (e.g. O(3^n) for T(n) = 3 + n^99 + 2*3^n)
O(n!) if the dominant part is factorial
and so on...

Calculating execution time of an algorithm

I have this algorithm:
S(n)
  if n = 1 then return(0)
  else
    S(n/3)
    x <- 0
    while x <= 3n^3 do
      x <- x + 3
    S(n/3)
Is 2 * T(n/3) + n^3 the recurrence relation?
Is T(n) = O(n^3) the execution time?
The recurrence expression is correct. The time complexity of the algorithm is O(n^3).
The recurrence stops at T(1).
Running an example for n = 27 helps derive a general expression:
T(n) = 2*T(n/3) + n^3 =
     = 2*(2*T(n/9) + (n/3)^3) + n^3 =
     = 2*(2*(2*T(n/27) + (n/9)^3) + (n/3)^3) + n^3 =
     = 2*2*2*T(n/27) + 2*2*(n/9)^3 + 2*(n/3)^3 + n^3
From this example we can see that, after k levels of expansion, the general expression is given by:
T(n) = 2^k * T(n/3^k) + n^3 * [1 + (2/27) + (2/27)^2 + ... + (2/27)^(k-1)]
Which, taking k = log_3(n) so that n/3^k = 1, is equivalent to:
T(n) = 2^(log_3(n)) * T(1) + n^3 * (1 - (2/27)^(log_3(n))) / (1 - 2/27)
Which, in turn, can be solved to the following closed form:
T(n) = (27/25) * n^3 + (T(1) - 27/25) * 2^(log_3(n))
The dominating term in this expression is (27/25)*n^3: the other term, 2^(log_3(n)), equals n^(log_3(2)), roughly n^0.63, so it is O(n) and is dominated by n^3. Thus, the recurrence is O(n^3).
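As a sanity check, here is a small sketch in C (assuming T(1) = 1; the function name T is mine) that unrolls the recurrence numerically and compares it against the dominating term (27/25)*n^3:
#include <stdio.h>

/* Evaluates the recurrence T(n) = 2*T(n/3) + n^3 directly, assuming T(1) = 1. */
double T(long n)
{
    if (n <= 1)
        return 1.0;
    return 2.0 * T(n / 3) + (double)n * n * n;
}

int main(void)
{
    /* The ratio T(n) / ((27/25)*n^3) should approach 1 as n grows. */
    for (long n = 27; n <= 531441; n *= 27) {
        double bound = (27.0 / 25.0) * (double)n * n * n;
        printf("n=%ld  T(n)=%.0f  (27/25)n^3=%.0f  ratio=%.5f\n",
               n, T(n), bound, T(n) / bound);
    }
    return 0;
}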
2 * T(n/3) + n^3
Yes, I think this is a correct recurrence relation.
Time complexity:
while x <= 3n^3 do
    x <- x + 3
This has a time complexity of O(n^3), since x goes from 0 to 3n^3 in steps of 3. Also, at each level of recursion, the function calls itself twice with one third of n. So the series of problem sizes is
n, n/3, n/9, ...
The total cost, adding up the levels, is
n^3 + (2/27)*n^3 + (4/729)*n^3 + ...
This series is bounded by k*n^3, where k is a constant.
Proof: the ratio between consecutive terms is 2/27, which is less than 1/2, so the series is term-by-term dominated by a geometric progression with ratio 1/2, whose sum is 2*n^3. Hence the upper bound is less than 2*n^3. (Summing the exact GP with ratio 2/27 gives the tighter bound (27/25)*n^3, matching the other answer.)
So in my opinion the complexity = O(n^3).
