Calculating execution time of an algorithm

I have this algorithm:
S(n)
  if n=1 then return(0)
  else
    S(n/3)
    x <- 0
    while x <= 3n^3 do
      x <- x+3
    S(n/3)
Is 2 * T(n/3) + n^3 the recurrence relation?
Is T(n) = O(n^3) the execution time?

The recurrence expression is correct. The time complexity of the algorithm is O(n^3).
The recursion stops at T(1).
Expanding the recurrence for an example such as n = 27 helps derive a general expression:
T(n) = 2*T(n/3)+n^3 =
= 2*(2*T(n/9)+(n/3)^3)+n^3 =
= 2*(2*(2*T(n/27)+(n/9)^3)+(n/3)^3)+n^3 =
= ... =
= 2*(2*2*T(n/27)+2*(n/9)^3+(n/3)^3)+n^3 =
= 2*2*2*T(n/27)+2*2*(n/9)^3+2*(n/3)^3+n^3
From this example we can see that the general expression is given by:
T(n) = 2^(log_3(n)) * T(1) + sum_{i=0}^{log_3(n)-1} 2^i * (n/3^i)^3
Which is equivalent to:
T(n) = n^3 * sum_{i=0}^{log_3(n)-1} (2/27)^i
Which, in turn, can be solved to the following closed form (using the geometric series sum, the fact that (2/27)^(log_3(n)) = 2^(log_3(n))/n^3, and T(1) = 0):
T(n) = (27/25) * n^3 - (27/25) * 2^(log_3(n))
The dominating term in this expression is (27/25) * n^3. Note that 2^(log_3(n)) is O(n): you can think of it as 2^(log(n) * (1/log(3))); dropping the constant 1/log(3) in the exponent gives 2^(log(n)) = n. Thus, the recurrence is O(n^3).
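To sanity-check this closed form, here is a minimal C sketch (my own illustration, not part of the original question) that evaluates the recurrence by counting the work S(n) performs and compares the total against n^3; the ratio should approach 27/25 = 1.08:

#include <stdio.h>

/* Work done by S(n): the while loop runs about n^3 times
   (x goes from 0 to 3n^3 in steps of 3), plus two recursive
   calls on n/3. This mirrors T(n) = 2*T(n/3) + n^3, T(1) = 0. */
long long work(long long n) {
    if (n <= 1) return 0;
    return 2 * work(n / 3) + n * n * n;
}

int main(void) {
    for (long long n = 3; n <= 2187; n *= 3)
        printf("n = %4lld  work = %13lld  work/n^3 = %.4f\n",
               n, work(n), (double)work(n) / ((double)n * n * n));
    return 0;
}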

2 * T(n/3) + n^3
Yes, I think this is a correct recurrence relation.
Time complexity:
while x <= 3n^3 do
  x <- x+3
This has a time complexity of O(n^3). Also, at each step the function calls itself twice with n/3, so the subproblem sizes form the series
n, n/3, n/9, ...
Adding up the cost at each depth, the total complexity is
n^3 + (2/27)*n^3 + (4/729)*n^3 + ...
This series is bounded by k*n^3, where k is a constant.
Proof: the ratio between consecutive terms is 2/27, which is less
than 1/2, so the series is dominated by a geometric progression with
ratio 1/2, whose sum is 2*n^3. Hence the upper bound is less than 2*n^3 (the exact sum is (27/25)*n^3, about 1.08*n^3).
So in my opinion the complexity = O(n^3).
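The geometric-progression bound is easy to verify numerically. A small C sketch (illustration only) summing the per-level ratios (2/27)^k shows the true constant is 27/25 = 1.08, well under the bound of 2:

#include <stdio.h>

int main(void) {
    /* The cost at depth k is n^3 * (2/27)^k; summing the ratios
       shows the total is about 1.08 * n^3, below the 2 * n^3 bound. */
    double sum = 0.0, term = 1.0;
    for (int k = 0; k < 20; k++) {
        sum += term;            /* add (2/27)^k */
        term *= 2.0 / 27.0;
    }
    printf("sum of (2/27)^k = %.6f (bound used above: 2)\n", sum);
    return 0;
}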

Related

Complexity of f(k) when f(n) = O(n!) and k=n*(n-1)

I have the following problem. Suppose we have a function f(n) whose complexity is O(n!). However, there is also a parameter k = n*(n-1). My question is: what is the complexity of f(k)? Is it f(k) = O(k!/k^2) or something like that, taking into consideration that there is a quadratic relation between k and n?
Computational complexity is interpreted based on the size of the input. Hence, if f(n) = O(n!) when your input is n, then f(k) = O(k!) when your input is k.
Therefore, you don't need to recompute the complexity for particular values of the input to f. For example, if f(2) = O(2!), you don't need to express the complexity of f(10) as O((5*2)!) just because 10 = 5*2 and then try to simplify it in terms of 2!. We simply say f(10) = O(10!).
Anyhow, if you do want to expand (n*(n-1))!: note that (n^2 - n)! = (n^2)! / ((n^2 - n + 1)(n^2 - n + 2)...(n^2)). The denominator is a product of n factors, each of order n^2, so it grows much faster than n^3; hence (n^2 - n)! = O((n^2)!/n^3).
Did you consider that there may be an m such that the n you used in your f(n) is equal to m*(m-1)?
Does that change the complexity?
The n in f(n) = O(n!) represents all the valid inputs.
You are passing a variable k whose actual value, in terms of another variable, is n*(n-1). That does not change the complexity: it is still O(k!).

Is the big-O complexity of these functions correct?

I am learning about algorithm complexity, and I just want to verify my understanding is correct.
1) T(n) = 2n + 1 = O(n)
This is because we drop the constants 2 and 1, and we are left with n. Therefore, we have O(n).
2) T(n) = n * n - 100 = O(n^2)
This is because we drop the constant -100, and are left with n * n, which is n^2. Therefore, we have O(n^2)
Am I correct?
Basically, you have different levels determined by the dominant term of your function, starting from the lowest complexity:
O(1) if your function only contains constants
O(log(n)) if the dominant part is in log, ln...
O(n^p) if the dominant part is polynomial and the highest power is p (e.g. O(n^3) for T(n) = n*(3n^2 + 1) -3 )
O(p^n) if the dominant part is a fixed number to n-th power (e.g. O(3^n) for T(n) = 3 + n^99 + 2*3^n)
O(n!) if the dominant part is factorial
and so on...
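To see the "dominant part" idea concretely, here is a small C sketch (my own illustration) showing that T(n) = n*n - 100 stays within a vanishing fraction of its dominant term n^2, which is why the -100 gets dropped:

#include <stdio.h>

int main(void) {
    /* Ratio of T(n) = n*n - 100 to its dominant term n^2:
       the -100 becomes negligible as n grows, hence T(n) = O(n^2). */
    for (long n = 10; n <= 100000; n *= 10) {
        double t = (double)n * n - 100.0;
        printf("n = %6ld   T(n)/n^2 = %.6f\n", n, t / ((double)n * n));
    }
    return 0;
}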

Time complexity of powering a number

In MIT OpenCourseWare's algorithms course, a professor talks about powering a number and its time complexity.
x^n is simply computed as x*x*x*... n times (imagine a simple for loop with a multiplication performed inside it).
He states that the time complexity of this approach is theta(n).
Here is my analysis:
Let N(x) be a function that gives the number of digits in x. Then the complexity of each multiplication is:
x*1 = N(x)
x*x = N(x)*N(x)
x*x*x = N(x^2)*N(x)
x*x*x*x = N(x^3)*N(x)
and so on...
Summing up:
T(x^n) = N(x) + N(x)*N(x) + N(x^2)*N(x) + N(x^3)*N(x) + ... + N(x^(n-1))*N(x)
T(x^n) = N(x)*[1 + N(x) + N(x^2) + N(x^3) + ... + N(x^(n-1))]
However, I can't get any further. How does this ultimately yield theta(n)?
Think of it like this.
If you consider multiplication between two numbers to be an operation that takes unit time, then multiplying two numbers is done in theta(1) time.
Now, in a for loop that runs n-1 times for n numbers, you apply this operation n-1 times. So the theta(1)-cost operation happens n-1 times, which makes the overall cost theta(n-1), which in asymptotic terms is theta(n).
The multiplication happens like this
x=x
x^2 = x*x
x^3 = (x^2)*x
x^4 = (x^3)*x
................
.................
.................
x^(n-1) =(x^(n-2))*x
x^n = (x^(n-1))*x
Each step is theta(1) because you can use the result of the previous step to calculate the next product. For example, when you calculate x^2, you can store its value and use it while calculating x^3; similarly, when you calculate x^4, you can use the stored value of x^3.
Each individual operation takes theta(1) time, and you do this n times, so the total time is theta(n). Now let's derive the complexity of x^n by induction:
for x^2, T(2) = theta(1)
This is the base case for our induction.
Assume that T(k) = theta(k) holds for x^k.
Then x^(k+1) = (x^k)*x, so T(k+1) = theta(k) + theta(1) = theta(k+1).
Hence, for x^n, the time complexity is T(n) = theta(n).
And if you do want to sum up the complexity term by term, you are summing it up wrong.
We know that T(2) = theta(1), time complexity of multiplying two numbers.
T(n) = T(n-1)+T(2) (time complexity of multiplying two numbers and time complexity of multiplying (n-1) numbers)
T(n) = T(n-2)+T(2)+T(2)
T(n) = T(n-3)+T(2)+T(2)+T(2)
...................
...................
T(n) = T(3) + (n-3)*T(2)
T(n) = T(2) + (n-2)*T(2)
T(n) = (n-1)*T(2)
T(n) = (n-1)*theta(1)
T(n) = theta(n)
Since you know C, here is an example of how you would write a naive power function:
int power(int x, int n)
{
    int powerVal = 1;
    for (int i = 1; i <= n; ++i)
    {
        powerVal = powerVal * x;
    }
    return powerVal;
}
Now, as you can see, each iteration performs one multiplication of two integers, which takes only theta(1) time. You run this loop n times, so the total complexity is theta(n).
You're waaaaaay off-track.
Multiplication is a single operation.
You are applying this operation n times.
Therefore, O(1*n), which is O(n).
Done.
If you're looking for the best algorithm to compute the power of a given number, this is not it. Indeed, the power of a number is not computed the way you describe: applying the same operation n times, x*x*...*x, gives O(n) complexity. The algorithm below (exponentiation by squaring) has O(log n) complexity:
int power(int x, int n)
{
    int R = 1, X = x, N = n;
    while (N > 0)
    {
        if (N % 2 == 1)     /* current bit of the exponent is 1 */
            R = R * X;
        N = N / 2;          /* move to the next bit of the exponent */
        X = X * X;          /* square the base */
    }
    return R;
}
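As a quick check of the O(log n) claim (illustration only), the loop halves N on every iteration, so for n = 1,000,000 it runs about 20 times instead of performing a million multiplications:

#include <stdio.h>

int main(void) {
    /* Count the loop iterations of the squaring version for
       n = 1000000: floor(log2(n)) + 1 = 20, versus the n - 1
       multiplications of the naive loop. */
    int n = 1000000, steps = 0;
    while (n > 0) {
        n /= 2;
        steps++;
    }
    printf("iterations = %d\n", steps);    /* prints 20 */
    return 0;
}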

Algorithm complexity, solving recursive equation

I'm taking Data Structures and Algorithm course and I'm stuck at this recursive equation:
T(n) = logn*T(logn) + n
Obviously this can't be handled with the Master Theorem, so I was wondering if anybody has ideas for solving this recursive equation. I'm pretty sure it should be solved with a change of parameters, like substituting n = 2^m, but I couldn't manage to find a good fix.
The answer is Theta(n). To prove something is Theta(n), you have to show it is Omega(n) and O(n). Omega(n) in this case is obvious because T(n)>=n. To show that T(n)=O(n), first
Pick a large finite value N such that log(n)^2 < n/100 for all n>N. This is possible because log(n)^2=o(n).
Pick a constant C>100 such that T(n)<Cn for all n<=N. This is possible due to the fact that N is finite.
We will show inductively that T(n)<Cn for all n>N. Since log(n)<n, by the induction hypothesis, we have:
T(n) < n + log(n) * C * log(n)
     = n + C * log(n)^2
     < n + (C/100) * n
     = C * (1/100 + 1/C) * n
     < (C/50) * n        (since C > 100 implies 1/C < 1/100)
     < C * n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
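As a numeric sanity check (my own sketch, with an assumed base case of T(n) = n for n <= 2 and base-2 logarithms), evaluating the recurrence directly shows the ratio T(n)/n tending to 1, matching T(n) = n + o(n):

#include <stdio.h>
#include <math.h>

/* Direct evaluation of T(n) = log(n)*T(log(n)) + n.
   The base case T(n) = n for n <= 2 is an assumption for the demo. */
double T(double n) {
    if (n <= 2.0) return n;
    double l = log2(n);
    return l * T(l) + n;
}

int main(void) {
    for (double n = 1e3; n <= 1e15; n *= 1e3)
        printf("n = %.0e   T(n)/n = %.6f\n", n, T(n) / n);
    return 0;
}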
This is by no means an official proof, but I think it goes like this.
The key is the + n term. Because of this, T is bounded below by n, i.e. T(n) = Omega(n). So let's guess that T(n) = O(n) and check whether that is consistent.
Substituting into the original relation:
T(n) = log(n) * O(log(n)) + n
= O(log^2(n)) + O(n)
= O(n)
So it still holds.

Worst Case Performance of Quicksort

I am trying to prove the following worst-case scenario for the Quicksort algorithm but am having some trouble. Initially, we have an array of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be theta(n^2). To prove this, I've used the iteration method to solve the recurrence equation:
T(n) = T(ij) = m if j = 1
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1
T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci
So it looks like the recurrence is:
T(n) = jm + ci * (sum_{k=1}^{j} k - 1)
At this point, I'm a bit lost as to what to do. It looks like the summation at the end will be on the order of j^2 if expanded out, but I need to show that it somehow equals n^2. Any explanation of how to continue would be appreciated.
Pay attention: the Quicksort worst-case scenario is when you have two subproblems of size 0 and n-1. In this scenario, you have these recurrence equations, one per level of the tree:
T(n) = T(n-1) + T(0)     <-- at first level of tree
T(n-1) = T(n-2) + T(0)   <-- at second level of tree
T(n-2) = T(n-3) + T(0)   <-- at third level of tree
...
Each level also does partitioning work proportional to the size of its subarray, so the sum of the costs over all levels is an arithmetic series:
T(n) = sum_{k=1}^{n-1} k = n(n-1)/2 ~ n^2 (as n -> +inf)
It is O(n^2).
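To see this bound in practice, here is a minimal C sketch (my own illustration, using the common last-element-pivot partition) that counts comparisons on already-sorted input; the count comes out to exactly n(n-1)/2:

#include <stdio.h>

static long long comparisons = 0;

/* Quicksort with the last element as pivot; on already-sorted
   input every partition splits into sizes n-1 and 0, the worst case. */
void quicksort(int *a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        comparisons++;
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}

int main(void) {
    enum { N = 2000 };
    static int a[N];
    for (int i = 0; i < N; i++) a[i] = i;    /* sorted input = worst case */
    quicksort(a, 0, N - 1);
    printf("n = %d  comparisons = %lld  n(n-1)/2 = %d\n",
           N, comparisons, N * (N - 1) / 2);
    return 0;
}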
It's a problem of simple mathematics. The complexity, as you have correctly calculated, is
O(jm + ij^2)
What you have found is a parameterized complexity. The standard O(n^2) is contained in it as follows: assuming i = 1, you have a standard base case, so m = O(1) and j = n, and therefore we get O(n^2). If you substitute ij = n, you get O(nm/i + n^2/i). Now remember that m is a function of i, depending on what you use as the base-case algorithm, so m = f(i); thus you are left with O(n*f(i)/i + n^2/i). Note that since there is no linear-time algorithm for general sorting, f(i) = Omega(i*log(i)), which gives O(n*log(i) + n^2/i). So you have only one degree of freedom, namely i. Check that for any value of i you cannot reduce this below n*log(n), which is the best bound for comparison-based sorting.
What confuses me is that you are doing a worst-case analysis of Quicksort; this is not the way it's done. When you say worst case, the worst case will always be when i = 1, hence the worst-case bound will be O(n^2). An elegant way to do this is explained in the randomized-algorithms book by R. Motwani and Raghavan; alternatively, if you are a programmer, look at Cormen.
