Order of growth for a permutation algorithm

This is a simple program that finds all the permutations of a given string:
#include <iostream>
#include <utility>
using namespace std;

// Fixes one character at position len-1, recurses on the prefix,
// then undoes the swap (backtracking).
void perm( char str[], int len )
{
    if ( len == 1 )
        cout << str << endl ;              // base case: print the whole string
    else
        for ( int i = 0; i < len; i++ ) {
            swap( str[len-1], str[i] ) ;   // choose str[i] as the last character
            perm( str, len-1 ) ;           // permute the remaining prefix
            swap( str[len-1], str[i] ) ;   // backtrack
        }
}
What is T(n) for this function? How do I calculate the Big O (or Theta) bound for it?

Let the length of the initial input string be N.
Let T(i) be the time taken by a call perm(str, len=i), where str has size N. Then
T(1) = N (the base case prints all N characters)
and
T(i) = i*T(i-1) + i for i > 1 (the loop runs i times, each iteration doing O(1) work plus one recursive call),
and the total time taken is T(N).
To calculate the closed form of this recurrence see here:
https://math.stackexchange.com/questions/188119/closed-form-for-t1-k-tx-xtx-1-x
The answer is:
T(N) is approximately (N + e - 1)N!
So as N approaches infinity, the running time of the function is:
O((N + e - 1)N!) = O(N*N!)
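As a sanity check, here is a small instrumented sketch in C++ (the ops counter and the charging scheme, one unit per loop iteration and N units per printed permutation, are my additions, chosen to mirror the recurrence above; the actual printing is omitted):

#include <iostream>
#include <string>
#include <utility>
#include <cmath>

long long ops = 0;   // one unit per loop iteration, N units per permutation

void perm(std::string& str, int len) {
    if (len == 1) {
        ops += str.size();                   // T(1) = N: charge for printing the string
    } else {
        for (int i = 0; i < len; i++) {
            ++ops;                           // the O(1) work of this iteration
            std::swap(str[len - 1], str[i]);
            perm(str, len - 1);
            std::swap(str[len - 1], str[i]);
        }
    }
}

int main() {
    for (int n = 2; n <= 8; ++n) {
        std::string s(n, 'a');               // contents don't matter for the count
        ops = 0;
        perm(s, n);
        double predicted = (n + std::exp(1.0) - 1) * std::tgamma(n + 1.0); // (N+e-1)N!
        std::cout << "N=" << n << "  ops=" << ops
                  << "  (N+e-1)N! ~ " << predicted << "\n";
    }
}

The measured count tracks (N + e - 1)N! closely, e.g. 136 vs. roughly 137 for N = 4.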

The for loop performs n recursive calls, giving n*T(n-1), plus O(n) since you also do 2n swaps, so
T(n) = n*T(n-1) + O(n)
Take n = 5 for the sake of my keyboard:
T(n) = n*T(n-1) + n
T(n) = n*[(n-1)*T(n-2) + (n-1)] + n
T(n) = n*[(n-1)*[(n-2)*T(n-3) + (n-2)] + (n-1)] + n
T(n) = n*[(n-1)*[(n-2)*[(n-3)*T(n-4) + (n-3)] + (n-2)] + (n-1)] + n
T(n-4) = 1 ----------------------^
Simplify
T(n) = n*[(n-1)*[(n-2)*[(n-3) + (n-3)] + (n-2)] + (n-1)] + n
T(n) = n*[(n-1)*[(n-2)*[2(n-3)] + (n-2)] + (n-1)] + n
T(n) = n(n-1)(n-2)(n-3)*2 + n(n-1)(n-2) + n(n-1) + n
you see the pattern: unrolled all the way down,
T(n) = n!*(1 + 1/1! + 1/2! + ... + 1/(n-1)!) ~ e*n!
So with T(1) = 1 this recurrence solves to T(n) = Theta(n!). If the base case instead charges Theta(n) for printing each permutation (T(1) = N, as in the answer above), the total becomes Theta(n*n!).

The number of possible permutations of N items is N! (factorial), and this code performs O(1) swaps per permutation it outputs, so the recursion itself costs O(N!). Since N! <= N^N, O(N^N) is also a (much looser) upper bound.
Printing each permutation takes O(N), though, so the total is O(N!*N).

Related

Find formula to describe recursion in method

I am struggling to write the formula that describes the recursive nature of the foo method.
The problem is that, as far as I can tell, since n is divided by 2 every time,
the binary-halving recurrence should apply here.
It says that when each call divides the data in half we get a recurrence of the form:
C(N) = C(N/2) + 1
And then if we analyze it for base 2:
We get C(N) = log2(N) + 1, namely O(log N)
That all makes sense and seems to be the right choice for the foo method, but it can't be, because for
n = 8 I would get 3 + 1 iterations, which is not n + 1 = 8 + 1 = 9 iterations
So here is your code:
void foo(int n) {
    if (n == 1) System.out.println("Last line I print");
    if (n > 1) {
        System.out.println("I am printing one more line");
        foo(n / 2);
    }
}
We can write a recurrence relation down for its runtime T as a function of the value of the parameter passed into it, n:
T(1) = a, a constant
T(n) = b + T(n/2), b constant, n > 1
We can write out some values of T(n) for various values of n to see if a pattern emerges:
n      T(n)
----------------
1      a
2      a + b
4      a + 2b
8      a + 3b
...
2^k    a + kb
So for n = 2^k, T(n) = a + kb. We can solve for k in terms of n as follows:
n = 2^k <=> k = log(n)
Then we recover the expression T(n) = a + blog(n). We can verify this expression works easily:
a + blog(1) = a, as required
a + blog(n) = b + (a + blog(n/2))
= b + (a + b(log(n) - 1))
= b + a + blog(n) - b
= a + blog(n), as required
You can also use mathematical induction to do the same thing.
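If you want to see this empirically, here is a small counting harness (a sketch in C++; the calls counter replaces the Java prints, which is my substitution):

#include <iostream>
#include <cmath>

int calls = 0;   // how many times foo is entered

void foo(int n) {
    ++calls;
    if (n == 1) return;       // "Last line I print"
    if (n > 1) foo(n / 2);    // "I am printing one more line", then halve
}

int main() {
    for (int n : {1, 2, 8, 100, 1024}) {
        calls = 0;
        foo(n);
        std::cout << "n=" << n << "  calls=" << calls
                  << "  floor(log2 n)+1=" << (int)std::log2(n) + 1 << "\n";
    }
}

For n = 8 this reports 4 calls, matching the asker's 3 + 1: the call count grows with log2(n), not with n.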

What's the complexity of f(n) = sum i=1 -> n (a_i*i)?

This is a test I failed because I thought this complexity would be O(n), but it appears I'm wrong and it's O(n^2). Why not O(n)?
First, notice that the question does not ask for the time complexity of a function calculating f(n), but rather the complexity of the function f(n) itself. You can think of f(n) as the time complexity of some other algorithm if you are more comfortable talking about time complexity.
This is indeed O(n^2), when the sequence a_i is bounded by a constant and each a_i is at least 1.
By the assumption, for all i, a_i <= c for some constant c.
Hence, a_1*1+...+a_n*n <= c * (1 + 2 + ... + n). Now we need to show that 1 + 2 +... + n = O(n^2) to complete the proof.
1 + 2 + ... + n <= n + n + ... + n = n * n = n ^ 2
and
1 + 2 + ... + n >= n / 2 + (n / 2 + 1) + ... + n >= (n / 2) * (n / 2) = n^2/4
So the complexity is actually Theta(n^2).
Note that if a_i was not constant, e.g., a_i = i then the result is not correct.
In that case, f(n) = 1^2 + 2^2 + ... + n^2, and you can show easily (using the same method as before) that f(n) = Omega(n^3), which means it is not O(n^2).
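To see the two regimes concretely, here is a quick numeric check (a sketch; the choice a_i = 2 for the bounded case is mine):

#include <iostream>

int main() {
    for (long long n : {100, 1000, 10000}) {
        long long bounded = 0, growing = 0;
        for (long long i = 1; i <= n; ++i) {
            bounded += 2 * i;    // a_i = 2 (bounded): f(n) = Theta(n^2)
            growing += i * i;    // a_i = i (growing): f(n) = Theta(n^3)
        }
        std::cout << "n=" << n
                  << "  bounded/n^2=" << (double)bounded / (n * n)
                  << "  growing/n^3=" << (double)growing / ((double)n * n * n) << "\n";
    }
}

The first ratio settles near 1 and the second near 1/3, exactly what Theta(n^2) and Theta(n^3) with these coefficients predict.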
Preface: I'm not super great with complexity theory, but I'll take a stab.
I think what is confusing is that it's not a time-complexity problem, but rather the function's own complexity.
The easy part: i just goes up to n, i.e. 1, 2, 3, ..., n. For a_i, all entries must be positive constants, so a could be something like 2, 5, 1, ... for n terms. You are summing n terms, the largest of which is proportional to n, which gives n*n = O(n^2).
Even in the best case, where a is 1, 1, 1, ..., the sum 1 + 2 + ... + n is still Theta(n^2); bounded coefficients only change the constant factor.
Unless a_i itself is allowed to grow with n, the result is Theta(n^2).
Here is another way to achieve O(n*n) if the sum should be returned as the result:
int sum = 0;
for (int i = 0; i <= n; i++) {
    for (int j = 0; j <= n; j++) {
        if (i == j) {
            sum += A[i] * j;   // only the diagonal contributes
        }
    }
}
return sum;

Computing expected time complexity of recursive program

I wish to determine the average processing time T(n) of the recursive algorithm:
int myTest( int n ) {
    if ( n <= 0 ) {
        return 0;
    }
    else {
        int i = random( n - 1 );                  // uniform in [0, n-1]
        return myTest( i ) + myTest( n - 1 - i );
    }
}
provided that the algorithm random( int n ) spends one time unit to return
a random integer value uniformly distributed in the range [0, n] whereas
all other instructions spend a negligibly small time (e.g., T(0) = 0).
This is certainly not of the simpler form T(n) = a*T(n/b) + c, so I am lost as to how to write it. What throws me off is that each call takes a random number in the range 0 to n-1, feeds it into both recursive calls, and returns the sum of those two calls.
The recurrence relations are:
T(0) = 0
T(n) = 1 + sum(T(i) + T(n-1-i) for i = 0..n-1) / n
The second can be simplified to:
T(n) = 1 + 2*sum(T(i) for i = 0..n-1) / n
Multiplying by n:
n T(n) = n + 2*sum(T(i) for i = 0..n-1)
Noting that (n-1) T(n-1) = n-1 + 2*sum(T(i) for i = 0..n-2), we can subtract that equation from the previous one to get:
n T(n) = (n-1) T(n-1) + 1 + 2T(n-1)
       = (n+1) T(n-1) + 1
Or:
T(n) = ((n+1)T(n-1) + 1) / n
This has the solution T(n) = n, which you can derive by telescoping the series, or by guessing the solution and then substituting it in to prove it works.
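You can check this closed form by iterating the recurrence directly (a sketch; prefix is my name for the running sum of earlier values):

#include <iostream>
#include <vector>

int main() {
    const int N = 30;
    std::vector<double> T(N + 1, 0.0);   // T[0] = 0
    double prefix = 0.0;                 // sum of T[0..n-1]
    for (int n = 1; n <= N; ++n) {
        T[n] = 1.0 + 2.0 * prefix / n;   // T(n) = 1 + 2*sum(T(i), i=0..n-1)/n
        prefix += T[n];
    }
    for (int n : {1, 5, 10, 30})
        std::cout << "T(" << n << ") = " << T[n] << "\n";   // prints exactly n
}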

What is the complexity of these functions with explanation?

I would like to know how to find the complexity of these functions using T(n) and the like, because right now I can only guess.
First function:
int f(int n)
{
    if (n == 1)
        return 1;
    return 1 + f(f(n-1));
}
What are its time and space complexity?
Second function:
What are the time and space complexity of this f()?
void f(int n)
{
    int i;
    if (n < 2) return;
    for (i = 0; i < n/2; i += 5)
        printf("*");
    g(n/3);
    g(n/3);
}

void g(int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("?");
    f(3*n/2);
}
Many Thanks :)
It may surprise you, but the second one is easier to start with.
Second function:
g(n) = n + f(3n/2) and f(n) = n/10 + 2g(n/3) (f's loop runs n/2 times in steps of 5, i.e. n/10 iterations). Substituting g into f: f(n) = n/10 + 2(n/3 + f(n/2)) = 23n/30 + 2f(n/2).
Substitute n = 2^m, so f(2^m) = 23(2^m)/30 + 2f(2^(m-1)) = 2*23(2^m)/30 + 4f(2^(m-2)) etc...
The first kind of term sums to m*23(2^m)/30, since each of the m levels of the expansion contributes the same amount.
The second term (with the f()) grows geometrically; now f(1) = 1 (as there is only 1 operation), so if you expand down to the last term you find it contributes 2^m * f(1) = 2^m. Therefore the total complexity of f is f(2^m) = m*23(2^m)/30 + 2^m, or f(n) = n((23/30)*log(n) + 1), where log is the base-2 logarithm.
Thus f(n) is O(n log(n)).
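A quick way to check the O(n log(n)) claim is to count the printf calls (a sketch; the prints counter replaces the actual output, which is my substitution):

#include <iostream>
#include <cmath>

long long prints = 0;   // counts the '*' and '?' outputs

void g(int n);

void f(int n) {
    if (n < 2) return;
    for (int i = 0; i < n / 2; i += 5) ++prints;   // the '*' loop: ~n/10 iterations
    g(n / 3);
    g(n / 3);
}

void g(int n) {
    for (int i = 0; i < n; ++i) ++prints;          // the '?' loop: n iterations
    f(3 * n / 2);
}

int main() {
    for (int n : {1 << 10, 1 << 14, 1 << 18}) {
        prints = 0;
        f(n);
        std::cout << "n=" << n << "  prints=" << prints
                  << "  prints/(n*log2(n))=" << prints / (n * std::log2(n)) << "\n";
    }
}

The ratio in the last column levels off at a constant, consistent with f(n) = Theta(n log n).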
First function:
OK, I'll be honest: I didn't know how to start, but I tested the code in C++ and the result is exactly f(n) = n.
Proof by induction:
Suppose f(n) = n; then f(n + 1) = 1 + f(f(n)) = 1 + f(n) = n + 1. Thus if it is true for n, it is also true for n + 1.
Now f(1) = 1 obviously. Therefore it's true for 2, and for 3, 4, 5 ... and so on.
Therefore by mathematical induction, f(n) = n.
Now for the time complexity bit. Since f(n) returns n, the outer call in the nested f(f(n-1)) is effectively a second call with the same argument, as if the code were f(n-1); f(n-1);. Thus T(n) = 2*T(n-1) + O(1), and therefore T(n) = O(2^n).
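And a matching check for the first function (a sketch; the calls counter is my instrumentation):

#include <iostream>

long long calls = 0;   // how many times f is entered

int f(int n) {
    ++calls;
    if (n == 1) return 1;
    return 1 + f(f(n - 1));   // inner call returns n-1, so the outer call repeats f(n-1)
}

int main() {
    for (int n = 1; n <= 20; ++n) {
        calls = 0;
        int r = f(n);
        std::cout << "f(" << n << ")=" << r << "  calls=" << calls << "\n";
    }
}

The call count comes out as 2^n - 1, i.e. O(2^n), while the returned value is always n.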

Time complexity: O(n) or O(n(n+1)/2)?

What is the complexity of an algorithm that loops over n items (like an array), then over (n-1), then (n-2), and so on? Like:
void Loop(int[] array) {
    for (int i = 0; i < array.Length; i++) {
        // do some thing
    }
}

void Main() {
    Loop(new[] {1, 2, 3, 4});
    Loop(new[] {1, 2, 3});
    Loop(new[] {1, 2});
    Loop(new[] {1});
    // What is the complexity of this code?
}
What is the complexity of the previous program?
Assuming that what you do in the loop is O(1), the complexity of this is O(n+(n-1)+(n-2)+...+1) = O(n(n+1)/2) = O(0.5n^2 + 0.5n) = O(n^2)
The first = is due to the arithmetic series sum.
The second = is due to expanding the product.
The third = is due to the fact that, given a polynomial inside an O(), you can replace it with n^highest_power.
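A quick empirical confirmation before the formal proofs below (a sketch in C++; the count increment stands in for "do some thing"):

#include <iostream>

int main() {
    const long long n = 1000;
    long long count = 0;
    // Loop over n items, then n-1, ..., then 1, counting iterations.
    for (long long len = n; len >= 1; --len)
        for (long long i = 0; i < len; ++i)
            ++count;
    std::cout << "iterations = " << count
              << ", n*(n+1)/2 = " << n * (n + 1) / 2 << "\n";   // both print 500500
}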
Formula:
n + ... + 1 = n*(n+1)/2
Proof:
n + ... + 1 = S
2*(n + ... + 1) = 2*S
(n + (n-1) + ... + 2 + 1) + (1 + 2 + ... + (n-1) + n) = 2*S
(n+1) + ((n-1)+2) + ... + (2+(n-1)) + (1+n) = 2*S
(n+1) + (n+1) + ... + (n+1) = 2*S        (n terms, each equal to n+1)
n*(n+1) = 2*S
S = n*(n+1)/2 = (n*n+n)/2
But:
n*n/2 < (n*n + n)/2 = S <= (n*n + n*n)/2 = n*n
(1) Our sum is lower than (or equal to, for n=1) n*n. This holds for every n, though it would be enough for it to hold for every n > n0. The upper bound uses the fact that n >= 1 implies n*n >= n.
(2) n*n is in O(n^2).
From (1) and (2), our sum is in O(n^2).
If we use the lower limit (n*n/2), we can also say that it is in Ω(n^2), and therefore in Θ(n^2).
Formal definition
You can also prove it based on the formal definition, but I found the explanation above more intuitive.
f(n) = O(g(n)) means there are positive constants c and n0, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0. The values of c and n0 must be fixed for the function f and must not depend on n.
f(n) = (n*n+n)/2
g(n) = n*n
Just choose n0 = 1 and c = 2, and you get:
0 ≤ (n*n+n)/2 ≤ 2*n*n
0 ≤ n*n+n ≤ 4*n*n
0 ≤ n ≤ 3*n*n
which is obviously true for every n ≥ n0=1.
In general, if you have problems when you choose the constants, use bigger values. E.g.: n0=10, c=100. Sometimes it will be more obvious.
