What is the complexity of these functions with explanation?

I would like to know how to find the complexity of these functions using T(n) and similar techniques, because right now I can only guess.
First function:
int f(int n)
{
    if (n == 1)
        return 1;
    return 1 + f(f(n - 1));
}
What are its time and space complexity?
Second function: what are the time and space complexity of f()?
void f(int n)
{
    int i;
    if (n < 2) return;
    for (i = 0; i < n / 2; i += 5)
        printf("*");
    g(n / 3);
    g(n / 3);
}
void g(int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("?");
    f(3 * n / 2);
}
Many Thanks :)

It may surprise you, but the second one is easier to start with.
Second function:
g(n) = n + f(3n/2) and f(n) = n/10 + 2g(n/3). Therefore f(n) = n/10 + 2n/3 + 2f(n/2) = 23n/30 + 2f(n/2).
Substitute n = 2^m: f(2^m) = 23(2^m)/30 + 2f(2^(m-1)) = 2*23(2^m)/30 + 4f(2^(m-2)), and so on.
The first term sums to m*23(2^m)/30 after unrolling, which may be obvious to you.
The second term (with the f()) grows geometrically; now f(1) = 1 (as there is only 1 operation), so if you expand down to the last term you will find it is 2^m * f(1) = 2^m. Therefore the total complexity of f is f(2^m) = m*23(2^m)/30 + 2^m, or f(n) = n((23/30)*log(n) + 1), where log is the base-2 logarithm.
Thus f(n) is O(n log(n)).
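As a sanity check (my own sketch, not part of the original answer), you can instrument the two functions to count the printed characters instead of printing them and compare the count against n*log2(n); the ratio should settle near a constant:

#include <cmath>
#include <cstdio>

static long long ops = 0;            // counts every would-be printf

void f(long long n);                 // forward declaration (mutual recursion)

void g(long long n) {
    for (long long i = 0; i < n; i++) ops++;          // the "?" loop
    f(3 * n / 2);
}

void f(long long n) {
    if (n < 2) return;
    for (long long i = 0; i < n / 2; i += 5) ops++;   // the "*" loop
    g(n / 3);
    g(n / 3);
}

int main() {
    for (long long n = 1 << 10; n <= 1 << 20; n <<= 2) {
        ops = 0;
        f(n);
        printf("n = %8lld  ops = %12lld  ops/(n log2 n) = %.3f\n",
               n, ops, (double)ops / (n * std::log2((double)n)));
    }
}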
First function:
Ok, I'll be honest: I didn't know how to start, so I tested the code in C++, and the result is exactly f(n) = n.
Proof by induction:
Suppose f(n) = n; then f(n + 1) = 1 + f(f(n)) = 1 + f(n) = n + 1. So if it holds for n, it also holds for n + 1.
Now f(1) = 1 obviously. Therefore it is true for 2, and for 3, 4, 5, and so on.
Therefore by mathematical induction, f(n) = n.
Now for the time complexity bit. Since f(n) returns n, the outer call in the nested f(f(n-1)) is effectively a second call with the same argument: f(n-1); f(n-1);. Thus T(n) = 2*T(n-1) + O(1), and therefore T(n) = Θ(2^n), i.e. O(2^n). The space complexity is just the recursion depth, O(n).
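Here is a small experiment (again my own sketch) that shows both results at once: f(n) stays equal to n, while the number of calls grows as 2^n - 1, roughly doubling with each increment of n:

#include <cstdio>

static long long calls = 0;

int f(int n) {
    calls++;
    if (n == 1) return 1;
    return 1 + f(f(n - 1));
}

int main() {
    for (int n = 1; n <= 20; n++) {
        calls = 0;
        int result = f(n);
        printf("n = %2d  f(n) = %2d  calls = %8lld\n", n, result, calls);
    }
}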

Related

What could be the complexity proof of it?

In an online assessment I got a coding challenge and wrote a recursive solution for it.
The question was:
Given an integer n, return all the reversible numbers that are of length n.
A reversible number is a number that looks the same when rotated 180 degrees (looked at upside down).
Example:
Input: n = 2
Output: ["11","69","88","96"]
I wrote some kind of recursive approach and it passed.
vector<string> recursion(int n, int m) {
    if (n == 0) {
        vector<string> res = {""};
        return res;
    }
    if (n == 1) {
        vector<string> res = {"0", "1", "8"};
        return res;
    }
    vector<string> ans = recursion(n - 2, m);
    vector<string> res;
    for (auto subAns : ans) {
        // We can only append 0's if it is not the first digit.
        if (n != m) {
            res.push_back('0' + subAns + '0');
        }
        res.push_back('1' + subAns + '1');
        res.push_back('6' + subAns + '9');
        res.push_back('8' + subAns + '8');
        res.push_back('9' + subAns + '6');
    }
    return res;
}

vector<string> getAllNumbers(int n) {
    return recursion(n, n);
}
I thought that, because we generate 5 new strings per sub-answer, it is something like 5^N, but I want to do an exact space and time complexity analysis for it.
Can anyone help me out with what the exact solution could be? It is very tricky for me to figure out the exact space and time complexity of recursive approaches.
Observe first that there are Θ(5^(n/2)) valid numbers of length n. Given the recurrence
C(−2) = 0
C(−1) = 0
C(0) = 1
C(1) = 3
∀n ≥ 2, C(n) = 5 C(n−2),
there are C(n) − C(n−2) valid numbers of length n (subtracting C(n−2) removes the strings with a leading zero). If n = 2k where k is an integer, then C(n) = 5^k. If n = 2k + 1, then C(n) = 3 (5^k).
The running time is Θ(5^(n/2) n). We can write a recurrence
T(0) = O(1)
T(1) = O(1)
∀n ≥ 2, T(n) = T(n−2) + Θ(5^(n/2) n),
where the latter term counts the cost of constructing Θ(5^(n/2)) numbers, each of length n. This is not a terribly interesting recurrence; we end up with a sum whose terms decrease faster than geometrically, so it is Θ of its largest term.
Space usage will be asymptotically the same, since space usage is bounded above by time and below by the total size of the output, which are the same Θ.
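As a quick check of these counts (my addition, not from the original answer), you can evaluate the recurrence for C(n) directly and print the number of valid length-n numbers, C(n) − C(n−2):

#include <cstdio>

int main() {
    long long C[20];
    C[0] = 1;
    C[1] = 3;
    for (int n = 2; n < 20; n++) C[n] = 5 * C[n - 2];
    for (int n = 2; n < 20; n++) {
        long long valid = C[n] - C[n - 2];  // strings without a leading zero
        printf("n = %2d  C(n) = %10lld  valid = %10lld\n", n, C[n], valid);
    }
}

For n = 2 this prints valid = 4, matching the example output ["11","69","88","96"].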

Computing expected time complexity of recursive program

I wish to determine the average processing time T(n) of the recursive algorithm:
int myTest( int n ) {
    if ( n <= 0 ) {
        return 0;
    }
    else {
        int i = random( n - 1 );
        return myTest( i ) + myTest( n - 1 - i );
    }
}
provided that the algorithm random( int n ) spends one time unit to return
a random integer value uniformly distributed in the range [0, n] whereas
all other instructions spend a negligibly small time (e.g., T(0) = 0).
This is certainly not of the simpler form T(n) = a * T(n/b) + c, so I am lost as to how to write it. What trips me up is that each time I take a random number i from the range 0 to n-1, feed the function both i and n-1-i, and ask for the sum of those two calls.
The recurrence relations are:
T(0) = 0
T(n) = 1 + sum(T(i) + T(n-1-i) for i = 0..n-1) / n
The second can be simplified to:
T(n) = 1 + 2*sum(T(i) for i = 0..n-1) / n
Multiplying by n:
n T(n) = n + 2*sum(T(i) for i = 0..n-1)
Noting that (n-1) T(n-1) = n-1 + 2*sum(T(i) for i = 0..n-2), and subtracting this from the previous equation, we get:
n T(n) - (n-1) T(n-1) = 1 + 2 T(n-1)
n T(n) = (n+1) T(n-1) + 1
Or:
T(n) = ((n+1)T(n-1) + 1) / n
This has the solution T(n) = n, which you can derive by telescoping the series, or by guessing the solution and then substituting it in to prove it works.
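Here is a small simulation (my own sketch; cost() mirrors myTest but returns the number of random() calls instead of doing real work) to check the claim empirically:

#include <cstdio>
#include <random>

std::mt19937 rng(12345);

// Number of time units (random() calls) one run of myTest(n) consumes.
long long cost(int n) {
    if (n <= 0) return 0;
    std::uniform_int_distribution<int> dist(0, n - 1);  // random(n - 1)
    int i = dist(rng);
    return 1 + cost(i) + cost(n - 1 - i);
}

int main() {
    const int trials = 1000;
    for (int n : {10, 100, 1000}) {
        long long total = 0;
        for (int t = 0; t < trials; t++) total += cost(n);
        printf("n = %4d  average cost = %.2f\n", n, (double)total / trials);
    }
}

In fact the simulation prints exactly n every time: by induction, cost(n) = 1 + cost(i) + cost(n-1-i) = 1 + i + (n-1-i) = n for every outcome of i, which is consistent with (and even stronger than) the average-case result T(n) = n.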

Complexity Time O(n) or O(n(n+1)/2)

What is the complexity of an algorithm that loops over n items (like an array), then over n-1, then n-2, and so on? Like:
void Loop(int[] array) {
    for (int i = 0; i < array.Length; i++) {
        // do something
    }
}

void Main() {
    Loop(new[] {1, 2, 3, 4});
    Loop(new[] {1, 2, 3});
    Loop(new[] {1, 2});
    Loop(new[] {1});
    // What is the complexity of this code?
}
What is the complexity of the previous program?
Assuming that what you do in the loop is O(1), the complexity of this is O(n+(n-1)+(n-2)+...+1) = O(n(n+1)/2) = O(0.5n^2 + 0.5n) = O(n^2).
The first = is due to the arithmetic series sum.
The second = is due to expanding the multiplication.
The third = is due to the fact that, given a polynomial inside an O(), you can simply replace it with n^highest_power.
Formula:
n + ... + 1 = n*(n+1)/2
Proof:
n + ... + 1 = S
2*(n + ... + 1) = 2*S
(n + (n-1) + ... + 2 + 1) + (1 + 2 + ... + (n-1) + n) = 2*S
(n+1) + ((n-1)+2) + ... + (2+(n-1)) + (1+n) = 2*S
(n+1) + (n+1) + ... + (n+1) = 2*S        (n terms)
n*(n+1) = 2*S
S = n*(n+1)/2 = (n*n+n)/2
But:
n*n/2 < (n*n + n)/2 = S <= (n*n + n*n)/2 = n*n
Our sum is lower than (or, for n=1, equal to) n*n; this holds for every n, though it would be enough for it to hold for every n > n0. The upper bound relies on the fact that n >= 1 implies n*n >= n.
n*n is in O(n^2).
From the bound above, our sum is therefore also in O(n^2).
If we use the lower bound (n*n/2), we can also say that it is in Ω(n^2), and therefore in Θ(n^2).
Formal definition
You can also prove it based on the formal definition, but I found the explanation above more intuitive.
f(n) = O(g(n)) means there are positive constants c and n0, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0. The values of c and n0 must be fixed for the function f and must not depend on n.
f(n) = (n*n+n)/2
g(n) = n*n
Just choose n0 = 1 and c = 2, and you get:
0 ≤ (n*n+n)/2 ≤ 2*n*n
0 ≤ n*n+n ≤ 4*n*n
0 ≤ n ≤ 3*n*n
which is obviously true for every n ≥ n0=1.
In general, if you have problems when you choose the constants, use bigger values. E.g.: n0=10, c=100. Sometimes it will be more obvious.
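If you prefer an empirical check over the algebra (my addition), this short program counts the iterations the pattern from the question performs and compares them with n*(n+1)/2:

#include <cstdio>

int main() {
    for (int n = 1; n <= 8; n++) {
        long long count = 0;
        // Loop over n items, then n-1, ..., then 1.
        for (int len = n; len >= 1; len--)
            for (int i = 0; i < len; i++)
                count++;
        printf("n = %d  iterations = %lld  n*(n+1)/2 = %lld\n",
               n, count, (long long)n * (n + 1) / 2);
    }
}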

Efficient Algorithm to Solve a Recursive Formula

I am given a formula f(n) where f(n) is defined, for all non-negative integers, as:
f(0) = 1
f(1) = 1
f(2) = 2
f(2n) = f(n) + f(n + 1) + n (for n > 1)
f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)
My goal is to find, for any given number s, the largest n where f(n) = s. If there is no such n, return None. s can be up to 10^25.
I have a brute force solution using both recursion and dynamic programming, but neither is efficient enough. What concepts might help me find an efficient solution to this problem?
I want to add a little complexity analysis and estimate the size of f(n).
If you look at one recursive call of f(n), you notice that the input n is basically divided by 2 before f is called two more times, where one call always gets an even and the other an odd input.
So the call tree is basically a binary tree where, at any given depth k, about half of the nodes contribute a summand of approximately n/2^(k+1). The depth of the tree is log₂(n).
So the value of f(n) is in total about Θ((n/2) ⋅ log₂(n)).
Just to note: this holds for even and odd inputs, but for even inputs the value is bigger by an additional summand of about n/2. (I use Θ-notation so I do not have to think too much about constants.)
Now to the complexity:
Naive brute force
To calculate f(n) you have to call f Θ(2^(log₂(n))) = Θ(n) times.
So if you want to calculate the values of f(n) one after another until you reach s (or notice that there is no n with f(n) = s), you have to calculate f(n) about s⋅log₂(s) times, which is in total Θ(s²⋅log(s)).
Dynamic programming
If you store every result of f(n), the time to calculate an f(n) value reduces to Θ(1) (but it requires much more memory). So the total time complexity reduces to Θ(s⋅log(s)).
Notice: since we know f(n) ≤ f(n+2) for all n, you do not have to sort the values of f(n) to do a binary search over them.
Using binary search
Algorithm (input is s):
1. Set l = 1 and r = s.
2. Set n = (l+r)/2 and round it to the next even number.
3. Calculate val = f(n).
4. If val == s, return n.
5. If val < s, set l = n; else set r = n.
6. Go to step 2.
If you find a solution, fine. If not, try again, but this time round to odd numbers in step 2. If this also does not return a solution, no solution exists at all.
This takes Θ(log(s)) steps for the binary search and Θ(s) for the calculation of f(n) each time, so in total you get Θ(s⋅log(s)).
As you can see, this has the same complexity as the dynamic programming solution, but you do not have to store anything.
Notice: r = s does not hold as an initial upper limit for all s. However, if s is big enough, it does. To be safe, you can change the algorithm: first check whether f(s) < s; if so, set l = s and r = 2s (or 2s+1 if it has to be odd).
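Here is a sketch of that algorithm in C++ (my addition, with assumptions: I add memoization on top of the binary search, and I use long long, which only covers s up to about 9*10^18, so the 10^25 range from the question would need a big-integer type):

#include <algorithm>
#include <cstdio>
#include <unordered_map>

std::unordered_map<long long, long long> memo;

long long f(long long n) {
    if (n <= 1) return 1;
    if (n == 2) return 2;
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;
    long long m = n / 2, y;
    if (n % 2 == 0) y = f(m) + f(m + 1) + m;  // f(2m)   = f(m) + f(m+1) + m
    else            y = f(m - 1) + f(m) + 1;  // f(2m+1) = f(m-1) + f(m) + 1
    return memo[n] = y;
}

// Largest n = 2k + parity with f(n) == s, or -1 if there is none.
// Since f(n) >= n/2, searching k up to s + 1 is a safe upper range.
long long search(long long s, long long parity) {
    long long lo = 0, hi = s + 1, best = -1;
    while (lo <= hi) {
        long long k = lo + (hi - lo) / 2;
        long long v = f(2 * k + parity);
        if (v == s) { best = 2 * k + parity; lo = k + 1; }  // keep the largest match
        else if (v < s) lo = k + 1;
        else hi = k - 1;
    }
    return best;
}

int main() {
    long long s = 9992;  // same probe value as the Python example below
    long long n = std::max(search(s, 0), search(s, 1));
    printf("s = %lld -> largest n = %lld (-1 means none)\n", s, n);
}

With memoization, each top-level f(n) only creates about O(log n) new table entries, so the whole search runs in roughly O(log²(s)) rather than Θ(s⋅log(s)).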
Can you calculate the value of f(x) for every x from 0 to MAX_SIZE just once?
What I mean is: calculate the values by DP.
f(0) = 1
f(1) = 1
f(2) = 2
f(3) = 3
f(4) = 7
f(5) = 4
... ...
f(MAX_SIZE) = ???
If this first step is not feasible, exit. Otherwise, sort the values from small to big, such as 1, 1, 2, 3, 4, 7, ...
Now you can find whether there exists an n satisfying f(n) = s in O(log(MAX_SIZE)) time.
Unfortunately, you don't mention how fast your algorithm should be. Perhaps you need to find some really clever rewrite of your formula to make it fast enough, in this case you might want to post this question on a mathematics forum.
The running time of your formula is O(n) for f(2n + 1) and O(n log n) for f(2n), according to the Master theorem, since:
T_even(n) = 2 * T(n / 2) + n / 2
T_odd(n) = 2 * T(n / 2) + 1
So the running time for the overall formula is O(n log n).
So if n is the answer to the problem, this algorithm would run in approx. O(n^2 log n), because you have to perform the formula roughly n times.
You can make this a little bit quicker by storing previous results, but of course, this is a tradeoff with memory.
Below is such a solution in Python.
D = {}

def f(n):
    if n in D:
        return D[n]
    if n == 0 or n == 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        # f(2n) = f(n) + f(n + 1) + n (for n > 1)
        y = f(m) + f(m + 1) + m
    else:
        # f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)
        y = f(m - 1) + f(m) + 1
    D[n] = y
    return y
def find(s):
    n = 0
    y = 0
    even_sol = None
    while y < s:
        y = f(n)
        if y == s:
            even_sol = n
            break
        n += 2

    n = 1
    y = 0
    odd_sol = None
    while y < s:
        y = f(n)
        if y == s:
            odd_sol = n
            break
        n += 2

    print(s, even_sol, odd_sol)
find(9992)
The recurrence produces increasing values for 2n and 2n+1, so the moment you get a value bigger than s, you can stop your algorithm.
To make an effective algorithm you either have to find a nice closed formula that calculates the value directly, or compute the values in a small loop, which will be much, much more effective than the naive recursion, since the loop computes each value exactly once instead of recomputing the same subproblems over and over.
This is how the loop can look:
int[] values = new int[1000];
values[0] = 1;
values[1] = 1;
values[2] = 2;
values[3] = values[0] + values[1] + 1;  // f(3) = f(0) + f(1) + 1
for (int i = 2; i < values.length / 2 - 1; i++) {
    values[2 * i]     = values[i] + values[i + 1] + i;  // f(2n)   = f(n) + f(n+1) + n
    values[2 * i + 1] = values[i - 1] + values[i] + 1;  // f(2n+1) = f(n-1) + f(n) + 1
}
Inside this loop, add a condition that breaks out of it early with success or failure.

Order of growth for Permutation

This is a simple program which finds all the permutations of a given string:
void perm( char str[], int len )
{
    if ( len == 1 )
        cout << str << endl;
    else
        for ( int i = 0; i < len; i++ ) {
            swap( str[len-1], str[i] );
            perm( str, len-1 );
            swap( str[len-1], str[i] );
        }
}
What is T(n) for this function? How do you calculate the Big-Oh (or Theta) for it?
Let the length of the initial input string be N,
and let the time taken for a call of perm(str (size = N), len = i) be T(i). Then
T(1) = N
and
T(i) = i*T(i-1) + i for i > 1,
and the total time taken is T(N).
To calculate the closed form of this recurrence see here:
https://math.stackexchange.com/questions/188119/closed-form-for-t1-k-tx-xtx-1-x
The answer is:
T(N) is approximately (N + e - 1)N!
So as N approaches infinity the performance of the function is:
O((N + e - 1)N!) = O(N(N!))
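As a quick numeric check (my addition), you can evaluate the recurrence directly and compare it with (N + e - 1)N!; the ratio tends to 1 as N grows:

#include <cmath>
#include <cstdio>

int main() {
    for (int N = 2; N <= 15; N++) {
        double T = N;                        // T(1) = N (cost of printing one string)
        for (int i = 2; i <= N; i++)
            T = i * T + i;                   // T(i) = i*T(i-1) + i
        double approx = (N + std::exp(1.0) - 1.0) * std::tgamma(N + 1.0);  // (N+e-1)*N!
        printf("N = %2d  T(N) = %.6g  (N+e-1)*N! = %.6g  ratio = %.6f\n",
               N, T, approx, T / approx);
    }
}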
The for loop performs n recursive calls, giving n*T(n-1), plus O(n), since you also need to do 2n swaps, so
T(n) = n*T(n-1) + O(n)
Expanding (with n = 5 levels for the sake of my keyboard, and base case T(1) = 1):
T(n) = n*T(n-1) + n
T(n) = n*[(n-1)*T(n-2) + (n-1)] + n
T(n) = n*(n-1)*T(n-2) + n*(n-1) + n
T(n) = n*(n-1)*(n-2)*T(n-3) + n*(n-1)*(n-2) + n*(n-1) + n
T(n) = n*(n-1)*(n-2)*(n-3)*T(n-4) + n*(n-1)*(n-2)*(n-3) + n*(n-1)*(n-2) + n*(n-1) + n
With T(n-4) = 1 (the base case when n = 5), you see the pattern:
T(n) = n! + n!/1! + n!/2! + ... + n!/(n-1)!
T(n) = n!*(1 + 1/1! + 1/2! + ... + 1/(n-1)!)
The parenthesized sum approaches e, so T(n) = Θ(n!) under this accounting; if you instead charge Θ(n) for printing each of the n! permutations (i.e. T(1) = n), you get Θ(n*n!), matching the answer above.
The number of possible permutations of N items is N! (factorial), and this code seems to use O(1) swap operations per permutation it outputs. The cost of construction is therefore O(N!), which is bounded above by O(N^N).
Or rather O(N!*N), since for every permutation N characters are printed to the console.
