How do I calculate the time complexity of the following algorithm? I tried, but I am getting confused because of the recursive calls.
power(real x, positive integer n)
// This algorithm returns x^n, taking x and n as input
{
    if n = 1 then
        return x;
    y = power(x, floor(n/2))
    if n is odd then
        return y*y*x    // returning the product of y^2 and x
    else
        return y*y      // returning y^2
}
Can someone explain in simple steps?
To figure out the time complexity of a recursive function, you need to calculate the number of recursive calls that will be made in terms of some input variable N.
In this case, each call makes at most one recursive invocation. The number of invocations is on the order of O(log2 N), because each invocation halves N.
The rest of the body of the recursive function is O(1), because it does not depend on N. Therefore, your function has a time complexity of O(log2 N).
Each call is a constant-time operation, and the number of recursive calls equals the number of times you can halve n before reaching n = 1, which is at most log2(n). Therefore the worst-case running time is O(log2 n).
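To make the recursion concrete, here is a runnable C transcription of the pseudocode above (a sketch of my own; the names follow the question). Each call makes exactly one recursive call on floor(n/2), so the call depth, and hence the running time, is O(log2 n):

#include <stdio.h>

/* Returns x^n for n >= 1. Each call halves n, so there are O(log n) calls. */
double power(double x, unsigned int n) {
    if (n == 1)
        return x;
    double y = power(x, n / 2);   /* integer division gives floor(n/2) */
    if (n % 2 == 1)
        return y * y * x;         /* n odd:  x^n = (x^(n/2))^2 * x */
    else
        return y * y;             /* n even: x^n = (x^(n/2))^2 */
}

int main(void) {
    printf("%f\n", power(2.0, 10));   /* prints 1024.000000 */
    return 0;
}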
How can I calculate the time complexity and the T(n) equation of this recursive function?
Function CoeffBin(n,k)
if (n=1) or (k=0) then return(1)
else return (CoeffBin(n-1,k) + CoeffBin(n-1,k-1))
Let T(n, k) be the cost function and assume a unit cost of the statement if (n=1) or (k=0) then return(1).
Now, neglecting the cost of addition, we have the recurrence

T(n, k) = 1                          if n = 1 or k = 0  (that is, T(1, k) = T(n, 0) = 1)
T(n, k) = T(n-1, k) + T(n-1, k-1)    otherwise
The solution is T(n, k) = B(n-1, n-1+k) = B(k, n-1+k), where B(a, b) denotes the binomial coefficient (b choose a), so the costs also follow Pascal's triangle!
For a more precise estimate, we can assume a cost of a when n = 1 or k = 0, and a cost of T(n-1, k) + T(n-1, k-1) + b otherwise. The solution is then (a+b)*B(k, n+k-1) - b.
Note that, at the base level (that is, when not making recursive calls), the function always returns 1.
So, to produce an answer of X, the program will ultimately need to do X-1 additions, and thus make X calls executing the case in the first line and X-1 calls executing the second line.
So, whatever the intended result of the function call is -- perhaps choose(n, k) -- if you prove that the function computes it, you automatically establish that the number of calls is proportional to that result.
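To check this empirically, here is a small C sketch of my own that counts the calls. Every internal call makes exactly two further calls, so for any n and k it reports calls == 2 * result - 1: result base-case calls plus result - 1 calls that perform an addition.

#include <stdio.h>

static long calls = 0;   /* total number of invocations of CoeffBin */

long CoeffBin(int n, int k) {
    calls++;
    if (n == 1 || k == 0)
        return 1;
    return CoeffBin(n - 1, k) + CoeffBin(n - 1, k - 1);
}

int main(void) {
    long result = CoeffBin(10, 5);
    printf("result = %ld, calls = %ld\n", result, calls);   /* calls = 2*result - 1 */
    return 0;
}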
I'm trying to understand the time and space complexity of an algorithm for generating an array's permutations. Given a partially built permutation where k out of n elements are already selected, the algorithm selects element k+1 from the remaining n-k elements and calls itself to select the remaining n-k-1 elements:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public static List<List<Integer>> permutations(List<Integer> A) {
    List<List<Integer>> result = new ArrayList<>();
    permutations(A, 0, result);
    return result;
}

public static void permutations(List<Integer> A, int start, List<List<Integer>> result) {
    if (A.size() - 1 == start) {
        result.add(new ArrayList<>(A));   // copy the finished permutation
        return;
    }
    for (int i = start; i < A.size(); i++) {
        Collections.swap(A, start, i);    // choose element i for position start
        permutations(A, start + 1, result);
        Collections.swap(A, start, i);    // undo the choice (backtrack)
    }
}
My thoughts are that in each call we swap the collection's elements 2n times, where n is the number of elements to permute, and make n recursive calls. So the running time seems to fit the recurrence relation

T(n) = n*T(n-1) + n
     = n*[(n-1)*T(n-2) + (n-1)] + n
     = ...
     = n + n(n-1) + n(n-1)(n-2) + ... + n!
     = n!*[1/(n-1)! + 1/(n-2)! + ... + 1]
     ≈ n!*e

hence the time complexity is O(n!), and the space complexity is O(max(n!, n)), where n! is the total number of permutations and n is the height of the recursion tree.
This problem is taken from the Elements of Programming Interviews book, and they're saying that the time complexity is O(n*n!) because "The number of function calls C(n)=1+nC(n-1) ... [which solves to] O(n!) ... [and] ... we do O(n) computation per call outside of the recursive calls".
Which time complexity is correct?
The time complexity of this algorithm, counted by the number of basic operations performed, is Θ(n * n!). Think about the size of the result list when the algorithm terminates -- it contains n! permutations, each of length n, and we cannot create a list with n * n! total elements in less than that amount of time. The space complexity is the same, since the recursion stack only ever has O(n) calls at a time, so the size of the output list dominates the space complexity.
If you count only the number of recursive calls to permutations(), the function is called O(n!) times, although this is usually not what is meant by 'time complexity' without further specification. In other words, you can generate all permutations in O(n!) time, as long as you don't read or write those permutations after they are generated.
The part where your derivation of the run-time breaks down is in the definition of T(n). If you define T(n) as 'the run-time of permutations(A, start) when the input, A, has length n', then you cannot define it recursively in terms of T(n-1) or any other function of T(), because the length of the input in all recursive calls is n, the length of A.
A more useful way to define T(n) is by specifying it as the run-time of permutations(A', start), when A' is any permutation of a fixed, initial array A, and A.length - start == n. It's easy to write the recurrence relation here:
T(x) = x * T(x-1) + O(x) if x > 1
T(1) = A.length
This takes into account the fact that the last recursive call, T(1), has to perform O(A.length) work to copy that array to the output, and this new recurrence gives the result from the textbook.
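To spell out how the recurrence solves to the textbook's bound, write the O(x) term as c*x and divide through by x!:

T(x)/x! = T(x-1)/(x-1)! + c/(x-1)!

Telescoping this from x = 2 up to x = n, and using T(1) = A.length = n:

T(n)/n! = n + c*(1/1! + 1/2! + ... + 1/(n-1)!) < n + c*(e - 1)

so T(n) = O(n * n!).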
Link to problem: Ugly Numbers
How would you find the big-O of the brute-force ("Simple Method") solution for Ugly Numbers?
I see that for this part of the code:
/* Function to check if a number is ugly or not */
int isUgly(int no)
{
    no = maxDivide(no, 2);
    no = maxDivide(no, 3);
    no = maxDivide(no, 5);
    return (no == 1) ? 1 : 0;
}
Together, these three calls take log_2(x) + log_3(x) + log_5(x) division steps, where x = no.
So this would mean the runtime is (log_2(x) + log_3(x) + log_5(x)) * n, where x is the final output. However, the result of an algorithm can't be part of the big-O notation, right? If it can't, this would be reduced to c*n, right, where c > the result? What is the proper method of proof for this?
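For reference, here is a runnable C sketch of the brute-force driver under discussion. It is my reconstruction of the "simple method" from the linked problem (maxDivide and getNthUglyNo are modeled on it, not copied), and the tested counter is my own addition so the iteration counts quoted in the answers below can be reproduced:

#include <stdio.h>

/* Divides no by d as long as it is divisible; this is the loop the answers refer to */
int maxDivide(int no, int d) {
    while (no % d == 0)
        no = no / d;
    return no;
}

/* Function to check if a number is ugly or not */
int isUgly(int no) {
    no = maxDivide(no, 2);
    no = maxDivide(no, 3);
    no = maxDivide(no, 5);
    return (no == 1) ? 1 : 0;
}

/* Brute force: test 1, 2, 3, ... until n ugly numbers have been seen */
int getNthUglyNo(int n) {
    int i = 1, count = 1;    /* 1 is the first ugly number */
    long tested = 1;         /* my own counter: candidates examined so far */
    while (count < n) {
        i++;
        tested++;
        if (isUgly(i))
            count++;
    }
    printf("candidates tested: %ld\n", tested);
    return i;
}

int main(void) {
    printf("500th ugly number: %d\n", getNthUglyNo(500));
    return 0;
}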
Ugly numbers are also known as regular numbers. As you can see from the Wikipedia article, it is known that the number of regular numbers up to m is

ln(m*sqrt(30))^3 / (6*ln(2)*ln(3)*ln(5)) + O(ln(m))
In other words, your getNthUglyNo will call isUgly to get the n-th regular number
~ 1/sqrt(30) * exp((n*6*ln(2)*ln(3)*ln(5))^(1/3))
times.
The probability that a uniformly random integer x between 0 and M is divisible by 2^y is asymptotically 1/2^y, so the mean number of times that the loop in the call maxDivide(no, 2) iterates is O(1), and likewise for maxDivide(no, 3) and maxDivide(no, 5).
Consequently your algorithm is
Theta( exp((n*6*ln(2)*ln(3)*ln(5))^(1/3)) )
which is approximately
Theta( exp(1.9446 * n^(1/3)) )
Also note that plugging n = 500 into the asymptotic number of iterations mentioned above gives 921498, which is pretty close to the number of iterations @sowrov found in their answer (937500).
The complexity of the isUgly method is O(log N), where N is the input, because the complexity of maxDivide is O(log N) and calling that function a fixed number of times (3 in this case) does not change the complexity.
However, the result of an algorithm can't be a part of the Big O notation right?
Yes, the result of a function is irrelevant when calculating the complexity of that function.
The time complexity of getNthUglyNo is hard to pin down in a simple closed form -- for N = 500 it runs 937500 times!
I can't figure out the solution to this exercise:
Calculate the complexity of f(g(n))+g(f(n)) with g and f defined as follows:
int f(int x) {
    if (x <= 1) return 2;
    int a = g(x) + 2*f(x/2);
    return 1 + x + 2*a;
}

int g(int x) {
    int b = 0;
    if (x <= 1) return 5;
    for (int i = 1; i <= x*x; i++)
        b += i;
    return b + g(x-1);
}
Could anyone explain to me how to get to the solution?
There are two separate steps to solving this problem. Firstly we must look at the time complexity of each function, and then the output complexity.
Time complexity
Since g is self-contained, let's look at it first.
The work done in g consists of:
x^2 executions of a loop
A recursive call with parameter x - 1
Hence one might write the time complexity recurrence relation as (using upper case to distinguish it from the original function):

G(x) = G(x-1) + x^2 + c,   G(1) = a

To solve it, repeatedly self-substitute to give a summation; this is the sum of the squares of the natural numbers from 2 to x:

G(x) = a + c*(x-1) + (2^2 + 3^2 + ... + x^2) = a + c*(x-1) + x(x+1)(2x+1)/6 - 1

Where in the last step we used a standard result. And thus, since a and c are constants:

G(x) = Θ(x^3)
Next, f:
One call to g(x)
One recursive call with parameter x / 2
Some constant amount of work
Using a similar method:

F(x) = F(x/2) + G(x) + c = F(x/2) + Θ(x^3)

Self-substituting gives a geometric series:

F(x) = Θ(x^3) * (1 + 1/8 + 1/8^2 + ... + 1/8^m) + F(x/2^(m+1))

Applying the stopping condition x/2^(m+1) = 1, i.e. m + 1 = log2(x):

F(x) = Θ(x^3) * (1 - 8^(-log2(x))) / (1 - 1/8) + F(1)

Since the exponential term 8^(-log2(x)) = x^(-3) vanishes:

F(x) = Θ(x^3)
Output complexity
This is basically the same process as above, with slightly different recursion relations. I'll skip the details and just state the results (using lower case for output functions):

g(x) = Θ(x^5),   f(x) = Θ(x^5)

Thus the final time complexity of f(g(n)) + g(f(n)) is:

F(g(n)) + G(f(n)) = Θ((n^5)^3) + Θ((n^5)^3) = Θ(n^15)

Which matches the result given by your source.
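As a rough empirical check (a sketch of my own, not part of the answer above), one can instrument g's inner loop, which dominates the running time, and evaluate f(g(n)) + g(f(n)) for small n. Between n = 3 and n = 4 the iteration count should grow by roughly (4/3)^15 ≈ 75x, consistent with Θ(n^15); long long keeps the intermediate values from overflowing at these sizes:

#include <stdio.h>

static long long ops = 0;   /* counts inner-loop iterations, the dominant cost */

long long g(long long x);

long long f(long long x) {
    if (x <= 1) return 2;
    long long a = g(x) + 2 * f(x / 2);
    return 1 + x + 2 * a;
}

long long g(long long x) {
    if (x <= 1) return 5;
    long long b = 0;
    for (long long i = 1; i <= x * x; i++) { ops++; b += i; }
    return b + g(x - 1);
}

int main(void) {
    for (long long n = 2; n <= 4; n++) {
        ops = 0;
        long long r = f(g(n)) + g(f(n));
        printf("n = %lld: result = %lld, loop iterations = %lld\n", n, r, ops);
    }
    return 0;
}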
P(x,y,z){
    print x
    if(y!=x) print y
    if(z!=x && z!=y) print z
}
A trivial algorithm: the values x, y, z are chosen uniformly at random from {1, ..., r} with r >= 1.
I'm trying to determine the average-case complexity of this algorithm, and I measure complexity by the number of print statements executed.
The best case here is T(n) = 1 or O(1), when x=y=z and the probability of that is 1/3.
The worst case here is still T(n) = 3 or still O(1) when x!=y!=z and the probability is 2/3.
But when it comes to mathematically deriving the average case:
The sample space is n possible inputs, and the probability over the sample space is 1/n for each input.
So, how do I calculate average case complexity? (This is where I draw a blank..)
Your algorithm has three cases:

All three numbers are equal. The probability of this is 1/r^2, since once you choose x, each of y and z must independently match it, with probability 1/r each. The cost for this case is 1.

Exactly two of the numbers are equal. The probability of this is 3(r-1)/r^2: there are three choices for which pair coincides, and the remaining value must differ from the shared one. The cost is 2, since in every such pattern exactly two of the three values get printed.

All three numbers are distinct. The probability of this is (r-1)(r-2)/r^2. Cost = 3.

Thus, the average case can be computed as:

1 * 1/r^2 + 2 * 3(r-1)/r^2 + 3 * (r-1)(r-2)/r^2 = 3 - 3/r + 1/r^2 == O(1)

Edit: The above expression is O(1), since it is bounded by the constant 3 for all r >= 1.
The average case will be somewhere between the best and worst cases; for this particular problem, that's all you need (at least as far as big-O).
1) Can you program the general case at least? Write the (pseudo)-code and analyze it, it might be readily apparent. You may actually program it suboptimally and there may exist a better solution. This is very typical and it's part of the puzzle-solving of the mathematics end of computer science, e.g. it's hard to discover quicksort on your own if you're just trying to code up a sort.
2) If you can, then run a Monte Carlo simulation and graph the results. I.e., for N = 1, 5, 10, 20, ..., 100, 1000, or whatever sample sizes are realistic, run 10000 trials and plot the average time. If you're lucky, plotting X = sample size against Y = average time over the 10000 runs will give a nice line, parabola, or some other easy-to-model curve.
So I'm not sure if you need help on (1) finding or coding the algorithm or (2) analyzing it, you will probably want to revise your question to specify this.
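Following suggestion (2), here is a minimal Monte Carlo sketch of my own. It estimates the average number of prints per call and compares it with the closed form 3 - 3/r + 1/r^2 derived in the earlier answer:

#include <stdio.h>
#include <stdlib.h>

/* Cost of P(x,y,z), measured in print statements, without actually printing */
int cost(int x, int y, int z) {
    int c = 1;                   /* print x always happens */
    if (y != x) c++;             /* print y */
    if (z != x && z != y) c++;   /* print z */
    return c;
}

int main(void) {
    const int TRIALS = 1000000;
    for (int r = 1; r <= 10; r++) {
        long total = 0;
        for (int t = 0; t < TRIALS; t++) {
            int x = rand() % r + 1, y = rand() % r + 1, z = rand() % r + 1;
            total += cost(x, y, z);
        }
        double formula = 3.0 - 3.0 / r + 1.0 / ((double) r * r);
        printf("r = %2d: simulated %.4f, formula %.4f\n",
               r, (double) total / TRIALS, formula);
    }
    return 0;
}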
P(x,y,z){
1. print x
2. if(y!=x)
3.     print y
4. if(z!=x && z!=y)
5.     print z
}
Line 1: takes a constant time c1 (c1: print x)
Line 2: takes a constant time c2 (c2: condition test)
Line 3: takes a constant time c3 (c3: print y)
Line 4: takes a constant time c4 (c4: condition test)
Line 5: takes a constant time c5 (c5: print z)
Analysis:
Since your function P(x,y,z) does not depend on an input size r, the program takes a constant amount of time to run: the total time is T(c1) + T(c2 + c3) + T(c4 + c5), and T(c1), T(c2), ..., T(c5) are all constants. Summing them up, the big-O of the function P(x,y,z) is O(1), where the 1 indicates a constant amount of time. If P(x,y,z) instead iterated from 1 to r, the complexity of the snippet would change and would be expressed in terms of the input size r.
Best case: O(1)
Average case: O(1)
Worst case: O(1)