Computational complexity in Big-O notation - performance

How can I express, in Big-O notation, the computational complexity of an algorithm whose number of execution steps follows the pattern below for the given input sizes?
Input size: 4
Number of execution steps: 4 + 3 + 2 + 1 = 10
Input size: 5
Number of execution steps: 5 + 4 + 3 + 2 + 1 = 15
Input size: 6
Number of execution steps: 6 + 5 + 4 + 3 + 2 + 1 = 21

Technically, you have not given enough information to determine the complexity. Based on the information so far, it could take 21 steps for all input sizes greater than 5; in that case it would be O(1), regardless of the behavior for sizes 4 and 5. Complexity is about limiting behavior for large input sizes, not about how something behaves for three very small sizes.
If the number of steps for size n is, in general, the sum of the integers from 1 through n, then the number of steps is indeed n(n+1)/2, and that is O(n^2).
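For concreteness, here is a small sketch (my own illustration, not from the question) of a loop structure that takes exactly n + (n-1) + ... + 1 steps for input size n, which is the pattern described above:

static long countSteps(int n)
{
    long steps = 0;
    for (int i = n; i >= 1; i--)        // each pass does one step fewer than the last
    {
        for (int j = 0; j < i; j++)     // i steps on this pass
        {
            steps++;
        }
    }
    return steps;                       // equals n * (n + 1) / 2
}

For n = 4, 5, 6 this returns 10, 15 and 21, matching the counts in the question, and the formula n(n+1)/2 is why the growth is O(n^2).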

For any series that follows the pattern 1 + 2 + 3 + 4 + ... + n, the sum is n(n+1)/2, which is O(n^2).
In your case the number of steps is never exactly n^2, because each pass does one step less than the previous one (n + (n-1) + (n-2) + ...). However, Big-O notation specifies an upper bound on the growth rate of a function, so your bound is O(n^2): the step count grows faster than n but no faster than n^2.

I think it should be O(N), because it's just like being given an array of size N and iterating over it (if we don't count the time for the additions).
By the way, I think (n + 1)*n/2 is just the result of the sum, not the algorithm's complexity.

Related

Is the following way correct for determining the complexity of the Fibonacci function? [duplicate]

I understand Big-O notation, but I don't know how to calculate it for many functions. In particular, I've been trying to figure out the computational complexity of the naive version of the Fibonacci sequence:
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
What is the computational complexity of the Fibonacci sequence and how is it calculated?
You model the time function for calculating Fib(n) as the sum of the time to calculate Fib(n-1), plus the time to calculate Fib(n-2), plus the time to add them together (O(1)). This assumes that repeated evaluations of the same Fib(n) take the same time - i.e. no memoization is used.
T(n<=1) = O(1)
T(n) = T(n-1) + T(n-2) + O(1)
You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.
Alternatively, you can draw the recursion tree, which will have depth n, and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.
Base: n = 1 is obvious.
Assume T(n-1) = O(2^(n-1)), therefore
T(n) = T(n-1) + T(n-2) + O(1) which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
However, as noted in a comment, this is not the tight bound. An interesting fact about this function is that T(n) is asymptotically the same as the value of Fib(n), since both are defined as
f(n) = f(n-1) + f(n-2).
The leaves of the recursion tree will always return 1. The value of Fib(n) is the sum of all values returned by the leaves in the recursion tree, which is equal to the count of leaves. Since each leaf takes O(1) to compute, T(n) is equal to Fib(n) x O(1). Consequently, the tight bound for this function is the Fibonacci sequence itself (~Θ(1.6^n)). You can find this tight bound by using generating functions, as mentioned above.
Just ask yourself how many statements need to execute for F(n) to complete.
For F(1), the answer is 1 (the first part of the conditional).
For F(n), the answer is F(n-1) + F(n-2).
So what function satisfies these rules? Try a^n (a > 1):
a^n == a^(n-1) + a^(n-2)
Divide through by a^(n-2):
a^2 == a + 1
Solve for a and you get (1+sqrt(5))/2 = 1.6180339887, otherwise known as the golden ratio.
So it takes exponential time.
I agree with pgaur and rickerbh, recursive-fibonacci's complexity is O(2^n).
I came to the same conclusion by a rather simplistic but I believe still valid reasoning.
First, it's all about figuring out how many times the recursive Fibonacci function (F() from now on) gets called when calculating the Nth Fibonacci number. If it gets called once per number in the sequence 0 to n, then we have O(n); if it gets called n times for each number, then we get O(n*n), or O(n^2), and so on.
So, when F() is called for a number n, the number of times F() is called for a given number between 0 and n-1 grows as we approach 0.
As a first impression, it seems to me that if we put it in a visual way, drawing one unit per time F() is called for a given number, we get a sort of pyramid shape (that is, if we center the units horizontally). Something like this:
n *
n-1 **
n-2 ****
...
2 ***********
1 ******************
0 ***************************
Now, the question is, how fast is the base of this pyramid enlarging as n grows?
Let's take a real case, for instance F(6)
F(6) * <-- only once
F(5) * <-- only once too
F(4) **
F(3) ****
F(2) ********
F(1) **************** <-- 16
F(0) ******************************** <-- 32
We see F(0) gets called 32 times, which is 2^5, which for this sample case is 2^(n-1).
Now, we want to know how many times F(x) gets called in total, and we can see the number of times F(0) is called is only a part of that.
If we mentally move all the *'s from the F(6) through F(2) lines into the F(1) line, we see that the F(1) and F(0) lines are now equal in length. Which means the total number of times F() gets called when n=6 is 2x32 = 64 = 2^6.
Now, in terms of complexity:
O( F(6) ) = O(2^6)
O( F(n) ) = O(2^n)
There's a very nice discussion of this specific problem over at MIT. On page 5, they make the point that, if you assume that an addition takes one computational unit, the time required to compute Fib(N) is very closely related to the result of Fib(N).
As a result, you can skip directly to the very close approximation of the Fibonacci series:
Fib(N) = (1/sqrt(5)) * 1.618^(N+1) (approximately)
and say, therefore, that the worst case performance of the naive algorithm is
O((1/sqrt(5)) * 1.618^(N+1)) = O(1.618^(N+1))
PS: There is a discussion of the closed form expression of the Nth Fibonacci number over at Wikipedia if you'd like more information.
You can expand it and have a visualization:
T(n) = T(n-1) + T(n-2)
     < T(n-1) + T(n-1)
     = 2*T(n-1)
     < 2*2*T(n-2)
     < 2*2*2*T(n-3)
     ...
     < 2^i*T(n-i)
     ...
     < 2^(n-1)*T(1)   (the recursion bottoms out at i = n-1)
==> O(2^n)
A recursive algorithm's time complexity can often be estimated by drawing the recursion tree. In this case the recurrence relation for drawing the recursion tree is T(n) = T(n-1) + T(n-2) + O(1).
Note that each step takes O(1), meaning constant time, since it does only one comparison to check the value of n in the if block. The recursion tree would look like:
n
(n-1) (n-2)
(n-2)(n-3) (n-3)(n-4) ...so on
Here, let's say each level of the above tree is denoted by i.
Hence,
i
0 n
1 (n-1) (n-2)
2 (n-2) (n-3) (n-3) (n-4)
3 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6)
Let's say the tree ends at a particular value of i; that happens when n-i = 1, hence i = n-1, meaning that the height of the tree is n-1.
Now let's see how much work is done at each of the n layers in the tree. Note that each step takes O(1) time, as stated in the recurrence relation.
2^0=1 n
2^1=2 (n-1) (n-2)
2^2=4 (n-2) (n-3) (n-3) (n-4)
2^3=8 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6) ..so on
2^i for ith level
Since i = n-1 is the height of the tree, the work done at each level will be
i work
1 2^1
2 2^2
3 2^3..so on
Hence the total work done is the sum of the work done at each level: 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1), since i = n-1.
By the geometric series this sum is 2^n - 1, hence the total time complexity here is O(2^n).
The proof answers are good, but I always have to do a few iterations by hand to really convince myself. So I drew out a small calling tree on my whiteboard, and started counting the nodes. I split my counts out into total nodes, leaf nodes, and interior nodes. Here's what I got:
IN | OUT | TOT | LEAF | INT
1 | 1 | 1 | 1 | 0
2 | 1 | 1 | 1 | 0
3 | 2 | 3 | 2 | 1
4 | 3 | 5 | 3 | 2
5 | 5 | 9 | 5 | 4
6 | 8 | 15 | 8 | 7
7 | 13 | 25 | 13 | 12
8 | 21 | 41 | 21 | 20
9 | 34 | 67 | 34 | 33
10 | 55 | 109 | 55 | 54
What immediately leaps out is that the number of leaf nodes is fib(n). What took a few more iterations to notice is that the number of interior nodes is fib(n) - 1. Therefore the total number of nodes is 2 * fib(n) - 1.
Since you drop the coefficients when classifying computational complexity, the final answer is θ(fib(n)).
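If you want to reproduce that table without a whiteboard, here is a small counting sketch (a hypothetical helper of my own, using the same convention as the table above, in which F(1) and F(2) are the leaves):

static long countNodes(int n)
{
    if (n <= 2)
        return 1;                                        // leaf node
    return 1 + countNodes(n - 1) + countNodes(n - 2);    // this node plus both subtrees
}

countNodes(10) returns 109, matching the TOT column, and in general it returns 2 * fib(n) - 1.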
It is bounded on the lower end by 2^(n/2) and on the upper end by 2^n (as noted in other comments). And an interesting fact of that recursive implementation is that it has a tight asymptotic bound of Fib(n) itself. These facts can be summarized:
T(n) = Ω(2^(n/2)) (lower bound)
T(n) = O(2^n) (upper bound)
T(n) = Θ(Fib(n)) (tight bound)
The tight bound can be reduced further using its closed form if you like.
It is simple to calculate by diagramming function calls. Simply add the function calls for each value of n and look at how the number grows.
The Big O is O(Z^n) where Z is the golden ratio or about 1.62.
Both the Leonardo numbers and the Fibonacci numbers approach this ratio as we increase n.
Unlike other Big O questions there is no variability in the input and both the algorithm and implementation of the algorithm are clearly defined.
There is no need for a bunch of complex math. Simply diagram out the function calls below and fit a function to the numbers.
Or if you are familiar with the golden ratio you will recognize it as such.
This answer is more correct than the accepted answer which claims that it will approach f(n) = 2^n. It never will. It will approach f(n) = golden_ratio^n.
2 (2 -> 1, 0)
4 (3 -> 2, 1) (2 -> 1, 0)
8 (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
14 (5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
22 (6 -> 5, 4)
(5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
(4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
The naive recursion version of Fibonacci is exponential by design due to repetition in the computation:
At the root you are computing:
F(n) depends on F(n-1) and F(n-2)
F(n-1) depends on F(n-2) again and F(n-3)
F(n-2) depends on F(n-3) again and F(n-4)
so at each level you have 2 recursive calls that redo a lot of the same computation. The time function will look like this:
T(n) = T(n-1) + T(n-2) + C, with C constant
T(n-1) = T(n-2) + T(n-3) > T(n-2) then
T(n) > 2*T(n-2)
...
T(n) > 2^(n/2) * T(1), i.e. T(n) = Ω(2^(n/2))
This is just a lower bound, which for the purpose of your analysis should be enough, but the real time function is within a constant factor of the same Fibonacci recurrence, and its closed form is known to be exponential in the golden ratio.
In addition, you can find optimized versions of Fibonacci using dynamic programming like this:
static int fib(int n)
{
    if (n <= 1)
        return n;              /* guard: keeps f[1] below in bounds for n = 0 */
    /* memory */
    int[] f = new int[n + 1];
    /* init */
    f[0] = 0;
    f[1] = 1;
    /* fill */
    for (int i = 2; i <= n; i++)
    {
        f[i] = f[i - 1] + f[i - 2];
    }
    return f[n];
}
That version is optimized and does only n steps, but it is also still exponential.
Cost functions are defined from the input size to the number of steps needed to solve the problem. When you see the dynamic version of Fibonacci (n steps to compute the table) or the easiest algorithm for deciding whether a number is prime (about sqrt(n) steps to try the candidate divisors of the number), you may think that these algorithms are O(n) or O(sqrt(n)), but this is simply not true, for the following reason:
The input to your algorithm is a number n. In binary notation, the input size of an integer n is log2(n), so make the change of variable
m = log2(n) // your real input size
and let's find the number of steps as a function of the input size:
m = log2(n)
2^m = 2^log2(n) = n
then the cost of your algorithm as a function of the input size is:
T(m) = n steps = 2^m steps
and this is why the cost is exponential in the input size.
Well, in my opinion it is O(2^n), since in this function only the recursion takes considerable time (divide and conquer). We see that the function above continues in a tree until the leaves are reached, i.e. when we get down to the level F(n-(n-1)), that is F(1). So, writing down the time complexity encountered at each depth of the tree, the summation series is:
1 + 2 + 4 + ... + 2^(n-1)
= 1*((2^n) - 1)/(2 - 1)
= 2^n - 1
which is of order 2^n, i.e. O(2^n).
No answer emphasizes probably the fastest and most memory efficient way to calculate the sequence. There is a closed form exact expression for the Fibonacci sequence. It can be found by using generating functions or by using linear algebra as I will now do.
Let f_1, f_2, ... be the Fibonacci sequence with f_1 = f_2 = 1. Now consider the sequence of two-dimensional vectors v_n = (f_n, f_{n+1}):
(f_1, f_2), (f_2, f_3), (f_3, f_4), ...
Observe that the next element v_{n+1} in the vector sequence is M.v_{n} where M is a 2x2 matrix given by
M = [0 1]
[1 1]
due to f_{n+1} = f_{n+1} and f_{n+2} = f_{n} + f_{n+1}
M is diagonalizable over the complex numbers (in fact over the reals as well, which is not true of every matrix). There are two distinct eigenvectors of M, given by
(1, x_1) and (1, x_2)
where x_1 = (1+sqrt(5))/2 and x_2 = (1-sqrt(5))/2 are the distinct solutions to the polynomial equation x*x-x-1 = 0. The corresponding eigenvalues are x_1 and x_2. Think of M as a linear transformation and change your basis to see that it is equivalent to
D = [x_1 0]
[0 x_2]
In order to find f_n find v_n and look at the first coordinate. To find v_n apply M n-1 times to v_1. But applying M n-1 times is easy, just think of it as D. Then using linearity one can find
f_n = 1/sqrt(5)*(x_1^n-x_2^n)
Since the norm of x_2 is smaller than 1, the corresponding term vanishes as n tends to infinity; therefore, taking the integer nearest to (x_1^n)/sqrt(5) is enough to find the answer exactly. By making use of the trick of repeated squaring, this can be done using only O(log_2(n)) multiplication (and addition) operations. The memory complexity is even more impressive, because it can be implemented so that you only ever need to hold one number in memory, whose value is smaller than the answer. However, since this number is not a natural number, the memory complexity changes depending on whether you use a fixed number of bits to represent each number (and hence compute with rounding error), which gives O(1) memory complexity, or use a more careful model such as a Turing machine, in which case some more analysis is needed.
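As a rough illustration of the repeated-squaring idea, here is a sketch of my own: it uses exact integer 2x2 matrix powers of M instead of the floating-point closed form, and with long arithmetic it only stays exact up to fib(92).

static long[][] multiply(long[][] a, long[][] b)
{
    return new long[][] {
        { a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1] },
        { a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1] }
    };
}

static long fibFast(int n)
{
    long[][] result = { { 1, 0 }, { 0, 1 } };    // identity matrix
    long[][] m = { { 0, 1 }, { 1, 1 } };         // the matrix M from above
    while (n > 0)                                // O(log2(n)) squarings
    {
        if ((n & 1) == 1)
            result = multiply(result, m);
        m = multiply(m, m);
        n >>= 1;
    }
    return result[0][1];                         // M^n = [[f(n-1), f(n)], [f(n), f(n+1)]]
}

fibFast(10) returns 55, and the loop body runs about log2(n) times, matching the O(log_2(n)) operation count claimed above.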

Cost analysis for implementing a stack as an array?

Please refer to answer 2 in the material above. I can follow the text up to that point, but I always seem to lose the conceptual thread when there's no illustration, maybe because I'm new to math notation.
I understand the cost of an expensive operation (doubling the array when the stack is full):
1 + 2 + 4 + 8 + ... + 2^i, where i is the index in that sequence. So index 0 = 1, index 1 = 2, index 2 = 4 and index 3 = 8.
I can see the sequence of costly operations, but I get confused by the following explanation.
Now, in any sequence of n operations, the total cost for resizing is
1 + 2 + 4 + 8 + ... + 2^i for some 2^i < n (if all operations are pushes
then 2^i will be the largest power of 2 less than n). This sum is at
most 2n − 1. Adding in the additional cost of n for
inserting/removing, we get a total cost < 3n, and so our amortised
cost per operation is < 3
I don't understand that explanation?
the total cost for resizing is 1 + 2 + 4 + 8 + ... + 2^i for some 2^i < n
What does "for some 2^i < n" mean?
Does it say that the number of operations n will always be larger than 2^i? And does n stand for the number of operations or for the length of the array?
And the following I just don't follow:
if all operations are pushes then 2^i will be the largest power of 2
less than n. This sum is at most 2n − 1.
Could someone illustrate this please?
n is the number of operations; if all of them are pushes it is also the largest stack size. The underlying array size at that moment is the smallest power of two with 2^(i+1) >= n, so the last expansion takes 2^i < n time.
For example, if the array reaches size n = 11, the last expansion grows it from 8 to 16, moving 8 items.
About the second question: by the sum of a geometric progression,
1 + 2 + 4 + 8 + ... + 2^i = 2^(i+1) - 1
and since 2^i < n, this is less than 2n - 1, which is the "at most 2n − 1" in the quoted explanation.
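To see these numbers concretely, here is a small sketch (hypothetical, not from the linked material) of a doubling-array stack that counts how many element copies resizing causes:

class CountingStack
{
    int[] data = new int[1];
    int size = 0;
    long copies = 0;                                 // total elements moved by resizes

    void push(int x)
    {
        if (size == data.length)
        {
            int[] bigger = new int[data.length * 2]; // double when full
            System.arraycopy(data, 0, bigger, 0, size);
            copies += size;                          // this resize moved 2^i elements
            data = bigger;
        }
        data[size++] = x;
    }
}

Pushing n = 1000 items leaves copies = 1 + 2 + 4 + ... + 512 = 1023, which is below 2n - 1, so the amortised copy cost per push stays under a small constant.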

Find out the Complexity of algorithm?

Find the complexity of an algorithm that counts the number of print statements produced by an algorithm which, given a positive integer n, prints 1 one time, 2 two times, 3 three times, ..., and n n times.
That is
1
2 2
3 3 3
……………
……………
n n n n ……..n (n times)
Assuming the problem is to find the algorithmic complexity of an algorithm which, given a number n, prints every number from 1 to n, printing 1 once, 2 twice, 3 thrice, and so on...
Your algorithmic complexity has an upper bound of O(n²).
This is because for each value k from 1 to n there are k prints, so the exact count is 1 + 2 + ... + n = (n² + n)/2; the tilde approximation is ~n²/2.
For n = 5, you print 1 + 2 + 3 + 4 + 5 times... which is 15.
For n = 6, you print 1 + 2 + 3 + 4 + 5 + 6 times... which is 21.
For n = 10, you print 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 times... which is 55 times.
Since the exact count is (n² + n)/2, the largest order of magnitude is n². You are better off stating the complexity as O(n²), because the n² term quickly outgrows the n term for large enough input sizes.
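For reference, a minimal sketch of the printing loop being analysed (my own transcription of the problem statement, not code from the question):

static void printPattern(int n)
{
    for (int i = 1; i <= n; i++)        // the value being printed
    {
        for (int j = 0; j < i; j++)     // printed i times
        {
            System.out.print(i + " ");
        }
        System.out.println();
    }
}

The inner body executes 1 + 2 + ... + n = n(n + 1)/2 times in total, which is why the complexity is O(n²).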

counting binary digit algorithm || prove big oh

Is the big-O O(log n)?
How can I prove it by using a summation?
// Input: A positive decimal integer n
// Output: The number of binary digits in n’s binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count
The value of n is reduced like this:
n + n/2 + n/4 + n/8 + ... + 8 + 4 + 2 + 1
The sum of the above series is 2^(log2(n)+1) - 1, roughly 2n.
Now to the question above. The number of times the loop executes equals the number of terms that appear in the above series, and that count is the time complexity of the algorithm.
The number of terms that appear in the series is about log2(n), so the algorithm's time complexity is O(log n).
Example:
n = 8; the corresponding series:
8 + 4 + 2 + 1 = 15 = 2^4 - 1 ≈ 2^(log2(8)+1)
Here, the number of terms in the series = 4
therefore,
time complexity = O(number of iterations)
                = O(number of terms in the series)
                = O(log n)
Just check the returned value count. Since it will be close to log2(N), you can state that the time complexity is O(log N).
Just think of it the reverse way (mathematically):
1 -> 2 -> 4 -> 8 -> ... -> N (the xth value, using a 0-indexed count)
2^x = N
x = log2(N)
You have to consider that larger numbers use more memory and take more processing for each operation. Note: time complexity only cares what happens for the largest values, not the smaller ones.
The number of iterations is log2(n); however, the cost of the n > 1 test and the n = n / 2 update is proportional to the size of the integer, i.e. 128-bit costs twice as much as 64-bit and 1024-bit is 16 times greater. So the cost of each operation is log(m), where m is the maximum unsigned value that the number of bits can store. If you assume the number of wasted bits is bounded, e.g. no more than 64, this means the cost is O(log(n) * log(n)), or O(log(n)^2).
If you used Java's BigInteger, that's what the time complexity would be.
Big-O complexity can easily be calculated by counting the number of times the while loop runs, since the operations inside the loop take constant time. In this case N varies as
N, N/2, N/4, N/8, ..., 1
Just counting the number of terms in the above series gives the number of times the loop runs. So,
N/2^p = 1 (p is the number of times the loop runs)
This gives p = log2(N), thus the complexity is O(log N).
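For anyone who wants to check the iteration count empirically, here is a direct Java transcription of the pseudocode (a sketch; the observation about the return value is mine, not from the question):

static int binaryDigitCount(int n)
{
    int count = 1;
    while (n > 1)
    {
        count = count + 1;
        n = n / 2;          // integer division = floor(n / 2)
    }
    return count;
}

binaryDigitCount(8) returns 4 and binaryDigitCount(1000000) returns 20; in general the loop runs floor(log2(n)) times, so both the running time and the returned count are Θ(log n).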

Big O Speed of Removing Duplicates from Linked List Without Buffer

The approach I'm referring to is the dual-pointer technique. Where the first pointer is a straightforward iterator and the second pointer goes through only all previous values relative to the first pointer.
That way, less work is done than if, for each node, we compared against every other node. Which would end up being O(n^2).
My question is what is the speed of the dual-pointer method and why?
So if you have N elements in the list, doing your de-duping on element i will require i comparisons (there are i values behind it). So, we can set up the total number of comparisons as sum[i = 0 to N] i. This summation evaluates to N(N+1)/2, which is strictly less than N^2 for N > 1.
Edit:
To solve the summation, you can approach it like this:
1 + 2 + 3 + 4 + ... + (n-2) + (n-1) + n
From here, you can match up numbers from opposite ends. Matching the 1 at the start with the n at the end, this becomes
2 + 3 + ... + (n-1) + (n+1)
Do the same with 2 and (n-1):
3 + ... + (n-1+2) + (n+1), which simplifies to
3 + ... + (n+1) + (n+1)
You can repeat this n/2 times, since you match up two numbers each time. That leaves n/2 occurrences of the term (n+1). Multiplying and simplifying, we get (n+1)(n/2), or n(n+1)/2.
See here for more description.
Also, this suggests this summation still has a big-theta of n^2.
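For reference, a minimal sketch of the dual-pointer approach being discussed (the Node class and method name are my own illustration, not from the question):

class Node
{
    int value;
    Node next;
    Node(int value) { this.value = value; }
}

static void removeDuplicates(Node head)
{
    Node prev = null;
    Node current = head;
    while (current != null)
    {
        Node runner = head;                 // second pointer: scans only earlier nodes
        boolean duplicate = false;
        while (runner != current)
        {
            if (runner.value == current.value)
            {
                duplicate = true;
                break;
            }
            runner = runner.next;
        }
        if (duplicate)
            prev.next = current.next;       // unlink current; prev is non-null here,
                                            // because the head can never be a duplicate
        else
            prev = current;
        current = current.next;
    }
}

The node at position i is compared against at most i earlier values, giving the triangular comparison count analysed above, which is Θ(n^2) but roughly half the work of comparing every node against every other node.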

Resources