Time complexity O(n) or O(n(n+1)/2) - algorithm

What is the complexity of an algorithm that loops over n items (like an array), then over (n-1), then (n-2), and so on? Like:
void Loop(int[] array) {
    for (int i = 0; i < array.Length; i++) {
        // do something that takes O(1) time
    }
}

void Main() {
    Loop(new int[] {1, 2, 3, 4});
    Loop(new int[] {1, 2, 3});
    Loop(new int[] {1, 2});
    Loop(new int[] {1});
    // What is the complexity of this code?
}
What is the complexity of the previous program?

Assuming that what you do in the loop is O(1), the complexity of this is O(n + (n-1) + (n-2) + ... + 1) = O(n(n+1)/2) = O(0.5n^2 + 0.5n) = O(n^2).
The first = is due to the arithmetic series sum.
The second = is due to expanding the multiplication.
The third = is due to the fact that, given a polynomial inside an O(), you can simply replace it with n^highest_power.

Formula:
    n + ... + 1 = n*(n+1)/2
Proof:
    S = n + (n-1) + ... + 2 + 1
    2*S = (n + (n-1) + ... + 2 + 1) + (1 + 2 + ... + (n-1) + n)
        = (n+1) + ((n-1)+2) + ... + (2+(n-1)) + (1+n)
        = (n+1) + (n+1) + ... + (n+1)    <- n terms of (n+1)
        = n*(n+1)
    S = n*(n+1)/2 = (n*n + n)/2
But:
    n*n/2 < (n*n + n)/2 = S <= (n*n + n*n)/2 = n*n
Our sum is lower than (or, for n=1, equal to) n*n; call this fact (1). It holds for every n >= 1, though it would be enough for it to hold for every n > n0. It relies on the fact that n >= 1 implies n*n >= n.
n*n is in O(n^2); call this fact (2).
From (1) and (2), our sum is in O(n^2).
If we use the lower limit (n*n/2), we can also say that it is in Ω(n^2) and then in Θ(n^2).
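To make the bound concrete, here is a small Java sketch (my own illustration, not from the question) that counts the iterations performed by the shrinking loops and compares the total against n*(n+1)/2:

public class TriangularCount {
    // Counts the iterations done by Loop(n items), Loop(n-1 items), ..., Loop(1 item).
    static long countIterations(int n) {
        long count = 0;
        for (int size = n; size >= 1; size--) {  // one pass per Loop(...) call
            for (int i = 0; i < size; i++) {     // the O(1) loop body
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 10, 100, 1000}) {
            long actual = countIterations(n);
            long formula = (long) n * (n + 1) / 2;
            System.out.println("n=" + n + ": iterations=" + actual + ", n(n+1)/2=" + formula);
        }
    }
}

For every n the two numbers agree, and the ratio iterations/n^2 approaches 1/2, consistent with the Θ(n^2) result.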
Formal definition
You can also prove it based on the formal definition, but I found the explanation above more intuitive.
f(n) = O(g(n)) means there are positive constants c and n0, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0. The values of c and n0 must be fixed for the function f and must not depend on n.
f(n) = (n*n+n)/2
g(n) = n*n
Just choose n0 = 1 and c = 2, and you get:
0 ≤ (n*n+n)/2 ≤ 2*n*n
0 ≤ n*n+n ≤ 4*n*n
0 ≤ n ≤ 3*n*n
which is obviously true for every n ≥ n0=1.
In general, if you have trouble choosing the constants, use bigger values, e.g. n0 = 10, c = 100; that sometimes makes the inequality more obviously true.
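As a quick numerical sanity check of the chosen constants (an illustration I added, assuming f(n) = (n*n+n)/2 and g(n) = n*n as above):

public class BigOConstantsCheck {
    public static void main(String[] args) {
        final long c = 2, n0 = 1;  // the constants chosen above
        boolean holds = true;
        for (long n = n0; n <= 1_000_000; n++) {
            long f = (n * n + n) / 2;  // f(n)
            long g = n * n;            // g(n)
            if (f < 0 || f > c * g) {  // check 0 <= f(n) <= c*g(n)
                holds = false;
                break;
            }
        }
        System.out.println("0 <= f(n) <= c*g(n) for all tested n >= n0: " + holds);
    }
}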

Related

How to find the big theta?

Here's a code segment I'm trying to find the big theta for:
i = 1
while i ≤ n do                # loops Θ(n) times
    A[i] = i
    i = i + 1
for j ← 1 to n do             # loops Θ(n) times
    i = j
    while i ≤ n do            # loops n times at worst (j = 1), once at best (j = n)
        A[i] = i
        i = i + j
So given that the inner while loop will be a summation from 1 to n, the big theta is Θ(n^2). So does that mean the big theta is Θ(n^2) for the entire code?
The first while loop and the inner while loop should be equal to Θ(n) + Θ(n^2), which should just equal Θ(n^2).
Thanks!
for j = 1 to n step 1
    for i = j to n step j
        # constant time op
The double loop is O(n⋅log(n)) because the number of iterations in the inner loop falls inversely to j. Counting the total number of iterations gives:
floor(n/1) + floor(n/2) + ... + floor(n/n) <= n⋅(1/1 + 1/2 + ... + 1/n) ∼ n⋅log(n)
The partial sums of the harmonic series have logarithmic behavior asymptotically, so the above shows that the double loop is O(n⋅log(n)). That can be strengthened to Θ(n⋅log(n)) with a math argument involving the Dirichlet Divisor Problem.
[ EDIT ] For an alternative derivation of the lower bound that establishes the Θ(n⋅log(n)) asymptote, it is enough to use the < part of the x - 1 < floor(x) <= x inequality, avoiding the more elaborate math (linked above) that gives the exact expression.
floor(n/1) + floor(n/2) + ... + floor(n/n) > (n/1 - 1) + (n/2 - 1) + ... + (n/n - 1)
                                           = n⋅(1/1 + 1/2 + ... + 1/n) - n
                                           ∼ n⋅log(n) - n
                                           ∼ n⋅log(n)
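To see the harmonic behavior numerically, here is a small Java sketch (my own addition, not part of the answer) that counts the inner-loop iterations of the double loop and compares them with n⋅ln(n):

public class HarmonicCount {
    // Counts iterations of: for j = 1..n, for i = j..n step j.
    static long countIterations(int n) {
        long count = 0;
        for (int j = 1; j <= n; j++) {
            for (int i = j; i <= n; i += j) {
                count++;  // stands in for the constant time op
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 1000, 100000}) {
            long actual = countIterations(n);
            double estimate = n * Math.log(n);
            System.out.printf("n=%d: iterations=%d, n*ln(n)=%.0f%n", n, actual, estimate);
        }
    }
}

The ratio of the two columns approaches 1 as n grows, consistent with the Θ(n⋅log(n)) bound.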

Confused about a nested loop having linear complexity (Big-O = O(n)) but I worked it to be logarithmic

Computing the complexity and Big O of an algorithm:
T(n) = 5n log n + 3 log n + 2 // logs to the base 2, so Big O = O(n log n)
for (int x = 0, i = 1; i <= N; i *= 2)
{
    for (int j = 1; j <= i; j++)
    {
        x++;
    }
}
The expected Big O was linear, whereas mine is logarithmic.
Your Big-Oh analysis is not correct. While it is true that the outer loop is executed log n times, the inner loop is linear in i at each iteration.
If you count the total number of iterations of the inner loop, you will see that the whole thing is linear:
The inner loop will do 1 + 2 + 4 + 8 + 16 + ... + (the last power of 2 <= N) iterations. This sum will be between N and 2*N, which makes the whole loop linear.
Let me explain why your analysis is wrong.
It is clear that the inner loop will execute 1 + 2 + 4 + ... + 2^k times, where k is the biggest integer satisfying 2^k <= N. This implies that the upper bound for k is log2(N).
Without loss of generality we can take the upper bound for k and assume that k = log2(N) is an integer; the complexity then equals 1 + 2 + 4 + ... + 2^log2(N), which is a geometric series, so it is equal to 2^(log2(N)+1) - 1 = 2N - 1.
Therefore, in O notation it is O(N).
First, you should notice that your analysis is not logarithmic! N log N is not logarithmic.
Also, the time complexity is T(N) = sum_{j=0}^{log(N)} 2^j (as the value of i doubles each time). Hence, T(N) = 2^(log(N)+1) - 1 = 2N - 1 = Θ(N).
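A quick way to convince yourself is to count the total work directly. This small sketch (my own illustration) prints the final value of x, which always lands between N and 2N:

public class DoublingLoop {
    public static void main(String[] args) {
        for (int N : new int[] {10, 1000, 1_000_000}) {
            long x = 0;
            for (long i = 1; i <= N; i *= 2) {    // runs about log2(N)+1 times
                for (long j = 1; j <= i; j++) {   // runs i times
                    x++;                          // total work: 1 + 2 + 4 + ... <= 2N - 1
                }
            }
            System.out.println("N=" + N + ": x=" + x + ", 2N=" + (2L * N));
        }
    }
}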

What's the complexity of sum_{i=0}^{n} (a_i * i)?

This is a test I failed because I thought this complexity would be O(n), but it appears I'm wrong and it's O(n^2). Why not O(n)?
First, notice that the question does not ask for the time complexity of a function calculating f(n), but rather the complexity of the function f(n) itself. You can think of f(n) as the time complexity of some other algorithm if you are more comfortable talking about time complexity.
This is indeed O(n^2), when the sequence a_i is bounded by a constant and each a_i is at least 1.
By the assumption, for all i, a_i <= c for some constant c.
Hence, a_1*1+...+a_n*n <= c * (1 + 2 + ... + n). Now we need to show that 1 + 2 +... + n = O(n^2) to complete the proof.
1 + 2 + ... + n <= n + n + ... + n = n * n = n ^ 2
and
1 + 2 + ... + n >= n / 2 + (n / 2 + 1) + ... + n >= (n / 2) * (n / 2) = n^2/4
So the complexity is actually Theta(n^2).
Note that if a_i were not constant, e.g. a_i = i, then the result would not be correct.
In that case, f(n) = 1^2 + 2^2 + ... + n^2, and you can show easily (using the same method as before) that f(n) = Omega(n^3), which means it's not O(n^2).
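To illustrate the two regimes, here is a short sketch (my own addition) that evaluates f(n) for a constant a_i and for a_i = i, normalizing by n^2 and n^3 respectively:

public class SumGrowth {
    public static void main(String[] args) {
        for (int n : new int[] {100, 1000, 10000}) {
            long boundedCase = 0, growingCase = 0;
            for (long i = 1; i <= n; i++) {
                boundedCase += 2 * i;  // a_i = 2, bounded by a constant
                growingCase += i * i;  // a_i = i, not bounded
            }
            System.out.printf("n=%d: f/n^2 (a_i=2) = %.3f, f/n^3 (a_i=i) = %.3f%n",
                    n, boundedCase / Math.pow(n, 2), growingCase / Math.pow(n, 3));
        }
    }
}

The first ratio settles near 1 (Θ(n^2)) and the second near 1/3 (Θ(n^3)), matching the analysis above.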
Preface: I'm not super great with complexity theory, but I'll take a stab.
I think what is confusing is that it's not a time complexity problem, but rather the function's complexity.
For the easy part, i just goes up to n, i.e. 1, 2, 3, ..., n. For a_i, all entries must be above 0, so a could be something like 2, 5, 1, ... for n entries. Multiplying a constant-bounded a_i by an index that can be as large as n, over n terms, gives n*n = O(n^2).
Even in the best case, where every a_i is 1, the sum is 1 + 2 + ... + n = n(n+1)/2, which is still Θ(n^2).
Unless something stronger is assumed about a_i (e.g. that it shrinks as i grows), it's definitely not O(n).
Here is another attempt that exhibits the O(n*n) behavior, if the sum should be returned as a result:
int sum = 0;
for (int i = 0; i <= n; i++) {
    for (int j = 0; j <= n; j++) {
        if (i == j) {
            sum += A[i] * j;
        }
    }
}
return sum;

Computing expected time complexity of recursive program

I wish to determine the average processing time T(n) of the recursive algorithm:
int myTest(int n) {
    if (n <= 0) {
        return 0;
    }
    else {
        int i = random(n - 1);
        return myTest(i) + myTest(n - 1 - i);
    }
}
provided that the algorithm random( int n ) spends one time unit to return
a random integer value uniformly distributed in the range [0, n] whereas
all other instructions spend a negligibly small time (e.g., T(0) = 0).
This is certainly not of the simpler form T(n) = a*T(n/b) + c, so I am lost as to how to write it: each call draws a random number i from the range 0 to n-1, recurses on both i and n-1-i, and sums the results of those two calls.
The recurrence relations are:
T(0) = 0
T(n) = 1 + sum(T(i) + T(n-1-i) for i = 0..n-1) / n
The second can be simplified to:
T(n) = 1 + 2*sum(T(i) for i = 0..n-1) / n
Multiplying by n:
n T(n) = n + 2*sum(T(i) for i = 0..n-1)
Noting that (n-1) T(n-1) = n-1 + 2*sum(T(i) for i = 0..n-2), we can subtract this from the equation above so that the sums cancel:
n T(n) - (n-1) T(n-1) = 1 + 2*T(n-1)
n T(n) = (n+1) T(n-1) + 1
Or:
T(n) = ((n+1)T(n-1) + 1) / n
This has the solution T(n) = n, which you can derive by telescoping the series, or by guessing the solution and then substituting it in to prove it works: if T(n-1) = n-1, then ((n+1)(n-1) + 1)/n = (n^2 - 1 + 1)/n = n.
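As an empirical check (my own sketch, not part of the answer), one can run the recursion and count one time unit per random(...) call. Interestingly, the count is not just n on average; T(n) = n holds exactly on every run, since each call contributes 1 unit and splits the remaining n-1 items between its two children:

import java.util.Random;

public class ExpectedTimeCheck {
    static final Random RNG = new Random();

    // Returns the number of random() calls made by myTest(n).
    static long cost(int n) {
        if (n <= 0) {
            return 0;
        }
        int i = RNG.nextInt(n);  // plays the role of random(n - 1): uniform in [0, n-1]
        return 1 + cost(i) + cost(n - 1 - i);
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            System.out.println("n=" + n + ": cost=" + cost(n) + " (T(n)=" + n + ")");
        }
    }
}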

Big-O complexity of algorithms

I'm trying to figure out the exact big-O value of algorithms. I'll provide an example:
for (int i = 0; i < n; i++)      // 2N + 2
{
    for (int x = i; x < n; x++)  // N * 2N + 2 ?
    {
        sum += i;                // N
    }
}                                // Extra N?
So if I break some of this down: int i = 0 would be O(1), i < n is N+1, i++ is N; multiply the inner loop by N:
2N + 2 + N(1 + N + 1 + N) = 2N^2 + 2N + 2N + 2 = 2N^2 + 4N + 2
Add an N^2 for the sum statement and an N for the loop termination, giving 3N^2 + 5N + 2...
Basically, I'm not 100% sure how to calculate the exact O notation for an algorithm; my guess is O(3N^2 + 5N + 2).
What do you mean by exact? Big O is an asymptotic upper bound, so it's by definition not exact.
Thinking about i=0 as O(1) and i<n as O(N+1) is not correct. Instead, think of the outer loop as doing something n times, and for every iteration of the outer loop, the inner loop is executed at most n times. The calculation inside the loop takes constant time (the calculation is not getting more complex as n gets bigger). So you end up with O(n*n*1) = O(n^2), quadratic complexity.
When asking about "exact", you're running the inner loop from 0 to n, then from 1 to n, then from 2 to n, ... , from (n-1) to n, each time doing a constant time operation. So you do n + (n-1) + (n-2) + ... + 1 = n*(n+1)/2 = n^2/2 + n/2 iterations. To get from the exact number of calculations to big O notation, omit constants and lower-order terms, and you'll end up with O(n^2) (the 1/2 and +n/2 are omitted).
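Instrumenting the question's loop with a counter (a sketch I added) confirms that exact count of inner iterations:

public class ExactIterationCount {
    public static void main(String[] args) {
        for (int n : new int[] {4, 10, 100}) {
            long sum = 0, iterations = 0;
            for (int i = 0; i < n; i++) {
                for (int x = i; x < n; x++) {
                    sum += i;      // the constant-time body
                    iterations++;  // count how often it runs
                }
            }
            System.out.println("n=" + n + ": inner iterations=" + iterations
                    + ", n(n+1)/2=" + ((long) n * (n + 1) / 2));
        }
    }
}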
Big O describes the worst-case complexity.
And here the worst case occurs when both loops run n times, i.e. n*n.
So the complexity is O(n^2).
