Summation Run Time Analysis

A[] is of size n, B[][] is of size n x n
for i_{1,n} {
    for j_{1,n} {
        if (i <= j) -> B[i,j] = sum of the elements A[i], A[i+1], ..., A[j]
        else -> B[i,j] = 0
    }
}
I understand that the first two for loops are n iterations each.
My question is how to handle the if (i <= j) part. At most it will sum n elements (when i = 1 and j = n); at minimum it will do just one thing, thus 1.
I'm really lost.

I'm terrible with this theoretical run-time analysis stuff, but it might help to look at things another way:
for i_{1,n}
{
    for j_{1,n}
    {
        if (i <= j) -> B[i,j] = sum of A[i], A[i+1], ..., A[j]
        else -> B[i,j] = 0
    }
}
Which (I think) happens to be the same as:
for i_{1,n}
{
    for j_{1,n}
    {
        B[i,j] = 0
        if (i <= j)
        {
            for k_{i,j}
                B[i,j] += A[k];
        }
    }
}
That makes it a cubic-complexity algorithm. The third loop over k executes 0 times, then 1 time, then 2 times, and so on, up to n times; its upper bound is still n. Wrapped inside two more loops of n iterations each, our upper-bound complexity is n*n*n, or O(n^3).
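For the record, here is a minimal runnable C++ sketch of that triple loop (my own translation, using 0-based indices; buildB is just a name I picked):

#include <cstddef>
#include <vector>

// Build B from A with the triple loop above; the pseudocode's
// condition i <= j carries over unchanged to 0-based indices.
std::vector<std::vector<int>> buildB(const std::vector<int>& A) {
    const std::size_t n = A.size();
    std::vector<std::vector<int>> B(n, std::vector<int>(n, 0));
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < n; ++j) {
            if (i <= j) {
                for (std::size_t k = i; k <= j; ++k) { // up to n additions
                    B[i][j] += A[k];
                }
            }
        }
    }
    return B;
}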

Big O for this triple nested loop?

What's the big O of this?
for (int i = 1; i < n; i++) {
    for (int j = 1; j < (i*i); j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                // Simple computation
            }
        }
    }
}
Can't really figure it out. I'm inclined to say O(n^4 log(n)), but I feel like I'm wrong here.
This is quite a confusing analysis, so let's break it down bit by bit to make sense of the calculations:
The outermost loop runs for n-1 iterations (since 1 ≤ i < n).
The next loop inside it makes (i² - 1) iterations for each index i of the outer loop (since 1 ≤ j < i²).
In total, this means the number of iterations for these two loops equals the sum of (i² - 1) over all 1 ≤ i < n. This is similar to computing the sum of the first n squares, which is on the order of O(n³).
Note the modulo operator % takes constant time (O(1)) to compute, therefore checking the condition if (j % i == 0) for all iterations of these two loops will not affect the O(n³) runtime.
Now let's talk about the inner loop inside the conditional.
We are interested in seeing how many times (and for which values of j) this if condition evaluates to true, since this would dictate how many iterations the innermost loop will run.
Practically speaking, (j % i) will never equal 0 if j < i, so the second loop could actually be shortened to start from i rather than from 1; however, this does not impact the Big-O upper bound of the algorithm.
Notice that for a given number i, (j % i == 0) if and only if i is a divisor of j. Since our range is (1 ≤ j < i²), there will be a total of (i-1) values of j for which this will be true, for any given i. If this is confusing, consider this example:
Let's assume i = 4. Then our index j would iterate through all values 1, ..., 15 (since j < i² = 16),
and (j % i == 0) would be true for j = 4, 8, 12 - exactly (i - 1) values.
The innermost loop would therefore make a total of (12 + 8 + 4 = 24) iterations. Thus for a general index i, we would look for the sum: i + 2i + 3i + ... + (i-1)i to indicate the number of iterations the innermost loop would make.
And this could be generalized by calculating the sum of this arithmetic progression. The first value is i and the last value is (i-1)i, which results in a sum of (i³ - i²)/2 iterations of the k loop for every value of i. In turn, the sum of this for all values of i could be computed by calculating the sum of cubes and the sum of squares - for a total runtime of O(n⁴) iterations of the innermost loop (the k loop) for all values of i.
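Spelled out, the per-i sum and the total over all i (written here in LaTeX) are:

\sum_{m=1}^{i-1} m\,i \;=\; i \cdot \frac{(i-1)\,i}{2} \;=\; \frac{i^3 - i^2}{2},
\qquad
\sum_{i=1}^{n-1} \frac{i^3 - i^2}{2} \;=\; \Theta(n^4)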
Thus in total, the runtime of this algorithm is the sum of both runtimes we calculated above. We checked the if statement O(n³) times and the innermost loop ran for O(n⁴) iterations, so assuming // Simple computation runs in constant time, our total runtime comes down to:
O(n³) + O(n⁴)*O(1) = O(n⁴)
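If you want to sanity-check this empirically, here is a small C++ counter I put together (the loop structure copies the snippet from the question; countInner is my own name). Doubling n should multiply the count by roughly 2⁴ = 16:

#include <cstdio>

// Count how many times the innermost loop body runs for a given n.
long long countInner(int n) {
    long long count = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < i * i; j++) {
            if (j % i == 0) {
                for (int k = 0; k < j; k++) {
                    count++; // stands in for "Simple computation"
                }
            }
        }
    }
    return count;
}

int main() {
    for (int n = 25; n <= 200; n *= 2) {
        std::printf("n = %3d  inner iterations = %lld\n", n, countInner(n));
    }
    return 0;
}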
Let us assume that i = 2. Then j can be [1, 2, 3], and the "k" loop will run for j = 2 only.
Similarly, for i = 3, j can be [1, 2, 3, 4, 5, 6, 7, 8]; hence, the k loop runs for j = 3 and j = 6. You can see a pattern here: for any value of i, the 'k' loop will run (i-1) times, with lengths [i, 2*i, 3*i, ..., (i-1)*i].
Hence the number of iterations of the k loop for a given i is
= i + (2*i) + (3*i) + ... + ((i-1)*i)
= (i^3 - i^2)/2
Summing this over all values of i gives the final complexity
= Θ(n^4)

Time complexity of an algorithm with two nested loops

Given this algorithm:
m = 1
while (a > m*b) {
    m = m*2
}
while (a >= b) {
    while (a >= m*b) {
        a = a - m*b
    }
    m = m/2
}
My question : What is the time complexity of this algorithm ?
What I have done: I have to find the number of instructions. I found that the first while loop makes approximately log_2(a/b) iterations. For the inner while of the second part of the algorithm, I found the pattern a_i = a - i*m*b, where i is the number of iterations, so the inner while makes about a/(m*b) iterations.
But I don't know how to handle the outer loop, because its condition depends on what the inner while has done to a.
Let's begin by "normalizing" the function in the same way as in your previous question, noting that once again all changes in a and stopping conditions are proportional to b:
n = a/b
// 1)
m = 1
while (n > m) {
    m = m*2
}
// 2)
while (n >= 1) {
    while (n >= m) {
        n = n - m
    }
    m = m/2
}
Unfortunately, this is where the similarity ends...
Snippet 1)
Note that m can be written as an integer power of 2, since it doubles every loop:
i = 0
while (n > pow(2, i)) {
    i++
}
// m = pow(2, i)
From the stopping condition, the loop exits as soon as pow(2, i) >= n, i.e. after i = ceil(log2(n)) iterations, so snippet 1) is O(log n).
Snippet 2)
Here m decreases in the exact opposite way to 1), so it can again be written as a power of 2:
// using i from the end of 1)
while (n >= 1) {
    k = pow(2, i)
    while (n >= k) {
        n = n - k
    }
    i--
}
The inner loop is simpler than the inner loop from your previous question, because m does not change inside it. It is easy to deduce the number of times c it executes, and the value of n at the end: c = floor(n / k), and n ends up as n - c*k, i.e. n mod k.
This is the exact definition of the Modulus operator % in the "C-family" of languages:
while (n >= 1) {
    k = pow(2, i)
    n = n % k // time complexity O(n / k) here instead of O(1)
    i--
}
Note that, because consecutive values of k only differ by a factor of 2, at no point will the value of n be greater than or equal to 2k; this means that the inner loop executes at most once per outer loop. Therefore the outer loop executes at most i times.
Both the first and second loops are O(log n), which means the total time complexity is O(log n) = O(log [a/b]).
Update: numerical tests in JavaScript, as before.
function T(n)
{
    let t = 0;
    let m = 1;
    while (n > m) {
        m *= 2; t++;
    }
    while (n >= 1) {
        while (n >= m) {
            n -= m; t++;
        }
        m /= 2;
    }
    return t;
}
Plotting T(n) against log(n) shows a nice straight line.
Edit: a more thorough explanation of snippet 2).
At the end of snippet 1), the value of i = ceil(log2(n)) represents the number of significant bits in the binary representation of the integer ceil(n).
Computing the modulus of an integer with a positive power of two 2^i is equivalent to discarding all but the i least significant bits. For example:
n     = ...00011111111 (binary)
m     = ...00000100000 (= 2^5)
n % m = ...00000011111 (the 5 least significant bits)
The operation of snippet 2) is therefore equivalent to removing the most significant bit of n, one at a time, until only zero is left. For example:
outer loop no | n
----------------------------
1 | ...110101101
| ^
2 | ...010101101
| ^
3 | ...000101101
| ^
4 | ...000001101
| ^
: | :
: | :
i (=9) | ...000000001
| ^
----------------------------
final | 000000000
When the current most significant bit (pointed to by ^) is:
0: the inner loop does not execute, because the value of n is already smaller than k = 2^i (the positional value of ^).
1: the inner loop executes exactly once, because n is greater than or equal to k but less than 2k (which corresponds to the bit above the current position ^).
Hence the "worst" case occurs when all significant bits of n are 1, in which case the inner loop always executes once per outer iteration.
Regardless, the outer loop executes ceil(log2(n)) times for any value of n.
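To make the bit-stripping picture concrete, here is a tiny C++ demo (my own sketch, C++14 for the binary literal; the example value matches the table above):

#include <cstdio>

int main() {
    unsigned n = 0b110101101u;  // = 429, the example value from the table
    int i = 0;
    while ((1u << i) < n) {     // snippet 1): i ends up as ceil(log2(n)) = 9
        i++;
    }
    int outer = 0;
    while (n >= 1) {            // snippet 2): one modulus per bit position
        n %= (1u << i);         // drops the current leading bit, if set
        i--;
        outer++;
    }
    // Prints 10 for n = 429: ceil(log2(n)) = 9 steps that strip bits,
    // plus one initial no-op step where 2^i >= n already holds.
    std::printf("outer iterations: %d\n", outer);
    return 0;
}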

How to calculate time complexity for conditional statement and loops

The pseudocode is below. How do I calculate the time complexity of this program?
Algorithm MinValue(A, n):
    Input: An integer array A of size n    // 1
    Output: The smallest value in A
    minValue <- A[0]                       // 1
    for k = 1 to n-1 do                    // n
        if (minValue > A[k]) then          // n-1
            minValue <- A[k]               // 1
    return minValue                        // 1
So it's 1 + 1 + n + (n-1) + 1 + 1 = 2n + 3. Is that correct?
Here is a simpler program:
Algorithm MaxInt(a, b):
    Input: Two integers a and b    // 1
    Output: The larger of the two integers
    if a > b then                  // 1
        return a                   // 1
    else
        return b                   // 1
Total operations = 4. Is that correct?
Could anyone tell me the correct answer? Thanks
You were close.
In the first program, the number of times minValue <- A[k] is executed can be n-1 in the worst case (if the numbers are sorted in descending order).
Therefore the total number of operations is bound by 1 + n-1 + n-1 + n-1 + 1 = 3*n-1 = O(n).
For the second program, either return a or return b is executed. Therefore the number of operations is 2 (the condition + the chosen return statement).
Complexity of a conditional statement is maximum of complexities of the branches, plus complexity of the condition expression.
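For what it's worth, here is the first algorithm as a minimal C++ sketch with a counter added for the loop-body assignments (the counter and the names are my additions, not part of the pseudocode):

#include <cstdio>

// MinValue, with a counter for the assignments inside the loop.
int minValue(const int a[], int n, int* assignments) {
    int min = a[0];
    *assignments = 0;
    for (int k = 1; k < n; k++) {
        if (min > a[k]) {
            min = a[k];
            (*assignments)++; // runs n-1 times on descending input
        }
    }
    return min;
}

int main() {
    int worst[] = {5, 4, 3, 2, 1}; // sorted descending: the worst case
    int count = 0;
    int m = minValue(worst, 5, &count);
    std::printf("min = %d, loop-body assignments = %d\n", m, count); // 1 and 4
    return 0;
}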

Finding the temporal complexity of an exponential algorithm

Problem: Find the best way to cut a rod of length n.
Each cut is of integer length.
Assume that each rod of length i has a price p(i).
Given: a rod of length n, and a list of prices p, which provides the price of each possible integer length between 0 and n.
Find the best set of cuts to get the maximum price.
You can use any number of cuts, from 0 to n-1.
There is no cost for a cut.
Below I present a naive algorithm for this problem.
CUT-ROD(p, n)
    if (n == 0)
        return 0
    q = -infinity
    for i = 1 to n
        q = max(q, p[i] + CUT-ROD(p, n-1))
    return q
How can I prove, step by step, that this algorithm is exponential?
I can see that it is exponential; however, I'm not able to prove it.
Let's translate the code to C++ for clarity:
#include <algorithm>

int prices[100]; // assume n <= 100; prices[i] is the price of length i+1

int cut_rod(int n) {
    if (n == 0) {
        return 0;
    }
    int q = -1;
    int res = cut_rod(n - 1);
    for (int i = 0; i < n; i++) {
        q = std::max(q, prices[i] + res);
    }
    return q;
}
Note: we cache the result of cut_rod(n-1) in res to avoid unnecessarily increasing the complexity of the algorithm. Here, we can see that cut_rod(n) calls cut_rod(n-1), which calls cut_rod(n-2), and so on down to cut_rod(0). For cut_rod(n), the function iterates over the array n times. Therefore the time complexity of the algorithm is O(n + (n-1) + (n-2) + ... + 1) = O(n(n+1)/2) = O(n²).
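As a recurrence (written in LaTeX), the cached version is:

T(n) = T(n-1) + \Theta(n), \quad T(0) = \Theta(1)
\;\Longrightarrow\;
T(n) = \Theta\!\left(\sum_{k=1}^{n} k\right) = \Theta(n^2)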
EDIT:
If we use the exact same algorithm as the one in the question (without caching), its time complexity is O(n!), since cut-rod(n) calls cut-rod(n-1) n times, cut-rod(n-1) calls cut-rod(n-2) n-1 times, and so on. Therefore the time complexity is O(n*(n-1)*(n-2)*...*1) = O(n!).
I am unsure if this counts as a step-by-step solution, but it can be shown easily by induction/substitution. Just assume T(i) = 2^i for all i < n; then we show that it holds for n:
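This works for the recursion written in its standard form, CUT-ROD(p, n-i), counting calls with T(0) = 1 (in LaTeX):

T(n) = 1 + \sum_{i=1}^{n} T(n-i) = 1 + \sum_{j=0}^{n-1} T(j) = 1 + \sum_{j=0}^{n-1} 2^{j} = 1 + (2^n - 1) = 2^n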

Finding Θ for an algorithm

I have the pseudocode below, which takes a given unsorted array of length size and finds the range by finding the max and min values in the array. I'm just learning about the various time-efficiency methods, but I think the code below is Θ(n), since each additional element adds a fixed number of actions (2).
For example, ignoring the actual assignments to max and min (as the unsorted array is arbitrary and these assignments are unknown in advance), an array of length 2 requires only 5 actions total (including the final range calculation). An array of length 4 uses only 9 actions total, again including the final range calculation. An array of length 12 uses 25 actions.
This all points me to Θ(n), as it is a linear relationship. Is this correct?
Pseudocode:
// Traverse each element of the array, storing the max and min values
// Assuming int size exists that is the size of array a[]
// Assuming the array is a[]
min = a[0];
max = a[0];
for (i = 0; i < size; i++) {
    if (min > a[i]) {   // If the current min is greater than a[i],
        min = a[i];     // replace min with a[i]
    }
    if (max < a[i]) {   // If the current max is smaller than a[i],
        max = a[i];     // replace max with a[i]
    }
}
range = max - min;      // range is the largest value minus the smallest
You're right. It's O(n).
An easy way to tell in simple code (like the one above) is to see how many for() loops are nested, if any. For every "normal" loop (from i = 0 -> n), you add a factor of n.
[Edit2]: That is, if you have code like this:
array a[n];                         // Array with n elements.
for (int i = 0; i < n; ++i) {       // Happens n times.
    for (int j = 0; j < n; ++j) {   // Happens n*n times.
        // something                // Happens n*n times.
    }
}
// Overall complexity is O(n^2)
Whereas
array a[n];                     // Array with n elements.
for (int i = 0; i < n; ++i) {   // Happens n times.
    // something                // Happens n times.
}
for (int j = 0; j < n; ++j) {   // Happens n times.
    // something                // Happens n times.
}
// Overall complexity is O(2n) = O(n)
This is pretty rudimentary, but useful if someone has not taken an Algorithm course.
The constant-time procedures within your for() loop are irrelevant to a complexity question.
[Edit]: This assumes that size actually means the size of array a.
Yes, this would be Θ(n). Your reasoning is a little skewed though.
You have to look at every item in your loop so you're bounded above by a linear function. Conversely, you are also bounded below by a linear function (the same one in fact), because you can't avoid looking at every element.
O(n) only requires that you bound from above; Ω(n) requires that you bound from below.
Θ(n) says you're bounded on both sides.
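Formally (these are the standard definitions, written in LaTeX, with f(n) the operation count):

f(n) = O(g(n)) \iff \exists\, c, n_0 > 0 : f(n) \le c\,g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c, n_0 > 0 : f(n) \ge c\,g(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))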
Let size be n. Then it's clear that you always have 2n comparisons, plus the single assignment at the end, so you always have at least 2n + 1 operations in this algorithm.
In the worst-case scenario, you have 2n assignments, thus 2n + 1 + 2n = 4n + 1 = O(n).
In the best-case scenario, you have 0 assignments, thus 2n + 1 + 0 = 2n + 1 = Ω(n).
Therefore, both the best and worst cases perform in linear time. Hence, Θ(n).
Yes, this surely is an O(n) algorithm. I don't think you really need to drill down into the exact number of comparisons to reach a conclusion about the complexity. Just look at how the number of comparisons changes with increasing input size: for O(n), the comparisons should increase linearly with the input; for O(n^2), they increase by some multiple of n; and so on.
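A quick way to run that doubling test on the min/max scan (my own sketch; countComparisons is a name I made up, mirroring the pseudocode above):

#include <cstddef>
#include <cstdio>
#include <vector>

// Count the comparisons made by the min/max scan over an array.
long long countComparisons(const std::vector<int>& a) {
    long long comparisons = 0;
    int min = a[0], max = a[0];
    for (std::size_t i = 0; i < a.size(); i++) {
        comparisons++;
        if (min > a[i]) min = a[i];
        comparisons++;
        if (max < a[i]) max = a[i];
    }
    return comparisons; // always 2n, regardless of the contents
}

int main() {
    // Doubling n should double the comparison count: a linear relationship.
    for (std::size_t n = 1000; n <= 8000; n *= 2) {
        std::vector<int> a(n, 0);
        std::printf("n = %5zu  comparisons = %lld\n", n, countComparisons(a));
    }
    return 0;
}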
