Time complexity of nested for-loop with two independent variables - algorithm

I am trying to calculate the time complexity of this function:
int fun1(A, k){              // A is an array, k is an integer
    n = length(A);           // length of array
    for j := 1 to k {
        for i := j to n {
            if A[i] < A[j-1] {
                x := A[j-1];
                A[j-1] := A[i];
                A[i] := x
            }
        }
    }
    return A[k-1]
}
We iterate k times in the outer loop, but how do I calculate the number of iterations of the inner loop, and then the time complexity of the entire algorithm?

Since the work inside the double loop is constant and assuming that k <= n, the complexity can be written as the total number of inner-loop iterations: for a fixed j the inner loop runs from i = j to n, i.e. (n - j + 1) times, so summing over the outer loop gives
n + (n-1) + ... + (n-k+1) = k*n - k*(k-1)/2
which is O(n*k).
Edit: I forgot the constant, but it doesn't matter as it will disappear inside the big-O notation.
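If you want to check the count empirically, here is a minimal Java sketch of my own (not from the question; the class name Fun1Count and the iteration counter are purely illustrative, and the indices are shifted so that all accesses stay inside a 0-based Java array). It mirrors the pseudocode and counts how many times the inner-loop body runs:

public class Fun1Count {
    static long innerIterations;   // how many times the inner-loop body runs

    static int fun1(int[] a, int k) {
        int n = a.length;
        innerIterations = 0;
        for (int j = 1; j <= k; j++) {
            for (int i = j; i <= n; i++) {   // (n - j + 1) iterations for each j
                innerIterations++;
                if (a[i - 1] < a[j - 1]) {   // pseudocode indices shifted to 0-based
                    int x = a[j - 1];
                    a[j - 1] = a[i - 1];
                    a[i - 1] = x;
                }
            }
        }
        return a[k - 1];
    }

    public static void main(String[] args) {
        int n = 10, k = 4;
        int[] a = {9, 3, 7, 1, 8, 2, 6, 5, 4, 0};
        fun1(a, k);
        System.out.println(innerIterations);          // 34 counted iterations
        System.out.println(k * n - k * (k - 1) / 2);  // 34 from the formula above
    }
}

For n = 10 and k = 4 both lines print 34, matching k*n - k*(k-1)/2.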

Related

Time Complexity of this nested for-loop algorithm?

I'm having some trouble calculating the bigO of this algorithm:
public void foo(int[] arr){
    int count = 0;
    for(int i = 0; i < arr.length; i++){
        for(int j = i; j > 0; j--){
            count++;
        }
    }
}
I know the first for loop is O(n) time but I can't figure out what the nested loop is. I was thinking O(log n) but I do not have solid reasoning. I'm sure I'm missing out on something pretty easy but some help would be nice.
Let's denote by n the length of the array.
If you consider the second loop alone, it is just a function f(i), and since it iterates over all values from i down to 1, its complexity will be O(i). Since you know that i < n, you can say that it is O(n). However, there is no logarithm involved, since in the worst case, i.e. i = n-1, you will perform n-1 iterations.
As for evaluating the complexity of both loops, observe that for each value of i, the second loop goes through i iterations, so the total number of iterations is
1 + 2 + ... + (n-1) = n*(n-1)/2 = (1/2)*(n^2 - n)
which is O(n^2).
If we consider c_i to be the number of times count is incremented in the inner loop for a given i (which is exactly i), then the total number of times count is incremented can be represented by the formula below:
c_0 + c_1 + ... + c_(n-1) = 0 + 1 + 2 + ... + (n-1) = n*(n-1)/2
As you can see, the total time complexity of the algorithm is O(n^2).
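If you prefer to see the number rather than take the algebra on faith, here is a small illustrative Java check of my own (the class name and the exposed counter are not from the original answers):

public class NestedLoopCount {
    // Same loops as foo above, but the increment counter is returned.
    static long countIncrements(int[] arr) {
        long count = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = i; j > 0; j--) {
                count++;                 // one unit of inner-loop work
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 1000;
        System.out.println(countIncrements(new int[n]));  // 499500
        System.out.println((long) n * (n - 1) / 2);       // 499500 = n*(n-1)/2
    }
}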

Big O for this triple nested loop?

What's the big O of this?
for (int i = 1; i < n; i++) {
    for (int j = 1; j < (i*i); j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                // Simple computation
            }
        }
    }
}
Can't really figure it out. I'm inclined to say O(n^4 log(n)) but I feel like I'm wrong here.
This is quite a confusing analysis, so let's break it down bit by bit to make sense of the calculations:
The outermost loop runs for n-1 iterations (since 1 ≤ i < n).
The next loop inside it makes (i² - 1) iterations for each index i of the outer loop (since 1 ≤ j < i²).
In total, this means the number of iterations for these two loops equals the sum of (i²-1) for each 1 ≤ i < n. This is essentially the sum of the first n squares, which is on the order of n³, so these two loops alone take O(n³).
Note the modulo operator % takes constant time (O(1)) to compute, therefore checking the condition if (j % i == 0) for all iterations of these two loops will not affect the O(n³) runtime.
Now let's talk about the inner loop inside the conditional.
We are interested in seeing how many times (and for which values of j) this if condition evaluates to true, since this would dictate how many iterations the innermost loop will run.
Practically speaking, (j % i) will never equal 0 if j < i, so the second loop could actually be shortened to start from i rather than from 1, however this will not impact the Big-O upper bound of the algorithm.
Notice that for a given number i, (j % i == 0) if and only if i is a divisor of j. Since our range is (1 ≤ j < i²), there will be a total of (i-1) values of j for which this will be true, for any given i. If this is confusing, consider this example:
Let's assume i = 4. Then our index j would iterate through all values 1,...,15 (that is, up to i² − 1),
and (j%i == 0) would be true for j = 4, 8, 12 - exactly (i - 1) values.
The innermost loop would therefore make a total of (12 + 8 + 4 = 24) iterations. Thus for a general index i, we would look for the sum: i + 2i + 3i + ... + (i-1)i to indicate the number of iterations the innermost loop would make.
And this could be generalized by calculating the sum of this arithmetic progression. The first value is i and the last value is (i-1)i, which results in a sum of (i³ - i²)/2 iterations of the k loop for every value of i. In turn, the sum of this for all values of i could be computed by calculating the sum of cubes and the sum of squares - for a total runtime of O(n⁴) iterations of the innermost loop (the k loop) for all values of i.
Thus in total, the runtime of this algorithm would be the total of both runtimes we calculated above. We checked the if statement O(n³) times and the innermost loop ran for O(n⁴), so assuming // Simple computation runs in constant time, our total runtime would come down to:
O(n³) + O(n⁴)*O(1) = O(n⁴)
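To make both counts concrete, here is an illustrative Java counter of my own (not part of the original answer; the class name is arbitrary) using the exact bounds from the question, i.e. j < i*i:

public class TripleLoopCount {
    public static void main(String[] args) {
        for (int n : new int[]{50, 100, 200}) {
            long checks = 0, innermost = 0;
            for (int i = 1; i < n; i++) {
                for (int j = 1; j < i * i; j++) {
                    checks++;                    // the if (j % i == 0) test
                    if (j % i == 0) {
                        for (int k = 0; k < j; k++) {
                            innermost++;         // the "simple computation"
                        }
                    }
                }
            }
            // checks grows roughly like n^3/3, innermost roughly like n^4/8
            System.out.println(n + ": checks=" + checks + " innermost=" + innermost);
        }
    }
}

Doubling n multiplies checks by about 8 and innermost by about 16, which is exactly the O(n³) checks / O(n⁴) innermost-work split derived above.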
Let us assume that i = 2. Then j can be [1,2,3], and the k loop will run for j = 2 only.
Similarly, for i = 3, j can be [1,2,...,8], hence k runs for j = 3 and j = 6. You can see a pattern here: for any value of i, the k loop will run (i-1) times, and the lengths of those runs will be [i, 2*i, 3*i, ..., (i-1)*i].
Hence the total work done by the k loop for a given i is
= i + (2*i) + (3*i) + ... + ((i-1)*i)
= (i^2)(i-1)/2
Summing this over all i from 1 to n-1 is on the order of n^4/8, hence the final complexity will be
= O(n^4)

Time complexity of the code containing condition

foo(int n)
{
    int s = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i*i; j++)
            if (j % i == 0)
                for (k = 1; k <= j; k++)
                    s++;
}
What is the time complexity of the above code?
I am getting it as O(n^5) but it is not correct.
The complexity is O(n^4).
The innermost loop will be executed i times for each i (there are exactly i multiples of i within 1..i*i).
The j loop effectively behaves like this: for j = i, 2*i, 3*i, ..., i*i the innermost for loop actually runs, with cost j; for every other j only the test j % i == 0 is done and it fails.
So the j range splits into i blocks of i consecutive values each, and the block ending at j = m*i (m = 1, 2, 3, ..., i) contributes m*i executions of the innermost loop plus i checks.
And there are precisely i such blocks.
So total work = i*(1 + 1 + ... + 1) + i*(1 + 2 + 3 + ... + i)
= i*i + i*i*(i+1)/2 ~ i^3
With the outer loop it will be n^4.
Now what is the meaning of it? The work of one block can be written as
O(i*m + i)
where the i*m term is the innermost loop actually being executed (for the m-th multiple of i), and the i term covers the other cases, where the test simply fails and the innermost loop is skipped.
Now if we sum this over m = 1..i we get O(i^3), which is at most O(n^3).
With added external loop it will be O(n^4).
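As a quick sanity check of this per-i decomposition, here is a throwaway snippet of my own (illustrative only, not from the answer) that fixes one value of i and counts the checks and the payload work separately:

public class PerIWork {
    public static void main(String[] args) {
        int i = 7;
        long checks = 0, payload = 0;
        for (int j = 1; j <= i * i; j++) {
            checks++;                    // the j % i == 0 test
            if (j % i == 0) {
                for (int k = 1; k <= j; k++) {
                    payload++;           // s++
                }
            }
        }
        System.out.println(checks);      // 49  = i*i
        System.out.println(payload);     // 196 = i*i*(i+1)/2
    }
}

For i = 7 this prints 49 checks and 196 payload iterations, i.e. work on the order of i^3 for a single value of i.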
Your function computes 4-dimensional pyramidal numbers (A001296). The number of increments to s can be computed using this formula:
a(n) = n*(1+n)*(2+n)*(1+3*n)/24
Therefore, the complexity of your function is O(n^4).
The reason it is not O(n^5) is that if (j%i == 0) proceeds with the "payload" loop only for multiples of i, of which we have exactly i among all js in the range from 1 to i^2, inclusive.
Hence, we add one for the outermost loop, one for the loop in the middle, and two for the innermost loop, because it iterates up to i^2, for a total of 4.
Why only one for the middle loop (j)? It runs up to i^2, right?
Perhaps it would be easier to see if we rewrite the code to exclude the condition:
int s = 0;
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        for (int k = 1; k <= i*j; k++)
            s++;
return s;
This code produces the same number of "payload loop" iterations, but rather than "filtering out" the iterations that skip the inner loop it removes them from consideration by computing the terminal value of k in the innermost loop as i*j.
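If in doubt, the equivalence is easy to verify with a throwaway Java program (my own sketch; the class and method names are illustrative): the conditional version, the rewritten version, and the closed-form A001296 formula all produce the same count.

public class PayloadCount {
    static long withCondition(int n) {        // the original code, with a counter
        long s = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i * i; j++)
                if (j % i == 0)
                    for (int k = 1; k <= j; k++)
                        s++;
        return s;
    }

    static long rewritten(int n) {            // the rewritten code above
        long s = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++)
                for (int k = 1; k <= i * j; k++)
                    s++;
        return s;
    }

    static long formula(long n) {             // a(n) = n*(1+n)*(2+n)*(1+3*n)/24
        return n * (1 + n) * (2 + n) * (1 + 3 * n) / 24;
    }

    public static void main(String[] args) {
        int n = 60;
        System.out.println(withCondition(n)); // 1711355
        System.out.println(rewritten(n));     // 1711355
        System.out.println(formula(n));       // 1711355
    }
}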

Big O calculation given a piece of code

These programs do the calculation Σ a_i·x^i for i = 0 to N-1, i.e. they evaluate the polynomial a[0] + a[1]*x + ... + a[N-1]*x^(N-1), where N is the length of the array.
I am trying to figure out big-O calculations. I have done a lot of studying but I am having trouble getting this down. I understand that big O is the worst case scenario, or an upper bound. From what I can figure, program one has two for loops: one that runs for the length of the array, and another that runs from the value of the first loop's index up to the length of the array. I think that if both ran the full length of the array then it would be quadratic, O(N^2). Since the second loop only runs the full length of the array once, I am thinking O(NlogN).
Second program has only one for loop so it would be O(N).
Am I close? If not, please explain to me how I would calculate this. Since this is homework, I am going to have to be able to figure out something like this on the test.
Program 1
// assume input array a is not null
public static double q6_1(double[] a, double x)
{
    double result = 0;
    for (int i = 0; i < a.length; i++)
    {
        double b = 1;
        for (int j = 0; j < i; j++)
        {
            b *= x;
        }
        result += a[i] * b;
    }
    return result;
}
Program 2
// assume input array a is not null
public static double q6_2(double[] a, double x)
{
    double result = 0;
    for (int i = a.length - 1; i >= 0; i--)
    {
        result = result * x + a[i];
    }
    return result;
}
I'm using N to refer to the length of the array a.
The first one is O(N^2). The inner loop runs 0, 1, 2, 3, ..., N - 1 times for the successive values of i. This sum is N*(N-1)/2, which is O(N^2).
The second one is O(N). It is simply iterating through the length of the array.
The complexity of a program is basically the number of instructions executed.
When we talk about an upper bound, it means we are considering the worst case, which is what every programmer should take into consideration.
Let n = a.length;
Now coming back to your question: you are saying that the time complexity of the first program should be O(nlogn), which is wrong. When i = a.length - 1, the inner loop iterates from j = 0 up to j = i - 1, i.e. it does i iterations, and the same holds for every smaller i. Hence the complexity would be O(n^2).
You are correct in judging the time complexity of the second program which is O(n).
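To see the difference concretely, here is an illustrative counter of my own (the class and method names are not from the original posts) that tallies the multiplications each program performs. Program 2 is in fact Horner's rule, which is why one multiplication per coefficient suffices:

public class PolyEvalCount {
    // Program 1: rebuilds x^i from scratch for every term.
    static long multsProgram1(int n) {
        long mults = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < i; j++) {
                mults++;          // b *= x
            }
            mults++;              // a[i] * b
        }
        return mults;             // N*(N-1)/2 + N multiplications
    }

    // Program 2 (Horner's rule): one multiplication per coefficient.
    static long multsProgram2(int n) {
        long mults = 0;
        for (int i = n - 1; i >= 0; i--) {
            mults++;              // result * x
        }
        return mults;             // N multiplications
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            System.out.println(n + ": program1=" + multsProgram1(n)
                                 + " program2=" + multsProgram2(n));
        }
        // program1 grows quadratically, program2 linearly
    }
}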

Is a loop that is running a constant number of times considered Big - Oh(1)?

From a popular definition, a loop or recursion that runs a constant number of times is also considered O(1).
For example the following loop is O(1)
// Here c is a constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}
The time complexity of a loop is considered O(n) if the loop variable is incremented / decremented by a constant amount.
For example, the following loop has O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
I got a little confused with the following example: let's take c = 5, and according to the O(1) definition the code below becomes O(1):
for (int i = 0; i < 5; i++) {
    cout << "Hello" << endl;
}
Function 1:
for (int i = 0; i < len(array); i += 2) {
    if (key == array[i])
        cout << "Element found";
}
Function 2:
for (int i = 0; i < len(array); i++) {
    if (key == array[i])
        cout << "Element found";
}
But when we compare the above 2 examples, will they both become O(n), or is the first function O(1) by that definition? What exactly does a loop running a constant number of times mean?
Assuming that len(array) is the n we're talking about [*], both your functions are O(n).
Function 2 will execute the if n times (once for each element of the array), making it obviously O(n).
Function 1, on the other hand, will execute the if n/2 times (once for every other element in the array), leading to a run time of O(n*1/2), and since constant factors (1/2 in this case) are usually omitted in O notation, you'll again end up with O(n).
[*] For the sake of completeness, if your array were of a fixed size, i.e. len(array) were a constant, then both functions would be O(1).
"Loop running a costant number of times" means the loop runs a number of times that is limited from above by a constant, i.e. a given number that is indipendent from the input of your program.
Both in function 1 and 2 (unless the lenghts of the arrays are fixed or you can prove they'll never be grater than a specific constant, indipendently of the input) the if will be execute a number of time that depends on the size of the input so the time complexity can't be O(1).
"Time Complexity of a loop is considered as O(n) if the loop variables is incremented / decremented by a constant amount" is a misleading definition

Resources