Assistance required for calculating algorithmic complexity - CSc fundamentals - complexity-theory

Hi, I have two algorithms whose complexity I need to work out. I've had a try myself first and got O(N^2) and O(N^3). Here they are:
Treat y as though it's declared 'y = new int[N][N]' and B as though 'B = new int[N][N]'.
int x(int[][] y)
{
    int z = 0;
    for (int i = 0; i < y.length; i++)
        z = z + y[i].length;
    return z;
}
int A(int[][] B)
{
    int c = 0;
    for (int i = 0; i < B.length; i++)
        for (int j = 0; j < B[i].length; j++)
            c = c + B[i][j];
    return c;
}
Thanks a lot :)

To calculate the algorithmic complexity, you need to tally up the number of operations performed by the algorithm (big-O notation is concerned with the worst-case scenario).
In the first case, you have a loop that is performed N times (y.length == N). Inside the loop you have one operation (executed on each iteration). This is linear in the size of the input, so x is O(N).
Note: evaluating y[i].length is a constant-time operation.
In the second case, you have an outer loop that is performed N times (just like in the first case), and on each iteration another loop of the same length (N == B[i].length) is executed. Inside the inner loop you have one operation (executed on each iteration of the inner loop). This is O(N*N) == O(N^2) overall.
Note: evaluating B[i][j] is a constant-time operation.
Note: remember that for big-O, only the fastest-growing term matters, so additive constants can be ignored. For example, the initialization of the return value and the return instruction are both operations, but they are constants not executed in a loop, and the term depending on N grows faster than any constant.
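If you want to double-check this kind of analysis, instrument the loops with a counter and watch how the count grows with N. A minimal sketch (the harness and names are mine, not part of the question):

// Sketch: count loop iterations empirically to confirm O(N) vs O(N^2).
public class ComplexityCheck {
    static long countX(int[][] y) {
        long ops = 0;
        for (int i = 0; i < y.length; i++)
            ops++;                       // one addition per outer iteration
        return ops;                      // grows like N   -> O(N)
    }

    static long countA(int[][] b) {
        long ops = 0;
        for (int i = 0; i < b.length; i++)
            for (int j = 0; j < b[i].length; j++)
                ops++;                   // one addition per inner iteration
        return ops;                      // grows like N*N -> O(N^2)
    }

    public static void main(String[] args) {
        for (int n : new int[]{100, 200, 400}) {
            int[][] m = new int[n][n];
            System.out.println("N=" + n + ": x does " + countX(m)
                    + " additions, A does " + countA(m));
        }
        // Doubling N doubles the count for x and quadruples it for A.
    }
}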

Related

Time complexity of code containing a condition

int foo(int n)
{
    int s = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i*i; j++)
            if (j % i == 0)
                for (int k = 1; k <= j; k++)
                    s++;
    return s;
}
What is the time complexity of the above code?
I am getting it as O(n^5) but it is not correct.
The complexity is O(n^4).
The innermost loop is reached only i times for each i: there are exactly i multiples of i in the range 1..i*i.
Picture the middle loop's range j = 1..i*i split into i segments, each ending at one of the multiples i, 2*i, 3*i, ..., i*i. Within each segment the test j % i == 0 fails until the segment's end, where it succeeds and the innermost loop runs j = m*i times (for the m-th segment). The rest of the time the innermost loop is not touched; only the test is done and it fails.
So, for a fixed i, the total work is

    i*i condition checks + (i + 2*i + 3*i + ... + i*i) innermost iterations
    = i*i + i*(1 + 2 + ... + i)
    = i^2 + i*i*(i+1)/2
    ~ i^3

In other words, the work per value of i splits into two parts: the part where the innermost loop actually executes, and the part where the test simply fails and the loop is skipped. Adding the outer loop over i = 1..n, the total is ~n^4, so the complexity is O(n^4).
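To sanity-check this, you can count the condition tests and the payload increments directly for a fixed i and compare the total against i^2 + i*i*(i+1)/2. A minimal sketch (the harness is mine, not from the original answer):

// Sketch: for a fixed i, count condition checks and innermost increments,
// then compare with the derived per-i work i^2 + i*i*(i+1)/2.
public class PerIterationWork {
    public static void main(String[] args) {
        for (int i : new int[]{3, 10, 25}) {
            long checks = 0, incs = 0;
            for (int j = 1; j <= i * i; j++) {
                checks++;                       // the j % i == 0 test
                if (j % i == 0)
                    for (int k = 1; k <= j; k++)
                        incs++;                 // the payload s++
            }
            long derived = (long) i * i + (long) i * i * (i + 1) / 2;
            System.out.println("i=" + i + "  counted=" + (checks + incs)
                    + "  derived=" + derived);
        }
    }
}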
Your function computes 4-dimensional pyramidal numbers (A001296). The number of increments to s can be computed using this formula:
a(n) = n*(1+n)*(2+n)*(1+3*n)/24
Therefore, the complexity of your function is O(n^4).
The reason it is not O(n^5) is that if (j % i == 0) proceeds to the "payload" loop only for multiples of i, of which we have exactly i among all j in the range from 1 to i^2, inclusive.
Hence, we add one power for the outermost loop, one for the loop in the middle, and two for the innermost loop, because it iterates up to i^2, for a total of 4.
Why only one for the middle loop (j)? It runs up to i^2, right?
Perhaps it would be easier to see if we rewrite the code to exclude the condition:
int s = 0;
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        for (int k = 1; k <= i*j; k++)
            s++;
return s;
This code produces the same number of "payload loop" iterations, but rather than "filtering out" the iterations that skip the inner loop it removes them from consideration by computing the terminal value of k in the innermost loop as i*j.
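As a quick check (the comparison harness below is mine), the original condition-based loops, the rewritten loops, and the A001296 closed-form formula all produce the same count:

// Sketch: verify that the original loops, the rewritten loops, and the
// closed-form formula a(n) = n*(1+n)*(2+n)*(1+3*n)/24 agree.
public class PyramidalCheck {
    static long original(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i * i; j++)
                if (j % i == 0)
                    for (int k = 1; k <= j; k++)
                        s++;
        return s;
    }

    static long rewritten(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++)
                for (int k = 1; k <= i * j; k++)
                    s++;
        return s;
    }

    static long formula(long n) {
        return n * (1 + n) * (2 + n) * (1 + 3 * n) / 24;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 30; n++)
            if (original(n) != rewritten(n) || original(n) != formula(n))
                System.out.println("mismatch at n=" + n);
        System.out.println("checked n = 1..30");
    }
}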

Big O calculation given a piece of code

These programs do the calculation sum_{i=0}^{N-1} a_i * x^i, i.e. they evaluate a polynomial with coefficients a[i] at the point x.
I am trying to figure out big-O calculations. I have done a lot of studying, but I am having a problem getting this down. I understand that big O describes the worst case, an upper bound. From what I can figure, program one has two for loops: one that runs the length of the array, and another that runs from 0 up to the current index of the first loop. I think that if both ran the full length of the array it would be quadratic, O(N^2). Since the second loop only runs the full length of the array once, I am thinking O(N log N).
The second program has only one for loop, so it would be O(N).
Am I close? If not, please explain how I would calculate this. Since this is homework, I will have to be able to figure out something like this on the test.
Program 1
// assume input array a is not null
public static double q6_1(double[] a, double x)
{
    double result = 0;
    for (int i = 0; i < a.length; i++)
    {
        double b = 1;
        for (int j = 0; j < i; j++)
        {
            b *= x;
        }
        result += a[i] * b;
    }
    return result;
}
Program 2
// assume input array a is not null
public static double q6_2(double[] a, double x)
{
    double result = 0;
    for (int i = a.length - 1; i >= 0; i--)
    {
        result = result * x + a[i];
    }
    return result;
}
I'm using N to refer to the length of the array a.
The first one is O(N^2). The inner loop runs 0, 1, 2, ..., N - 1 times over the successive iterations of the outer loop. This sum equals N(N-1)/2, which is O(N^2).
The second one is O(N). It is simply iterating through the length of the array.
The complexity of a program is basically the number of instructions executed.
When we talk about an upper bound, it means we are considering the worst case, which every programmer should take into consideration.
Let n = a.length.
Now, coming back to your question: you say that the time complexity of the first program should be O(n log n), which is wrong. When i = a.length - 1, the inner loop iterates from j = 0 to j = i, i.e. nearly n times. Hence the complexity is O(n^2).
You are correct in judging the time complexity of the second program, which is O(n).
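To make the difference concrete, you can count the multiplications by x each program performs: the first recomputes the power x^i from scratch on every iteration, while the second (Horner's rule) does one multiplication per coefficient. A minimal counting sketch (the harness is mine):

// Sketch: multiplications by x performed by q6_1 vs q6_2 for arrays of size n.
public class PolyEvalCount {
    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            long m1 = 0;                 // q6_1: the inner "b *= x" runs i times for each i
            for (int i = 0; i < n; i++)
                for (int j = 0; j < i; j++)
                    m1++;
            long m2 = n;                 // q6_2: one "result * x" per element
            System.out.println("n=" + n + "  q6_1: " + m1
                    + " (= n(n-1)/2 = " + (long) n * (n - 1) / 2 + ")  q6_2: " + m2);
        }
    }
}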

Is a loop that runs a constant number of times considered Big-O(1)?

By a popular definition, a loop or recursion that runs a constant number of times is considered O(1).
For example, the following loop is O(1):
// Here c is a constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}
The time complexity of a loop is considered O(n) if the loop variable is incremented/decremented by a constant amount. For example, the following loop has O(n) time complexity:
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
I got a little confused with the following example. Let's take c = 5; according to the O(1) definition, the code below becomes O(1):
for (int i = 0; i < 5; i++) {
    cout << "Hello" << endl;
}
Function 1:
for (int i = 0; i < len(array); i += 2) {
    if (key == array[i])
        cout << "Element found";
}
Function 2:
for (int i = 0; i < len(array); i++) {
    if (key == array[i])
        cout << "Element found";
}
But when we compare the above two examples, will they both be O(n), or is the first function O(1) by that definition? What exactly does "a loop running a constant number of times" mean?
Assuming that len(array) is the n we're talking about [*], both your functions are O(n).
Function 2 will execute the if n times (once for each element of the array), making it obviously O(n).
Function 1, on the other hand, will execute the if n/2 times (once for every other element in the array), leading to a run time of O(n * 1/2), and since constant factors (1/2 in this case) are omitted in O notation, you'll again end up with O(n).
[*] For the sake of completeness: if your array were of a fixed size, i.e. len(array) were a constant, then both functions would be O(1).
"A loop running a constant number of times" means the loop runs a number of times that is bounded from above by a constant, i.e. a given number that is independent of the input of your program.
In both function 1 and function 2 (unless the lengths of the arrays are fixed, or you can prove they will never be greater than a specific constant independently of the input), the if will be executed a number of times that depends on the size of the input, so the time complexity can't be O(1).
"The time complexity of a loop is considered O(n) if the loop variable is incremented/decremented by a constant amount" is a misleading definition.

Time Complexity - Calculating Worst Case For Algorithms

I am reading some information on time complexity and I'm quite confused as to how the following time complexities are derived, and whether there is a particular set of rules or methods for working them out.
1)
Input: int n
for (int i = 0; i < n; i++) {
    print("Hello World, ");
}
for (int j = n; j > 0; j--) {
    print("Hello World");
}
Tight: 6n + 5
Big O: O(n)
2)
Input: l = array of comparable items
Output: l = array of sorted items
Sort:
for (int i = 0; i < l.length; i++) {
    for (int j = 0; j < l.length; j++) {
        if (l[i] > l[j]) {
            Swap(l[i], l[j]);
        }
    }
}
return l;
Worst Case Time Complexity: 4n^2 + 3n + 2 = O(n^2)
For a given algorithm, the time complexity (Big O) gives a fair estimate of the total number of elementary operations the algorithm performs as a function of the input size n.
Type-1
Let's say you have an algorithm like this:
a = n + 1;
b = a * n;
There are 2 elementary operations in the above code. No matter how big n is, a computer will always perform 2 operations for this code; since the count does not depend on the size of the input, the Big-O of the above code is O(1).
Type-2
For this code:
for (int i = 0; i < n; i++) {
    a = a + i;
}
I hope you can see why the Big-O here is O(n): the elementary operation count depends directly on the size of n.
Type-3
Now what about this code:
// Loop-1
for (int i = 0; i < n; i++) {
    print("Hello World, ");
}
// Loop-2
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        x = x + j;
    }
}
As you can see, Loop-1 is O(n) and Loop-2 is O(n^2). So it feels like the total complexity should be O(n) + O(n^2). But no: the time complexity of the above code is O(n^2). Why? Because we are trying to give a fair enough count of the elementary operations performed by the algorithm for a given input size n, one that is comparatively easy for another person to understand. By this logic, O(n) + O(n^2) becomes O(n^2), and O(n^2) + O(n^3) + O(n^4) becomes O(n^4)!
Again, you may ask: how do all the lower powers of n become so insignificant next to the highest power that we can completely omit them when describing the complexity of our algorithm to another human?
I will try to show the reason for the case O(n) + O(n^2) = O(n^2).
Let's say n = 1000. Then the exact count for the O(n) part is 1,000 operations and the exact count for the O(n^2) part is 1000 * 1000 = 1,000,000 operations, so the O(n^2) part is 1000 times bigger than the O(n) part. This means your program will spend most of its execution time in the O(n^2) part, so it is not worth mentioning that your algorithm also has an O(n) part.
PS. Pardon my English :)
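The same point can be shown numerically. Here is a small sketch (mine) printing what fraction of the n + n^2 total comes from the n^2 term; the fraction approaches 1 as n grows, which is why the lower-order term gets dropped:

// Sketch: share of the n^2 term in the total n + n^2 as n grows.
public class DominantTerm {
    public static void main(String[] args) {
        for (long n = 10; n <= 1_000_000; n *= 100) {
            double total = (double) n + (double) n * n;
            System.out.printf("n=%d  n^2 share = %.6f%n", n, (double) n * n / total);
        }
    }
}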
In the first example, you go through the n values twice: the first loop runs i from 0 up to n, and the second runs j from n down to 1. So, to simplify, we can say that it took you 2n steps. When dealing with Big O notation, you should keep in mind that we care about the bounds:
As a result, O(2n) = O(n),
and O(an + b) = O(n).
Input: int n                   // operation 1
for (int i = 0; i < n; i++) {  // operation 2
    print("Hello World, ");    // operation 3
}
for (int j = n; j > 0; j--) {  // operation 4
    print("Hello World");      // operation 5
}
As you can see, we have a total of 5 operations outside the loops.
Inside the first loop, we do three operations on each iteration: checking whether i is less than n, printing "Hello World, ", and incrementing i.
Inside the second loop, we also have three operations per iteration.
So, the total number of operations we need is 3n (first loop) + 3n (second loop) + 5 (operations outside the loops). As a result, the total number of steps required is 6n + 5 (that is your tight bound).
As I mentioned before, O(an + b) = O(n), because once an algorithm is linear, a and b do not have a great impact when n is very large.
So, your time complexity becomes O(6n + 5) = O(n).
You can use the same logic for the second example, keeping in mind that two nested loops take n^2 steps instead of n.
I will slightly modify John's answer. Defining n is one constant operation; defining integer i and assigning it 0 is 2 constant operations; defining integer j and assigning it n is another 2 constant operations. Checking the loop conditions for i and j, the increments/decrements, and the print statements all depend on n, so the total is 3n + 3n + 5, which equals 6n + 5. Since we cannot skip any statement during execution, the average-case running time is the same as the worst-case running time, which is O(n).
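If you want to check the 6n + 5 count mechanically, you can follow the same accounting with an explicit step counter (5 constant operations outside the loops, 3 per iteration of each loop). A sketch using that convention (the counter is mine):

// Sketch: reproduce the 6n + 5 operation count with an explicit step counter.
public class TightBoundCount {
    public static void main(String[] args) {
        int n = 1000;
        long steps = 5;        // the 5 constant operations outside the loops
        for (int i = 0; i < n; i++)
            steps += 3;        // test, print, increment
        for (int j = n; j > 0; j--)
            steps += 3;        // test, print, decrement
        System.out.println(steps + " vs 6n + 5 = " + (6L * n + 5));
    }
}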

Big O: Which is better?

I'm new to Big-O notation, so I need a little advice. Say I have a choice of two algorithms: one with several for loops in a row, or one with a triply nested for loop. For example, one is structured similar to this:
for (int i = 0; i < a.length; i++)
{
    // do stuff
}
for (int i = 0; i < b.length; i++)
{
    for (int j = 0; j < c.length; j++)
    {
        // do something with b and c
    }
}
for (int i = 0; i < d.length; i++)
{
    // do stuff
}
And the other is structured this way:
for (int i = 0; i < a.length; i++)
{
    for (int j = 0; j < b.length; j++)
    {
        for (int k = 0; k < c.length; k++)
        {
            // do stuff
        }
    }
}
These examples may not seem practical, but I guess I'm just trying to illustrate my real questions: what's the big O notation for each method and will an algorithm with 3 nested loops always be less efficient than an algorithm with many twice nested loops (but not 3)?
what's the big O notation for each method
The big O of a loop with n steps nested in a loop with m steps is O(n * m). The big O of a loop with n steps followed by one with m steps is O(n + m) = O(max(n,m)).
So your first method is O(max(a.length, b.length * c.length, d.length)) and the second method is O(a.length * b.length * c.length). Which one is better depends on whether d.length is greater or less than a.length * b.length * c.length.
and will an algorithm with 3 nested loops always be less efficient than an algorithm with many twice nested loops (but not 3)?
It depends on how many steps each loops has in relation to the other. If all loops have the same number of steps, the 3 nested loops will always be worse than the 2 nested ones.
Not necessarily. Using a, b, and c to represent your variables, your first example is O(a + b * c + d) and your second is O(a * b * c). Probably the latter is worse, but it very much depends on the variables here - d could be the most important variable in the first example, and that would make it pretty hard to compare to the second - unless we're assuming the second doesn't have a d factor (say, optimizations of some sort)
Also - three nested loops don't necessarily mean less efficiency, although this is often the case. if statements in the outer loops could cause the inner loops not to run, which could mean O(n^2) despite there being three nested loops. Sometimes the construction of the algorithm can appear to be one thing, but upon closer analysis we find that although it appears to be O(n^3), it actually obeys O(n^2).
In big-O you have the rule:
O(a + b) = O(max(a, b))
Therefore, if you run several loops after each other, the complexity of the algorithm is determined by the most-complex loop.
Your second example has O(a*b*c). This will be more complex than your first example if d < a*b*c. If d is not related to the other input lengths, you cannot compare the complexities.
[with a, b, .. I refer to a.length, b.length, ... respectively]
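As a quick numeric illustration of the "it depends" answer (the sizes below are made up): with comparable sizes the triply nested version does far more work, but a huge d can make the sequential version the bottleneck.

// Sketch: compare a + b*c + d against a*b*c for two made-up size profiles.
public class CompareShapes {
    static long sequential(long a, long b, long c, long d) { return a + b * c + d; }
    static long nested(long a, long b, long c)             { return a * b * c; }

    public static void main(String[] args) {
        // Comparable sizes: the triply nested version dominates.
        System.out.println(sequential(100, 100, 100, 100) + " vs " + nested(100, 100, 100));
        // Huge d: the sequential version does more work.
        System.out.println(sequential(10, 10, 10, 1_000_000_000L) + " vs " + nested(10, 10, 10));
    }
}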
