Big O: Which is better?

I'm new to Big-O notation, so I need a little advice. Say I have a choice of two algorithms: one with several for loops in a row, or one with a triply nested for loop. For example, one is structured similarly to this:
for(int i = 0; i < a.length; i++)
{
    // do stuff
}
for(int i = 0; i < b.length; i++)
{
    for(int j = 0; j < c.length; j++)
    {
        // do something with b and c
    }
}
for(int i = 0; i < d.length; i++)
{
    // do stuff
}
And the other is structured this way:
for(int i = 0; i < a.length; i++)
{
    for(int j = 0; j < b.length; j++)
    {
        for(int k = 0; k < c.length; k++)
        {
            // do stuff
        }
    }
}
These examples may not seem practical, but I guess I'm just trying to illustrate my real questions: what's the big O notation for each method and will an algorithm with 3 nested loops always be less efficient than an algorithm with many twice nested loops (but not 3)?

what's the big O notation for each method
The big O of a loop with n steps nested in a loop with m steps is O(n * m). The big O of a loop with n steps followed by one with m steps is O(n + m) = O(max(n,m)).
So your first method is O(max(a.length, b.length * c.length, d.length)) and the second method is O(a.length * b.length * c.length). Which one is better depends on whether d.length is greater or less than a.length * b.length * c.length.
and will an algorithm with 3 nested loops always be less efficient than an algorithm with many twice nested loops (but not 3)?
It depends on how many steps each loop has in relation to the others. If all loops have the same number of steps, the 3 nested loops will always be worse than the 2 nested ones; see the sketch below.
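As a concrete illustration, here is a minimal sketch of my own (with made-up lengths of 100 each) that counts the iterations of the two structures from the question:

public class LoopCount {
    public static void main(String[] args) {
        int a = 100, b = 100, c = 100, d = 100; // hypothetical array lengths
        long sequential = 0, nested = 0;

        // first structure: a loop, then a doubly nested loop, then another loop
        for (int i = 0; i < a; i++) sequential++;
        for (int i = 0; i < b; i++)
            for (int j = 0; j < c; j++) sequential++;
        for (int i = 0; i < d; i++) sequential++;

        // second structure: one triply nested loop
        for (int i = 0; i < a; i++)
            for (int j = 0; j < b; j++)
                for (int k = 0; k < c; k++) nested++;

        System.out.println(sequential); // 100 + 100*100 + 100 = 10,200
        System.out.println(nested);     // 100*100*100 = 1,000,000
    }
}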

Not necessarily. Using a, b, and c to represent your variables, your first example is O(a + b*c + d) and your second is O(a*b*c). The latter is probably worse, but it very much depends on the variables here: d could be the dominant term in the first example, and that would make it pretty hard to compare to the second, unless we assume the second doesn't have a d factor at all (say, due to optimizations of some sort).
Also, three loops doesn't necessarily mean less efficient, although this is often the case. if statements in the outer loops could prevent the inner loops from running, which could mean O(n²) despite there being three nested loops. Sometimes the construction of an algorithm appears to be one thing, but upon closer analysis we find that although it looks like O(n³), it actually obeys O(n²).
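For instance, here's a toy snippet of my own (not from the question) where three loops are nested, but a guard lets the innermost loop run for only one value of j per outer iteration, so each outer iteration costs about n + n steps and the whole thing is O(n²) rather than O(n³):

static long guarded(int n) {
    long work = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (j == 0) {                  // innermost loop runs once per i
                for (int k = 0; k < n; k++) {
                    work++;                // executed n * n times in total
                }
            }
        }
    }
    return work;                           // n * n, despite three nested loops
}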

In big-O you have the rule:
O(a + b) = O(max(a, b))
Therefore, if you run several loops after each other, the complexity of the algorithm is determined by the most-complex loop.
Your second example has O(a*b*c). This will be more complex than your first example if d < a*b*c. For instance, with a = b = c = 1000 the second example costs on the order of 10^9 steps, so it is worse unless d exceeds that. If d is not related to the other input lengths, you cannot compare the complexities.
[by a, b, ... I mean a.length, b.length, ... respectively]

Related

Analyze the time cost of the following algorithms using Θ notation

So many loops! I'm stuck at counting how many times the last loop runs.
I also don't know how to simplify the summations to get big Theta. Please, somebody help me out!
int fun(int n) {
    int sum = 0;
    for (int i = n; i > 0; i--) {
        for (int j = i; j < n; j *= 2) {
            for (int k = 0; k < j; k++) {
                sum += 1;
            }
        }
    }
    return sum;
}
Any problem has 2 stages:
You guess the answer
You prove it
In easy problems, step 1 is easy and then you skip step 2 or explain it away as "obvious". This problem is a bit more tricky, so both steps require some more formal thinking. If you guess incorrectly, you will get stuck at your proof.
The outer loop goes from n down to 1, so its number of iterations is O(n). The middle loop is uncomfortable to analyze because its bounds depend on the current value of i. As we usually do when guessing O-rates, let's just replace its bounds so it runs from 1 to n.
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        perform j steps
    }
}
The run-time of this new middle loop, including the inner loop, is 1+2+4+...+n, which is approximately 2n, i.e. O(n). Together with the outer loop, you get O(n²). This is my guess.
I edited the code, so I may have changed the O-rate when I did. So I must now prove that my guess is right.
To prove this, use the "sandwich" technique - edit the program in 2 different ways, one which makes its run-time smaller and one which makes its run-time greater. If you manage to make both new programs have the same O-rate, you will prove that the original code has the same O-rate.
Here is a "smaller" or "faster" code:
do n/2 iterations; set i = n/2 for each of them {
    do just one iteration, where you set j = i {
        perform j steps
    }
}
This code is faster because each loop does less work. It does something like n²/4 iterations.
Here is a "greater" or "slower" code:
do n iterations; set i = n for each of them {
    for (int j = 1; j <= 2 * n; j *= 2) {
        perform j steps
    }
}
I made the upper bound for the middle loop 2n to make sure its last iteration is for j=n or greater.
This code is slower because each loop does more work. The number of iterations of the middle loop (and everything under it) is 1+2+4+...+n+2n, which is something like 4n. So the number of iterations for the whole program is something like 4n².
We got, in a somewhat formal manner:
n²/4 ≤ runtime ≤ 4n²
So runtime = O(n²).
Here I use O where it should be Θ. O is usually defined as "upper bound", while sometimes it means "upper or lower bound, depending on context". In my answer O means "both upper and lower bound".
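If you want to sanity-check the guess empirically before (or after) the proof, a sketch like this (my own addition; I widened sum to long to be safe) runs fun for growing n and prints sum/n², which should settle near a constant if the runtime is Θ(n²):

public class CheckGuess {
    static long fun(int n) {
        long sum = 0;
        for (int i = n; i > 0; i--)
            for (int j = i; j < n; j *= 2)
                for (int k = 0; k < j; k++)
                    sum += 1;
        return sum;
    }

    public static void main(String[] args) {
        for (int n = 1000; n <= 8000; n *= 2)
            // ratio sum / n² should approach a constant for Θ(n²) growth
            System.out.println(n + " -> " + (double) fun(n) / ((double) n * n));
    }
}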

How to determine computational complexity for algorithms with nested loops?

After looking at this question, this article, and several other questions, I have still not been able to find a general way to determine the computational complexity of algorithms with looping variables dependent on the parent loop's variable. For example,
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) {
        for (int k = i; k < j; k++) {
            // one statement
        }
    }
}
I know that the first loop has a complexity of n, but the inner loops are confusing me. The second loop seems to be executed n-i times and the third loop seems to be executed j-i times. However, I'm not sure how to turn this into a regular Big-O statement. I don't think I can say O(n(n-i)(j-i)), so how can I get rid of the i and j variables here?
I know this is something on the order of n^3, but how can I show this? Do I need to use series?
Thanks for your help!
(If you were wondering, this is from a brute force implementation of the maximum sum contiguous subsequence problem.)
The first loop hits N items on average.
The second loop hits N/2 items on average.
The third loop hits N/4 items on average.
O(N * N/2 * N/4) is about O(N^3 / 8), which is about O(N^3).
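The averaging argument is a heuristic rather than an exact count. If you want to check the growth directly, a counter does it (my own sketch; the exact total works out to (N^3 - N)/6, which is still on the order of N^3):

public class CountIterations {
    public static void main(String[] args) {
        int n = 200; // any size works; the closed form below is exact
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = i; j < n; j++)
                for (int k = i; k < j; k++)
                    count++;
        System.out.println(count);                      // 1333300
        System.out.println(((long) n * n * n - n) / 6); // (n^3 - n)/6 = 1333300
    }
}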

Proposed analysis of algorithm

I have been practicing analyzing algorithms lately. I feel like I have a pretty good understanding of analyzing non-recursive algorithms, but I am unsure about recursive ones, which I have only just begun to study. I also have not had a formal check on my methods, so I don't know whether what I have been doing is correct.
Would it be too much to ask if someone could check a few algorithms that I have implemented and analyzed, and see if my understanding is along the right lines or if I am completely off?
Here they are:
1)
sum = 0;
for (i = 0; i < n; i++){
    for (j = 0; j < i*i; j++){
        if (j % i == 0) {
            for (k = 0; k < j; k++){
                sum++;
            }
        }
    }
}
My analysis of this one was O(n^5) due to:
Sum(i = 0 to n)[Sum(j = 0 to i^2)[Sum(k = 0 to j) of 1]]
which evaluated to:
(1/2)(n^5/5 + n^4/2 + n^3/3 - n/30) + (1/2)(n^3/3 + n^2/2 + n/6) + (1/2)(n^3/3 + n^2/2 + n/6) + n + 1.
Hence it is O(n^5)
Is this correct as an evaluation of the triple summation over the loops? I have assumed that the if statement will always pass, for worst-case complexity. Is this a correct assumption for the worst case?
2)
void tonyblair(int n, int a) {
    if (a < 12) {
        for (int i = 0; i < n; i++) {
            System.out.println("*");
        }
        tonyblair(n - 1, a);
    } else {
        for (int k = 0; k < 3000; k++) {
            for (int j = 0; j < n * k; j++) {
                System.out.println("#");
            }
        }
    }
}
My analysis of this algorithm is O(infinity), due to the infinite recursion in the if branch when its condition is true, which would be the worst case. For completeness, I also analyzed the case where the condition is false and the if branch does not run. I then got a complexity of O(nk) due to:
Sum(k = 0 to 3000)[Sum(j = 0 to n*k) of 1]
which then evaluated to nk(3001) + 3001. Hence it is O(nk), where k is not discarded because it controls the number of iterations of the loop.
Number 1
I can't tell how you've derived your formula. Usually adding terms happens when there are multiple steps in an algorithm, such as precomputing data and then looking up values from it. Nested for loops, instead, imply multiplication. Also, the worst case is the best case for this snippet of code, because for a given value of n, sum will be the same at the end.
To find the complexity, we want to find the number of times that the inner loop is evaluated. Summations are often easy to solve if they go from 1 to n, so I'm going to drop the 0s from them later on. If i is 0, the middle loop won't run, and if j is 0, the inner loop won't run. We can rewrite the code equivalently as:
sum = 0;
for (i = 1; i < n; i++)
{
    for (j = 1; j < i*i; j++)
    {
        if (j % i == 0)
        {
            for (k = 0; k < j; k++)
            {
                sum++;
            }
        }
    }
}
I could make my life harder by forcing the outer loop to start at 2, but I'm not going to. The outer loop now runs from 1 to n-1. The middle loop runs based on the current value of i, so we need to do a summation:
Sum(i = 1 to n-1)[number of inner-loop steps for that i]
The middle for loop always goes to (i^2 - 1), and j will only be divisible by i a total of (i - 1) times (i, i*2, i*3, ..., i*(i-2), i*(i-1)). With this, we get:
Sum(i = 1 to n-1)[Sum(j = 1 to i-1)[steps of the inner loop]]
The inner loop then executes j times. The j in our summation is not the same as the j in the code, though. The j in the summation represents each time the middle loop's condition passes; each time that happens, the j in the code will be i * (number of passes so far) = i * (the j in the summation). Therefore, we have:
Sum(i = 1 to n-1)[Sum(j = 1 to i-1)[i * j]]
We can move the i to in between the two summations, as it is a constant for the inner summation. Then the formula for the sum of 1 to n is well known: n*(n+1)/2. Because here we only go to i - 1, we must subtract the last term, i, out. This gives:
Sum(i = 1 to n-1)[i * (i*(i+1)/2 - i)] = Sum(i = 1 to n-1)[(i^3 - i^2)/2]
The summations for the sum of squares and the sum of cubes are also well known. Keeping in mind that we are only summing to n-1 in both cases, we must remember to subtract n^3 and n^2, respectively, and we get:
(1/2) * [(n^2*(n+1)^2/4 - n^3) - (n*(n+1)*(2n+1)/6 - n^2)]
This is obviously n^4. If we solve it all the way, we get:
(3n^4 - 10n^3 + 9n^2 - 2n)/24
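As a cross-check of that closed form (my own addition, not part of the original answer), running the snippet with a counter and comparing against the formula:

public class CheckN4 {
    public static void main(String[] args) {
        int n = 100;
        long sum = 0;
        for (int i = 0; i < n; i++)
            for (long j = 0; j < (long) i * i; j++)
                if (j % i == 0)            // never reached when i == 0
                    for (long k = 0; k < j; k++)
                        sum++;
        // closed form derived above: (3n^4 - 10n^3 + 9n^2 - 2n)/24
        long formula = (3L*n*n*n*n - 10L*n*n*n + 9L*n*n - 2L*n) / 24;
        System.out.println(sum + " == " + formula);
    }
}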
Number 2
For the last one, it is in fact O(infinity) if a < 12, because of the if statement. Well, technically everything is O(infinity), because Big-O only provides an upper bound on runtime. If a < 12, it is also Ω(infinity) and Θ(infinity). If only the else branch runs, then we have the summation from 1 to 2999 of k*n:
Sum(k = 1 to 2999)[k * n] = n * Sum(k = 1 to 2999)[k] = 4498500 * n
It's very important to notice that the summation from 1 to 2999 is a constant (it's 4498500). No matter how large a constant is, it's still a constant, and not dependent on n. We will end up throwing it out of the runtime calculations. Sometimes, when a theoretically fast algorithm has a large constant, it is practically slower than other algorithms that are theoretically slow; one example I can think of is Chazelle's linear-time triangulation algorithm, which no one has ever implemented. In any case, we have 4498500 * n, which is Θ(n).
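A tiny check of that constant (my own sketch, assuming the inner bound in the question's code was meant to be n * k): counting the else branch's iterations instead of printing gives exactly 4498500 * n:

public class ElseCount {
    static long elseBranchCount(int n) {
        long count = 0;
        for (int k = 0; k < 3000; k++)          // a constant 3000 iterations
            for (long j = 0; j < (long) n * k; j++)
                count++;
        return count;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 4; n++)
            // prints 4498500, 8997000, 13495500, 17994000, i.e. 4498500 * n
            System.out.println(elseBranchCount(n) + " == " + 4498500L * n);
    }
}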

Time Complexity - Calculating Worst Case For Algorithms

I am reading some information on time complexity and I'm quite confused as to how the following time complexities are derived, and whether there is a particular set of rules or methods for working them out.
1)
Input: int n
for(int i = 0; i < n; i++){
    print("Hello World, ");
}
for(int j = n; j > 0; j--){
    print("Hello World");
}
Tight: 6n + 5
Big O: O(n)
2)
Input: l = array of comparable items
Output: l = array of sorted items
Sort:
for(int i = 0; i < l.length; i++){
    for(int j = 0; j < l.length; j++){
        if(l[i] > l[j]){
            Swap(l[i], l[j]);
        }
    }
}
return l;
Worst Case Time Complexity: 4n² + 3n + 2 = O(n²)
For a given algorithm, time complexity or Big O is a way to provide a fair estimate of the total number of elementary operations performed by the algorithm in relation to the given input size n.
Type-1
Let's say you have an algorithm like this:
a=n+1;
b=a*n;
There are 2 elementary operations in the above code. No matter how big your n is, a computer will always perform 2 operations for this code; since the work does not depend on the size of the input, the Big-O of the above code is O(1).
Type-2
For this code:
for(int i = 0; i < n; i++){
    a = a + i;
}
I hope you can see why the Big-O is O(n): the elementary operation count depends directly on the size of n.
Type-3
Now what about this code:
//Loop-1
for(int i = 0; i < n; i++){
    print("Hello World, ");
}
//Loop-2
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        x = x + j;
    }
}
As you can see, loop-1 is O(n) and loop-2 is O(n^2). So it feels like the total complexity should be O(n)+O(n^2). But no: the time complexity of the above code is O(n^2). Why? Because we are trying to give a fair estimate of the count of elementary operations performed by the algorithm for a given input size n, one that will be comparatively easy for another person to understand. With this logic, O(n)+O(n^2) becomes O(n^2), and O(n^2)+O(n^3)+O(n^4) becomes O(n^4)!
Again, you may ask: but how? How do all the lower powers become so insignificant, once we add them to a higher power of n, that we can completely omit them when describing the complexity of our algorithm to another human?
I will try to show the reason for this case: O(n)+O(n^2) = O(n^2).
Let's say n=1000. Then the exact count for the O(n) part is 1,000 operations and the exact count for the O(n^2) part is 1000*1000 = 1,000,000 operations, so the O(n^2) part is 1,000 times bigger than the O(n) part. This means your program will spend most of its execution time in the O(n^2) part, and thus it is not worth mentioning that your algorithm also has an O(n) part.
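If it helps to see this at more sizes, here is a small sketch of my own that prints how many times bigger the n^2 part is than the n part as n grows:

public class Dominance {
    public static void main(String[] args) {
        for (long n = 10; n <= 1000000; n *= 10) {
            long linear = n;        // operation count of the O(n) part
            long quadratic = n * n; // operation count of the O(n^2) part
            System.out.println("n=" + n + ": the n^2 part is "
                    + (quadratic / linear) + " times bigger");
        }
    }
}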
PS. Pardon my English :)
In the first example, the array has n elements, and you go through these elements twice. The first time you run from index 0 up to n-1, and the second time from index n down to 1. So, to simplify, we can say that it took you 2n steps. When dealing with Big O notation, you should keep in mind that we care about the bounds:
As a result, O(2n)=O(n)
and O(an+b)=O(n)
Input: int n                    // operation 1
for(int i = 0; i < n; i++){     // operation 2
    print("Hello World, ");     // operation 3
}
for(int j = n; j > 0; j--){     // operation 4
    print("Hello World");       // operation 5
}
As you can see, we have a total of 5 constant operations outside the loop bodies.
Inside the first loop, we do three internal operations per iteration: checking whether i is less than n, printing "Hello World", and incrementing i.
Inside the second loop, we also have three internal operations per iteration.
So, the total number of operations that we need is: 3n (first loop) + 3n (second loop) + 5 (operations outside the loops). As a result, the total number of steps required is 6n + 5 (that is your tight bound).
As I mentioned before, O(an + b) = O(n), because once an algorithm is linear, a and b do not have a great impact when n is very large.
So, your time complexity becomes: O(6n + 5) = O(n).
You can use the same logic for the second example, keeping in mind that two nested loops take n² instead of n.
I will slightly modify John's answer. Defining n is one constant operation. Defining integer i and assigning it 0 is 2 constant operations; defining integer j and assigning it n is another 2 constant operations. Checking the conditions for i and j inside the for loops, the increments, and the print statements all depend on n, so the total is 3n + 3n + 5, which equals 6n + 5. Here we cannot skip any of the statements during execution, so the average-case running time will also be the worst-case running time, which is O(n).

Assistance required for calculating algorithmic complexity - CSc fundamentals

Hi, I have two algorithms that need their complexity worked out. I've had a try myself first: O(N^2) and O(N^3). Here they are:
Treat y as though it's declared 'y = new int[N][N]' and B as though 'B = new int[N][N]'....
int x(int[][] y)
{
    int z = 0;
    for (int i = 0; i < y.length; i++)
        z = z + y[i].length;
    return z;
}

int A(int[][] B)
{
    int c = 0;
    for (int i = 0; i < B.length; i++)
        for (int j = 0; j < B[i].length; j++)
            c = c + B[i][j];
    return c;
}
Thanks a lot :)
To calculate the algorithmic complexity, you need to tally up the number of operations performed in the algorithm (big-O notation is concerned with the worst-case scenario).
In the first case, you have a loop that is performed N times (y.length == N). Inside the loop you have one operation (executed on each iteration). This is linear in the size of the input, so x is O(N).
Note: calculating y[i].length is a constant-time operation.
In the second case, you have the outer loop that is performed N times (just like in the first case), and in each iteration another loop of the same length (N == B[i].length) is executed. Inside the inner loop you have one operation (executed on each iteration of the inner loop). This is O(N*N) == O(N^2) overall.
Note: calculating B[i][j] is a constant-time operation.
Note: remember that for big-O, only the fastest-growing term matters, so additive constants can be ignored (e.g. the initialization of the return value and the return instruction are both operations, but they are constant and not executed in a loop; the term depending on N grows faster than any constant).
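To see those tallies concretely, here is a sketch of my own that instruments both methods with an operation counter and runs them on N x N arrays; the count for x grows like N, while the count for A grows like N^2:

public class Tally {
    static long ops;

    static int x(int[][] y) {
        int z = 0;
        for (int i = 0; i < y.length; i++) {
            z = z + y[i].length;
            ops++;                       // one addition per outer iteration
        }
        return z;
    }

    static int A(int[][] B) {
        int c = 0;
        for (int i = 0; i < B.length; i++)
            for (int j = 0; j < B[i].length; j++) {
                c = c + B[i][j];
                ops++;                   // one addition per inner iteration
            }
        return c;
    }

    public static void main(String[] args) {
        for (int N = 100; N <= 800; N *= 2) {
            int[][] m = new int[N][N];
            ops = 0; x(m); long xOps = ops; // N operations
            ops = 0; A(m); long aOps = ops; // N * N operations
            System.out.println("N=" + N + "  x: " + xOps + "  A: " + aOps);
        }
    }
}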
