Calculating Big "O" for the following example - algorithm

Let's say I have the following code sample:
int number;
for (int i = 0; i < A; i++)
    for (int j = 0; j < B; j++)
        if (i == j) // some condition...
            do {
                number = rand();
            } while (number > 100);
I would like to know the Big-O for this example. The outer loops are O(A * B), but I'm not sure what to make of the do-while loop and its Big-O. In the worst case it can loop forever, and in the best case it's O(1) and can be ignored.
Edit: updated condition inside the if statement (replaced function call with a simple comparison).

Since rand() draws from a fixed range, each pass of the do-while exits with a fixed positive probability, so its expected number of iterations is a constant: in expectation the do-while is O(1), even though its worst case is unbounded.
With the updated condition, the i == j check is O(1) as well.
Total expected complexity is therefore O(A * B).
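To see the expected-constant behaviour concretely, here is a minimal Java sketch of the snippet. It models C's rand() with java.util.Random over an assumed range of 0..32767 (a common RAND_MAX); the class and counter names are mine:
import java.util.Random;

public class ExpectedWork {
    public static void main(String[] args) {
        Random rng = new Random();
        int A = 2000, B = 2000;
        long draws = 0, hits = 0;
        for (int i = 0; i < A; i++)
            for (int j = 0; j < B; j++)
                if (i == j) {                        // O(1) comparison
                    int number;
                    do {
                        number = rng.nextInt(32768); // exits with probability 101/32768 per draw
                        draws++;
                    } while (number > 100);
                    hits++;
                }
        System.out.println("average draws per do-while: " + (double) draws / hits);
    }
}
The average stays near 32768/101, roughly 324 draws, a constant independent of A and B, which is why the total expected work is O(A * B).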

On understanding how to compute the Big-O of code snippets

I understand that simple statements like:
int x = 5; // is 1 or O(1)
And a while loop such as:
while(i < n); // is n+1 or O(n)
And the same for a single for loop (depending on its bounds).
With nested while or for loops such as:
for(int i = 0; i<n; i++){     // this is n + 1
    for(int j = 0; j<n; j++){ // this is (n+1)*n, total = O(n^2)
    }
}
Also, anytime we have a doubling effect it's log_2(n), a tripling effect log_3(n), and so on. And if the control variable is being halved or quartered, that's also either log_2(n) or log_4(n).
But I am dealing with much more complicated examples. How would one work these out? I have the answers; I just don't know how to derive them on paper come an examination.
Example 1:
for (i = 1; i < (n*n+3*n+17) / 4; i += 1)
    System.out.println("Sunshine");
Example 2:
for (i = 0; i < n; i++)
    if (i % 2 == 0) // very confused by what mod would do to runtime
        for (j = 0; j < n; j++)
            System.out.print("Bacon");
    else
        for (j = 0; j < n * n; j++)
            System.out.println("Ocean");
Example 3:
for (i = 1; i <= 10000 * n; i *= 2)
    x += 1;
Thank you
Example 1 is bounded by the expression (n*n+3*n+17)/4, whose largest and therefore dominant term is n^2, so it is O(n^2).
The second example is a bit more tricky. The outer loop over i iterates n times, but what executes inside depends on whether that value of i is odd or even. When it is even, another loop over n happens, but when it is odd, a loop over n^2 happens. The odd case dominates the running time, so example 2 is O(n^3).
The third example iterates until hitting 10000*n, but does so by doubling the loop counter i at each step, so it has O(lg n) performance, where lg means log base 2. To see why, imagine we wanted to reach n = 32, starting at i = 1 and doubling each time: i takes the values 1, 2, 4, 8, 16, 32, i.e. lg(32) + 1 = 6 iterations, which grows as lg(n). The constant factor 10000 only adds about lg(10000) extra doublings, so it disappears in the Big-O.
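If you want to check these answers empirically, here is a small Java sketch (class and variable names are mine) that counts the iterations of each example for growing n. Doubling n should roughly quadruple c1 (quadratic), multiply c2 by about 8 (cubic), and add about 1 to c3 (logarithmic):
public class ExampleCounts {
    public static void main(String[] args) {
        for (int n : new int[]{100, 200, 400}) {
            long c1 = 0, c2 = 0, c3 = 0;
            for (int i = 1; i < (n * n + 3 * n + 17) / 4; i++)  // Example 1
                c1++;
            for (int i = 0; i < n; i++)                         // Example 2
                if (i % 2 == 0)
                    for (int j = 0; j < n; j++) c2++;
                else
                    for (int j = 0; j < n * n; j++) c2++;
            for (long i = 1; i <= 10000L * n; i *= 2)           // Example 3
                c3++;
            System.out.printf("n=%d  c1=%d  c2=%d  c3=%d%n", n, c1, c2, c3);
        }
    }
}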

worst case runtime of the double for loop

Can someone please explain how the worst case running time is O(N) and not O(N^2) in the following exercise? There is a double for loop where, for every i, we compare j to i, do sum++, increment j, and repeat until we reach N.
What is the order of growth of the worst case running time of the following code fragment
as a function of N?
int sum = 0;
for (int i = 1; i <= N; i = i*2)
    for (int j = 0; j < i; j++)
        sum++;
Question Explanation
The answer is: N
The body of the inner loop is executed 1 + 2 + 4 + 8 + ... + N ~ 2N times.
I think you already stated the answer in your question -- the inner loop is executed about 2N times, which is O(N). In asymptotic (big-O) notation constant multiples are dropped, because for very, very large values the graph of 2N looks just like N, so the factor isn't considered significant. In this case, the complexity of the problem equals the number of times "sum++" is called, because the algorithm is so simple. Does that make sense?
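A quick Java sketch (naming is mine) makes that count visible; when N is a power of two the inner body runs exactly 2N - 1 times, so sum/N approaches 2:
public class GeometricSum {
    public static void main(String[] args) {
        for (int N : new int[]{1 << 10, 1 << 15, 1 << 20}) {
            long sum = 0;
            for (int i = 1; i <= N; i *= 2)     // i = 1, 2, 4, ..., N
                for (int j = 0; j < i; j++)
                    sum++;
            // sum = 1 + 2 + 4 + ... + N = 2N - 1
            System.out.printf("N=%d  sum=%d  sum/N=%.4f%n", N, sum, (double) sum / N);
        }
    }
}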
Complexity isn't determined by the number of nested loops alone: the time complexity of nested loops equals the number of times the innermost statement is executed (for c simple nested loops over N elements that is at most O(N^c)). For example, both of the following sample loops have O(N^2) time complexity:
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i+1; j <= n; j += c) {
        // some O(1) expressions
    }
}

time complexity for code and an order of magnitude improvement

I have the following problem:
For the following code, with reason, give the time complexity of the function.
Write a function which performs the same task but which is an order-of-magnitude improvement in time complexity. A function with greater (time or space) complexity will not get credit.
Code:
void something(int[] a) {
    int n = a.length;
    for (int i = 0; i < n; i++)
        if (a[i] % 2 == 0) {
            int temp = a[i];
            for (int j = i; j > 0; j--)
                a[j] = a[j-1];
            a[0] = temp;
        }
}
I'm thinking that since the temp = a[i] assignment in the worst case is done n times, that contributes n, and a[j] = a[j-1] runs 0 + 1 + ... + (n-1) = n(n-1)/2 times, contributing (n^2-n)/2. Summing gives n + (n^2-n)/2 = (n^2+n)/2; dropping the constant factor and the lower-order term leaves a complexity of O(n^2).
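As a sanity check on that count, here is a small Java sketch (class and method names are mine) that counts the a[j] = a[j-1] executions in the all-even worst case; with this loop the exact total is n(n-1)/2, which is Θ(n^2):
public class ShiftCount {
    static long shifts(int[] a) {
        long count = 0;
        for (int i = 0; i < a.length; i++)
            if (a[i] % 2 == 0) {
                int temp = a[i];
                for (int j = i; j > 0; j--) {
                    a[j] = a[j - 1];
                    count++;                  // one count per shift
                }
                a[0] = temp;
            }
        return count;
    }
    public static void main(String[] args) {
        int n = 2000;
        int[] a = new int[n];                 // all zeros: every element is even
        System.out.println(shifts(a) + " shifts; n(n-1)/2 = " + (long) n * (n - 1) / 2);
    }
}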
For the order of magnitude improvement:
void something(int[] a) {
    int n = a.length;
    String answer = "";
    for (int i = 0; i < n; i++) {
        if (a[i] % 2 == 0) answer = a[i] + answer;
        else answer = answer + a[i];
    }
    for (int i = 0; i < n; i++)
        a[i] = answer.charAt(i); // stores char codes; breaks for multi-digit values
}
The code inside the first for-loop is executed n times, and the second for-loop runs n times; summing gives a time complexity figure of 2n.
Is this correct? Or am I doing something wrong?
I suppose your function is meant to rearrange the array so that all the even numbers come first, followed by the odd numbers.
For the first function the complexity is O(n^2), as you have specified.
But the second function is O(n) only if the + operator used for appending runs in constant time, and with Java Strings it does not: answer = a[i] + answer copies the whole string, so each concatenation costs time proportional to the current length of answer, and the loop as written is O(n^2). If the appends are done with a constant-time structure (for example, separate collections for evens and odds that are joined at the end), the pass is O(n).
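For illustration, here is one possible O(n) rewrite, sketched under the assumption stated above (evens move to the front, most recent even first, matching the prepend in the original; odds keep their order). The class name and the use of ArrayDeque are my own choices, not from the question:
import java.util.ArrayDeque;
import java.util.Deque;

public class EvenFirst {
    static void something(int[] a) {
        Deque<Integer> evens = new ArrayDeque<>();
        int[] odds = new int[a.length];
        int oddCount = 0;
        for (int x : a) {
            if (x % 2 == 0) evens.addFirst(x); // O(1) prepend
            else odds[oddCount++] = x;         // O(1) append
        }
        int k = 0;
        for (int x : evens) a[k++] = x;        // evens first, last seen at index 0
        for (int i = 0; i < oddCount; i++) a[k++] = odds[i];
    }
}
Every element is touched a constant number of times, so the whole pass is O(n) with no hidden cost from string copying.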

Running time of for loop

I seem to understand the basic concepts of the easier loops, like so... the first loop runs in O(n), as does the inner loop. Because one is nested inside the other, you multiply to get a total running time of O(n^2).
sum = 0;
for ( i = 0; i < n; i++ )
    for ( j = 0; j < n; j++ )
        ++sum;
Though when things start getting switched around, I get completely lost as to how to figure it out. Could someone explain to me how to figure out running time for both of the following? Also, any links to easy to understand references that could further help me improve is also appreciated. Thanks!
sum = 0;
for( i = 0; i < n; i += 2 )
    for( j = 0; j < n; j++ )
        ++sum;
The only thing I can gather from this is that the inner loop runs in O(n). The i+=2 really throws me off in the outer loop.
sum = 0;
for( i = 1; i < n; i *= 2 )
    for( j = 0; j < n; j++ )
        ++sum;
From my attempt...outer loop is O(log(n)), inner is O(n), so total is O(n log(n))?
A good way of thinking about Big-O performance is to pretend each element of the code is a mathematical function that takes in n items and returns the number of computations performed on those items.
For example, a single for loop like for ( i = 0; i < n; i++ ) would be equivalent to a function i(), where i(n) = n, indicating that one computation is performed for each input n.
If you have two nested loops, then the functional equivalent for
for ( i = 0; i < n; i++ )
    for ( j = 0; j < n; j++ )
would look like these two functions:
i(n) = n * j(n)
j(n) = n
Working these two functions out produces an end result of n*n = n^2, since n can be substituted for j(n).
What this means is that as long as you can solve for the Big-O of any single loop, you can then apply those solutions to a group of nested loops.
For example, let's look at your second problem:
for( i = 0; i < n; i += 2 )
    for( j = 0; j < n; j++ )
i += 2 means that for an input set of n items (n0, n1, n2, n3, n4) you're only touching every other element of that set. Assuming you initialize so that i = 0, that means you're only touching (n0, n2, n4). You're halving the size of the data set being processed, so the functional equivalents work out like:
i(n) = (n/2) * j(n)
j(n) = n
Solving these gets you (n/2) * n = (n^2)*(1/2). Since this is Big-O work, we remove the constants to produce a Big-O value of (n^2).
The two key points to remember here:
Big-O math starts with a set of n data elements. If you're trying to determine the Big-O of a for loop that iterates through that set of n elements, your first step is to look at how the incrementor changes the number of data elements that the for routine actually touches.
Big-O math is math. If you can solve for each for expression individually, you can use those solutions to build up into your final answer, just like you can solve for a set of equations with common definitions.
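Putting those two points into practice, here is a small Java sketch (names are mine) that counts the inner-body executions of both of your loops. The ratio skip/n^2 settles near 1/2 and dbl/(n log2 n) settles near 1, confirming O(n^2) for the i += 2 version and O(n log n) for the i *= 2 version, as you guessed:
public class IterationCounts {
    public static void main(String[] args) {
        for (int n : new int[]{1 << 10, 1 << 12, 1 << 13}) {
            long skip = 0, dbl = 0;
            for (int i = 0; i < n; i += 2)     // second snippet
                for (int j = 0; j < n; j++)
                    skip++;
            for (int i = 1; i < n; i *= 2)     // third snippet
                for (int j = 0; j < n; j++)
                    dbl++;
            System.out.printf("n=%d  skip/n^2=%.3f  dbl/(n log2 n)=%.3f%n",
                    n, skip / ((double) n * n),
                    dbl / (n * (Math.log(n) / Math.log(2))));
        }
    }
}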

Big-Oh analysis of the running time?

For each of the following program fragments, give a Big-Oh analysis of the running time. I have two problems that I am not 100% sure are right; can somebody help me?
Fragment 1:
for( int i = 0; i < n; i++ )
    for( int j = 0; j < n * n; j++ )
        for( int k = 0; k < j; k++ )
            sum++;
Answer: O(n^5)? Not really sure because of the n*n...
Fragment 2:
for( int i = 1; i <= n; i++ )
    for( int j = 1; j <= i * i; j++ )
        if (j % i == 0)
            for( int k = 0; k < j; k++)
                sum++;
Answer: O(n^4)
Decompose the problem space per loop. Start from the outermost loop. What are the loops really going up to?
For the first problem, we have the following pattern.
The outer loop will run n times.
The outer inner loop will run n^2 times, and its bound does not depend on the current value of the outer loop.
The innermost loop will run up to j times, so it is bounded by the current value of the outer inner loop.
All of your steps are in linear chunks, meaning you will go from 0 to your ending condition in a linear fashion.
Here's what the summation actually looks like:
Sigma(i=0...n-1) Sigma(j=0...n^2-1) j
So, what would that translate into? You have to unroll the sums: the inner sum is n^2(n^2-1)/2 ~ n^4/2, and the outer loop multiplies that by n, giving ~n^5/2, so the fragment is O(n^5) after all.
For the second problem, we have the following pattern.
The outer loop runs up to and including n times.
The outer inner loop runs up to and including i^2 times.
The innermost loop runs up to j times, on the condition that j % i == 0. That means that the innermost loop isn't executed every time.
I'll leave this problem for you to solve out. You do have to take the approach of unrolling the sums and reducing them to their algebraic counterparts.
For Fragment 1:
let's say m = n^2
Sigma(i=0...n-1) Sigma(j=0...m-1) j
=> n * (m(m-1)/2)
=> ~n^5 / 2
Answer: O(n^5)
For Fragment 2:
The innermost loop runs only when j is a multiple of i, i.e. for j = i, 2i, ..., i*i (that's i values of j), and for such a j it runs j times:
Sigma(i=1...n) (i + 2i + ... + i*i) = Sigma(i=1...n) i^2(i+1)/2
approximately that's Sigma(i=1...n) i^3 / 2
~=> n^4 / 8
Answer: O(n^4)
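As a numeric cross-check of both fragments, here is a short Java sketch (naming is mine) that counts the sum++ executions; s1/n^5 and s2/n^4 approach constants (about 1/2 and 1/8 respectively), consistent with O(n^5) and O(n^4):
public class FragmentCounts {
    public static void main(String[] args) {
        for (int n : new int[]{10, 20, 40}) {
            long s1 = 0, s2 = 0;
            for (int i = 0; i < n; i++)            // Fragment 1
                for (int j = 0; j < n * n; j++)
                    for (int k = 0; k < j; k++)
                        s1++;
            for (int i = 1; i <= n; i++)           // Fragment 2
                for (int j = 1; j <= i * i; j++)
                    if (j % i == 0)
                        for (int k = 0; k < j; k++)
                            s2++;
            System.out.printf("n=%d  s1/n^5=%.3f  s2/n^4=%.3f%n",
                    n, s1 / Math.pow(n, 5), s2 / Math.pow(n, 4));
        }
    }
}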
