I have a program and am trying to compute its complexity. I want to be sure I am not mistaken.
for (int i = 4; i <= n; i = i * 4)
{
    cout << "counter for first loop: " << ++count1 << endl;
    for (int j = i; j >= 0; j = j - 4)
    {
        cout << "counter for second loop: " << ++count2 << endl;
        for (int k = 0; k <= n; k++)
        {
            cout << "counter for third loop: " << ++count3 << endl;
        }
    }
}
Here, the complexity of the third loop is O(n); together with the second loop it becomes O(n · log4 i), and the complexity of the whole program is O(n · (log4 i)^2). Am I right in my answer? Thanks
The complexity of the innermost loop is O(n). For a given i, the middle loop runs i/4 + 1 times, which is O(i), not O(log4 i). The outermost loop runs O(log4 n) times, with i taking the geometric values 4, 16, 64, ..., up to n. You cannot simply multiply the three bounds together, because i changes on every outer iteration; you have to sum over the values of i. The total work is roughly (n + 1) · (4 + 16 + ... + n)/4, and since the geometric sum 4 + 16 + ... + n is Θ(n), the total complexity is Θ(n^2).
You can proceed formally as follows. Executing this fragment:

sum = 0;
for (i = 4; i <= n; i = i * 4) {
    for (j = i; j >= 0; j = j - 4) {
        for (k = 0; k <= n; k++) {
            sum++;
        }
    }
}
We obtain:

sum = (n + 1) · ((4^m - 1)/3 + m),   where m = floor(log4 n)

which is Θ(n^2), exactly compatible with the quadratic bound.
Besides, the two inner loops together do O(i · n) work for each value of i, and summing i over the geometric sequence 4, 16, ..., n contributes only a constant factor beyond the last term, so the whole thing is O(n²).
I have the following code. I need to find its big-O complexity and count the primitive operations. I know that loops usually correspond to mathematical summations. Can someone clarify how to derive the big-O of the following code using those summations?
public static int hello(int[] first, int[] second) { // assume equal-length arrays
int n = first.length, count = 0;
for (int i = 0; i < n; i++) { // loop from 0 to n-1
int total = 0;
for (int j = 0; j < i; j++) { // loop from 0 to i-1
for (int k = 0; k <= j; k++) { // loop from 0 to j
total += first[k];
}
}
if (second[i] == total) {
count++;
}
}
return count;
}
So this would be

sum_{i=0}^{n-1} sum_{j=0}^{i-1} sum_{k=0}^{j} 1

right? How do you continue from here?
When running the code with n = 10, the first loop runs n = 10 times; a statement at the level of the second loop runs 45 times (I don't know what that means in terms of n); and a constant-time statement at the level of the innermost third loop runs 165 times.
Can someone help me with what type of summations this code would be and how it translates to Big O? Thank you so much for any help.
The sum of the first n natural numbers and the sum of the squares of the first n natural numbers are given as

1 + 2 + ... + n = n(n + 1)/2
1^2 + 2^2 + ... + n^2 = n(n + 1)(2n + 1)/6
You have got the right summation, so solving it:

Sn = sum_{i=0}^{n-1} sum_{j=0}^{i-1} (j + 1) = sum_{i=0}^{n-1} i(i + 1)/2
Sn ≤ (1/2) · (sum_{i=1}^{n} i^2 + sum_{i=1}^{n} i)
Sn ≤ (1/2) · (n(n + 1)(2n + 1)/6 + n(n + 1)/2)
Sn ≤ n^3
Sn is the number of operations performed by the for loops altogether. The time complexity is thus given as
O(Sn) ~ O(n^3)
I am having trouble understanding the answer to the following question about analyzing the two algorithms below.
for (int i = n ; i >= 1; i = i/2) {
for ( int j = 1; j <= n ; j++) {
//do something
}
}
The algorithm above has a complexity of O(n) according to the answers. Shouldn't it be lower, since the outer loop always halves the amount we have to go through? I thought that it should be something along the lines of O(n/2 * )?
for ( int j = 1; j <= n ; j++ ) {
for ( int i = n ; i >= j ; i = i / 2 ) {
//do something
}
}
This one is O(n log n) if I am correct?
The first iteration will execute n steps, the second will execute n/2, the third will execute n/4, and so on.
If you compute the sum of n/(2^i) for i=0..log n you will get roughly 2n and that is why it is O(n).
If you take n out of the summation and sum only the 1/(2^i) part, you will get 2. Take a look at an example:
1 + 1/2 + 1/4 + 1/8 + 1/16 + ... = 1 + 0.5 + 0.25 + 0.125 + 0.0625 + ... = 2
Each successive term is half the previous one, therefore the sum will never exceed 2.
You are right with the second nested loop example - it is O(n log n).
EDIT:
After the comment from ringø I re-read the question, and in fact the algorithm is different from what I understood. ringø is right: the algorithm as written in the question is O(n log n). However, judging from the context, I think the OP meant an algorithm where the inner loop bound is i, not n.
This answer relates to the following algorithm:
for (int i = n ; i >= 1; i = i/2) {
for ( int j = 1; j <= i ; j++) {
//do something
}
}
Can someone please explain how the worst-case running time is O(N) and not O(N^2) in the following exercise? There is a double for loop where, for every i, we compare j to i, increment sum and j, and repeat until i reaches N.
What is the order of growth of the worst case running time of the following code fragment
as a function of N?
int sum = 0;
for (int i = 1; i <= N; i = i*2)
for (int j = 0; j < i; j++)
sum++;
Question Explanation
The answer is: N
The body of the inner loop is executed 1 + 2 + 4 + 8 + ... + N ~ 2N times.
I think you already stated the answer in your question: the inner loop body is executed about 2N times, which is O(N). In asymptotic (big-O) notation constant factors are dropped, because for very, very large values the graph of 2N looks just like N, so the factor isn't considered significant. In this case, the complexity of the problem equals the number of times "sum++" is called, because the algorithm is so simple. Does that make sense?
Complexity doesn't depend simply on the number of nested loops; with c nested loops of Θ(N) iterations each, it is O(N^c). The time complexity of nested loops is determined by the number of times the innermost statement is executed. For example, the following sample loops both have O(N^2) time complexity:
for (int i = 1; i <=n; i += c) {
for (int j = 1; j <=n; j += c) {
// some O(1) expressions
}
}
for (int i = n; i > 0; i -= c) {
    for (int j = i+1; j <= n; j += c) {
        // some O(1) expressions
    }
}
How can one determine the time complexity of this loop:
for(int i = N-1; i >= 0; i--)
{
for(int j = 1; j <= i; j++)
{
if(numbers[j-1] > numbers[j])
{
temp = numbers[j-1];
numbers[j-1] = numbers[j];
numbers[j] = temp;
}
}
}
As you may have noticed, this is the bubble sort algorithm. Also, is the frequency count of this algorithm the same for comparisons and assignments?
Calculate the complexity
You need to add up the basic operations/machine instructions that are executed, as a function of the size of its input.
Calculation
for (int i = N-1; i >= 0; i--)          // c1 (init), c2 (test), c3 (decrement)
{
    for (int j = 1; j <= i; j++)        // c4 (init), c5 (test), c6 (increment)
    {
        if (numbers[j-1] > numbers[j])  // c7 (comparison)
        {
            temp = numbers[j-1];
            numbers[j-1] = numbers[j];
            numbers[j] = temp;
        }
    }
}
c1, c2, c3, c4, c5, c6, c7 are the costs of executing the machine instructions corresponding to these constructs (i >= 0, j <= i, etc.).
Now for i = N-1 the inner loop is executed N-1 times,
for i = N-2 the inner loop is executed N-2 times,
...
for i = 0 the inner loop is executed 0 times.
So the inner loop body is executed (N-1) + (N-2) + ... + 1 + 0 times, which is
= N(N-1)/2
Look carefully: the cost is
= c1 + c2·(N+1) + c3·N + c4·N + (N(N-1)/2 + N)·c5 + (N(N-1)/2)·(c6 + c7)
= (c1 + c2) + N·(c2 + c3 + c4 + c5/2 - (c6 + c7)/2) + N^2·((c5 + c6 + c7)/2)
= c8 + N·c9 + N^2·c10   [c8, c9, c10 are constants]
(The test j <= i runs once more than the inner body in each of the N outer iterations, hence the N(N-1)/2 + N factor on c5.)
Why do we multiply c2 by N+1? Because of the last test, when i is actually -1.
Now, for large values of N, the N^2 term dominates the N term, so the time complexity is
T(N) = O(N^2)
The complexity of your current implementation is O(n^2) in both the best and worst cases, and it is the same whether you count only comparisons, only assignments, or both.
Here are the detailed calculations, K being a constant depending on which operations you want to take into account for time complexity:
If you want a more efficient Bubble sort algorithm, check out the pseudo codes on the Wiki page or this answer, you will find algorithms with O(n) best case complexity.
I'm trying to compute the big-O time complexity of this selection sort implementation:
void selectionsort(int a[], int n)
{
int i, j, minimum, index;
for(i=0; i<(n-1); i++)
{
minimum=a[n-1];
index=(n-1);
for(j=i; j<(n-1); j++)
{
if(a[j]<minimum)
{
minimum=a[j];
index=j;
}
}
if (i != index)
{
a[index]=a[i];
a[i]=minimum;
}
}
}
How might I go about doing this?
Formally, you can obtain the exact number of iterations, along with the order of growth, using the methodology below. Executing the following fragment (a synthetic version of the original code), sum will equal the closed form of T(n).
sum = 0;
for (i = 0; i < (n - 1); i++) {
    for (j = i; j < (n - 1); j++) {
        sum++;
    }
}
Let's begin by looking at the body of the outer loop. It does O(1) work in the initial assignments, then runs a loop n - 1 - i times, then does O(1) more work at the end to perform the swap. The runtime of one outer iteration is therefore Θ(n - i).
If we sum n - 1 - i for i going from 0 up to n - 2, we get the following:

(n - 1) + (n - 2) + ... + 1 + 0 = n(n - 1)/2

This famous sum works out to Θ(n^2), so the runtime is Θ(n^2), matching the known runtime of this algorithm.
Hope this helps!