Big O - for loop increment by partial N - complexity-theory

void function(int N) {
    int c = 0;
    for (int i = 0; i < N; i += N/5)
        c++;
}
What is the Big O of the above? Since for every N the loop would iterate 5 times, would it be O(1)?

Suppose for example that N = 100, so the step N/5 is 20. Let's draw a table:
Iteration | i
----------+------
1         | 0
2         | 20
3         | 40
4         | 60
5         | 80
Note that (as long as N >= 5, so the integer step N/5 is at least 1) it doesn't matter what value of N you pick: the number of iterations stays bounded by a small constant, roughly 5.
So we conclude that the number of iterations doesn't depend on N.
So.. you're right, it's O(1).
Clarification
What's the difference between the above example and the loop for(i=0; i<N; i+=20)?
If you draw the table for N = 100, you'll get the same table! But in this case the result does depend on the value of N: if you pick N = 200 you'll get 10 iterations rather than 5, and in general the loop runs about N/20 times. So the result in this case is O(N).
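To make the comparison concrete, here is a small Python sketch (my own illustration, not part of the original answer) that counts the iterations of both loops for several values of N, using integer division for N/5 as the C code does:

def count_step_n_over_5(N):
    # for (int i = 0; i < N; i += N/5) with integer division
    step = N // 5
    if step == 0:
        return None        # for N < 5 the C loop would never terminate
    count, i = 0, 0
    while i < N:
        count += 1
        i += step
    return count

def count_step_20(N):
    # for (i = 0; i < N; i += 20)
    count, i = 0, 0
    while i < N:
        count += 1
        i += 20
    return count

for N in [5, 100, 200, 1000, 10**6]:
    print(N, count_step_n_over_5(N), count_step_20(N))

The first column of counts stays at 5 no matter how large N gets, while the second grows roughly as N/20.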

Since for every N the loop would iterate 5 times, would it be O(1)?
Precisely. The running time only depends on a constant – 5 – so it’s bounded by O(1).

Yes, the result does not depend on N.

A formal way to answer the question: after k iterations we have i = k*(N/5), and the loop stops as soon as k*(N/5) >= N, i.e. after roughly N / (N/5) = 5 iterations no matter how large N is (integer division makes it a little more than 5 for some N, but never more than a small constant once N >= 5). A constant bound on the number of iterations means O(1).


How to determine the time complexity of this loop?

x = 1;
while (x < n)
{
    x = x + n/100;
}
I'm trying to figure out whether it's O(n) or O(1), because no matter what we put in n's place, I think the loop only runs a bounded number of times.
Let's say n = 1.1: then it runs 10 times; if n = 1.2, the loop runs 17 times; if n = 2, it runs 50 times; and when n >= 101 the loop repeats 100 times, even if n = 10^10000. You can figure out the rest.
It clearly isn't O(n), because the number of iterations doesn't grow linearly with n. It also isn't a fixed count: the number of iterations varies with n (even just looking at n = 1, 2, 3, 4, 5), and a bit of manual calculation shows it won't always run 10 times. But O(1) doesn't require the count to be identical every time, only that it is bounded by some constant. Examine the following short Python program:
def t(n):
    x = 1
    c = 0
    while x < n:
        c += 1
        x += n/100
    return c

a = []
for i in range(10000):
    a += [i/100 + 1]

with open("out.csv", "w") as f:
    for i in a:
        f.write(str(i) + "," + str(t(i)) + "\n")
Using Excel or some other application you can easily plot the number of iterations against n and look at the resulting curve.
The count rises steeply for small n and then flattens out completely: any n <= 1 takes 0 iterations, and any n > 100 takes exactly 100 iterations, because x grows by n/100 per pass and therefore needs at most ceil(100 - 100/n) <= 100 passes to reach n. Since the iteration count is capped at 100 for every n, it is bounded by a constant, and the time complexity is O(1); your intuition was right, just with a bound of 100 rather than 10.
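As an extra check (my own sketch, not part of the original answer; the helper name predicted_count is made up for illustration), you can compare the simulated count with the closed-form value ceil(100 - 100/n), which is what the while loop works out to in exact arithmetic:

import math

def t(n):
    # same loop as above
    x = 1
    c = 0
    while x < n:
        c += 1
        x += n / 100
    return c

def predicted_count(n):
    # smallest k with 1 + k*(n/100) >= n, i.e. ceil(100 - 100/n), never more than 100
    return 0 if n <= 1 else math.ceil(100 - 100 / n)

for n in [1.1, 1.2, 2, 50, 100, 1000, 10**6]:
    # the two counts may differ by one for some n because of floating-point rounding
    print(n, t(n), predicted_count(n))

Every value printed stays at or below 100, which is exactly why the loop is O(1).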

How to effectively calculate an algorithm's time complexity? [duplicate]

I'm studying algorithmic complexity and I'm still not able to determine the complexity of some algorithms. I can figure out basic O(N) and O(N^2) loops, but I'm having some difficulty with routines like this one:
// What is time complexity of fun()?
int fun(int n)
{
int count = 0;
for (int i = n; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count += 1;
return count;
}
OK, I know that some people can calculate this with their eyes closed, but I would love to see a step-by-step solution if possible.
My first attempt to solve this would be to "simulate" an input and put the values in some sort of table, like below:
for n = 100
Step i
1 100
2 50
3 25
4 12
5 6
6 3
7 1
OK, at this point I'm assuming that this loop is O(log n), but unfortunately, as I said, nobody solves this problem step by step, so in the end I have no clue at all what was done.
In case of the inner loop I can build some sort of table like below:
for n = 100
Step i j
1 100 0..99
2 50 0..49
3 25 0..24
4 12 0..11
5 6 0..5
6 3 0..2
7 1 0..0
I can see that both loops are decreasing, and I suppose a formula can be derived based on the data above ...
Could someone clarify this problem? (The Answer is O(n))
Another simple way to look at it:
Your outer loop initializes i (which you can think of as the step/iterator) at n and divides i by 2 after every iteration. Hence, it executes the i /= 2 statement log2(n) times; in general, whenever you repeatedly divide a number by a base until it reaches 0, you perform that division a logarithmic number of times. So the outer loop is O(log2 n).
Your inner loop iterates j (now the iterator, or the step) from 0 to i on each iteration of the outer loop. i takes a maximum value of n, so the longest run of the inner loop goes from 0 to n. Thus, it is O(n).
Now, your program runs like this:
Run 1: i = n, j = 0->n
Run 2: i = n/2, j = 0->n/2
Run 3: i = n/4, j = 0->n/4
.
.
.
Run x: i = n/(2^(x-1)), j = 0->[n/(2^(x-1))]
Now, multiplying the bounds of nested loops gives an upper bound on the running time:
O(log2 n) * O(n) = O(n log n)
That bound is not tight here, though, because the inner loop does not run n times on every outer iteration. Summing the work it actually does, n + n/2 + n/4 + ... <= 2n, gives O(n) for your entire code.
Let's break this analysis up into a few steps.
First, start with the inner for loop. It is straightforward to see that this takes exactly i steps.
Next, think about which different values i will assume over the course of the algorithm. To start, consider the case where n is some power of 2. In this case, i starts at n, then n/2, then n/4, etc., until it reaches 1, and finally 0 and terminates. Because the inner loop takes i steps each time, then the total number of steps of fun(n) in this case is exactly n + n/2 + n/4 + ... + 1 = 2n - 1.
Lastly, convince yourself this generalizes to non-powers of 2. Given an input n, find the smallest power of 2 greater than n and call it m. Clearly, n < m < 2n, so fun(n) takes fewer than 2m - 1 steps, which is fewer than 4n - 1. Thus fun(n) is O(n).
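As a quick sanity check (my own sketch, not from the answers above), you can count the exact number of times count += 1 runs and compare it with 2n; the ratio stays below 2 and approaches it as n grows:

def fun(n):
    # direct translation of the C function: count total inner-loop iterations
    count = 0
    i = n
    while i > 0:
        for j in range(i):
            count += 1
        i //= 2
    return count

for n in [10, 100, 1000, 10**5]:
    c = fun(n)
    print(n, c, round(c / n, 3))   # the ratio stays bounded, so fun(n) is O(n)

For n = 100 this prints 197, matching the table above (100 + 50 + 25 + 12 + 6 + 3 + 1).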

Complexity of Algorithm, when while loop changes

This algorithm gets an array as input.
i=1
j=1
m=0
c=0
while i<=|A|
    if A[i] == A[j]
        c=c+1
    j=j+1
    if j>|A|
        if c>m
            m=c
        c=0
        i=i+1
        j=i
return m
As far as I know, a while loop's complexity is O(n). But I can't understand this algorithm and its while loop. How is this algorithm's complexity calculated?
The while loop iterates on the i value, but it can perform several iterations with the same value. A secondary variable j is then incremented instead, and it runs up to the same maximum value.
This means that in fact this algorithm loops for every (unordered) combination of 2 values (i and j) from the given array A (including twice the same value). For example, if A is [1, 2, 3, 4], then i and j take these values per iteration of the while loop:
i | j
-----+-----
1 | 1
1 | 2
1 | 3
1 | 4
2 | 2
2 | 3
2 | 4
3 | 3
3 | 4
4 | 4
If we define n as the number of values in A, then the while loop iterates n(n+1)/2 times. In the example above: 4*5/2 = 10 times.
This is ½n²+½n = O(n²).
Note that the manipulation of the variables c and m in the code does not influence the time complexity, only the outcome of the function.
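To confirm the count empirically, here is a short sketch of mine (not part of the original answer) that runs the pseudocode on arrays of different sizes and compares the number of while-loop iterations with n(n+1)/2:

def while_loop_iterations(A):
    # counts how many times the body of the while loop executes
    n = len(A)
    i = j = 1                # 1-based indices, as in the pseudocode
    m = c = 0
    iterations = 0
    while i <= n:
        iterations += 1
        if A[i - 1] == A[j - 1]:
            c += 1
        j += 1
        if j > n:
            if c > m:
                m = c
            c = 0
            i += 1
            j = i
    return iterations

for n in [1, 4, 10, 100]:
    A = list(range(n))
    print(n, while_loop_iterations(A), n * (n + 1) // 2)

Both columns agree: 1, 10, 55 and 5050 respectively.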
The slowest part of a program decides the overall time taken.
Understand this with an example:
I want to buy a car and a bicycle.
The price of the car is around $100,000 and that of the bicycle is $1,000.
When I add them I get $101,000.
A question arises here: what decides the cost of both?
Of course the car is the deciding factor; we would usually ignore the price of the bicycle in this case.
Similarly, the time taken by a loop dominates the time taken to declare a variable or do a single arithmetic operation, because the loop might run billions of times.
i=1
j=1
m=0
c=0
This part takes a constant amount of time, O(1).
This entire block, on the other hand -->
while i<=|A|
    if A[i] == A[j]
        c=c+1
    j=j+1
    if j>|A|
        if c>m
            m=c
        c=0
        i=i+1
        j=i
executes its body n(n+1)/2 times, where n is the length of your array, because i only advances after j has swept from i to the end.
Each of those iterations performs only O(1) work (a few assignments and conditional checks).
Adding the two parts gives you
O(1) + O(n^2) = O(n^2) // remember the addition of the car and the bicycle: the loop dominates

What is the efficiency of this algorithm?

What is the big O value for the following algorithm? Why is it that value?
algorithm A (val array <ptr to int>)
1  n = 0
2  loop (n < array size)
   1  min = n
   2  m = n
   3  loop (m < array size)
      1  if (array[m] < array[min])
         1  min = m
      2  m = m + 1
   4  swap(array[min], array[n])
   5  n = n + 1
I answered O(n^2); am I correct? As to how I arrived at this conclusion: the inner loop executes n times, where n is the array size, and the outer loop also executes n times, so n*n = n^2.
That is the so-called selection sort, and indeed it has O(n^2) complexity.
Yes! You are correct!
This is the selection sort algorithm.
It's Θ(n^2), to be more precise.
Edit: Why is it that value?
You take the first element and compare it with all the other elements to find the minimum of the array, then place it in the first position. Iterations: n.
You take the second element and compare it with the rest of the array to find the minimum of that part (the second smallest of the whole array), then place it in the second position. Iterations: n-1.
Continuing this way down to the last element, Iterations: 1.
Total = n + (n-1) + ... + 1 = n(n+1)/2. That is O(n^2).
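To see those numbers come out of actual code, here is a small sketch of mine (not from the answers) that implements the same selection-sort pattern and counts the array[m] < array[min] comparisons; the total matches n(n+1)/2 because the inner loop starts at m = n and so compares the element with itself as well:

def selection_sort_comparisons(a):
    # selection sort following the pseudocode above; returns the comparison count
    comparisons = 0
    size = len(a)
    n = 0
    while n < size:
        smallest = n
        m = n
        while m < size:
            comparisons += 1
            if a[m] < a[smallest]:
                smallest = m
            m += 1
        a[smallest], a[n] = a[n], a[smallest]
        n += 1
    return comparisons

for size in [1, 10, 100]:
    data = list(range(size, 0, -1))   # reverse-sorted input (any input gives the same count)
    print(size, selection_sort_comparisons(data), size * (size + 1) // 2)

The comparison count is the same for every input of a given size, which is why the answer above says Θ(n^2) rather than just O(n^2).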

Number of iterations in nested for-loops?

So I was looking at this code from a textbook:
for (int i = 0; i < N; i++)
    for (int j = i+1; j < N; j++)
The author stated that the inner for-loop iterates exactly N*(N-1)/2 times, but gave no basis for how he arrived at that equation. I understand N*(N-1), but why divide by 2? I ran the code myself, and sure enough, when N is 10 the inner loop iterates 45 times (10*9/2).
I messed around with the code myself and tried the following (starting j at i instead of i+1):
for (int i = 0; i < N; i++)
    for (int j = i; j < N; j++)
With N = 10, this results in 55. So I'm having trouble understanding the underlying math here. Sure I could just plug in all the values and bruteforce my way through the problem, but I feel there is something essential and very simple I'm missing. How would you come up with an equation for describing the for loop I just constructed? Is there a way to do it without relying on the outputs? Would really appreciate any help thanks!
Think about what happens each time the outer loop iterates. The first time, i == 0, so the inner loop starts at 1 and runs to N-1, which is N-1 iterations in total. The next time through the outer loop, i has incremented to 1, so the inner loop starts at 2 and runs up to N-1, for a total of N-2 iterations. And that pattern continues: the third time through the outer loop, you get N-3 iterations, the fourth time through, N-4, etc. When you get to the last iteration of the outer loop, i == N-1, so the inner loop starts with j = N and stops immediately. So that's zero iterations.
The total number of iterations is the sum of all these numbers:
(N-1) + (N-2) + (N-3) + ... + 1 + 0
To look at it another way, this is just the sum of the positive integers from 1 to N-1. The result of this sum is called the (N-1)th triangular number, and Wikipedia explains how you can find that the formula for the n'th triangular number is n(n+1)/2. But here you have the (N-1)th triangular number, so if you set n=N-1, you get
(N-1)(N-1+1)/2 = N(N-1)/2
You're looking at nested loops where the outer one runs N times and the inner one at most N-1 times. You're in effect adding up the sum 0 + 1 + 2 + 3 + ....
N * (N+1) / 2 is a "classic" formula in mathematics. Young Carl Gauss, later a famous mathematician, was given in-class busywork: adding up the numbers from 1 to 100. The teacher expected to keep the kids busy for an hour, but Carl came up with the answer almost immediately: 5050. He explained: 1 + 100; 2 + 99; 3 + 98; 4 + 97; and so on up to 50 + 51. That's 50 sums of 101 each. You could also see that as (100 / 2) * (100 + 1); that's where the /2 comes from.
As for why it's (N-1) here instead of the (N+1) I mentioned: because the inner loop starts at j = i+1, the per-i iteration counts run from 0 up to N-1 rather than from 1 up to N, so the sum is (N-1)*N/2 instead of N*(N+1)/2.
Look at how many times the inner (j) loop runs for each value of i. When N = 10, the outer (i) loop runs 10 times, and the j loop should run 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 times. Now you just add up those numbers to see how many times the inner loop runs. You can sum the numbers from 0 to N-1 with the formula N(N-1)/2. This is a very slight modification of a well-known formula for adding the numbers from 1 to N.
For a visual aid, you can see why 1 + 2 + 3 + ... + n = n * (n+1) / 2
If you count the iterations of the inner loop for each value of i (with N = 10), you get:
0 1 2 3 4 5 6 7 8 9
To get the total for an arbitrary number of iterations, you can "wrap" the numbers around like this:
0 1 2 3 4
9 8 7 6 5
Now, if we add each of those columns, they all add up to 9 (N-1), and there are 5 (N/2) columns. It's pretty obvious that for any even N, we'd still get N/2 columns that each add up to (N-1). As such, when N is even, the total number of iterations is always (N/2)(N-1), which (thanks to the commutative property) we can rewrite as N(N-1)/2.
If we did the same for an odd N, we'd have one "odd" column that couldn't be paired. In this case, we can ignore the '0' since we know it won't affect the overall sum in any case. For example, let's consider N=9 instead of N=10. For that, we get:
1 2 3 4
8 7 6 5
This gives us (N-1)/2 columns (9-1=8, 8/2=4) that each add up to N, so the sum will be N*(N-1)/2. Even though we've arrived at it slightly differently, this is an exact match for the formula above for when N is even. Again, it seems pretty obvious that this would remain true regardless of the number of columns we used (i.e., total number of iterations).
For any N (odd or even), the sum of the numbers from 0 through N-1 is N*(N-1)/2.
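To tie the two variants together, here is a brief sketch (mine, not from the answers) that counts the inner-loop iterations for both starting points, j = i+1 and j = i, and compares them with N(N-1)/2 and N(N+1)/2 respectively:

def count_inner(N, start_offset):
    # start_offset = 1 reproduces "j = i + 1"; start_offset = 0 reproduces "j = i"
    total = 0
    for i in range(N):
        for j in range(i + start_offset, N):
            total += 1
    return total

for N in [5, 10, 100]:
    print(N,
          count_inner(N, 1), N * (N - 1) // 2,   # j = i + 1 variant
          count_inner(N, 0), N * (N + 1) // 2)   # j = i variant

With N = 10 this reproduces the 45 and 55 from the question.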

Resources