How do I calculate the time complexity of this algorithm?

I have been trying to calculate the time complexity of this algorithm but I don't think that I am right.
Since it can't be n^2, I came up with a formula that goes O(n * (j * (1 + j)) * 50), but I am still not sure.
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        for (int k = 1; k <= 100; k++)
            cout << "Hello";
Any help would be appreciated.

This is O(n²) indeed. The inner loop runs in constant time. It is the same as
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++) {
        cout << "Hello";
        cout << "Hello";
        cout << "Hello";
        cout << "Hello";
        /* repeats 96 more times */
    }
More specifically, you can calculate the number of steps as
T(n) = 1 + 2 + 3 + ... + n
     = n(1 + n)/2
     = (n² + n)/2
Constant factors don't matter, so this function is in O(n² + n), which is simply O(n²).
Instead of unrolling the inner loop, you could multiply everything by 100, but this wouldn't change the complexity.
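If you want to check this empirically, here is a minimal C++ sketch (my own, not part of the answer) that counts the iterations and compares them with 100 * n * (n + 1) / 2:

#include <iostream>

int main() {
    for (int n = 1; n <= 5; n++) {
        long long count = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= i; j++)
                for (int k = 1; k <= 100; k++)
                    count++;                            // stands in for cout << "Hello";
        long long expected = 100LL * n * (n + 1) / 2;   // 100 * (1 + 2 + ... + n)
        std::cout << "n = " << n << ": count = " << count
                  << ", expected = " << expected << "\n";
    }
}

The two columns always agree, and the dominant term of 50 * (n² + n) is n², hence O(n²).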


How to find the time complexity of these two programs? [duplicate]

int sum = 0;
for (int i = 1; i < n; i++) {
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
I don't understand how when j = i, 2i, 3i... the last for loop runs n times. I guess I just don't understand how we came to that conclusion based on the if statement.
Edit: I know how to compute the complexity for all the loops except for why the last loop executes i times based on the mod operator... I just don't see how it's i. Basically, why can't j % i go up to i * i rather than i?
Let's label the loops A, B and C:
int sum = 0;
// loop A
for (int i = 1; i < n; i++) {
    // loop B
    for (int j = 1; j < i * i; j++) {
        if (j % i == 0) {
            // loop C
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
Loop A iterates O(n) times.
Loop B iterates O(i²) times per iteration of A. For each of these iterations:
j % i == 0 is evaluated, which takes O(1) time.
On 1/i of these iterations, loop C iterates j times, doing O(1) work per iteration. Since j is O(i²) on average, and this is only done for 1/i iterations of loop B, the average cost is O(i² / i) = O(i).
Multiplying all of this together, we get O(n × i² × (1 + i)) = O(n × i³). Since i is on average O(n), this is O(n⁴).
The tricky part of this is saying that the if condition is only true 1/i of the time:
Basically, why can't j % i go up to i * i rather than i?
In fact, j does go up to j < i * i, not just up to j < i. But the condition j % i == 0 is true if and only if j is a multiple of i.
The multiples of i within the range are i, 2*i, 3*i, ..., (i-1) * i. There are i - 1 of these, so loop C is reached i - 1 times despite loop B iterating i * i - 1 times.
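To make the averaging argument concrete, here is a quick derivation of my own using the triangular-number formula: on iteration i of loop A, loop C runs m * i steps for each m = 1, ..., i - 1, so the total number of sum++ operations is

T(n) = sum_{i=1}^{n-1} sum_{m=1}^{i-1} (m * i)
     = sum_{i=1}^{n-1} i * ((i - 1) * i / 2)
     = sum_{i=1}^{n-1} (i³ - i²) / 2
     ≈ n⁴ / 8

which is O(n⁴), and which also explains the empirical factor of about 0.125 measured in a later answer.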
The first loop consumes n iterations.
The second loop consumes n*n iterations. Imagine the case when i=n, then j=n*n.
The third loop contributes another factor of n on average: it is entered only once every i iterations of the second loop (whenever j is a multiple of i), and when entered it runs j times, which averages out to roughly i steps per iteration of the second loop, with i bounded by n in the worst case.
Thus, the code complexity is O(n × n × n × n) = O(n⁴).
I hope this helps you understand.
All the other answers are correct; I just want to add the following. I wanted to see whether the reduced number of executions of the inner k-loop was enough to bring the actual complexity below O(n⁴), so I wrote the following:
for (int n = 1; n < 363; ++n) {
    int sum = 0;
    for (int i = 1; i < n; ++i) {
        for (int j = 1; j < i * i; ++j) {
            if (j % i == 0) {
                for (int k = 0; k < j; ++k) {
                    sum++;
                }
            }
        }
    }
    long cubic = (long) Math.pow(n, 3);
    long hypCubic = (long) Math.pow(n, 4);
    double relative = sum / (double) hypCubic;
    System.out.println("n = " + n + ": iterations = " + sum +
            ", n³ = " + cubic + ", n⁴ = " + hypCubic + ", rel = " + relative);
}
After executing this, it becomes obvious that the complexity is in fact n⁴. The last lines of output look like this:
n = 356: iterations = 1989000035, n³ = 45118016, n⁴ = 16062013696, rel = 0.12383254507467704
n = 357: iterations = 2011495675, n³ = 45499293, n⁴ = 16243247601, rel = 0.12383580700180696
n = 358: iterations = 2034181597, n³ = 45882712, n⁴ = 16426010896, rel = 0.12383905075183874
n = 359: iterations = 2057058871, n³ = 46268279, n⁴ = 16610312161, rel = 0.12384227647628734
n = 360: iterations = 2080128570, n³ = 46656000, n⁴ = 16796160000, rel = 0.12384548432498857
n = 361: iterations = 2103391770, n³ = 47045881, n⁴ = 16983563041, rel = 0.12384867444612208
n = 362: iterations = 2126849550, n³ = 47437928, n⁴ = 17172529936, rel = 0.1238518469862343
What this shows is that the ratio between the actual iteration count and n⁴ converges towards a value around 0.124... (actually 0.125). While it does not give us the exact value, we can deduce the following:
Time complexity is n⁴/8 ~ f(n), where f is your function/method.
The Wikipedia page on Big O notation states, in the 'Family of Bachmann–Landau notations' table, that ~ means the two sides have the same limit. Or:
f is equal to g asymptotically
(I chose 363 as the excluded upper bound because n = 362 is the last value for which we get a sensible result. After that, sum overflows the int range and the relative value becomes negative.)
User kaya3 figured out the following:
The asymptotic constant is exactly 1/8 = 0.125, by the way; here's the exact formula via Wolfram Alpha.
Remove if and modulo without changing the complexity
Here's the original method:
public static long f(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < i * i; j++) {
            if (j % i == 0) {
                for (int k = 0; k < j; k++) {
                    sum++;
                }
            }
        }
    }
    return sum;
}
If you're confused by the if and modulo, you can just refactor them away, with j jumping directly from i to 2*i to 3*i ... :
public static long f2(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j = i; j < i * i; j = j + i) {
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
    return sum;
}
To make it even easier to calculate the complexity, you can introduce an intermediary j2 variable, so that every loop variable is incremented by 1 at each iteration:
public static long f3(int n) {
    int sum = 0;
    for (int i = 1; i < n; i++) {
        for (int j2 = 1; j2 < i; j2++) {
            int j = j2 * i;
            for (int k = 0; k < j; k++) {
                sum++;
            }
        }
    }
    return sum;
}
You can use a debugger or old-school System.out.println calls to check that the (i, j, k) triplets are the same in each method.
Closed form expression
As mentioned by others, you can use the fact that the sum of the first n integers is equal to n * (n+1) / 2 (see triangular numbers). If you use this simplification for every loop, you get:
public static long f4(int n) {
    // cast to long first so the product does not overflow int for larger n
    return (long) (n - 1) * n * (n - 2) * (3 * n - 1) / 24;
}
It is obviously not the same complexity as the original code but it does return the same values.
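As a quick sanity check (my own arithmetic, not part of the answer): for n = 4, the loop versions count 2 steps for i = 2 (j = 2) and 3 + 6 = 9 steps for i = 3 (j = 3 and j = 6), 11 in total, and f4(4) = 3 * 4 * 2 * 11 / 24 = 11 as well.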
If you google the first terms, you'll notice that 0 0 0 2 11 35 85 175 322 546 870 1320 1925 2717 3731 appears in "Stirling numbers of the first kind: s(n+2, n)", with two 0s added at the beginning. It means that sum is the Stirling number of the first kind s(n, n-2).
Let's have a look at the first two loops.
The first one is simple, it's looping from 1 to n. The second one is more interesting. It goes from 1 to i squared. Let's see some examples:
e.g. n = 4
i = 1: j loops from 1 to 1^2
i = 2: j loops from 1 to 2^2
i = 3: j loops from 1 to 3^2
In total, the i and j loops combined perform 1^2 + 2^2 + 3^2 iterations.
There is a formula for the sum of the first n squares, n * (n+1) * (2n + 1) / 6, which is O(n^3).
You have one last k loop which loops from 0 to j, but only when j % i == 0. Since j goes from 1 to i^2, j % i == 0 holds for about i values of j (j = i, 2i, ..., (i-1)*i); when it holds, the k loop runs j (up to i^2) steps, which averages out to an extra factor of i, and i is bounded by n.
So you have O(n^3) from the i and j loops and another O(n) from the k loop, for a grand total of O(n^4).

Big-O notation for an algorithm's efficiency

Can anyone help me identify the steps for the following example, and explain in more detail why its Big-O notation is said to be O(2^n)?
int i, j = 1;
for (i = 1; i <= n; i++)
{
    j = j * 2;
}
for (i = 1; i <= j; i++)
{
    cout << j << "\n";
}
Thank you in advance.
The first loop has n iterations and assigns 2^n to j.
The second loop has j = 2^n iterations.
The cout has time complexity O(log j) = O(n).
Hence the overall complexity is O(n * 2^n), which is strictly larger than O(2^n).
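Here is a small C++ sketch (mine, not from the answer) that confirms the first two points for small n:

#include <iostream>

int main() {
    for (int n = 1; n <= 10; n++) {
        long long j = 1;
        for (int i = 1; i <= n; i++)
            j = j * 2;                        // after this loop, j == 2^n
        long long iterations = 0;
        for (long long i = 1; i <= j; i++)
            iterations++;                     // the second loop runs j == 2^n times
        std::cout << "n = " << n << ": j = " << j
                  << ", second-loop iterations = " << iterations << "\n";
    }
}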

What would be the time complexity of this code?

i = n;
while (i > 1) {
    j = i;
    while (j < n) {
        k = 0;
        while (k < n) {
            k = k + 2;
        }
        j = j * 2;
    }
    i = i / 2;
}
I tried analyzing this code and got a complexity of n * log²(n). I rewrote the code in for-loop format to make it easier to see, and it came out like this:
for (i = n; i > 1; i = i / 2)            // log2(n) + 1 times
{
    for (j = i; j < n; j = j * 2)        // log2(n) + 1 times
    {
        for (k = 0; k < n; k = k + 2)    // n + 1 times
        {
            cout << "I = " << i << " J = " << j << " and K = " << k << endl;
        }
        cout << endl;
    }
}
Is that correct? If not, why? I am new to algorithms and trying to understand but don't know where else to ask, sorry.
Yes, your answer is correct. The variable i is halved at every step, making the outer loop O(log n). j doubles at every step, making that loop O(log n), and the innermost k loop increases linearly, making that loop O(n). Multiplying together gives O(n log² n).
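If you want to double-check this numerically, here is a counting sketch of my own in C++; it replaces the innermost loop with its iteration count (about n/2) and prints the total next to n * log2(n)²:

#include <cmath>
#include <iostream>

int main() {
    for (long long n = 16; n <= (1 << 20); n *= 16) {
        long long count = 0;
        for (long long i = n; i > 1; i /= 2)
            for (long long j = i; j < n; j *= 2)
                count += (n + 1) / 2;         // the k loop takes about n/2 steps
        double lg = std::log2((double) n);
        std::cout << "n = " << n << ": count = " << count
                  << ", n * log2(n)^2 = " << (long long) (n * lg * lg) << "\n";
    }
}

The ratio between the two columns settles near a constant (about 1/4 by a quick sum), which is what Θ(n log² n) predicts.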

Algorithm_BIG_O

I need help understanding/doing Big-O notation. I understand the purpose of it, but I don't know how to determine the complexity. Below are a few examples I retrieved from past year papers for revision before my exam. I have provided my answers for some of the questions; please help me check whether they are correct. Thank you!
Example 1:
for (int i = 0; sqrt(i) < n; i++)
    cout << i << endl;
The code will run until sqrt(i) >= n, which means that i >= n^2, so it's O(n^2).
The outer loop runs n times and the inner loop runs log(n) times, so that's O(n*log(n)).
The first loop is O(n); after that the value of k is 2^n, so the second loop is O(2^n). In total it's O(n) + O(2^n) = O(2^n).
First part:
for (int i = 0; sqrt(i) < n; i++)
    cout << i << endl;
sqrt(i) < n => i < n^2, so this loop takes O(n^2).
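For instance (my own arithmetic): with n = 5 the body runs for i = 0, 1, ..., 24, since sqrt(24) ≈ 4.9 < 5 but sqrt(25) = 5, i.e. exactly 25 = 5² iterations.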
Second part:
for (int i = 0; i < n; i++) {    // outer loop runs in O(n) [increments by a constant]
    int k = n;
    while (k > 0) {              // k >= 1
        k /= 2;                  // k = n, n/2, n/4, ..., n/2^i
    }
}
The inner loop stops when k reaches 1: n/2^i = 1 => 2^i = n => i = log(n),
so the total is O(n * log(n)).
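For example (my own trace): with n = 16 the inner loop produces k = 8, 4, 2, 1, 0, i.e. log2(16) + 1 = 5 iterations.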
Last part:
for (int i = 0; i < n; i++)
    k = k * 2;
This takes O(n) because i increments by a constant.
Second loop:
What is the value of k?
k = 1, i = 0
k = 2, i = 1
k = 4, i = 2
k = 8, i = 3
=> k = 2^i
When i == n the loop stops, so the value of k is 2^n.
So the total order is O(n + 2^n) = O(2^n).
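To see why the n term drops (a one-line justification of my own): for n >= 1 we have n <= 2^n, so n + 2^n <= 2 * 2^n, and constant factors are ignored in Big-O.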
O(1) = sequences of statements and conditions
O(n) = loops
O(n*m) = loops inside loops
Hope that helps.

Identify and state the running time using Big-O

For each of the following algorithms, identify and state the running time using Big-O.
// i
for (int i = 0; Math.sqrt(i) < n; i++)
    cout << i << endl;

// ii
for (int i = 0; i < n; i++) {
    cout << i << endl;
    int k = n;
    while (k > 0)
    {
        k /= 2;
        cout << k << endl;
    } // while
}

// iii
int k = 1;
for (int i = 0; i < n; i++)
    k = k * 2;
for (int j = 0; j < k; j++)
    cout << j << endl;
I've calculated the loop counts for the first snippet using n = 1 and n = 2. The loop in (i) will run n^2 - 1 times. Please help and guide me to identify the Big-O notation.
(i)
for (int i = 0; Math.sqrt(i) < n; i++)
    cout << i << endl;
The loop will run while squareRoot(i) < N, i.e. while i < N^2. Thus the running time will be O(N^2), i.e. quadratic.
(ii)
for (int i = 0; i < n; i++) {
    cout << i << endl;
    int k = n;
    while (k > 0)
    {
        k /= 2;
        cout << k << endl;
    } // while
}
The outer loop will run for N iterations. The inner loop will run for logN iterations, because k takes the values N, N/2, N/(2^2), N/(2^3), ..., which is logN values. Thus the running time will be O(N logN), i.e. linearithmic.
(iii)
int k = 1;
for (int i = 0; i < n; i++)
    k = k * 2;
for (int j = 0; j < k; j++)
    cout << j << endl;
The value of k after the first loop will be 2^n, as k is multiplied by 2 n times. The second loop runs k times, so it will run for 2^n iterations. The running time is O(2^N), i.e. exponential.
For the first question, you will have to loop until Math.sqrt(i) >= n, that means that you will stop when i >= n*n, thus the first program runs in O(n^2).
For the second question, the outer loop will execute n times, and the inner loop keeps repeatedly halving k (which is initially equal to n). So the inner loop executes log n times, thus the total time complexity is O(n log n).
For the third question, the first loop executes n times, and on each iteration you double the value of k, which is initially 1. After the loop terminates, you will have k = 2^n, and the second loop executes k times, so the total complexity will be O(2^n).
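If you want to verify these counts empirically, here is a minimal C++ sketch (my own, not part of the answers) that counts the iterations of each snippet and prints them next to n^2, n*log2(n) and 2^n:

#include <cmath>
#include <iostream>

int main() {
    for (int n = 2; n <= 16; n *= 2) {
        long long c1 = 0;                                 // (i): runs while sqrt(i) < n
        for (long long i = 0; std::sqrt((double) i) < n; i++)
            c1++;

        long long c2 = 0;                                 // (ii): outer loop n times, inner halves k
        for (int i = 0; i < n; i++) {
            int k = n;
            while (k > 0) { k /= 2; c2++; }
        }

        long long k = 1, c3 = 0;                          // (iii): double k n times, then loop k times
        for (int i = 0; i < n; i++)
            k *= 2;
        for (long long j = 0; j < k; j++)
            c3++;

        std::cout << "n = " << n << ": (i) " << c1 << " vs n^2 = " << n * n
                  << ", (ii) " << c2 << " vs n*log2(n) = " << n * std::log2((double) n)
                  << ", (iii) " << c3 << " vs 2^n = " << (1LL << n) << "\n";
    }
}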
A couple of hints will let you solve most running-time complexity problems in CS tests/homework.
If something decreases by a factor of 2 on each iteration, that's a log(N). In your second case the inner loop index is halved each time.
Geometric series:
a*r^0 + a*r^1 + ... + a*r^(n-1) = a * (r^n - 1) / (r - 1).
Write out third problem:
2 + 4 + 8 + 16 ... = 2^1 + 2^2 + 2^3 + 2^4 + ...
and use the closed form formula.
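Applied here (my own arithmetic): with a = 2 and r = 2, the sum of the first n terms is 2 * (2^n - 1) / (2 - 1) = 2^(n+1) - 2, which is O(2^n).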
Generally it helps to look for log2 factors and to write out a few terms to see if there is a repeating pattern.
Other common questions require you to know factorials and their approximation (Stirling's approximation).
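For reference, Stirling's approximation (standard formula, not spelled out in the answer): n! ≈ sqrt(2πn) * (n/e)^n, which gives log(n!) = Θ(n log n).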
Using sigma notation, you can formally obtain the following results:
(i) sum_{i=0}^{n^2 - 1} 1 = n^2, which is O(n^2)
(ii) sum_{i=0}^{n-1} (log2(n) + 1) = n * (log2(n) + 1), which is O(n log n)
(iii) n + sum_{j=0}^{2^n - 1} 1 = n + 2^n, which is O(2^n)
