for (i = 0; i < n; i++) {
    if (i == x) {
        for (j = 0; j < x; j++) {
            x++;
        }
        x *= 2;
    }
}
What's the runtime analysis for this loop?
In the loop

for (j = 0; j < x; j++) {
    x++;
}

j will always be less than x if x starts out greater than zero, because x is incremented every time j is. In a language with arbitrary-precision integer arithmetic, this runs until you exhaust resources. In a language like C, it results in undefined behavior: the compiler could optimize j < x to always be true, or it could let x overflow to an unspecified (but in practice two's-complement) result.
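To make this concrete, here is a minimal sketch of the inner loop in isolation (the starting value x = 1 and the j < 10 cap are mine, added only so the demo terminates):

#include <stdio.h>

int main(void) {
    int x = 1;
    /* j stays exactly one behind x at every loop test, so j < x never
       becomes false on its own; the j < 10 cap stops the demo before
       x overflows (signed overflow is undefined behavior in C). */
    for (int j = 0; j < x && j < 10; j++) {
        x++;
        printf("j = %d, x = %d\n", j, x);
    }
    return 0;
}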
If we assume the statements inside a for loop are O(1):

for (i = 0; i < N; i++) {
    sequence of statements
}

then the time complexity above should be O(N).

Another example:

for (i = 0; i < N; i++) {
    for (j = i+1; j < N; j++) {
        sequence of statements
    }
}

The time complexity above should be O(N^2).

It seems like each for loop stands for a factor of N. However, I've seen discussions saying it's not that simple: sometimes there are two for loops, but that doesn't mean the time complexity must be O(N^2). Can somebody give me an example of two or more for loops with only O(N) time complexity (an explanation would be even better)?
This happens when either the outer or the inner loop is bounded by a fixed (constant) number of iterations. It comes up in many problems, such as bit manipulation. An example:
for (int i = 0; i < n; i++) {
    int x = ...; // some constant time operations to determine int x
    for (int j = 0; x != 0; j++) {
        x >>>= 1;
    }
}
The inner loop is bounded by the number of bits in an int. In Java that is 32, so the inner loop runs at most 32 times and is constant time. The overall complexity is therefore linear, O(n).
Loops can take constant time if their bounds are fixed. So for these nested loops, where k is a constant:

// k is a constant
for (int i = 0; i < N; i++) {
    for (int j = 0; j < k; j++) {
    }
}

// or this
for (int i = 0; i < k; i++) {
    for (int j = 0; j < N; j++) {
    }
}

both are O(kN) = O(N), since k is a constant factor.
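As a quick sanity check, here is a small C program (the values k = 8 and N = 1000 are arbitrary) counting the body executions of both nestings; each comes out to exactly k * N:

#include <stdio.h>

int main(void) {
    int k = 8, N = 1000;
    long count1 = 0, count2 = 0;

    for (int i = 0; i < N; i++)   /* constant inner bound */
        for (int j = 0; j < k; j++)
            count1++;

    for (int i = 0; i < k; i++)   /* constant outer bound */
        for (int j = 0; j < N; j++)
            count2++;

    printf("%ld %ld\n", count1, count2); /* both print 8000 = k * N */
    return 0;
}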
The code in the post has a time complexity of

(N-1) + (N-2) + ... + 1 + 0 = (N^2 - N)/2

inner-loop iterations, since for each i the inner loop runs N-1-i times. Given this is a second-order polynomial, (N^2 - N)/2, you correctly assessed the asymptotic time complexity of your example as O(N^2).
I have a function test(arr, x), where arr is an array of finite size and x is a finite positive number. I have two nested for loops, as below:

function test(arr, x) {
    for (let i = 0; i < arr.length; i++) { // executes arr.length times
        for (let j = 1; j <= x; j++) {     // executes x times
            // code goes here
        }
    }
}

Question:
What is the time complexity in this case? I'm confused between O(n) and O(n^2).
Any help will be appreciated. Thanks in advance!
I am studying Data Structures, but I am a bit confused about my homework assignment. I am trying to figure out how to count the precise Big-Oh values of the following:
for (int i = 0; i < n; i++)
{
    for (int j = i; j < n; j++)
    {
        for (int k = j; k < n; k++)
        {
            sum += j;
        }
    }
}
I know how to find the Big-Oh in terms of order of magnitude, but I don't know how to count it precisely. An example answer would be of the form 10N + 3N^2 + 2, i.e., an exact operation count.
In my code, assuming C is the capacity, N is the number of items, w[j] is the weight of item j, and v[j] is the value of item j, does it do the same thing as the 0-1 knapsack algorithm? I've been trying my code on some data sets, and it seems to be the case. The reason I'm wondering is that the 0-1 knapsack algorithm we've been taught is 2-dimensional, whereas this is 1-dimensional:
for (int j = 0; j < N; j++) {
    if (C - w[j] < 0) continue;
    for (int i = C - w[j]; i >= 0; --i) { // loop backwards to prevent double counting
        dp[i + w[j]] = max(dp[i + w[j]], dp[i] + v[j]); // looping forward instead would solve the unbounded problem
    }
}
printf("max value without double counting (loop backwards) %d\n", dp[C]);
Here is my implementation of the 0-1 knapsack algorithm (with the same variables):
for (int i = 0; i < N; i++) {
    for (int j = 0; j <= C; j++) {
        if (j - w[i] < 0) dp2[i][j] = i == 0 ? 0 : dp2[i-1][j];
        else dp2[i][j] = max(i == 0 ? 0 : dp2[i-1][j],
                             (i == 0 ? 0 : dp2[i-1][j-w[i]]) + v[i]); // guard i == 0 so we never read dp2[-1][...]
    }
}
printf("0-1 knapsack: %d\n", dp2[N-1][C]);
Yes, your algorithm gets you the same result. This enhancement to the classic 0-1 Knapsack is reasonably popular: Wikipedia explains it as follows:
Additionally, if we use only a 1-dimensional array m[w] to store the current optimal values and pass over this array i + 1 times, rewriting from m[W] to m[1] every time, we get the same result for only O(W) space.
Note that they specifically mention your backward loop.
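As a quick equivalence check, here is a minimal, self-contained C program running both versions on a made-up data set (the weights, values, N = 4, and C = 10 below are arbitrary sample numbers, not from the question):

#include <stdio.h>

#define N 4
#define C 10

static int max(int a, int b) { return a > b ? a : b; }

int main(void) {
    int w[N] = {5, 4, 6, 3};     /* sample weights (assumed) */
    int v[N] = {10, 40, 30, 50}; /* sample values (assumed) */

    /* 1-D version: iterate capacities downward so each item is taken at most once */
    int dp[C + 1] = {0};
    for (int j = 0; j < N; j++) {
        if (C - w[j] < 0) continue;
        for (int i = C - w[j]; i >= 0; --i)
            dp[i + w[j]] = max(dp[i + w[j]], dp[i] + v[j]);
    }

    /* 2-D version: dp2[i][j] = best value using items 0..i with capacity j */
    int dp2[N][C + 1];
    for (int i = 0; i < N; i++) {
        for (int j = 0; j <= C; j++) {
            if (j - w[i] < 0) dp2[i][j] = i == 0 ? 0 : dp2[i-1][j];
            else dp2[i][j] = max(i == 0 ? 0 : dp2[i-1][j],
                                 (i == 0 ? 0 : dp2[i-1][j-w[i]]) + v[i]);
        }
    }

    printf("1-D: %d, 2-D: %d\n", dp[C], dp2[N-1][C]); /* both values should come out to 90 */
    return 0;
}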
I am trying to find the complexity of this algorithm:
m = 0;
i = 1;
while (i <= n)
{
    i = i * 2;
    for (j = 1; j <= (long int)(log10(i)/log10(2)); j++)
        for (k = 1; k <= j; k++)
            m++;
}
I think it is O(log(n)*log(log(n))*log(log(n))):
The 'i' loop runs until i=log(n)
the 'j' loop runs until log(i), which means log(log(n))
the 'k' loop runs until k=j --> k=log(i) --> k=log(log(n))
therefore O(log(n)*log(log(n))*log(log(n))).
The time complexity is Theta(log(n)^3).
Let T = floor(log_2(n)). Your code can be rewritten as:
int m = 0;
for (int i = 0; i <= T; i++)
    for (int j = 1; j <= i+1; j++)
        for (int k = 1; k <= j; k++)
            m++;
Which is obviously Theta(T^3).
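To spell out the count: m is incremented

sum_{i=0}^{T} sum_{j=1}^{i+1} j = sum_{i=0}^{T} (i+1)(i+2)/2 = (T+1)(T+2)(T+3)/6

times, which is Theta(T^3) = Theta(log(n)^3).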
Edit: Here's an intermediate step for rewriting your code. Let a = log_2(i). a is always an integer because i is a power of 2. Then your code is clearly equivalent to:
m = 0;
a = 0;
while (a <= log_2(n))
{
    a += 1;
    for (j = 1; j <= a; j++)
        for (k = 1; k <= j; k++)
            m++;
}
The other changes I made were naming floor(log_2(n)) as T, renaming a to i, and using a for loop instead of a while loop.
Hope it's clear now.
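If you want to convince yourself numerically, here is a small self-contained C check (n = 1000 is an arbitrary sample value; the bit count is computed by shifting so that floating-point truncation cannot skew the comparison):

#include <stdio.h>
#include <math.h>

int main(void) {
    long n = 1000; /* arbitrary sample size */

    /* the original loop, with floor(log2(i)) computed exactly by shifting */
    long m1 = 0;
    for (long i = 1; i <= n; ) {
        i *= 2;
        long bits = 0;
        for (long t = i; t > 1; t >>= 1) bits++; /* bits == log2(i), exact since i is a power of 2 */
        for (long j = 1; j <= bits; j++)
            for (long k = 1; k <= j; k++)
                m1++;
    }

    /* the rewritten triple loop with T = floor(log2(n)) */
    long T = (long)floor(log2((double)n));
    long m2 = 0;
    for (long i = 0; i <= T; i++)
        for (long j = 1; j <= i + 1; j++)
            for (long k = 1; k <= j; k++)
                m2++;

    printf("m1 = %ld, m2 = %ld\n", m1, m2); /* both counts should print 220 */
    return 0;
}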
Is this homework?
Some hints:
I'm not sure the code does what it is supposed to. log10 returns a floating-point value, and the cast to (long int) truncates, so a result like 2.9999999999 becomes 2. I don't think that is intended. The line should probably look like this:
for (j=1;j<=(long int)(log10(i)/log10(2)+0.5);j++)
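Here is a small demonstration of the hazard (the exact output is platform-dependent, because it depends on how the floating-point division rounds):

#include <stdio.h>
#include <math.h>

int main(void) {
    for (long i = 2; i <= 1024; i *= 2) {
        double raw = log10((double)i) / log10(2.0);
        /* the truncating cast can land one below the true value
           whenever raw comes out slightly under an integer */
        printf("i = %4ld  raw = %.17g  truncated = %ld  rounded = %ld\n",
               i, raw, (long)raw, (long)(raw + 0.5));
    }
    return 0;
}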
With that fix in place, you can rewrite the code as:
m = 0;
for (i = 1, a = 1; i <= n; i = i*2, a++)
    for (j = 1; j <= a; j++)
        for (k = 1; k <= j; k++)
            m++;
Therefore your complexity assumption for the 'j' and 'k' loops is wrong: the outer loop runs log(n) times, but i itself grows up to n, not log(n), so log(i) grows up to log(n) rather than log(log(n)).