Let's say we have a FOR loop
int n;
for (int i = 0; i < sqrt(n); i++)
{
statement;
}
Does calculating the sqrt of n add complexity to the loop's O(n) complexity? In my example the sqrt function in Java has a time complexity of O(log n); how does this affect the time complexity of the loop? Is the sqrt function applied on every iteration of the loop, or just once, with that value stored and used again?
I suppose this can depend on the language, but generally the i < sqrt(n) check will be run after each of the loop's iterations, so effectively you'll call it sqrt(n) times. A good idea is to store the result of sqrt(n) in a variable and compare i to it, so
int n;
double sn = sqrt(n);
for (int i = 0; i < sn; i++)
{
statement;
}
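A side note of mine, not from the answer above: you can avoid floating point entirely by comparing i*i against n, which is equivalent to i < sqrt(n) for non-negative i and sidesteps both the repeated sqrt calls and any rounding concerns. A minimal Java sketch, with n given an arbitrary example value (the original snippet leaves it uninitialized) and a cast to long so that i*i cannot overflow int:

int n = 1000000;                         // example value, chosen arbitrarily
for (int i = 0; (long) i * i < n; i++) {
    // statement
}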
The sqrt function is applied on every iteration of the loop. The use of n is different in the two bounds, though: in the O(log n) complexity of sqrt, n is the number of bits in the value, but in the O(n) of the loop, n is the actual value. With that definition of n, the sqrt is more like O(log log n).
For loops like these, the complexity of operations on numbers, like sqrt, can be treated as constant time. The number of bits is bounded, so the time is insignificant compared to the larger loop.
For the variable n given in the question:
O(log(d = bits of the variable) * sqrt(n = the number in the loop))
assuming that finding the sqrt of a d-bit number is O(log d), as 'fgb' said first.
Assuming n's number of bits is constant throughout the computation and across hardware, this becomes:
O(log(constant) * sqrt(n))
= O(constant * sqrt(n))
= O(sqrt(n))
But if the language is not strongly typed and n's bit width increases gradually (such as going from 64 bits to 128 to 256 to 1024 while keeping the same value), then it would be
O(log(d) * sqrt(n))
Answering your question about time complexity:
sqrt(n) is called once each iteration, and this will take some additional time.
But since sqrt(n) is independent of i and of any other value in the statement, the calculation always takes the same amount of time. So, from my understanding, it does not increase the complexity of the loop; it is still O(n).
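To see the per-iteration evaluation concretely, here is a small sketch of mine (not from the thread): it wraps Math.sqrt in a counting helper, showing that the condition, and hence sqrt, is evaluated once per iteration plus once for the final failing check.

public class SqrtCallCount {
    static int calls = 0;

    static double countedSqrt(double x) {
        calls++;                        // counts every evaluation of the loop condition
        return Math.sqrt(x);
    }

    public static void main(String[] args) {
        int n = 100;
        for (int i = 0; i < countedSqrt(n); i++) {
            // statement
        }
        System.out.println("sqrt evaluated " + calls + " times");  // prints 11 for n = 100
    }
}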
Related
int cnt = 0;
for (int i = 1; i < n; i++) {
    for (int j = i; j < n; j++) {
        for (int k = j * j; k < n; k++) {
            ++cnt;
        }
    }
}
I have no idea how to approach this. How do I analyze its time complexity?
It's easy to see that the code is Ω(n²) (that is, at least quadratic): the two outer loops execute around n²/2 times.
The inner k loop executes zero times unless j is less than sqrt(n). Even though it executes zero times, it takes some computation to compute the conditions for the loop, so it's O(1) work in these cases.
When j is less than sqrt(n), i must also be less than sqrt(n), since by the construction of the loops, j is always greater than or equal to i. In these cases, the k loop does n-j² iterations. We can construct a bound for the total amount of work in this inner loop in these cases: both i and j are less than sqrt(n), and there's at worst O(n) work done in the k loop, so there's at most O(n²) (ie: sqrt(n) * sqrt(n) * n) total work done in the inner loop.
There's also at most O(n²) total work done for the cases where the inner loop is trivial (ie: when j>sqrt(n)).
This gives a proof that the runtime complexity of the code is Θ(n²).
Methods involving looking at nested loops individually and constructing big-O bounds for them do not in general give tight bounds, and this is an example question where such a method fails.
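As an empirical sanity check (my addition, not part of the proof), one can count the innermost-loop iterations directly; if the Θ(n²) analysis is right, cnt/n² should settle near a constant (about 1/4, which falls out of summing j·(n − j²) over j up to √n):

public class TripleLoopCount {
    public static void main(String[] args) {
        for (int n = 1000; n <= 16000; n *= 2) {
            long cnt = 0;
            for (int i = 1; i < n; i++)
                for (int j = i; j < n; j++)
                    for (long k = (long) j * j; k < n; k++)   // long guards against j*j overflow
                        ++cnt;
            System.out.printf("n = %6d  cnt = %10d  cnt/n^2 = %.4f%n",
                              n, cnt, cnt / ((double) n * n));
        }
    }
}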
The first approach would be to look at the loops separately, meaning that we have three O(·) terms connected by a product. Hence,
Complexity of the Algorithm = O(OuterLoop) * O(MiddleLoop) * O(InnerLoop)
Now look at each loop separately:
Outer loop: this is the simplest one, incrementing from 1 to n, resulting in O(n).
Middle loop: this is non-obvious; the loop's termination bound is still n, but the starting value is i, meaning the larger i gets, the less time the loop takes. Asymptotically, though, that factor is only a constant (the total count is roughly n²/2), so the middle loop is still O(n), hence O(n²) up to and including the second loop.
Inner loop: we see that the iterator starts quadratically, at j². But we also see that this quadratic start depends on the second loop, which we said is O(n). Since we again look at the complexity only asymptotically, we can assume that j rises linearly, and since k rises to n from a quadratic starting point, it will take sqrt(n) iterations until n is reached. So the innermost loop has a running time of O(sqrt(n)).
Putting all these results together:
O(n * n * sqrt(n)) = O(n² * sqrt(n))
I have the following code which determines whether a number is prime:
public static boolean isPrime(int n){
    boolean answer = (n > 1) ? true : false;
    for (int i = 2; i * i <= n; ++i)
    {
        System.out.printf("%d\n", i);
        if (n % i == 0)
        {
            answer = false;
            break;
        }
    }
    return answer;
}
How can I determine the big-O time complexity of this function? What is the size of the input in this case?
Think about the worst-case runtime of this function, which happens if the number is indeed prime. In that case, the inner loop will execute as many times as possible. Since each iteration of the loop does a constant amount of work, the total work done will therefore be O(number of loop iterations).
So how many loop iterations will there be? Let's look at the loop bounds:
for(int i = 2; i*i <= n; ++i)
Notice that this loop will keep executing as long as i² ≤ n. Therefore, the loop will terminate as soon as i ≥ √n + 1. Consequently, the loop will end up running O(√n) times, so the worst-case time complexity of the function is O(√n).
As to your second question - what is the size of the input? - typically, when looking at primality-testing algorithms (or other algorithms that work on large numbers), the size of the input is defined to be the number of bits required to write out the input. In your case, since you're given a number n, the number of bits required to write out n is Θ(log n). This means that "polynomial time" in this case would be something like O(log^k n). Your runtime, O(√n), is not considered polynomial time, because O(√n) = O((2^(log n))^(1/2)) = O(2^((log n)/2)), which grows exponentially in the number of bits required to write out the input.
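If you want to see the O(√n) bound numerically, here is a small sketch of mine (not part of the answer) that counts the trial divisions for a few known primes; for a prime n the loop runs exactly ⌊√n⌋ − 1 times. Note the cast to long in the condition: for n close to Integer.MAX_VALUE, i*i would otherwise overflow int.

public class PrimeIterations {
    static long countIterations(int n) {
        long count = 0;
        for (int i = 2; (long) i * i <= n; ++i) {   // long avoids i*i overflowing
            ++count;
            if (n % i == 0) break;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] primes = {101, 10007, 1000003, 2147483647};   // 2^31 - 1 is prime
        for (int p : primes) {
            System.out.printf("n = %-10d iterations = %-6d sqrt(n) = %.1f%n",
                              p, countIterations(p), Math.sqrt(p));
        }
    }
}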
Hope this helps!
I am having some challenges with big-oh problems. These are NOT homework problems. I am writing these problems to better understand the concept here.
function func(n)
{
    int k = 0, i = 0;
    while (k < n) {        // <-- guessing this outer loop is O(n/2)
        k = n + 2;
        i = 0;
        while (i < k) {    // <-- not sure what this is?
            i++;
            i = i * i;
        }
    }
}
I would really like it if you could explain what is going on in the inner loop and how your reasoning arrives at the big-O notation you finally end up with.
The outer loop, with its test (k < n) and its step, k = n + 2, will run one time, providing an O(1) factor of complexity.
The inner loop has the test (i < k), which is to say (i < n+2), and the steps i++; i = i*i;. At the end,
i = (...(((1+1)^2+1)^2+1)^2+ ... )^2 > n+2
which makes the value of i super-exponential. That is, i grows faster than exp(exp(p)) in p passes, so the number of passes, and with it the overall complexity, is below log log n. This is a tighter bound than the previously mentioned O(log n), which is also an upper bound, but not as tight.
While @alestanis has provided what looks to me like a much more accurate analysis of this problem than those in the comments, I still don't think it's quite right.
Let's create a small test program that prints out the values of i produced by the inner loop:
#include <iostream>
void inner(double k) {
    double i = 0.0;
    while (i < k) {
        i++;
        i = i * i;                 // i is (roughly) squared on every pass
        std::cout << i << "\n";
    }
}
int main() {
inner(1e200);
return 0;
}
When I run this, the result I get is:
1
4
25
676
458329
2.10066e+011
4.41279e+022
1.94727e+045
3.79186e+090
1.43782e+181
1.#INF
If the number of iterations were logarithmic, then the number of iterations to reach a particular number should be proportional to the number of digits in the limit. For example, if it were logarithmic, it should take around 180 iterations to reach 1e181, give or take some (fairly small) constant factor. That's clearly not the case here at all -- as is easily visible by looking at the exponents of the results in scientific notation, this is approximately doubling the number of digits every iteration, where logarithmic would mean it was roughly adding one digit every iteration.
I'm not absolutely certain, but I believe that puts the inner loop at something like O(log log N) instead of just O(log N). I think it's pretty easy to agree that the outer loop is probably intended to be O(N) (though it's currently written to execute only once), putting the overall complexity at O(N log log N).
I feel obliged to add that from a pragmatic viewpoint, O(log log N) can often be treated as essentially constant -- as shown above, the highest limit you can specify with a typical double precision floating point number is reached in only 11 iterations. As such, for most practical purposes, the overall complexity can be treated as O(N).
[Oops -- didn't notice he'd answered as I was writing this, but it looks like @jwpat7 has reached about the same conclusion I did. Kudos to him/her.]
The second loop squares the value of i until it reaches k. If we ignore the constant term, this loop runs in O(log k) time.
Why? Because if you solve i^m = k you get m = constant * log(k).
The outer loop, as you said, runs in O(n) time.
Since the values k reaches depend on n, you can say the inner loop runs in O(log n), which gives you an overall complexity of O(n log n).
I was reading about Big O notation. It stated,
The big O of a loop is the number of iterations of the loop multiplied by the number of statements within the loop.
Here is a code snippet,
for (int i = 0; i < n; i++)
{
    cout << "Hello World" << endl;
    cout << "Hello SO";
}
Now, according to the definition, the big O should be O(n*2), but it is O(n). Can anyone help me out by explaining why that is?
Thanks in advance.
If you check the definition of the O() notation you will see that (multiplier) constants don't matter.
The work to be done within the loop is not 2. There are two statements, and for each of them you have to execute a couple of machine instructions; maybe it's 50, or 78, or whatever, but that is completely irrelevant for asymptotic complexity calculations because they are all constants. The work doesn't depend on n. It's just O(1).
O(1) = O(2) = O(c) where c is a constant.
O(n) = O(3n) = O(cn)
O(n) is used to measure the loop against a mathematical function (like n^2, n^m, ...).
So if you have a loop like this
for (int i = 0; i < n; i++) {
    // something
}
the math function that best describes the loop is n, so it takes O(n) (where n is a number between 0 and infinity).
If you have a loop like this
for (int i = 0; i < n*2; i++) {
}
it will take O(n*2); math function = n*2.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
    }
}
This loop takes O(n^2) time; math function = n*n.
This way you can calculate how long your loop needs for n = 10 or 100 or 1000.
And this way you can build graphs for loops and such.
Big-O notation ignores constant multipliers by design (and by definition), so being O(n) and being O(2n) is exactly the same thing. We usually write O(n) because that is shorter and more familiar, but O(2n) means the same.
First, don't call it "the Big O". That is wrong and misleading. What you are really trying to find is asymptotically how many instructions will be executed as a function of n. The right way to think about O(n) is not as a function, but rather as a set of functions. More specifically:
O(n) is the set of all functions f(x) such that there exists some constant M and some number x_0 where for all x > x_0, f(x) < M x.
In other words, as n gets very large, at some point the growth of the function (for example, number of instructions) will be bounded above by a linear function with some constant coefficient. For example, f(x) = 6x + 100 is in O(x): take M = 7 and x_0 = 100; then for every x > 100, 6x + 100 < 7x.
Depending on how you count instructions that loop can execute a different number of instructions, but no matter what it will only iterate at most n times. Therefore the number of instructions is in O(n). It doesn't matter if it repeats 6n or .5n or 100000000n times, or even if it only executes a constant number of instructions! It is still in the class of functions in O(n).
To expand a bit more, the class O(n*2) = O(0.1*n) = O(n), and the class O(n) is strictly contained in the class O(n^2). As a result, that loop is also in O(2*n) (because O(2*n) = O(n)), and contained in O(n^2) (but that upper bound is not tight).
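Here is a tiny numeric illustration of that definition (mine, with an arbitrarily chosen f, M, and x_0): f(x) = 6x + 100 is in O(x), witnessed by M = 7 and x_0 = 100, and the inequality can be spot-checked over a range.

public class BigODefinitionCheck {
    public static void main(String[] args) {
        long M = 7, x0 = 100;
        boolean holds = true;
        for (long x = x0 + 1; x <= 1_000_000; x++) {
            if (6 * x + 100 >= M * x) {      // would contradict f(x) < M*x
                holds = false;
                break;
            }
        }
        System.out.println("f(x) < M*x for all checked x > x_0: " + holds);
    }
}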
O(n) means the loop's time complexity increases linearly with the number of elements.
2*n is still linear, so you say the loop is of order O(n).
However, the loop you posted is O(n) since the instructions in the loop take constant time. Two times a constant is still a constant.
The fastest-growing term in your program is the loop, and the rest is just constant, so we choose the fastest-growing term, which is the loop: O(n).
If your program had a nested loop in it, this O(n) term would be dominated and your algorithm would be given O(n^2), because the nested loop would be the fastest-growing term.
Usually big O notation expresses the number of principal operations in a function.
Here you're iterating over n elements, so the complexity is O(n).
It is surely not O(n^2), since quadratic is the complexity of algorithms, like bubble sort, that compare every element of the input with all the other elements.
As you may remember, bubble sort, in order to determine the right position in which to put an element, compares the element with the n others in the list (the bubbling behaviour).
At most, you can claim that your algorithm has complexity O(2n), since it prints two phrases for every element of the input, but in big O notation O(2n) is equivalent to O(n).
int a = 3;
while (a <= n) {
    a = a * a;
}
My version is that its complexity is O(a-th root of n), i.e. O(n^(1/a)).
Is there such a thing?
That's not right. a can't be a part of the big-O formula since it's just a temporary variable.
Off the top of my head, if we take multiplication to be a constant-time operation, then the number of multiplications performed will be O(log log n). If you were multiplying by a constant every iteration, it would be O(log n); because you're multiplying by a growing number each iteration, there's another log.
Think of it as the number of digits doubling each iteration. How many times can you double the number of digits before you exceed the limit? The number of digits is log n, and you can double the starting number of digits log₂(log n) times.
As for the other aspect of the question, yes, O(a-th root of n) could be a valid complexity class for some constant a. It would more familiarly be written as O(n^(1/a)).
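A quick way to see the double log in action (my sketch, not from the answer): run the loop with exact BigInteger arithmetic, so overflow can't interfere, and compare the iteration count against log₂(log₃ n).

import java.math.BigInteger;

public class SquaringIterations {
    public static void main(String[] args) {
        BigInteger n = BigInteger.TEN.pow(200);       // n = 10^200
        BigInteger a = BigInteger.valueOf(3);
        int iterations = 0;
        while (a.compareTo(n) <= 0) {
            a = a.multiply(a);                        // a = a * a, exactly
            iterations++;
        }
        // log_3 n = 200 * ln 10 / ln 3; the loop should take about log2 of that
        double expected = Math.log(200 * Math.log(10) / Math.log(3)) / Math.log(2);
        System.out.printf("iterations = %d, log2(log_3 n) = %.1f%n", iterations, expected);
    }
}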
Well, you could actually go into an infinite loop!
Assume 32 bit integers:
Try this:
int a = 3;
int n = 2099150850;
while (a <= n)
{
    a = a * a;
}
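Java's int arithmetic is defined to wrap on overflow, so this claim can actually be tested without waiting forever (a sketch of mine, not from the original answer): track the values a takes and stop as soon as one repeats, since a repeated value means the loop cycles and can never terminate.

import java.util.HashSet;
import java.util.Set;

public class OverflowLoopCheck {
    public static void main(String[] args) {
        int n = 2099150850;
        int a = 3;
        Set<Integer> seen = new HashSet<>();
        while (a <= n) {
            if (!seen.add(a)) {        // a repeated: the sequence cycles below n forever
                System.out.println("cycle detected at a = " + a + "; infinite loop");
                return;
            }
            a = a * a;                 // wraps around on overflow
        }
        System.out.println("loop terminated with a = " + a);
    }
}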
But assuming there are no integer overflows, the other posters are right: it is O(log log n) if you assume O(1) multiplication.
An easy way to see this is:
x_{n+1} = (x_n)^2, with x_1 = a.
Taking logs, and writing t_n = log x_n, this becomes
t_{n+1} = 2 t_n
I will leave the rest to you.
It becomes more interesting if you consider the complexity of multiplying two k-digit numbers.
The number of loop iterations is O(log log n). The loop body itself does an assignment (which we can consider to be constant) and a multiplication. The best known multiplication algorithm so far has a step complexity of O(n log n * 2^O(log* n)), so, all together, the complexity is something like:
O(log log n * n log n * 2^O(log* n))
After the i-th iteration (i = 0, 1, ...), the value of a is 3^(2^i). There will be O(log log n) iterations, and assuming arithmetic operations take O(1) time, this is also the overall time complexity.
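A quick check of that closed form (my addition, not from the answer): square repeatedly starting from 3 and compare against 3^(2^i) computed directly with exact arithmetic.

import java.math.BigInteger;

public class ClosedFormCheck {
    public static void main(String[] args) {
        BigInteger a = BigInteger.valueOf(3);
        for (int i = 0; i <= 5; i++) {
            BigInteger expected = BigInteger.valueOf(3).pow(1 << i);   // 3^(2^i)
            System.out.println("i = " + i + ": a == 3^(2^i) is " + a.equals(expected));
            a = a.multiply(a);
        }
    }
}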