Is this algorithm O(1)? - complexity-theory

Is the following algorithm simply O(1), or is its complexity trickier to define?
for (i = 0; i < n; ++i)
    if (i > 10)
        break;
I'm confused by the fact that it's obviously O(n) when n <= 10.

It's O(1) because it takes constant time regardless of the size of the input (n). Saying it's O(n) when n <= 10 does not make sense because the big-oh notation is defined in terms of asymptotic function growth, i.e., for n "large", or bigger than a certain value. This is because the actual value of n does not matter to the asymptotic complexity: it's a way to compare different algorithms to each other.
Just take a look at the definition of big-oh: a function f(n) is O(g(n)) if there exists a constant c>0 and a positive integer m so that f(n)<c*g(n) for n>m. In your case f(n) is the time it takes to run your algorithm, g(n)=1, m=10 and c is proportional to the time it takes to loop through 10 integers.
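As a quick sanity check (a small sketch, not part of the original answer), counting the loop's iterations for increasing n shows the count is capped at 12 no matter how large n becomes, which is exactly the bounded behaviour the definition captures:

#include <iostream>

int main() {
    // Count how many times the loop body runs for increasing n.
    for (int n : {5, 10, 11, 100, 1000000}) {
        int iterations = 0;
        for (int i = 0; i < n; ++i) {
            ++iterations;
            if (i > 10)
                break;
        }
        // For n <= 12 the count is n; for larger n it is capped at 12,
        // because the break at i > 10 fires on the 12th pass.
        std::cout << "n = " << n << ", iterations = " << iterations << '\n';
    }
    return 0;
}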

Yes, it's O(1). It is equivalent to say that a function is O(1) and to say it is bounded. The running time of that code is bounded, therefore it is O(1).

Related

Calculating tilde-complexity of for-loop with cubic index

Say I have following algorithm:
for (int i = 1; i < N; i *= 3) {
    sum++;
}
I need to calculate the complexity using tilde notation, which basically means that I have to find a tilde function such that, when I divide the complexity of the algorithm by this tilde function, the limit at infinity is 1.
I don't think there's any need to calculate the exact complexity, we can ignore the constants and then we have a tilde-complexity.
By looking at the growth of the index, I assume that this algorithm is
~ log N
But rather than a base-2 (binary) logarithm, the base in this case is 3.
Does this matter for the exact notation? Is the order of growth exactly the same and thus can we ignore the base when using Tilde-notation? Do I approach this correctly?
You are right, the for loop executes ceil(log_3 N) times, where log_3 N denotes the base-3 logarithm of N.
No, you cannot ignore the base when using the tilde notation.
Here's how we can derive the time complexity.
We will assume that each iteration of the for loop costs C, for some constant C>0.
Let T(N) denote the number of executions of the for-loop. Since at the j-th iteration (counting from j = 0) the value of i is 3^j, the number of iterations we make is the smallest j for which 3^j >= N. Taking base-3 logarithms of both sides we get j >= log_3 N. Because j is an integer, j = ceil(log_3 N). Thus T(N) ~ ceil(log_3 N).
Let S(N) denote the time complexity of the for-loop. The "total" time complexity is thus C * T(N), because the cost of each of the T(N) iterations is C, which in tilde notation we can write as S(N) ~ C * ceil(log_3 N).
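As a quick empirical check (a small sketch, not part of the original answer), we can count the loop's iterations and compare them with ceil(log_3 N). The N values are chosen to avoid exact powers of 3, where floating-point rounding of the logarithm could be off by one:

#include <cmath>
#include <iostream>

int main() {
    for (int N : {2, 10, 100, 1000, 1000000}) {
        int iterations = 0;
        for (long long i = 1; i < N; i *= 3)
            ++iterations;
        // log(N) / log(3) is the base-3 logarithm of N.
        int predicted = static_cast<int>(std::ceil(std::log(N) / std::log(3)));
        std::cout << "N = " << N
                  << ", iterations = " << iterations
                  << ", ceil(log_3 N) = " << predicted << '\n';
    }
    return 0;
}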

Why is the Big-O complexity of this algorithm O(n^2)?

I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
During every iteration you increment i and decrement j, which is equivalent to just incrementing i by 2. Therefore, the total number of iterations is n^2 / 2, and that is still O(n^2).
big-O complexity ignores coefficients. For example: O(n), O(2n), and O(1000n) are all the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.
In your situation, you're essentially incrementing your loop counter by 2 each time through your loop (since j-- has the same effect as i++). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.
You will have exactly n*n/2 loop iterations (or (n*n-1)/2 if n is odd).
In the big O notation we have O((n*n-1)/2) = O(n*n/2) = O(n*n) because constant factors "don't count".
Your algorithm is equivalent to
while ((i += 2) < n * n)
    ...
which is O(n^2 / 2), and that is the same as O(n^2) because big-O complexity does not care about constant factors.
Let m be the number of iterations taken. The loop stops when the incremented i meets the decremented j, so
i + m = n^2 - m
which gives
m = (n^2 - i) / 2
With the initial value i = 1, that is roughly n^2 / 2, so in Big-O notation the complexity is O(n^2).
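A quick empirical check (a small sketch, not taken from any of the answers above) confirms the n*n/2 count:

#include <iostream>

int main() {
    for (int n : {4, 5, 10, 101}) {
        int i = 1, j = n * n;
        long long iterations = 0;
        while (i++ < j--)
            ++iterations;
        // For even n this is exactly n*n/2; for odd n it is (n*n - 1)/2,
        // which is what integer division of n*n by 2 prints anyway.
        std::cout << "n = " << n
                  << ", iterations = " << iterations
                  << ", n*n/2 = " << (n * n) / 2 << '\n';
    }
    return 0;
}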
Yes, this algorithm is O(n^2).
To place it, look at the usual table of complexity classes:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a set of algorithms. An algorithm that is in O(1) is also in O(n), in O(n²), and so on, but not the other way around. Your algorithm executes about n*n/2 statements, and in terms of growth
n < n·log n < n*n/2 ≤ n²,
so the set in this table that contains your algorithm's complexity is O(n²); O(n) and O(n·log n) grow too slowly.
For example, for n = 100 the loop runs sum = 5000 = n*n/2 times; compare 100 for n, a few hundred for n·log n (depending on the base of the logarithm), and 10000 for n².
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
big-O gives you information about upper bounds.
I believe you are trying to ask why the complexity is Theta(n^2) or Omega(n^2), but if you're just trying to understand what big-O is, you really need to understand that it gives upper bounds on functions first and foremost.

Is it O(n^2) or O(1)?

Is the execution time of this unique string function reduced from the naive O(n^2) approach?
This question has a lot of interesting discussion, which leads me to wonder: if we put some threshold on the algorithm, would it change the Big-O running-time complexity? For example:
void someAlgorithm(int n) {
    if (n < SOME_THRESHOLD) {
        // do O(n^2) algorithm
    }
}
Would it be O(n^2) or would it be O(1)?
This would be O(1), because there's a constant, such that no matter how big the input is, your algorithm will finish under a time that is smaller than that constant.
Technically, it is also O(n^2), because there's a constant c such that no matter how big your input is, your algorithm will finish within c * n^2 time units. Since big-O gives you an upper bound, everything that is O(1) is also O(n^2).
If SOME_THRESHOLD is constant, then you've hard-coded a constant upper bound on the growth of the function (and f(x) = O(g(x)) gives an upper bound of g(x) on the growth of f(x)).
By convention, O(k) for some constant k is just O(1) because we don't care about constant factors.
Note that the lower bound is unknown, at least theoretically, because we don't know anything about the lower bound of the O(n^2) algorithm inside the branch. We only know that any h(x) with f(x) = Omega(h(x)) must itself be O(1), because f(x) = O(1). Functions that grow more slowly than a constant are possible in theory, although in practice h(x) = 1, so f(x) = Omega(1).
What all this means is that by forcing a constant upper bound on the function, the function now has a tight bound: f(x) = Theta(1).
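To make that concrete, here is a minimal sketch (the quadratic body is a hypothetical stand-in, a pair of nested loops): no matter how large n is, the number of steps never exceeds SOME_THRESHOLD^2, which is a constant:

#include <iostream>

const int SOME_THRESHOLD = 1000;  // hypothetical constant cutoff

long long someAlgorithm(int n) {
    long long steps = 0;
    if (n < SOME_THRESHOLD) {
        // Stand-in for the O(n^2) work: two nested loops over n.
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                ++steps;
    }
    return steps;
}

int main() {
    // The step count is bounded by SOME_THRESHOLD * SOME_THRESHOLD
    // regardless of how large n gets, which is why the function is O(1).
    for (int n : {10, 999, 1000, 1000000})
        std::cout << "n = " << n << ", steps = " << someAlgorithm(n) << '\n';
    return 0;
}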

Algorithm Running Time for O(n.m^2)

I would like to know (because I couldn't find any information online) how an algorithm with a complexity like O(n * m^2), O(n * k), or O(n + k) is supposed to be analysed.
Does only the n count?
Are the other terms superfluous?
So O(n * m^2) is actually O(n)?
No, the k and m terms here are not superfluous; they genuinely matter and are essential for computing the time complexity. Together they give the concrete complexity of the code.
It may seem as if the terms n and k are independent of each other in the code, but combined they determine the complexity of the algorithm.
Say you have to iterate over a loop of n elements and, inside it, another loop of k iterations; then the overall complexity becomes O(nk).
Complexity of order O(nk): you can't discard k here.
for (i = 0; i < n; i++)
    for (j = 0; j < k; j++)
        // do something
Complexity of order O(n + k): you can't discard k here.
for (i = 0; i < n; i++)
    // do something
for (j = 0; j < k; j++)
    // do something
Complexity of order O(nm^2): you can't discard m here.
for (i = 0; i < n; i++)
    for (j = 0; j < m; j++)
        for (k = 0; k < m; k++)
            // do something
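To make the three counts concrete, here is a small instrumented sketch (the values of n, k and m are arbitrary samples): the body executes n*k, n + k and n*m*m times respectively:

#include <iostream>

int main() {
    int n = 100, k = 30, m = 7;  // arbitrary sample sizes

    long long nk = 0, n_plus_k = 0, nmm = 0;

    // O(n*k): nested loops, the body runs n*k times.
    for (int i = 0; i < n; i++)
        for (int j = 0; j < k; j++)
            ++nk;

    // O(n + k): two separate loops, the bodies run n + k times in total.
    for (int i = 0; i < n; i++)
        ++n_plus_k;
    for (int j = 0; j < k; j++)
        ++n_plus_k;

    // O(n*m^2): triple nesting, the body runs n*m*m times.
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            for (int l = 0; l < m; l++)
                ++nmm;

    std::cout << nk << " = n*k = " << (long long)n * k << '\n'
              << n_plus_k << " = n+k = " << n + k << '\n'
              << nmm << " = n*m*m = " << (long long)n * m * m << '\n';
    return 0;
}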
Answer to the last question (So O(n * m^2) is actually O(n)?):
No, O(n * m^2) cannot be reduced to O(n), as that would mean m has no significance, which is not actually the case.
FORMALLY: O(f(n)) is the SET of ALL functions T(n) that satisfy:
There exist positive constants c and N such that, for all n >= N,
T(n) <= c f(n)
Here are some examples of when and why factors other than n matter.
[1] 1,000,000 n is in O(n). Proof: set c = 1,000,000, N = 0.
Big-Oh notation doesn't care about (most) constant factors. We generally leave constants out; it's unnecessary to write O(2n), because O(2n) = O(n). (The 2 is not wrong; just unnecessary.)
[2] n is in O(n^3). [That's n cubed]. Proof: set c = 1, N = 1.
Big-Oh notation can be misleading. Just because an algorithm's running time is in O(n^3) doesn't mean it's slow; it might also be in O(n). Big-Oh notation only gives us an UPPER BOUND on a function.
[3] n^3 + n^2 + n is in O(n^3). Proof: set c = 3, N = 1.
Big-Oh notation is usually used only to indicate the dominating (largest and most displeasing) term in the function. The other terms become insignificant when n is really big.
These aren't generalizable, and each case may be different. That's the answer to the questions: "Does only the n count? The other terms are superfluous?"
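As a quick numerical sanity check of example [3] (a small sketch, not part of the original answer), the witnesses c = 3 and N = 1 can be verified over a range of n:

#include <iostream>

int main() {
    // Verify T(n) <= c * f(n) for T(n) = n^3 + n^2 + n, f(n) = n^3, c = 3, N = 1.
    bool holds = true;
    for (long long n = 1; n <= 1000; ++n) {
        long long T = n * n * n + n * n + n;
        long long bound = 3 * n * n * n;
        if (T > bound)
            holds = false;
    }
    std::cout << (holds ? "inequality holds for all tested n" : "counterexample found") << '\n';
    return 0;
}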
Although there is already an accepted answer, I'd still like to provide the following inputs:
O(n * m^2): can be viewed as n * m * m; assuming the bounds for n and m are similar, the complexity would be O(n^3).
Similarly,
O(n * k): would be O(n^2) (with the bounds for n and k being similar),
and
O(n + k): would be O(n) (again, with the bounds for n and k being similar).
PS: It would be better not to assume similarity between the variables and to first understand how the variables relate to each other (e.g., m = n/2, k = 2n) before drawing conclusions.

What is Big O of a loop?

I was reading about Big O notation. It stated,
The big O of a loop is the number of iterations of the loop into number of statements within the loop.
Here is a code snippet,
for (int i = 0; i < n; i++)
{
    cout << "Hello World" << endl;
    cout << "Hello SO";
}
Now according to the definition, the Big O should be O(n*2) but it is O(n). Can anyone help me out by explaining why that is?
Thanks in advance.
If you check the definition of the O() notation, you will see that (multiplier) constants don't matter.
The work to be done within the loop is not 2. There are two statements, for each of them you have to do a couple of machine instructions, maybe it's 50, or 78, or whatever, but this is completely irrelevant for the asymptotic complexity calculations because they are all constants. It doesn't depend on n. It's just O(1).
O(1) = O(2) = O(c) where c is a constant.
O(n) = O(3n) = O(cn)
O(n) is used to measure the loop against a mathematical function (like n^2, n^m, ...).
So if you have a loop like this
for (int i = 0; i < n; i++) {
    // something
}
the mathematical function that best describes what the loop takes is n, so it is O(n) (where n is a number between 0 and infinity).
If you have a loop like this
for (int i = 0; i < n*2; i++) {
}
it will take O(n*2); the math function is n*2.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
    }
}
This loop takes O(n^2) time; the math function is n*n.
This way you can calculate how long your loop needs for n = 10, 100 or 1000, and this way you can build graphs for loops and such.
Big-O notation ignores constant multipliers by design (and by definition), so being O(n) and being O(2n) is exactly the same thing. We usually write O(n) because that is shorter and more familiar, but O(2n) means the same.
First, don't call it "the Big O". That is wrong and misleading. What you are really trying to find is asymptotically how many instructions will be executed as a function of n. The right way to think about O(n) is not as a function, but rather as a set of functions. More specifically:
O(n) is the set of all functions f(x) such that there exists some constant M and some number x_0 where for all x > x_0, f(x) < M x.
In other words, as n gets very large, at some point the growth of the function (for example, number of instructions) will be bounded above by a linear function with some constant coefficient.
Depending on how you count instructions that loop can execute a different number of instructions, but no matter what it will only iterate at most n times. Therefore the number of instructions is in O(n). It doesn't matter if it repeats 6n or .5n or 100000000n times, or even if it only executes a constant number of instructions! It is still in the class of functions in O(n).
To expand a bit more, the class O(n*2) = O(0.1*n) = O(n), and the class O(n) is strictly contained in the class O(n^2). As a result, that loop is also in O(2*n) (because O(2*n) = O(n)), and contained in O(n^2) (but that upper bound is not tight).
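To see this concretely (a small sketch, not from any of the answers), counting the statements executed by the loop in the question instead of printing them shows a total of 2n, which grows linearly:

#include <iostream>

int main() {
    for (int n : {10, 100, 1000}) {
        long long statements = 0;
        for (int i = 0; i < n; i++) {
            ++statements;  // stands in for: cout << "Hello World" << endl;
            ++statements;  // stands in for: cout << "Hello SO";
        }
        // 2*n statements in total: the constant factor 2 does not change O(n).
        std::cout << "n = " << n << ", statements executed = " << statements << '\n';
    }
    return 0;
}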
O(n) means the loop's time complexity increases linearly with the number of elements.
2*n is still linear, so you say the loop is of order O(n).
However, the loop you posted is O(n) since the instructions in the loop take constant time. Two times a constant is still a constant.
The fastest-growing term in your program is the loop, and the rest is just constant, so we choose the fastest-growing term, which is the loop: O(n).
If your program had a nested loop in it, this O(n) would be dominated and your algorithm would be O(n^2), because the nested loop would then have the fastest-growing term.
Usually big-O notation expresses the number of principal operations in a function.
In this case you're iterating over n elements, so the complexity is O(n).
It is certainly not quadratic; a quadratic bound is what you get for algorithms, like bubble sort, that compare every element in the input with all the other elements.
As you may remember, bubble sort, in order to determine the right position for an element, compares it with the other n elements in the list (the "bubbling" behaviour).
At most, you can claim that your algorithm has complexity O(2n), since it prints 2 phrases for every element in the input, but in big-O notation O(n) is equivalent to O(2n).

Resources