Understanding Big O complexity - algorithm

I'm having a tough time understanding Big O time complexity.
Formal definition of Big O :
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
And the worst-case time complexity of insertion sort is O(n^2).
I want to understand what f(n), g(n), c and k are here in the case of insertion sort.

Explanation
It is not that easy to formalize an algorithm such that you can apply Big-O formally; it is a mathematical concept and does not easily translate to algorithms. Typically you would measure the number of "computation steps" needed to perform the operation based on the size of the input.
So f is the function that measures how many computation steps the algorithm performs.
n is the size of the input, for example 5 for a list like [4, 2, 9, 8, 2].
g is the function you measure against, so g = n^2 if you check for O(n^2).
c and k heavily depend on the specific algorithm and on what exactly f looks like.
Example
The biggest issue with formalizing an algorithm is that you cannot really tell exactly how many computation steps are performed. Let's say we have the following Java code:
public static void printAllEven(int n) {
    for (int i = 0; i < n; i++) {
        if (i % 2 == 0) {
            System.out.println(i);
        }
    }
}
How many steps does it perform? How deep should we go? What about for (int i = 0; i < n; i++)? Those are multiple statements which are executed during the loop. What about i % 2? Can we assume this is a "single operation"? On which level, one CPU cycle? One line of assembly? What about println(i), how many computation steps does it need, 1 or 5 or maybe 200?
This is not practical. We do not know the exact amounts, so we abstract and say these pieces take constant amounts of steps A, B and C, which is okay since each of them runs in constant time.
After simplifying the analysis, we can say that we are effectively only interested in how often println(i) is called.
This leads to the observation that we call it precisely n / 2 times (since that is how many even numbers there are between 0 and n).
The exact formula for f using the above constants would yield something like
n * A + n * B + n/2 * C
But since constants do not really play any role (they vanish in c), we could also just ignore this and simplify.
Now you are left with proving that n / 2 is in O(n^2), for example. By doing that, you will also get concrete numbers for c and k. Example:
n / 2 <= n <= 1 * n^2 // for all n >= 0
So by choosing c = 1 and k = 0 you have proven the claim. Other values for c and k work as well, example:
n / 2 <= 100 * n <= 5 * n^2 // for all n >= 20
Here we have chosen c = 5 and k = 20.
You could play the same game with the full formula as well and get something like
n * A + n * B + n/2 * C
<= n * (A + B + C)
= D * n
<= D * n^2 // for all n > 0
with c = D and k = 0.
As you can see, it does not really play any role; the constants just vanish in c.
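To make f concrete, here is a small Java sketch (my own, not part of the answer above; countEvenSteps is a made-up helper that mirrors printAllEven but counts instead of printing) showing that the number of println steps is roughly n / 2:

public class StepCounter {
    // Counts how often the println step would run for a given n.
    static long countEvenSteps(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++) {
            if (i % 2 == 0) {
                steps++; // stands in for the call to System.out.println(i)
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1_000, 10_000}) {
            // Expect roughly n / 2 (exactly the number of even values in 0..n-1).
            System.out.println("n = " + n + ", steps = " + countEvenSteps(n));
        }
    }
}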

In the case of insertion sort, f(n) is the actual number of operations your processor will do to perform the sort. g(n) = n^2. Minimal values for c and k will be implementation-defined, but they are not that important. The main idea this Big-O notation gives is that if you double the size of the array, the time it takes insertion sort to work will grow approximately by a factor of 4 (2^2). (For insertion sort it can be smaller, but Big-O only gives an upper bound.)
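As a rough illustration (my own sketch, not from the answer above): instrument insertion sort with a step counter and watch the count roughly quadruple each time the input size doubles on random arrays.

import java.util.Random;

public class InsertionSortCount {
    static long sortAndCount(int[] a) {
        long ops = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) { // each comparison/shift counted as one step
                a[j + 1] = a[j];
                j--;
                ops++;
            }
            a[j + 1] = key;
        }
        return ops;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int n = 1_000; n <= 16_000; n *= 2) {
            int[] a = rnd.ints(n).toArray();
            // On random input the count grows roughly 4x each time n doubles.
            System.out.println("n = " + n + ", ops = " + sortAndCount(a));
        }
    }
}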

Related

any linear function an + b is O(n^2) CLRS

In the 3rd edition of CLRS, specifically section 3.1 (page 47 in my book), they say
when a > 0, any linear function an + b is in O(n^2), which is easily verified by taking c = a + |b| and n0 = max(1,-b/a).
where n0 is the value such that when n >= n0 we could show that an + b <= cn^2 in a proof of the above.
I tried to verify this but I couldn't get very far :(
How did they choose these values of c and n0? I know that the only thing that matters is that there exist such c and n0 that the above is true, to prove that an + b is O(n^2), but I wonder how they chose specifically those values of c and n0? They don't seem arbitrary; it's as if they applied some technique I have never seen before to obtain them.
Thanks.
Let's take the simple case where a and b are both positive. What the authors are trying to do is pick a value of c for which the quadratic function dominates the linear function for all n >= 1. That's it. They're just giving a general recipe for choosing a parabola that dominates any given line.
So for n = 1, the value of the linear function l(n) = an + b is a + b. A quadratic without any linear term, q(n) = c * n^2, takes the value c at n = 1, so it dominates l(n) at n = 1 if we choose c = a + b. Since the quadratic grows faster from there, q(n) = (a + b)n^2 dominates l(n) = an + b for all n >= 1, assuming a and b are both positive. You can check out examples for yourself by plotting 30x^2 and 10x + 20 on Desmos.
It's a bit trickier when b is negative, but the positive case is basically the point.
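Below is a quick numerical sanity check (my own Java sketch; the coefficients a and b are sample values I picked, while c and n0 follow the CLRS recipe) showing that a*n + b <= c*n^2 holds once n >= n0:

public class ClrsCheck {
    public static void main(String[] args) {
        double a = 10, b = -20;              // sample coefficients, a > 0
        double c = a + Math.abs(b);          // c = a + |b|
        double n0 = Math.max(1, -b / a);     // n0 = max(1, -b/a)
        for (double n = Math.ceil(n0); n <= Math.ceil(n0) + 10; n++) {
            double linear = a * n + b;
            double quadratic = c * n * n;
            System.out.printf("n=%.0f: an+b=%.0f <= cn^2=%.0f ? %b%n",
                    n, linear, quadratic, linear <= quadratic);
        }
    }
}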

Time and Space Complexity of an Algorithm - Big O Notation

I am trying to analyze the Big-O notation of a simple algorithm, and it has been a while since I've worked with it. So I've come up with an analysis and am trying to figure out if it is correct according to the rules, for the following code:
public int Add()
{
    int total = 0; //Step 1
    foreach (var item in list) //Step 2
    {
        if (item.value == 1) //Step 3
        {
            total += 1; //Step 4
        }
    }
    return total;
}
If you assign a variable or set a value, the complexity according to the rules of Big O is O(1). So the first step is O(1): whatever the input size is, that statement executes in the same time and memory space.
The second step is the foreach loop. One thing is pretty clear about the loop: it iterates according to the input. As an example, for an input of 10 the loop iterates 10 times, and for 20, 20 times. It totally depends on the input. In accordance with the rules of Big O, the complexity would be O(n), where n is the number of inputs. So in the above code, the loop iterates depending upon the number of items in the list.
In this step there is a condition check (see Step 3 in the code). In that case, the complexity is O(1) according to the Big O rules.
In the same way, step 4 is no different (see Step 4 in the code). If the condition check is true, then the total variable is incremented by 1. So we write: complexity O(1).
So if the above calculations are perfect, then the final complexity stands as the following:
O(1) + O(n) + O(1) + O(1) or (O(1) + O(n) * O(1) + O(1))
I am not sure if this is correct. But I guess, I would expect some clarification on this if this isn't the perfect one. Thanks.
Big O notation describes the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.
For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by
T(n) = 4 n^2 - 2 n + 2
If we ignore constants (which makes sense because those depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows on the order of n^2" and write: T(n) = O(n^2)
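For a concrete check (my own numbers, one choice among many): since -2n + 2 <= 0 for all n >= 1, we have T(n) = 4n^2 - 2n + 2 <= 4n^2 for all n >= 1, so the formal definition below is satisfied with C = 4 and N = 1.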
For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write
f(x) = O(g(x))
(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that
|f(x)| <= C|g(x)| for all x>N
Intuitively, this means that f does not grow faster than g
If a is some real number, we write
f(x) = O(g(x)) for x->a
if and only if there exist constants d > 0 and C such that
|f(x)| <= C|g(x)| for all x with |x-a| < d
So for your case it would be O(n), since |f(n)| <= C|g(n)| holds with g(n) = n.
Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf
int total = 0;
for (int i = 0; i < n; i++) {     // --> n loop
    for (int j = 0; j < n; j++) { // --> n loop
        total = total + 1;        // --> 1 time
    }
}
Big O notation is about what happens when the value is very big: the outer loop runs n times and the inner loop runs n times for each of them.
Assume n = 100; then the total is n^2 = 10,000 runs.
Your analysis is not exactly correct.
Step 1 indeed takes O(1) operations
Step 2 indeed takes O(n) operations
Step 3 takes O(1) operations, but it is executed n times, so its whole contribution to complexity is O(1*n)=O(n)
Step 4 takes O(1) operations, but it is executed up to n times, so its whole contribution to complexity is also O(1*n)=O(n)
The whole complexity is O(1)+O(n)+O(n)+O(n) = O(n).
Your calculations for steps 3 and 4 are incorrect, as both these steps are inside the for loop,
so the complexity of steps 2, 3 and 4 together will be O(n) * (O(1) + O(1)) = O(n),
and when combined with step 1 it will be O(1) + O(n) = O(n).
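Here is a small Java sketch (my own; it uses a plain list of ints where item == 1 stands in for the question's item.value == 1) that counts those steps for growing inputs, to show the total grows linearly in n:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class AddSteps {
    static long countSteps(List<Integer> list) {
        long steps = 1;                  // Step 1: the assignment total = 0
        int total = 0;
        for (int item : list) {          // Step 2: one iteration per element
            steps++;                     // Step 3: the condition check
            if (item == 1) {
                steps++;                 // Step 4: the increment
                total += 1;
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1_000}) {
            List<Integer> list = IntStream.range(0, n)
                    .map(i -> i % 2)     // half the items are 1, half are 0
                    .boxed()
                    .collect(Collectors.toList());
            // Expect roughly 1 + n + n/2 steps: a constant plus terms linear in n.
            System.out.println("n = " + n + ", steps = " + countSteps(list));
        }
    }
}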

Algorithm Analysis: Big-O explanation

I'm currently taking a class in algorithms. The following is a question I got wrong from a quiz: Basically, we have to indicate the worst case running time in Big O notation:
int foo(int n)
{
    int m = 0;
    while (n >= 2)
    {
        n = n / 4;
        m = m + 1;
    }
    return m;
}
I don't understand how the worst case time for this just isn't O(n). Would appreciate an explanation. Thanks.
foo calculates log4(n) by dividing n by 4 and counting how many times it can do so, using m as a counter. At the end, m is the number of times 4 divides into n. So the work is linear in the final value of m, which is equal to log base 4 of n. The algorithm is then O(log n), which is also O(n).
Let's suppose the running time really were linear, i.e. that the function takes on the order of n steps.
Now look at the loop: n is divided by 4 (i.e. 2^2) at each step. So after the first iteration n is reduced to n/4, after the second to n/16. It isn't being reduced linearly; it shrinks by a factor of 4 each time, so in the worst case its running time is O(log n).
The computation can be expressed as a recurrence formula:
f(r) = 4*f(r+1)
The solution is
f(r) = k * 4 ^(1-r)
Where ^ means exponent. In our case we can say f(0) = n
So f(r) = n * 4^(-r)
Solving for r on the end condition we have: 2 = n * 4^(-r)
Taking the log on both sides, log(2) = log(n) - r * log(4), we can see
r = P * log(n);
Not having more branches or inner loops, and assuming division and addition are O(1), we can confidently say the algorithm runs P * log(n) steps and is therefore O(log(n)).
http://www.wolframalpha.com/input/?i=f%28r%2B1%29+%3D+f%28r%29%2F4%2C+f%280%29+%3D+n
Nitpicker's corner: a C int usually means the largest value is 2^31 - 1, so in practice that means at most about 16 iterations, which is of course O(1). But I think your teacher really means O(log(n)).
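If it helps to see the growth numerically, here is a short Java sketch (mine, not from the quiz) comparing the value foo returns, i.e. the number of loop iterations, with log base 4 of n:

public class FooDemo {
    static int foo(int n) {
        int m = 0;
        while (n >= 2) {
            n = n / 4;
            m = m + 1;
        }
        return m;
    }

    public static void main(String[] args) {
        for (int n = 4; n <= 1_000_000_000 && n > 0; n *= 4) {
            double log4 = Math.log(n) / Math.log(4);
            // m grows by 1 each time n is multiplied by 4, i.e. logarithmically in n.
            System.out.println("n = " + n + ", foo(n) = " + foo(n) + ", log4(n) = " + log4);
        }
    }
}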

Exponentials: Little Oh [duplicate]

What does n^b = o(a^n) (o is little oh) mean, intuitively? I am just beginning to teach myself algorithms, and I have a hard time interpreting such expressions every time I see one. Here, the way I understood it is that for the function n^b, the rate of growth is a^n. But this is not making sense to me, regardless of being right or wrong.
f(n) = o(g(n)) means that f(n)/g(n) -> 0 as n -> infinity.
For your problem it must hold that a > 1. Then (n^b)/(a^n) -> 0 as n -> infinity: write a^n = sqrt(a)^n * sqrt(a)^n, so (n^b)/(a^n) = ((n^b)/sqrt(a)^n) * (1/sqrt(a)^n). The function f(n) = (n^b)/sqrt(a)^n first increases and then decreases, so it has some maximum value M, and therefore (n^b)/(a^n) <= M/(sqrt(a)^n). Since a > 1 we have sqrt(a) > 1, so sqrt(a)^n -> infinity as n -> infinity, which means M/(sqrt(a)^n) -> 0. Hence (n^b)/(a^n) -> 0 as n -> infinity; that is, n^b = o(a^n) by definition.
(For simplicity I'll assume that all functions always return positive values. This is the case for example for functions measuring run-time of an algorithm, as no algorithm runs in "negative" time.)
First, a recap of big-O notation, to clear up a common misunderstanding:
To say that f is O(g) means that f grows asymptotically at most as fast as g. More formally, treating both f and g as functions of a variable n, to say that f(n) is O(g(n)) means that there is a constant K, so that eventually, f(n) < K * g(n). The word "eventually" here means that there is some fixed value N (which is a function of K, f, and g), so that if n > N then f(n) < K * g(n).
For example, the function f(n) = n + 2 is O(n^2). To see why, let K = 1. Then, if n > 10, we have n + 2 < n^2, so our conditions are satisfied. A few things to note:
For n = 1, we have f(n) = 3 and g(n) = 1, so f(n) < K * g(n) actually fails. That's ok! Remember, the inequality only needs to hold eventually, and it does not matter if the inequality fails for some small finite list of n.
We used K = 1, but we didn't need to. For example, K = 2 would also have worked. The important thing is that there is some value of K which gives us the inequality we want eventually.
We saw that n + 2 is O(n^2). This might look confusing, and you might say, "Wait, isn't n + 2 actually O(n)?" The answer is yes. n + 2 is O(n), O(n^2), O(n^3), O(n/3), etc.
Little-o notation is slightly different. Big-O notation, intuitively, says that if f is O(g), then f grows asymptotically at most as fast as g. Little-o notation says that if f is o(g), then f grows asymptotically strictly slower than g.
Formally, f is o(g) if for any (let's say positive) choice of K, eventually the inequality f(n) < K * g(n) holds. So, for instance:
The function f(n) = n is not o(n). This is because, for K = 1, there is no value of n so that f(n) < K * g(n). Intuitively, f and g grow asymptotically at the same rate, so f does not grow strictly slower than g does.
The function f(n) = n is o(n^2). Why is this? Pick your favorite positive value of K. (To see the actual point, try to make K small, for example 0.001.) Imagine graphing the functions f(n) and K * g(n). One is a straight line through the origin of positive slope, and the other is a concave-up parabola through the origin. Eventually the parabola will be higher than the line, and will stay that way. (If you remember your pre-calc/calculus...)
Now we get to your actual question: let f(n) = n^b and g(n) = a^n. You asked why f is o(g).
Presumably, the author of the original statement treats a and b as constant, positive real numbers, and moreover a > 1 (if a <= 1 then the statement is false).
The statement, in English, is:
For any positive real number b, and any real number a > 1, the function n^b grows asymptotically strictly slower than a^n.
This is an important thing to know if you are ever going to deal with algorithmic complexity. Put more simply, one can say "polynomials grow much slower than exponential functions." It isn't immediately obvious that this is true, and it is too much to write out here, so here is a reference:
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Probably you will have to have some comfort with math to be able to read any proof of this fact.
Good luck!
The super high level meaning of the statement n^b is o(a^n) is just that exponential functions like a^n grow much faster than polynomial functions like n^b.
The important thing to understand when looking at big O and little o notation is that they are both upper bounds. I'm guessing that's why you're confused. n^b is o(a^n) because the growth rate of a^n is much bigger. You could probably find a tighter little-o upper bound on n^b (one where the gap between the bound and the function is smaller), but a^n is still valid. It's also probably worth looking at the difference between Big O and little o.
Remember that a function f is Big O of a function g if for some constant k > 0, you can eventually find a minimum value for n so that f(n) ≤ k * g(n).
A function f is little o of a function g if for any constant k > 0 you can eventually find a minimum value for n so that f(n) ≤ k * g(n).
Note that the little o requirement is harder to fulfill, so if a function f is little o of a function g, it is also Big O of g; and it says something stronger, namely that g grows strictly faster than f, not just at least as fast.
In your example, if b is 3 and a is 2 and we set k to 1, we can work out the minimum value for n so that n^b ≤ k * a^n. In this case, it's between 9 and 10 since
9³ = 729 and 1 * 2⁹ = 512, which means at 9, a^n is not yet greater than n^b,
but
10³ = 1000 and 1 * 2¹⁰ = 1024, which means a^n is now greater than n^b.
You can see by graphing these functions that a^n will be greater than n^b for any value of n > 10. At this point we've only shown that n^b is Big O of a^n, since Big O only requires that for some value of k > 0 (we picked 1), a^n ≥ n^b from some minimum n onward (in this case it's between 9 and 10).
To show that n^b is little o of a^n, we would have to show that for any k greater than 0 you can still find a minimum value of n so that k * a^n > n^b. For example, if you picked k = 0.5, the minimum of 10 we found earlier doesn't work, since 10³ = 1000 and 0.5 * 2¹⁰ = 512. But we can just keep sliding the minimum for n out further and further; the smaller you make k, the bigger the minimum for n will be. Saying n^b is little o of a^n means that no matter how small you make k, we will always be able to find a big enough value of n beyond which n^b ≤ k * a^n.
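Here's a rough Java sketch (my own numbers: a = 2, b = 3, and a few sample values of k) that searches for the point after which k * a^n stays above n^b; such a threshold exists for every k, which is exactly what little-o demands:

public class LittleOhDemo {
    public static void main(String[] args) {
        double a = 2, b = 3;
        for (double k : new double[] {1.0, 0.5, 0.001}) {
            int threshold = 1;
            for (int n = 1; n <= 200; n++) {
                if (Math.pow(n, b) > k * Math.pow(a, n)) {
                    threshold = n + 1; // n^b still wins here, so the threshold must be later
                }
            }
            System.out.println("k = " + k + ": k * a^n >= n^b for every n >= " + threshold);
        }
    }
}

For k = 1 this finds a threshold of 10, matching the crossover between 9 and 10 above; smaller values of k just push the threshold further out.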

derivation of algorithm complexity

Refreshing up on algorithm complexity, I was looking at this example:
int x = 0;
for ( int j = 1; j <= n; j++ )
    for ( int k = 1; k < 3*j; k++ )
        x = x + j;
I know this loop ends up being O(n^2). I believe the inner loop is executed 3*n times (3(1+2+...+n)), and the outer loop executes n times. So, O(3n*n) = O(3n^2) = O(n^2).
However, the source I'm looking at expands the execution of the inner loop to: 3(1+2+3+...+n) = 3n^2/2 + 3n/2. Can anyone explain the 3n^2/2 + 3n/2 execution times?
For each j you have to execute 3*j iterations of the internal loop, so your command x = x + j will finally be executed 3 * (1 + 2 + 3 + ... + n) times; the sum of the arithmetic progression 1 + 2 + ... + n is n*(n+1)/2, so your command will be executed:
3 * n * (n+1)/2 times, which equals (3*n^2)/2 + (3*n)/2.
But big O is not about how many iterations there will be; it is an asymptotic measure. So in the expression 3*n*(n+1)/2 you drop the constants (set them all to 0 or 1), and we have 1*n*(n+0)/1 = n^2.
A small update about the big O calculation for this case: to get big O from 3n(n+1)/2, you can imagine that n goes to infinity, so:
infinity + 1 = infinity
3*infinity = infinity
infinity/2 = infinity
infinity*infinity = infinity^2
so after this you have n^2
The sum of integers from 1 to m is m*(m+1)/2. In the given problem, j goes from 1 to n, and k goes from 1 to 3*j. So the inner loop on k is executed 3*(1+2+3+4+5+...+n) times, with each term in that series representing one value of j. That gives 3n(n+1)/2. If you expand that, you get 3n^2/2+3n/2. The whole thing is still O(n^2), though. You don't care if your execution time is going up both quadratically and linearly, since the linear gets swamped by the quadratic.
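To see the counting argument in action, here is a small Java sketch (my own) that literally counts how often x = x + j runs and compares it with 3n(n+1)/2; the exact count is 3n(n+1)/2 - n because the inner loop runs 3j - 1 times (k goes from 1 to 3j - 1), but the leading term is the same:

public class InnerLoopCount {
    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1_000}) {
            long count = 0;
            for (int j = 1; j <= n; j++) {
                for (int k = 1; k < 3 * j; k++) {
                    count++; // one execution of x = x + j
                }
            }
            long approx = 3L * n * (n + 1) / 2; // 3(1 + 2 + ... + n)
            long exact = approx - n;            // inner loop actually runs 3j - 1 times
            System.out.println("n = " + n + ": counted = " + count
                    + ", 3n(n+1)/2 = " + approx + ", 3n(n+1)/2 - n = " + exact);
        }
    }
}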
Big O notation gives an upper bound on the asymptotic running time of an algorithm. It does not take into account the lower-order terms or the constant factors. Therefore O(10n^2) and O(1000n^2 + 4n + 56) are both still O(n^2).
What you are doing is trying to count the exact number of operations in your algorithm. However, Big O does not say anything about the exact number of operations. It simply provides you an upper bound on the worst-case running time that may occur with an unfavorable input.
The exact count can be expressed using Sigma notation: the statement x = x + j runs (3*1 - 1) + (3*2 - 1) + ... + (3*n - 1) = 3n(n+1)/2 - n times, which can be verified empirically.
