1) int p = 0;
2) for (int i = 1; i < n; i*=2) p++;
3) for (int j = 1; j < p; j*=2) stmt;
In my analysis, line #1 is O(1), line #2 is O(lg(n)), and line #3 is O(lg(p)). I believe the second and third lines are independent, so the asymptotic time complexity should be O(lg(n) + lg(p)). However, the lecturer said it is O(lglg(n)) because p = lg(n). At this point, I have three questions.
How does the second line relate to the third line? Could you please explain it in detail with some examples? I don't understand where p = lg(n) comes from.
Is O(lg(n) + lg(p)) wrong? If so, would you please explain why?
If the complexity in my second question is correct, I don't understand how O(lglg(n)) can be the answer, because I think O(lg(n) + lg(p)) > O(lglg(n)).
Please comment if my questions are unclear.
It can be shown that p will be O(log n) after line 2 finishes. Therefore, the overall time complexity is O(O(stmt) * log(p) + log(n)), and since we know p, this can be reduced to O(O(stmt) * log(log(n)) + log(n)). Assuming stmt is O(1), the real runtime is O(log(n) + log(log(n))). This can be further reduced to O(log(n)), since for any non-trivial n, log(n) > log(log(n)).
Why is p O(log n)? Consider what p evaluates to after line 2 completes when n is 2, 4, 8, 16: p ends up being 1, 2, 3, 4 respectively. To increase p by one, you need to double n; in other words, n = 2^p, and inverting that gives p = log2(n). The same logic carries over to line 3, and the final construction of the runtime is detailed in the first paragraph of this post.
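If you want to verify this empirically, here is a minimal C sketch of my own (not part of the original question) that counts the doublings for a few values of n:

#include <stdio.h>

int main(void) {
    // For n = 2, 4, 8, 16, count how many times i can double before
    // reaching n; the count p matches log2(n), as argued above.
    for (int n = 2; n <= 16; n *= 2) {
        int p = 0;
        for (int i = 1; i < n; i *= 2) p++;
        printf("n=%2d -> p=%d\n", n, p);
    }
    return 0;
}

Running it prints p = 1, 2, 3, 4 for n = 2, 4, 8, 16, exactly the pattern described above.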
For your question, I made this C program and tried to do the complexity analysis step by step so you can follow along:
#include <stdio.h>

int main() {
    //------------first line to analyze-------------//
    // O(1): a single assignment, independent of the input size
    int p = 0;
    int i, j, n = 100;

    //------------second line to analyze------------//
    // O(log(n)): i doubles each iteration, so the body runs about
    // log2(n) times; p counts those iterations
    for (i = 1; i < n; i = i * 2) {
        p++;
        printf("%d ", i);
    }

    //------------third line to analyze-------------//
    // O(log(p)): the same doubling pattern, but bounded by p, and
    // since p = log2(n) after the previous loop, this is O(log(log(n)))
    for (j = 1; j < p; j = j * 2)
        printf("%d ", j);

    return 0;
}
For the first line, there is a single variable p being assigned once, so the time complexity is constant:
we can say that int p = 0 is O(1), and we start with the function f(n) = O(1).
After that we have the first loop. Its counter grows on a logarithmic scale (doubling, i.e. log base 2), so it is O(log(n)), since its bound depends on the variable n.
So the worst-case time complexity is now f(n) = O(1) + O(log(n)).
The third case is the same pattern as the second loop, so its time complexity is O(log(p)), since its bound is p. Note that the second loop is an independent part of the source code, not nested inside the first; if it were nested, its cost would be multiplied by the first loop's.
So the time complexity is now f(n) = O(1) + O(log(n)) + O(log(p)).
Now that we have the time complexity formula, we need to pick out the dominant term.
**O(LogLogn): the time complexity of a loop is considered O(LogLogn) if the loop variable is increased / decreased exponentially, e.g. raised to a constant power each iteration.
// Here c is a constant greater than 1
for (int i = 2; i <=n; i = pow(i, c)) {
// some O(1) expressions
}
//Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 0; i = fun(i)) {
// some O(1) expressions
}
So, with the ** reference above in mind, we can see that once we substitute p = log(n) into O(log(p)), the third loop costs O(log(log(n))). This is the answer to your 3rd question.
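To make this concrete, here is a small C sketch of my own (not part of the original code) that counts the iterations of both loops for growing n:

#include <stdio.h>

int main(void) {
    // For each n, p counts the iterations of the first (doubling) loop,
    // and q counts the iterations of the second loop, whose bound is p;
    // q grows like log2(log2(n)).
    long long ns[] = {16, 256, 65536, 4294967296LL};
    for (int t = 0; t < 4; t++) {
        long long n = ns[t];
        int p = 0, q = 0;
        for (long long i = 1; i < n; i *= 2) p++;
        for (int j = 1; j < p; j *= 2) q++;
        printf("n=%10lld  p=%2d  q=%d\n", n, p, q);
    }
    return 0;
}

Each time n is squared, p doubles but q only goes up by one, which is exactly the O(log(log(n))) behaviour.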
reference: time complexity analysis
Given this algorithm (a > 0, b > 0):
while (a >= b) {
    k = 1;
    while (a >= k*b) {
        a = a - k*b;
        k++;
    }
}
My question: I have to find the time complexity of this algorithm, and to do so I must find the number of instructions, but I couldn't work it out. Is there a way to find this number, and if not, how can I find its time complexity?
What I have done: First of all, I tried to find the number of iterations of the inner loop, and I found a pattern: a_i = a - (i(i+1)/2)*b, where i is the number of iterations. I've spent hours manipulating it but couldn't find anything conclusive (I found weird results like q² <= a/b < q² + q, where q is the number of iterations).
You have correctly calculated that the value of a after the i-th iteration of the inner loop is:

a_i = a_j0 - (i(i+1)/2) * b

where a_j0 is the value of a at the start of the j-th outer loop. The stopping condition for the inner loop is:

a_j0 - (i(i+1)/2) * b < (i+1) * b

which can be solved as a quadratic inequality:

(i+1)(i+2) > 2 * a_j0 / b,  i.e.  i ≈ sqrt(2 * a_j0 / b)

Therefore the inner loop is approximately O(sqrt(a_j0 / b)). The next starting value of a satisfies:

a_(j+1)0 < (i+1) * b ≈ sqrt(2 * b * a_j0)

scaling roughly as sqrt(2b * a_j0). It would be quite tedious to compute the time complexity exactly, so let's apply the above approximations from here on:

a_n ≈ sqrt(2 * b * a_(n-1)),   t_n ≈ sqrt(2 * a_(n-1) / b)

where a_n replaces a_j0, and t_n is the run-time of the n-th inner loop; of course the total time complexity is just the sum of the t_n. Note that the first term is given by n = 1, and that the input value of a is defined to be a_0.
Before directly solving this recurrence, note that since the second term t_2 is already proportional to the square root of the first term t_1, the first term dominates all other terms in the sum.
The total time complexity is therefore just O(sqrt(a / b)).
Update: numerical tests.
Note that, since all changes in the value of a are proportional to b, and all loop conditions are also proportional to b, the function can be "normalized" by setting b = 1 and only varying a.
Javascript test function, which measures the number of times that the inner loop executes:
function T(n)
{
let t = 0, k = 0;
while (n >= 1) {
k = 1;
while (n >= k) {
n -= k;
k++; t++;
}
}
return t;
}
A plot of sqrt(n) against T(n) (figure not reproduced here) gives a convincing straight line, which confirms that the time complexity is indeed half-power.
From a popular definition, a loop or recursion that runs a constant number of times is considered O(1).
For example the following loop is O(1)
// Here c is a constant
for (int i = 1; i <= c; i++) {
// some O(1) expressions
}
The time complexity of a loop is considered O(n) if the loop variable is incremented / decremented by a constant amount.
For example following functions have O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
// some O(1) expressions
}
I got a little confused with the following example. Let's take c = 5; according to the O(1) definition, the code below becomes O(1):
for (int i = 0; i < 5; i++) {
    cout << "Hello" << endl;
}
Function 1:
for (int i = 0; i < len(array); i += 2) {
    if (key == array[i])
        cout << "Element found";
}
Function 2:
for (int i = 0; i < len(array); i++) {
    if (key == array[i])
        cout << "Element found";
}
But when we compare the above 2 examples, will they both become O(n), or is the first function O(1) by the definition? What exactly does a loop running a constant number of times mean?
Assuming that len(array) is the n we're talking about [*], both your functions are O(n).
Function 2 will execute the if n times (once for each element of the array), making it obviously O(n).
Function 1, on the other hand, will execute the if n/2 times (once for every other element in the array), leading to a run time of O(n*1/2), and since constant factors (1/2 in this case) are usually omitted in O notation, you'll again end up with O(n).
[*] For the sake of completeness: if your array were of a fixed size, i.e. len(array) were a constant, then both functions would be O(1).
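If it helps to see the constant factor disappear, here is a small C sketch of my own (len is just a stand-in for len(array)) that counts how many times the if runs in each version:

#include <stdio.h>

int main(void) {
    // Count how many times the "if" would run in each version for a
    // few array lengths: Function 1 does len/2 checks, Function 2 does
    // len checks; both grow linearly with len, i.e. O(n).
    for (int len = 8; len <= 64; len *= 2) {
        int c1 = 0, c2 = 0;
        for (int i = 0; i < len; i += 2) c1++;  // Function 1's loop
        for (int i = 0; i < len; i++)    c2++;  // Function 2's loop
        printf("len=%2d  function1=%2d  function2=%2d\n", len, c1, c2);
    }
    return 0;
}

Doubling len doubles both counts, which is exactly linear growth; the factor of 2 between them is what big-O discards.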
"Loop running a costant number of times" means the loop runs a number of times that is limited from above by a constant, i.e. a given number that is indipendent from the input of your program.
Both in function 1 and 2 (unless the lenghts of the arrays are fixed or you can prove they'll never be grater than a specific constant, indipendently of the input) the if will be execute a number of time that depends on the size of the input so the time complexity can't be O(1).
"Time Complexity of a loop is considered as O(n) if the loop variables is incremented / decremented by a constant amount" is a misleading definition
I recently learned about formal Big-O analysis of algorithms; however, I don't see why these 2 algorithms, which do virtually the same thing, would have drastically different running times. The algorithms both print numbers 0 up to n. I will write them in pseudocode:
Algorithm 1:
def countUp(int n){
for(int i = 0; i <= n; i++){
print(n);
}
}
Algorithm 2:
def countUp2(int n){
for(int i = 0; i < 10; i++){
for(int j = 0; j < 10; j++){
... (continued so that this can print out all values 0 - Integer.MAX_VALUE)
for(int z = 0; z < 10; z++){
print("" + i + j + ... + k);
if(("" + i + j + k).stringToInt() == n){
quit();
}
}
}
}
}
So, the first algorithm runs in O(n) time, whereas the second algorithm (depending on the programming language) runs in something close to O(n^10). Is there anything about the code that causes this to happen, or is it simply the absurdity of my example that "breaks" the math?
In countUp, the loop hits all numbers in the range [0,n] once, thus resulting in a runtime of O(n).
In countUp2, you do essentially the same thing a bunch of times. The bounds on all your loops are 10.
Say you have 3 loops running with a bound of 10. The outer loop does 10 iterations, the inner does 10x10, the innermost does 10x10x10. So, worst case, your innermost loop will run 1000 times, which is essentially constant time. So, for k loops with bounds [0, 10), your runtime is 10^k which, again, can be called constant time, O(1), since it does not depend on n for worst-case analysis.
Assuming you can write enough loops and that the size of n is not a factor, you would need a loop for every single digit of n. The number of digits in n is int(math.floor(math.log10(n))) + 1; let's call this dig. So, a stricter upper bound on the number of iterations would be 10^dig (which can be reduced to O(n); proof is left to the reader as an exercise).
When analyzing the runtime of an algorithm, one key thing to look for is the loops. In algorithm 1, you have code that executes n times, making the runtime O(n). In algorithm 2, you have nested loops that each run 10 times, so with three loops you have a runtime of 10^3 steps, which is O(1). This is because the code runs the innermost loop 10 times for each run of the middle loop, which in turn runs 10 times for each run of the outermost loop, so the code runs 10x10x10 times. (This is purely an upper bound, however, because the if-statement may end the algorithm before the looping is complete, depending on the value of n.)
To count up to n in countUp2, you need the same number of loops as the number of digits in n: so log(n) loops. Each loop can run 10 times, so the total number of iterations is 10^log(n), which is O(n).
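Here is a quick C sketch of that digit-count argument (my own check, assuming n >= 1):

#include <stdio.h>
#include <math.h>

int main(void) {
    // For a few values of n, compare n with 10^(number of digits):
    // the nested-loop construction does 10^digits iterations in the
    // worst case, which is at most 10*n, i.e. still O(n).
    long ns[] = {7, 42, 999, 12345};
    for (int t = 0; t < 4; t++) {
        long n = ns[t];
        int digits = (int)floor(log10((double)n)) + 1;
        long iterations = 1;
        for (int d = 0; d < digits; d++)
            iterations *= 10;   // 10^digits, computed without pow()
        printf("n=%5ld  digits=%d  10^digits=%6ld\n", n, digits, iterations);
    }
    return 0;
}

In every row, 10^digits < 10*n, so the 10^log(n) bound really is linear in n.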
The first runs in O(n log n) time, since print(n) outputs O(log n) digits.
The second program assumes an upper limit for n, so it is trivially O(1). When we do complexity analysis, we assume a more abstract version of the programming language where (usually) integers are unbounded but arithmetic operations still perform in O(1). In your example you're mixing up the actual programming language (which has bounded integers) with this more abstract model (which doesn't). If you rewrite the program[*] so that it has a dynamically adjustable number of loops depending on n (so if your number n has k digits, then there are k+1 nested loops), then it does one iteration of the innermost code for each number from 0 up to the next power of 10 after n. The inner loop does O(log n) work[**] as it constructs the string, so overall this program too is O(n log n).
[*] you can't use for loops and variables to do this; you'd have to use recursion or something similar, and an array instead of the variables i, j, k, ..., z.
[**] that's assuming your programming language optimizes the addition of k length-1 strings so that it runs in O(k) time. The obvious string concatenation implementation would be O(k^2) time, meaning your second program would run in O(n(log n)^2) time.
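To illustrate the footnote's point about string-building cost, here is a hedged C sketch of my own contrasting the two styles (K is an arbitrary digit count chosen for the demo):

#include <stdio.h>
#include <string.h>

#define K 5   // arbitrary number of digits for the demo

int main(void) {
    char s[K + 1];

    // O(k^2) style: each strcat re-scans the whole existing string to
    // find its end before appending one character.
    s[0] = '\0';
    for (int d = 0; d < K; d++)
        strcat(s, "1");
    printf("concatenated: %s\n", s);

    // O(k) style: append by index, touching each position exactly once.
    for (int d = 0; d < K; d++)
        s[d] = '1';
    s[K] = '\0';
    printf("indexed:      %s\n", s);

    return 0;
}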
I am having a hard time understanding the efficiency of an algorithm: how do you really determine that a particular statement or part is lg n, O(n), or log base 2 (n)?
I have two examples over here.
doIt() can be expressed as O(n^2).
First example.
i=1
loop (i<n)
doIt(…)
i=i × 2
end loop
The cost of the above is as follows:
i=1 ... 1
loop (i<n) ... lg n
doIt(…) ... n^2 lg n
i=i × 2 ... lg n
end loop
Second example:
static int myMethod(int n){
int i = 1;
for(i = 1; i <= n; i = i * 2)
doIt();
return 1;
}
The cost of the above is as follows:
static int myMethod(int n){ ... 1
int i = 1; ... 1
for(i = 1; i <= n; i = i * 2) ... log base 2 (n)
doIt(); ... log base 2 (n) * n^2
return 1; ... 1
}
All this has left me wondering: how do you really find out which cost is which? I've been asking around, trying to understand, but no one has been able to explain it to me. I badly want to understand how to determine the cost. Can anyone help me with this?
The big O notation is not measuring how long the program will run. It says how fast the running time increases as the size of the problem grows.
For example, if calculating something is O(1), that could be a very long time, but it is independent of the size of the problem.
Normally, you're not expected to estimate the cost of things like the loop counter itself (storing one integer value and changing it N times is too minor to include in the estimate).
What really matters is that, in terms of Big-O, Big-Theta etc., you're expected to find a functional dependence, i.e. a function of one argument (N) for which:
Big-O: the algorithm's operation count grows no faster than F(N)
Big-Theta: the algorithm's operation count grows at the same rate as F(N)
Big-Omega: the algorithm's operation count grows no slower than F(N)
So remember: you're not trying to find the exact number of operations; you're trying to find a functional estimate for it, i.e. a functional dependence between the amount of incoming data N and some function of N that indicates how fast the operation count grows.
So O(1), for example, indicates that the whole algorithm does not depend on N (it is constant). You can read more here.
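For reference, the standard formal definitions behind those three statements (writing f(N) for the exact operation count) can be stated as:

f(N) = O(F(N))      iff there exist c > 0 and N_0 such that f(N) <= c * F(N) for all N >= N_0
f(N) = Omega(F(N))  iff there exist c > 0 and N_0 such that f(N) >= c * F(N) for all N >= N_0
f(N) = Theta(F(N))  iff f(N) = O(F(N)) and f(N) = Omega(F(N))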
Also, there are different types of estimations. You can estimate memory or execution time, for example; those will be different estimations in the general case.
I am reading some information on time complexity and I'm quite confused as to how the following time complexities are derived, and whether there is a particular set of rules or methods for working this out.
1)
Input: int n
for(int i = 0; i < n; i++){
print("Hello World, ");
}
for(int j = n; j > 0; j--){
print("Hello World");
}
Tight: 6n + 5
Big O: O(n)
2)
Input: l = array of comparable items
Output: l = array of sorted items
Sort:
for(int i = 0; i < l.length; i++){
    for(int j = 0; j < l.length; j++){
        if(l[i] > l[j]){
            Swap(l[i], l[j]);
        }
    }
}
return l;
Worst Case Time Complexity: 4n^2 + 3n + 2 = O(n^2)
For a given algorithm, time complexity or Big O is a way to provide a fair estimate of the "total elementary operations performed by the algorithm" in relation to the given input size n.
Type-1
Lets say you have an algo like this:
a=n+1;
b=a*n;
There are 2 elementary operations in the above code. No matter how big your n is, the computer will always perform 2 operations, since the algorithm does not depend on the size of the input, so the Big-O of the above code is O(1).
Type-2
For this code:
for(int i = 0; i < n; i++){
a=a+i;
}
I hope you can see why the Big-O here is O(n): the elementary operation count depends directly on the size of n.
Type-3
Now what about this code:
//Loop-1
for(int i = 0; i < n; i++){
print("Hello World, ");
}
//Loop-2
for(int i = 0; i < n; i++){
for(int j = 0; j < n; j++) {
x=x+j;
}
}
As you can see, loop-1 is O(n) and loop-2 is O(n^2). So it feels like the total complexity should be O(n) + O(n^2). But no: the time complexity of the above code is O(n^2). Why? Because we are trying to give a fair enough count of the elementary operations performed by the algorithm for a given input size n, one that is comparatively easy for another person to understand. By this logic, O(n) + O(n^2) becomes O(n^2), and O(n^2) + O(n^3) + O(n^4) becomes O(n^4)!
Again, you may ask: but how? How do all the lower powers of Big-O become so insignificant, when added to a higher power of Big-O, that we can completely omit them when describing the complexity of our algorithm to another human?
I will try to show the reason for this case: O(n) + O(n^2) = O(n^2).
Let's say n = 1000. Then the exact count for the O(n) part is 1000 operations, and the exact count for the O(n^2) part is 1000 * 1000 = 1000000. So the O(n^2) part is 1000 times bigger than the O(n) part, which means your program will spend most of its execution time in the O(n^2) part, and thus it is not worth mentioning that your algorithm also has some O(n) work.
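A tiny C sketch of my own that makes the dominance visible as n grows:

#include <stdio.h>

int main(void) {
    // Compare the O(n) part with the O(n^2) part as n grows: the
    // quadratic term quickly accounts for almost all of the work.
    for (long long n = 10; n <= 100000; n *= 10) {
        long long linear = n;
        long long quadratic = n * n;
        printf("n=%7lld  n ops=%7lld  n^2 ops=%12lld  ratio=%lld\n",
               n, linear, quadratic, quadratic / linear);
    }
    return 0;
}

The ratio column is just n itself: every 10x increase in n makes the quadratic part 10x more dominant.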
PS. Pardon my English :)
In the first example, you run through the n values twice: the first loop counts i from 0 up to n, and the second counts j from n down to 0. So, to simplify, we can say that it takes you 2n steps. When dealing with Big O notation, you should keep in mind that we only care about the bounds:
As a result, O(2n)=O(n)
and O(an+b)=O(n)
Input: int n // operation 1
for(int i = 0; i < n; i++){ // operation 2
print("Hello World, "); // Operation 3
}
for(int j = n; j > 0; j--) // Operation 4
{
print("Hello World"); //Operation 5
}
As you can see, we have a total of 5 operations outside the loops.
Inside the first loop, we do three internal operations: checking whether i is less than n, printing "Hello World", and incrementing i.
Inside the second loop, we also have three internal operations.
So, the total number of operations that we need is: 3n (for the first loop) + 3n (for the second loop) + 5 (operations outside the loops). As a result, the total number of steps required is 6n + 5 (that is your tight bound).
As I mentioned before, O(an + b) = O(n), because once an algorithm is linear, a and b do not have a great impact when n is very large.
So, your time complexity becomes: O(6n + 5) = O(n).
You can use the same logic for the second example keeping in mind that two nested loops take n² instead of n.
I will slightly modify John's answer. Defining n is one constant operation; defining integer i and assigning it 0 is 2 constant operations; defining integer j and assigning it n is another 2 constant operations. Checking the conditions for i and j inside the for loops, the increments, and the print statements all depend on n, so the total is 3n + 3n + 5, which equals 6n + 5. Here we cannot skip any of the statements during execution, so the average-case running time will also be the worst-case running time, which is O(n).
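To sanity-check the 6n + 5 figure, here is a small C sketch of my own that tallies operations under the same counting convention as the answers above (5 one-time operations, then 3 operations per loop iteration; that convention itself is the assumption here):

#include <stdio.h>

int main(void) {
    int n = 10;
    long ops = 5;            // the 5 operations outside the loops
    for (int i = 0; i < n; i++)
        ops += 3;            // condition check + print + increment
    for (int j = n; j > 0; j--)
        ops += 3;            // condition check + print + decrement
    printf("n=%d  counted=%ld  6n+5=%d\n", n, ops, 6 * n + 5);
    return 0;
}

For any n it prints the same value in both columns, matching the tight bound derived above.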