Why does this code have complexity O(1)?
function logAtMost5(n) {
  for (let i = 1; i <= Math.min(5, n); i++) {
    console.log(i);
  }
}
This function does at most 5 iterations, which is a constant number of operations.
A constant number of operations is O(1).
If there were n instead of Math.min(5, n), the loop would do n iterations, which would be linear, O(n).
Simply put: because the for loop never iterates more than 5 times, its number of iterations never exceeds a fixed constant.
Graph of y = min(x,5)
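As a quick sanity check, here is a small instrumented sketch (written in Java, since the later questions use Java; the class and helper names are made up for this illustration). It returns the iteration count instead of printing, and the count never exceeds 5 no matter how large n gets.

public class LogAtMost5Demo {
    // Instrumented version of logAtMost5: counts the loop iterations
    // instead of printing them, so the constant bound is easy to observe.
    static int countAtMost5(int n) {
        int iterations = 0;
        for (int i = 1; i <= Math.min(5, n); i++) {
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        System.out.println(countAtMost5(3));        // prints 3
        System.out.println(countAtMost5(1000000));  // prints 5 -- capped by the constant
    }
}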
Related
I'm struggling to understand the concepts behind calculating time complexity. I have this code in C; why is the time complexity O(n) and not O(n log n)?
The first loop runs for at most 3 iterations, the outer for loop has logarithmic complexity, and each of its iterations does linear work.
int f2(int n)
{
    int j, k, cnt = 0;
    do
    {
        ++n;
    } while (n % 3);
    for (j = n; j > 0; j /= 3)
    {
        k = j;
        while (k > 0)
        {
            cnt++;
            k -= 3;
        }
    }
    return cnt;
}
Why do we neglect the log factor?
T = n + n/3 + n/9 + ... + 1
Multiplying the geometric series by 3 shifts it by one term, so almost everything cancels:
3*T - T = 3*n - 1
T = 1.5*n - 0.5
so it's O(n).
It is a common beginner's mistake to reason as follows:
the outer loop follows a decreasing geometric progression, so it iterates O(log n) times.
the inner loop follows an arithmetic progression, so its complexity is O(n).
hence the global complexity is O(n+n+n+...)=O(n log n).
The mistake is due to the lack of rigor in the notation. The correct approach is
the outer loop follows a decreasing geometric progression, so it iterates O(log n) times.
the inner loop follows an arithmetic progression, so its complexity is O(j) (not O(n) !), where j decreases geometrically.
hence the global complexity is O(n+n/3+n/9+...)=O(n).
The time complexity of your program is O(n),
because the total number of calculations is: log(n)/log(3) + 2 * (n/3 + n/9 + n/27 + ...) < log(n) + n < 2*n for large enough n.
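To check the linear growth empirically, here is a small sketch; the Java port of f2 and the class name are just for this illustration, not from the original post.

public class F2Demo {
    // Java port of the C function f2, for measurement only; cnt ends up
    // equal to the total number of inner-loop iterations.
    static int f2(int n) {
        int j, k, cnt = 0;
        do {
            ++n;
        } while (n % 3 != 0);
        for (j = n; j > 0; j /= 3) {
            k = j;
            while (k > 0) {
                cnt++;
                k -= 3;
            }
        }
        return cnt;
    }

    public static void main(String[] args) {
        for (int n = 1000; n <= 1000000; n *= 10) {
            int cnt = f2(n);
            // the inner loops do roughly n/3 + n/9 + ... ~ n/2 iterations in
            // total, so cnt/n settles near 0.5 -- linear in n, matching O(n)
            System.out.println(n + " -> cnt = " + cnt + ", cnt/n = " + (double) cnt / n);
        }
    }
}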
Question 1
public void guessWhat1(int N) {
    for (int i = N; i > 0; i = i / 2) {
        for (int j = 0; j < i * 2; j += 1) {
            System.out.println("Hello World");
        }
    }
}
The first loop will run for log(n).
The second loop will run for log(n).
The upper bound is O(log^2(n)). What would be Big Θ?
Question 2
public void guessWhat2(int N) {
    int i = 1, s = 1;
    while (s <= N) {
        i += 1;
        s = s + i;
    }
}
The upper bound for this is O(n). I am not quite sure about the Big Θ.
It would be great if someone could clarify these. Thank you.
Let's first get the definitions of the notations clear.
Big O: it denotes an upper bound on the running time of the algorithm.
Big Theta: it denotes a tight bound, i.e. the running time is bounded both above and below by the same function, up to constant factors.
For your first question
public void guessWhat1(int N) {
    for (int i = N; i > 0; i = i / 2) {
        for (int j = 0; j < i * 2; j += 1) {
            System.out.println("Hello World");
        }
    }
}
For i=N the inner loop runs 2N times, for i=N/2 it runs N times, for i=N/4 it runs N/2 times, and so on.
So the total work is 2N + N + N/2 + N/4 + ... + 1
= N(2 + 1 + 1/2 + 1/4 + ... + 1/N)
= N(3 + 1 - 1/N)    [since 1/2 + 1/4 + ... + 1/N = 1 - 1/N, i.e. (0.5)^log N = 1/N]
= N(4 - 1/N) = O(N)
So the complexity is O(N), and in Theta notation it is the same, Θ(N), because the loops do the same amount of work for every input of size N.
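As a rough check, here is an instrumented sketch (class and method names are made up for this answer) that counts the inner-loop executions of guessWhat1 instead of printing; the ratio count/N stays bounded by 4:

public class GuessWhat1Demo {
    // Counts how many times the inner body of guessWhat1 would execute.
    static long countGuessWhat1(int N) {
        long count = 0;
        for (int i = N; i > 0; i = i / 2) {
            for (int j = 0; j < i * 2; j += 1) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int N = 1000; N <= 1000000; N *= 10) {
            long c = countGuessWhat1(N);
            // 2N + N + N/2 + ... < 4N, so count/N stays below 4 -- Theta(N)
            System.out.println(N + " -> " + c + " (count/N = " + (double) c / N + ")");
        }
    }
}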
For your second question
public void guessWhat2(int N) {
    int i = 1, s = 1;
    while (s <= N) {
        i += 1;
        s = s + i;
    }
}
The while loop takes O(sqrt(N)): after k iterations s = 1 + 2 + ... + (k+1) ≈ k^2/2, so the loop stops once k reaches roughly sqrt(2N). As above, the Theta bound is the same as the big O bound here: Θ(sqrt(N)).
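Here is a similar sketch for guessWhat2 (again, names invented for illustration) that counts the while-loop iterations and compares them with sqrt(2N):

public class GuessWhat2Demo {
    // Returns the number of while-loop iterations of guessWhat2.
    static int countGuessWhat2(int N) {
        int i = 1, s = 1, steps = 0;
        while (s <= N) {
            i += 1;
            s = s + i;
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int N = 100; N <= 1000000; N *= 100) {
            int steps = countGuessWhat2(N);
            // after k iterations s = 1 + 2 + ... + (k+1) ~ k^2/2,
            // so the loop stops after roughly sqrt(2N) iterations
            System.out.println(N + " -> " + steps + " iterations (sqrt(2N) = " + Math.sqrt(2.0 * N) + ")");
        }
    }
}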
The Theta bound differs from the big O bound when different inputs of the same size take different amounts of time. Let's take insertion sort as an example (https://en.wikipedia.org/wiki/Insertion_sort), where N is the size of the input array: if the input array is already sorted it takes linear time, but if it is reverse sorted it takes N^2 time to sort.
So for insertion sort the time complexity is O(N^2).
For the best case it is Θ(N) and for the worst case it is Θ(N^2).
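To see that best/worst-case gap concretely, here is a small insertion sort sketch with a comparison counter (an illustration written for this answer, not taken from the linked article):

import java.util.stream.IntStream;

public class InsertionSortDemo {
    // Insertion sort that counts element comparisons, to show the gap between
    // the best case (already sorted) and the worst case (reverse sorted).
    static long insertionSort(int[] a) {
        long comparisons = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0) {
                comparisons++;
                if (a[j] <= key) break;   // element already in place -> stop early
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = IntStream.range(0, n).toArray();
        int[] reversed = IntStream.range(0, n).map(i -> n - i).toArray();
        System.out.println("already sorted: " + insertionSort(sorted) + " comparisons (~N)");
        System.out.println("reverse sorted: " + insertionSort(reversed) + " comparisons (~N^2/2)");
    }
}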
I've computed the complexity of the algorithm below as follows:
for i = 0 to m
    for j = 0 to n
        //Process of O(1)
Complexity: O(m * n)
This is a simple example of O(m * n). But I'm not able to figure out how O(m+n) is computed. Any sample example?
O(m+n) means O(max(m,n)). A code example:
for i = 0 to max(m,n)
    //Process
The time complexity of this example is linear to the maximum of m and n.
for i = 0 to m
    //process of O(1)
for i = 0 to n
    //process of O(1)
The time complexity of this procedure is O(m+n).
You often get O(m+n) complexity for graph algorithms. It is, for example, the complexity of a simple graph traversal such as BFS or DFS, where n = |V| stands for the number of vertices and m = |E| for the number of edges of the graph G=(V,E).
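For reference, here is a minimal adjacency-list BFS sketch (the representation and names are assumptions made for this illustration). Each vertex is enqueued at most once and each adjacency list is scanned once, which is where O(|V| + |E|) comes from.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BfsDemo {
    // BFS over an adjacency list: O(|V|) enqueue/dequeue work plus
    // O(|E|) adjacency-list scanning, i.e. O(|V| + |E|) in total.
    static void bfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new ArrayDeque<>();
        visited[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            System.out.println("visit " + v);
            for (int w : adj.get(v)) {
                if (!visited[w]) {
                    visited[w] = true;
                    queue.add(w);
                }
            }
        }
    }

    public static void main(String[] args) {
        // small example graph with edges 0-1, 0-2, 1-3
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        bfs(adj, 0);
    }
}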
The Knuth-Morris-Pratt string-searching algorithm is an example.
http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm#Efficiency_of_the_KMP_algorithm
The string you're looking for (the needle or the pattern) is length m and the text you're searching through is length n. There is preprocessing done on the pattern which is O(m) and then the search, with the preprocessed data, is O(n), giving O(m + n).
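A condensed sketch of those two phases (the standard failure-table formulation, written here for illustration; the class and method names are mine):

public class KmpDemo {
    // Returns the index of the first occurrence of pattern in text, or -1.
    static int kmpSearch(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        if (m == 0) return 0;

        // Phase 1, O(m): fail[i] = length of the longest proper prefix of
        // pattern[0..i] that is also a suffix of it.
        int[] fail = new int[m];
        for (int i = 1, k = 0; i < m; i++) {
            while (k > 0 && pattern.charAt(i) != pattern.charAt(k)) k = fail[k - 1];
            if (pattern.charAt(i) == pattern.charAt(k)) k++;
            fail[i] = k;
        }

        // Phase 2, O(n): scan the text once, never moving backwards in it.
        for (int i = 0, k = 0; i < n; i++) {
            while (k > 0 && text.charAt(i) != pattern.charAt(k)) k = fail[k - 1];
            if (text.charAt(i) == pattern.charAt(k)) k++;
            if (k == m) return i - m + 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(kmpSearch("ababcabcabababd", "ababd")); // prints 10
    }
}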
The example you have above is a nested for loop. When you have nested loops with two different inputs m and n (both considered very large), the complexity is said to be multiplicative: for the outer loop you write O(m), for the inner loop you write O(n), and because they are nested you combine them as O(m) * O(n), i.e. O(m * n).
static void MultiplicativeComplexity(int[] arr1, int[] arr2)
{
    int i = 0;
    while (i < arr1.Length)
    {
        Console.WriteLine(arr1[i]);
        int j = 0;                      // reset j for every outer iteration
        while (j < arr2.Length)
        {
            Console.WriteLine(arr2[j]);
            j++;
        }
        i++;
    }
}
Similarly, when you have 2 loops that are not nested and have 2 different inputs m and n (both considered very large), the complexity is said to be additive:
for the first loop you write the complexity O(m), for the second loop you write O(n), and because the loops are separate you can write the complexity as O(m) + O(n), i.e. O(m + n).
static void AdditiveComplexity(int[] arr1, int[] arr2)
{
    int i = 0;
    int j = 0;
    while (i < arr1.Length)
    {
        Console.WriteLine(arr1[i]);
        i++;
    }
    while (j < arr2.Length)
    {
        Console.WriteLine(arr2[j]);
        j++;
    }
}
Note: the code above uses int arrays purely for illustration. I have also used while loops; it makes no difference whether it's a while or a for loop when calculating complexity.
Hope this helps.
The complexity given for the following problem is O(n). Shouldn't it be
O(n^2)? After all, the outer loop is O(n) and the inner loop is also O(n), so n * n = O(n^2).
The answer sheet for this question states that the answer is O(n). How is that possible?
public static void q1d(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;
        for (int j = 0; j < n; j++) {
            count++;
        }
    }
}
The complexity given for the following problem is O(n^2). How do you obtain that? Can someone please elaborate?
public static void q1E(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;
        for (int j = 0; j < n / 2; j++) {
            count++;
        }
    }
}
Thanks
The first example is O(n^2), so it seems they've made a mistake. To calculate the second example (informally), we can do n * (n/2) = (n^2)/2 = O(n^2). If this doesn't make sense, you need to go and brush up on what it means for something to be O(n^k).
The complexity of both pieces of code is O(n*n).
FIRST
The outer loop runs n times, and on each of those iterations the inner loop runs n times (j goes from 0 to n-1),
so
total = n + n + n + ... + n   (n terms)
which sums to n * n, i.e. O(n*n).
SECOND
The outer loop runs n times, and on each of those iterations the inner loop runs n/2 times (j goes from 0 to n/2 - 1),
so
total = n/2 + n/2 + ... + n/2   (n terms)
which sums to n * n / 2; dropping the constant factor, this is also O(n*n).
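Here is an instrumented sketch (hypothetical helper names) that returns the counts, so you can check the n*n and n*(n/2) figures directly:

public class CountDemo {
    // Instrumented versions of q1d and q1E that return their count,
    // so the quadratic growth is easy to check empirically.
    static long q1d(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            count++;
            for (int j = 0; j < n; j++) count++;
        }
        return count;                   // n + n*n
    }

    static long q1E(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            count++;
            for (int j = 0; j < n / 2; j++) count++;
        }
        return count;                   // n + n*(n/2)
    }

    public static void main(String[] args) {
        for (int n = 100; n <= 10000; n *= 10) {
            System.out.println(n + ": q1d = " + q1d(n) + " (~n^2), q1E = " + q1E(n) + " (~n^2/2)");
        }
    }
}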
The first case is definitely O(n^2).
The second is O(n^2) as well, because you omit constant factors when calculating big O.
Your answer sheet is wrong, the first algorithm is clearly O(n^2).
Big-Oh notation gives an upper ("worst case") bound, and when calculating the Big-Oh value we generally ignore multiplications / divisions by constants.
That being said, your second example is also O(n^2) in the worst case because, although the inner loop is "only" 1/2 n, the n is the clear bounding factor. In practice the second algorithm will be less than O(n^2) operations -- but Big-Oh is intended to be a "worst case" (ie. maximal bounding) measurement, so the exact number of operations is ignored in favor of focusing on how the algorithm behaves as n approaches infinity.
Both are O(n^2). Your answer is wrong. Or you may have written the question incorrectly.
What is the time and space complexity of:
int superFactorial4(int n, int m)
{
    if (n <= 1)
    {
        if (m <= 1)
            return 1;
        else
            n = m -= 1;
    }
    return n * superFactorial4(n - 1, m);
}
It runs recursively by decreasing the value of n by 1 until it equals 1 and then it will either decrease the value of m by 1 or return 1 in case m equals 1.
I think the complexity depends on both n and m so maybe it's O(n*m).
Actually, it looks to be closer to O(n + m^2) to me; n is only used for the first "cycle".
Also, in any language that doesn't do tail-call optimization the space complexity is likely to be "it fails" (stack overflow). In languages that support the optimization, the space complexity is more like O(1).
The time complexity is O(n+m^2), space complexity the same.
Reasoning: with a fixed value of m, the function makes n recursive calls to itself, each doing constant work, so the cost of the calls with fixed m is n. Now, when n reaches 1, both n and m become m-1. So the next fixed-m phase takes m-1 calls, the next m-2, and so on. So you get a sum (m-1)+(m-2)+...+1, which is O(m^2).
The space complexity is the same, because each recursive call takes constant space, the recursion only returns at the very end, and there is no tail recursion.
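A quick sketch to check that call-count estimate: it mirrors the recursion of superFactorial4 but only counts calls (the helper and its name are made up, since the real product would overflow almost immediately):

public class SuperFactorialDemo {
    static long calls = 0;

    // Same recursion shape as superFactorial4, but counting calls instead of
    // multiplying, to check the n + m^2/2 growth of the call count.
    static void countCalls(int n, int m) {
        calls++;
        if (n <= 1) {
            if (m <= 1) return;
            m = m - 1;      // mirrors n = m -= 1 in the original
            n = m;
        }
        countCalls(n - 1, m);
    }

    public static void main(String[] args) {
        int n = 50, m = 60;
        countCalls(n, m);
        // expected roughly n + (m-1) + (m-2) + ... + 1 = n + m*(m-1)/2
        System.out.println("calls = " + calls + ", n + m*(m-1)/2 = " + (n + m * (m - 1) / 2));
    }
}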
The time complexity of a Factorial function using recursion
pseudo code:
int fact(n)
{
    if (n == 0)
    {
        return 1;
    }
    else if (n == 1)
    {
        return 1;
    }
    else
    {
        return n * fact(n - 1);
    }
}
time complexity:
let T(n) be the number of steps taken to compute fact(n).
each call does a constant amount of work c (a couple of comparisons and one multiplication) plus a single recursive call, so
T(n) = T(n-1) + c
T(n-1) = T(n-2) + c
substituting this into T(n), we get
T(n) = T(n-2) + 2c
.
.
.
T(n) = T(n-k) + k*c
n-k = 1;
k = n-1;
T(n) = T(1) + (n-1)*c
since T(1) = 1
T(n) = 1 + (n-1)*c
now it is in the form of
T(n) <= c'*g(n) with g(n) = n
so the time complexity of factorial using recursion is
T(n) = O(n)
(note: the chain F(n) = n*F(n-1) = n*(n-1)*F(n-2) = ... = n! describes the value returned by fact(n), which does grow like n!, but the running time is governed by the number of recursive calls, not by the size of the returned value.)
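For completeness, here is a small sketch (written for this answer, not from the original pseudocode) that counts the recursive calls made by fact(n); the returned value grows like n!, while the number of calls is just n:

public class FactorialCallCount {
    static int calls = 0;

    // fact(n) with a call counter: the value grows like n!, but the number of
    // recursive calls (the work, counting each multiplication as one step) is linear.
    static long fact(int n) {
        calls++;
        if (n <= 1) return 1;
        return n * fact(n - 1);
    }

    public static void main(String[] args) {
        for (int n = 5; n <= 20; n += 5) {
            calls = 0;
            long value = fact(n);
            System.out.println("n = " + n + ": value = " + value + ", calls = " + calls);
        }
    }
}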