Big O notation for n/2 in while-loop - data-structures

I'm new to data structures and the like.
I would like to ask: how do we determine the Big-O complexity of the following process:
while (n % 2 == 0) {
    console.log(2);
    n = n / 2;
}
What is its Big-O complexity? Thanks in advance.

If n is odd, the loop is not executed at all. If n is even, it takes at most log2(n) (log base 2 of n) iterations until the loop stops, because n is halved on each iteration (n = n / 2;).
Assuming console.log(2); takes some constant time c, the overall complexity is O(log n).
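For intuition, here is a small sketch (in Python, mirroring the JavaScript above; the helper name count_halvings is just for illustration) that counts the iterations and compares them against log2(n):

import math

def count_halvings(n):
    # Mirrors the loop above: runs while n is even, halving n each time.
    iterations = 0
    while n % 2 == 0:
        n //= 2
        iterations += 1
    return iterations

for n in (7, 12, 64, 1024):
    print(n, count_halvings(n), math.log2(n))

The iteration count never exceeds log2(n), which is where the O(log n) bound comes from.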

Related

How to calculate Big O of nested for loop

I'm under the impression that to find the big O of a nested for loop, one multiplies the big O of each for loop by that of the next. Would the big O for:
for i in range(n):
    for j in range(5):
        print(i*j)
be O(5n)? And if so, would the big O for:
for i in range(12345):
    for j in range(i**i**i):
        for y in range(j*i):
            print(i, j, y)
be O(12345*(i**i**i)*(j*i))? Or would it be O(n^3) because it's nested 3 times?
I'm so confused.
This is a bit simplified, but hopefully will get across the meaning of Big-O:
Big-O is about the question "how many times does my code do something?", answering it in algebra, and then asking "which term matters the most in the long run?"
For your first example - the print statement is called 5n times: n iterations of the outer loop times 5 iterations of the inner loop. What matters most in the long run? Only n, because the value 5 never changes! So the overall Big-O complexity is O(n).
For your second example - the number of times the print statement is called is very large, but constant. The outer loop runs 12345 times, the inner loop runs one time, then 16 times, then 7625597484987... all the way up to 12345^12345^12345. The innermost loop goes up in a similar fashion. What we notice is all of these are constants! The number of times the print statement is called doesn't actually vary at all. When an algorithm runs in constant time, we represent this as O(1). Conceptually this is similar to the example above - just as 5n / 5 == n, 12345 / 12345 == 1.
The two examples you have chosen only involve stripping out the constant factors (which we always do in Big-O, they never change!). Another example would be:
def more_terms(n):
    for i in range(n):
        for j in range(n):
            print(n)
            print(n)
    for k in range(n):
        print(n)
        print(n)
        print(n)
For this example, the print statement is called 2n^2 + 3n times: for the first set of loops, n times for the outer loop, n times for the inner loop, and 2 prints inside the inner loop; for the second loop, n iterations with 3 prints each. First we strip out the constants, leaving n^2 + n. Now, what matters in the long run? When n is 1, neither really matters, but the bigger n gets, the bigger the difference becomes: n^2 grows much faster than n, so this function has complexity O(n^2).
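A quick way to sanity-check that count is to replace the prints with a counter (a minimal sketch of my own, not part of the original answer):

def count_prints(n):
    # Same loop structure as more_terms, but counting instead of printing.
    calls = 0
    for i in range(n):
        for j in range(n):
            calls += 2      # two prints inside the inner loop
    for k in range(n):
        calls += 3          # three prints per iteration of the second loop
    return calls

for n in (1, 10, 100):
    print(n, count_prints(n), 2*n**2 + 3*n)   # the last two columns match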
You are correct about O(n^3) for your second example. You can calculate big O like this: each additional loop nested over n multiplies the count by another factor of n. So with three nested loops the big O is O(n^3), and for any number of nested loops it is O(n^(number of loops)); a single loop is just O(n). Any constant multiple of n, such as O(5n), is also just O(n).
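To see why the power matches the nesting depth, here is a tiny counting sketch (my own illustration, not from the answer):

def count_triple(n):
    # Three loops nested over n: the innermost line runs n * n * n times.
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1
    return count

print(count_triple(8), 8**3)   # both print 512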
You misunderstand what O(n) means. It's hard to understand at first, so there's no shame in not getting it right away. O(n) means "this grows at most as fast as n". It has a rigorous mathematical definition, but it basically boils down to this: if f and g are both functions, f = O(g) means that you can pick some constant number C such that f(n) < C*g(n) for all sufficiently large n. Big O represents an upper bound, and it doesn't care about constant factors, so if f = O(5n), then f = O(n).
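As a concrete (if informal) check of that definition, take f(n) = 5n and g(n) = n; choosing C = 6 makes the inequality hold for every n >= 1 (the function names here are just for illustration):

def f(n):
    return 5 * n   # e.g. five prints per iteration of a loop of size n

def g(n):
    return n

C = 6
# f(n) < C * g(n) for all n >= 1, so f = O(g) = O(n).
print(all(f(n) < C * g(n) for n in range(1, 10000)))   # True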

Run time and space complexity of this code

I'm having a bit of trouble determining the big O of this solution I came up with for a CTCI question.
The space complexity should be O(1),
but is the run time O(N)? It seems closer to O(N^2) because of the while loop, yet the while loop does not run N times for every element of the for loop.
public static int[] missing_two(int[] n) {
    for (int i = 0; i < n.length; i++) {
        while (n[i] != i) {
            // Swap n[i] toward its correct slot until position i holds the value i.
            int temp = n[i];
            n[i] = n[n[i]];
            n[temp] = temp;
        }
    }
    return n;
}
Would 6,0,1,2,3,4,5 be an example of worst case here?
The while loop will run n times on the first index and 0 times afterwards. Therefore, shouldn't this be O(2n) => O(n)?
My understanding is that big O notation gives an upper bound, i.e. a worst-case scenario. The question you have to ask yourself is: could the input vector n contain values such that the while loop runs in full for every element of n? If so, the correct Big O would be O(N^2); and even if it only comes close, O(N^2) would still be a better estimate than O(N), since O(N) as an upper bound would already have been exceeded.
Now that you have edited the question and given more information, I agree with you. It is O(N).
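The key observation is that every swap puts at least one element into its final position, so the total number of swaps across the whole run is at most N, however they are distributed over the outer loop. A small sketch (a Python translation of the Java above with a swap counter added; it assumes the input is a permutation of 0..n-1, as in the question):

def place_and_count(a):
    # Same cycle-placement idea as the Java code, plus a swap counter.
    swaps = 0
    for i in range(len(a)):
        while a[i] != i:
            t = a[i]
            a[i], a[t] = a[t], t
            swaps += 1
    return swaps

print(place_and_count([6, 0, 1, 2, 3, 4, 5]))   # 6 swaps for 7 elements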

Time Complexity of the following code fragment?

I calculated it to be O(N^2), but my instructor marked it incorrect in the exam. The Correct answer was O(1). Can anyone help me, how did the time complexity come out to be O(1)?
The outer loop runs 2N times (it starts at int j = 2 * N and decrements by 1 each iteration).
Since N does not change and i is always reset to N (int i = N), the inner loop always runs log2(N) times.
(Notice how i changes: i = i div 2.)
Therefore, the complexity is O(N log N).
Question: What happens when you repeatedly halve the input (or search space)? (Like in binary search.)
Answer: You get log(N) complexity. (Reference: The Algorithm Design Manual by Steven S. Skiena.)
Look at the inner loop in your algorithm: i = i div 2 makes it a log(N) loop. Therefore the overall complexity is O(N log N).
Take this with a pinch of salt: whenever you divide your input (search space) by 2, 3, 4 or any other constant greater than 1, you get log(N) complexity.
P.S.: the complexity of your algorithm is nowhere near O(1).
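The fragment itself isn't reproduced above, but from the answers it appears to be an outer loop counting down from 2*N with an inner loop halving i from N. A hedged reconstruction in Python (the structure is inferred, so treat it as an assumption) that counts how many times the innermost statement runs:

import math

def count_ops(N):
    # Inferred structure: outer loop from 2*N down to 1, inner loop halving i from N.
    ops = 0
    j = 2 * N
    while j > 0:
        i = N
        while i > 0:
            ops += 1
            i //= 2          # i = i div 2
        j -= 1
    return ops

for N in (8, 64, 1024):
    print(N, count_ops(N), 2 * N * (int(math.log2(N)) + 1))

The count grows like 2N * (log2(N) + 1), i.e. O(N log N), and is clearly not O(1).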

Concept confusion, advice on solving these 2 code snippets

In O() notation, write the complexity of the following code:
For i = 1 to x
    call funct(i)
funct(x):
    if (x <= 0)
        return some value
    else
        ...
In O() notation, write the complexity of the following code:
For x = 1 to N
    ...
I'm really lost trying to solve these 2 big O complexity problems, please help!
They both appear to me to be O(N).
The first one subtracts 1 each time it calls itself, which means that, given N, it runs N times.
The second one divides N by 2, but Big-O is determined by the worst-case scenario, which means we must assume N gets significantly larger. Taking that into account, dividing by 2 does not make much of a difference: while it is nominally O(N/2), that reduces to O(N).
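A minimal sketch of the first pattern the answer describes (a recursive function that reduces its argument by 1, so it makes roughly N calls before hitting the base case; the names and the call counter are mine, since the original fragment is incomplete):

def funct(x, calls=0):
    # Base case, as in the fragment: a non-positive x returns some value.
    if x <= 0:
        return calls
    # Recursive case: reduce the argument by 1, so there are x calls in total.
    return funct(x - 1, calls + 1)

print(funct(10))   # 10 calls before reaching the base case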

Big O of this equation?

for (int j = 0, k = 0; j < n; j++)
    for (double m = 1; m < n; m *= 2)
        k++;
I think it's O(n^2) but I'm not certain. I'm working on a practice problem and I have the following choices:
O(n^2)
O(2^n)
O(n!)
O(n log(n))
Hmmm... well, break it down.
It seems obvious that the outer loop is O(n): j increases by 1 each iteration.
The inner loop, however, multiplies m by 2 each iteration. Exponentials are closely (in fact, inversely) related to logarithms.
Why have you come to the O(n^2) solution? Prove it.
It's O(n log2 n). The code block runs n * log2(n) times.
Suppose n = 16. Then the first loop runs 16 (= n) times, and the second loop runs 4 (= log2 n) times (m = 1, 2, 4, 8). So the inner statement k++ runs 64 times = n * log2(n) times.
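You can confirm the count by translating the loops into a quick sketch (my own illustration) and comparing k against n * log2(n):

import math

def count_k(n):
    # Mirrors the two loops: the outer increments j by 1, the inner doubles m.
    k = 0
    for j in range(n):
        m = 1
        while m < n:
            k += 1
            m *= 2
    return k

for n in (16, 64, 1024):
    print(n, count_k(n), n * int(math.log2(n)))   # counts match for powers of two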
Let's look at the worst-case behaviour. For the second loop, the value goes 1, 2, 4, 8, ... Say n is 2^k for some k >= 0; in the worst case we might keep doubling until 2^k and realise we have overshot. Now we know the point we passed lies between 2^(k-1) and 2^k, and the number of values in that range is 2^(k-1) (think about it for a second). The number of steps taken so far is O(k), which is O(log n), and the first loop is O(n) (too simple to work out). So the order of the whole code is O(n log n).
A generic way to approach these sorts of problems is to consider the order of each loop, and because they are nested, you can multiply the "O" notations.
Some simple rules for big "O":
O(n)O(m) = O(nm)
O(n) + O(m) = O(n + m)
O(kn) = O(n) where 'k' is some constant
The 'j' loop iterates across n elements, so clearly it is O(n).
The 'm' loop iterates across log(n) elements, so it is O(log(n)).
Since the loops are nested, our final result would be O(n) * O(log(n)) = O(n*log(n)).
