Is my understanding of this pseudo-code correct? If so, how do I calculate big theta of it?

This is the pseudo-code.
This is how I understand it:
Line 2 will execute n/5 times.
Line 4 will execute log(n) times.
Line 5 will execute j times.
So this means that line 6 will execute n/5 * log(n) * j times. Is this right?
If so, how do I continue from here to calculate big theta? How does j play into things?

Line 2 - not quite: i is multiplied by 5 each iteration, so the loop executes p times, where p is the power such that 5^p = n, i.e. log_5(n) times. i is not used below, so this just multiplies the complexity by log(n).
We can ignore the base of the log, that's just a multiplicative constant.
Line 4 - loops log n times
Line 5 - and the inner loop squares the previous loop (because sum 1..X = X*(X+1)/2, approximated as X^2)
So ... log^3(n) overall?
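The pseudocode itself is not reproduced above, so the following Python sketch is only a hypothetical reconstruction based on the descriptions in the question and this answer (line 2 multiplies i by 5, line 4 runs log n times, line 5 runs j times); counting executions of line 6 empirically tracks log^3(n):

import math

def count_line6(n):
    # Hypothetical reconstruction -- the original pseudocode is not shown
    ops = 0
    i = 1
    while i < n:                # "line 2": i *= 5, so ~log_5(n) iterations
        j = 1
        while 2 ** j <= n:      # "line 4": j = 1..log_2(n), so ~log(n) iterations
            for k in range(j):  # "line 5": j iterations
                ops += 1        # "line 6"
            j += 1
        i *= 5
    return ops

for n in [10**2, 10**4, 10**6]:
    print(n, count_line6(n) / math.log(n) ** 3)

The ratio hovering around a roughly constant value as n grows is consistent with Θ(log^3 n).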

Related

What is the Big O of this while loop?

Normally when I see a loop, I assume it is O(n). But this was given as an interview question and seems too easy.
let a = 1;
while (a < n) {
    a = a * 2;
}
Am I oversimplifying? It appears to simply compute the powers of 2.
Never assume that a loop is always O(n). Loops that iterate over every element of a sequential container (like arrays) normally have a time complexity of O(n), but it ultimately depends on the condition of the loop and how the loop iterates. In your case, a is doubling in value until it becomes greater than or equal to n. If you double n a few times, this is what you see:
n     # iterations
------------------
1     0
2     1
4     2
8     3
16    4
32    5
64    6
As you can see, the number of iterations is proportional to log(n), making the time complexity O(log n).
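Translating the loop to Python with a counter (a quick check of my own, not part of the original answer) reproduces the table:

def count_iterations(n):
    a, count = 1, 0
    while a < n:
        a *= 2      # a doubles each pass
        count += 1
    return count

for n in [1, 2, 4, 8, 16, 32, 64]:
    print(n, count_iterations(n))  # matches the table above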
Looks like a grows exponentially with the number of iterations, so the loop will likely complete in O(log(n)).
I haven't done all the math, but a is not LINEAR wrt n...
But if you put in a loop counter, that counter would approximate log_2(n).

What does the function return and what is Big-O notation for worst case? [duplicate]

function alg1(n)
    a = 0
    for o = 1 to n do
        for t = 1 to o do
            for k = t to o+t do
                a = a + 1
    return(a)
If anyone could guide me to how you would find the worst-case here, and how to get the output a of alg1 as a function of n, I would be very grateful. Thanks!
We can compute the exact number of increments this code executes. First, let's replace
for k=t to o+t do
with
for k=1 to o+1 do
After this change, the two inner loops look like this:
for t=1 to o do
for k=1 to o+1 do
The number of iterations of these loops is obviously o*(o+1). The overall number of iterations can be calculated in the following way:
\sum_{o=1}^{n} o*(o+1) = \sum_{o=1}^{n} o^2 + \sum_{o=1}^{n} o
                       = n(n+1)(2n+1)/6 + n(n+1)/2
                       = n(n+1)(n+2)/3
So the function returns a = n(n+1)(n+2)/3.
We can exclude coefficients and lower order terms of the polynomial when using big-O notation. Therefore, the complexity is O(n^3).
Subtract t from the bounds of the innermost loop so that it becomes
for k=0 to o do
Now the two innermost loops run in O(o^2) time for every value of o. The answer would be
1^2 + 2^2 + ... + n^2
which is equal to
n(n+1)(2n+1)/6. Hence it is of order O(n^3).
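As a sanity check (a Python translation of alg1 written for this purpose, not taken from either answer), the closed form n(n+1)(n+2)/3 derived above can be verified directly:

def alg1(n):
    a = 0
    for o in range(1, n + 1):
        for t in range(1, o + 1):
            for k in range(t, o + t + 1):  # o+1 iterations per pass
                a += 1
    return a

for n in range(1, 10):
    assert alg1(n) == n * (n + 1) * (n + 2) // 3
print("alg1(n) == n(n+1)(n+2)/3 verified")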

How to calculate Big O of nested for loop

I'm under the impression that to find the big O of a nested for loop, one multiplies the big O of each for loop with the next for loop. Would the big O for:
for i in range(n):
    for j in range(5):
        print(i*j)
be O(5n)? And if so, would the big O for:
for i in range(12345):
    for j in range(i**i**i):
        for y in range(j*i):
            print(i,j,y)
be O(12345*(i**i**i)*(j*i))? Or would it be O(n^3) because it's nested 3 times?
I'm so confused.
This is a bit simplified, but hopefully will get across the meaning of Big-O:
Big-O is about the question "how many times does my code do something?", answering it in algebra, and then asking "which term matters the most in the long run?"
For your first example - the number of times the print statement is called is 5n times. n times in the outer loop times 5 times in the inner loop. What matters most in the long run? In the long run only n matters, as the value of 5 never changes! So the overall Big-O complexity is O(n).
For your second example - the number of times the print statement is called is very large, but constant. The outer loop runs 12345 times, the inner loop runs one time, then 16 times, then 7625597484987... all the way up to 12345^12345^12345. The innermost loop goes up in a similar fashion. What we notice is all of these are constants! The number of times the print statement is called doesn't actually vary at all. When an algorithm runs in constant time, we represent this as O(1). Conceptually this is similar to the example above - just as 5n / 5 == n, 12345 / 12345 == 1.
The two examples you have chosen only involve stripping out the constant factors (which we always do in Big-O, they never change!). Another example would be:
def more_terms(n):
    for i in range(n):
        for j in range(n):
            print(n)
            print(n)
    for k in range(n):
        print(n)
        print(n)
        print(n)
For this example, the print statement is called 2n^2 + 3n times. For the first set of loops: n times for the outer loop, n times for the inner loop, and then 2 times inside the inner loop. For the second set: n times for the loop and 3 times each iteration. First we strip out the constants, leaving n^2 + n; now what matters in the long run? When n is 1, neither really matters. But the bigger n gets, the bigger the difference is; n^2 grows much faster than n, so this function has complexity O(n^2).
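To make the counting concrete, here is a small sketch that tallies the print calls in more_terms instead of performing them (count_calls is a name of my own choosing):

def count_calls(n):
    calls = 0
    for i in range(n):
        for j in range(n):
            calls += 2  # two prints inside the doubly nested loop
    for k in range(n):
        calls += 3      # three prints per iteration of the single loop
    return calls

for n in [1, 10, 100]:
    assert count_calls(n) == 2 * n**2 + 3 * n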
You are correct about O(n^3) for your second example. You can calculate big O like this:
Each additional nested loop over n adds another power of n. So, if we have three nested loops over n, the big O is O(n^3). For any number of such loops, the big O is O(n^(number of loops)). One loop is just O(n). Any monomial of n, such as O(5n), is just O(n).
You misunderstand what O(n) means. It's hard to understand at first, so no shame in not understanding it. O(n) means "this grows at most as fast as n". It has a rigorous mathematical definition, but what it basically boils down to is this: if f and g are both functions, f = O(g) means that you can pick some constant number C such that, on big inputs n, f(n) < C*g(n). Big O represents an upper bound, and it doesn't care about constant factors, so if f = O(5n), then f = O(n).

Show the O-notation for the following code fragment

The question is to show the O-notation for the following code fragment (show each line).
for x=1 to n
{
    y=1
    while y < n
        y=y+y
}
The O notation for the first line is n, I believe.
I am unsure what the O notation is for the while loop and why?
The answer given is O(n log_2 n).
Can someone please explain this to me? Thanks!
Let's assume n=64 (or 2^6); then the while loop will run 6 times, with y taking the following values at the end of each iteration:
2
4
8
16
32
64
If you repeat this for n=256 (or 2^8), you will find that there are 8 iterations. In more general terms, the number of executions for a given value of n will be log_2(n). As the outer loop runs n times, the total execution time is O(n log_2 n).
In the inner loop y takes the values 1, 2, 4, ...
y is multiplied by 2 each time, so it is of the form 2^k.
This loop stops at the largest value of k such that 2^k < n, i.e. k < log_2(n).
There will be no more than log_2(n) iterations in this loop.
With x ranging from 1 to n, this brings the total number of iterations to n*log_2(n).
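A Python rendering of the fragment with a counter (an illustrative sketch, tested on powers of 2 so the logarithm comes out exact) shows the n*log_2(n) behaviour:

import math

def count_doublings(n):
    total = 0
    for x in range(1, n + 1):
        y = 1
        while y < n:
            y += y      # y doubles: 1, 2, 4, ...
            total += 1
    return total

for n in [16, 64, 256]:
    print(n, count_doublings(n), n * int(math.log2(n)))  # the two counts agree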

Big-O complexity of a piece of code

I have a question in algorithm design about complexity. In this question a piece of code is given and I should calculate this code's complexity.
The pseudo-code is:
for (i=1; i<=n; i++) {
    j = i
    do {
        k = j
        j = j / 2
    } while (k is even)
}
I tried this algorithm for some numbers and got different results. For example, if n = 6 this algorithm's output is like below:
i = 1 -> executes 1 time
i = 2 -> executes 2 times
i = 3 -> executes 1 time
i = 4 -> executes 3 times
i = 5 -> executes 1 time
i = 6 -> executes 2 times
It doesn't follow a regular pattern; how should I calculate this?
The upper bound given by the other answers is actually too high. This algorithm has an O(n) runtime, which is a tighter upper bound than O(n log n).
Proof: Let's count how many total iterations the inner loop will perform.
The outer loop runs n times. The inner loop runs at least once for each of those.
For even i, the inner loop runs at least twice. This happens n/2 times.
For i divisible by 4, the inner loop runs at least three times. This happens n/4 times.
For i divisible by 8, the inner loop runs at least four times. This happens n/8 times.
...
So the total number of times the inner loop runs is:
n + n/2 + n/4 + n/8 + n/16 + ... <= 2n
The total number of inner-loop iterations is between n and 2n, i.e. it's Θ(n).
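An empirical check of this bound (a Python rendering of the pseudocode, with names of my own choosing):

def inner_passes(i):
    j, passes = i, 0
    while True:          # the do...while from the question
        k = j
        j //= 2
        passes += 1
        if k % 2 != 0:   # loop continues only while k is even
            break
    return passes

for n in [10, 100, 1000, 10000]:
    total = sum(inner_passes(i) for i in range(1, n + 1))
    print(n, total, total / n)  # the ratio stays between 1 and 2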
You always assume you get the worst scenario at each level.
Now, you iterate over n values of i, so we start with O(n) already.
Now let's say your i always equals some even X (remember, worst case every time). How many times do you need to divide X by 2 to get to 1? (In the worst case, when X is a power of 2, the quotient stays even and the division only stops at 1.)
In other words, we need to solve the equation
X/2^k = 1, which gives X = 2^k and k = log_2(X).
This makes our algorithm take O(n*log_2(X)) steps, which can easily be written as O(n log n).
For such a loop, we cannot count the inner loop and the outer loop separately -> the variables are tied together!
We thus have to count all steps.
In fact, for each iteration of the outer loop (on i), we will have
1 + v_2(i) steps
where v_2 is the 2-adic valuation (see for example: http://planetmath.org/padicvaluation), which corresponds to the power of 2 in the prime factorization of i.
So if we add steps for all i we get a total number of steps of :
n_steps = \sum_{i=1}^{n} (1 + v_2(i))
= n + v_2(n!) // since v_2(i) + v_2(j) = v_2(i*j)
= 2n - s_2(n) // from Legendre formula (see http://en.wikipedia.org/wiki/Legendre%27s_formula with `p = 2`)
We then see that the number of steps is exactly :
n_steps = 2n - s_2(n)
As s_2(n) is the sum of the digits of n in base 2, it is negligible compared to n: it is at most log_2(n), since each base-2 digit is 0 or 1 and there are at most log_2(n) digits.
So the complexity of your algorithm is equivalent to n:
n_steps = O(n)
which is not the O(n log n) stated in many other solutions but a smaller quantity!
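The exact count can be checked numerically (a small sketch of my own, with a helper v2 for the 2-adic valuation):

def v2(i):
    # 2-adic valuation: the power of 2 in the prime factorization of i
    count = 0
    while i % 2 == 0:
        i //= 2
        count += 1
    return count

for n in [6, 10, 100, 1000]:
    n_steps = sum(1 + v2(i) for i in range(1, n + 1))
    assert n_steps == 2 * n - bin(n).count("1")  # 2n - s_2(n)
print("n_steps == 2n - s_2(n) verified")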
Let's start with the worst case:
If you keep dividing by 2 (integer division), you don't stop until you get to 1, which makes the number of steps dependent on the bit width, something you can find using the base-2 logarithm. So the inner part is log n.
The outer part is obviously n, so O(n log n) total.
The do loop halves j until k becomes odd. k is initially a copy of j, which is a copy of i, so the do loop runs 1 + (the power of 2 that divides i) times:
i=1 is odd, so it makes 1 pass through the do loop,
i=2 divides by 2 once, so 1+1,
i=4 divides twice by 2, so 1+2, etc.
That makes at most 1+log(i) do executions (logarithm with base 2).
The for loop iterates i from 1 through n, so the upper bound is n times (1+log n), which is O(n log n).
