What is the value returned by the following function? Express your answer as a function of n, and give the worst-case running time using O() notation.
Pseudo code of the algorithm:
F1(n)
    v = 0
    for i = 1 to n
        for j = n + 1 to 2n
            v++
    return v
My approach:
The output of F1(n) is an integer: the final value of the variable v, which is incremented by the inner for-loop on every pass of the outer for-loop.
The variable v is initialized to 0 before either loop begins. On each iteration of the outer (i) loop, the inner (j) loop runs from n+1 to 2n. Suppose n = 5 and treat the upper bound as exclusive: then j takes the values 6, 7, 8, 9, and the number of values j takes on is the number of times v is incremented in one full pass of the inner loop.
Since the inner loop increments v by n-1 (4 in this case) on every full pass, and each iteration of the outer loop triggers one such pass, v grows by n-1 for every value of i.
So this algorithm computes g(n) = (n-1) * (n-1), which is 4 * 4 = 16 when n = 5: every full pass of the inner loop adds 4 to v, and (with the same exclusive-bound convention) the outer loop runs i = 1,...,4, which is why the result is (n-1) * (n-1).
The worst-case Big-O complexity of this program is O(n^2) because of the nested loop structure: the outer loop iterates from 1 to n, and the inner loop iterates from n+1 to 2n, a range that also contains n values, so it is equivalent to n iterations.
Lastly, initializing or incrementing a variable takes O(1) time, so these operations do not affect the overall complexity compared to the nested for-loops.
To me this maps to the function g(n) = n^2, since the loop for i = 1 to n runs n times if you count all values in the range [1,n]. Of course, if the range is [1,n) then you get g(n) = (n-1)^2, but that's a matter of convention.
Two nested loops, each running n times, give you O(n^2) complexity, which is best, average, and worst case here. So your approach is fine, if that was your question :)
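A quick way to settle the convention question is to run it. Here is a minimal Python sketch, assuming inclusive loop bounds (the usual pseudocode convention):

def F1(n):
    v = 0
    for i in range(1, n + 1):            # i = 1 to n, inclusive
        for j in range(n + 1, 2*n + 1):  # j = n+1 to 2n, inclusive
            v += 1                       # one increment per inner iteration
    return v

print(F1(5))  # prints 25, i.e. n * n, so g(n) = n^2 under inclusive bounds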
I am trying to calculate the time complexity of some simple code, but I do not know how to account for summing a sub-array. The code is as follows:
for i = 1 to n {
    for j = i+1 to n {
        s = sum(A[i...j])
        B[i,j] = s
    }
}
So I know the nested for loops inevitably give us O(n^2), and I believe the function that sums the sub-array is also O(n^2). However, I think the time complexity for the whole algorithm is O(n^3). How do I get there with this information? Thank you!
I like to think of for loops as summations. As such, the number of steps (written as a function, T(n)) is:
T(n) = \sum_{i=1}^n numStepsInInnerForLoop
Here, I'm using something written in pseudo-MathJax, and have written the outer for loop as a summation from i=1 to n of the number of steps in the inner for loop (the one from i+1 to n). You can think of this analogously as summing the number of steps in the inner for loop, from i=1 to n. Substituting in numStepsInInnerForLoop results in:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n numStepsOfSumFunction]
This function now represents the number of steps where both for loops have been fleshed out as summations. Assuming that s = sum(A[i...j]) takes j-i+1 steps and B[i,j]=s takes just one step, we can substitute numStepsOfSumFunction with these more useful parameters and the equation now becomes:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n (j-i+1 + 1)]
When you solve these summations (using the kind of formulas you see on this summation tutorial page) you'll get a cubic function for T(n) which corresponds to O(n^3).
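For the record, here is one way the summation works out (a sketch, substituting m = j - i in the inner sum):

T(n) = \sum_{i=1}^n [\sum_{m=1}^{n-i} (m + 2)]
     = \sum_{i=1}^n [(n-i)(n-i+1)/2 + 2(n-i)]
     = n^3/6 + Θ(n^2)

which is O(n^3), with the n^3/6 leading term coming from the sum of the squared terms (n-i)^2/2.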
Your reasoning leads me to believe that you're running this algorithm on an array of size n. If so, then every time you call the sum method in the inner for loop, you're calling it on a specific range of indices (i to j). For each iteration of this for loop, the sum method will iterate through 1, 2, 3, ..., then finally n elements in the last iteration, as j increases from i+1 to n. Note that this is when i = 1. As i increases, it won't go all the way from 1, 2, 3, ..., to n anymore, since it only goes up to n - i elements. Big O, though, is about the worst case, so we use that scenario.
1 + 2 + 3 + ... + n = n(n+1)/2, which is O(n^2). The runtime of the sum method depends on the values of i and j; however, when run in the for loop with the given conditions, the total time-complexity of all the calls to sum in one full run of the inner for loop is O(n^2). And finally, since this inner for loop is executed n times, the total time-complexity for the whole algorithm is O(n^3).
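To see the cubic growth empirically, here is a minimal Python sketch that just counts the element reads performed by sum (the function name is illustrative, not from the question):

def count_steps(n):
    steps = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            steps += j - i + 1  # cost of sum(A[i...j]): j-i+1 element reads
    return steps

print(count_steps(100))  # 171600
print(count_steps(200))  # 1353200 -- doubling n multiplies the count by ~8, as expected for O(n^3)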
Looking at the code below:
Algorithm sort
    Declare A(1 to n)
    n = length(A)
    for i = 1 to n
        for j = 1 to n-1 inclusive do
            if A[i-1] > A[i] then
                swap( A[i-1], A[i] )
            end if
        next j
    next i
I would say that there are:
2 loops, both n, n*n = n^2 (n-1 truncated to n)
1 comparison, in the j loop, that will execute n^2 times
A swap that will execute n^2 times
There are also 2 additions from the loop counters, each executing n^2 times, so 2n^2
The answers given in the mark scheme:
Evaluation of algorithm
Comparisons
The only comparison appears in the j loop. Since this loop will iterate a total of n^2 times, the comparison will execute exactly n^2 times.
Data swaps
There may be a swap operation carried out in the j loop: swap( A[i-1], A[i] ). Each of these will happen n^2 times. Therefore there are 2n^2 operations carried out within the j loop.
The i loop has one addition operation incrementing i, which happens n times.
Adding these up, we get the number of addition operations, which is 2n^2 + n.
As n gets very big, n^2 will dominate, therefore it is O(n^2).
NOTE: Calculations might include assignment operations, but these will not affect overall time, so ignore them.
Marking overview:
1 mark for identifying that the i loop will execute n times.
1 mark for identifying that the j loop will execute 2n^2 times. (Isn't this meant to be n*n = n^2, for i and j together?)
1 mark for the correct number of calculations, 2n^2 + n. (Why is this not +2n?)
1 mark for determining that the order will be dominated by n^2 as n gets very big, giving O(n^2) for the algorithm.
Edit: As can be seen from the mark scheme, I am expected to count:
Loop numbers, but n-1 can be truncated to n
Comparisons, e.g. if statements
Data swaps (counted as one statement, i.e. arr[i] = arr[i+1], temp = arr[i], etc. are considered one swap)
Calculations
Space - just n for array, etc.
Could someone kindly explain how these answers are derived?
Thank you!
Here's my take on the marking scheme, explicitly marking the operations they're counting. It seems they're counting assignments (but conveniently forgetting that it takes 2 or 3 assignments to do a swap). That explains why they count increment but not the [i-1] indexing.
Counting swaps:
i loop runs n times
j loop runs n-1 times per i (~n^2 - n in total)
swap (happens up to n^2 times): n^2
Counting additions (+=):
i loop runs n times
j loop runs n-1 times per i (~n^2 in total)
increment j (happens n^2 times): n^2
increment i (happens n times): n
sum: 2n^2 + n
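Taking the pseudocode at face value (including the comparison on i rather than j, exactly as written), here is a minimal Python sketch that tallies the operations the mark scheme counts; the counter names are illustrative, and the indexing is adapted so 0-based Python stays in bounds:

def count_ops(A):
    # i runs 1..n-1 here so that A[i-1] and A[i] are valid 0-based indices.
    n = len(A)
    comparisons = swaps = increments = 0
    for i in range(1, n):
        increments += 1                # one addition to advance i
        for j in range(1, n):          # j = 1 to n-1, as in the pseudocode
            increments += 1            # one addition to advance j
            comparisons += 1           # the A[i-1] > A[i] test
            if A[i - 1] > A[i]:
                A[i - 1], A[i] = A[i], A[i - 1]
                swaps += 1
    return comparisons, swaps, increments

For a list of length n this reports roughly n^2 comparisons and about n^2 + n loop-counter additions; whether you arrive at 2n^2 + n depends on which of these operations the scheme charges, which is exactly the ambiguity the question is about.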
I am trying to find the time complexity of bubble sort:
n = length[A]
for j <- n-1 to 1
    for i <- 0 to j-1
        if A[i] > A[i+1]
            temp = A[i]
            A[i] = A[i+1]
            A[i+1] = temp
return A
Please, can anyone help? Thanks.
In line 1 we assign the length of the array to n, so that is constant time.
In line 2 we have a for loop that decrements j by 1 every iteration until j = 1, so in total it will iterate n-1 times.
Inside the first for loop we have a second for loop that increments i by 1 every iteration while i <= j-1, so it iterates j times. On each iteration of the inner for loop we have lines 4, 5, 6, 7, which are all just assignments and array accesses and cost, in total, constant time.
We can think about the two for loops in the following way: for every iteration of the outer for loop, the inner for loop will iterate j times.
Therefore on the first iteration of the outer for loop we have j = n-1, so the inner for loop iterates n-1 times. On the second iteration of the outer for loop we have j = n-2, so the inner for loop iterates n-2 times, and so on, until j = 1.
We then have the sum (n-1) + (n-2) + ... + 2 + 1, which is the total number of times the inner loop iterates over the whole run. We know that 1 + 2 + ... + (n-1) = n(n-1)/2, so the total is n(n-1)/2 = O(n^2).
Since our inner for loop iterates O(n^2) times in total, and each iteration does constant work, the runtime is O(c*n^2), where c is the constant work done by lines 4, 5, 6, 7. Combining O(c*n^2) with line 1, which is O(1), we get O(c*n^2) + O(1), which is just O(n^2).
Therefore the runtime of BubbleSort is O(n^2).
If you are still confused then maybe this will help: https://www.youtube.com/watch?v=Jdtq5uKz-w4
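For reference, here is the pseudocode as runnable Python (a direct transcription, using 0-based indexing as the original already does):

def bubble_sort(A):
    n = len(A)
    for j in range(n - 1, 0, -1):  # j <- n-1 down to 1
        for i in range(0, j):      # i <- 0 to j-1
            if A[i] > A[i + 1]:
                temp = A[i]        # the three-assignment swap from lines 5-7
                A[i] = A[i + 1]
                A[i + 1] = temp
    return A

print(bubble_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]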
I need some help finding the complexity, or Big-O, of this code. If someone could explain what the Big-O of every loop would be, that would be great. I think the outer loop would just be O(n), but I'm not sure about the inner loop: how does the *= 2 affect it?
k = 1;
do
{
    j = 1;
    do
    {
        ...
        j *= 2;
    } while (j < n);
    k++;
} while (k < n);
The outer loop runs O(n) times, since k starts at 1 and needs to be incremented n-1 times to become equal to n.
The inner loop runs O(lg(n)) times. This is because at the start of the m-th execution of the loop body, j = 0.5 * 2^m.
The loop breaks once j reaches n, i.e. when 0.5 * 2^m >= n. Rearranging that, we get m = lg(2n) = O(lg(n)).
Putting the two loops together, the total code complexity is O(n lg(n)).
Logarithms can be tricky, but generally, whenever you see something being repeatedly being multiplied or divided by a constant factor, you can guess that the complexity of your algorithm involves a term that is either logarithmic or exponential.
That's why binary search, which repeatedly divides the size of the list it searches in half, is also O(lg(n)).
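You can check the count empirically with a minimal Python sketch that mirrors the do-while structure and tallies inner-loop passes:

def inner_iterations(n):
    total = 0
    k = 1
    while True:            # outer do-while: body runs at least once
        j = 1
        while True:        # inner do-while
            total += 1     # one pass of the inner body
            j *= 2
            if not (j < n):
                break
        k += 1
        if not (k < n):
            break
    return total

print(inner_iterations(1024))  # 10230 = (n-1) * lg(n), consistent with O(n lg(n))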
The inner loop always runs from j=1 until j reaches n.
For simplicity, let's assume that n is a power of 2 and that the inner loop runs k times.
The values of j for each of the k iterations are,
j = 1
j = 2
j = 4
j = 8
....
j = n
// breaks from the loop
which means that 2^k = n or k = lg(n)
So, each time, it runs for O(lg(n)) times.
Now, the outer loop is executed O(n) times, starting from k=1 to k=n.
Therefore, every time k is incremented, the inner loop runs O(lg(n)) times.
k=1 Innerloop runs for : lg(n)
k=2 Innerloop runs for : lg(n)
k=3 Innerloop runs for : lg(n)
...
k=n Innerloop runs for : lg(n)
// breaks from the loop
Therefore, total time taken is n*lg(n)
Thus, the time complexity of this is O(n lg(n))
I have a question about complexity in algorithm design. A piece of code is given, and I should calculate its complexity.
The pseudo-code is:
for(i=1; i<=n; i++){
    j = i
    do{
        k = j;
        j = j / 2;
    } while(k is even);
}
I tried this algorithm for some numbers and got varying results. For example, if n = 6, the inner loop executes as follows:
i = 1 -> executes 1 time
i = 2 -> executes 2 times
i = 3 -> executes 1 time
i = 4 -> executes 3 times
i = 5 -> executes 1 time
i = 6 -> executes 2 times
It doesn't follow a regular pattern; how should I calculate this?
The upper bound given by the other answers is actually too high. This algorithm has an O(n) runtime, which is a tighter upper bound than O(n*log n).
Proof: Let's count how many total iterations the inner loop will perform.
The outer loop runs n times. The inner loop runs at least once for each of those.
For even i, the inner loop runs at least twice. This happens n/2 times.
For i divisible by 4, the inner loop runs at least three times. This happens n/4 times.
For i divisible by 8, the inner loop runs at least four times. This happens n/8 times.
...
So the total amount of times the inner loop runs is:
n + n/2 + n/4 + n/8 + n/16 + ... <= 2n
The total amount of inner loop iterations is between n and 2n, i.e. it's Θ(n).
You always assume the worst scenario at each level.
Now, you iterate over an array with N elements, so we start with O(N) already. Now let's say your i is always equal to some X, and X is always even (remember, worst case every time). How many times do you need to divide X by 2 to get to 1? (Reaching an odd value - 1 here - is the only way the division stops.)
In other words, we need to solve the equation
X/2^k = 1, which gives X = 2^k and k = log_2(X)
This makes our algorithm take O(n*log_2(X)) steps, which can easily be written as O(n log(n)).
For such a loop we cannot count the inner and outer loops separately: the loop variables are coupled!
We thus have to count all steps together.
In fact, for each iteration of outer loop (on i), we will have
1 + v_2(i) steps
where v_2 is the 2-adic valuation (see for example : http://planetmath.org/padicvaluation) which corresponds to the power of 2 in the decomposition in prime factor of i.
So if we add steps for all i we get a total number of steps of :
n_steps = \sum_{i=1}^{n} (1 + v_2(i))
= n + v_2(n!) // since v_2(i) + v_2(j) = v_2(i*j)
= 2n - s_2(n) // from Legendre formula (see http://en.wikipedia.org/wiki/Legendre%27s_formula with `p = 2`)
We then see that the number of steps is exactly :
n_steps = 2n - s_2(n)
As s_2(n) is the sum of the digits of n in base 2, it is negligible compared to n: each base-2 digit is 0 or 1, and there are at most log_2(n) + 1 of them, so s_2(n) <= log_2(n) + 1.
So the complexity of your algorithm is equivalent to n:
n_steps = O(n)
which is not the O(nlog(n)) stated in many other solutions but a smaller quantity!
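A quick sketch to check the exact count (with s_2(n) computed from the binary representation):

def steps(n):
    total = 0
    for i in range(1, n + 1):
        k = i
        while True:        # the do-while: k=j; j = j/2; while k is even
            total += 1
            even = (k % 2 == 0)
            k //= 2
            if not even:
                break
    return total

# steps(6) == 10 == 2*6 - 2, matching the trace in the question (1+2+1+3+1+2),
# and for every n, steps(n) == 2*n - bin(n).count('1'), i.e. 2n - s_2(n).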
Let's start with the worst case:
if you keep dividing by 2 (integer division), you don't stop until you reach 1, so the number of steps depends on the bit-width of the number, which you get from the base-2 logarithm. So the inner part is log n.
The outer part is obviously n, so N log N total.
The do loop halves j until k becomes odd. k is initially a copy of j, which is a copy of i, so the do loop runs 1 + (the power of 2 dividing i) times:
i=1 is odd, so it makes 1 pass through the do loop,
i=2 divides by 2 once, so 1+1 passes,
i=4 divides by 2 twice, so 1+2, etc.
That makes at most 1 + log(i) executions of the do loop (logarithm with base 2).
The for loop iterates i from 1 through n, so an upper bound is n times (1 + log n), which is O(n log n).