I'm trying to understand time complexities better and I was hoping someone could help me figure out the time complexity in the worst case for the following algorithm (in pseudocode):
for i = 0 to n-1:
    if A[i] < 0:
        b = 1
        while b < n:
            b = b × 2
        end while
    end if
end for
Hint:
When the inner loop is executed at all, it runs Θ(log n) times, because it exits once b has been doubled enough to have as many bits as n.
Now the worst case happens when all A[i] are negative, so that the inner loop is executed on every one of the n iterations of the outer loop, giving Θ(n log n) overall.
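To see that count in numbers, here is a minimal TypeScript sketch of the pseudocode (the function name countDoublings and the all-negative test array are just illustrative choices) that returns how many times the b = b × 2 line runs:

function countDoublings(A: number[]): number {
    const n = A.length;
    let count = 0;
    for (let i = 0; i < n; i++) {
        if (A[i] < 0) {
            let b = 1;
            while (b < n) {   // about log2(n) doublings each time it runs
                b *= 2;
                count++;
            }
        }
    }
    return count;
}

// Worst case: every element is negative, so the inner loop runs on all n outer iterations.
const n = 1024;
console.log(countDoublings(new Array(n).fill(-1)));  // 10240
console.log(n * Math.log2(n));                       // 10240, i.e. n * log2(n)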
Related
To start off, I found this Stack Overflow question that gives the time complexity as O(n^2), but it doesn't answer why that is the time complexity; instead it asks for an example of such an algorithm. From my understanding, an algorithm that runs 1+2+3+...+n times would be less than O(n^2). For example, take this function
function f(n: number) {
    let sum = 0;
    for (let i = 0; i < n; i++) {
        for (let j = 0; j < i + 1; j++) {
            sum += 1;
        }
    }
    return sum;
}
Here are some input and return values
num    sum
1      1
2      3
3      6
4      10
5      15
6      21
7      28
From this table you can see that this algorithm runs in less than O(n^2) time but more than O(n). I also realize that an algorithm that runs 1+(1+2)+(1+2+3)+...+(1+2+3+...+n) times has true O(n^2) time complexity. For the algorithm stated in the problem, do we just say it runs in O(n^2) because it runs more than O(log n) times?
It's known that 1 + 2 + ... + n has the closed form n * (n + 1) / 2. Even if you didn't know that, you have to consider that, when i gets to n, the inner loop runs at most n times. So you have exactly n iterations of the outer loop i, each running at most n iterations of the inner loop j, which makes the O(n^2) bound more apparent.
I agree that the complexity would be exactly n^2 if the inner loop also ran from 0 to n, so you have your reasons to think that a loop i from 0 to n and another loop j from 0 to i has to perform better, and that's true, but with Big-O notation you're measuring the growth rate of the algorithm's complexity, not the exact number of operations.
p.s. O(log n) is usually achieved when you split the main problem into sub-problems.
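If it helps, you can check the closed form against the function from the question directly. Here is a small TypeScript sketch (loopCount just replicates the nested loops) comparing the loop count with n * (n + 1) / 2 and with the upper bound n^2:

function loopCount(n: number): number {
    let sum = 0;
    for (let i = 0; i < n; i++) {
        for (let j = 0; j < i + 1; j++) {
            sum += 1;       // one unit of work per inner iteration
        }
    }
    return sum;
}

for (const n of [1, 5, 10, 100]) {
    const exact = loopCount(n);
    const closedForm = (n * (n + 1)) / 2;
    console.log(n, exact, closedForm, exact <= n * n);  // exact === closedForm, and both <= n^2
}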
I think you should interpret the table differently. The O(N^2) complexity says that if you double the input N, the runtime should quadruple (take 4 times as long). In this case, the function f(n: number) returns a number mirroring its runtime. I use f(N) as shorthand for it.
So say N goes from 1 to 2, which means the input has doubled (2/1 = 2). The runtime then has gone from f(1) to f(2), which means it has increased f(2)/f(1) = 3/1 = 3 times. That is not 4 times, but the Big-O complexity measure is asymptotic, dealing with the situation where N approaches infinity. If we test another input doubling from the table, we have f(6)/f(3) = 21/6 = 3.5. It is already closer to 4.
Let us now stray outside the table and try more doublings with bigger N. For example we have f(200)/f(100) = 20100/5050 = 3.980 and f(5000)/f(2500) = 12502500/3126250 = 3.999. The trend is clear. As N approaches infinity, a doubled input tends toward a quadrupled runtime. And that is the hallmark of O(N^2).
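To make the doubling argument concrete, here is a small TypeScript check (using the closed form N * (N + 1) / 2 for the value the function computes) that prints f(2N)/f(N) for growing N; the ratio approaches 4, as expected for O(N^2):

// f(N) = N * (N + 1) / 2, the value the function from the question computes
const f = (N: number): number => (N * (N + 1)) / 2;

for (const N of [1, 3, 100, 2500, 1_000_000]) {
    console.log(N, (f(2 * N) / f(N)).toFixed(3));  // 3.000, 3.500, 3.980, 3.999, ... -> 4
}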
I've got the following example:
i = 2;
while i <= n {
    O(1)
    j = 2*i
    while j <= n {
        O(1)
        j = j + i
    }
    i = i + 1
}
I'm a beginner at calculating asymptotic complexity. I think it is O((n-1)*(n/4)) but I'm not sure if this answer is correct or I'm missing something.
In the inner loop, j goes from 2i to n in steps i, so the inner loop runs (n-2i)/i+1 times, which is n/i-1 (integer division).
Then the outer loop runs with i from 2 to n, for a total of (n/2 - 1) + (n/3 - 1) + (n/4 - 1) + ... + (n/(n/2) - 1) inner iterations (for larger i, there are no iterations).
This quantity is difficult to estimate exactly, but it is bounded above by n*(H(n/2) - 1) - n/2, where H(k) denotes the k-th harmonic number.
We know that H(k) ~ ln k = O(log k), hence, asymptotically, the running time is O(n log n).
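If you want to convince yourself numerically, here is a small TypeScript sketch (innerIterations is just an instrumented copy of the loops) that counts the inner iterations and divides by n*ln(n); the ratio stays bounded and slowly approaches 1, which is what O(n log n) predicts:

function innerIterations(n: number): number {
    let count = 0;
    let i = 2;
    while (i <= n) {
        let j = 2 * i;
        while (j <= n) {
            count++;        // the O(1) body of the inner loop
            j = j + i;
        }
        i = i + 1;
    }
    return count;
}

for (const n of [1_000, 10_000, 100_000]) {
    console.log(n, innerIterations(n) / (n * Math.log(n)));  // bounded, creeping toward 1
}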
I need some help with these functions and whether the run-time complexities I've worked out for them are correct. I'm currently learning the concepts in class, and I've looked at videos and such, but I can't find any videos explaining these tougher ones, so I'm hoping I can get some help here and find out whether I'm doing it right.
sum = 0
for i = 1 to n*n
    for j = 1 to i * i * i
        sum++
For this one I am thinking the answer is O(n^5) because the outer loop is running n^2 times while the inner loop will be running n^3 times and together that'll make n^5
sum = 0
for i = 1 to n^2 // O(n^2) times
    j = i
    while j > 0 // (O(n+1) since the while loop will check one more time if the loop is valid)
        sum++
        j = (j div 5)
For this run time I'm assuming it's going to run O(n^3 + 1) times, since the outer loop is running n^2 times and the while loop will be n+1, and together that's n^3 + 1.
for i = 1 to n // n times
    for j = 1 to n { // n^2 times
        C[i,j] = 0
        for k = 1 to n // n^3 times?
            C[i,j] = C[i,j] + A[i,k]*B[k,j]
    }
so for this one I'm thinking it's O(n^6) but I am really iffy on this one. I have seen some examples online where people will figure the loop to be O(n log n) but I am totally lost on how that is found. Any help would be greatly appreciated!
All three need another look. For the first, the inner loop runs i^3 times and i itself goes up to n^2, so the total work is the sum of i^3 for i from 1 to n^2, which is on the order of (n^2)^4 = n^8, not n^5. The third is standard matrix multiplication: the innermost statement runs n * n * n = n^3 times, so it is O(n^3), not O(n^6) (you multiply the sizes of the loops, not the running totals n * n^2 * n^3). The second is also off. The inner loop
while j > 0 // (O(n+1) since the while loop will check one more time if the loop is valid)
    sum++
    j = (j div 5)
starts with j equal to i and divides j by 5 at each iteration, so it runs about log(i) times. In turn, i varies from 1 to n^2, and the total execution time is proportional to
sum[i: 1..n^2] log(i)
By the properties of logarithms this sum is equal to log((n^2)!). Using Stirling's approximation for the factorial, one obtains a time complexity of O(n^2 log(n^2)) = O(2 n^2 log(n)) = O(n^2 log(n)).
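As a numerical check, here is a small TypeScript sketch (divideBy5Count is just an instrumented copy of the loops) that counts the iterations of the divide-by-5 loop and divides by n^2 * ln(n); the ratio settles toward the constant 2/ln(5) ≈ 1.24, which is exactly the Θ(n^2 log n) behaviour:

function divideBy5Count(n: number): number {
    let count = 0;
    for (let i = 1; i <= n * n; i++) {
        let j = i;
        while (j > 0) {            // about log5(i) + 1 iterations
            count++;
            j = Math.floor(j / 5);
        }
    }
    return count;
}

for (const n of [10, 50, 200]) {
    console.log(n, divideBy5Count(n) / (n * n * Math.log(n)));  // approaches 2 / ln(5) ≈ 1.24
}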
Looking at the code below:
Algorithm sort
    Declare A(1 to n)
    n = length(A)
    for i = 1 to n
        for j = 1 to n-1 inclusive do
            if A[i-1] > A[i] then
                swap( A[i-1], A[i] )
            end if
        next j
    next i
I would say that there are:
2 loops, each running ~n times, so n*n = n^2 (with n-1 truncated to n)
1 comparison, in the j loop, that will execute n^2 times
A swap that will execute n^2 times
There are also 2n additions with the loops, executing n^2 times, so 2n^2
The answers given in a mark scheme:
Evaluation of algorithm
Comparisons
The only comparison appears in the j loop. Since this loop will iterate a total of n^2 times, the comparison will execute exactly n^2 times.
Data swaps
There may be a swap operation carried out in the j loop: Swap( A[i-1], A[i] ). Each of these will happen n^2 times. Therefore there are 2n^2 operations carried out within the j loop.
The i loop has one addition operation incrementing i, which happens n times.
Adding these up, we get the number of addition operations, which is 2n^2 + n.
As n gets very big, n^2 will dominate, therefore it is O(n^2).
NOTE: Calculations might include assignment operations, but these will not affect the overall time, so ignore them.
Marking overview:
1 mark for identifying i loop will execute n times.
1 mark for identifying j loop will execute 2n^2 times (isn't this meant to be n*n = n^2, for i and j?)
1 mark for correct number of calculations, 2n^2 + n (why is this not 2n^2 + 2n?)
1 mark for determining that the order will be dominated by n^2 as n gets very big, giving O(n^2) for the algorithm
Edit: As can be seen from the mark scheme, I am expected to count:
Loop numbers, but n-1 can be truncated to n
Comparisons e.g. if statements
Data swaps (counted as one statement, i.e. arr[i] = arr[i+1], temp = arr[i], etc. are considered one swap)
Calculations
Space - just n for array, etc.
Could someone kindly explain how these answers are derived?
Thank you!
Here's my take on the marking scheme, explicitly marking the operations they're counting. It seems they're counting assignments (but conveniently forgetting that it takes 2 or 3 assignments to do a swap). That explains why they count the increments but not the [i-1] indexing.
Counting swaps
    i loop runs n times
        j loop runs n-1 times (~n^2 - n)
            swap (happens ~n^2 times): n^2
Counting additions (+=)
    i loop runs n times
        j loop runs n-1 times (~n^2)
            increment j (happens n^2 times): n^2
    increment i (happens n times): n
Sum: 2n^2 + n
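A small TypeScript sketch of that counting (just instrumenting the two loops from the question, and assuming the worst case, where the swap happens on every pass of the j loop) reproduces the mark scheme's totals, with n-1 truncated to n:

function countOperations(n: number) {
    let swaps = 0, jIncrements = 0, iIncrements = 0;
    for (let i = 1; i <= n; i++) {
        iIncrements++;                    // one addition per pass of the i loop: n in total
        for (let j = 1; j <= n - 1; j++) {
            jIncrements++;                // one addition per pass of the j loop: ~n^2 in total
            swaps++;                      // worst case: the swap happens every time: ~n^2
        }
    }
    // the mark scheme's tally: swaps + j increments + i increments ~ 2n^2 + n
    return { swaps, jIncrements, iIncrements, total: swaps + jIncrements + iIncrements };
}

console.log(countOperations(1000));
// { swaps: 999000, jIncrements: 999000, iIncrements: 1000, total: 1999000 }, close to 2*1000^2 + 1000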
I need some help finding the complexity (Big-O) of this code. If someone could explain what the Big-O of every loop would be, that would be great. I think the outer loop would just be O(n), but I'm not sure about the inner loop; how does the *= 2 affect it?
k = 1;
do
{
    j = 1;
    do
    {
        ...
        j *= 2;
    } while (j < n);
    k++;
} while (k < n);
The outer loop runs O(n) times, since k starts at 1 and needs to be incremented n-1 times before it becomes equal to n.
The inner loop runs O(lg(n)) times. This is because on the m-th execution of the loop, j = 0.5 * 2^m.
The loop exits when j = 0.5 * 2^m reaches n. Rearranging, we get m = lg(2n) = O(lg(n)).
Putting the two loops together, the total code complexity is O(n lg(n)).
Logarithms can be tricky, but generally, whenever you see something being repeatedly being multiplied or divided by a constant factor, you can guess that the complexity of your algorithm involves a term that is either logarithmic or exponential.
That's why binary search, which repeatedly divides the size of the list it searches in half, is also O(lg(n)).
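A quick way to see this empirically is to count how many times the inner-loop body executes and compare the total against n * lg(n). This is just a TypeScript sketch of the loops above (countInnerSteps is a made-up name):

function countInnerSteps(n: number): number {
    let steps = 0;
    let k = 1;
    do {
        let j = 1;
        do {
            steps++;       // stands in for the ... body of the inner loop
            j *= 2;
        } while (j < n);
        k++;
    } while (k < n);
    return steps;
}

for (const n of [16, 1024, 1_000_000]) {
    console.log(n, countInnerSteps(n), n * Math.ceil(Math.log2(n)));  // the two counts track each other
}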
The inner loop always runs from j=1 to j=n.
For simplicity, let's assume that n is a power of 2 and that the inner loop runs k times.
The values of j for each of the k iterations are,
j = 1
j = 2
j = 4
j = 8
....
j = n
// breaks from the loop
which means that 2^k = n or k = lg(n)
So, each time, it runs for O(lg(n)) times.
Now, the outer loop is executed O(n) times, starting from k=1 to k=n.
Therefore, every time k is incremented, the inner loop runs O(lg(n)) times.
k=1 Innerloop runs for : lg(n)
k=2 Innerloop runs for : lg(n)
k=3 Innerloop runs for : lg(n)
...
k=n Innerloop runs for : lg(n)
// breaks from the loop
Therefore, total time taken is n*lg(n)
Thus, the time complexity of this is O(n lg(n))