I am having trouble figuring out the tight bound and the lower bound for this pseudocode. Could anyone help?
Matrix M, M1, M2;
for i <-- 0 to n-1
    for j <-- 0 to n-1
        M[i][j] = 0
        for k <-- 0 to n-1
            M[i][j] = M[i][j] + M1[i][k]*M2[k][j]
return M
Thanks!
Three nested loops, each running n times, with no option to end early, mean a complexity of n³ on every input. Best and worst case are the same, so the tight bound is Θ(n³) and the lower bound is Ω(n³).
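To make the count concrete, here is a minimal Java sketch of the same triple loop (the ops counter is my addition, purely to make the n³ visible):

// Multiply two n-by-n matrices. The innermost statement runs
// exactly n * n * n times, no matter what the inputs are.
static int[][] multiply(int[][] M1, int[][] M2, int n) {
    int[][] M = new int[n][n];
    long ops = 0; // illustration only: counts innermost executions
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < n; k++) {
                M[i][j] += M1[i][k] * M2[k][j];
                ops++;
            }
        }
    }
    // here ops == (long) n * n * n, in every case
    return M;
}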
I have the following algorithm which I want to rewrite so it has time complexity O(n). I am new to algorithms, but from my understanding, since the two for loops each do on the order of n iterations, the complexity will always be O(n²). Is it even possible to reduce the complexity of this?
Algorithm example(ArrayA, ArrayB, n)
Input: 2 arrays of integers, ArrayA and ArrayB, both length n
Output: integer
value <- 0                                         1 operation
for i <- 0 to n-1                                  n iterations
    for j <- 0 to n-1                              n iterations each, n² total
        value <- value + (ArrayA[i] * ArrayB[j])   3n² operations
return value                                       1 operation
Total primitive operations: 4n² + n + 2, giving it a time complexity of O(n²).
By applying a bit of algebra, the double sum factors into a product of two single sums:

sum_{i=0..n-1} sum_{j=0..n-1} ArrayA[i] * ArrayB[j]
    = (sum_{i=0..n-1} ArrayA[i]) * (sum_{j=0..n-1} ArrayB[j])
So here is an algorithm which computes the same result in O(n) time:
sum_A ← 0
for i ← 0 to n-1
    sum_A ← sum_A + ArrayA[i]
sum_B ← 0
for j ← 0 to n-1
    sum_B ← sum_B + ArrayB[j]
return sum_A * sum_B
Generally speaking, an algorithm with nested loops cannot always be rewritten to reduce its time complexity; but in some cases you can, if you can identify something specific about the computation that allows it to be done a different way.
For sums like this, it's sometimes possible to compute the result more efficiently by writing something algebraically equivalent. So, put your mathematician's hat on when faced with such a problem.
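If you want to convince yourself the two versions agree, here is a rough Java sketch of both (the method names are mine); for any input, quadratic(a, b) == linear(a, b), because the double sum factors as shown above:

// O(n^2): multiply every pair, as in the original algorithm
static long quadratic(int[] a, int[] b) {
    long value = 0;
    for (int i = 0; i < a.length; i++)
        for (int j = 0; j < b.length; j++)
            value += (long) a[i] * b[j];
    return value;
}

// O(n): sum each array once, then multiply the two sums
static long linear(int[] a, int[] b) {
    long sumA = 0, sumB = 0;
    for (int x : a) sumA += x;
    for (int x : b) sumB += x;
    return sumA * sumB;
}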
This type of operation takes n² time if you compute each pairwise product explicitly, because every element of ArrayA has to be paired with every element of ArrayB:

ArrayA[0]*ArrayB[0],   ArrayA[0]*ArrayB[1],   ..., ArrayA[0]*ArrayB[n-1]
ArrayA[1]*ArrayB[0],   ArrayA[1]*ArrayB[1],   ..., ArrayA[1]*ArrayB[n-1]
...
ArrayA[n-1]*ArrayB[0], ArrayA[n-1]*ArrayB[1], ..., ArrayA[n-1]*ArrayB[n-1]

Computed that way, there's just no way to reduce the complexity; the O(n) solution above gets away with it only because the grand total happens to factor algebraically.
This algorithm appears to have a quadratic efficiency. (Why?)
To analyze complexity, you just have to count the number of operations.
Here, there are two nested loops:
for i in 0 to n-1 do
    for j in 0 to n-1 do
        operation() // Do something
    done
done
With i = 0, operation is run for every j in [0, n-1], that is, n times. Then i is incremented and the process repeats until i > n-1. That is, the operation is run n*n times in total.
So in the end, this code does n² operations, which is why it has quadratic efficiency.
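You can verify that count with a few lines of Java (the count variable is mine, standing in for operation()):

int n = 1000; // any n
long count = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        count++; // stands in for operation()
System.out.println(count == (long) n * n); // prints true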
Could someone help me out with the time complexity analysis on this pseudocode?
I'm looking for the worst-case complexity here, and I can't figure out if it's O(n^4), O(n^5) or something else entirely. If you could go into detail into how you solved it exactly, it would be much appreciated.
sum = 0
for i = 1 to n do
    for j = 1 to i*i do
        if j mod i == 0 then
            for k = 1 to j do
                sum = sum + 1
First loop: O(n).
Second loop: for a given i it runs i² times; i averages about n/2, so this is O(n²).
Third loop: the if test succeeds only when j is a multiple of i, i.e. for j = i, 2i, ..., i·i — that is i of the i² iterations, roughly a 1/n fraction. When it does run, it does j iterations, and j is on the order of i², so it's O(n²) as well.
So the total is O(n · n² · (1 + (1/n) · n²)) = O(n⁴). The 1/n comes from the fact that the third loop fires on only about a 1/n fraction of the second loop's iterations.
That's a ballpark estimate, but it can be made rigorous: for a fixed i the inner work is i + 2i + ... + i·i = i·(1 + 2 + ... + i) = i²(i+1)/2 = Θ(i³), and summing Θ(i³) over i = 1..n gives Θ(n⁴). You can also confirm it by running the code yourself, as shown below.
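For instance, here is a direct Java translation you could run (the method name work is mine). From the exact sum above, the total approaches n⁴/8 for large n:

// Direct translation of the pseudocode; returns the number of
// innermost additions performed.
static long work(int n) {
    long sum = 0;
    for (long i = 1; i <= n; i++)
        for (long j = 1; j <= i * i; j++)
            if (j % i == 0)
                for (long k = 1; k <= j; k++)
                    sum = sum + 1;
    return sum;
}

For example, work(100) should return about 1.29 × 10⁷, close to 100⁴/8 = 1.25 × 10⁷, and the ratio keeps approaching 1/8 as n grows.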
I'm in a Data Structures class now, and we're covering Big-O as a means of algorithm analysis. Unfortunately after many hours of study, I'm still somewhat confused. I understand what Big-O is, and several good code examples I found online make sense. However I have a homework question I don't understand. Any explanation of the following would be greatly appreciated.
Determine how many times the output statement is executed in each of
the following fragments (give a number in terms of n). Then indicate
whether the algorithm is O(n) or O(n²):
for (int i = 0; i < n; i++)
    for (int j = 0; j < i; j++)
        if (j % i == 0)
            System.out.println(i + " " + j);
Suppose n = 5. Then the values of i are 0, 1, 2, 3, and 4. Since the inner condition is j < i, the inner loop iterates 0, 1, 2, 3, and 4 times, respectively. Because of this, the total number of times the if comparison executes is 0+1+2+3+4 = 10. A mathematical formula for the sum of the integers from 0 to n-1 is n(n-1)/2. Expanded, this gives us n²/2 - n/2.
Therefore, the algorithm itself is O(n²).
For the number of times that something is printed, we need to look at when j % i == 0. Since 0 ≤ j < i, this can only be true when j = 0 (0 mod i is 0 for any i ≥ 1). So the println runs exactly once per iteration of the outer loop, except when i = 0, where the inner loop never executes at all.
Therefore, System.out.println is called n-1 times.
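If you want to double-check both counts empirically, here is a small Java sketch (the counter variables are mine):

int n = 5;
int comparisons = 0, prints = 0;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < i; j++) {
        comparisons++;          // the if test
        if (j % i == 0) {
            prints++;           // stands in for the println
        }
    }
}
// comparisons == n*(n-1)/2  (10 for n = 5)
// prints == n - 1           (4 for n = 5)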
A simple way to look at it is:
A single loop that runs n times has a complexity of O(n).
A loop that runs n times inside another loop that runs n times has a complexity of O(n²), and so on.
So the above nested loops have a complexity of O(n²).
This function appears to execute in quadratic time, O(n²).
Here's a trick for something like this: for each nested for loop that runs on the order of n times, add one to the exponent on n. If there were three such loops, the algorithm would run in cubic time, O(n³). If there is only one loop (no halving involved), it is linear, O(n). If the input is halved each time (recursively or iteratively), that loop instead contributes logarithmic time, O(log n), with base 2.
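For the halving case, a tiny Java example (the numbers are arbitrary):

int n = 1024;
int steps = 0;
while (n > 1) {
    n /= 2;    // input halved each iteration
    steps++;
}
// steps == 10 == log2(1024), hence O(log n)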
Hope that helps.
I'm having trouble with finding the complexity of recursive methods. I have an algorithm that sorts the elements of an array in ascending order. Basically what I did is write down each step in the algorithm and the best/worst case number of executions, then took the sum of each case and found Big-O/Big-Omega. But I'm not sure about the recursive call? Do I put down the number of times it was called inside the method, or the number of times it was called in total (which may vary)?
So suppose I have an array A = [5, 4, 3, 2, 1] (this would be the worst case, if I'm not mistaken), then I start by going through the array once in the first while loop (see algorithm below), then again backwards in the second while loop, then it's the recursive call. In total, I called my method once (original call), then a second time, and then a third time (which did not go into the if-statement). So that's 3 times for an array of n = 5 elements. But inside the method itself, the recursive call occurs once. I'm so confused! :S
Also, what is the difference when looking at time complexity vs space complexity? Any tips/advice would be helpful.
Thanks!
Here is the given algorithm:
Algorithm MyAlgorithm(A, n)
Input: Array of integers containing n elements
Output: Possibly modified Array A

done ← true
j ← 0
while j ≤ n - 2 do
    if A[j] > A[j + 1] then
        swap(A[j], A[j + 1])
        done ← false
    j ← j + 1
end while
j ← n - 1
while j ≥ 1 do
    if A[j] < A[j - 1] then
        swap(A[j - 1], A[j])
        done ← false
    j ← j - 1
end while
if ¬done then
    MyAlgorithm(A, n)
else
    return A
And here is my solution:
Statement                    Worst Case    Best Case
------------------------------------------------------------------
done = true                  1             1
j = 0                        1             1
j <= n-2                     n             n
A[j] > A[j+1]                n-1           n-1
swap(A[j], A[j+1])           n-1           0
done = false                 n-1           0
j = j + 1                    n-1           n-1
j = n - 1                    1             1
j >= 1                       n-1           n-1
A[j] < A[j-1]                n-1           n-1
swap(A[j-1], A[j])           n-1           0
done = false                 n-1           0
j = j - 1                    n-1           n-1
if ¬done                     1             1
MyAlgorithm(A, n)            1             0
return A                     1             1
------------------------------------------------------------------
Total:                       10n-2         6n

Complexity: f(n) is O(n); f(n) is Omega(n)
Also this is my first post here on stackoverflow so I'm not sure if I posted those correctly.
It looks like this algorithm is some kind of variation on the bubble sort. Assuming it works correctly, it should have a performance of O(n^2).
To analyze the performance, note that the body of the procedure (absent the recursion) takes O(n), so the total time taken by the algorithm is O(R·n), where R is the number of times the recursion is called before it finishes. Each of the two bubble passes leaves at least one more element at its final, sorted location, so each call fixes at least two elements, giving R ≤ n/2; therefore the overall algorithm is O(n²) worst case.
Unfortunately, the way recursion is used in your algorithm is not particularly useful for determining its performance: you could easily replace the recursion with an outer while loop around the two bubble passes which make up the rest of the procedure body (which might have avoided most of your confusion...).
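For instance, here is a rough Java sketch of that iterative version (not your assignment's required form, just to show the equivalence):

// Iterative form of MyAlgorithm: repeat the forward and backward
// bubble passes until a full round makes no swaps. O(n^2) worst case.
static void sortBothWays(int[] a) {
    int n = a.length;
    boolean done = false;
    while (!done) {
        done = true;
        for (int j = 0; j <= n - 2; j++)        // forward pass
            if (a[j] > a[j + 1]) { swap(a, j, j + 1); done = false; }
        for (int j = n - 1; j >= 1; j--)        // backward pass
            if (a[j] < a[j - 1]) { swap(a, j - 1, j); done = false; }
    }
}

static void swap(int[] a, int i, int j) {
    int t = a[i]; a[i] = a[j]; a[j] = t;
}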
Algorithms for which a recursive analysis is useful typically have some kind of divide-and-conquer structure, where the recursive procedure calls solve a smaller sub-problem. This is conspicuously lacking in your algorithm: the recursive call is always the same size as the original.