Reducing the time complexity of an algorithm with nested loops

I have the following algorithm which I want to rewrite so it has time complexity O(n). I am new to algorithms, but from my understanding, since the two for loops each do on the order of n iterations, the complexity will always be O(n^2). Is it even possible to reduce the complexity of this?
Algorithm example(ArrayA, ArrayB, n)
Input: 2 arrays of integers, ArrayA and ArrayB, both of length n
Output: integer
    value <- 0                                        1 operation
    for i <- 0 to n-1                                 n iterations
        for j <- 0 to n-1                             n iterations each, n^2 in total
            value <- value + (ArrayA[i] * ArrayB[j])  3 operations each, 3n^2 in total
    return value                                      1 operation
Total primitive operations: roughly 4n^2 + n + 2, giving it a time complexity of O(n^2).

By applying a bit of algebra:

value = sum over all pairs (i, j) of ArrayA[i] * ArrayB[j]
      = (ArrayA[0] + ... + ArrayA[n-1]) * (ArrayB[0] + ... + ArrayB[n-1])
      = sum_A * sum_B

So here is an algorithm which computes the same result in O(n) time:
sum_A ← 0
for i ← 0 to n-1
    sum_A ← sum_A + ArrayA[i]
sum_B ← 0
for j ← 0 to n-1
    sum_B ← sum_B + ArrayB[j]
return sum_A * sum_B
Generally speaking, an algorithm with nested loops cannot always be rewritten to reduce the time complexity; but in some cases you can do it, if you can identify something specific about the computation which means it can be done in a different way.
For sums like this, it's sometimes possible to compute the result more efficiently by writing something algebraically equivalent. So, put your mathematician's hat on when faced with such a problem.
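For concreteness, here is the same pair of algorithms as a runnable Python sketch (the function names are mine, for illustration):

def product_sum_quadratic(a, b):
    """O(n^2): multiply every pair explicitly."""
    value = 0
    for x in a:
        for y in b:
            value += x * y
    return value

def product_sum_linear(a, b):
    """O(n): sum(a[i] * b[j]) over all pairs factors into sum(a) * sum(b)."""
    return sum(a) * sum(b)

a = [1, 2, 3]
b = [4, 5, 6]
assert product_sum_quadratic(a, b) == product_sum_linear(a, b)  # both give 90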

This type of operation, if you compute every pairwise product, is going to run in n^2 time. The reason is that you have to combine each element of ArrayA with each element of ArrayB:

A[0]*B[0],   A[0]*B[1],   ..., A[0]*B[n-1]
A[1]*B[0],   A[1]*B[1],   ..., A[1]*B[n-1]
.
.
.
A[n-1]*B[0], A[n-1]*B[1], ..., A[n-1]*B[n-1]

If you genuinely need every individual product, there is no way to reduce the complexity; but as the answer above shows, the sum of all these products factors, so the final value can still be computed in O(n).

Related

Worst case time complexity for this stupid sort?

The code looks like:
for (int i = 1; i < N; i++) {
    if (a[i] < a[i-1]) {
        swap(i, i-1);
        i = 0;
    }
}
After trying out a few things, I figure the worst case is when the input array is in descending order. Then the number of compares is maximal, so we will consider only compares. It seems the total would be a sum of sums, i.e. {1+2+3+...+(n-1)} + {1+2+3+...+(n-2)} + {1+2+3+...+(n-3)} + ... + 1. If so, what would the big-O be?
If I am not on the right path, can someone point out what the big-O would be and how it can be derived? Cheers!
For starters, the summation
(1 + 2 + 3 + ... + n) + (1 + 2 + 3 + ... + n-1) + ... + 1
is not actually O(n). Instead, it's O(n^3). You can see this because each inner sum is at most 1 + 2 + ... + n = O(n^2), and there are n of them. You can more properly show that this summation is Θ(n^3) by looking at the first n/2 of these terms. Each of those terms is at least 1 + 2 + 3 + ... + n/2 = Θ(n^2), so there are n/2 copies of something that's Θ(n^2), giving a tight bound of Θ(n^3).
We can upper-bound the total runtime of this algorithm at O(n^3) by noting that every swap decreases the number of inversions in the array by one (an inversion is a pair of elements out of place). There can be at most O(n^2) inversions in an array, and a sorted array has no inversions in it (do you see why?), so there are at most O(n^2) passes over the array and each takes at most O(n) work. That collectively gives a bound of O(n^3).
Therefore, the Θ(n^3) worst-case runtime you've identified is asymptotically tight, so the algorithm runs in time O(n^3) and has worst-case runtime Θ(n^3).
Hope this helps!
It does one pass over the list per swap. The maximum number of swaps necessary is O(n * n) for a reversed list, and each pass is O(n).
Therefore the algorithm is O(n * n * n).
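If you want to see the cubic growth empirically, here is a small Python sketch of the posted loop that counts comparisons on a reversed list (the function name and counter are mine):

def stupid_sort(a):
    comparisons = 0
    i = 1
    while i < len(a):
        comparisons += 1
        if a[i] < a[i - 1]:
            a[i], a[i - 1] = a[i - 1], a[i]
            i = 0  # restart from the beginning, as in the posted code
        i += 1
    return comparisons

for n in (10, 20, 40):
    print(n, stupid_sort(list(range(n, 0, -1))))
# The counts grow roughly cubically (about 8x when n doubles),
# consistent with the Theta(n^3) analysis above.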
This is one half of the infamous Bubble Sort, which is O(N^2). A single pass like this is O(N), because the for loop goes from 1 to N; after one pass, you end up with the largest element at the end of the list and the rest of the list in some changed order. (Note, though, that the posted code resets i to 0 after each swap, which is exactly what makes its worst case worse than a single pass.) To be a proper Bubble Sort, it needs another loop inside this one to iterate j from 1 to N-i and do the same thing, with the if moved inside the inner loop.
Now you have two loops, one inside the other, and both go from 1 to N (more or less). You will have N * N, or N^2, iterations: thus O(N^2) for the Bubble Sort.
Now take your next step as a programmer: finish writing the Bubble Sort (see the sketch below) and make it work correctly. Try it with different lengths of list a and see how long it takes. Then never use it again. ;-)
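For reference, a sketch of what the finished Bubble Sort might look like in Python (names are illustrative):

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):          # outer pass counter
        for j in range(n - 1 - i):  # inner loop shrinks by one each pass
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]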

Algorithm Run Time Matrix

I am having trouble figuring out the tight bound and the lower bound for this pseudocode. Could anyone help?
Array S;
for i <-- 0 to n-1
    for j <-- 0 to n-1
        for k <-- 0 to n-1
            M[i][j] = M1[i][k]*M2[k][j]
return M
Thanks!
Three nested loops over n, with no option to exit early, mean a complexity of Θ(n^3); best and worst case are the same.
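For reference, here is the triple loop as runnable Python. Note that the posted pseudocode overwrites M[i][j] on every k; I assume the intended matrix-multiplication accumulation:

def matmul(M1, M2, n):
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                M[i][j] += M1[i][k] * M2[k][j]  # n^3 multiply-adds in total
    return M

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B, 2))  # [[19, 22], [43, 50]]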

What is the complexity of an arithmetic progression?

I don't really understand how to calculate the complexity of code. I was told that I need to look at the number of actions performed on each item. So when I have a loop that runs over an array based on the idea of an arithmetic progression (I want to calculate the sum from every index to the end of the array), meaning the first time I pass over n cells, the second time n-1 cells, and so on... why is the complexity considered O(n^2) and not O(n)?
As I see it, n + (n-1) + (n-2) + ... is xn - c, in other words O(n). SO WHY am I wrong?
As I see it, n + (n-1) + (n-2) + ... is xn - c, in other words O(n). SO WHY am I wrong?

Actually, that is not true. The sum of this arithmetic progression is n(n+1)/2 = O(n^2).
P.S. I have read your task: you only need one loop over the array, reusing the previous results, so you can solve it with O(n) complexity:
for i = 1 to n
    result[i] = a[i] + result[i-1]
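Since the task as stated asks for the sum from every index to the end of the array, the same trick runs backwards; a minimal Python sketch (names are mine):

def suffix_sums(a):
    n = len(a)
    result = [0] * n
    result[n - 1] = a[n - 1]
    for i in range(n - 2, -1, -1):
        result[i] = a[i] + result[i + 1]  # reuse the previous result: O(n) total
    return result

print(suffix_sums([1, 2, 3, 4]))  # [10, 9, 7, 4]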
What your code is being told to do is the following:
traverse the array from 1 to n
traverse the array from 2 to n
... and so on, until after n-1 total iterations it
traverses only the array's nth element
Notice that the number of cells traversed decreases by 1 each time: each traversal is driven by a loop whose starting point grows with i, and the whole thing is a function of n.
Concretely, the number of actions performed on the items of the array is:
for ( i = 1 to n )
    for ( j = i to n )
        traverse array[j] ;
Hence, the complexity of your code is O(n^2), and the counts clearly form an AP, the series n + (n-1) + ... + 1 with a common difference of 1.
I hope this is clear...
The time complexity is: 1 + 2 + ... + n.
This is equal to n(n+1)/2.
For example, for n = 3: 1 + 2 + 3 = 6
and 3(4)/2 = 12/2 = 6
n(n+1)/2 = (n^2 + n) / 2 which is O(n^2) because we can remove constant factors and lower order terms.
Because an arithmetic progression has a closed-form sum, computing the sum itself is O(1): the computation time does not depend on the number of elements.
If you were to use a loop instead, it would be O(n), as the execution time would be linear in the number of elements.
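A small Python sketch of the two options (assuming the goal is just to compute the sum 1 + 2 + ... + n):

def sum_closed_form(n):
    return n * (n + 1) // 2      # constant time: O(1)

def sum_loop(n):
    total = 0
    for k in range(1, n + 1):    # linear time: O(n)
        total += k
    return total

assert sum_closed_form(100) == sum_loop(100) == 5050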
You're adding up n numbers whose average value is about n/2, because they range from 1 to n. Thus n times n/2 = n^2/2. We don't care about the constant multiple, so O(n^2).
You are getting it wrong somewhere! The sum of an arithmetic progression is of the order of n^2.
To clear your doubts about arithmetic progressions, visit this link: http://www.mathsisfun.com/algebra/sequences-sums-arithmetic.html
And since you said you have difficulty finding the complexity of code, you can read these two links:
http://discrete.gr/complexity/
http://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html
They should be enough to get you going and help you understand how to find the complexity of most algorithms.

Finding complexity of recursive algorithm?

I'm having trouble with finding the complexity of recursive methods. I have an algorithm that sorts the elements of an array in ascending order. Basically what I did is write down each step in the algorithm and the best/worst case number of executions, then took the sum of each case and found Big-O/Big-Omega. But I'm not sure about the recursive call? Do I put down the number of times it was called inside the method, or the number of times it was called in total (which may vary)?
So suppose I have an array A = [5, 4, 3, 2, 1] (this would be the worst case, if I'm not mistaken), then I start by going through the array once in the first while loop (see algorithm below), then again backwards in the second while loop, then it's the recursive call. In total, I called my method once (original call), then a second time, and then a third time (which did not go into the if-statement). So that's 3 times for an array of n = 5 elements. But inside the method itself, the recursive call occurs once. I'm so confused! :S
Also, what is the difference when looking at time complexity vs space complexity? Any tips/advice would be helpful.
Thanks!
Here is the given algorithm:
Algorithm MyAlgorithm(A, n)
Input: Array of integers containing n elements
Output: Possibly modified array A
done ← true
j ← 0
while j ≤ n - 2 do
    if A[j] > A[j + 1] then
        swap(A[j], A[j + 1])
        done ← false
    j ← j + 1
end while
j ← n - 1
while j ≥ 1 do
    if A[j] < A[j - 1] then
        swap(A[j - 1], A[j])
        done ← false
    j ← j - 1
end while
if ¬done
    MyAlgorithm(A, n)
else
    return A
And here is my solution:
Statement                 Worst Case    Best Case
------------------------------------------------------------------
done ← true               1             1
j ← 0                     1             1
j ≤ n-2                   n             n
A[j] > A[j+1]             n-1           n-1
swap(A[j], A[j+1])        n-1           0
done ← false              n-1           0
j ← j + 1                 n-1           n-1
j ← n - 1                 1             1
j ≥ 1                     n-1           n-1
A[j] < A[j-1]             n-1           n-1
swap(A[j-1], A[j])        n-1           0
done ← false              n-1           0
j ← j - 1                 n-1           n-1
if ¬done                  1             1
MyAlgorithm(A, n)         1             0
return A                  1             1
------------------------------------------------------------------
Total:                    10n - 3       6n
Complexity: f(n) is O(n); f(n) is Omega(n)
Also this is my first post here on stackoverflow so I'm not sure if I posted those correctly.
It looks like this algorithm is some kind of variation on the bubble sort. Assuming it works correctly, it should have a performance of O(n^2).
To analyze the performance, note that the body of the procedure (absent the recursion) takes O(n), so the total time taken by the algorithm is O(R·n), where R is the number of times the recursion is called before it finishes. Each call runs two bubble passes, which between them leave at least the largest and smallest remaining elements at their final, sorted locations, so R ≤ n/2; therefore the overall algorithm is O(n^2) worst case.
Unfortunately, the way recursion is used in your algorithm is not particularly useful for determining its performance: you could easily replace the recursion with an outer while loop around the two bubble passes which make up the rest of the procedure body (which might have avoided most of your confusion...).
Algorithms for which a recursive analysis is useful typically have some kind of divide-and-conquer structure, where the recursive procedure calls solve a smaller sub-problem. This is conspicuously lacking in your algorithm: the recursive call is always the same size as the original.
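To make the suggested restructuring concrete, here is a Python sketch of the same procedure with the recursion replaced by an outer while loop (the function name is mine):

def my_algorithm_iterative(a):
    n = len(a)
    done = False
    while not done:                      # plays the role of the recursive call
        done = True
        for j in range(n - 1):           # forward bubble pass
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                done = False
        for j in range(n - 1, 0, -1):    # backward bubble pass
            if a[j] < a[j - 1]:
                a[j - 1], a[j] = a[j], a[j - 1]
                done = False
    return a

print(my_algorithm_iterative([5, 4, 3, 2, 1]))  # [1, 2, 3, 4, 5]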

Discrete Mathematics Big-O notation Algorithm Complexity

I can probably figure out part (b) if you can help me do part (a). I've been looking at this and similar problems all day, and I'm just having trouble grasping what to do with nested loops. For the first loop there are n iterations, for the second there are n-1, and for the third there are n-1... Am I thinking about this correctly?
Consider the following algorithm,
which takes as input a sequence of n integers a_1, a_2, ..., a_n
and produces as output a matrix M = {m_ij},
where m_ij is the minimum term
in the sequence of integers a_i, a_(i+1), ..., a_j for j >= i, and m_ij = 0 otherwise.
initialize M so that m_ij = a_i if j >= i and m_ij = 0 otherwise
for i := 1 to n do
    for j := i+1 to n do
        for k := i+1 to j do
            m[i][j] := min(m[i][j], a[k])
        end
    end
end
return M = {m[i][j]}
(a) Show that this algorithm uses Big-O(n^3) comparisons to compute the matrix M.
(b) Show that this algorithm uses Big-Omega(n^3) comparisons to compute the matrix M.
Using this fact and part (a), conclude that the algorithm uses Big-Theta(n^3) comparisons.
In part (a), you need to find an upper bound for the number of min operations.
To do so, observe that the algorithm above performs fewer min operations than the following:
for i = 1 to n
    for j = 1 to n   // bigger range than your algorithm
        for k = 1 to n   // bigger range than your algorithm
            (something with min)
The above performs exactly n^3 min operations, so your algorithm performs fewer than n^3 min operations.
From this we can conclude: #minOps <= 1 * n^3 (for each n > 10, where 10 is arbitrary).
By the definition of Big-O, this means the algorithm is O(n^3).
You said you can figure out (b) alone, so I'll let you try it :)
Hint: the middle loop has more iterations than for j = i+1 to n/2.
For each iteration of the outer loop, the two inner nested loops do on the order of (n-i)^2 work. The outer loop runs for i = 1 to n, so the total work is bounded by a series like 1^2 + 2^2 + 3^2 + ... + n^2. This summation equals n(n+1)(2n+1)/6; ignoring constant factors and lower-order terms, the order is O(n^3).
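If you want to check the bound empirically, here is a Python sketch that counts the min operations the pseudocode performs (names are mine):

def count_min_ops(n):
    ops = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for k in range(i + 1, j + 1):
                ops += 1  # one min comparison per innermost iteration
    return ops

for n in (10, 20, 40):
    print(n, count_min_ops(n), n**3)
# The ratio ops / n^3 settles near 1/6, i.e. the count is Theta(n^3).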
