I'm having trouble finding the complexity of recursive methods. I have an algorithm that sorts the elements of an array in ascending order. Basically what I did is write down each step in the algorithm and the best/worst case number of executions, then took the sum for each case and found Big-O/Big-Omega. But I'm not sure how to count the recursive call: do I put down the number of times it appears inside the method, or the number of times it is called in total (which may vary)?
So suppose I have an array A = [5, 4, 3, 2, 1] (this would be the worst case, if I'm not mistaken), then I start by going through the array once in the first while loop (see algorithm below), then again backwards in the second while loop, then it's the recursive call. In total, I called my method once (original call), then a second time, and then a third time (which did not go into the if-statement). So that's 3 times for an array of n = 5 elements. But inside the method itself, the recursive call occurs once. I'm so confused! :S
Also, what is the difference when looking at time complexity vs space complexity? Any tips/advice would be helpful.
Thanks!
Here is the given algorithm:
Algorithm MyAlgorithm(A, n)
Input: An array A of integers containing n elements
Output: Possibly modified Array A
done ← true
j ← 0
while j ≤ n - 2 do
if A[j] > A[j + 1] then
swap(A[j], A[j + 1])
done ← false
j ← j + 1
end while
j ← n - 1
while j ≥ 1 do
if A[j] < A[j - 1] then
swap(A[j - 1], A[j])
done ← false
j ← j - 1
end while
if ¬done then
MyAlgorithm(A, n)
else
return A
And here is my solution:
Statement Worst Case Best Case
------------------------------------------------------------------
done = true 1 1
j = 0 1 1
j <= n-2 n n
A[j] > A[j+1] n-1 n-1
swap(A[j], A[j+1]) n-1 0
done = false n-1 0
j = j + 1 n-1 n-1
j = n - 1 1 1
j >= 1 n-1 n-1
A[j] < A[j-1] n-1 n-1
swap(A[j-1], A[j]) n-1 0
done = false n-1 0
j = j - 1 n-1 n-1
if ¬done 1 1
MyAlgorithm(A, n) 1 0
return A 1 1
------------------------------------------------------------------
Total: 10n-2 6n
Complexity: f(n) is O(n) f(n) is Omega(n)
Also, this is my first post here on Stack Overflow, so I'm not sure if I posted those correctly.
It looks like this algorithm is some kind of variation on the bubble sort. Assuming it works correctly, it should have a performance of O(n^2).
To analyze the performance, note that the body of the procedure (absent the recursion) takes O(n), so the total time taken by the algorithm is O(R*n), where R is the number of times the recursion is called before it finishes. Since each bubble pass should leave at least one element at its final, sorted location, R ≤ n/2; therefore the overall algorithm is O(n^2) worst case.
Unfortunately, the way recursion is used in your algorithm is not particularly useful for determining its performance: you could easily replace the recursion with an outer while loop around the two bubble passes which make up the rest of the procedure body (which might have avoided most of your confusion...).
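For instance, here is a minimal sketch of that iterative rewrite in Python (my own code and naming, not part of the original post):

def cocktail_sort(a):
    # Iterative version of MyAlgorithm: repeat the two bubble
    # passes until a full round makes no swaps.
    n = len(a)
    done = False
    while not done:                      # replaces the tail recursion
        done = True
        for j in range(n - 1):           # forward pass
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                done = False
        for j in range(n - 1, 0, -1):    # backward pass
            if a[j] < a[j - 1]:
                a[j - 1], a[j] = a[j], a[j - 1]
                done = False
    return a

The number of times the while loop runs here is exactly the R in the O(R*n) argument above.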
Algorithms for which a recursive analysis is useful typically have some kind of divide-and-conquer structure, where the recursive procedure calls solve a smaller sub-problem. This is conspicuously lacking in your algorithm: the recursive call is always the same size as the original.
Related
I have the following algorithm which I want to rewrite so it has time complexity O(n). I am new to algorithms, but from my understanding, since the two for loops each do on the order of n iterations, the complexity will always be O(n^2). Is it even possible to reduce the complexity of this?
Algorithm example(ArrayA, ArrayB, n)
Input: 2 arrays of integers, ArrayA and ArrayB, both length n
Output: integer
value <- 0 1 operation
for i <- 0 to n-1 n-1 operations
for j <- 0 to n-1 (n-1)^2 operations
value <- value + (ArrayA[i] * ArrayB[j]) 3(n-1)^2 operations
return value 1 operation
Total primitive operations: n^2 + 2n - 1, giving it a time complexity of O(n^2).
By applying a bit of algebra: every term of the double sum is a product of one element from each array, so
sum_(i) sum_(j) ArrayA[i] * ArrayB[j] = (sum_(i) ArrayA[i]) * (sum_(j) ArrayB[j])
So here is an algorithm which computes the same result in O(n) time:
sum_A ← 0
for i ← 0 to n-1
sum_A ← sum_A + ArrayA[i]
sum_B ← 0
for j ← 0 to n-1
sum_B ← sum_B + ArrayB[j]
return sum_A * sum_B
Generally speaking, an algorithm with nested loops cannot always be rewritten to reduce the time complexity; but in some cases you can, if you can identify something specific about the computation that allows it to be done in a different way.
For sums like this, it's sometimes possible to compute the result more efficiently by writing something algebraically equivalent. So, put your mathematician's hat on when faced with such a problem.
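As a quick sanity check of that identity, a small Python sketch (the arrays and names are made up for illustration):

import random

def slow(A, B):
    # The original O(n^2) nested loops.
    value = 0
    for x in A:
        for y in B:
            value += x * y
    return value

def fast(A, B):
    # O(n): sum each array once, then multiply the sums.
    return sum(A) * sum(B)

A = [random.randint(-10, 10) for _ in range(50)]
B = [random.randint(-10, 10) for _ in range(50)]
assert slow(A, B) == fast(A, B)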
This type of operation is only ever going to run in n^2 time. The reason is that you have to combine each element of ArrayA with each element of ArrayB. For example:
A[0]*B[0], A[0]*B[1], ..., A[0]*B[n-1]
A[1]*B[0], A[1]*B[1], ..., A[1]*B[n-1]
.
.
.
A[n-1]*B[0], A[n-1]*B[1], ..., A[n-1]*B[n-1]
There's just no way to reduce the complexity.
Looking at the code below:
Algorithm sort
Declare A(1 to n)
n = length(A)
for i = 1 to n
for j = 1 to n-1 inclusive do
if A[i-1] > A[i] then
swap( A[i-1], A[i] )
end if
next j
next i
I would say that there are:
2 loops, both n, n*n = n^2 (n-1 truncated to n)
1 comparison, in the j loop, that will execute n^2 times
A swap that will execute n^2 times
There are also 2 additions for the loop counters, each executing up to n^2 times, so 2n^2
The answers given in a mark scheme:
Evaluation of algorithm
Comparisons
The only comparison appears in the j loop.
Since this loop will iterate a total of n^2 times, the comparison will execute exactly n^2 times.
Data swaps
There may be a swap operation carried out in the j loop.
Swap( A[i-1], A[i] ): each of these will happen n^2 times.
Therefore there are 2n^2 operations carried out within the j loop
The i loop has one addition operation incrementing i, which happens n times.
Adding these up, we get the number of addition operations, which is 2n^2 + n
As n gets very big, the n^2 term will dominate; therefore it is O(n^2)
NOTE: Calculations might include assignment operations, but these will not affect the overall time, so ignore them
Marking overview:
1 mark for identifying that the i loop will execute n times.
1 mark for identifying that the j loop will execute 2n^2 times (isn't this meant to be n*n = n^2, for i and j?)
1 mark for the correct number of calculations, 2n^2 + n (why is this not + 2n?)
1 mark for determining that the order will be dominated by n^2 as n gets very big, giving O(n^2) for the algorithm
Edit: As can be seen from the mark scheme, I am expected to count:
Loop numbers, but n-1 can be truncated to n
Comparisons e.g. if statements
Data swaps (counted as one statement, i.e. arr[i] = arr[i+1], temp = arr[i], etc. are considered one swap)
Calculations
Space - just n for array, etc.
Could someone kindly explain how these answers are derived?
Thank you!
Here's my take on the marking scheme, explicitly marking the operations they're counting. It seems they're counting assignments (but conveniently forgetting that it takes 2 or 3 assignments to do a swap). That explains why they count increment but not the [i-1] indexing.
Counting swaps
i loop runs n times
j loop runs n-1 times (~n^2-n)
swap (happens n^2 times) n^2
Counting additions (+=)
i loop runs n times
j loop runs n-1 times (~n^2)
increment j (happens n^2 times) n^2
increment i (happens n times) n
sum: 2n^2 + n
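If you want to check this counting empirically, here is a rough Python instrumentation (my own sketch; it only counts the operations the mark scheme counts, and assumes the worst case where every comparison leads to a swap):

def counted_ops(n):
    swaps = inc_i = inc_j = 0
    for i in range(n):            # i loop runs n times
        inc_i += 1                # increment of i
        for j in range(1, n):     # j loop runs n - 1 times per i
            inc_j += 1            # increment of j
            swaps += 1            # worst case: the swap always happens
    return swaps + inc_i + inc_j

for n in (10, 100, 1000):
    print(n, counted_ops(n), 2 * n * n + n)
    # The two columns agree up to the (n-1 -> n) truncation.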
My attempt at the Big-O of each of these two algorithms:
1) Algorithm threeD(matrix, n)
// a 3D matrix of size n x n x n
layer ← 0
while (layer < n)
row ← 0
while (row < layer)
col ← 0
while (col < row)
print matrix[layer][row][col]
col ← col + 1
done
row ← row + 1
done
layer ← layer * 2
done
O((n^2)log(n)) because the two outer loops are each O(N) and the innermost one seems to be O(log n)
2) Algorithm Magic(n)
//Integer, n > 0
i ← 0
while (i < n)
j ← 0
while (j < power(2,i))
j ← j + 1
done
i ← i + 1
done
O(N) for outer loop, O(2^n) for inner? = O(n(2^n))?
1. Algorithm
First of all: this algorithm never terminates, because layer is initialized to zero. layer is only ever multiplied by 2, so it will never get bigger than zero, let alone bigger than n.
To make this work, you have to start with layer > 0.
So let's start with layer = 1.
The time complexity can then be written as T(n) = T(n/2) + n^2.
You can see this as follows: on the last round of the outer loop, layer is at most n, and the inner loops do n^2 steps. Before that, layer was only half as big. So you do the n^2 steps on the last round of the outer loop, and all the rounds before that are written as T(n/2).
The master theorem gives you Theta(n^2).
2. Algorithm
You can just count the steps:
2^0 + 2^1 + 2^2 + ... + 2^(n-1) = sum_(i=0)^(n-1) 2^i = 2^n - 1
To see this simplification, just take a look at binary numbers: the sum of the steps corresponds to a binary number containing only 1's (like 1111 1111), and such a number equals 2^n - 1.
So the time complexity is Theta(2^n).
Note: neither of your Big-O bounds is wrong; these are just tighter bounds.
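To sanity-check both counts, here is a small Python sketch that simulates the loops and just counts steps (with layer started at 1 in the first algorithm, as suggested above):

def steps_threeD(n):
    steps, layer = 0, 1            # start layer at 1 so the loop terminates
    while layer < n:
        for row in range(layer):
            steps += row           # the innermost col loop runs `row` times
        layer *= 2
    return steps

def steps_magic(n):
    steps = 0
    for i in range(n):
        steps += 2 ** i            # the inner loop runs 2^i times
    return steps

for n in (16, 64, 256):
    print(n, steps_threeD(n), n * n)      # grows like Theta(n^2)
for n in (4, 8, 16):
    print(n, steps_magic(n), 2 ** n - 1)  # matches 2^n - 1 exactly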
I need to build a recurrence relation for the following algorithm (T(n) stands for the number of elementary actions) and find its time complexity:
Alg (n)
{
if (n < 3) return;
for i=1 to n
{
for j=i to 2i
{
for k=j-i to j-i+100
write (i, j, k);
}
}
for i=1 to 7
Alg(n-2);
}
I came up with this recurrence relation (I don't know if it's right):
T(n) = 1 if n < 3
T(n) = 7T(n-2) + 100n^2 otherwise.
I don't know how to get the time complexity, though.
Is my recurrence correct? What's the time complexity of this code?
Let's take a look at the code to see what the recurrence should be.
First, let's look at the loop:
for i=1 to n
{
for j=i to 2i
{
for k=j-i to j-i+100
write (i, j, k);
}
}
How much work does this do? Well, let's begin by simplifying it. Rather than having j count up from i to 2i, let's define a new variable j' that counts up from 0 to i. This means that j' = j - i, and so we get this:
for i=1 to n
{
for j' = 0 to i
{
for k=j' to j'+100
write (i, j' + i, k);
}
}
Ah, that's much better! Now, let's also rewrite k as k', where k' ranges from 1 to 100:
for i=1 to n
{
for j' = 0 to i
{
for k'= 1 to 100
write (i, j' + i, k' + j');
}
}
From this, it's easier to see that this loop has time complexity Θ(n^2), since the innermost loop does O(1) work, and the middle loop will run 1 + 2 + 3 + 4 + ... + n = Θ(n^2) times. Notice that it's not exactly 100n^2 because the summation isn't exactly n^2, but it is close.
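As a rough check, you can simulate the original loops in Python and count the calls to write:

def writes(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(i, 2 * i + 1):            # j = i to 2i: i + 1 values
            for k in range(j - i, j - i + 101):  # k = j-i to j-i+100: 101 values
                count += 1
    return count

for n in (10, 100, 300):
    w = writes(n)
    print(n, w, w / (n * n))   # the ratio settles around ~50.5, i.e. Theta(n^2)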
Now, let's look at the recursive part:
for i=1 to 7
Alg(n-2);
For starters, this is just plain silly! There's no reason you'd ever want to do something like this. But, that said, we can say that this is 7 calls to the algorithm on an input of size n - 2.
Accordingly, we get this recurrence relation:
T(n) = 7T(n - 2) + Θ(n^2) [if n ≥ 3]
T(n) = Θ(1) [otherwise]
Now that we have the recurrence, we can start to work out the time complexity. That ends up being a little bit tricky. If you think about how much work we'll end up doing, we'll get that
There is 1 call of size n.
There are 7 calls of size n - 2.
There are 49 calls of size n - 4.
There are 343 calls of size n - 6.
...
There are 7^k calls of size n - 2k.
From this, we immediately get a lower bound of Ω(7^(n/2)), since that's the number of calls that will get made. Each call does O(n^2) work, so we can get an upper bound of O(n^2 * 7^(n/2)). The true value lies somewhere in there, though I honestly don't know how to figure out what it is. Sorry about that!
Hope this helps!
A formal method is to do the following:
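For this particular recurrence, unrolling it works. After k levels,
T(n) = 7^k T(n - 2k) + 100 * sum_(m=0)^(k-1) 7^m (n - 2m)^2
and the base case is reached at k ≈ n/2. Substituting r = n/2 - m turns the sum into 7^(n/2) * sum_(r) 4r^2 / 7^r, and sum_(r) r^2 / 7^r converges to a constant. So the geometric factor swallows the polynomial work, and T(n) = Θ(7^(n/2)), which closes the gap between the Ω(7^(n/2)) and O(n^2 * 7^(n/2)) bounds given above.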
Alternatively, the prevailing order of growth can be inferred intuitively from the source code by looking at the number of recursive calls.
An algorithm with 2 recursive calls (each shrinking the input by only a constant) has a complexity on the order of 2^n; with 3 recursive calls, on the order of 3^n; and so on.
I'm trying to figure out how to give a worst-case time complexity. I'm not sure about my analysis. I have read that nested for loops are O(n^2); is this also correct for a for loop with a while loop inside it?
// A is an array of real numbers.
// The size of A is n. i,j are of type int, key is
// of type real.
Procedure IS(A)
for j = 2 to length[A]
{
key = A[ j ]
i = j-1
while i>0 and A[i]>key
{
A[i+1] = A[i]
i=i-1
}
A[i+1] = key
}
so far I have:
j=2 (+1 op)
i>0 (+n ops)
A[i] > key (+n ops)
so T(n) = 2n+1?
But I'm not sure if I have to go inside the while and for loops to analyze the worst-case time complexity...
Now I have to prove that it is tightly bound, that is Big theta.
I've read that nested for loops have Big O of n^2. Is this also true for Big Theta? If not how would I go about finding Big Theta?!
** Notation: C1 means C-sub-1, C2 means C-sub-2, and n0 means n-naught; all are positive real numbers.
To find T(n), I looked at the values of j and at how many times the while loop executed:
values of J: 2, 3, 4, ... n
Loop executes: 1, 2, 3, ... n
Analysis:
Take the summation of the while loop executions and recognize that it is (n(n+1))/2
I will assign this as my T(n) and prove it is tightly bounded by n^2.
That is, n(n+1)/2 = Θ(n^2)
Scratch work:
Find C1, C2, n0 ∈ R+ such that 0 ≤ C1(n^2) ≤ (n(n+1))/2 ≤ C2(n^2) for all n ≥ n0
To make 0 ≤ C1(n^2) true, C1 and n0 can be any positive reals
To make C1(n^2) ≤ (n(n+1))/2 hold for all n, C1 must be ≤ 1/2
To make (n(n+1))/2 ≤ C2(n^2) hold, C2 must be ≥ 1
Proof:
Find C1, C2, n0 ∈ R+ such that 0 ≤ C1(n^2) ≤ (n(n+1))/2 ≤ C2(n^2) for all n ≥ n0.
Let C1 = 1/2, C2 = 1 and n0 = 1.
1. Show that 0 ≤ C1(n^2) is true
C1(n^2) = n^2/2
Since n ≥ n0 = 1, n^2/2 ≥ n0^2/2 = 1/2 > 0
Therefore C1(n^2) ≥ 0 is proven true!
2. Show that C1(n^2) ≤ (n(n+1))/2 is true
C1(n^2) ≤ (n(n+1))/2
n^2/2 ≤ (n(n+1))/2
n^2 ≤ n(n+1)
n^2 ≤ n^2 + n
0 ≤ n
This we know is true since n ≥ n0 = 1
Therefore C1(n^2) ≤ (n(n+1))/2 is proven true!
3. Show that (n(n+1))/2 ≤ C2(n^2) is true
(n(n+1))/2 ≤ C2(n^2)
(n+1)/2 ≤ C2(n)
n+1 ≤ 2*C2*n
n+1 ≤ 2n
1 ≤ 2n - n = n
Also, we know this to be true since n ≥ n0 = 1
Hence by 1, 2 and 3, (n(n+1))/2 = Θ(n^2), since
0 ≤ C1(n^2) ≤ (n(n+1))/2 ≤ C2(n^2) for all n ≥ n0
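A quick numeric spot check of the sandwich in Python (no substitute for the proof, just reassurance):

for n in (1, 2, 10, 1000):
    t = n * (n + 1) / 2
    assert 0 <= 0.5 * n * n <= t <= 1.0 * n * n   # C1 = 1/2, C2 = 1, n0 = 1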
Tell me what you think, guys... I'm trying to understand this material and would like y'all's input!
You seem to be implementing the insertion sort algorithm, which Wikipedia claims is O(N^2).
Generally, you break down components based on your variable N rather than your constants when dealing with Big-O. In your case, all you need to do is look at the loops.
Your two loops are (worse cases):
for j=2 to length[A]
i=j-1
while i > 0
/*action*/
i=i-1
The outer loop is O(N), because it directly relates to the number of elements.
Notice how your inner loop depends on the progress of the outer loop. That means that (ignoring off-by-one issues) the inner and outer loops are related as follows:
j's      inner
value    loops
-----    -----
  2        1
  3        2
  4        3
 ...      ...
  N       N-1
-----    -----
total    (N-1)*N/2
So the total number of times that /*action*/ is encountered is (N^2 - N)/2, which is O(N^2).
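A short Python version of the procedure with a counter makes that figure visible on reverse-sorted input (the names are mine):

def count_actions(a):
    # Counts how many times /*action*/ (the shift in the while loop) runs.
    actions = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]        # this is the /*action*/
            actions += 1
            i -= 1
        a[i + 1] = key
    return actions

for n in (5, 10, 100):
    worst = list(range(n, 0, -1))                      # reverse-sorted input
    print(n, count_actions(worst), (n * n - n) // 2)   # the two columns match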
Looking at the number of nested loops isn't the best way to go about getting a solution. It's better to look at the "work" that's being done in the code, under a heavy load N. For example,
for(int i = 0; i < a.size(); i++)
{
    for(int j = 0; j < a.size(); j++)
    {
        // Do stuff
        i++;  // i advances along with j, so the two loops together
              // only perform on the order of N iterations in total
    }
}
is O(N).
A function f is in Big-Theta of g if it is both in Big-Oh of g and Big-Omega of g.
The worst case happens when the data in A is monotonically decreasing. Then, for every iteration of the outer loop, the while loop executes fully. If each statement contributed a time value of 1, then the total time would be 5*(1 + 2 + ... + (n - 2)) = 5*(n - 2)*(n - 1)/2. This gives a quadratic dependence on the data.
However, if the data in A is a monotonically increasing sequence, the condition A[i] > key will always fail, so the while loop exits immediately. The outer loop body then executes in constant time, N - 3 times, so the best case of f has a linear dependence on the data.
For the average case, we take the next number in A and find its place in the sorted range built up so far. On average, this number will be in the middle of that range, which implies the inner while loop will run half as often as in the worst case, again giving a quadratic dependence on the data.
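To see the linear best case next to the quadratic worst and average cases, you can count the inner-loop executions for the three kinds of input (a self-contained Python sketch, assuming 0-indexed arrays):

import random

def inner_loop_runs(a):
    runs = 0
    for j in range(1, len(a)):
        key, i = a[j], j - 1
        while i >= 0 and a[i] > key:   # fails immediately on sorted input
            a[i + 1] = a[i]
            runs += 1
            i -= 1
        a[i + 1] = key
    return runs

n = 200
print(inner_loop_runs(list(range(n))))         # sorted (best): 0 runs
print(inner_loop_runs(list(range(n, 0, -1))))  # reversed (worst): n(n-1)/2 runs
data = list(range(n))
random.shuffle(data)
print(inner_loop_runs(data))                   # random: about half the worst case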
Big O is (basically) about how many times the elements in your input will be looked at in order to complete a task.
For example, an O(n) algorithm will iterate through every element just once.
An O(1) algorithm will not have to iterate through every element at all; it will know exactly where in the array to look because it has an index. Examples of this are array indexing and hash-table lookup.
The reason a loop inside a loop is O(n^2) is that for every element, all of the elements must be iterated over again, giving n*n iterations. Changing the type of the loop has nothing to do with it, since it's essentially about the number of iterations.
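For contrast, here is a tiny Python illustration of the difference between scanning (O(n)) and indexed lookup (O(1)); the data is made up:

data = list(range(1_000_000))
position = {v: i for i, v in enumerate(data)}   # hash table: value -> index

def scan(target):
    # O(n): may have to look at every element.
    for i, v in enumerate(data):
        if v == target:
            return i

def indexed(target):
    # O(1) on average: the hash table jumps straight to the answer.
    return position.get(target)

assert scan(999_999) == indexed(999_999)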
There are algorithmic approaches that allow you to reduce the number of iterations you need. An example is "divide & conquer" algorithms like Quicksort, which, if I recall correctly, runs in O(n log n) on average.
It's tough to come up with a better alternative to your example without knowing more specifically what you're trying to accomplish.