I have a question. I have the following algorithm:
procedure summation(A[1...n])
    s = 0
    for i = 1 to n do
        j = min{max{i, A[i]}, n^3}
        s = s + j
    return s
I want to find the minimum and maximum (best-case and worst-case) running time of this algorithm using asymptotic Θ notation.
Any ideas on how to do that?
What do I have to look at in an algorithm to understand its complexity?
If you want to know how big-O notation and time complexity work, you might want to look at the following post: What is a plain English explanation of "Big O" notation?.
For the pseudocode that you showed, the complexity is O(n), where n is the length of the array.
Often you can determine the complexity just by looking at how many nested loops the algorithm has. Of course this is not always the case, but it can be used as a rule of thumb.
In the following example:
procedure summation(A[1...n][1...m])
    s = 0
    for i = 1 to n do
        for j = 1 to m do
            t = min{max{i, A[i][j]}, n^3}
            s = s + t
    return s
the complexity would be O(n·m), where m is the length of each inner array.
best or worst case
For the algorithm that you showed there is no best or worst case. It always takes the same time for arrays of the same length; the only thing that influences the run time is the length of the array.
An example where there is a best and a worst case is the following:
Let's say you need to find the location of a specific number in an array.
If your method is to go through the array from start to end, the best case is that the number is at the start; the worst case is that the number is at the end.
For a more detailed explanation look at the link.
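As a rough illustration, here is a minimal Python sketch of the linear search just described (the function name and return convention are my own):
def find_index(arr, target):
    # Scan from start to end.
    # Best case: target is at index 0 (1 comparison).
    # Worst case: target is at the last index or absent (n comparisons).
    for idx, value in enumerate(arr):
        if value == target:
            return idx
    return -1  # not found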
Cheers.
The best and the worst case are the same because the algorithm will run the "same way" every time no matter the input. So based on that we will calculate the time complexity of the algorithm using math:
T(n) = 1 + sum_{i=1}^{n} (3 + 2) + 1
T(n) = 2 + 5 * sum_{i=1}^{n} 1
T(n) = 2 + 5 * (n - 1 + 1)
T(n) = 5n + 2
The term (3 + 2) comes from the fact that inside the loop we have 5 distinct, measurable actions:
j = min{max{i, A[i]}, n^3} counts as three actions, because we have 2 comparisons and a value assignment to the variable j.
s = s + j counts as 2 actions, because we have one addition and a value assignment to the variable s.
Asymptotically: Θ(n)
How we calculate Θ(n):
We look at the result, which is 5n + 2, drop the constants so it becomes n, and then keep the fastest-growing ("biggest") term, which is n.
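For reference, here is a minimal Python version of the original procedure (my own translation of the pseudocode, just to make the constant work per iteration visible):
def summation(A):
    n = len(A)
    s = 0
    for i in range(1, n + 1):            # n iterations
        j = min(max(i, A[i - 1]), n**3)  # constant work per iteration
        s = s + j
    return s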
Other examples:
8n^3 + 5n + 2 -> Θ(n^3)
10·log n + n^4 + 7 -> Θ(n^4)
More info: http://bigocheatsheet.com/
I don't really understand how to calculate the complexity of a piece of code. I was told that I need to look at the number of actions that are done on each item in my code. So when I have a loop that runs over an array, and (based on the idea of an arithmetic progression) I want to calculate the sum from every index till the end of the array, the first pass goes over n cells, the second time over n-1 cells, and so on... why is the complexity considered O(n^2) and not O(n)?
As I see it, n + (n-1) + (n-2) + ... + (n-c) is x·n - c, in other words O(n). So why am I wrong?
Actually, that is not true. The sum of this arithmetic progression, n + (n-1) + ... + 1, is n(n+1)/2 = O(n^2).
P.S. I have read your task: you only need one loop over the array, reusing the previous result, so you can solve it with O(n) complexity (the sum from index i to the end is built from the sum starting at i+1):
result[n+1] = 0
for i = n down to 1
    result[i] = a[i] + result[i+1]
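A minimal Python sketch of that idea (my own illustration, using 0-based indexing):
def suffix_sums(a):
    n = len(a)
    result = [0] * (n + 1)          # result[n] is the empty suffix
    for i in range(n - 1, -1, -1):  # one pass from the end: O(n)
        result[i] = a[i] + result[i + 1]
    return result[:n]               # result[i] = a[i] + a[i+1] + ... + a[n-1]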
What your code is effectively doing is the following:
traverse the array from 1 to n
traverse the array from 2 to n
... and similarly, after a total of n-1 iterations,
traverse only the array's nth element
As you can see, the number of cells traversed decreases by 1 on each pass.
Each traversal is driven by an inner loop that starts at i, and the whole thing runs for i from 1 to n.
Concretely, the number of actions performed on the items of the array is given by:
for ( i = 1 to n )
for ( j = i to n )
traverse array[j] ;
Hence the complexity of your code is O(n^2), and the work clearly forms an arithmetic progression, as it is the series n + (n-1) + ... + 1 with a common difference of 1.
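To make the counting concrete, here is a small Python sketch (my own illustration) that counts how many times the innermost statement executes:
def count_inner_steps(n):
    steps = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):  # inner loop runs n - i + 1 times
            steps += 1
    return steps

# count_inner_steps(5) == 15 == 5 * 6 / 2, i.e. n*(n+1)/2 -> O(n^2)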
I hope it is clear...
The time complexity is: 1 + 2 + ... + n.
This is equal to n(n+1)/2.
For example, for n = 3: 1 + 2 + 3 = 6
and 3(4)/2 = 12/2 = 6
n(n+1)/2 = (n^2 + n) / 2 which is O(n^2) because we can remove constant factors and lower order terms.
As an arithmetic progression has a closed-form solution, its efficient computation is O(1): that is, its computation time does not depend on the number of elements.
If you were to use a loop instead, it would be O(n), as the execution time would be linear in the number of elements.
You're adding up n numbers whose average value is (n/2) because they range from 1 to n. Thus n times (n/2) = n^2 / 2. We don't care about the constant multiple, so O(n^2).
You are getting it wrong somewhere! The sum of this arithmetic progression is of the order of n^2.
To clear your doubts on arithmetic progression, visit this link: http://www.mathsisfun.com/algebra/sequences-sums-arithmetic.html
And since you said you have difficulty finding the complexity of code in general, you can read these two links:
http://discrete.gr/complexity/
http://www.cs.cmu.edu/~adamchik/15-121/lectures/Algorithmic%20Complexity/complexity.html
They are good enough to get you going and to help you understand how to find the complexity of most algorithms.
Given the below algorithm:
Algorithm Find-Max(A, n)
    Max-sf := -INFINITY
    for k := 1 to n do
        if (A[k] > Max-sf) then
            Max-sf := A[k]
        end if
    end for
The question is: on average, how many times is the variable Max-sf updated?
I am practicing algorithm analysis, and below is my attempt, but I am not sure about it, so I would like to ask for advice.
Let T(n) be the expected number of updates in a call to Find-Max with size = n.
T(n) = T(n-1) + 1/n
where 1/n is the probability that the largest element is at index n (in which case the last iteration performs an update). Therefore,
T(n-1) = T(n-2) + 1/(n-1)
T(n-2) = T(n-3) + 1/(n-2)
By telescoping,
T(n) = 1/n + 1/(n-1) + 1/(n-2) + ... + 1,
which is the harmonic series, so the average number of times the variable Max-sf is updated is Θ(log n).
This is how I would prove it.
So, I would like to ask 3 questions:
(1) Is the proof above correct?
(2) Is there a way to get precise value of the number of comparisons?
(3) Suppose that we use a divide-and-conquer method (using an idea similar to merge sort) instead of scanning the array; will the number of updates still be the same?
1) I'm not sure regarding your proof, but I find this one to be the most formal and convincing one.
2) The precise number of comparisons seems to be fixed. You always do n comparisons in the loop.
3) Regarding the divide-and-conquer option, it can't be better than the worst-case number of updates (which is n), since the recurrence behaves like:
T(n) = 2T(n/2) + 1
which results in T(n) = Θ(n) (for example, T(n) = 2n - 1 when T(1) = 1 and n is a power of 2).
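As a sanity check, here is a small Python simulation (my own, not part of the original answer) that estimates the average number of updates empirically and compares it against the harmonic number H_n = 1 + 1/2 + ... + 1/n:
import random

def count_updates(a):
    max_sf = float('-inf')
    updates = 0
    for x in a:                 # always exactly len(a) comparisons
        if x > max_sf:
            max_sf = x
            updates += 1
    return updates

n, trials = 1000, 2000
avg = sum(count_updates(random.sample(range(10**6), n)) for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))
print(avg, harmonic)  # the two values should be close (~7.5 for n = 1000)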
Our prof and various materials say that Summation(n) = n(n+1)/2 and hence is theta(n^2). But intuitively, we just need one loop to find the sum of the first n terms! So it has to be theta(n). I'm wondering what I am missing here?!
All of these answers are misunderstanding the problem, just like the original question: the point is not to measure the runtime complexity of an algorithm for summing integers; it is about how to reason about the complexity of an algorithm which takes i steps during each pass, for i in 1..n. Consider insertion sort: on step i, when inserting one member of the original list, the output list is i elements long, so the insert takes on the order of i steps. What is the complexity of insertion sort? It is the sum of all of those steps, i.e. the sum of i for i in 1..n. That sum is n(n+1)/2, which has an n^2 in it, thus insertion sort is O(n^2).
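For illustration, here is a minimal insertion sort sketch in Python (my own example; the per-insert step count is what drives the n(n+1)/2 total):
def insertion_sort(a):
    for i in range(1, len(a)):          # pass i inserts into a prefix of length i
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:    # up to i shifts in the worst case
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a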
The running time of this code is Θ(1) (assuming addition/subtraction and multiplication are constant-time operations):
result = n*(n + 1)/2 // This statement executes once
The running time of the following pseudocode, which is what you described, is indeed Θ(n):
result = 0
for i from 1 up to n:
result = result + i // This statement executes exactly n times
Here is another way to compute it which has a running time of Θ(n²):
result = 0
for i from 1 up to n:
for j from i up to n:
result = result + 1 // This statement executes exactly n*(n + 1)/2 times
All three of those code blocks compute the natural numbers' sum from 1 to n.
This Θ(n²) loop is probably the type you are being asked to analyse. Whenever you have a loop of the form:
for i from 1 up to n:
for j from i up to n:
// Some statements that run in constant time
You have a running time complexity of Θ(n²), because those statements execute exactly summation(n) times.
I think the problem is that you're incorrectly assuming that the summation formula has time complexity theta(n^2).
The formula has an n^2 in it, but it doesn't require a number of computations or amount of time proportional to n^2.
Summing everything up to n in a loop would be theta(n), as you say, because you would have to iterate through the loop n times.
However, calculating the result of the equation n(n+1)/2 would just be theta(1) as it's a single calculation that is performed once regardless of how big n is.
Summation(n) being n(n+1)/2 refers to the sum of the numbers from 1 to n. That is a mathematical formula and can be evaluated without a loop, in O(1) time. If you iterate over an array to sum all its values, that is an O(n) algorithm.
Somewhat similar to the Fibonacci sequence.
The running time of an algorithm is given by
T(n) = T(n-1) + T(n-2) + T(n-3)   if n > 3
T(n) = n                          otherwise
What is the order of this algorithm?
If calculated by the induction method, then
T(n) = T(n-1) + T(n-2) + T(n-3)
Let us assume T(n) to be some function a^n;
then a^n = a^(n-1) + a^(n-2) + a^(n-3)
=> a^3 = a^2 + a + 1
which also gives complex solutions; the roots of the above equation, according to my calculations, are
a = 1.839286755
a = 0.419643 - i ( 0.606291)
a = 0.419643 + i ( 0.606291)
Now, how can I proceed further or is there any other method for this?
If I remember correctly, once you have determined the roots of the characteristic equation, T(n) can be written as a linear combination of the powers of those roots:
T(n) = A1*root1^n + A2*root2^n + A3*root3^n
So I guess the dominant term here will be
(maxroot)^n, where maxroot is the maximum absolute value of your roots. So for your case it is ~ 1.839^n.
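A quick way to check this numerically (my own sketch, not part of the original answer) is to iterate the recurrence and watch the ratio T(n)/T(n-1), which converges to the dominant root ≈ 1.839:
def T(n_max):
    # T(n) = n for n <= 3, else T(n-1) + T(n-2) + T(n-3), computed iteratively
    t = [0, 1, 2, 3]
    for n in range(4, n_max + 1):
        t.append(t[n - 1] + t[n - 2] + t[n - 3])
    return t

vals = T(40)
print(vals[40] / vals[39])  # ~1.8393, the dominant root of a^3 = a^2 + a + 1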
Asymptotic analysis is done on the running times of programs; it tells us how the running time will grow with the input.
For Recurrence relations (like the one you mentioned), we use a two step process:
Estimate the running time using the recursion tree method.
Validate(Confirm) the estimate using the substitution method.
You can find an explanation of these methods in any algorithms text (e.g. Cormen).
It can be upper-bounded by 3 + 9 + 27 + ... + 3^n, which is O(3^n).
I am trying to prove the following worst-case scenario for the Quicksort algorithm but am having some trouble. Initially, we have an array of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be theta(n^2). To prove this, I've used the iteration method to solve the recurrence equation:
T(n) = T(ij) = m if j = 1
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1
T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci
So it looks like the recurrence is:
T(n) = jm + ci * ( (sum_{k=1}^{j} k) - 1 )
At this point, I'm a bit lost as to what to do. It looks like the summation at the end will result in roughly j^2 when expanded, but I need to show that it somehow equals n^2. Any explanation of how to continue with this would be appreciated.
Note that the quicksort worst-case scenario is when you have two subproblems of size 0 and n-1. In this scenario, you get one recurrence equation per level:
T(n) = T(n-1) + T(0)     <-- at the first level of the tree
T(n-1) = T(n-2) + T(0)   <-- at the second level of the tree
T(n-2) = T(n-3) + T(0)   <-- at the third level of the tree
.
.
.
Since each level also does a linear amount of partitioning work (proportional to n, n-1, n-2, ...), the sum of the costs over all levels is an arithmetic series:
T(n) = sum_{k=1}^{n} k = n(n+1)/2 ~ n^2   (for n -> +inf)
It is O(n^2).
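To see the quadratic behavior concretely, here is a small Python sketch (my own illustration) that counts partition comparisons when every partition splits into sizes 0 and n-1, e.g. a last-element pivot on an already sorted array:
def quicksort_comparisons(n):
    # Worst case: partitioning a subarray of size s costs s - 1 comparisons
    # and leaves subproblems of sizes 0 and s - 1.
    comparisons = 0
    size = n
    while size > 1:
        comparisons += size - 1
        size -= 1
    return comparisons

print(quicksort_comparisons(100))  # 4950 == 100 * 99 / 2, i.e. ~ n^2 / 2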
It's a problem of simple mathematics. The complexity, as you have calculated correctly, is
O(jm + ij^2)
What you have found is a parameterized complexity. The standard O(n^2) is contained in it as follows: assuming i = 1 you have a standard base case, so m = O(1) and j = n, and therefore we get O(n^2).
If you put ij = n you get O(nm/i + n^2/i). Now remember that m is a function of i, depending on what you use as the base-case algorithm, so m = f(i), which leaves O(n·f(i)/i + n^2/i). Also note that since there is no linear-time algorithm for general comparison-based sorting, f(i) = Ω(i·log i), which gives O(n·log i + n^2/i). So you have only one degree of freedom, namely i. Check that for no value of i can you reduce this below n·log n, which is the best possible bound for comparison-based sorting.
What confuses me is that you are doing a worst-case analysis of quicksort, and this is not how it is usually done. When you say "worst case" it implies you are using randomization, in which case the worst case will always be i = 1, and hence the worst-case bound is O(n^2). An elegant treatment is given in the randomized algorithms book by R. Motwani and P. Raghavan; alternatively, if you are a programmer, look at Cormen.
Now what I am confused is that you are doing some worst case analysis of quick sort. This is not the way its done. When you say worst case it implies you are using randomization in which case the worst case will always be when i=1 hence the worst case bound will be O(n^2). An elegant way to do this is explained in randomized algorithm book by R. Motwani and Raghavan alternatively if you are a programmer then you look at Cormen.