About time complexity and asymptotic growth of an algorithm

I've got a question about an algorithm's time complexity and asymptotic growth.
The pseudocode in question is:
1: function NAIVE(x, A)
2:     answer = 0
3:     n = length of A
4:     for I from - to n do
5:         aux = 1
6:         for j from 1 to I do
7:             aux = aux * x
8:         answer = answer + aux * A[I]
9:     return answer
I have to find an upper bound with O-notation and a lower bound with Ω-notation.
I got the time complexity f(n) = 5n^2 + 4n + 8 and g(n) = n^2.
My question is that I'm not sure about the running time of lines 6 to 8.
I got constant 2 and time n+1 for line 4, and constant 1 and time 1 for line 5.
I'm stuck after that. I tried it and got constant 2 and time n^2 + 1 for line 6, because it runs inside the outer for loop (a loop within a loop), so I thought it's n^2 + 1. Is that correct?
And for line 8, it has 3 constants and the running time is n^2. Is this correct? I'm not too sure about line 8. This is how I got f(n) = 5n^2 + 4n + 8.
Please help me complete this question!
I wonder whether my work is right or not.
Thank you

Let's go through this step by step.
The complexity of line 7 is T7 = 1.
The complexity of the surrounding for loop is T6(I) = I * T7 = I.
The complexity of line 5 is T5 = 1.
The complexity of line 8 is T8 = 1.
The complexity of the surrounding for loop (assuming that - stands for 0) is
T4(n) = Sum{I from 0 to n} (T5 + T6(I) + T8)
= Sum{I from 0 to n}(2 + I)
= Sum{I from 0 to n}(2) + Sum{I from 0 to n}(I)
= (n + 1) * 2 + (n+1)/2 * (0 + n)
= 2n + 2 + n^2/2 + n/2
= 1/2 n^2 + 5/2 n + 2
The complexity of the remaining lines is T2 = T3 = T9 = 1.
The complexity of the entire algorithm is
T(n) = T2 + T3 + T4(n) + T9
= 1 + 1 + 1/2 n^2 + 5/2 n + 2 + 1
= 1/2 n^2 + 5/2 n + 5
This runtime is in the complexity classes O(n^2) and Ω(n^2).
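As a concrete check, here is a small C sketch of NAIVE with a counter on the inner multiplication. The function name, the test values, and the 0-based index range 0..n-1 are my own additions for illustration; the pseudocode's index range is ambiguous.

#include <stdio.h>

/* Sketch of NAIVE(x, A): evaluates A[0] + A[1]*x + ... + A[n-1]*x^(n-1)
   naively, counting inner-loop multiplications in *inner. */
double naive(double x, const double A[], int n, long *inner) {
    double answer = 0;
    for (int I = 0; I < n; I++) {
        double aux = 1;
        for (int j = 1; j <= I; j++) {  /* runs I times */
            aux = aux * x;
            (*inner)++;
        }
        answer = answer + aux * A[I];
    }
    return answer;
}

int main(void) {
    double A[1000];
    for (int i = 0; i < 1000; i++) A[i] = 1.0;
    for (int n = 10; n <= 1000; n *= 10) {
        long inner = 0;
        naive(1.0, A, n, &inner);
        /* inner = 0 + 1 + ... + (n-1) = n(n-1)/2, i.e. quadratic in n */
        printf("n = %4d  inner multiplications = %ld\n", n, inner);
    }
    return 0;
}

Growing n by a factor of 10 multiplies the inner count by roughly 100, which is exactly the n^2 behaviour derived above.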

Related

What is the time complexity?

What is the time complexity for the following function?
for (int i = 0; i < a.size(); i++) {
    for (int j = i; j < a.size(); j++) {
        // ...
    }
}
I think it is less than O(n^2) because we aren't iterating over all of the elements in the second for loop. I believe the time complexity comes out to be something like this:
n[(n) + (n-1) + (n-2) + ... + (n-n)]
But when I solve this formula it comes out to be
n^2 - n + n^2 - 2n + n^2 - 3n + ... + n^2 - n^2
which doesn't seem correct at all. Can somebody tell me exactly how to solve this problem, and where I am wrong?
That is O(n^2). If you consider the iteration where i = a.size() - 1 and work your way backwards (i = a.size() - 2, i = a.size() - 3, etc.), you are looking at the following sum of iteration counts, where n = a.size():
1 + 2 + 3 + 4 + ... + n
The sum of this series is n(n+1)/2, which is O(n^2). Note that big-O notation ignores constants and takes the highest polynomial power when it is applied to a polynomial function.
It will run for:
1 + 2 + 3 + .. + n
which is 1/2 n(n+1) and gives us O(n^2).
Big-O notation keeps only the dominant term and neglects constants.
Big-O on its own is only enough to compare algorithms for the same problem, analyzed by the same standard, when the dominant terms are different.
If the dominant terms are the same, you need to compare big-Theta or the exact time complexity, which will show the minor differences.
Example
A:
for i = 1 .. n
    for j = i .. n
        ..
B:
for i = 1 .. n
    for j = 1 .. n
        ..
We have
Time(A) = 1/2 n(n+1) ~ O(n^2)
Time(B) = n^2 ~ O(n^2)
O(A) = O(B)
T(A) < T(B)
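To make the difference concrete, here is a small C sketch (my own illustration, not from the original answer) that counts the iterations of both variants:

#include <stdio.h>

int main(void) {
    int n = 100;
    long a = 0, b = 0;

    for (int i = 1; i <= n; i++)   /* variant A: j starts at i */
        for (int j = i; j <= n; j++)
            a++;

    for (int i = 1; i <= n; i++)   /* variant B: j starts at 1 */
        for (int j = 1; j <= n; j++)
            b++;

    printf("A: %ld  (n(n+1)/2 = %ld)\n", a, (long)n * (n + 1) / 2);
    printf("B: %ld  (n^2 = %ld)\n", b, (long)n * n);
    return 0;
}

For n = 100, A does 5050 iterations and B does 10000; both grow quadratically, but T(A) < T(B).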
Analysis
To visualize how we got 1 + 2 + 3 + .. + n:
for i = 1 .. n:
    print "(1 + "
    sum = 0
    for j = i .. n:
        sum++
    print sum, ") + "
will print the following:
(1+n) + (1+(n-1)) + .. + (1+3) + (1+2) + (1+1)
= (n+1) + n + (n-1) + .. + 4 + 3 + 2
= (1 + 2 + 3 + .. + n + (n+1)) - 1
= 1/2 (n+1)(n+2) - 1
= 1/2 n^2 + 3/2 n
Yes, the number of iterations is strictly less than n^2, but it's still Θ(n^2). It will eventually be greater than n^k for any k<2, and it will eventually be less than n^k for any k>2.
(As a side note, computer scientists often say big-O when they really mean big-Theta (Θ). It's technically correct to say that almost every algorithm you've seen has O(n!) running time; all reasonable algorithms have running times that grow no more quickly than n!. But it's not really useful to say that the complexity is O(n!) if it's also O(n log n), so by some kind of Gricean maxim we assume that when someone says an algorithm's complexity is O(f(x)), f(x) is as small as possible.)

Big O runtime for this algorithm?

Here's the pseudocode:
Baz(A) {
    big = −∞
    for i = 1 to length(A)
        for j = 1 to length(A) - i + 1
            sum = 0
            for k = j to j + i - 1
                sum = sum + A(k)
            if sum > big
                big = sum
    return big
}
So line 3 will be O(n) (n being the length of the array A).
I'm not sure what line 4 would be... I know it runs one time fewer on each pass, because i increases.
And I can't get line 6 without getting line 4...
All help is appreciated. Thanks in advance.
Let us first understand how the first two for loops work:
for i = 1 to length(A)
    for j = 1 to length(A) - i + 1
The first for loop will run from 1 to n (the length of array A), and the second for loop depends on the value of i. So when i = 1, the second for loop will run n times; when i increments to 2, the second for loop will run (n - 1) times; and so on down to 1.
So the second for loop will run as follows:
n + (n - 1) + (n - 2) + (n - 3) + .... + 1 times.
You can use the formula sum(1 to N) = N(N + 1)/2, which gives (N^2 + N)/2, so the big-O for these two loops is
O(n^2)
Now let us consider the third loop as well.
Your third for loop looks like this:
for k = j to j + i - 1
But this actually means
for k = 0 to i - 1 (you are just shifting the range of values by adding/subtracting j, but the number of times the loop runs does not change, as the difference remains the same).
So the third loop runs i times for each iteration of the second loop: once per pass for the n passes when i = 1, twice per pass for the (n - 1) passes when i = 2, and so on.
So you get:
n + 2(n - 1) + 3(n - 2) + 4(n - 3) + ....
= Sum{i from 1 to n} i * (n - i + 1)
= (n + 1) * Sum{i from 1 to n} i - Sum{i from 1 to n} i^2
= (n + 1) * n(n + 1)/2 - n(n + 1)(2n + 1)/6
= n(n + 1)(n + 2)/6
So your time complexity will be O(n^3) (big-O of n cubed).
Hope this helps!
Methodically, you can follow the steps using Sigma Notation:
Baz(A):
    big = −∞
    for i = 1 to length(A)
        for j = 1 to length(A) - i + 1
            sum = 0
            for k = j to j + i - 1
                sum = sum + A(k)
            if sum > big
                big = sum
    return big
For big-O, you need to look at the worst-case scenario.
The easiest way to find the big-O is to look at the most significant parts of the algorithm, which are usually its loops or recursion.
So we have this part of the algorithm, consisting of the loops:
for i = 1 to length(A)
    for j = 1 to length(A) - i + 1
        for k = j to j + i - 1
            sum = sum + A(k)
We have
SUM { SUM { i } for j = 1 to n-i+1 } for i = 1 to n
= 1/6 n (n+1) (n+2)
= (1/6 n^2 + 1/6 n)(n + 2)
= 1/6 n^3 + 2/6 n^2 + 1/6 n^2 + 2/6 n
= 1/6 n^3 + 3/6 n^2 + 2/6 n
= 1/6 n^3 + 1/2 n^2 + 1/3 n
T(n) ~ O(n^3)
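If you want to check the count empirically, here is a C sketch of Baz (my own translation of the pseudocode to 0-based arrays, with made-up test data) that counts executions of the innermost statement and compares the count with n(n+1)(n+2)/6:

#include <limits.h>
#include <stdio.h>

/* Brute-force maximum subarray sum, instrumented with a step counter. */
int baz(const int A[], int n, long *steps) {
    int big = INT_MIN;                  /* stand-in for -infinity */
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n - i + 1; j++) {
            int sum = 0;
            for (int k = j; k <= j + i - 1; k++) {
                sum += A[k - 1];        /* the pseudocode indexes from 1 */
                (*steps)++;
            }
            if (sum > big)
                big = sum;
        }
    return big;
}

int main(void) {
    enum { N = 50 };
    int A[N];
    for (int i = 0; i < N; i++) A[i] = (i % 7) - 3;
    long steps = 0;
    int big = baz(A, N, &steps);
    printf("big = %d, steps = %ld, n(n+1)(n+2)/6 = %ld\n",
           big, steps, (long)N * (N + 1) * (N + 2) / 6);
    return 0;
}

The two printed counts agree exactly, confirming the cubic growth.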

Complexity theory - sorting algorithm

I'm taking a course in complexity theory, and it needs some mathematical background that I have a problem with.
So while trying to do some practice, I got stuck on the example below:
1) for (i = 1; i < n; i++) {
2)     SmallPos = i;
3)     Smallest = Array[SmallPos];
4)     for (j = i+1; j <= n; j++)
5)         if (Array[j] < Smallest) {
6)             SmallPos = j;
7)             Smallest = Array[SmallPos];
           }
8)     Array[SmallPos] = Array[i];
9)     Array[i] = Smallest;
   }
Thus, the total computing time is:
T(n) = (n) + 4(n-1) + n(n+1)/2 - 1 + 3[n(n-1)/2]
     = n + 4n - 4 + (n^2 + n)/2 - 1 + (3n^2 - 3n)/2
     = 5n - 5 + (4n^2 - 2n)/2
     = 5n - 5 + 2n^2 - n
     = 2n^2 + 4n - 5
     = O(n^2)
What I don't understand, or am confused about, is line 4 being analyzed as n(n+1)/2 - 1, and line 5 as 3[n(n-1)/2].
I know that the sum of a positive series is n(first + last)/2, but when I tried to calculate it as I understood it, it gave me a different result.
For line 4, I calculated n((n-1) + 2)/2 according to n(first + last)/2, but here it's n(n+1)/2 - 1.
And the same for 3[n(n-1)/2]; I don't understand this either.
Also, here's what is written in the analysis; it could help if anyone can explain it to me:
Statement 1 is executed n times (n - 1 + 1); statements 2, 3, 8, and 9 (each representing O(1) time) are executed n - 1 times each, once on each pass through the outer loop. On the first pass through this loop with i = 1, statement 4 is executed n times; statement 5 is executed n - 1 times, and assuming a worst case where the elements of the array are in descending order, statements 6 and 7 (each O(1) time) are executed n - 1 times.
On the second pass through the outer loop with i = 2, statement 4 is executed n - 1 times and statements 5, 6, and 7 are executed n - 2 times, etc. Thus, statement 4 is executed (n) + (n-1) +... + 2 times and statements 5, 6, and 7 are executed (n-1) + (n-2) + ... + 2 + 1 times. The first sum is equal to n(n+1)/2 - 1, and the second is equal to n(n-1)/2.
Thus, the total computing time is:
T(n) = (n) + 4(n-1) + n(n+1)/2 - 1 + 3[n(n-1)/2]
     = n + 4n - 4 + (n^2 + n)/2 - 1 + (3n^2 - 3n)/2
     = 5n - 5 + (4n^2 - 2n)/2
     = 5n - 5 + 2n^2 - n
     = 2n^2 + 4n - 5
     = O(n^2)
here's the link for the file containing this example:
http://www.google.com.eg/url?sa=t&rct=j&q=Consider+the+sorting+algorithm+shown+below.++Find+the+number+of+instructions+executed+&source=web&cd=1&cad=rja&ved=0CB8QFjAA&url=http%3A%2F%2Fgcu.googlecode.com%2Ffiles%2FAnalysis%2520of%2520Algorithms%2520I.doc&ei=3H5wUNiOINDLswaO3ICYBQ&usg=AFQjCNEBqgrtQldfp6eqdfSY_EFKOe76yg
Line 4: as the analysis says, it is executed n + (n-1) + ... + 2 times. This is a sum of (n-1) terms. In the formula you use, n(first + last)/2, n represents the number of terms. If you apply the formula to your sequence of n-1 terms, then it should be (n-1)((n) + (2))/2 = (n^2 + n - 2)/2 = n(n+1)/2 - 1.
Line 5: the same formula can be used. As the analysis says, you have to calculate (n-1) + ... + 1. This is a sum of n-1 terms, with the first and last being n-1 and 1. The sum is given by (n-1)(n-1+1)/2 = n(n-1)/2. The factor 3 comes from the 3 lines (5, 6, 7) that are each executed n(n-1)/2 times.
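To see these counts concretely, here is a C sketch (my own instrumentation, not from the linked document) that runs the sort above on a worst-case descending array and counts executions of the line-4 loop test and the line-5 comparison:

#include <stdio.h>

int main(void) {
    enum { N = 20 };
    int Array[N + 1];                   /* 1-based, as in the question */
    for (int i = 1; i <= N; i++)
        Array[i] = N - i;               /* descending order: worst case */

    long c4 = 0, c5 = 0;                /* executions of lines 4 and 5 */
    for (int i = 1; i < N; i++) {
        int SmallPos = i;
        int Smallest = Array[SmallPos];
        for (int j = i + 1; j <= N; j++) {
            c4++;                       /* loop test that enters the body */
            c5++;                       /* the line-5 comparison */
            if (Array[j] < Smallest) {
                SmallPos = j;
                Smallest = Array[SmallPos];
            }
        }
        c4++;                           /* the final loop test that exits */
        Array[SmallPos] = Array[i];
        Array[i] = Smallest;
    }
    printf("line 4: %ld  (n(n+1)/2 - 1 = %ld)\n", c4, (long)N * (N + 1) / 2 - 1);
    printf("line 5: %ld  (n(n-1)/2 = %ld)\n", c5, (long)N * (N - 1) / 2);
    return 0;
}

For n = 20 this prints 209 and 190, matching n(n+1)/2 - 1 and n(n-1)/2.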

Time complexity of the following recurrence?

Find the time complexity (big-O bound) of the recurrence T(n) = T(⌊n⌋) + T(⌈n⌉) + 1.
How does the time complexity of this come out to be O(n)?
You probably meant T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1.
Let's calculate the first few values of T(n).
T(1) = 1
T(2) = 3
T(3) = 5
T(4) = 7
We can guess that T(n) = 2*n - 1.
Let's prove that by mathematical induction.
Basis
T(1) = 1
T(2) = 3
T(3) = 5
T(4) = 7
Inductive step
T(2*n) = T(⌊2*n/2⌋) + T(⌈2*n/2⌉) + 1
       = T(n) + T(n) + 1
       = (2*n - 1) + (2*n - 1) + 1
       = 4*n - 1
       = 2 * (2*n) - 1
T(2*n+1) = T(⌊(2*n+1)/2⌋) + T(⌈(2*n+1)/2⌉) + 1
         = T(n) + T(n+1) + 1
         = (2*n - 1) + (2*(n+1) - 1) + 1
         = 4*n + 1
         = 2 * (2*n+1) - 1
Since both the basis and the inductive step have been proved, it has now been proved by mathematical induction that T(n) = 2*n - 1 holds for all natural n.
T(n) = 2*n - 1 = O(n)
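As a quick numeric check (a sketch of my own, not part of the original answer), the recurrence can be evaluated directly in C and compared with the closed form:

#include <stdio.h>

/* T(n) = T(floor(n/2)) + T(ceil(n/2)) + 1, with T(1) = 1. */
long T(long n) {
    if (n == 1)
        return 1;
    return T(n / 2) + T(n - n / 2) + 1;  /* n/2 is the floor, n - n/2 the ceiling */
}

int main(void) {
    for (long n = 1; n <= 10; n++)
        printf("T(%ld) = %ld, 2*n - 1 = %ld\n", n, T(n), 2 * n - 1);
    return 0;
}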
What you have currently written does not make sense. Since n is usually taken to be a natural number, n = ⌊n⌋ = ⌈n⌉. The recurrence then reads: break a problem of size n down into two problems of size n, and spend time 1 doing that. The two new problems you just created will be split in turn, and so on; all you are doing is creating more work for yourself.

Confused on recurrence and Big O

I know that
T(n) = T(n/2) + Θ(1) can resolve to O(log n),
and my book said this is the case for binary search.
But how do you know that? Is it just by the fact that binary search cuts the problem in half each time, so it is O(log n)?
And for T(n) = 2T(n/2) + Θ(1),
why is the result O(n) and not O(log n), when the algorithm divides the problem in half each time as well?
Then can T(n) = 2T(n/2) + Θ(n) resolve to O(n log n)? I see that the O(n) is from Θ(n) and the O(log n) is from T(n/2).
I am really confused about how to determine the big-O of an algorithm, to the point that I don't even know how to word it properly. I hope my question makes sense.
Thanks in advance!
An intuitive solution for these problems is to see the result of unfolding the recursive formula.
Let's assume Θ(1) is actually 1 and Θ(n) is n, for simplicity.
T(n) = T(n/2) + 1 = T(n/4) + 1 + 1 = T(n/8) + 1 + 1 + 1 = ... =
     = T(1) + 1 + ... + 1 [log n times] = log n
T'(n) = 2T'(n/2) + 1 = 2(2T'(n/4) + 1) + 1 = 4T'(n/4) + 2 + 1 =
      = 8T'(n/8) + 4 + 2 + 1 = ... = 2^(log n) + 2^(log n - 1) + ... + 1 = n + n/2 + ... + 1 =
      = 2n - 1
T''(n) = 2T''(n/2) + n = 2(2T''(n/4) + n/2) + n = 4T''(n/4) + 2*(n/2) + n =
       = 8T''(n/8) + 4*(n/4) + 2*(n/2) + n = .... = n + n + .. + n [log n times] = n log n
To formally prove these equations, you should use induction. Assume T(n/2) = X, and using it, prove T(n) = Y, as expected.
For example, for the first formula [T(n) = T(n/2) + 1], assume the base case is T(1) = 0.
The base trivially holds for n = 1.
Assume T(k) <= log k for every k <= n-1, and prove it for k = n:
T(n) = T(n/2) + 1 <= (induction hypothesis) log(n/2) + 1 = log(n/2) + log(2) = log(n/2*2) = log(n)
I find that an easy way to understand these is to consider the time the algorithm spends on each step of the recurrence, and then to add them all up to find the total time. First, let's consider
T(n) = T(n/2) + O(1)
where n=64. Let's add up how much the algorithm takes at each step:
T(64) = T(32) + 1 ... 1 so far
T(32) = T(16) + 1 ... 2 so far
T(16) = T(08) + 1 ... 3 so far
T(08) = T(04) + 1 ... 4 so far
T(04) = T(02) + 1 ... 5 so far
T(02) = T(01) + 1 ... 6 so far
T(01) = 1 ... 7 total
So, we can see that the algorithm took '1' time at each step. And, since each step divides the input in half, the total work is the number of times the algorithm had to divide the input in two... which is log2 n.
Next, let's consider the case where
T(n) = 2T(n/2) + O(1)
However, to make things simpler, we'll build up from the base case T(1) = 1.
T(01) = 1 ... 1 so far
now we have to do T(01) twice and then add one, so
T(02) = 2T(01) + 1 ... (1*2)+1 = 3
now we have to do T(02) twice, and then add one, so
T(04) = 2T(02) + 1 ... (3*2)+1 = 7
T(08) = 2T(04) + 1 ... (7*2)+1 = 15
T(16) = 2T(08) + 1 ... (15*2)+1 = 31
T(32) = 2T(16) + 1 ... (31*2)+1 = 63
T(64) = 2T(32) + 1 ... (63*2)+1 = 127
So we can see that here the algorithm has done 127 units of work, which is the input multiplied by a constant (2) plus a constant (-1), i.e. 2n - 1, which is O(n). Basically this recursion corresponds to n times the geometric series (1 + 1/2 + 1/4 + 1/8 + ...), which sums to 2.
Try using this method on T(n) = 2T(n/2) + n and see if it makes more sense to you.
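Taking that suggestion, here is a small C sketch (my own addition, with base cases chosen to match the unfoldings above) that unrolls all three recurrences numerically for powers of two and prints them next to the closed forms; compile with -lm for log2:

#include <math.h>
#include <stdio.h>

long t1(long n) { return n == 1 ? 0 : t1(n / 2) + 1; }      /* T(n) = T(n/2) + 1  */
long t2(long n) { return n == 1 ? 1 : 2 * t2(n / 2) + 1; }  /* T(n) = 2T(n/2) + 1 */
long t3(long n) { return n == 1 ? 0 : 2 * t3(n / 2) + n; }  /* T(n) = 2T(n/2) + n */

int main(void) {
    for (long n = 2; n <= 1024; n *= 2) {
        double lg = log2((double)n);
        printf("n=%5ld  t1=%3ld (log n = %2.0f)  t2=%5ld (2n-1 = %5ld)  t3=%7ld (n log n = %7.0f)\n",
               n, t1(n), lg, t2(n), 2 * n - 1, t3(n), n * lg);
    }
    return 0;
}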
One visual way to find T(n) for a recursive equation is to sketch it as a tree; then:
T(n) = number of nodes * time spent at each node.
In your case T(n) = 2T(n/2) + 1.
I write the 1 in the node itself and expand it into two nodes T(n/2).
Note that T(n/2) = 2T(n/4) + 1, and again I do the same for it.
                       T(n)+1
                    /         \
            T(n/2)+1           T(n/2)+1
            /      \           /      \
     T(n/4)+1  T(n/4)+1  T(n/4)+1  T(n/4)+1
       ...        ...       ...       ...
     T(1)      T(1)      ......      T(1)
In this tree, the number of leaves equals
2^(height of the tree) = 2^log(n) = n, so the total number of nodes is 2n - 1.
Then T(n) = (2n - 1) * 1 = O(n).
