Assume that there is a job and many workers are available.
The following code may be a bad optimization idea, but it is just for analyzing the complexity.
A is a set of N workers
while (A is not empty)
{
    B = empty set
    foreach a1 in A
    {
        foreach a2 in A
        {
            b = merge(a1, a2)
            if (b works better than a1 and b works better than a2)
                add b to B
        }
    }
    A = B
}
The problem is that the probability of "b works better than both a1 and a2" is unknown.
So how can I estimate the time complexity of the above code?
For the two inner loops, the complexity is independent of the probability of "b works better than a1 and a2": every pair is tried regardless of how often the test succeeds.
However, the code seems a bit broken, as I don't see an exit condition for the while loop.
Ignoring the while loop, the time complexity of one pass is
O(|A| * |A|) = O(N^2).
If each pass of the while loop shrank A by at least one element, the recurrence for the total work would be
T(N) = T(N-1) + C, where C is the cost of one pass of the two inner loops, i.e. O(N^2), which expands to O(N^3) overall.
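As a rough illustration (not the asker's actual merge logic), here is a minimal C++ sketch that counts how many merge attempts a single pass of the while loop performs; Worker, merge and works_better are hypothetical stand-ins:

#include <cstdio>
#include <vector>

// Hypothetical stand-ins for the real worker type and operations.
struct Worker { double score; };

Worker merge(const Worker& x, const Worker& y) {
    return Worker{ (x.score + y.score) * 0.6 };   // placeholder merge rule
}

bool works_better(const Worker& b, const Worker& a) {
    return b.score > a.score;                     // placeholder comparison
}

int main() {
    const int N = 100;
    std::vector<Worker> A(N, Worker{1.0});

    long long mergeCalls = 0;
    std::vector<Worker> B;
    // One pass of the while loop: every ordered pair (a1, a2) is tried,
    // so the pass costs N*N merge attempts no matter how often the test succeeds.
    for (const Worker& a1 : A) {
        for (const Worker& a2 : A) {
            Worker b = merge(a1, a2);
            ++mergeCalls;
            if (works_better(b, a1) && works_better(b, a2))
                B.push_back(b);
        }
    }
    std::printf("merge attempts in one pass: %lld (N^2 = %d)\n", mergeCalls, N * N);
}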
I am a student taking a data structures and algorithms programming class, and I need help with an exam question I can't seem to get a grip on.
Here is the problem:
Consider the following algorithm func on a given array A = {a1, a2, ..., an}:
If n = 1, then return.
If a1 > an, then exchange a1 and an.
Run func on {a1, a2, ..., a(2n/3)}.
Run func on {a(n/3), a(n/3+1), ..., an}.
Run func on {a1, a2, ..., a(2n/3)}.
Give a recurrence for the worst-case running time of this algorithm.
Here is a link to an image of the assignment if my explanation wasnt clear: http://i.imgur.com/VftEgDX.png
I understand that it is a divide and conquer problem, but I'm having a hard time figuring out how to solve it.
Thank you :)
If a1 > an, then exchange a1 and an.
this is a constant-time operation, so O(1)
Run func on {a1, a2, ..., a(2n/3)}.
You invoke func recursively on 2n/3 of the array, so this costs T(2n/3).
Run func on {a(n/3), a(n/3+1), ..., an}.
Run func on {a1, a2, ..., a(2n/3)}.
Similar to the above, each one is T(2n/3)
This gives you a total of T(n) = 3T(2n/3) + O(1), with T(1) = O(1).
Now, we can get an asymptotic bound using case 1 of the Master theorem:
log_{3/2}(3) ~= 2.71
O(1) is in O(n^(2.71 - epsilon)) for some epsilon > 0, so case 1 applies, and we get that T(n) is in
Theta(n^(log_{3/2}(3))) ~= Theta(n^2.71)
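For intuition, here is a minimal C++ sketch of the procedure the question describes (it is the classic "stooge sort" pattern); the n <= 2 base case is added here so the sketch terminates:

#include <algorithm>
#include <cstdio>
#include <vector>

// Sorts A[lo..hi] inclusive.
void func(std::vector<int>& A, int lo, int hi) {
    if (A[lo] > A[hi])
        std::swap(A[lo], A[hi]);     // O(1) work per call
    int n = hi - lo + 1;
    if (n <= 2) return;              // base case (added so the recursion stops)
    int third = n / 3;
    func(A, lo, hi - third);         // first 2n/3 elements  -> T(2n/3)
    func(A, lo + third, hi);         // last 2n/3 elements   -> T(2n/3)
    func(A, lo, hi - third);         // first 2n/3 again     -> T(2n/3)
}

int main() {
    std::vector<int> A = {5, 1, 4, 2, 3};
    func(A, 0, (int)A.size() - 1);
    for (int x : A) std::printf("%d ", x);   // prints: 1 2 3 4 5
    std::printf("\n");
}

Each call does O(1) work of its own and makes three recursive calls on about 2n/3 elements, which is exactly the T(n) = 3T(2n/3) + O(1) recurrence analyzed above.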
Here is an algorithm for finding the kth smallest number in an n-element array, using the partition routine from Quicksort.
small(a, i, j, k)
{
    if (i == j) return a[i];          // only one element left
    else
    {
        m = partition(a, i, j);       // pivot ends up at index m; costs O(j - i)
        if (m == k) return a[m];      // pivot is the kth smallest
        else
        {
            if (m > k) return small(a, i, m - 1, k);   // kth smallest lies left of the pivot
            else       return small(a, m + 1, j, k);   // kth smallest lies right of the pivot
        }
    }
}
Here i and j are the starting and ending indices of the array (j - i = n, the number of elements in the array) and k is the rank of the element to be found (the kth smallest).
I want to know the best case and the average case of the above algorithm, and briefly how to derive them. I know we should not count the termination condition in the best case, and that the partition routine takes O(n). I do not want just asymptotic notation but an exact mathematical result if possible.
First of all, note that the array does not need to be sorted: partition rearranges the elements around a pivot and returns the pivot's final index, so this is the standard Quickselect algorithm, not a binary search over sorted data.
Anyway...
The best case is when the array is one element long (you return immediately because i == j), which is O(1). If you are not counting that termination case, the best case is when the very first partition lands exactly on k (m == k): no recursive calls are made, but you still pay for that one partition pass, so the best case is c*n for some constant c, i.e. O(n).
For the average case, let T(n) denote the time taken to solve a problem of size n, and assume each partition splits the remaining range roughly in half. We then have:
T(1) = c
T(n) = T(n/2) + c*n
where c*n is the cost of the partition pass over n elements (plus the constant-time comparisons of m with k, etc.). The general idea is that to solve a problem of size n, we pay one linear partition and are then left with a problem of half the size, because only one side of the partition is recursed into.
Expanding the recurrence gives a general formula; the sum is geometric, not logarithmic:
T(n) = c*n + T(n/2) = c*n + c*n/2 + T(n/4) = ... = c*(n + n/2 + n/4 + ... + 1) <= 2*c*n
That is about as exact as it gets without knowing the input distribution: on average the algorithm runs in O(n) time. The worst case is when every partition removes only one element, giving T(n) = T(n-1) + c*n = c*n*(n+1)/2 = O(n^2). A more precise average-case analysis depends on how you use it: the typical size of the array, the typical value of k, and where k tends to fall relative to the pivots.
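For reference, here is a minimal runnable C++ sketch of the same idea. The question's partition routine is not shown, so the Lomuto-style partition and the 0-based k used here are assumptions of the sketch:

#include <algorithm>
#include <cstdio>
#include <vector>

// Lomuto-style partition: uses a[j] as the pivot and returns its final index.
int partition(std::vector<int>& a, int i, int j) {
    int pivot = a[j];
    int m = i;
    for (int p = i; p < j; ++p)
        if (a[p] < pivot) std::swap(a[p], a[m++]);
    std::swap(a[j], a[m]);
    return m;
}

// Returns the element whose sorted position is k (0-based).
int small(std::vector<int>& a, int i, int j, int k) {
    if (i == j) return a[i];
    int m = partition(a, i, j);              // one O(j - i) pass
    if (m == k) return a[m];
    return (m > k) ? small(a, i, m - 1, k)   // kth smallest lies left of the pivot
                   : small(a, m + 1, j, k);  // kth smallest lies right of the pivot
}

int main() {
    std::vector<int> a = {7, 2, 9, 4, 1, 5, 8};
    std::printf("3rd smallest: %d\n", small(a, 0, (int)a.size() - 1, 2));   // prints 4
}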
I need to find the solution of the recurrence, for n a power of two, where T(n) = 3T(n/2) + n for n > 1 and T(n) = 1 otherwise.
Using the substitution n = 2^m, S(m) = T(2^m), I can get it down to:
S(m) = 2^m + 3*2^(m-1) + 3^2*2^(m-2) + ... + 3^(m-1)*2^1 + 3^m
But I have no idea how to simplify that.
These types of recurrences are most easily solved by the Master Theorem for the analysis of algorithms, which is stated as follows:
Let a be an integer greater than or equal to 1, b be a real number greater than 1, and c be a positive real number. Given a recurrence of the form
T(n) = a * T(n/b) + n^c for n > 1, then for n a power of b:
if log_b(a) < c, T(n) = Θ(n^c);
if log_b(a) = c, T(n) = Θ(n^c * log n);
if log_b(a) > c, T(n) = Θ(n^(log_b(a))).
English translation of your recurrence
The most critical thing to understand in Master Theorem is the constants a, b, and c mentioned in the recurrence. Let's take your own recurrence - T(n) = 3T(n/2) + n - for example.
This recurrence is actually saying that the algorithm represented by it is such that,
(Time to solve a problem of size n) = (Time taken to solve 3 problems of size n/2) + n
The n at the end is the cost of merging the results of those 3 n/2 sized problems.
Now, intuitively you can understand that:
if the cost of "solving 3 problems of size n/2" has more weight than "n" then the first item will determine the overall complexity;
if the cost "n" has more weight than "solving 3 problems of size n/2" then the second item will determine the overall complexity; and,
if both parts are of same weight then solving the sub-problems and merging their results will have an overall compounded weight.
From the above three intuitive understanding, only the three cases of Master Theorem arise.
In your example, a = 3, b = 2 and c = 1. So it falls into case 3, since log_b(a) = log_2(3), which is greater than 1 (the value of c).
The complexity is therefore straightforward: Θ(n^(log_b(a))) = Θ(n^(log_2(3))).
You can solve this using the Master theorem, but also by expanding the recursion tree, in the following way:
At the root of the recursion tree, you have n work.
At the next level, the tree splits into three parts, and in each part the work is n/2.
Keep going until you reach the leaves. The work in each leaf is O(1) = O(n / 2^k), reached when n = 2^k, i.e. after k = log_2(n) levels.
Note that at level m there are 3^m subproblems.
Now combine all the levels, using the geometric progression formula and the rules of logarithms. In the end you get:
T(n) = 3T(n/2) + n = 3*n^(log_2(3)) - 2n
For the full calculation, have a look at page 60 of http://www.cs.columbia.edu/~cs4205/files/CM2.pdf.
And maybe you should have asked this over at https://math.stackexchange.com/.
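As a quick sanity check on that closed form, a small C++ program can compare the recurrence T(n) = 3T(n/2) + n against 3*n^(log_2(3)) - 2n for powers of two (assuming T(1) = 1 as in the question):

#include <cmath>
#include <cstdio>

int main() {
    const double log2_3 = std::log2(3.0);
    long long T = 1;                           // T(1) = 1
    for (long long n = 2; n <= (1LL << 20); n *= 2) {
        T = 3 * T + n;                         // T(n) = 3*T(n/2) + n
        double closed = 3.0 * std::pow((double)n, log2_3) - 2.0 * n;
        std::printf("n=%8lld  T(n)=%12lld  3*n^(log2 3) - 2n = %14.0f\n", n, T, closed);
    }
}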
Problems like this can be solved using the Master theorem.
In your case a = 3, b = 2 and f(n) = n.
So c = log_b(a) = log_2(3), which is bigger than 1, and therefore you fall into the first case. So your complexity is:
O(n^{log_2(3)}) = O(n^{1.58})
I am learning about big O and recurrences.
I encountered a problem that mentioned,
T(n) = { 0, n = 1 ; T(n-1) + c, n > 1 }
Can anyone tell me how to get to O(n^2) from this?
The function in your question has complexity O(n). If it were O(n^2), it would look like this:
T(x) = { 1, x = 0 ; n + T(x-1) , x > 0 }
where n is the number of calculations done for T(x) when x is not 0.
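To make that concrete, here is a hypothetical C++ sketch of a function matching that second recurrence: each call does n units of work and then recurses on x-1, so starting it at x = n gives n levels of n work each, i.e. O(n^2):

#include <cstdio>

long long steps = 0;                          // counts the basic operations performed

// Hypothetical function matching T(x) = n + T(x-1), T(0) = 1.
void f(int x, int n) {
    if (x == 0) { ++steps; return; }          // T(0) = 1
    for (int i = 0; i < n; ++i) ++steps;      // n units of work at this level
    f(x - 1, n);                              // recurse on a problem one smaller
}

int main() {
    const int n = 1000;
    f(n, n);                                  // start with x = n
    std::printf("steps = %lld, n^2 = %d\n", steps, n * n);   // steps is about n^2
}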
I do not quite understand what you are trying to ask. But, typically, O(n^2) algorithms have their main operation executed inside two levels of nested loops, like:
for (int a = 0; a < n; a++) {
    for (int b = 0; b < n; b++) {
        /* some of the main operations of the algorithm */
    }
}
Similarly, three levels of nested loops containing the main operations of the algorithm are likely to give a complexity of O(n^3), and so on.
(Note: there are exceptions to this rule of thumb.)
There are three arrays a1, a2, a3 of size n. A function searches for a common number in these arrays.
The algorithm is as follows:
foreach n in a1
    if n is found in a2
        if n is found in a3
            return true
return false
My guess is that the worst case is the following: a1 and a2 are equal, and a3 does not contain any number in common with a1.
The complexity of iterating through array a1 is O(n).
The complexity of searching a2 or a3 is f(n) (we do not know how they are searched).
My guess is that the overall worst-case complexity would be:
n * f(n) * f(n) = n * (f(n))^2
I was told that it is wrong.
What is the correct answer then?
n * f(n) * f(n) = n * (f(n))^2
I was told that it is wrong. What is the correct answer then?
The correct answer for the given algorithm is:
n * (f(n) + f(n)) = O(n * f(n))
You don't search the a3 array f(n) times for each element of a1, so you should use + instead of *.
Place the elements of a2 into a set s2 and the elements of a3 into a set s3. Both of these operations are linear in the number of elements of each array. Then, iterate over a1 and check if the element is in s2 and s3. The lookup is constant time. So the best achievable complexity of the whole algorithm is:
O(n1 + n2 + n3)
Where n1 is the number of elements of a1, and so on for n2 and n3. In other words, the algorithm is linear in the number of elements.
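A minimal C++ sketch of that set-based approach, assuming the arrays hold ints and we only need to know whether some common value exists:

#include <cstdio>
#include <unordered_set>
#include <vector>

// Returns true if some value appears in all three arrays.
// Building the sets is linear; each lookup is expected O(1),
// so the whole function is O(n1 + n2 + n3) on average.
bool commonNumber(const std::vector<int>& a1,
                  const std::vector<int>& a2,
                  const std::vector<int>& a3) {
    std::unordered_set<int> s2(a2.begin(), a2.end());
    std::unordered_set<int> s3(a3.begin(), a3.end());
    for (int x : a1)
        if (s2.count(x) && s3.count(x))
            return true;
    return false;
}

int main() {
    std::vector<int> a1 = {4, 8, 15}, a2 = {16, 8, 23}, a3 = {42, 8};
    std::printf("%s\n", commonNumber(a1, a2, a3) ? "true" : "false");   // prints: true
}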
IMO the worst-case complexity can be brought down to n*log(n): you sort each of the arrays and then compare.
The important thing to note here is: what is the running time of the "is found" check?
There are two possible answers:
Case 1: if the a2 and a3 lists are sorted,
then each check is a log(n)-time binary search, and each element of a1 triggers at most two of them, so you get n * 2 * log(n) = O(n log n).
Case 2: if the lists are unordered,
then each search takes n time (where n is the length of each list), so you get n * 2n = O(n^2).
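A sketch of case 1 in C++, sorting copies of a2 and a3 up front and then binary-searching them (the sort itself already costs O(n log n)):

#include <algorithm>
#include <cstdio>
#include <vector>

// Case 1 from above: sort a2 and a3 once, then binary-search them
// for each element of a1. Total cost: O(n log n).
bool commonNumberSorted(const std::vector<int>& a1,
                        std::vector<int> a2, std::vector<int> a3) {
    std::sort(a2.begin(), a2.end());
    std::sort(a3.begin(), a3.end());
    for (int x : a1)
        if (std::binary_search(a2.begin(), a2.end(), x) &&
            std::binary_search(a3.begin(), a3.end(), x))
            return true;
    return false;
}

int main() {
    std::vector<int> a1 = {3, 7, 11}, a2 = {11, 5, 2}, a3 = {9, 11, 6};
    std::printf("%s\n", commonNumberSorted(a1, a2, a3) ? "true" : "false");   // prints: true
}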
For the given algorithm:
for each item of a1 you loop through the 2 other arrays (2*f(n)), so it's n * 2*f(n) = O(n * f(n)).
BTW, the best way to do it is:
Keep a set or hash of the items in the first array.
Then go through the other 2 arrays and check whether their items are already in it.
Storing items in a hash and looking them up is O(1).
And you're just looping through the 3 arrays once each, so the complexity is O(n), linear in the total number of elements.
Well, the quick answer is O(n^2), assuming the arrays all have the same length n. The worst case is that the element we are looking for is found in a2 only at the last position, so we scan the whole array, and the same for a3 (or it doesn't exist in a3 at all). If this happens for every element of a1, we scan both of the other arrays every time, so the total complexity is of order n * 2n, i.e. O(n^2).
So in the worst case you described, where a1 and a2 are equal and a3 doesn't contain any numbers in common with them, for each n in a1 you will search all of a2 and all of a3.
So the running time will be proportional to 2n^2. That is, it would be the same as writing:
int jcnt = 0, kcnt = 0;
for (int i = 0; i < n; ++i)     // one iteration per element of a1
{
    for (int j = 0; j < n; ++j) // scan all of a2
    {
        ++jcnt;
    }
    for (int k = 0; k < n; ++k) // scan all of a3
    {
        ++kcnt;
    }
}
int total = jcnt + kcnt;
You'll find that total will be equal to 2n^2.
Assuming, of course, that the arrays are unordered. If a2 and a3 are ordered and you can do binary search, then it would be 2n(log n).