Worst-case running time of a divide and conquer algorithm

I am a student taking a data structures and algorithms class, and I need help with an exam question I can't seem to get a grip on.
Here is the problem:
Consider the following algorithm func on a given array A = {a_1, a_2, ..., a_n}:
If n = 1, then return.
If a_1 > a_n, then exchange a_1 and a_n.
Run func on {a_1, a_2, ..., a_{2n/3}}.
Run func on {a_{n/3}, a_{(n/3)+1}, ..., a_n}.
Run func on {a_1, a_2, ..., a_{2n/3}}.
Give a recurrence for the worst-case running time of this algorithm.
Here is a link to an image of the assignment in case my explanation wasn't clear: http://i.imgur.com/VftEgDX.png
I understand that it is a divide and conquer problem, but I'm having a hard time figuring out how to solve it.
Thank you :)

If a_1 > a_n, then exchange a_1 and a_n.
This is a constant-time operation, so it contributes O(1).
Run func on {a_1, a_2, ..., a_{2n/3}}.
You invoke func recursively on 2n/3 of the array, which contributes T(2n/3).
Run func on {a_{n/3}, a_{(n/3)+1}, ..., a_n}.
Run func on {a_1, a_2, ..., a_{2n/3}}.
Similar to the above, each of these contributes another T(2n/3).
This gives a total of T(n) = 3T(2n/3) + O(1), with T(1) = O(1).
Now we can get an asymptotic bound using case 1 of the master theorem. Here a = 3 and b = 3/2, so
log_{3/2}(3) ≈ 2.7.
Since f(n) = O(1) is in O(n^(2.7 − ε)) for some ε > 0, case 1 applies, and we get that T(n) is in
Θ(n^(log_{3/2} 3)) ≈ Θ(n^2.7).
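If it helps to see the recurrence come out of actual code, here is a minimal Python sketch of the algorithm described above. The index bookkeeping, the n == 2 base case, and the call counter are my own additions and not part of the assignment:

def func(A, lo, hi, counter):
    # counter[0] counts calls to func, i.e. the work the recurrence T(n) measures
    counter[0] += 1
    n = hi - lo + 1
    if n == 1:
        return
    if A[lo] > A[hi]:
        A[lo], A[hi] = A[hi], A[lo]   # exchange a_1 and a_n of the current subarray
    if n == 2:
        return                        # my addition: avoids recursing on a subarray of the same size
    third = n // 3
    func(A, lo, hi - third, counter)  # first ~2n/3 elements
    func(A, lo + third, hi, counter)  # last ~2n/3 elements
    func(A, lo, hi - third, counter)  # first ~2n/3 elements again

# call counts should grow roughly like n^(log_{3/2} 3) ~ n^2.7
for n in (9, 27, 81):
    counter = [0]
    func(list(range(n, 0, -1)), 0, n - 1, counter)
    print(n, counter[0])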


Calculate the recurrence using repeated unfolding

I am trying to calculate T(n) = 2T(n/2) + n(log n)^2.
Following the unfolding steps I got:
T(n) = 2^k T(n/2^k) + n(log(n/2^(k-1)))^2 + n(log(n/2^(k-2)))^2 + … + n(log(n/2))^2 + n(log n)^2
When n = 2^k this becomes a summation, but I have no idea how to simplify it and get the Θ() notation.
Can anyone help? Thanks a lot.
The summation you have doesn't look quite right to me. Let's re-derive it. Unfolding the recurrence gives
T(n) = 2^m T(n/2^m) + n [ (log n)^2 + (log(n/2))^2 + … + (log(n/2^(m-1)))^2 ]
after m iterations. Let's assume the stopping condition is n = 1 (without loss of generality), i.e. m = log n, with logs to base 2:
T(n) = n T(1) + n Σ_{i=0}^{log n − 1} (log n − i)^2
where we have employed two of the logarithm rules, log(a/b) = log a − log b and log 2^i = i. As you can see, the summation is in fact over the "free indices" and not the logs themselves: substituting j = log n − i gives
T(n) = n T(1) + n Σ_{j=1}^{log n} j^2
Using the integer power sum Σ_{j=1}^{M} j^2 = M(M+1)(2M+1)/6, we get:
T(n) = n T(1) + n (log n)(log n + 1)(2 log n + 1)/6
To evaluate the Θ-notation, the highest order term is n(log n)^3 / 3, so T(n) = Θ(n (log n)^3).
If you have read the Master theorem, you will realise that the question you have asked is actually the 2nd case of the Master theorem.
Here a = 2, b = 2, and f(n) = Θ(n^(c_crit) (log n)^k), where k = 2 and c_crit = log_b(a) = 1.
So, by the Master theorem, T(n) = Θ(n^(c_crit) (log n)^(k+1)) = Θ(n (log n)^3).
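If you want to sanity-check the Θ(n (log n)^3) answer numerically, here is a small Python script (my own illustration, not part of either answer) that evaluates the recurrence directly and compares it with n(log n)^3; the ratio should settle near the 1/3 constant from the closed form above:

import math

def T(n, memo={1: 1.0}):
    # T(n) = 2*T(n/2) + n*(log2 n)^2 with T(1) = 1, for n a power of 2
    if n not in memo:
        memo[n] = 2 * T(n // 2) + n * math.log2(n) ** 2
    return memo[n]

for k in (10, 14, 18, 22):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n) ** 3))   # tends towards 1/3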

Computational complexity of unknown probability

Assume that there is a job and many workers are available.
The following code may be a bad optimization idea, but it is just for analyzing the complexity.
A is a set of N workers
while (A is not empty)
{
B=empty set
foreach a1 in A
{
foreach a2 in A
{
b= merge(a1, a2)
if (b works better than a1 and b works better than a2)
add b to B
}
}
A=B
}
The problem is that the probability of "b works better than a1 and a2" is unknown.
So, how do I estimate the time complexity of the above code?
For the two inner loops, the complexity is independent of the probability of "b works better than a1 and a2".
However, the code seems a bit broken, as I don't see a guaranteed exit from the while loop: it only stops if B ends up empty after a pass.
Not considering the while loop, the time complexity of one pass will be
O(|A|^2) = O(N^2).
If each pass shrinks A by one element, the recurrence for the number of passes would be
T(N) = T(N-1) + C.
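To make the answer above concrete, here is a runnable Python version of the pseudocode. merge and works_better are placeholder functions I made up purely to have something to execute (the question does not define them); it counts how many times the innermost body runs:

import random

def merge(a1, a2):
    # placeholder: a "worker" here is just a number, and merging takes their average
    return (a1 + a2) / 2

def works_better(b, a):
    # placeholder quality test; the real predicate (and its probability) is unknown
    return b > a

def run(A):
    passes = 0
    inner = 0
    while A:                          # exits only when B ends up empty
        passes += 1
        B = []
        for a1 in A:
            for a2 in A:              # the nested loops cost |A|^2 regardless of the probability
                inner += 1
                b = merge(a1, a2)
                if works_better(b, a1) and works_better(b, a2):
                    B.append(b)
        A = B
    return passes, inner

random.seed(0)
print(run([random.random() for _ in range(20)]))   # with these placeholders: 1 pass, 20^2 = 400 inner iterations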

Computational complexity of simple algorithm

I have a simple algorithm, something like:
h = SHA1(message)
r = a^b mod p
r = h * r mod p
l = Str1 || Str2
if ( l == r)
return success
else
return false
Now I want to compute its complexity, but I don't know how to do it. I don't know, for example, how the multiplication is done, so I don't understand how to approach it. Should I assume the worst case O(n^2), the best case, or the average case? Maybe I must look at it from another angle?
Additionally, the numbers are kept as byte arrays.
If you want to know the complexity of this algorithm, you just have to add up the complexities of the operations you use.
sha1(message) has a complexity depending on the length m of the message, so let's say poly(m), since I don't know the exact complexity of SHA-1.
a^b mod p can be done in O(log b) multiplications (see the square-and-multiply sketch at the end of this answer).
h * r mod p is exactly one multiplication.
Str1 || Str2 — is this bitwise or? If yes, it will take O(s), where s is the length of Str1.
l == r will take as many comparisons as the byte array is long, which is also s.
When the numbers are really big, they cannot be multiplied in one processor step, so the cost of one multiplication will be in O(log p), since log p is the length of the numbers.
All together you get O(poly(m) + log(b) ⋅ log(p) + s).
Notice: if the length of the numbers (log(b) and log(p)) never changes, this part will be constant. The same holds for s.
You said the numbers are 256 bits long, so the complexity is only O(poly(m)), which is the complexity of the SHA-1 algorithm.
Notice: if you have an algorithm of any complexity and you only ever use input of a fixed length, the complexity will always be constant. Complexity is a tool to see how the runtime will grow if the input grows; if the input does not grow, the runtime will not either.
If your input always has a fixed length, then you are more interested in the performance of the implementation of the algorithm.
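As a concrete illustration of the "O(log b) multiplications" point above, here is a small Python sketch of square-and-multiply. The names are my own, and in practice you would simply call Python's built-in pow(a, b, p):

def pow_mod(a, b, p):
    # square-and-multiply: one squaring per bit of b, plus one extra
    # multiplication per set bit, i.e. O(log b) modular multiplications
    result = 1
    a %= p
    mults = 0
    while b > 0:
        if b & 1:
            result = (result * a) % p
            mults += 1
        a = (a * a) % p
        mults += 1
        b >>= 1
    return result, mults

a, b, p = 7, 2 ** 255 + 19, 2 ** 255 - 19
r, mults = pow_mod(a, b, p)
print(mults)                  # about 260 multiplications for a 256-bit exponent
print(r == pow(a, b, p))      # cross-check against the built-in: True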

Algorithms additions

For the algorithms below I need help with the following.
Algorithm Sum(m, n)
//Input: A positive integer n and another positive integer m ≤ n
//Output: ?
sum = 0
for i=m to n do
for j=1 to i do
sum = sum + 1
end for j
end for i
return sum
I need help figuring out what it computes, and what the formula is for the total number of additions (executions of sum = sum + 1).
So far I have: the algorithm computes the sum of all the positive integers between m and n, including m and n.
The formula for the number of additions is:
m + (m+1) + … + n
I don't quite get your questions... it seems you ask something but you also already provide the answers yourself. Anyway, here is my answer to the questions.
For Q1, it seems you are asking for the output and the total number of iterations, which is the summation m + (m+1) + … + n = (n+m)(n-m+1)/2.
For Q2, it seems you are asking how many comparisons are performed, which is n-1 times.
To solve the recurrence T(n) = aT(n-1) + c, where a and c are constants (with a = 1 here),
repeated substitution with n-1, n-2, n-3, ... down to 1 shows that T(n) = O(n).
PS: If it is homework (and it may well be, as you already have your own answers), I strongly advise you to trace through some specific cases for Q1. For Q2 you should try to understand several methods for working out recurrence relations: the substitution method handles this kind of easy relation, while many others need the master theorem.
Also, you should make sure you understand why Q2's complexity is actually the same as a normal naive iterative for loop.
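Here is a direct Python transcription of the pseudocode (the function and variable names are mine), together with a check against the closed form (n+m)(n-m+1)/2 mentioned above:

def Sum(m, n):
    # executes sum = sum + 1 once for every pair (i, j) with m <= i <= n and 1 <= j <= i,
    # so it returns m + (m+1) + ... + n
    total = 0
    for i in range(m, n + 1):
        for j in range(1, i + 1):
            total = total + 1
    return total

m, n = 3, 10
print(Sum(m, n))                      # 52
print((n + m) * (n - m + 1) // 2)     # closed form: also 52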

Pseudo-polynomial or fast solution for the relaxed subset-sum

I have an array A of positive integers [a0, a1, a2, ..., an] and a positive number K. I need to find all (or almost all) pairs of subsets U and V of array A such that:
the sum of all elements in U is less than or equal to K,
the sum of all elements in V is less than or equal to K,
U and V together need not contain all elements of the original array A,
all elements of U must come before all elements of V in the initial array A. For example, if we choose U = [a1, a3, a5], then we can start building V only from a6; it is not allowed to use a0, a2, or a4 in this case.
I was able to find a DP solution, which is O(N^2 * K^2) (where N is the total number of elements in A). Although N and K are small (< 100), it is still too slow.
I'm looking for an approximation algorithm or a pseudo-polynomial dynamic programming algorithm. The bin packing problem looks similar to mine, but I'm not sure how I can apply it to my constraints...
Please advise.
EDIT: each number has an upper bound equal to 50.
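For what it's worth, here is a small Python sketch of one way to organize a baseline DP around the "all of U before all of V" constraint. This is only my illustration of the constraints, not the faster algorithm the question asks for: for every split point i, U is drawn from A[:i] and V from A[i:], and a standard subset-sum table records which sums ≤ K are reachable on each side.

def reachable_sums(values, K):
    # classic 0/1 subset-sum table: can[s] is True if some subset sums to exactly s <= K
    can = [True] + [False] * K
    for v in values:
        for s in range(K, v - 1, -1):
            if can[s - v]:
                can[s] = True
    return can

def feasible_splits(A, K):
    # for each split point i, report the achievable sums <= K on the left (for U)
    # and on the right (for V); roughly O(N^2 * K) time overall
    result = []
    for i in range(len(A) + 1):
        left = reachable_sums(A[:i], K)
        right = reachable_sums(A[i:], K)
        result.append((i,
                       [s for s in range(K + 1) if left[s]],
                       [s for s in range(K + 1) if right[s]]))
    return result

for i, left, right in feasible_splits([3, 1, 4, 1, 5], 6):
    print(i, left, right)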
