For the algorithm below I need help with the following.
Algorithm Sum(m, n)
// Input: A positive integer n and another positive integer m ≤ n
// Output: ?
    sum = 0
    for i = m to n do
        for j = 1 to i do
            sum = sum + 1
        end for j
    end for i
    return sum
I need help figuring out what it computes, and what the formula is for the total number of additions sum = sum + 1 performed.
So far I have: the algorithm computes the sum of all the positive integers between m and n, including m and n.
The formula for the number of additions is
m + (m+1) + ... + n
I don't quite get your questions... it seems you ask something but also provide the answers yourself already. Anyway, here's my answer to the questions...
For Q1, it seems you are asking for the output and the total number of iterations, which is the sum m + (m+1) + ... + n = (n+m)(n-m+1)/2.
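As a quick sanity check, here is a small Python sketch (my own illustration, not part of your pseudocode) that simply counts the inner-loop additions and compares them with that closed form:

def count_additions(m, n):
    # Count how many times "sum = sum + 1" executes for a given m <= n.
    count = 0
    for i in range(m, n + 1):
        for j in range(1, i + 1):
            count += 1
    return count

for m, n in [(1, 1), (2, 5), (3, 10)]:
    closed_form = (n + m) * (n - m + 1) // 2
    assert count_additions(m, n) == closed_form   # e.g. (2, 5) gives 14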
For Q2, it seems you are also asking how many times the comparison is performed, which is n-1 times.
To solve a recurrence of the form T(n) = T(n-1) + c, where c is a constant (a single recursive call per level), repeatedly substitute n-2, n-3, ... down to 1, and you will find that T(n) = O(n).
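Writing the substitution out explicitly:
T(n) = T(n-1) + c = T(n-2) + 2c = T(n-3) + 3c = ... = T(1) + (n-1)*c = O(n)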
PS: If this is homework (maybe it is, since you seem to have your own answer already), I strongly advise you to work through some specific cases for Q1. For Q2, you should try to understand several methods for working out recurrence relations: the substitution method can solve this kind of easy relation, while many others need the master theorem.
You should also make sure you understand why Q2's complexity is the same as that of a plain iterative for loop.
Related
I know that the time complexity of a recursive function that divides its input by 2 is log n base 2. I have come across some interesting scenarios in the answer at
https://stackoverflow.com/a/42038565/8169857
Kindly help me understand the logic behind the scenarios in that answer, regarding the derivation of the formula.
It comes back to the recursion tree. Why is dividing by 2 O(log2(n))? Because if n = 2^k, you have to divide k times to reach 1, so the number of computations is at most k = log2(n) comparisons. Now suppose each step instead reduces the input to (c-1)/c of its size. Then, if n = (c/(c-1))^k, we need k = log_{c/(c-1)}(n) operations to reach 1.
Now, since for any constant c > 1 the limit of log2(n) / log_{c/(c-1)}(n) as n goes to infinity is a constant greater than zero, log_{c/(c-1)}(n) = Θ(log2(n)). Indeed, you can say this for any constants a, b > 1: log_a(n) = Θ(log_b(n)). This completes the proof.
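To make that concrete, here is a small Python sketch (my own illustration; it assumes each step multiplies the remaining size by a constant factor less than 1) that counts the steps needed until the size drops below 1; the two counts differ only by a constant factor:

def steps_to_one(n, factor):
    # Count iterations of n <- n * factor until n drops below 1 (factor < 1).
    steps = 0
    while n >= 1:
        n *= factor
        steps += 1
    return steps

n = 1_000_000
halving = steps_to_one(n, 1 / 2)       # roughly log2(n) steps
two_thirds = steps_to_one(n, 2 / 3)    # c = 3, i.e. factor (c-1)/c = 2/3
print(halving, two_thirds, two_thirds / halving)   # the ratio approaches a constant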
How do I prove that this algorithm is O(log log n)?
i <-- 2
while i < n
    i <-- i*i
Well, I believe we should first start with n / 2^k < 1, but that will yield O(log n). Any ideas?
I want to look at this in a simple way: what happens after one iteration, after two iterations, and after k iterations? I think this way I'll be able to understand better how to compute this correctly. What do you think about this approach? I'm new to this, so excuse me.
Let us use the name A for the presented algorithm. Let us further assume that the input variable is n.
Then, strictly speaking, A is not in the runtime complexity class O(log log n). A must be in Ω(n), i.e., in terms of runtime complexity it is at least linear. Why? The statement i*i is a multiplication whose operands depend on i, which in turn depends on n. A naive multiplication approach might require quadratic runtime complexity. More sophisticated approaches will reduce the exponent, but not below linear in terms of n.
For the sake of completeness, the comparison < is also a linear operation.
For the purpose of the question, we could assume that multiplication and comparison are done in constant time. Then we can formulate the question: how often do we have to apply the constant-time operations < and * until A terminates for a given n?
Simply speaking, the squaring reduces the remaining effort logarithmically, and applying it iteratively leads to a further logarithmic reduction. How can we show this? Thanks to the simple structure of A, we can transform A into an equation that we can solve directly.
A repeatedly squares i, so after k iterations it has computed 2^(2^k). When is 2^(2^k) = n? To solve this for k, we apply the logarithm (base 2) twice, i.e., ignoring the bases, we get k = log log n. The < can be ignored due to the O notation.
To answer the very last part of the question, we can also look at examples. Note the value of i at the end of the while-loop body for each iteration of the loop:
1: i = 4 = 2^2 = 2^(2^1)
2: i = 16 = 4*4 = (2^2)*(2^2) = 2^(2^2)
3: i = 256 = 16*16 = 4*4*4*4 = (2^2)*(2^2)*(2^2)*(2^2) = 2^(2^3)
4: i = 65536 = 256*256 = 16*16*16*16 = ... = 2^(2^4)
...
k: i = ... = 2^(2^k)
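A quick empirical check of that count (a Python sketch that runs the loop from the question and compares the iteration count against ceil(log2(log2(n)))):

import math

def iterations(n):
    # Run the loop from the question and count how many times the body executes.
    i, count = 2, 0
    while i < n:
        i = i * i
        count += 1
    return count

for n in [10, 1000, 10**6, 10**12]:
    print(n, iterations(n), math.ceil(math.log2(math.log2(n))))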
Here is an algorithm for finding the kth smallest number in an n-element array, using the partition routine from Quicksort.
small(a,i,j,k)
{
    if(i==j) return(a[i]);
    else
    {
        m=partition(a,i,j);
        if(m==k) return(a[m]);
        else
        {
            if(m>k) return(small(a,i,m-1,k));   // kth smallest lies in the left part
            else return(small(a,m+1,j,k));      // kth smallest lies in the right part
        }
    }
}
Here i, j are the starting and ending indices of the array (j - i + 1 = n, the number of elements in the array), and k indicates the kth smallest number to be found.
I want to know the best case and average case of the above algorithm, and briefly how they are derived. I know we should not count the termination condition in the best case, and also that the partition algorithm takes O(n). I do not want asymptotic notation but an exact mathematical result, if possible.
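For reference, here is a runnable Python sketch of what I mean; the Lomuto-style partition is just one standard choice (I haven't shown my partition routine above), and k is treated as a 0-based index into the array:

def partition(a, i, j):
    # Lomuto partition using a[j] as the pivot; returns the pivot's final index.
    pivot = a[j]
    m = i
    for p in range(i, j):
        if a[p] <= pivot:
            a[p], a[m] = a[m], a[p]
            m += 1
    a[m], a[j] = a[j], a[m]
    return m

def small(a, i, j, k):
    # Returns the element that would sit at index k if a[i..j] were sorted.
    if i == j:
        return a[i]
    m = partition(a, i, j)
    if m == k:
        return a[m]
    elif m > k:
        return small(a, i, m - 1, k)
    else:
        return small(a, m + 1, j, k)

print(small([7, 2, 9, 5, 1], 0, 4, 2))   # 3rd smallest (k = 2) is 5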
First of all, I'm assuming the array is sorted - something you didn't mention - because that code wouldn't otherwise work. And, well, this looks to me like a regular binary search.
Anyway...
The best case scenario is when either the array is one element long (you return immediately because i == j), or, for large values of n, when the middle position m happens to be the same as k; in that case no recursive calls are made and it returns immediately as well. That makes it O(1) in the best case.
For the general case, consider that T(n) denotes the time taken to solve a problem of size n using your algorithm. We know that:
T(1) = c
T(n) = T(n/2) + c
Where c is a constant time operation (for example, the time to compare if i is the same as j, etc.). The general idea is that to solve a problem of size n, we consume some constant time c (to decide if m == k, if m > k, to calculate m, etc.), and then we consume the time taken to solve a problem of half the size.
Expanding the recurrence can help you derive a general formula, although it is pretty intuitive that this is O(log(n)):
T(n) = T(n/2) + c = T(n/4) + c + c = T(n/8) + c + c + c = ... = T(1) + c*log(n) = c*(log(n) + 1)
That should be the exact mathematical result. The algorithm runs in O(log(n)) time. An average case analysis is harder because you need to know the conditions in which the algorithm will be used. What is the typical size of the array? The typical value of k? What is the most likely position for k in the array? If it's in the middle, for example, the average case may be O(1). It really depends on how you use this.
This is a question from Introduction to Algorithms by Cormen, but it isn't a homework problem; it's self-study.
There is an array of length n. Consider a modification to merge sort in which n/k sublists, each of length k, are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined.
The relationship between n and k isn't known. The length of the array is n. n/k sublists of length k means (n/k) * k = n elements of the array. Hence k is simply the limit at which the splitting of the array for merge sort is stopped and insertion sort is used instead, because of its smaller constant factors.
I was able to do the mathematical proof that the modified algorithm works in Θ(n*k + n*lg(n/k)) worst-case time. Now the book goes on to ask:
find the largest value of k as a function of n for which this modified algorithm has the same running time as standard merge sort, in terms of Θ notation. How should we choose k in practice?
This got me thinking for a long time, but I couldn't come up with anything. I tried to solve
n*k + n*lg(n/k) = n*lg(n) for a relationship. I thought that finding an equality between the two running times would give me the limit, and anything greater could then be checked by simple trial and error.
I solved it like this
n k + n lg(n/k) = n lg(n)
k + lg(n/k) = lg(n)
lg(2^k) + lg(n/k) = lg(n)
(2^k * n)/k = n
2^k = k
But it gave me 2^k = k, which doesn't show any relationship. What is the relationship? I think I might have taken the wrong equation for finding the relationship.
I can implement the algorithm, and I suppose adding an if (length_Array < k) statement in the merge_sort function here (GitHub link of merge sort implementation) to call insertion sort would be good enough. But how do I choose k in real life?
Well, this is a mathematical minimization problem, and to solve it, we need some basic calculus.
We need to find the value of k for which d[n*k + n*lg(n/k)] / dk == 0.
We should also check for the edge cases, which are k == n, and k == 1.
The candidate value of k that gives the minimal result for n*k + n*lg(n/k) over the required range is the optimal value of k.
Addendum: solving the derivative equation:
d[n*k + n*lg(n/k)] / dk = d[n*k + nlg(n) - nlg(k)] / dk
= n + 0 - n*1/k = n - n/k
=>
n - n/k = 0 => n = n/k => 1/k = 1 => k = 1
Now, we have the candidates: k=n, k=1. For k=n we get O(n^2), thus we conclude optimal k is k == 1.
Note that we took the derivative of the function from the big Theta, and not of the exact complexity function with the required constants.
Doing this on the exact complexity function, with all the constants, might yield a slightly different end result - but the way to solve it is pretty much the same, only taking the derivative of a different function.
Maybe k should be lg(n).
Theta(nk + nlog(n/k)) has two terms; with the assumption that k >= 1, the second term is at most nlog(n).
As long as k is at most lg(n), the first term nk is also at most nlg(n), so the whole result stays Theta(nlog(n)); if k grows faster than lg(n), the nk term dominates and exceeds nlog(n). So the largest choice is k = lg(n).
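As for choosing k in practice: the usual approach is exactly what the question suggests, switch to insertion sort below a cutoff and pick that cutoff by timing the hybrid on representative inputs, since the crossover point is machine and data dependent. A minimal Python sketch (the default cutoff of 32 below is only an illustrative guess, not a derived constant):

def insertion_sort(a, lo, hi):
    # Sort a[lo..hi] in place.
    for p in range(lo + 1, hi + 1):
        key = a[p]
        q = p - 1
        while q >= lo and a[q] > key:
            a[q + 1] = a[q]
            q -= 1
        a[q + 1] = key

def merge_sort(a, lo, hi, k=32):
    # Hybrid merge sort: ranges of length <= k are handled by insertion sort.
    if hi - lo + 1 <= k:
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    merge_sort(a, lo, mid, k)
    merge_sort(a, mid + 1, hi, k)
    merged = []
    i, j = lo, mid + 1
    while i <= mid and j <= hi:               # standard merge of the two halves
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid + 1])
    merged.extend(a[j:hi + 1])
    a[lo:hi + 1] = merged

data = [5, 3, 8, 1, 9, 2, 7]
merge_sort(data, 0, len(data) - 1, k=4)
print(data)   # [1, 2, 3, 5, 7, 8, 9]

Typical cutoffs land somewhere in the tens of elements, but the only reliable way to pick one is to measure on your own machine and data.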
Given an array A of integers, find any 3 of them that sum to any given T.
I saw this on some online post, which claims it has a O(NlogN) solution.
For 2 numbers, I know a hashtable can help achieve O(N), but for 3 numbers I cannot find one.
I also feel this problem sounds similar to some hard problem, but I cannot recall the name and therefore cannot google for it. (The brute force is obviously O(N^3), and with the solution for 2 numbers it is really O(N^2).)
It does not really solve anything in the real world; it just bugs me.
Any idea?
I think your problem is equivalent to the 3SUM problem.
For the 3SUM problem, no solution substantially better than O(n^2) is known; whether one exists is a well-known open question. You can refer to http://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science
The 2SUM problem can be solved in O(n lg n) time.
First sort the array, which takes at most O(n lg n) operations. Then, at the ith iteration, pick the element a[i] and look for the element -a[i] (or T - a[i] for a general target T) in the remaining part of the array (i.e., from i+1 to n-1); this search can be done with binary search, which takes at most lg n time. So overall it takes O(n lg n) operations.
But the 3SUM problem is not known to be solvable in O(n lg n) time. We can get it down to O(n^2).
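A Python sketch of that 2SUM approach (sort, then binary-search for the complement; searching for a general target T rather than zero is my adjustment to match the original question):

from bisect import bisect_left

def two_sum(a, T):
    # Return a pair (x, y) from a with x + y == T, or None. O(n log n): sort plus binary search.
    a = sorted(a)
    for i, x in enumerate(a):
        need = T - x
        j = bisect_left(a, need, i + 1)   # search only in a[i+1:]
        if j < len(a) and a[j] == need:
            return x, need
    return None

print(two_sum([8, -2, 3, 5, 11], 9))   # (-2, 11)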
Sounds like a homework question...
If you can find two values that sum to T and you want to extend the search to three values, couldn't you, for each value M in the set, look for two values that sum to (T - M)? If you could find two values that sum to a specific value in O(log N) time, then the whole thing would be O(N log N).
I think this is just the subset sum problem
If so, it is NP-Complete.
EDIT: Never mind, it is 3sum, as stated in another answer.
Yes - for the special case where the inputs are bounded integers, 3SUM can be solved in about O(n + u log u) time using the Fast Fourier Transform (FFT), where u bounds the absolute values. For the general case, here is the standard quadratic algorithm:
Lifted directly from https://en.wikipedia.org/wiki/3SUM
sort(S);
for i = 0 to n-3 do
    a = S[i];
    start = i+1;
    end = n-1;
    while (start < end) do
        b = S[start];
        c = S[end];
        if (a+b+c == 0) then
            output a, b, c;
            // Continue search for all triplet combinations summing to zero.
            start = start + 1;
            end = end - 1;
        else if (a+b+c > 0) then
            end = end - 1;
        else
            start = start + 1;
        end
    end
end
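The pseudocode above looks for triples summing to zero; for the original question's arbitrary target T, the same sort-and-two-pointers idea works unchanged. A Python sketch (still O(n^2)):

def three_sum(a, T):
    # Return a triple of elements summing to T, or None. O(n^2) after an O(n log n) sort.
    s = sorted(a)
    n = len(s)
    for i in range(n - 2):
        start, end = i + 1, n - 1
        while start < end:
            total = s[i] + s[start] + s[end]
            if total == T:
                return s[i], s[start], s[end]
            elif total > T:
                end -= 1
            else:
                start += 1
    return None

print(three_sum([1, 4, 45, 6, 10, 8], 22))   # (4, 8, 10)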