Complexity of a recursive function with a loop - algorithm

I have a recursive function working on a list. The function contains a loop in which it calls itself, and it bottoms out in a call to another function g. Its structure is as follows; to simplify the issue, we can assume that l is always a list without duplicate elements.
let rec f l = match l with
  | [] -> g ()
  | _ ->
    List.fold_left
      (fun acc x ->
        let lr = List.filter (fun a -> a <> x) l in
        acc + f lr)
      1 l
I am not sure how to express the complexity of this function in terms of List.length l and the complexity of g.
I think it is proportional to the complexity of g times the factorial of List.length l; could anyone confirm?

Since you assume that the list l does not contain any duplicates, what this function does is compute all sublists that have one less element than the original list and call itself recursively on each of them. So the number of times g is called when starting with a list of size n, call it c(n), satisfies c(n) = n · c(n-1) with c(0) = 1, and therefore c(n) = n!.
Now, let's consider everything else the function has to do. The amount of work at each step of the recursion includes:
For each element in the original list, constructing a new list with one less element. This is a total amount of work equal to n² per step.
Once the result of the recursive call is known, adding it to an accumulator. This is a total amount of work equal to n (this part can be ignored, since the filter is more costly).
So, since we know how many times each recursive step will be called (based on our previous analysis), the total amount of non-g-related work is: t(n) = n² + n·(n-1)² + n·(n-1)·(n-2)² + … + n!
This formula looks like a pain, but in fact t(n) / n! has a finite non-zero limit as n increases (it is the sum of the (k+1) / k! with 0 ≤ k < n), and so t(n) = Θ(n!).
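As a sanity check of the n! count, here is a small OCaml sketch (the stub for g and the test list are mine, for illustration) that counts how many times g is reached:

(* Hypothetical stub: g just gets counted; its real cost is irrelevant here. *)
let g_calls = ref 0
let g () = incr g_calls; 0

let rec f l = match l with
  | [] -> g ()
  | _ ->
    List.fold_left
      (fun acc x ->
        let lr = List.filter (fun a -> a <> x) l in
        acc + f lr)
      1 l

let () =
  ignore (f [1; 2; 3; 4]);
  (* a 4-element list should reach g exactly 4! = 24 times *)
  Printf.printf "g was called %d times\n" !g_calls

Running this prints 24, i.e. 4!.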

Okay. I don't mean to seem mistrustful. This really does look like functional programming homework, because it's not very practical code.
Let F(n) be the number of comparisons plus the number of additions for an input of length n. And let G be the running time of g. Since g doesn't take any arguments, G is constant. We are just counting the number of times it's called.
The fold will execute its function n times. Each execution will call filter to do n comparisons and remove exactly one element from its input each time, then recursively call f on this shortened list and do one addition. So the total cost is
F(n) = n * (n + F(n - 1) + 1) [ if n > 0 ]
= G [ otherwise ]
The n > 0 case expands to
F(n) = n * F(n - 1) + n^2 + n
Unrolling the recurrence gives F(n) = n! * G + Θ(n!) (the non-G terms sum to Θ(n!), matching the other answer), so the total cost is Θ(n! * G), as you proposed.
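To see the n! * G growth numerically, here is a small OCaml sketch that evaluates the recurrence directly (the value chosen for G is an arbitrary placeholder):

let g_cost = 5  (* arbitrary placeholder for G *)

let rec fact n = if n = 0 then 1 else n * fact (n - 1)

(* F(0) = G;  F(n) = n * (n + F(n-1) + 1) *)
let rec cost n = if n = 0 then g_cost else n * (n + cost (n - 1) + 1)

let () =
  (* the ratio F(n) / (n! * G) approaches a constant, illustrating Θ(n! * G) *)
  for n = 1 to 8 do
    Printf.printf "n=%d  F(n)=%d  F(n)/(n! * G)=%.3f\n"
      n (cost n) (float_of_int (cost n) /. float_of_int (fact n * g_cost))
  done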
I hope this is helpful.

Related

Time complexity of a recursive function where n size reduces randomly

I created the following pseudocode, but I am not sure how to calculate its complexity:
(Pseudocode)
MyFunction(Q, L)
    if (Q is empty) return
    M = empty queue
    NM = empty queue
    M.Enqueue(Q.Dequeue())
    while (Q is not empty)
        pt = Q.Dequeue()
        if (pt.y > M.peek().y) M.Enqueue(pt)
        else NM.Enqueue(pt)
    L.add(M)
    if (NM is not empty) MyFunction(NM, L)
    return L
MyFunction receives a set Q of n points and a list L in which we will save k subsets of Q (1<=k<=n). When we calculate the first subset we go through all the n points of Q and select the ones that belong to the first subset. For the second subset we go through all the n points of Q except those that are already in the first subset and so on.
So, with every recursive call the number of points is reduced by some integer x until it reaches 0. This x can differ from one recursive call to the next (it can be any value between 1 and n, n being the current number of points).
What would be the complexity of my algorithm then?
I was thinking that my recurrence relation would be something like this:
T(0) = 1
T(n) = T(n-x) + an
Is this correct? And if so, how can I solve it?
Without any information on the distribution of points in Q, we cannot know how they will be dispatched to the M or NM queues.
However, it is easy to calculate the worst-case complexity of your algorithm. To calculate this, we assume that at each recursive call, all points in Q will end up in NM except the one that is being added to M before entering the loop. With this assumption, x becomes 1 in your recurrence relation. And you end up having O(n^2).
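To make the last step explicit, unrolling the recurrence with x = 1 gives:

T(n) = T(n-1) + an
     = T(n-2) + a(n-1) + an
     = ...
     = T(0) + a(1 + 2 + ... + n)
     = 1 + a * n(n+1)/2
     = O(n^2)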

How to solve the equation sum{max(a_i, x)}=y with variable x? Is there any algorithm with O(n) time complexity?

I am trying to find an algorithm to solve the following equation:
∑ max(a_i, x) = y
in which the a_i are constants and x is the variable.
I can find an algorithm with O(n log n) time complexity as follows:
First of all, sort the a_i in O(n log n) time, and arrange the intervals
(−∞, a_0), (a_0, a_1), …, (a_i, a_{i+1}), …, (a_{n−1}, a_n), (a_n, ∞)
Then, for each interval, assume x belongs to that interval and solve the equation. This yields a candidate x̂; we then test whether x̂ actually lies in the interval. If it does, we assign x̂ to x and return x. Otherwise, we try the next interval until we find the solution.
The above method is an O(n log n) algorithm due to the sort. Given the nature of the problem, I expect an algorithm with O(n) time complexity to exist. Is there any reference for this problem?
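For reference, here is a minimal OCaml sketch of this sort-based O(n log n) method (the function name, the float representation, and the no-solution guard are my own choices):

(* solve : float list -> float -> float
   Finds x such that sum_i max(a_i, x) = y, assuming y >= sum of the a_i. *)
let solve a y =
  let arr = Array.of_list (List.sort compare a) in
  let n = Array.length arr in
  (* suffix.(i) = arr.(i) + ... + arr.(n-1) *)
  let suffix = Array.make (n + 1) 0.0 in
  for i = n - 1 downto 0 do
    suffix.(i) <- suffix.(i + 1) +. arr.(i)
  done;
  (* try the interval [arr.(i-1), arr.(i)) for i = 1, ..., n;
     if x lies there, exactly i terms contribute x *)
  let rec try_interval i =
    if i > n then invalid_arg "no solution: y < sum of the a_i"
    else
      let x = (y -. suffix.(i)) /. float_of_int i in
      if x >= arr.(i - 1) && (i = n || x < arr.(i)) then x
      else try_interval (i + 1)
  in
  try_interval 1

For example, solve [1.; 2.; 3.] 10. returns 10/3 (about 3.33): with x >= 3 all three terms equal x, so 3x = 10.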
First of all, this only has a solution if the sum of all a_i is smaller than y. You should check this first, because the algorithm below depends on this property.
Assume that we have chosen some pivot p from all the a_i and want to calculate the x that corresponds to the interval [p, q), where q is the next larger a_i. This is:

x = (y - S) / n

where S is the sum of all a_i greater than p and n is the number of a_i smaller than or equal to p. If you move p to the next larger a_i, x changes as follows:

x' = (y - S + p') / (n + 1)

where p' is the new pivot and n is the old number of a_i that are smaller than or equal to p. Under the assumption that the sum of all a_i is smaller than y, this clearly leads to a decrease of x. Similarly, if we choose a smaller p, x is increased.
Coming back to the first equation, we can observe the following: if the computed x is smaller than p, we should choose a smaller p. If x is greater than the smallest of the a_i greater than p, we should choose a larger p. In every other case, we have found the right x.
This can be utilized in a quickselect procedure. MvG's comment put me onto this track; all credit for the quickselect idea goes to him. Here is some pseudocode (a modified version of the one on Wikipedia):
findX(list, y)
    left := 0
    right := length(list) - 1
    sumGreater := 0    // the sum of all a_i greater than the current interval
    numSmaller := 0    // the number of all a_i smaller than the current interval
    minGreater := inf  // the minimum of all a_i greater than the current interval
    loop
        if left = right
            return (y - sumGreater) / (numSmaller + 1)
        pivotIndex := medianOfMedians(list, left, right)
        // the partition function will also sum the elements larger than the pivot,
        // count the elements smaller than the pivot, and find the minimum of the
        // larger elements
        (pivotIndex, partialSumGreater, partialNumSmaller, partialMinGreater)
            := partition(list, left, right, pivotIndex)
        x := (y - sumGreater - partialSumGreater) / (numSmaller + partialNumSmaller + 1)
        if (x >= list[pivotIndex] && x < min(partialMinGreater, minGreater))
            return x
        else if x < list[pivotIndex]
            right := pivotIndex - 1
            minGreater := list[pivotIndex]
            sumGreater += partialSumGreater + list[pivotIndex]
        else
            left := pivotIndex + 1
            numSmaller += partialNumSmaller + 1
The key idea is that the partitioning function gathers some additional statistics. This does not change the time complexity of the partitioning function because it requires O(n) additional operations, leaving a total time complexity of O(n) for the partitioning function. The medianOfMedians function is also linear in time. The remaining operations in the loop are constant time. Assuming that the median of medians yields good pivots, the total time of the entire algorithm is approximately O(n + n/2 + n/4 + n/8 ...) = O(n).
Since comments might get deleted, I'm turning my own comments into a coherent answer. Contrary to the original question, I'm using indices 1 through n, avoiding the a0 originally used. So this is consistent one-based indexing using inclusive indices.
Assume for the moment that the b_i are the coefficients from your input, but in sorted order, so b_i ≤ b_{i+1}. As you essentially already wrote, if b_i ≤ x ≤ b_{i+1} then the result is i·x + b_{i+1} + ⋯ + b_n, since the first i terms will use the x and the other terms will use the b_j. Solving for x you get x = (y − b_{i+1} − ⋯ − b_n) / i, and putting that back into your inequality you have i·b_i ≤ y − b_{i+1} − ⋯ − b_n ≤ i·b_{i+1}. Concentrating on one of the inequalities, you want the largest i such that
i·b_i ≤ y − b_{i+1} − ⋯ − b_n       (subsequently called "the inequality")
But in order to make this work on the unsorted a_i, you'd need something similar to the median of medians. That is an algorithm which achieves guaranteed O(n) worst-case behavior for the problem of selecting a median, where the typical quickselect would take O(n²) in the worst case, although it usually does quite well in practice.
Actually your problem is not that different from quickselect. You can pick a pivot coefficient, and split the remainder into larger and smaller values. Then you evaluate the inequality for the pivot element. If it is satisfied, you recurse into the list of larger elements, otherwise you recurse into the list of smaller elements, until at some point you have two adjacent elements, one which satisfies the inequality and one which does not.
This is O(n²) in the worst case, since you might need O(n) recursive calls, each of them taking O(n) time to process its input, just as plain quickselect itself is suboptimal in the worst case. The median of medians shows that the selection problem can indeed be solved in O(n). So we either need to find a similar solution here, or reformulate this problem in terms of finding the median, or write some algorithm which makes use of the median in a reasonable way.
Actually Nico Schertler found a way to achieve that last option: Take the algorithm I outlined above, but choose the pivot element to be the median. That way you can guarantee that each recursive call will process at most half as much input as the previous call. Since the median of medians itself is O(n) this can be done without exceeding the O(n) bound for each recursive call.
So in pseudocode it's like this (using inclusive indices throughout):
# f: Process the whole problem with coefficients a_1 through a_n
f(y, a, n) := begin
    if y < (sum of a_i for i from 1 through n):  # O(n)
        throw Error "Cannot satisfy equation"    # Or omit check and risk division by zero
    return g(a, 1, n, y)                         # O(n)
end

# g: Recursively process part of the problem, namely a_l through a_r
# Precondition: we know the inequality holds for i = l - 1 and fails for i = r + 1
# a: the array as provided to f; will get modified in place
# l: left index (inclusive)
# r: right index (inclusive)
# y: (original y) - (sum of a_j for j from r + 1 through n)
g(a, l, r, y) := begin                 # process a_l through a_r                       O(r-l)
    if r < l:                          # inequality holds at r but fails at l          O(1)
        return y / r                   # compute x for the case i = r                  O(1)
    m = median(a, l, r)                # computed using median of medians              O(r-l)
    i = floor((l + r) / 2)             # index of the median, with the same tie breaks O(1)
    partition(a, l, r, m)              # so a_l…a_(i-1) ≤ a_i = m ≤ a_(i+1)…a_r        O(r-l)
    rhs = y - (sum of a_j for j from i + 1 through r)                                # O((r-l)/2)
    if i * a_i ≤ rhs:                  # inequality holds, check larger i
        return g(a, i + 1, r, y)       # recurse into the right half                   O((r-l)/2)
    else:                              # inequality fails, check smaller i
        return g(a, l, i - 1, rhs - m) # recurse into the left half                    O((r-l)/2)
end

Algorithm: constrained XOR of numbers within a range

Let us say we are given a number n.
We need to find the number of values S ^ (S+n) lying in the range [L, R].
(Where S is any non-negative integer and ^ is the bitwise xor operator).
I can easily do this if n is a power of two (such values have a very useful pattern).
I am not sure how to solve this for any general n.
Any suggestions?
EDIT:
n is also a non-negative integer.
n, L, R are all less than 10^18.
This was a programming question in a practice test I took some time back; I just remembered it after seeing a similar question on StackOverflow today.
EDIT 2:
Explaining with an example,
say n = 1.
Then we know that S ^ (S + 1) will always have a binary representation of all ones, e.g. 1, 3, 7, ...
So solving this case is easy: we just have to count the number of such values within the range [L, R], which is quite simple.
Similar methods work for any n that is a power of 2, but I have no idea what to do when n is not a power of 2.
Let C(n) be the (infinite) set of numbers that can be written as S ^ (S + n) for some S.
We have the following recurrence relations on the sets C(n):
If n = 2k is even, then C(n) = {2x : x in C(k)};
If n = 2k + 1 is odd, then C(n) = {2x + 1 : x in C(k)} union {2x + 1 : x in C(k + 1)}.
An algorithm can be deduced from these relations. More precisely, a pair (C(n), C(n + 1)) can be deduced from (C(n / 2), C(n / 2 + 1)). Note that the union above is really a disjoint union, because every element in C(n) has the same parity as n, hence C(k) and C(k + 1) do not intersect.
Proof of the recurrence relations:
Simply look at the last binary digits of n and S. For example, if n = 2k + 1 is odd and S = 2s is even, then S + n = 2(s + k) + 1, so S ^ (S + n) = 2(s ^ (s + k)) + 1; the remaining cases are analogous.
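As a sanity check, here is a small OCaml sketch that brute-forces C(n) over S = 0..limit, so the recurrences can be compared on small cases (the bound limit is arbitrary, and the finite range only approximates the infinite set):

(* all distinct values of S ^ (S + n) for S = 0 .. limit *)
let c n limit =
  List.sort_uniq compare (List.init (limit + 1) (fun s -> s lxor (s + n)))

let () =
  (* the recurrence predicts that C(6) = {2x : x in C(3)},
     which can be eyeballed on these samples *)
  List.iter (Printf.printf "%d ") (c 3 1000); print_newline ();
  List.iter (Printf.printf "%d ") (c 6 1000); print_newline ()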

Divide-and-conquer algorithms' property example

I'm having trouble with understanding the following property of divide-and-conquer algorithms.
A recursive method that divides a problem of size N into two independent
(nonempty) parts that it solves recursively calls itself less than N times.
The proof is:
If the parts are one of size k and one of size N-k, then the total number of
recursive calls that we use is T(N) = T(k) + T(N-k) + 1, for N >= 1 with T(1) = 0.
The solution T(N) = N-1 is immediate by induction. If the sizes sum to a value
less than N, the proof that the number of calls is less than N-1 follows from the
same inductive argument.
I perfectly understand the formal proof above. What I don't understand is how this property is connected to the examples that are usually used to demonstrate the divide-and-conquer idea, particularly to the problem of finding the maximum:
static double max(double a[], int l, int r)
{
    if (l == r) return a[l];
    int m = (l + r) / 2;
    double u = max(a, l, m);
    double v = max(a, m + 1, r);
    if (u > v) return u; else return v;
}
In this case, when a consists of N = 2 elements, max(0,1) will call itself 2 more times, namely max(0,0) and max(1,1), which equals N. If N = 4, max(0,3) will call itself 2 times, and then each of those calls will call max 2 more times, so the total number of calls is 6 > N. What am I missing?
You're not missing anything. The theorem and its proof are wrong. The error is here:
T(n) = T(k) + T(n-k) + 1
The constant term of 1 should be 2, as the function makes one recursive call for each of the two pieces into which it divides the problem. The correct bound is 2N-1, rather than N. Hopefully, this error will be fixed in the next edition of your textbook, or at least in the errata.
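To see the 2N - 2 count empirically, here is a small OCaml sketch (a direct translation of the Java max, with a reference cell counting the recursive calls; the test array is arbitrary):

let calls = ref 0

let rec max_range (a : float array) l r =
  if l = r then a.(l)
  else begin
    let m = (l + r) / 2 in
    calls := !calls + 2;  (* the two recursive calls below *)
    let u = max_range a l m in
    let v = max_range a (m + 1) r in
    if u > v then u else v
  end

let () =
  let a = [| 3.; 1.; 4.; 1.; 5.; 9.; 2.; 6. |] in
  ignore (max_range a 0 (Array.length a - 1));
  Printf.printf "N = %d, recursive calls = %d (2N - 2 = %d)\n"
    (Array.length a) !calls (2 * Array.length a - 2)

For the 8-element array this prints 14 = 2*8 - 2.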

Choosing minimum length k of array for merge sort where use of insertion sort to sort the subarrays is more optimal than standard merge sort

This is a question from Introduction to Algorithms by Cormen, but it isn't a homework problem, just self-study.
There is an array of length n. Consider a modification to merge sort in which n/k sublists, each of length k, are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined.
The relationship between n and k isn't known. The length of the array is n, and n/k sublists of length k means (n/k) * k equals the n elements of the array. Hence k is simply the limit at which the splitting of the array for merge sort is stopped, and insertion sort is used instead because of its smaller constant factor.
I was able to do the mathematical proof that the modified algorithm works in Θ(n*k + n*lg(n/k)) worst-case time. Now the book went on to say to
find the largest value of k as a function of n for which this modified algorithm has the same running time as standard merge sort, in terms of Θ notation. How should we choose k in practice?
This got me thinking for a long time, but I couldn't come up with anything. I tried to solve
n*k + n*lg(n/k) = n*lg(n) for a relationship, thinking that finding an equality between the two running times would give me the limit, and anything greater could be checked by simple trial and error.
I solved it like this
n k + n lg(n/k) = n lg(n)
k + lg(n/k) = lg(n)
lg(2^k) + lg(n/k) = lg(n)
(2^k * n)/k = n
2^k = k
But it gave me 2^k = k, which doesn't show any relationship. What is the relationship? I think I might have used the wrong equation to find it.
I can implement the algorithm, and I suppose adding an if (length_Array < k) statement in the merge_sort function here (GitHub link of merge sort implementation) for calling insertion sort would be good enough. But how do I choose k in real life?
Well, this is a mathematical minimization problem, and to solve it, we need some basic calculus.
We need to find the value of k for which d[n*k + n*lg(n/k)] / dk == 0.
We should also check for the edge cases, which are k == n, and k == 1.
The candidate value of k that gives the minimal result for n*k + n*lg(n/k) in the required range is the optimal value of k.
Attachment, solving the derivative equation (treating lg as a natural logarithm; the constant factor 1/ln 2 is dropped, since it does not change where the derivative vanishes asymptotically):
d[n*k + n*lg(n/k)] / dk = d[n*k + n*lg(n) - n*lg(k)] / dk
                        = n + 0 - n*(1/k) = n - n/k
=>
n - n/k = 0  =>  n = n/k  =>  k = 1
Now, we have the candidates: k = n and k = 1. For k = n we get O(n^2), thus we conclude the optimal k is k == 1.
Note that we took the derivatives of the function from the big Theta, and not of the exact complexity function with the needed constants.
Doing this on the exact complexity function, with all the constants, might yield a slightly different end result - but the way to solve it is pretty much the same, only taking derivatives of a different function.
Maybe k should be lg(n).
Θ(n*k + n*lg(n/k)) has two terms, and we have the assumption that k >= 1, so the second term is at most n*lg(n).
Only when k = lg(n) is the whole result still Θ(n*lg(n)); for larger k, the n*k term dominates.
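In practice, k is not derived from the Θ-expression at all: you pick a cutoff below which insertion sort takes over, and tune it by benchmarking on your machine (tuned values commonly land somewhere in the tens of elements). Here is a minimal OCaml sketch of the modification, under the assumption that a length check before the recursive split is all that changes; the cutoff value is an arbitrary placeholder:

let cutoff = 16  (* arbitrary placeholder; tune by benchmarking *)

let insertion_sort a lo hi =
  for i = lo + 1 to hi do
    let key = a.(i) in
    let j = ref (i - 1) in
    while !j >= lo && a.(!j) > key do
      a.(!j + 1) <- a.(!j);
      decr j
    done;
    a.(!j + 1) <- key
  done

(* merge a.(lo..mid) and a.(mid+1..hi) using the scratch array aux *)
let merge a aux lo mid hi =
  Array.blit a lo aux lo (hi - lo + 1);
  let i = ref lo and j = ref (mid + 1) in
  for k = lo to hi do
    if !i > mid then begin a.(k) <- aux.(!j); incr j end
    else if !j > hi then begin a.(k) <- aux.(!i); incr i end
    else if aux.(!j) < aux.(!i) then begin a.(k) <- aux.(!j); incr j end
    else begin a.(k) <- aux.(!i); incr i end
  done

let rec sort a aux lo hi =
  if hi - lo + 1 <= cutoff then insertion_sort a lo hi
  else begin
    let mid = lo + (hi - lo) / 2 in
    sort a aux lo mid;
    sort a aux (mid + 1) hi;
    merge a aux lo mid hi
  end

let merge_sort a =
  let n = Array.length a in
  if n > 1 then sort a (Array.make n a.(0)) 0 (n - 1)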
