I'm looking for a way to distribute points along a portion of the perimeter of a rectangle. These points need to be evenly spaced from one another.
I have a rectangular (usually square) bounds, and 2 points (ps and pe) along that perimeter that mark the allowed range for points. Here I've marked the allowed range in red:
I need to place n points along that segment (usually 1-3). These points need to be evenly spaced at a distance d. So the distance between n0..n1 and n1..n2 etc should all be d. The boundary points also count for the purposes of distribution, so the distance between the first and last points and ps/pe should be d as well.
This seemed like a straightforward task going in, but I quickly realised the naive method doesn't work here. Taking the length of the segment and dividing by n+1 doesn't account for corners. For example, n = 1 puts the point too close to pe:
My math is pretty rusty (day job doesn't require much of it usually), but I've tried several different approaches and none have quite worked out. I was able to solve for n = 1 using vectors, by finding the midpoint between ps and pe, finding a perpendicular vector and then intersecting that with the segment, like below. I have no idea how to make this approach work if n is something else though, or even if it can be done.
One final note, if completely even distribution is impracticable then a good enough approximation is fine. Ideally the approximation is off by roughly the same amount throughout the range (instead of say, worse on the edges).
I am going to suggest an algorithm, but since the derivation is mathematically a bit messy, I did not have time to think it through carefully or to check it thoroughly for correctness. I may also have included some redundant checks; proving suitable inequalities might show that they are unnecessary, and might also show that a solution always exists under reasonable assumptions. I believe the idea is correct, but I might have made some mistakes writing it up, so be careful.
According to your comment, solving the case with only one corner inside the boundary segment is enough, since the remaining cases follow by symmetry, so I will focus on the one-corner case.
Your polygonal segment with one 90 degree corner is divided into a pair of perpendicular straight line segments, the first one of length l1 and the second of length l2. These two lengths are given to you. You also want to place a given number n of points on the polygonal segment, which has total length l1 + l2, so that the Euclidean straight-line distance between any two consecutive points is the same. Call that unknown distance d. When you do that, you are going to end up with n1 full segments of unknown length d on l1 and n2 full segments of unknown length d on l2, so that
n1 + n2 = n
In general, you will also end up with an extra segment of length d1 <= d on l1 with one end at the 90 degree corner. Analogously, you will also have an extra segment of length d2 <= d on l2 with one end at the 90 degree corner. Thus, the two segments d1 and d2 share a common end and are perpendicular, so they form a right-angled triangle. According to Pythagoras' theorem, these two segments satisfy the equation
d1^2 + d2^2 = d^2
If we combine all the equations and information up to now, we obtain a system of equations and restrictions which are:
n1*d + d1 = l1
n2*d + d2 = l2
d1^2 + d2^2 = d^2
n1 + n2 = n
n1 and n2 are non-negative integers
where the variables d, d1, d2, n1, n2 are unknown while l1, l2, n are given.
From the first two equations, you can express d1 and d2 and substitute in the third equation:
d1 = l1 - n1*d
d2 = l2 - n2*d
(l1 - n1*d)^2 + (l2 - n2*d)^2 = d^2
n1 + n2 = n
n1 and n2 are non-negative integers
In the special case, when one wants to add only one point, i.e. n = 1, one has either n1 = n = 1 or n2 = n = 1 depending on whether l1 > l2 or l1 <= l2 respectively.
Say l1 > l2. Then n1 = n = 1 and n2 = 0 so
d1 = l1 - d
d2 = l2
(l1 - d)^2 + l2^2 = d^2
Expand the equation, simplify it and solve for d:
l1^2 - 2*l1*d + d^2 + l2^2 = d^2
l1^2 + l2^2 - 2*l1*d = 0
d = (l1^2 + l2^2) / (2*l1)
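As a quick numeric sanity check of this formula (the leg lengths below are made-up example values, not from the question):

```python
import math

# Hypothetical example legs: l1 = 5, l2 = 3, so d = (25 + 9) / 10 = 3.4.
l1, l2 = 5.0, 3.0
d = (l1**2 + l2**2) / (2 * l1)
d1 = l1 - d                      # remaining piece of l1 before the corner
# the corner triangle with legs d1 and l2 must have hypotenuse d
assert math.isclose(math.hypot(d1, l2), d)
```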
Next, let us go back to the general case. You have to solve the system
(l1 - n1*d)^2 + (l2 - n2*d)^2 = d^2
n1 + n2 = n
n1 and n2 are non-negative integers
where the variables d, n1, n2 are unknown while l1, l2, n are given. Expand the first equation:
l1^2 - 2 * l1 * n1 * d + n1^2 * d^2 + l2^2 - 2 * l2 * n2 * d + n2^2 * d^2 = d^2
n1 + n2 = n
n1 and n2 are non-negative integers
and group the terms together
(n1^2 + n2^2 - 1) * d^2 - 2 * (l1*n1 + l2*n2) * d + (l1^2 + l2^2) = 0
n1 + n2 = n
n1 and n2 are non-negative integers
The first equation is a quadratic equation in d
(n1^2 + n2^2 - 1) * d^2 - 2 * (l1*n1 + l2*n2) * d + (l1^2 + l2^2) = 0
By the quadratic formula you expect two solutions; in general, you choose whichever is positive (if both are positive with d < l1 and d < l2, there may be two valid solutions):
d = ( (l1*n1 + l2*n2) +- sqrt( (l1*n1 + l2*n2)^2 - (l1^2 + l2^2)*(n1^2 + n2^2 - 1) ) ) / (n1^2 + n2^2 - 1)
n1 + n2 = n
n1 and n2 are non-negative integers
Now, if you can find appropriate n1 and n2, you can calculate the necessary d using the quadratic formula above.
For solutions to exist, the expression under the square root has to be non-negative, so you have the inequality restriction
d = ( (l1*n1 + l2*n2) +- sqrt( (l1*n1 + l2*n2)^2 - (l1^2 + l2^2)*(n1^2 + n2^2 - 1) ) ) / (n1^2 + n2^2 - 1)
(l1*n1 + l2*n2)^2 - (l1^2 + l2^2)*(n1^2 + n2^2 - 1) >= 0
n1 + n2 = n
n1 and n2 are non-negative integers
Simplify the inequality expression:
(l1*n1 + l2*n2)^2 - (l1^2 + l2^2)*(n1^2 + n2^2 - 1) = (l1^2 + l2^2) - (l1*n2 - l2*n1)^2
which brings us to the following system
d = ( (l1*n1 + l2*n2) +- sqrt( (l1^2 + l2^2) - (l1*n2 - l2*n1)^2 ) ) / (n1^2 + n2^2 - 1)
(l1^2 + l2^2) - (l1*n2 - l2*n1)^2 >= 0
n1 + n2 = n
n1 and n2 are non-negative integers
Factorizing the inequality,
d = ( (l1*n1 + l2*n2) +- sqrt( (l1^2 + l2^2) - (l1*n2 - l2*n1)^2 ) ) / (n1^2 + n2^2 - 1)
(sqrt(l1^2 + l2^2) - l1*n2 + l2*n1) * (sqrt(l1^2 + l2^2) + l1*n2 - l2*n1) >= 0
n1 + n2 = n
n1 and n2 are non-negative integers
So you have two cases for these restrictions:
Case 1:
d = ( (l1*n1 + l2*n2) +- sqrt( (l1^2 + l2^2) - (l1*n2 - l2*n1)^2 ) ) / (n1^2 + n2^2 - 1)
sqrt(l1^2 + l2^2) - l1*n2 + l2*n1 >= 0
sqrt(l1^2 + l2^2) + l1*n2 - l2*n1 >= 0
n1 + n2 = n
n1 and n2 are non-negative integers
or
Case 2:
d = ( (l1*n1 + l2*n2) +- sqrt( (l1^2 + l2^2) - (l1*n2 - l2*n1)^2 ) ) / (n1^2 + n2^2 - 1)
sqrt(l1^2 + l2^2) - l1*n2 + l2*n1 <= 0
sqrt(l1^2 + l2^2) + l1*n2 - l2*n1 <= 0
n1 + n2 = n
n1 and n2 are non-negative integers
We focus on case 1 and show that case 2 is not possible. Start by expressing n2 = n - n1, then substitute it in each of the two inequalities and isolate n1 on one side of each inequality. This procedure yields:
Case1:
d = ( (l1*n1 + l2*n2) +- sqrt( (l1^2 + l2^2) - (l1*n2 - l2*n1)^2 ) ) / (n1^2 + n2^2 - 1)
( l1*n - sqrt(l1^2 + l2^2) ) / (l1 + l2) <= n1 <= ( l1*n + sqrt(l1^2 + l2^2) ) / (l1 + l2)
n1 + n2 = n
n1 and n2 are non-negative integers
One can see that case 2 inverts the inequalities, which is impossible because the left side is less than the right one.
So the algorithm could be something like this:
function find_d(l1, l2, n)
{
    if n == 1 and l1 > l2 {
        return (l1^2 + l2^2) / (2*l1)
    } else if n == 1 and l1 <= l2 {
        return (l1^2 + l2^2) / (2*l2)
    }
    for integer n1 >= 0 from floor( ( l1*n - sqrt(l1^2 + l2^2) ) / (l1 + l2) ) to floor( ( l1*n + sqrt(l1^2 + l2^2) ) / (l1 + l2) ) + 1
    {
        n2 = n - n1
        D = (l1^2 + l2^2) - (l1*n2 - l2*n1)^2
        if D >= 0
        {
            d1 = ( (l1*n1 + l2*n2) - sqrt(D) ) / (n1^2 + n2^2 - 1)
            d2 = ( (l1*n1 + l2*n2) + sqrt(D) ) / (n1^2 + n2^2 - 1)
            if 0 < d1 < max(l1, l2) {
                return d1
            } else if 0 < d2 < max(l1, l2) {
                return d2
            }
        }
    }
    return "could not find a solution"
}
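For reference, here is a direct Python transcription of the pseudocode above (my own sketch; it returns None instead of the "could not find a solution" string):

```python
import math

def find_d(l1, l2, n):
    """Spacing d for n points on an L-shaped segment with legs l1, l2."""
    if n == 1:
        # the n = 1 formula derived earlier, branching on max(l1, l2)
        return (l1**2 + l2**2) / (2 * max(l1, l2))
    hyp = math.sqrt(l1**2 + l2**2)
    lo = max(0, math.floor((l1 * n - hyp) / (l1 + l2)))
    hi = math.floor((l1 * n + hyp) / (l1 + l2)) + 1
    for n1 in range(lo, hi + 1):
        n2 = n - n1
        if n2 < 0:
            continue
        disc = (l1**2 + l2**2) - (l1 * n2 - l2 * n1)**2
        denom = n1**2 + n2**2 - 1
        if disc < 0 or denom == 0:
            continue
        for sign in (-1, 1):
            d = (l1 * n1 + l2 * n2 + sign * math.sqrt(disc)) / denom
            if 0 < d < max(l1, l2):
                return d
    return None
```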
This is a preliminary version, so approach it with some caution. I did not have time to check the algorithm for near-degenerate cases, for which one may have to add a few short loops with if statements somewhere. In general, however, it will probably work almost always. I am just posting a Python implementation, but if you want and when I find a bit more time, I can write down the math behind this algorithm. Some of its ideas could also simplify the previous one-corner algorithm.
import numpy as np
import math
import matplotlib.pyplot as plt

def sq_root(x, m, K):
    return math.sqrt(x**2 - (K - m*x)**2)

def f(x, n, L):
    return sq_root(x, n[0], L[0]) + sq_root(x, n[2], L[2]) + n[1]*x - L[1]

def df(x, n, L):
    return ((1 - n[0]**2)*x + L[0]*n[0])/sq_root(x, n[0], L[0]) + ((1 - n[2]**2)*x + L[2]*n[2])/sq_root(x, n[2], L[2]) + n[1]

# Solving the nonlinear equation for d by using Newton's method:
def solve_f(n, L):
    x = sum(L) / (sum(n) + 2)
    y = f(x, n, L)
    while abs(y) >= 0.0000001:
        x = x - y / df(x, n, L)
        y = f(x, n, L)
    return x - y / df(x, n, L)

def find_n(L, N):
    x0 = sum(L) / (N + 1)
    # x <= x0
    n = np.array([0, 0, 0])
    n[0] = math.floor(L[0]/x0)
    n[2] = math.floor(L[2]/x0)
    n[1] = N - n[0] - n[2] - 1
    return n

def find_d(L, N):
    if N == 1:
        d2 = (L[2]**2 + L[1]**2 - L[0]**2) / (2*L[1])
        return math.sqrt(L[0]**2 + d2**2), np.array([0, 0, 0])
    n = find_n(L, N)
    return solve_f(n, L), n

def find_the_points(L, N):
    d, n = find_d(L, N)
    d2 = math.sqrt(d**2 - (L[0] - n[0]*d)**2)
    # d3 = math.sqrt(d**2 - (L[2] - n[2]*d)**2)
    p = np.zeros((sum(n) + 3, 2))
    p[0]  = np.array([0, L[1] - L[0]])
    p[-1] = np.array([L[1], L[1] - L[2]])
    e_x = np.array([1, 0])
    e_y = np.array([0, 1])
    corner = np.array([0, L[1]])
    for i in range(n[0]):
        p[i+1] = p[0] + (i+1)*d*e_y
    for i in range(n[1] + 1):
        p[n[0]+i+1] = corner + (d2 + i*d)*e_x
    for i in range(n[2]):
        p[-(2+i)] = p[-1] + (i+1)*d*e_y
    return p, d, n

'''
Test example:
'''
# lengths of the three straight segments along the edges of a square of edge length L2:
L1 = 5
L2 = 7
L3 = 3
L = np.array([L1, L2, L3])
# N = number of points to be added
N = 7
# If there are two corners, then the number of segments aligned with the square's
# edges is N - 1, and the total number of equidistant segments is N + 1.
# n = n[0], n[1], n[2] gives the number of segments aligned with each
# straight segment of the rectangular polyline along the square's boundary.
points, d, n = find_the_points(L, N)
print(points)
print(d)
print(n)

plt.figure()
plt.plot(points[:, 0], points[:, 1])
for j in range(points.shape[0]):
    plt.plot(points[j, 0], points[j, 1], 'ro')
axx = plt.gca()
axx.set_aspect('equal')
plt.show()  # if you need...
Pythagoras to the rescue.
Considering the case where n = 2 with one corner [1]. The rest of the cases will likely be similar, just with more edge cases and possible configurations [2].
Let us name the length of the vertical segment i and length of the horizontal segment j.
We are looking for points X and Y such that dist(ps, X) = dist(X, Y) = dist(Y, pe).
First, let's assume that the corner lies between points X and Y. In that case, we are looking for the solution of this equation:
x^2 = (i - x)^2 + (j - x)^2, where i and j are known.
If the above equation has no solution, it means that the corner has to lie either between ps and X or between Y and pe. I will only cover the case of ps and X, since the other one is symmetrical:
x^2 = i^2 + (j - 2x)^2 is the equation for this case.
For more points and corners, there will just be more possible configurations of points and corners. However since the distance should be equal and the lengths are known, a series of quadratic equations will be most likely enough. The case for 5+ points and three corners will be a wee bit iffy though.
[1] The case for n = 1 will be nearly the same as the asymmetric variant of the n = 2 case that I cover.
[2] Which will make them rather ugly.
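A minimal sketch of the corner-between-X-and-Y case (the function name and the validity check are my additions):

```python
import math

def n2_spacing(i, j):
    """Solve x^2 = (i - x)^2 + (j - x)^2 for the corner-between-X-and-Y case."""
    # The equation simplifies to x^2 - 2*(i + j)*x + (i^2 + j^2) = 0,
    # whose discriminant (divided by 4) is (i + j)^2 - (i^2 + j^2) = 2*i*j.
    x = (i + j) - math.sqrt(2 * i * j)   # smaller root of the quadratic
    # consistent only if both residual legs i - x and j - x are non-negative
    return x if 0 <= x <= min(i, j) else None
```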
The points on any one of the sides are to be equidistant according to the problem. Just guess the distance and use binary search to find the solution to any degree of accuracy. There's a slightly tricky part in determining how many points lie on each rectangle side. (Hopefully it's just "slightly." :)
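For the single-corner case, this guess-and-binary-search idea can be sketched as follows (the coordinate setup, names, and walking scheme are my own assumptions, not part of the answer above):

```python
import math

def walk(d, l1, l2, steps):
    """Walk `steps` chords of length d along the L-shaped polyline
    (0, l1) -> (0, 0) -> (l2, 0); return the arc-length position reached."""
    t = 0.0
    for _ in range(steps):
        if t >= l1:                      # already on the horizontal leg
            t += d
        elif t + d <= l1:                # stay on the vertical leg
            t += d
        else:                            # chord crosses the corner
            y = l1 - t                   # height above the corner (y < d here)
            t = l1 + math.sqrt(d*d - y*y)
    return t

def spacing(l1, l2, n, iters=60):
    """Binary search for the chord length d that fits n interior points."""
    lo, hi = 0.0, l1 + l2
    for _ in range(iters):
        d = (lo + hi) / 2
        # n interior points means n + 1 chords from ps to pe
        if walk(d, l1, l2, n + 1) < l1 + l2:
            lo = d                       # undershot pe: d too small
        else:
            hi = d                       # overshot pe: d too large
    return (lo + hi) / 2
```

For n = 1 with legs 5 and 3 this converges to the same d = 3.4 that the closed-form solution gives.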
I need to partition n elements into k groups such that the sums of the groups are all equal.
For instance:
I have a list of numbers going from 1 to 99 [1, 2, 3, 4, 5...] and I need to partition the list into 3 groups. The sum of all elements of each groups must be equal. In this example, n=99 and k=3.
What is an efficient and elegant algorithm to achieve this?
I'm just asking for algorithm suggestions to use; I don't want a solution.
Let me start by clarifying something: when you say you want an algorithm, but no solution, I have to assume that what you mean is that you want an algorithm that'll give you a solution for k=3 and n any positive integer.
I believe the problem has a solution for any n >= 5, provided either n or n+1 is divisible by 3.
This is because the sum 1 + ... + n equals n*(n+1)/2, which is divisible by 3 if and only if either n or n+1 is divisible by 3.
Assuming n satisfies these conditions, the algorithm goes like this.
Let s = [n*(n+1)/2]/3.
Find the number m in the decreasing sequence n, n-1, n-2, ..., such that
n + (n - 1) + (n - 2) + ... + m <= s < n + (n - 1) + (n - 2 ) + ... + m + (m - 1).
Let h = s - [n + (n - 1) + (n - 2) + ... + m].
Then h + [n + (n - 1) + (n - 2) + ... + m] = s, and we have our first sum.
Note that, given how m was obtained, h is guaranteed to be in the decreasing sequence m-2, m-3, ..., 1, hence is available for our first sum.
We now find the number q in the decreasing sequence m-1, m-2, ..., h+1, h, h-1, ..., 1, such that
m + (m - 1) + (m - 2) + ... + q <= s < m + (m - 1) + (m - 2) + ... + q + (q - 1),
keeping in mind that these sums may include h+1 or h-1, but not h. I am abusing the notational conventions for sums here.
This is where things are getting a bit hairy.
I conjecture that the number p = s - [m + (m - 1) + (m - 2) + ... + q] is among the numbers left over (unused) from the original sequence, but I won't prove it. (Exercise for the reader, as they say.)
Then p + [m + (m - 1) + (m - 2) + ... + q] = s, and we have have our second sum.
(Again: the number h may not be in the sum [m + (m - 1) + (m - 2) + ... + q].)
The last sum is that of all the remaining numbers, left over from the original sequence.
For example (well, yes, that is a solution --- but for illustrative purposes only...):
1 + ... + 99 = 99*100/2 = 3*1650
= (99 + 98 + ... + 83 + 82 + 21)
+ (81 + 80 + ... + 60 + 59 + 40)
+ (58 + ... + 41 + 39 + ... + 22 + 20 + ... + 1).
Here n = 99, m = 82, h = 21, q = 59, and p = 40.
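A sketch of this procedure in Python (my own transcription; taking numbers greedily in decreasing order reproduces the m/h and q/p choices described above, and it assumes n >= 5 with n*(n+1)/2 divisible by 3):

```python
def three_equal_groups(n):
    """Split 1..n into three groups of equal sum, per the algorithm above."""
    s = n * (n + 1) // 2 // 3            # target sum of each group
    remaining = list(range(n, 0, -1))    # n, n-1, ..., 1
    groups = []
    for _ in range(2):
        group, total = [], 0
        for v in list(remaining):        # scan in decreasing order
            if total + v <= s:           # take v when it still fits
                group.append(v)
                total += v
                remaining.remove(v)
            if total == s:               # group filled exactly
                break
        groups.append(group)
    groups.append(remaining)             # leftovers form the third group
    return groups
```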
So here is an algorithm that is supposed to return the value of a given polynomial at any given x.
A[] is the coefficient array and P[] the power of x array.
(e.g. x^2 +2*x + 1 would have: A[] = {1,2,1} , P[]= {2,1,0})
Also, recPower() runs in O(logn).
int polynomial(int x, int A[], int P[], int l, int r)
{
    if (r - l == 1)
        return ( A[l] * recPower(x, P[l]) ) + ( A[r] * recPower(x, P[r]) );
    int m = (l + r) / 2;
    return polynomial(x, A, P, l, m) + polynomial(x, A, P, m, r);
}
How do I go about calculating this time complexity? I am perplexed due to the if statement. I have no idea what the recurrence relation will be.
The following observation might help: as soon as we have r = l + 1, we spend O(logn) time and we are done.
My answer requires good understanding of Recursion Tree. So proceed wisely.
So our aim is to find: after how many iterations will we have r = l + 1? Let's find out:
Focusing on return polynomial(x, A, P, l, m) + polynomial(x, A, P, m, r);
Let us first consider the left function polynomial(x, A, P, l, m). The key thing to note is that l remains constant in all subsequent left function calls made recursively.
By left function I mean polynomial(x, A, P, l, m) and by right function I mean
polynomial(x, A, P, m, r).
For left function polynomial(x, A, P, l, m), We have:
First iteration
l = l and r = (l + r)/2
Second iteration
l = l and r = (l + (l + r)/2)/2
which means that
r = (2l + l + r)/2^2
Third iteration
l = l and r = (l + (l + (l + r)/2)/2)/2
which means that
r = (4l + 2l + l + r)/2^3
Fourth iteration
l = l and r = (l + (l + (l + (l + r)/2)/2)/2)/2
which means that
r = (8l + 4l + 2l + l + r)/2^4
This means that in the nth iteration we have:
r = (l*(1 + 2 + 4 + 8 + ... + 2^(n-1)) + r)/2^n
and terminating condition is r = l + 1
Solving (l*(1 + 2 + 4 + 8 + ... + 2^(n-1)) + r)/2^n = l + 1, we get
2^n = r - l
This means that n = log(r - l). One might object that in all subsequent calls of the left function we ignored the other call, the right function call. The reason is this:
In the right function call we have l = m, where m is already reduced (since we take the mean), and r = r; asymptotically this has no effect on the time complexity.
So our recursion tree will have maximum depth log(r - l). It's true that not all levels will be fully populated, but for the sake of simplicity we assume they are in the asymptotic analysis. After reaching a depth of log(r - l), we call the function recPower, which takes O(logn) time. The total number of nodes at depth log(r - l) (assuming all levels above are full) is 2^(log(r - l) - 1), and each node takes O(logn) time.
Therefore we have total time = O( logn*(2^(log(r - l) - 1)) ).
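The size of this recursion tree can also be checked empirically with a small counting sketch (mine, not from the question): a range of length r - l produces Theta(r - l) base-case leaves, each of which then pays the O(logn) cost of recPower.

```python
def count_leaves(l, r):
    """Count base cases reached by the same splitting as polynomial()."""
    if r - l == 1:
        return 1
    m = (l + r) // 2
    return count_leaves(l, m) + count_leaves(m, r)

# a range of length t yields exactly t leaves, i.e. Theta(r - l) of them
assert all(count_leaves(0, t) == t for t in range(1, 100))
```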
This might help:
T(#terms) = 2T(#terms/2) + a
T(2) = 2logn + b
Where a and b are constants, and #terms refer to number of terms in polynomial.
This recurrence relation can be solved using the Master Theorem or the recursion-tree method.
Suppose we have the series summation
s = 1 + 2a + 3a^2 + 4a^3 + ... + b*a^(b-1)
I need to find s MOD M, where M is a prime number and b is a relatively big integer.
I have found an O((log n)^2) divide and conquer solution.
where,
g(n) = (1 + a + a^2 + ... + a^n) MOD M
f(a, b) = [f(a, b/2) + a^(b/2) * (f(a, b/2) + (b/2)*g(b/2))] MOD M, where b is an even number
f(a, b) = [f(a, b/2) + a^(b/2) * (f(a, b/2) + (b/2)*g(b/2)) + b*a^(b-1)] MOD M, where b is an odd number
Is there any O(log n) solution for this problem?
Yes. Observe that 1 + 2a + 3a^2 + ... + ba^(b-1) is the derivative in a of 1 + a + a^2 + a^3 + ... + a^b. (The field of formal power series covers a lot of tricks like this.) We can evaluate the latter with automatic differentiation with dual numbers in time O(log b) arithmetic ops. Something like this:
def fdf(a, b, m):
    if b == 0:
        return (1, 0)
    elif b % 2 == 1:
        f, df = fdf((a**2) % m, (b - 1) // 2, m)
        df *= 2 * a
        return ((1 + a) * f % m, (f + (1 + a) * df) % m)
    else:
        f, df = fdf((a**2) % m, (b - 2) // 2, m)
        df *= 2 * a
        return ((1 + (a + a**2) * f) % m,
                ((1 + 2 * a) * f + (a + a**2) * df) % m)
The answer is fdf(a, b, m)[1]. Note the use of the chain rule when we go from the derivative with respect to a**2 to the derivative with respect to a.
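To sanity-check this, one can compare against direct term-by-term evaluation (fdf is repeated here, with Python 3 integer division, so the snippet runs standalone):

```python
def fdf(a, b, m):
    # returns (F(a), F'(a)) mod m, where F(a) = 1 + a + ... + a^b
    if b == 0:
        return (1, 0)
    if b % 2 == 1:
        f, df = fdf(a * a % m, (b - 1) // 2, m)
        df *= 2 * a
        return ((1 + a) * f % m, (f + (1 + a) * df) % m)
    f, df = fdf(a * a % m, (b - 2) // 2, m)
    df *= 2 * a
    return ((1 + (a + a * a) * f) % m,
            ((1 + 2 * a) * f + (a + a * a) * df) % m)

def s_direct(a, b, m):
    # 1 + 2a + 3a^2 + ... + b*a^(b-1), evaluated term by term
    return sum((k + 1) * pow(a, k, m) for k in range(b)) % m

M = 10**9 + 7
assert all(fdf(a, b, M)[1] == s_direct(a, b, M)
           for a in range(2, 6) for b in range(1, 30))
```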
Given the following function:
Function f(n, m)
    if n == 0 or m == 0: return 1
    return f(n-1, m) + f(n, m-1)
What's the runtime complexity of f? I understand how to do it quick and dirty, but how to properly characterize it? Is it O(2^(m*n))?
This is an instance of Pascal's triangle: every element is the sum of the two elements just above it, the sides being all ones.
So f(n, m) = (n + m)! / (n! m!).
Now to know the number of calls to f required to compute f(n, m), you can construct a modified Pascal's triangle: instead of the sum of the elements above, consider 1 + sum (the call itself plus the two recursive calls).
Draw the modified triangle and you will quickly convince yourself that this is exactly 2 f(n, m) - 1.
You can obtain the asymptotic behavior of the binomial coefficients from Stirling's approximation. http://en.wikipedia.org/wiki/Binomial_coefficient#Bounds_and_asymptotic_formulas
f(n, m) ~ (n + m)^(n + m) / (n^n . m^m)
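Both claims (the closed form for f and the 2 f(n, m) - 1 call count) are easy to verify numerically for small n and m with a throwaway check like this one (mine):

```python
from math import comb

def f(n, m):
    # the recursive function from the question
    return 1 if n == 0 or m == 0 else f(n - 1, m) + f(n, m - 1)

def calls(n, m):
    # total number of invocations triggered by one call to f(n, m)
    return 1 if n == 0 or m == 0 else 1 + calls(n - 1, m) + calls(n, m - 1)

assert all(f(n, m) == comb(n + m, m) for n in range(7) for m in range(7))
assert all(calls(n, m) == 2 * comb(n + m, m) - 1
           for n in range(7) for m in range(7))
```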
The runtime of f(n, m) is in O(f(n, m)). This is easily verified by the following observation:
Function g(n, m):
    if n == 0 or m == 0: return 1
    return g(n-1, m) + g(n, m-1) + 1
The function f is called exactly as often as g. Furthermore, the function g is called exactly g(n, m) times to evaluate the result of g(n, m). Likewise, the function f is called exactly g(n, m) = 2*f(n, m) - 1 times in order to evaluate the result of f(n, m).
As @Yves Daoust points out in his answer, f(n, m) = (n + m)!/(n!*m!), therefore you get a non-recursive runtime of O((n+m)!/(n!*m!)) for f.
Understanding the Recursive Function
f(n, 0) = 1
f(0, m) = 1
f(n, m) = f(n - 1, m) + f(n, m - 1)
The values look like a Pascal triangle to me:
m\n  0  1  2  3  4 ..
 0   1  1  1  1  1 ..
 1   1  2  3  4
 2   1  3  6
 3   1  4  .
 4   1  .
 .   .
 .   .
Solving the Recursion Equation
The values of Pascal's triangle can be expressed as binomial coefficients. Translating the coordinates one gets the solution for f:
f(n, m) = C(n + m, m)        ; the binomial coefficient "n + m choose m"
        = (n + m)! / (m! (n + m - m)!)
        = (n + m)! / (n! m!)
which is a nice term, symmetric in both arguments n and m. (Final term first given by @Yves Daoust in this discussion.)
Pascal's Rule
The recursion equation of f can be derived by using the symmetry of the binomial coefficients and Pascal's rule (writing C(a, b) for the binomial coefficient "a choose b"):
f(n, m) = C(n + m, n)
        = C(n + m, m)                                ; symmetry
        = C(n + m - 1, m) + C(n + m - 1, m - 1)      ; Pascal's rule
        = C((n - 1) + m, m) + C(n + (m - 1), m - 1)
        = f(n - 1, m) + f(n, m - 1)
Determining the Number of Calls
The "number of calls of f" function F is similar to f itself; we just have to count the call itself on top of the two recursive calls:
F(0, m) = F(n, 0) = 1, otherwise
F(n, m) = 1 + F(n - 1, m) + F(n, m - 1)
(Given first by @blubb in this discussion.)
Understanding the Number of Calls Function
If we write it down, we get another triangle scheme:
1 1 1 1 1 ..
1 3 5 7
1 5 11
1 7 .
1 .
.
.
Comparing the triangles value by value, one guesses
F(n, m) = 2 f(n, m) - 1 (*)
(Result first suggested by @blubb in this discussion.)
Proof
We get
F(0, m) = 2 f(0, m) - 1 ; using (*)
= 1 ; yields boundary condition for F
F(n, 0) = 2 f(n, 0) - 1
= 1
as it should and examining the otherwise clause, we see that
F(n, m) = 2 f(n, m) - 1 ; assumption
= 2 ( f(n - 1, m) + f(n, m - 1) ) - 1 ; definition f
= 1 + (2 f(n - 1, m) - 1) + (2 f(n, m - 1) - 1) ; algebra
= 1 + F(n - 1, m) + F(n, m - 1) ; 2 * assumption
Thus if we use (*) and the otherwise clause for f, the otherwise clause for F results.
As the finite difference equation and the start condition for F hold, we know it is F (uniqueness of the solution).
Estimating the Asymptotic Behaviour of the Number of Calls
Now on calculating / estimating the values of F (i.e. the runtime of your algorithm).
As
F = 2 f - 1
we see that
O(F) = O(f).
So the runtime of this algorithm is
O( (n + m)! / (n! m!) )
(Result first given by @Yves Daoust in this discussion.)
Approximating the Runtime
Using the Stirling approximation
n! ~= sqrt(2 pi n) (n / e)^n
one can get a form without hard to calculate factorials. One gets
f(n, m) ~= 1/sqrt(2 pi) * sqrt((n + m) / (n m)) * [(n + m)^(n + m)] / (n^n m^m)
thus arriving at
O( sqrt((n + m) / (n m)) [(n + m)^(n + m)] / (n^n m^m) )
(Use of Stirling's formula first suggested by @Yves Daoust in this discussion.)
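The quality of this approximation is easy to probe numerically (a quick comparison of mine; the sample values n = m = 20 are arbitrary):

```python
import math

def f_exact(n, m):
    return math.comb(n + m, m)

def f_stirling(n, m):
    # Stirling-based approximation of (n + m)! / (n! m!)
    return (math.sqrt((n + m) / (n * m)) / math.sqrt(2 * math.pi)
            * (n + m)**(n + m) / (n**n * m**m))

# already within about 1% for n = m = 20
assert abs(f_stirling(20, 20) / f_exact(20, 20) - 1) < 0.02
```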