EM algorithm: comprehension and example

I'm studying pattern recognition and I found an interesting algorithm that I'd like to understand more deeply, the Expectation-Maximization (EM) algorithm. I don't have great knowledge of probability and statistics, and I've read some articles on how the algorithm operates on normal or Gaussian distributions, but I would like to start with a simple example to understand it better. I hope the following example is suitable.
Assume we have a jar with balls of 3 colors: red, green, blue. The corresponding probabilities of drawing each colored ball are pr, pg, pb. Now, let's assume that we have the following parametrized model for the probabilities of drawing the different colours:
pr = 1/4
pg = 1/4 + p/4
pb = 1/2 - p/4
with p an unknown parameter. Now assume that the man who is doing the experiment is actually colourblind and cannot discern the red from the green balls. He draws N balls, but only sees
m1 = nR + nG red/green balls and m2 = nB blue balls.
The question is: can the man still estimate the parameter p, and with that in hand, calculate his best guess for the number of red and green balls (obviously, he knows the number of blue balls)? I think that obviously he can, but what about EM? What do I have to consider?

Well, the general outline of the EM algorithm is that if you know the values of some of the parameters, then computing the MLE for the other parameters is very simple. The commonly-given example is mixture density estimation. If you know the mixture weights, then estimating the parameters for the individual densities is easy (M step). Then you go back a step: if you know the individual densities then you can estimate the mixture weights (E step). There isn't necessarily an EM algorithm for every problem, and even if there is one, it's not necessarily the most efficient algorithm. It is, however, usually simpler and therefore more convenient.
In the problem you stated, you can pretend that you know the numbers of red and green balls and then you can carry out ML estimation for p (M step). Then with the value of p you go back and estimate the numbers of red and green balls (E step). Without thinking about it too much, my guess is that you could reverse the roles of the parameters and still work it as an EM algorithm: you could pretend that you know p and carry out ML estimation for the numbers of balls, then go back and estimate p.
If you are still following, we can work out formulas for all this stuff.
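To make the alternation concrete, here is a minimal sketch of that iteration for the jar model (my own code and variable names, not from the question). The E step splits the m1 red/green observations in proportion to the current pr and pg; the M step maximizes the expected complete-data log-likelihood, which works out to p = (2*nG - nB) / (nG + nB):

def em_for_p(m1, m2, p=0.0, iters=50):
    """m1 = observed red-or-green draws, m2 = observed blue draws."""
    for _ in range(iters):
        # E step: expected number of green balls among the m1 draws.
        pr, pg = 0.25, 0.25 + p / 4.0
        n_g = m1 * pg / (pr + pg)
        # M step: maximum-likelihood p given expected counts (n_g, m2).
        p = (2.0 * n_g - m2) / (n_g + m2)
    return p

For example, with m1 = 750 and m2 = 250 out of N = 1000 draws, the iteration converges to p = 1, which matches the closed-form MLE derived in the next answer.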

When "p" is not known, you can go for maximum likihood or MLE.
First, from your description, "p" has to be in [-1, 2] or the probabilities will not make sense.
You have two certain observations: nG + nR = m and nB = N - m (where m = m1 and N = m1 + m2).
The probability of this happening is N! / (m! (N - m)!) * (1 - pb)^m * pb^(N - m).
Ignoring the constant of N choose m, we will maximize the second term:
p* = argmax over p of (1 - pb)^m pb^(N - m)
The easy solution is that p* should make pb = (N - m) / N = 1 - m / N.
So 0.5 - 0.25 p* = 1 - m / N ==> p* = max(-1, -2 + 4 * m / N).
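A tiny sketch of that closed form (names mine), together with one natural best guess for the red/green split the question asks about, allocating the m observed red/green balls in proportion to pr and pg:

def mle_p(m, N):
    # Closed-form MLE from above, clamped to the valid range for p.
    return max(-1.0, 4.0 * m / N - 2.0)

def red_green_guess(m, N):
    p = mle_p(m, N)
    pr, pg = 0.25, 0.25 + p / 4.0
    n_red = m * pr / (pr + pg)  # expected reds among the m red/green draws
    return n_red, m - n_red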


Is there an easy function from a pair of 32-bit ints to a single 64-bit int that preserves rotational order?

This is a question that came up in the context of sorting points with integer coordinates into clockwise order, but this question is not about how to do that sorting.
This question is about the observation that 2-d vectors have a natural cyclic ordering. Unsigned integers with usual overflow behavior (or signed integers using twos-complement) also have a natural cyclic ordering. Can you easily map from the first ordering to the second?
So, the exact question is whether there is a map from pairs of twos-complement signed 32-bit integers to unsigned (or twos-complement signed) 64-bit integers such that any list of vectors that is in clockwise order maps to integers that are in decreasing (modulo overflow) order?
Some technical cases that people will likely ask about:
Yes, vectors that are multiples of each other should map to the same thing
No, I don't care which vector (if any) maps to 0
No, the images of antipodal vectors don't have to differ by 2^63 (although that is a nice-to-have)
The obvious answer is that since there are only around 0.6*2^64 distinct slopes, the answer is yes, such a map exists, but I'm looking for one that is easily computable. I understand that "easily" is subjective, but I'm really looking for something reasonably efficient and not terrible to implement. So, in particular, no counting every lattice point between the ray and the positive x-axis (unless you know a clever way to do that without enumerating them all).
An important thing to note is that it can be done by mapping to 65-bit integers. Simply project the vector out to where it hits the box bounded by x,y=+/-2^62 and round toward negative infinity. You need 63 bits to represent that integer and two more to encode which side of the box you hit. The implementation needs a little care to make sure you don't overflow, but only has one branch and two divides and is otherwise quite cheap. It doesn't work if you project out to 2^61 because you don't get enough resolution to separate some slopes.
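For what it's worth, here is one possible reading of that construction as a sketch (Python for brevity; all names and the side conventions are mine, and Python's floor division conveniently rounds toward negative infinity). The key increases with counterclockwise angle, so a clockwise list maps to decreasing keys, and scaling a vector by a positive integer leaves its key unchanged:

B = 1 << 62  # half-width of the bounding box

def angle_key(x, y):
    """Map a nonzero integer vector to a 65-bit key that increases with
    counterclockwise angle, starting from the -45 degree diagonal."""
    assert (x, y) != (0, 0)
    if x > 0 and -x <= y < x:      # right side of the box: angles [-45, 45)
        side, t = 0, (y * B) // x
    elif y > 0 and -y < x <= y:    # top side: angles [45, 135)
        side, t = 1, (-x * B) // y
    elif x < 0 and x < y <= -x:    # left side: angles [135, 225)
        side, t = 2, (y * B) // x
    else:                          # bottom side: angles [225, 315)
        side, t = 3, (-x * B) // y
    return side * (2 * B) + (t + B)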
Also, before you suggest "just use atan2", compute atan2(1073741821,2147483643) and atan2(1073741820,2147483641)
EDIT: Expansion on the "atan2" comment:
Given two values x_1 and x_2 that are coprime and just less than 2^31 (I used 2^31-5 and 2^31-7 in my example), we can use the extended Euclidean algorithm to find y_1 and y_2 such that y_1/x_1-y_2/x_2 = 1/(x_1*x_2) ~= 2^-62. Since the derivative of arctan is bounded by 1, the difference of the outputs of atan2 on these values is not going to be bigger than that. So, there are lots of pairs of vectors that won't be distinguishable by atan2 as vanilla IEEE 754 doubles.
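Here is a small demonstration of that collision (the exact comparison via Fraction is my addition):

import math
from fractions import Fraction

x1, y1 = 2147483643, 1073741821  # 2**31 - 5, 2**30 - 3
x2, y2 = 2147483641, 1073741820  # 2**31 - 7, 2**30 - 4

print(Fraction(y1, x1) > Fraction(y2, x2))       # True: the exact slopes differ
print(y1 * x2 - y2 * x1)                         # 1, so they differ by 1/(x1*x2) ~= 2**-62
print(math.atan2(y1, x1) == math.atan2(y2, x2))  # True on typical IEEE 754 doubles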
If you have 80-bit extended registers and you are sure you can retain residency in those registers throughout the computation (and don't get kicked out by a context switch or just plain running out of extended registers), then you're fine. But, I really don't like the correctness of my code relying on staying resident in extended registers.
Here's one possible approach, inspired by a comment in your question. (For the tl;dr version, skip down to the definition of point_to_line at the bottom of this answer: that gives a mapping for the first quadrant only. Extension to the whole plane is left as a not-too-difficult exercise.)
Your question says:
in particular, no counting every lattice point between the ray and the positive x-axis (unless you know a clever way to do that without enumerating them all).
There is an algorithm to do that counting without enumerating the points; its efficiency is akin to that of the Euclidean algorithm for finding greatest common divisors. I'm not sure to what extent it counts as either "easily computable" or "clever".
Suppose that we're given a point (p, q) with integer coordinates and both p and q positive (so that the point lies in the first quadrant). We might as well also assume that q < p, so that the point (p, q) lies between the x-axis y = 0 and the diagonal line y = x: if we can solve the problem for the half of the first quadrant that lies below the diagonal, we can make use of symmetry to solve it generally.
Write M for the bound on the size of p and q, so that in your example we want M = 2^31.
Then the number of lattice points strictly inside the triangle bounded by:
the x-axis y = 0
the ray y = (q/p)x that starts at the origin and passes through (p, q), and
the vertical line x = M
is the sum as x ranges over integers in (0, M) of ⌈qx/p⌉ - 1.
For convenience, I'll drop the -1 and include 0 in the range of the sum; both those changes are trivial to compensate for. And now the core functionality we need is the ability to evaluate the sum of ⌈qx/p⌉ as x ranges over the integers in an interval [0, M). While we're at it, we might also want to be able to compute a closely-related sum: the sum of ⌊qx/p⌋ over that same range of x (and it'll turn out that it makes sense to evaluate both of these together).
For testing purposes, here are slow, naive-but-obviously-correct versions of the functions we're interested in, here written in Python:
def floor_sum_slow(p, q, M):
    """
    Sum of floor(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    return sum(q * x // p for x in range(M))

def ceil_sum_slow(p, q, M):
    """
    Sum of ceil(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    return sum((q * x + p - 1) // p for x in range(M))
And an example use:
>>> floor_sum_slow(51, 43, 2**28) # takes several seconds to complete
30377220771239253
>>> ceil_sum_slow(140552068, 161600507, 2**28)
41424305916577422
These sums can be evaluated much faster. The first key observation is that if q >= p, then we can apply the Euclidean "division algorithm" and write q = ap + r for some integers a and r. The sum then simplifies: the ap part contributes a factor of a * M * (M - 1) // 2, and we're reduced from computing floor_sum(p, q, M) to computing floor_sum(p, r, M). Similarly, the computation of ceil_sum(p, q, M) reduces to the computation of ceil_sum(p, q % p, M).
The second key observation is that we can express floor_sum(p, q, M) in terms of ceil_sum(q, p, N), where N is the ceiling of (q/p)M. To do this, we consider the rectangle [0, M) x (0, (q/p)M), and divide that rectangle into two triangles using the line y = (q/p)x. The number of lattice points within the rectangle that lie on or below the line is floor_sum(p, q, M), while the number of lattice points within the rectangle that lie above the line is ceil_sum(q, p, N). Since the total number of lattice points in the rectangle is (N - 1)M, we can deduce the value of floor_sum(p, q, M) from that of ceil_sum(q, p, N), and vice versa.
Combining those two ideas, and working through the details, we end up with a pair of mutually recursive functions that look like this:
def floor_sum(p, q, M):
    """
    Sum of floor(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    a = q // p
    r = q % p
    if r == 0:
        return a * M * (M - 1) // 2
    else:
        N = (M * r + p - 1) // p
        return a * M * (M - 1) // 2 + (N - 1) * M - ceil_sum(r, p, N)

def ceil_sum(p, q, M):
    """
    Sum of ceil(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    a = q // p
    r = q % p
    if r == 0:
        return a * M * (M - 1) // 2
    else:
        N = (M * r + p - 1) // p
        return a * M * (M - 1) // 2 + N * (M - 1) - floor_sum(r, p, N)
Performing the same calculation as before, we get exactly the same results, but this time the result is instant:
>>> floor_sum(51, 43, 2**28)
30377220771239253
>>> ceil_sum(140552068, 161600507, 2**28)
41424305916577422
A bit of experimentation should convince you that the floor_sum and floor_sum_slow functions give the same result in all cases, and similarly for ceil_sum and ceil_sum_slow.
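For instance, a quick randomized harness of my own along those lines passes:

import random

# Compare the fast and slow implementations on a few thousand random inputs.
for _ in range(5000):
    p = random.randrange(1, 50)
    q = random.randrange(0, 100)
    M = random.randrange(0, 60)
    assert floor_sum(p, q, M) == floor_sum_slow(p, q, M), (p, q, M)
    assert ceil_sum(p, q, M) == ceil_sum_slow(p, q, M), (p, q, M)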
Here's a function that uses floor_sum and ceil_sum to give an appropriate mapping for the first quadrant. I failed to resist the temptation to make it a full bijection, enumerating points in the order that they appear on each ray, but you can fix that by simply replacing the + gcd(p, q) term with + 1 in both branches.
from math import gcd
def point_to_line(p, q, M):
    """
    Bijection from [0, M) x [0, M) to [0, M^2), preserving
    the 'angle' ordering.
    """
    if p == q == 0:
        return 0
    elif q <= p:
        return ceil_sum(p, q, M) + gcd(p, q)
    else:
        return M * (M - 1) - floor_sum(q, p, M) + gcd(p, q)
Extending to the whole plane should be straightforward, though just a little bit messy due to the asymmetry between the negative range and the positive range in the two's complement representation.
Here's a visual demonstration for the case M = 7, printed using this code:
M = 7
for q in reversed(range(M)):
    for p in range(M):
        print(" {:02d}".format(point_to_line(p, q, M)), end="")
    print()
Results:
48 42 39 36 32 28 27
47 41 37 33 29 26 21
46 40 35 30 25 20 18
45 38 31 24 19 16 15
44 34 23 17 14 12 11
43 22 13 10 09 08 07
00 01 02 03 04 05 06
This doesn't meet your requirement for an "easy" function, nor for a "reasonably efficient" one. But in principle it would work, and it might give some idea of how difficult the problem is. To keep things simple, let's consider just the case where 0 < y ≤ x, because the full problem can be solved by splitting the full 2D plane into eight octants and mapping each to its own range of integers in essentially the same way.
A point (x1, y1) is "anticlockwise" of (x2, y2) if and only if the slope y1/x1 is greater than the slope y2/x2. To map the slopes to integers in an order-preserving way, we can consider the sequence of all distinct fractions whose numerators and denominators are within range (i.e. up to 2^31), in ascending numerical order. Note that each fraction's numerical value is between 0 and 1, since we are just considering one octant of the plane.
This sequence of fractions is finite, so each fraction has an index at which it occurs in the sequence; so to map a point (x, y) to an integer, first reduce the fraction y/x to its simplest form (e.g. using Euclid's algorithm to find the GCD to divide by), then compute that fraction's index in the sequence.
It turns out this sequence is called a Farey sequence; specifically, it's the Farey sequence of order 2^31. Unfortunately, computing the index of a given fraction in this sequence turns out to be neither easy nor reasonably efficient. According to the paper
Computing Order Statistics in the Farey Sequence by Corina E. Pǎtraşcu and Mihai Pǎtraşcu, there is a somewhat complicated algorithm to compute the rank (i.e. index) of a fraction in O(n) time, where n in your case is 2^31, and there is unlikely to be an algorithm in time polynomial in log n, because such an algorithm could be used to factorise integers.
All of that said, there might be a much easier solution to your problem, because I've started from the assumption of wanting to map these fractions to integers as densely as possible (i.e. no "unused" integers in the target range), whereas in your question you wrote that the number of distinct fractions is about 60% of the available range of size 2^64. Intuitively, that amount of leeway doesn't seem like a lot to me, so I think the problem is probably quite difficult and you may need to settle for a solution that uses a larger output range, or a smaller input range. At the very least, by writing this answer I might save somebody else the effort of investigating whether this approach is feasible.
Just some random ideas / observations:
(edit: added two more and marked the first one as wrong as pointed out in the comments)
Divide into 16 22.5° segments instead of 8 45° segments
If I understand the problem correctly, the lines spread out "more" towards 45°, "wasting" resolution that you need for smaller angles. (Incorrect, see below)
In the mapping to 62 bit integers, there must be gaps. Identify enough low density areas to map down to 61 bits? Perhaps plot for a smaller problem to potentially see a pattern?
As the range for x and y is limited, for a given point (x0, y0), all (legal) x < x0 with y > y0 must have a smaller angle. Could this help to break down the problem in some way? Perhaps cutting a triangle, where points can easily be enumerated, out of the problem for each quadrant?

Correct implementation of weighted K-Nearest Neighbors

From what I understood, the classical KNN algorithm works like this (for discrete data):
Let x be the point you want to classify
Let dist(a,b) be the Euclidean distance between points a and b
Iterate through the training set points pᵢ, taking the distances dist(pᵢ,x)
Classify x as the most frequent class among the K points closest (according to dist) to x.
How would I introduce weights on this classic KNN? I read that more importance should be given to nearer points, and I read this, but couldn't understand how this would apply to discrete data.
For me, first of all, using argmax doesn't make any sense, and if the weight acts by increasing the distance, then it would make the distance worse. Sorry if I'm talking nonsense.
Consider a simple example with three classifications (red, green, blue) and the six nearest neighbors denoted by R, G, B. I'll make this linear to simplify visualization and arithmetic.
R B G x G R R
The points listed with distance are
class dist
R 3
B 2
G 1
G 1
R 2
R 3
Thus, if we're using unweighted nearest neighbours, the simple "voting" algorithm is 3-2-1 in favor of Red. However, with the weighted influences, we have ...
red_total   = 1/3^2 + 1/2^2 + 1/3^2 = 1/4 + 2/9 ~= 0.47
blue_total  = 1/2^2                             =  0.25
green_total = 1/1^2 + 1/1^2                     =  2.00
... and x winds up as Green due to proximity.
That lower-delta function is merely the classification function; in this simple example, it returns red | green | blue. In a more complex example, ... well, I'll leave that to later tutorials.
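A minimal sketch of that inverse-square vote (function and variable names are mine; a real implementation would also special-case a distance of zero):

from collections import defaultdict

def weighted_knn_vote(neighbors):
    """neighbors: list of (class_label, distance) pairs for the k nearest points."""
    totals = defaultdict(float)
    for label, dist in neighbors:
        totals[label] += 1.0 / dist ** 2  # nearer points get larger weight
    return max(totals, key=totals.get)

# The six neighbors from the example above:
print(weighted_knn_vote([("R", 3), ("B", 2), ("G", 1), ("G", 1), ("R", 2), ("R", 3)]))  # G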
Okay, off the bat let me say I am not a fan of the link you provided: it has equations as images, and the notation in the images differs from the notation in the text.
So leaving that aside, let's look at the regular k-NN algorithm. Regular k-NN is actually just a special case of weighted k-NN: you assign a weight of 1 to the k nearest neighbors and 0 to the rest.
Let w_qj denote the weight associated with a point j relative to a point q.
Let y_j be the class label associated with the data point j. For simplicity, let us assume we are classifying birds as either crows, hens or turkeys => discrete classes. So for all j, y_j ∈ {crow, turkey, hen}.
A good weight metric is the inverse of the distance, whatever the distance may be: Euclidean, Mahalanobis, etc.
Given all this, the class label y_q you would associate with the point q you are trying to predict would be the sum of the w_qj · y_j terms divided by the sum of all weights. You do not have to do the division if you normalize the weights first.
You would end up with an equation as follows: somevalue1 · crow + somevalue2 · hen + somevalue3 · turkey.
One of these classes will have a higher somevalue; the class with the highest value is what you will predict for point q.
For the purpose of training, you can factor in the error any way you want. Since the classes are discrete, there are a limited number of simple ways you can adjust the weights to improve accuracy.

How to solve this recurrence?

I have a recurrence like this: f(n) = (2*f(n-1) + 2*f(n-2)) % 10007.
Now for a particular n I need to find:
g(n)=(f(n)f(0)+f(n-1)f(1)+....+f(0)f(n))%10007.
For example if n=3,
g(3)=(f(3)f(0)+f(2)f(1)+f(1)f(2)+f(0)f(3))%10007.
n can be as large as 10^9. I can find the value of f(n) using matrix exponentiation in O(log n) time, but I can't figure out how to get g(n).
(I need this to solve a problem from the Amritapuri 2008 regionals.)
Forget about 10007 for a second.
Let F(x) = sum(f(n)*x^n). Then F(x) = (f(0) + x*(f(1) - 2f(0))) / (1 - 2x - 2x^2).
Let G(x)=sum(g(n)*x^n). Then G(x)=F(x)^2.
Thus the problem is reduced to finding the coefficient of a series (modulo 10007).
BACKGROUND
The original question is about how to tile a 2*n rectangle with 4 types of tiles.
What is unusual is that the tiling must divide into two pieces.
HINT 1
However, you can also consider this as tiling the rectangle with the original 4 tiles coloured red plus another set of 4 tiles coloured blue, such that the final board has a red side and a blue side.
HINT 2
Let f(n) be the number of ways of tiling a 2*n rectangle with just red tiles, and h(n) be the number of ways of tiling a 2*n rectangle with 0 or more columns of red tiles followed by 1 or more columns of blue tiles.
HINT 3
You can now find a simple matrix multiplication that gives the next values of h and f in terms of their two previous values and use the standard matrix power exponentiation to find the final values.
EXAMPLE CODE
Here is a Python demonstration that this formula gives the same answer as the original summation.
def f(n):
    """Number of ways to tile a 2*n board with red tiles"""
    if n < 0: return 0
    if n == 0: return 1
    return 2*f(n-1) + 2*f(n-2)

def g_orig(n):
    """Number of ways to tile a 2*n board in two halves"""
    return sum(f(k)*f(n-k) for k in range(n+1))

def h(n):
    """Number of ways to tile a 2*n board with red tiles and at least one column of blue tiles"""
    if n < 1: return 0
    # Consider placing one column of blue tiles (either a 2*1 or two 1*1)
    t = 2*(f(n-1) + h(n-1))
    # Also consider placing two columns of blue tiles (either a 2*2, or an L-shape and a 1*1)
    t += 2*(f(n-2) + h(n-2))
    return t

def g(n):
    return f(n) + h(n)

for n in range(10):
    print(n, g_orig(n), g(n))
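Following HINT 3, here is one possible matrix-power version (my own sketch, not part of the original answer). The state vector is (f(n), f(n-1), h(n), h(n-1)), and the transition matrix encodes f(n+1) = 2f(n) + 2f(n-1) and h(n+1) = 2(f(n) + h(n)) + 2(f(n-1) + h(n-1)), everything mod 10007:

MOD = 10007

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % MOD
             for j in range(n)] for i in range(n)]

def mat_pow(A, e):
    # Square-and-multiply exponentiation on matrices.
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    while e:
        if e & 1:
            R = mat_mult(R, A)
        A = mat_mult(A, A)
        e >>= 1
    return R

def g_fast(n):
    T = [[2, 2, 0, 0],
         [1, 0, 0, 0],
         [2, 2, 2, 2],
         [0, 0, 1, 0]]
    P = mat_pow(T, n)
    # Initial state (f(0), f(-1), h(0), h(-1)) = (1, 0, 0, 0), so
    # g(n) = f(n) + h(n) is the sum of the first entries of rows 0 and 2.
    return (P[0][0] + P[2][0]) % MOD

A quick check: g_fast(n) agrees with g(n) % 10007 from the demonstration above for small n, and it needs only O(log n) matrix multiplications.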
The trick is that the sequence f(n) mod 10007 has a period of 10007, i.e. f(n) mod 10007 = f(n + 10007) mod 10007. So all you need to do is simply (1) calculate f(0 .. n - 1) mod 10007, (2) calculate f(n - k)f(k) mod 10007 for 0 <= k < 10007, and then (3) sum them up according to your equation. You don't even need the power exponentiation method to calculate f(n).

Efficient way to take determinant of an n! x n! matrix in Maple

I have a large matrix, n! x n!, for which I need to take the determinant. For each permutation of n, I associate
a vector of length 2n (this is easy computationally)
a polynomial in 2n variables (a product of linear factors computed recursively on n)
The matrix is the evaluation matrix for the polynomials at the vectors (thought of as points). So the sigma,tau entry of the matrix (indexed by permutations) is the polynomial for sigma evaluated at the vector for tau.
Example: For n=3, if the ith polynomial is (x1 - 4)(x3 - 5)(x4 - 4)(x6 - 1) and the jth point is (2,2,1,3,5,2), then the (i,j)th entry of the matrix will be (2 - 4)(1 - 5)(3 - 4)(2 - 1) = -8. Here n=3, so the points are in R^(3!) = R^6 and the polynomials have 3!=6 variables.
My goal is to determine whether or not the matrix is nonsingular.
My approach right now is this:
the function point takes a permutation and outputs a vector
the function poly takes a permutation and outputs a polynomial
the function nextPerm gives the next permutation in lexicographic order
The abridged pseudocode version of my code is this:
B := [];
P := [];
w := [1, 2, ..., n];
while w <> NULL do
    B := B append poly(w);
    P := P append point(w);
    w := nextPerm(w);
od;

// BUILD A MATRIX IN MAPLE
M := Matrix(n!, (i,j) -> eval(B[i], P[j]));

// COMPUTE DETERMINANT IN MAPLE
det := LinearAlgebra[Determinant]( M );

// TELL ME IF IT'S NONSINGULAR
if det = 0 then return false;
else return true; fi;
I'm working in Maple using the built in function LinearAlgebra[Determinant], but everything else is a custom built function that uses low level Maple functions (e.g. seq, convert and cat).
My problem is that this takes too long. I can go up to n=7 with patience, but getting to n=8 takes days. Ideally, I want to be able to get to n=10.
Does anyone have an idea for how I could improve the time? I'm open to working in a different language, e.g. Matlab or C, but would prefer to find a way to speed this up within Maple.
I realize this might be hard to answer without all the gory details, but the code for each function, e.g. point and poly, is already optimized, so the real question here is if there is a faster way to take a determinant by building the matrix on the fly, or something like that.
UPDATE: Here are two ideas that I've toyed with that don't work:
I can store the polynomials (since they take a while to compute, I don't want to redo that if I can help it) into a vector of length n!, and compute the points on the fly, and plug these values into the permutation formula for the determinant:
The problem here is that this is O(N!) in the size N of the matrix, so for my case this will be O((n!)!). When n = 10, (n!)! = 3,628,800!, which is way too big to even consider doing.
Compute the determinant using the LU decomposition. Luckily, the main diagonal of my matrix is nonzero, so this is feasible. Since this is O(N^3) in the size of the matrix, that becomes O((n!)^3), which is much closer to doable. The problem, though, is that it requires me to store the whole matrix, which puts serious strain on memory, never mind the run time. So this doesn't work either, at least not without a bit more cleverness. Any ideas?
It isn't clear to me if your problem is space or time; obviously the two trade off against each other. If you only wish to know whether the determinant is zero or not, then you should definitely go with LU decomposition. The reason is that if A = LU with L lower triangular and U upper triangular, then
det(A) = det(L) det(U) = l_11 * ... * l_nn * u_11 * ... * u_nn
so you only need to determine if any of the main diagonal entries of L or U is 0.
To simplify further, use Doolittle's algorithm, where l_ii = 1. If at any point the algorithm breaks down, the matrix is singular so you can stop. Here's the gist:
for k := 1, 2, ..., n do {
    for j := k, k+1, ..., n do {
        u_kj := a_kj - sum_{s=1...k-1} l_ks u_sj;
    }
    for i := k+1, k+2, ..., n do {
        l_ik := (a_ik - sum_{s=1...k-1} l_is u_sk) / u_kk;
    }
}
The key is that you can compute the kth row of U and the kth column of L at the same time, and you only need the previously computed rows and columns to move forward. This way you process in parallel as much as you can and store as little as you need. Since you can compute the entries a_ij as needed, each step only generates one new row of U and one new column of L (two vectors of length n). The algorithm takes O(n^3) time in total. You might be able to find a few more tricks, but that depends on your space/time trade-off.
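Here is a rough sketch of that check (entirely my own names; it assumes a callback entry(i, j) that produces a_ij on demand, and for clarity it keeps the computed parts of L and U in full rather than streaming them):

def is_nonsingular(n, entry, eps=1e-12):
    """Doolittle LU without pivoting; returns False on a (near-)zero pivot."""
    U = [[0.0] * n for _ in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for j in range(k, n):
            U[k][j] = entry(k, j) - sum(L[k][s] * U[s][j] for s in range(k))
        if abs(U[k][k]) < eps:
            return False  # the algorithm broke down: treat as singular
        L[k][k] = 1.0     # Doolittle convention: unit diagonal on L
        for i in range(k + 1, n):
            L[i][k] = (entry(i, k) - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return True

As described above, this relies on the matrix admitting an LU factorization without pivoting; in floating point, the tolerance eps stands in for an exact zero test.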
Not sure if I've followed your problem; is it (or does it reduce to) the following?
You have two vectors of n numbers, call them x and c; then the matrix element is the product over k of (x_k + c_k), with each row/column corresponding to distinct orderings of x and c?
If so, then I believe the matrix will be singular whenever there are repeated values in either x or c, since the matrix will then have repeated rows/columns. Try a bunch of Monte Carlo runs on a smaller n with distinct values of x and c to see if that case is in general non-singular; it's quite likely that if that's true for 6, it'll be true for 10.
As far as brute force goes, your method 1 is a non-starter, and your method 2 will work much more quickly (it should be a few seconds for n=7), though instead of LU you might want to try SVD, which will do a much better job of letting you know how well-behaved your matrix is.

Number of ways to move from point 1 to point 2 in a 2D coordinate plane

I came across a question where it was asked to find the number of unique ways of reaching point 2 from point 1 in a 2D coordinate plane.
Note: it can be assumed without loss of generality that x1 < x2 and y1 < y2.
Moreover, the motions are constrained in the following manner: one can move only right or up, meaning a valid move is from (xa, ya) to (xb, yb) if xa < xb and ya < yb.
Mathematically, this can be found by ((x2-x1) + (y2-y1))! / ((x2-x1)! * (y2-y1)!). I have thought about the code too.
I have an approach coded with dynamic programming: it takes around O([max(x2, y2)]^2) time and Theta(x2 * y2) space, where I can just about manage by storing only the upper or lower triangular matrix.
Can you think of some other approach where the running time is less than this? I am thinking of a recursive solution whose minimum running time is O(max(x2, y2)).
A simple efficient solution is the mathematical one.
Let x2-x1 = n and y2-y1 = m.
You need to take exactly n steps to the right and m steps up; all that is left to determine is their order.
This can be modeled as the number of binary vectors with n+m elements having exactly n elements set to 1.
Thus, the total number of possibilities is choose(n+m, n) = (n+m)! / (n! * m!), which is exactly what you got.
Given that the mathematical answer is both proven and faster to calculate, I see no reason for using a different solution with these restrictions.
If you are eager to use recursion here, the recursive formula for the binomial coefficient will probably be a good fit, as sketched below.
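For instance, a memoized sketch (mine) of that recursion:

from functools import lru_cache

@lru_cache(maxsize=None)
def C(n, k):
    # Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k)
    if k == 0 or k == n:
        return 1
    return C(n - 1, k - 1) + C(n - 1, k)

# Number of paths: C(n + m, n) with n = x2 - x1 and m = y2 - y1.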
EDIT:
You might be looking for the multiplicative formula to calculate it.
To compute the answer, you can use this formula:
(n+m)!/(n!m!)=(n+1)*(n+2)/2*(n+3)/3*…*(n+m)/m
So the pseudo code is:
let foo(n, m) =
    ans = 1;
    for i = 1 to m do
        ans = ans * (n + i) / i;
    done;
    ans
The order of multiplications and divisions is important: if you modify it, you can get an overflow even if the final result is not so large.
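In Python, where integer arithmetic is exact, the pseudocode translates directly; floor division is safe here because each intermediate product ans * (n + i) is divisible by i:

def paths(n, m):
    ans = 1
    for i in range(1, m + 1):
        ans = ans * (n + i) // i  # multiply first, then divide: stays exact
    return ans

print(paths(2, 2))  # 6 = choose(4, 2)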
I finally managed to write an article describing this question in detail, completing the answer as well. Here is the link: http://techieme.in/dynamic-programming-distinct-paths-between-two-points/
try this formula:
ans = (x2-x1) * (y2-y1) + 1;
