Optimizing repeatedly called math function - performance

I am writing some code as part of a nonlinear regression tool and I am trying to figure out an approach for returning the nth partial derivative of a given function in a way that is a good balance of readable and fast. The function (and the analytic representations of the partials) is known ahead of time, so these can be hardcoded. Here's what I have so far (which works):
open System

let getPartials (paramVect: array<float>) idx =
    let a = paramVect.[0]
    let b = paramVect.[1]
    let c = paramVect.[2]
    match idx with
    | 1 -> (fun x -> (1.0+b*c*x)**(-1.0/b)) // df(x)/da
    | 2 -> (fun x -> ((a*(1.0+c*b*x)**(-(b+1.0)/b))*((b*c*x+1.0)*Math.Log(b*c*x+1.0)-b*c*x))/(b*b)) // df(x)/db
    | 3 -> (fun x -> -a*x*(b*c*x+1.0)**(-(b+1.0)/b)) // df/dc
    | _ -> (fun x -> 0.0) // everything else is zero
The way I am using this is to first construct a partial application with the parameter vector, so that I minimize the number of times it needs to be passed in. Then I repeatedly call the resulting function with an index and an x-value to construct a Jacobian. This function gets called an extremely large number of times over the lifecycle of the program.
I am getting pretty acceptable performance with this; however, I suspect it can be improved. Profiling shows that the evaluation of the 2nd function (the long one) is a CPU drain - can this be optimized? I am also unsure whether the anonymous functions create a performance problem, as readable as they are...
I am brand new to F# programming, so please let me know if you spot any egregious problems with either the style/form or the performance!
Thank you
Update: after implementing the changes suggested by JohnPalmer and refactoring so that, instead of returning an anonymous function which accepts the x-value as an argument, it does the whole calculation in place, I am seeing approximately a 300% speed increase. It was more convenient to be able to return the partial functions, but not worth the cost.
let getPartials (paramVect: array<float>) idx x =
    let a = paramVect.[0]
    let b = paramVect.[1]
    let cbx = paramVect.[1] * paramVect.[2] * x
    match idx with
    | 1 -> (1.0+cbx)**(-1.0/b) // df(x)/da
    | 2 -> ((a*(1.0+cbx)**(-(b+1.0)/b))*((cbx+1.0)*Math.Log(cbx+1.0)-cbx))/(b**2.0) // df(x)/db
    | 3 -> -a*x*(cbx+1.0)**(-(b+1.0)/b) // df/dc
    | _ -> 0.0 // everything else is zero

The most likely reason for the anonymous functions to cause a performance problem would be the fact that you create a new heap object each time you call the getPartials function. If you have only a small number of different paramVects then you might get some performance benefit by caching the anonymous functions.
As for the evaluation of the second expression, you might try this (taking John Palmer's suggestion to eliminate the common subexpressions):
fun x ->
    let bcx = b * c * x
    let bcx1 = bcx + 1.0
    a * bcx1 ** (-(b+1.0)/b) * (bcx1 * Math.Log bcx1 - bcx) / (b*b)


How to optimise Haskell code to pass HackerRank's timed-out test cases (Not for any ongoing competition, just me practicing)

I've been learning Haskell for around 4 months now and I have to say, the learning curve is definitely hard (scary also :p).
After solving about 15 easy questions, today I moved to my first medium-difficulty problem on HackerRank: https://www.hackerrank.com/challenges/climbing-the-leaderboard/problem.
There are 10 test cases and I am able to pass 6 of them, but the rest fail with a timeout. The interesting part is that I can already see a few places with potential for a performance increase; for example, I am using nub to remove duplicates from a [Int]. Still, I am not able to build a mental model for algorithmic performance, mainly because I am unsure how the Haskell compiler will change my code and how laziness plays a role here.
import Data.List (nub)

getInputs :: [String] -> [String]
getInputs (_:r:_:p:[]) = [r, p]

findRating :: Int -> Int -> [Int] -> Int
findRating step _ [] = step
findRating step point (x:xs) = if point >= x then step else findRating (step + 1) point xs

solution :: [[Int]] -> [Int]
solution [rankings, points] = map (\p -> findRating 1 p duplicateRemovedRatings) points
  where duplicateRemovedRatings = nub rankings

main :: IO ()
main = interact (unlines . map show . solution . map (map read . words) . getInputs . lines)
Test case in GHCi:
:l "solution"
let i = "7\n100 100 50 40 40 20 10\n4\n5 25 50 120"
(unlines . map show . solution . map (map read . words) . getInputs . lines) i -- "6\n4\n2\n1\n"
Specific questions I have:
1. Will the duplicateRemovedRatings variable be calculated once, or on each iteration of the map function call?
2. In imperative programming languages I could verify the answer to the above using some sort of printing mechanism; is there an equivalent way of doing that in Haskell?
3. According to my current understanding, the complexity of this algorithm would be: nub is O(n^2), findRating is O(n), getInputs is O(1), so solution is O(n^2).
How can I reason about this and build a mental model for performance?
If this violates community guidelines, please comment and I will delete this. Thank you for the help :)
First, to answer your questions:
Yes, duplicateRemovedRatings is computed only once. No repeated computation.
To debug-trace, you can use trace and its friends (see the docs for examples and explanation). Yes, it can be used even in pure, non-IO code. But obviously, don't use it for "normal" output.
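For a concrete check, here is a minimal sketch using Debug.Trace on the solution above; if the message prints only once for the whole run, the where-binding is shared:
import Debug.Trace (trace)
import Data.List (nub)

-- If "computing ratings" appears once for the whole run, the binding is
-- evaluated only once and shared across the map.
solution :: [[Int]] -> [Int]
solution [rankings, points] = map (\p -> findRating 1 p duplicateRemovedRatings) points
  where duplicateRemovedRatings = trace "computing ratings" (nub rankings)

findRating :: Int -> Int -> [Int] -> Int
findRating step _ [] = step
findRating step point (x:xs) =
  if point >= x then step else findRating (step + 1) point xs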
Yes, your understanding of complexity is correct.
Now, how to pass HackerRank's tricky tests.
First, yes, you're right that nub is O(N^2). However, in this particular case you don't have to settle for that. You can use the fact that the rankings come pre-sorted to get a linear version of nub. All you have to do is skip elements while they're equal to the next one:
betterNub (x:y:rest)
  | x == y    = betterNub (y:rest)
  | otherwise = x : betterNub (y:rest)
betterNub xs = xs
This gives you O(N) for betterNub itself, but it's still not good enough for HackerRank, because the overall solution is still O(N*M) - for each game you are iterating over all rankings. No bueno.
But here you can get another improvement by observing that the rankings are sorted, and searching in a sorted list doesn't have to be linear. You can use a binary search instead!
To do this, you'll have to get yourself constant-time indexing, which can be achieved by using Array instead of list.
Here's my implementation (please don't judge harshly; I realize I probably got edge cases overengineered, but hey, it works!):
import Data.Array (listArray, bounds, (!))

findIndex arr p
  | arr ! end' > p = end' + 1
  | otherwise      = go start' end'
  where
    (start', end') = bounds arr
    go start end =
      let mid      = (start + end) `div` 2
          midValue = arr ! mid
      in if midValue == p then mid
         else if mid == start then (if midValue < p then start else end)
         else if midValue < p then go start mid
         else go mid end

solution :: [[Int]] -> [Int]
solution [rankings, points] = map (\p -> findIndex duplicateRemovedRatings p + 1) points
  where duplicateRemovedRatings = toArr $ betterNub rankings
        toArr l = listArray (0, length l - 1) l
With this, you get O(log N) for the search itself, making the overall solution O(M * log N). And this seems to be good enough for HackerRank.
(note that I'm adding 1 to the result of findIndex - this is because the exercise requires 1-based index)
I believe Fyodor's answer is excellent for your first two and a half questions. For the final half, "How can I build a mental model for performance?", I can say that SPJ is an absolute master of writing highly technical papers in a way accessible to the smart but ignorant reader. The implementation book Implementing lazy functional languages on stock hardware is excellent and can serve as the basis of a mental execution model. There is also Okasaki's thesis, Purely functional data structures, which discusses a complementary and significantly higher-level approach to doing asymptotic complexity analyses. (Actually, I read his book, which apparently includes some extra content, so bear that in mind when deciding for yourself about this recommendation.)
Please don't be daunted by their length. I personally found they were actually actively fun to read; and the topic they cover is a big one, not compressible to a short/quick answer.

Can I do two sums in parallel with collection functions, in F#, or generally, optimize the whole thing

I have the following code:
open System.Collections.Generic

// volume queue
let volumeQueue = Queue<float>()
let queueSize = 10 * 500 // 10 events per second, 500 seconds max

// add a signed volume to the queue
let addToVolumeQueue x =
    volumeQueue.Enqueue(x)
    while volumeQueue.Count > queueSize do volumeQueue.TryDequeue() |> ignore

// calculate the direction of the queue, normalized between +1 (buy) and -1 (sell)
let queueDirection length =
    let subQueue =
        volumeQueue
        |> Seq.skip (queueSize - length)
    let boughtVolume =
        subQueue
        |> Seq.filter (fun l -> l > 0.)
        |> Seq.sum
    let totalVolume =
        subQueue
        |> Seq.sumBy (fun l -> abs l)
    2. * boughtVolume / totalVolume - 1.
What this does is run a fixed-length queue to which transaction volumes are added, some positive, some negative.
It then calculates the ratio of positive volume to total volume and normalizes it between +1 and -1, with 0 meaning the two sides are half and half.
There is no optimization right now but this code's performance will matter. So I'd like to make it fast, without compromising readability (it's called roughly every 100ms).
The first thing that comes to mind is to do the two sums at once (the positive numbers and all the numbers) in a single loop. It can easily be done in a for loop, but can it be done with collection functions?
The next option I was thinking about is to get rid of the queue and use a circular buffer, but since the code is run on a part of the buffer (the last 'length' items), I'd have to handle the wrap around part; I guess I could extend the buffer to the size of a power of 2 and get automatic wrap around that way.
Any idea is welcome, but my original question is: can I do the two sums in a single pass with the collection functions? I can't iterate over the queue with an indexer, so I can't use a plain index-based for loop (or I guess I'd have to instantiate an iterator).
First of all, there is nothing inherently wrong with using mutable variables and loops in F#. Especially at a small scale (e.g. inside a function), this can often be quite readable - or at least, easy to understand if there is a suitable comment.
To do this using a single iteration, you could use fold. This basically calculates the two sums in a single iteration at the cost of some readability:
let queueDirectionFold length =
    let boughtVolume, totalVolume =
        volumeQueue
        |> Seq.skip (queueSize - length)
        |> Seq.fold (fun (bv, tv) v ->
            (if v > 0.0 then bv + v else bv), tv + abs v) (0.0, 0.0)
    2. * boughtVolume / totalVolume - 1.
As I mentioned earlier, I would also consider using a loop. The loop itself is quite simple, but some complexity is added by the fact that you need to skip some elements. Still, I think it's quite clear:
let queueDirectionLoop length =
    let mutable i = 0
    let mutable boughtVolume = 0.
    let mutable totalVolume = 0.
    for v in volumeQueue do
        if i >= queueSize - length then
            totalVolume <- totalVolume + abs v
            if v > 0. then boughtVolume <- boughtVolume + v
        i <- i + 1
    2. * boughtVolume / totalVolume - 1.
I tested the performance using 4000 elements and here is what I got:
#time
let rnd = System.Random()
for i in 0 .. 4000 do volumeQueue.Enqueue(rnd.NextDouble())
for i in 0 .. 10000 do ignore(queueDirection 1000) // ~900 ms
for i in 0 .. 10000 do ignore(queueDirectionFold 1000) // ~460 ms
for i in 0 .. 10000 do ignore(queueDirectionLoop 1000) // ~370 ms
Iterating over the queue just once definitely helps with performance. Doing this in an imperative loop helps the performance even more - this may be worth it if you care about performance. The code may be a bit less readable than the original, but I think it's not much worse than fold.

Why does this code grow so large in memory?

If I call crash with any value for increment and 50000000 (fifty million) for spins, it ramps up and keeps growing in memory until it has choked every last bit of memory, and then crashes.
import Data.List (foldl')

crash :: Int -> Int -> Int
crash increment spins = snd $ foldl' spin' (0, 0) [1..spins]
  where spin' = spin increment

spin increment (index, element1) spinNumber = (next, nextElementOne)
  where
    next
      | indexNIncrement >= spinNumber = 1 + (indexNIncrement `rem` spinNumber)
      | otherwise = 1 + indexNIncrement
    indexNIncrement = index + increment
    nextElementOne
      | next == 1 = spinNumber
      | otherwise = element1
I don't see how memory is leaked. Doesn't each call to spin replace the accumulator value? Doesn't it get released?
Basically, foldl' will evaluate the result of the function spin' at every step, bringing it to WHNF (weak head normal form). Concretely, this means that the result will be evaluated until the first constructor.
However, the result of spin' is (next,nextElementOne), which is already in WHNF, since it starts with a pair constructor. What we want is to force the evaluation of the pair components here. One basic solution is to return
spin ... = next `seq` (nextElementOne `seq` (next, nextElementOne))
so that the components will be evaluated before returning the pair.
A more modern approach could be exploiting BangPatterns, instead.
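A sketch of what that BangPatterns variant might look like, reusing the same spin logic as above:
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- The bang patterns force index and element1 whenever the accumulator
-- pair is matched, so no chain of unevaluated thunks builds up.
crash :: Int -> Int -> Int
crash increment spins = snd $ foldl' spin' (0, 0) [1 .. spins]
  where
    spin' (!index, !element1) spinNumber = (next, nextElementOne)
      where
        indexNIncrement = index + increment
        next
          | indexNIncrement >= spinNumber = 1 + (indexNIncrement `rem` spinNumber)
          | otherwise                     = 1 + indexNIncrement
        nextElementOne
          | next == 1 = spinNumber
          | otherwise = element1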

How is this memoized DP table too slow for SPOJ?

SPOILERS: I'm working on http://www.spoj.pl/problems/KNAPSACK/ so don't peek if you don't want a possible solution spoiled for you.
The boilerplate:
import Data.Sequence (index, fromList)
import Data.MemoCombinators (memo2, integral)

main = interact knapsackStr

knapsackStr :: String -> String
knapsackStr str = show $ knapsack items capacity numItems
  where [capacity, numItems] = map read . words $ head ls
        ls = lines str
        items = map (makeItem . words) $ take numItems $ tail ls
Some types and helpers to set the stage:
type Item = (Weight, Value)
type Weight = Int
type Value = Int
weight :: Item -> Weight
weight = fst
value :: Item -> Value
value = snd
makeItem :: [String] -> Item
makeItem [w, v] = (read w, read v)
And the primary function:
knapsack :: [Item] -> Weight -> Int -> Value
knapsack itemsList = go
  where go = memo2 integral integral knapsack'
        items = fromList $ (0,0):itemsList
        knapsack' 0 _ = 0
        knapsack' _ 0 = 0
        knapsack' w i | wi > w    = exclude
                      | otherwise = max exclude include
          where wi = weight item
                vi = value item
                item = items `index` i
                exclude = go w (i-1)
                include = go (w-wi) (i-1) + vi
And this code works; I've tried plugging in the SPOJ sample test case and it produces the correct result. But when I submit this solution to SPOJ (instead of importing Luke Palmer's MemoCombinators, I simply copy and paste the necessary parts into the submitted source), it exceeds the time limit. =/
I don't understand why; I asked earlier about an efficient way to perform 0-1 knapsack, and I'm fairly convinced that this is about as fast as it gets: a memoized function that will only recursively calculate the sub-entries that it absolutely needs in order to produce the correct result. Did I mess up the memoization somehow? Is there a slow point in this code that I am missing? Is SPOJ just biased against Haskell?
I even put {-# OPTIONS_GHC -O2 #-} at the top of the submission, but alas, it didn't help. I have tried a similar solution that uses a 2D array of Sequences, but it was also rejected as too slow.
There's one major problem which really slows this down. It's too polymorphic. Type-specialized versions of functions can be much faster than polymorphic varieties, and for whatever reason GHC isn't inlining this code to the point where it can determine the exact types in use. When I change the definition of integral to:
integral :: Memo Int
integral = wrap id id bits
I get an approximately 5-fold speedup; I think it's fast enough to be accepted on SPOJ.
This is still significantly slower than gorlum0's solution however. I suspect the reason is because he's using arrays and you use a custom trie type. Using a trie will take much more memory and also make lookups slower due to extra indirections, cache misses, etc. You might be able to make up a lot of the difference if you strictify and unbox fields in IntMap, but I'm not sure that's possible. Trying to strictify fields in BitTrie creates runtime crashes for me.
Pure haskell memoizing code can be good, but I don't think it's as fast as doing unsafe things (at least under the hood). You might apply Lennart Augustsson's technique to see if it fares better at memoization.
One thing that slows down Haskell is IO: the String type in Haskell gives full Unicode support, which we don't need for SPOJ. ByteStrings are blazing fast, so you might want to consider using them instead.
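As a rough sketch of what the ByteString-based input could look like (B.readInt replaces the slow read on String; the knapsack call itself is elided here):
import qualified Data.ByteString.Char8 as B

-- Sketch of ByteString-based input parsing for the same problem.
-- The knapsack function from the question would be called where the
-- print currently stands; the print just keeps the sketch runnable.
main :: IO ()
main = do
  ls <- B.lines <$> B.getContents
  let readInts = map (maybe 0 fst . B.readInt) . B.words
      [capacity, numItems] = readInts (head ls)
      items = map readInts (take numItems (tail ls))  -- [[weight, value]] rows
  print (capacity, numItems, length items)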

Representing continuous probability distributions

I have a problem involving a collection of continuous probability distribution functions, most of which are determined empirically (e.g. departure times, transit times). What I need is some way of taking two of these PDFs and doing arithmetic on them. E.g. if I have two values x taken from PDF X, and y taken from PDF Y, I need to get the PDF for (x+y), or any other operation f(x,y).
An analytical solution is not possible, so what I'm looking for is some representation of PDFs that allows such things. An obvious (but computationally expensive) solution is monte-carlo: generate lots of values of x and y, and then just measure f(x, y). But that takes too much CPU time.
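For reference, a minimal sketch of that Monte Carlo baseline, assuming System.Random and sampling straight from the stored samples (sampleFrom and monteCarlo are illustrative names):
import System.Random (randomRIO)

-- Draw one value uniformly from a list of empirical samples.
sampleFrom :: [Double] -> IO Double
sampleFrom xs = do
  i <- randomRIO (0, length xs - 1)
  return (xs !! i)

-- Approximate the distribution of f(x, y) by resampling; the point above
-- is that doing this accurately needs far too many iterations.
monteCarlo :: Int -> (Double -> Double -> Double) -> [Double] -> [Double] -> IO [Double]
monteCarlo n f xs ys =
  sequence [ f <$> sampleFrom xs <*> sampleFrom ys | _ <- [1 .. n] ]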
I did think about representing the PDF as a list of ranges where each range has a roughly equal probability, effectively representing the PDF as the union of a list of uniform distributions. But I can't see how to combine them.
Does anyone have any good solutions to this problem?
Edit: The goal is to create a mini-language (aka Domain Specific Language) for manipulating PDFs. But first I need to sort out the underlying representation and algorithms.
Edit 2: dmckee suggests a histogram implementation. That is what I was getting at with my list of uniform distributions. But I don't see how to combine them to create new distributions. Ultimately I need to find things like P(x < y) in cases where this may be quite small.
Edit 3: I have a bunch of histograms. They are not evenly distributed because I'm generating them from occurrence data, so basically if I have 100 samples and I want ten points in the histogram then I allocate 10 samples to each bar, and make the bars variable width but constant area.
I've figured out that to add PDFs you convolve them, and I've boned up on the maths for that. When you convolve two uniform distributions you get a new distribution with three sections: the wider uniform distribution is still there, but with a triangle stuck on each side, each the width of the narrower one. So if I convolve each element of X and Y I'll get a bunch of these, all overlapping. Now I'm trying to figure out how to sum them all and then get a histogram that is the best approximation to it.
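For reference, the density of that convolution can be written down directly; a sketch, assuming the first bar is the narrower one (p2-p1 <= q2-q1) and using an illustrative name:
-- Density of the convolution of U(p1,p2) and U(q1,q2), assuming
-- p2-p1 <= q2-q1: a trapezoid with ramps of width p and a flat top 1/q.
uniformConv :: (Double, Double) -> (Double, Double) -> Double -> Double
uniformConv (p1, p2) (q1, q2) z
  | z <= p1+q1 || z >= p2+q2 = 0
  | z < p2+q1                = (z - (p1+q1)) / (p * q)   -- rising ramp
  | z <= p1+q2               = 1 / q                     -- flat top
  | otherwise                = ((p2+q2) - z) / (p * q)   -- falling ramp
  where p = p2 - p1
        q = q2 - q1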
I'm beginning to wonder if Monte-Carlo wasn't such a bad idea after all.
Edit 4: This paper discusses convolutions of uniform distributions in some detail. In general you get a "trapezoid" distribution. Since each "column" in the histograms is a uniform distribution, I had hoped that the problem could be solved by convolving these columns and summing the results.
However the result is considerably more complex than the inputs, and also includes triangles. Edit 5: [Wrong stuff removed]. But if these trapezoids are approximated to rectangles with the same area then you get the Right Answer, and reducing the number of rectangles in the result looks pretty straightforward too. This might be the solution I've been trying to find.
Edit 6: Solved! Here is the final Haskell code for this problem:
import Data.Function (on)
import Data.List (groupBy, sortBy)
import Data.Ord (comparing)

-- | Continuous distributions of scalars are represented as a
--   histogram where each bar has approximately constant area but
--   variable width and height. A histogram with N bars is stored as
--   a list of N+1 values.
data Continuous = C {
    cN :: Int,
    -- ^ Number of bars in the histogram.
    cAreas :: [Double],
    -- ^ Areas of the bars. #length cAreas == cN#
    cBars :: [Double]
    -- ^ Boundaries of the bars. #length cBars == cN + 1#
  } deriving (Show, Read)
{- | Add distributions. If two random variables #vX# and #vY# are
taken from distributions #x# and #y# respectively then the
distribution of #(vX + vY)# will be #(x .+. y)#.

This is implemented as the convolution of distributions x and y.
Each is a histogram, which is to say the sum of a collection of
uniform distributions (the "bars"). Therefore the convolution can be
computed as the sum of the convolutions of the cross product of the
components of x and y.

When you convolve two uniform distributions of unequal size you get a
trapezoidal distribution. Let p = p2-p1, q = q2-q1. Then:

> [diagram: a narrow uniform bar on (p1, p2) convolved with a wider
>  uniform bar on (q1, q2) gives a trapezoid running from p1+q1 to
>  p2+q2, whose flat top, of height h = 1/q, runs from p2+q1 to p1+q2]

However we cannot keep the trapezoid in the final result because our
representation is restricted to uniform distributions. So instead we
store a uniform approximation to the trapezoid with the same area:

> [diagram: the trapezoid is replaced by a rectangle of the same height
>  h running from p1+q1+p/2 to p2+q2-p/2]
-}
(.+.) :: Continuous -> Continuous -> Continuous
c .+. d = C {cN = length bars - 1,
             cBars = map fst bars,
             cAreas = zipWith barArea bars (tail bars)}
  where
    -- The convolve function returns a list of two (x, deltaY) pairs.
    -- These can be sorted by x and then sequentially summed to get
    -- the new histogram. The "b" parameter is the product of the
    -- areas of the two input bars, which was omitted from the
    -- diagrams above.
    convolve b c1 c2 d1 d2 =
      if (c2-c1) < (d2-d1) then convolve1 b c1 c2 d1 d2
                           else convolve1 b d1 d2 c1 c2
    convolve1 b p1 p2 q1 q2 = [(p1+q1+halfP, h), (p2+q2-halfP, (-h))]
      where
        halfP = (p2-p1)/2
        h = b / (q2-q1)
    outline = map sumGroup $ groupBy ((==) `on` fst) $ sortBy (comparing fst)
              $ concat
                [ convolve (areaC*areaD) c1 c2 d1 d2
                | (c1, c2, areaC) <- zip3 (cBars c) (tail $ cBars c) (cAreas c)
                , (d1, d2, areaD) <- zip3 (cBars d) (tail $ cBars d) (cAreas d) ]
    sumGroup pairs = (fst $ head pairs, sum $ map snd pairs)
    bars = tail $ scanl (\(_, y) (x2, dy) -> (x2, y + dy)) (0, 0) outline
    barArea (x1, h) (x2, _) = (x2 - x1) * h
Other operators are left as an exercise for the reader.
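For instance, a sketch of one of the simpler missing operators (scaleC is a name introduced here; it assumes a positive scale factor):
-- Hypothetical helper, not part of the code above: scaling a random
-- variable by a constant k > 0 stretches the bar boundaries and leaves
-- the areas (probability masses) untouched.
scaleC :: Double -> Continuous -> Continuous
scaleC k dist = dist { cBars = map (* k) (cBars dist) }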
No need for histograms or symbolic computation: everything can be done at the language level in closed form, if the right point of view is taken.
[I shall use the term "measure" and "distribution" interchangeably. Also, my Haskell is rusty and I ask you to forgive me for being imprecise in this area.]
Probability distributions are really codata.
Let mu be a probability measure. The only thing you can do with a measure is integrate it against a test function (this is one possible mathematical definition of "measure"). Note that this is what you will eventually do: for instance integrating against identity is taking the mean:
mean :: Measure -> Double
mean mu = mu id
another example:
variance :: Measure -> Double
variance mu = (mu $ \x -> x ^ 2) - (mean mu) ^ 2
another example, which computes P(mu < x):
cdf :: Measure -> Double -> Double
cdf mu x = mu $ \z -> if z < x then 1 else 0
This suggests an approach by duality.
The type Measure shall therefore denote the type (Double -> Double) -> Double. This allows you to model results of MC simulation, numerical/symbolic quadrature against a PDF, etc. For instance, the function
empirical :: [Double] -> Measure
empirical (h:t) f = f h + empirical t f
empirical []    _ = 0
returns the integral of f against an empirical measure obtained by e.g. MC sampling. Also
from_pdf :: (Double -> Double) -> Measure
from_pdf rho f = my_favorite_quadrature_method rho f
constructs measures from (regular) densities.
Now, the good news. If mu and nu are two measures, the convolution mu ** nu is given by:
(mu ** nu) f = nu $ \y -> (mu $ \x -> f $ x + y)
So, given two measures, you can integrate against their convolution.
Also, given a random variable X of law mu, the law of a * X is given by:
rescale :: Double -> Measure -> Measure
rescale a mu f = mu $ \x -> f(a * x)
Also, the distribution of phi(X) is given by the image measure phi_* X, in our framework:
apply :: (Double -> Double) -> Measure -> Measure
apply phi mu f = mu $ f . phi
So now you can easily work out an embedded language for measures. There are many more things to do here, particularly with respect to sample spaces other than the real line, dependencies between random variables, and conditioning, but I hope you get the point.
In particular, the pushforward is functorial:
newtype Measure a = Measure { integrate :: (a -> Double) -> Double }

instance Functor Measure where
  fmap phi (Measure mu) = Measure (\f -> mu (f . phi))
It is a monad too (exercise -- hint: this very much looks like the continuation monad. What is return ? What is the analog of call/cc ?).
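If you want to spoil the exercise, one way the instances might look, assuming the record-style newtype Measure a = Measure { integrate :: (a -> Double) -> Double } from above (pure is the Dirac measure at a point, and bind integrates the kernel against mu):
-- Sketch only: Measure as a continuation-like monad.
instance Applicative Measure where
  pure x = Measure (\f -> f x)                              -- Dirac measure at x
  Measure mg <*> Measure mx = Measure (\f -> mg (\g -> mx (f . g)))

instance Monad Measure where
  Measure mu >>= k = Measure (\f -> mu (\x -> integrate (k x) f))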
Also, combined with a differential geometry framework, this can probably be turned into something which computes Bayesian posterior distributions automatically.
At the end of the day, you can write stuff like
m = mean $ apply cos ((from_pdf gauss) ** (empirical data))
to compute the mean of cos(X + Y) where X has pdf gauss and Y has been sampled by a MC method whose results are in data.
Probability distributions form a monad; see eg the work of Claire Jones and also the LICS 1989 paper, but the ideas go back to a 1982 paper by Giry (DOI 10.1007/BFb0092872) and to a 1962 note by Lawvere that I cannot track down (http://permalink.gmane.org/gmane.science.mathematics.categories/6541).
But I don't see the comonad: there's no way to get an "a" out of an "(a->Double)->Double". Perhaps if you make it polymorphic - (a->r)->r for all r? (That's the continuation monad.)
Is there anything that stops you from employing a mini-language for this?
By that I mean, define a language that lets you write f = x + y and evaluates f for you just as written. And similarly for g = x * z, h = y(x), etc. ad nauseam. (The semantics I'm suggesting call for the evaluator to select a random number from each innermost PDF appearing on the RHS at evaluation time, and not to try to understand the composed form of the resulting PDFs. This may not be fast enough...)
Assuming that you understand the precision limits you need, you can represent a PDF fairly simply with a histogram or spline (the former being a degenerate case of the latter). If you need to mix analytically defined PDFs with experimentally determined ones, you'll have to add a type mechanism.
A histogram is just an array, the contents of which represent the incidence in a particular region of the input range. You haven't said if you have a language preference, so I'll assume something C-like. You need to know the bin structure (uniform sizes are easy, but not always best) including the high and low limits and possibly the normalization:
struct histogram_struct {
  int bins;              /* Assumed to be uniform */
  double low;
  double high;
  /* double normalization; */
  /* double *errors; */  /* if using, initialize with enough space,
                          * and store _squared_ errors
                          */
  double contents[];
};
This kind of thing is very common in scientific analysis software, and you might want to use an existing implementation.
I worked on similar problems for my dissertation.
One way to compute approximate convolutions is to take the Fourier transform of the density functions (histograms in this case), multiply them, then take the inverse Fourier transform to get the convolution.
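To make the idea concrete without pulling in an FFT library, here is a sketch of the direct discrete convolution of two densities sampled on the same uniform grid (an FFT-based version would multiply the transforms instead):
-- Direct O(n*m) discrete convolution of two densities sampled on a
-- uniform grid with spacing dx; an FFT-based version computes the same
-- result by multiplying the transforms, which pays off for large inputs.
convolveHist :: Double -> [Double] -> [Double] -> [Double]
convolveHist dx xs ys =
  [ dx * sum [ x * (ys !! j)
             | (i, x) <- zip [0 ..] xs
             , let j = k - i
             , j >= 0
             , j < length ys ]
  | k <- [0 .. length xs + length ys - 2] ]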
Look at Appendix C of my dissertation for formulas for various special cases of operations on probability distributions. You can find the dissertation at: http://riso.sourceforge.net
I wrote Java code to carry out those operations. You can find the code at: https://sourceforge.net/projects/riso
Autonomous mobile robotics deals with similar issue in localization and navigation, in particular the Markov localization and Kalman filter (sensor fusion). See An experimental comparison of localization methods continued for example.
Another approach you could borrow from mobile robots is path planning using potential fields.
A couple of responses:
1) If you have empirically determined PDFs then either you have histograms or you have an approximation to a parametric PDF. A PDF is a continuous function and you don't have infinite data...
2) Let's assume that the variables are independent. Then if you make the PDFs discrete, P(f(x,y)) = f(x,y)p(x,y) = f(x,y)p(x)p(y), summed over all the combinations of x and y such that f(x,y) meets your target.
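A sketch of that discrete summation, assuming independent variables given as (value, probability) lists (probOf and inTarget are illustrative names):
-- Probability that f(x, y) satisfies some predicate, for independent
-- discrete variables given as (value, probability) pairs.
probOf :: (Double -> Double -> Double) -> (Double -> Bool)
       -> [(Double, Double)] -> [(Double, Double)] -> Double
probOf f inTarget px py =
  sum [ p * q | (x, p) <- px, (y, q) <- py, inTarget (f x y) ]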
If you are going to fit the empirical PDFs to standard PDFs, e.g. the normal distribution, then you can use already-determined functions to figure out the sum, etc.
If the variables are not independent, then you have more trouble on your hands and I think you have to use copulas.
I think that defining your own mini-language, etc., is overkill. you can do this with arrays...
Some initial thoughts:
First, Mathematica has a nice facility for doing this with exact distributions.
Second, representation as histograms (ie, empirical PDFs) is problematic since you have to make choices about bin size. That can be avoided by storing a cumulative distribution instead, ie, an empirical CDF. (In fact, you then retain the ability to recreate the full data set of samples that the empirical distribution is based on.)
Here's some ugly Mathematica code to take a list of samples and return an empirical CDF, namely a list of value-probability pairs. Run the output of this through ListPlot to see a plot of the empirical CDF.
empiricalCDF[t_] :=
  Flatten[{{#[[2,1]],#[[1,2]]},#[[2]]}&/@Partition[Prepend[Transpose[{#[[1]],
    Rest[FoldList[Plus,0,#[[2]]]]/Length[t]}&[Transpose[{First[#],Length[#]}&/@
    Split[Sort[t]]]]],{Null,0}],2,1],1]
Finally, here's some information on combining discrete probability distributions:
http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf
I think the histograms or the list of 1/N area regions is a good idea. For the sake of argument, I'll assume that you'll have a fixed N for all distributions.
Use the paper you linked in edit 4 to generate the new distribution. Then, approximate it with a new N-element distribution.
If you don't want N to be fixed, it's even easier. Take each convex polygon (trapezoid or triangle) in the new generated distribution and approximate it with a uniform distribution.
Another suggestion is to use kernel densities. Especially if you use Gaussian kernels, then they can be relatively easy to work with... except that the distributions quickly explode in size without care. Depending on the application, there are additional approximation techniques like importance sampling that can be used.
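A minimal sketch of a Gaussian kernel density estimate, with the bandwidth h chosen by hand (a real implementation would pick it with a bandwidth rule):
-- Gaussian kernel density estimate built from samples; h is the bandwidth.
gaussianKDE :: Double -> [Double] -> (Double -> Double)
gaussianKDE h samples x =
  sum [ kernel ((x - xi) / h) | xi <- samples ]
    / (fromIntegral (length samples) * h)
  where
    kernel u = exp (-u * u / 2) / sqrt (2 * pi)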
If you want some fun, try representing them symbolically like Maple or Mathematica would do. Maple uses directed acyclic graphs, while Mathematica uses a list/lisp-like approach (I believe, but it's been a loooong time since I even thought about this).
Do all your manipulations symbolically, then at the end push through numerical values. (Or just find a way to launch off in a shell and do the computations).
Paul.
