Sampling multivariate uniform in PyMC3

I would like to sample from a custom distribution with a uniform prior using DensityDist. Something in the spirit of:
import theano.tensor as T
from pymc3 import DensityDist, Uniform, Model

with Model() as model:
    lim = 3
    x0 = Uniform('x0', -lim, lim)
    x1 = Uniform('x1', -lim, lim)
    x = T.concatenate([x0, x1])
    # Create custom densities
    star = DensityDist('star', lambda x: star(x[:, 0], x[:, 1]))
where star is a function mapping a 2D Cartesian point to an un-normalized log-likelihood. It is the distribution I want to sample from using Metropolis-Hastings.
I tried a number of variations but none worked. The current code fails with:
ValueError: The index list is longer (size 2) than the number of dimensions of the tensor(namely 0). You are asking for a dimension of the tensor that does not exist! You might need to use dimshuffle to add extra dimension to your tensor.
Any help appreciated!

The indexing into x is wrong: x is only one-dimensional, so indexing along two dimensions can't work.
import theano.tensor as tt
from pymc3 import DensityDist, Uniform, Model

def star(x):
    return -0.5 * tt.exp(-tt.sum(x ** 2))
    # or, if you need the components individually:
    # return -0.5 * tt.exp(-x[0] ** 2 - x[1] ** 2)

with Model() as model:
    lim = 3
    x0 = Uniform('x0', -lim, lim)
    x1 = Uniform('x1', -lim, lim)
    x = tt.stack([x0, x1])
    # Create the custom density
    star = DensityDist('star', star)
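To then draw samples with Metropolis-Hastings, something like the following should work inside the model context (a minimal sketch of my own, assuming the PyMC3 3.x sample/Metropolis API; adjust the number of draws to taste):

from pymc3 import Metropolis, sample

with model:
    step = Metropolis()              # Metropolis-Hastings proposals
    trace = sample(5000, step=step)  # draw 5000 samples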

Related

Sample two random variables uniformly, in region where sum is greater than zero

I am trying to figure out how to sample two random variables uniformly in the region where their sum is greater than zero. I thought a solution might be to sample X ~ U(-1,1) and then sample Y ~ U(-x,1), where x is the current sample for X.
But this resulted in a distribution that looks like this.
This doesn't look uniformly distributed as the density of points at the top left is higher and keeps reducing as we move to the right. Can someone point out where the flaw in my reasoning is and how to possibly fix this?
Thank you
You just need to make sure you adjust the density of x so that points are pulled away from the "top-left" corner appropriately. I'd also suggest generating in [0,1] and then transforming into [-1,1] afterwards.
For example:
import numpy as np

# generate points; the sqrt biases x away from zero to compensate
# for the shrinking range of valid y values
n = 50000
x = np.sqrt(np.random.uniform(size=n))
y = np.random.uniform(1 - x)   # uniform on [1-x, 1)
# transform from [0,1] to [-1,1]
x = x * 2 - 1
y = y * 2 - 1
plotting these gives:
which looks reasonable to me. Note I've colored the [-1,1] square to show where it should fit.
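As a quick sanity check (my addition, not part of the original answer), every transformed point should lie on or above the line x + y = 0, up to floating-point noise:

# all transformed points must satisfy x + y >= 0 (within rounding error)
assert np.all(x + y >= -1e-12)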
Could you please elaborate a bit on how you arrived at the answer?
Well, the main problem consists in finding a fair way to sample the non-uniform distribution of coordinate X.
From elementary geometry, the area of the part of the upper triangle with x < x0 is (1/2)*(x0 + 1)^2. As the total area of this upper triangle is equal to 2, it follows that the cumulative probability P of (X < x0) within the upper triangle is: P = (1/4)*(x0 + 1)^2.
So, inverting the last formula, we have: x0 = 2*sqrt(P) - 1
Now, from the Inverse Transform Sampling theorem, we know that we can generate a fair sampling of X by reinterpreting P as a random variable U0 uniformly distributed between 0 and 1.
In Python, this gives us:
u0 = random.uniform(0.0, 1.0)
x = (2*math.sqrt(u0)) - 1.0
or equivalently:
u0 = random.random()
x = (2 * math.sqrt(u0)) - 1.0
Note that this is essentially the same maths as in the excellent answer by @SamMason. It comes from a general statistical principle, and can just as well be used to prove that a fair sampling of the latitude on a 3D sphere is given by arcsin(2*u - 1).
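For illustration (my addition), that latitude formula in code, using the same inverse-transform idea:

import math
import random

# area-uniform latitude on a sphere via inverse transform sampling
lat = math.asin(2.0 * random.random() - 1.0)   # in [-pi/2, pi/2]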
So now we have x, but we still need y. The underlying 2D density is a uniform one, so for a given x, all possible values of y are equidistributed.
The interval of possible values for y is [-x, 1]. So if U1 is yet another independent random variable uniformly distributed between 0 and 1, y can be drawn from the equation:
y = (1+x) * u1 - x
which in Python is rendered by:
u1 = random.random()
y = (1+x)*u1 - x
Overall, the Python code can be written like this:
import math
import random
import matplotlib.pyplot as plt

def mySampler():
    u0 = random.random()
    u1 = random.random()
    x = 2 * math.sqrt(u0) - 1.0
    y = (1 + x) * u1 - x
    return (x, y)

#--- Main program:
points = (mySampler() for _ in range(10000))  # a generator object
xx, yy = zip(*points)
plt.scatter(xx, yy, s=0.2)
plt.show()
Graphically, the result looks good enough:
Side note: a cheaper, ad hoc solution:
There is always the possibility of sampling uniformly in the whole square, and rejecting the points whose x+y sum happens to be negative. But this is a bit wasteful. We can have a more elegant solution by noting that the “bad” region has the same shape and area as the “good” region.
So if we get a "bad" point, instead of just rejecting it, we can replace it by its symmetric point with respect to the x+y=0 dividing line. This can be done using the following Python code:
def mySampler2():
    x0 = random.uniform(-1.0, 1.0)
    y0 = random.uniform(-1.0, 1.0)
    s = x0 + y0
    if s >= 0:
        return (x0, y0)          # good point
    else:
        return (x0 - s, y0 - s)  # symmetric of bad point
This works fine too, and it is probably the cheapest possible solution in terms of CPU time, as we reject nothing and we don't need to compute a square root.
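As a quick check of this sampler (my addition): the centroid of the triangle with vertices (-1, 1), (1, -1) and (1, 1) is (1/3, 1/3), so the sample means of x and y should both approach 0.333:

import numpy as np

pts = np.array([mySampler2() for _ in range(100000)])
print(pts.mean(axis=0))   # expect approximately [0.333, 0.333]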
Following "Generate random locations within a triangular domain", here is code to sample uniformly in any triangle (Python 3.9.4, Win 10 x64):
import math
import random
import matplotlib.pyplot as plt

def trisample(A, B, C):
    """
    Given three vertices A, B, C,
    sample a point uniformly in the triangle
    """
    r1 = random.random()
    r2 = random.random()
    s1 = math.sqrt(r1)
    x = A[0] * (1.0 - s1) + B[0] * (1.0 - r2) * s1 + C[0] * r2 * s1
    y = A[1] * (1.0 - s1) + B[1] * (1.0 - r2) * s1 + C[1] * r2 * s1
    return (x, y)

random.seed(312345)
A = (1, 0)
B = (1, 1)
C = (0, 1)
points = [trisample(A, B, C) for _ in range(10000)]
xx, yy = zip(*points)
plt.scatter(xx, yy, s=0.2)
plt.show()

SciPy: von Mises distribution on a half circle?

I'm trying to figure out the best way to define a von Mises distribution wrapped on a half-circle (I'm using it to draw directionless lines at different concentrations). I'm currently using SciPy's vonmises.rvs(). Essentially, I want to be able to put in, say, a mean orientation of pi/2 and have the distribution truncated to no more than pi/2 either side.
I could use a truncated normal distribution, but I would lose the wrapping of the von Mises (say, if I want a mean orientation of 0).
I've seen this done in research papers looking at mapping fibre orientations, but I can't figure out how to implement it (in python). I'm a bit stuck on where to start.
If my von Mises is defined as (from numpy's vonmises documentation):
np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa))
with:
mu, kappa = 0, 4.0
x = np.linspace(-np.pi, np.pi, num=51)
How would I alter it to use a wrap around a half-circle instead?
Could anyone with some experience with this offer some guidance?
It is useful to have direct numerical inverse CDF sampling; it should work well for a distribution with a bounded domain. Here is a code sample that builds PDF and CDF tables and samples via the inverse CDF method. It could be optimized and vectorized, of course.
Code, Python 3.8, x64 Windows 10
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

def PDF(x, μ, κ):
    return np.exp(κ * np.cos(x - μ))

N = 201

μ = np.pi/2.0
κ = 4.0

xlo = μ - np.pi/2.0
xhi = μ + np.pi/2.0

# PDF normalization
I = integrate.quad(lambda x: PDF(x, μ, κ), xlo, xhi)
print(I)
I = I[0]

x = np.linspace(xlo, xhi, N, dtype=np.float64)
step = (xhi - xlo)/(N - 1)

p = PDF(x, μ, κ)/I  # PDF table

# making CDF table
c = np.zeros(N, dtype=np.float64)

for k in range(1, N):
    c[k] = integrate.quad(lambda x: PDF(x, μ, κ), xlo, x[k])[0] / I

c[N-1] = 1.0  # so random() in [0...1) range would work right

#%%
# sampling from tabular CDF via inverse CDF method
def InvCDFsample(c, x, gen):
    r = gen.random()
    i = np.searchsorted(c, r, side='right')
    q = (r - c[i-1]) / (c[i] - c[i-1])
    return (1.0 - q) * x[i-1] + q * x[i]

# sampling test
RNG = np.random.default_rng()
s = np.empty(20000)
for k in range(0, len(s)):
    s[k] = InvCDFsample(c, x, RNG)

# plotting PDF, CDF and sampling density
plt.plot(x, p, 'b^')  # PDF
plt.plot(x, c, 'r.')  # CDF
n, bins, patches = plt.hist(s, x, density=True, color='green', alpha=0.7)
plt.show()
And the resulting graph with PDF, CDF, and sampling histogram:
You could discard the values outside the desired range via numpy's filtering (theta = theta[(theta >= 0) & (theta <= np.pi)], shortening the array of samples). So you could first increase the number of generated samples, then filter, and then take a subarray of the desired size.
Or you could add/subtract pi to put them all into that range (via theta = np.where(theta < 0, theta + np.pi, np.where(theta > np.pi, theta - np.pi, theta))). But, as noted by @SeverinPappadeux, this changes the distribution and is probably not desired.
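Here is a minimal sketch of the oversample-then-filter approach described above (my addition; the factor of 2 is a guess at a sufficient oversampling rate for kappa = 4, and in the worst case you would need to loop until enough samples survive):

import numpy as np
from scipy.stats import vonmises

mu, kappa, n = np.pi / 2, 4, 10000
theta = vonmises.rvs(kappa, loc=mu, size=2 * n)  # oversample
theta = theta[(theta >= 0) & (theta <= np.pi)]   # keep the half-circle
theta = theta[:n]                                # truncate to desired size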
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import numpy as np
from scipy.stats import vonmises

mu = np.pi / 2
kappa = 4
orig_theta = vonmises.rvs(kappa, loc=mu, size=10000)
fig, axes = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(12, 4))
for ax in axes:
    theta = orig_theta.copy()
    if ax == axes[0]:
        ax.set_title(f"$Von Mises, \\mu={mu:.2f}, \\kappa={kappa}$")
    else:
        theta = theta[(theta >= 0) & (theta <= np.pi)]
        print(len(theta))
        ax.set_title(f"$Von Mises, angles\\ filtered\\ ({100 * len(theta) / len(orig_theta):.2f}\\ \\%)$")
    segs = np.zeros((len(theta), 2, 2))
    segs[:, 1, 0] = np.cos(theta)
    segs[:, 1, 1] = np.sin(theta)
    line_segments = LineCollection(segs, linewidths=.1, colors='blue', alpha=0.5)
    ax.add_collection(line_segments)
    ax.autoscale()
    ax.set_aspect('equal')
plt.show()

Optimal parameters not found: Number of calls to function has reached maxfev = 100

I'm new to Python. I'm trying to fit my data, but when I get the graph, only the original data appears, along with the message "Optimal parameters not found: Number of calls to function has reached maxfev = 1000." Could you help me find my mistake?
%matplotlib inline
import matplotlib.pylab as m
from scipy.optimize import curve_fit
import numpy as num
import scipy.optimize as optimize

xData = num.array([0, 0, 100, 200, 250, 300, 400], dtype="float")
yData = num.array([0, 0, 0, 0, 75, 100, 100], dtype="float")
m.plot(xData, yData, 'ro', label='Original data')

def fun(x, a, b):
    return a + b * num.log(x)

popt, pcov = optimize.curve_fit(fun, xData, yData, p0=[1, 1], maxfev=1000)
print(popt)
x = num.linspace(1, 400, 7)
m.plot(x, fun(x, *popt), label='Fitted function')
m.xlabel('concentration')
m.ylabel('% mortality')
m.legend()
m.grid()
The model in your code is "a + b * num.log(x)". Because your data contains x values of 0.0, the evaluation of log(0.0) gives errors and will not allow the fitting software to function. Sometimes these x values of 0.0 can be replaced with very small numbers, as log(small number) will not fail - but in this case the equation and data do not appear to match, so that technique alone would not be sufficient here.
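To see the failure concretely (my added illustration), evaluating the log at the zero x values produces -inf, which the fitter cannot recover from:

import numpy as np

with np.errstate(divide='ignore'):
    print(np.log(np.array([0.0, 100.0])))   # [-inf  4.60517019]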
My thought is that a different equation would be a better model for this data. I performed an equation search using your data, and found that several different sigmoidal type equations gave suspiciously good fits to this data set - which is not surprising because of the small number of data points.
The sigmoidal equations I tried were all extremely sensitive to the initial parameter estimates. Here is a graphical Python fitter using scipy's Differential Evolution genetic algorithm module to determine the initial parameter estimates for curve_fit's non-linear solver. That scipy module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, requiring bounds within which to search. Here those bounds are taken from the data maximum and minimum values.
I personally would not use this fit, precisely because the small number of data points is giving such suspiciously good fits, and I strongly recommend taking additional data points if at all possible. I could, however, not find any equations with fewer than three parameters that would fit the data.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings

xData = numpy.array([0, 0, 100, 200, 250, 300, 400], dtype="float")
yData = numpy.array([0, 0, 0, 0, 75, 100, 100], dtype="float")

def func(x, a, b, c):  # Sigmoid B equation from zunzun.com
    return a / (1.0 + numpy.exp(-1.0 * (x - b) / c))

# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore")  # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)

def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    parameterBounds = []
    parameterBounds.append([minX, maxX])  # search bounds for a
    parameterBounds.append([minX, maxX])  # search bounds for b
    parameterBounds.append([0.0, 2.0])    # search bounds for c
    # "seed" the random number generator for repeatable results
    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# by default, differential_evolution polishes its best result with
# scipy.optimize.minimize (L-BFGS-B) within the parameter bounds
geneticParameters = generate_Initial_Parameters()

# now call curve_fit without passing bounds from the genetic algorithm,
# just in case the best fit parameters are outside those bounds
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Fitted parameters:', fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters)

absError = modelPredictions - yData
SE = numpy.square(absError)  # squared errors
MSE = numpy.mean(SE)         # mean squared errors
RMSE = numpy.sqrt(MSE)       # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()

##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)
    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')
    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData), 100)
    yModel = func(xModel, *fittedParameters)
    # now the model as a line plot
    axes.plot(xModel, yModel)
    axes.set_xlabel('X Data')  # X axis data label
    axes.set_ylabel('Y Data')  # Y axis data label
    plt.show()
    plt.close('all')  # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)

How do I perform a curve fit with an array of points and touching a specific point in that array

I need help with curve fitting a given set of points. The points form a parabola and I need to find its peak point. The issue is that when I do a curve fit, the fitted curve sometimes doesn't touch the max y-coordinate, even though the actual point is given in the input array.
Following is the code snippet. Here 1.88 is the actual peak y-coordinate (13.05,1.88). But the graph generated by the code does not touch the point due to curve fitting. So is there a way to fit the curve making sure that it touches the max point given in the input array?
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit, minimize_scalar

fig = plt.gcf()
#fig.set_size_inches(18.5, 10.5)

x = [4.59, 9.02, 13.05, 18.47, 20.3]
y = [1.7, 1.84, 1.88, 1.7, 1.64]

def f(x, p1, p2, p3):
    return p3 * (p1 / ((x - p2)**2 + (p1/2)**2))

plt.plot(x, y, "ro")
popt, pcov = curve_fit(f, x, y)

# find the peak
fm = lambda x: -f(x, *popt)
r = minimize_scalar(fm, bounds=(1, 5))
print("maximum:", r["x"], f(r["x"], *popt))  # maximum: 2.99846874275 18.3928199902

plt.text(1, 1.9, 'maximum ' + str(round(r["x"], 2)) + '( #' + str(round(f(r["x"], *popt), 2)) + ' )')
x_curve = np.linspace(min(x), max(x), 50)
plt.plot(x_curve, f(x_curve, *popt))
plt.plot(r['x'], f(r['x'], *popt), 'ko')
plt.show()
Here is a graphical code example using your equation with weighted fitting, where I have made the max point larger to more easily see the effect of the weighting. In non-weighted curve fitting, all weights are implicitly 1.0, as all data points have equal weight. Scipy's curve_fit routine takes weights in the form of uncertainties (sigma), so giving a point a very small uncertainty (which I have done) is like giving it a very large weight. This technique can be used to make a fit pass arbitrarily close to any single data point with any software that can perform weighted fitting.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

x = [4.59, 9.02, 13.05, 18.47, 20.3]
y = [1.7, 1.84, 2.0, 1.7, 1.64]

# note the single very small uncertainty - try making this value 1.0
uncertainties = numpy.array([1.0, 1.0, 1.0E-6, 1.0, 1.0])

# rename data to use previous example
xData = numpy.array(x)
yData = numpy.array(y)

def func(x, p1, p2, p3):
    return p3 * (p1 / ((x - p2)**2 + (p1/2)**2))

# these are the same as the scipy defaults
initialParameters = numpy.array([1.0, 1.0, 1.0])

# curve fit the test data, first without uncertainties to
# get us closer to initial starting parameters
ssqParameters, pcov = curve_fit(func, xData, yData, p0=initialParameters)

# now that we have better starting parameters, use uncertainties
fittedParameters, pcov = curve_fit(func, xData, yData, p0=ssqParameters, sigma=uncertainties, absolute_sigma=True)

modelPredictions = func(xData, *fittedParameters)

absError = modelPredictions - yData
SE = numpy.square(absError)  # squared errors
MSE = numpy.mean(SE)         # mean squared errors
RMSE = numpy.sqrt(MSE)       # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print('Parameters:', fittedParameters)
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()

##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)
    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')
    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)
    # now the model as a line plot
    axes.plot(xModel, yModel)
    axes.set_xlabel('X Data')  # X axis data label
    axes.set_ylabel('Y Data')  # Y axis data label
    plt.show()
    plt.close('all')  # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)

Uniform sampling of a triangle by dividing it into smaller parts?

To sample a triangle ABC uniformly, I can use the following formula:
P = (1 - sqrt(r1)) * A + (sqrt(r1)*(1 - r2)) * B + (r2*sqrt(r1)) * C
where r1 and r2 are random numbers between 0 and 1. The more samples you take, the better. But what if I want to get a better distribution while keeping the number of samples low?
For example if I had a square, I can implicitly divide it into an N x N grid and generate a random sample inside the smaller grid squares. Like this:
float u = (x + rnd(seed)) / width;
float v = (y + rnd(seed)) / height;
The point is I force the sampling to cover the entire grid at a lower sample resolution.
How can I achieve this with a triangle? The only way I can think of is to explicitly subdivide it into a number of triangles using a library like Triangle. But is there a way to do this implicitly like with a square, without having to actually divide the triangle?
OK, I had some thoughts, and I believe using quasirandom numbers could improve the "uniformity" of the points-in-the-triangle coverage without subdividing into smaller triangles. Quasirandom sampling using Sobol sequences provides much better coverage, as seen in the Wikipedia article.
Here are 200 points in the triangle using a standard RNG (whatever it is in Python):
And here is a picture with 200 points sampled from a 2D Sobol sequence:
Looks a lot better to me. Python code to play with:
import sys
import math
import random
import matplotlib.pyplot as plt
import sobol_seq

def trisample(A, B, C, r1, r2):
    s1 = math.sqrt(r1)
    x = A[0] * (1.0 - s1) + B[0] * (1.0 - r2) * s1 + C[0] * r2 * s1
    y = A[1] * (1.0 - s1) + B[1] * (1.0 - r2) * s1 + C[1] * r2 * s1
    return (x, y)

if __name__ == "__main__":
    N = 200
    A = (0.0, 0.0)
    B = (1.0, 0.0)
    C = (0.5, 1.0)

    seed = 1
    xx = list()
    yy = list()
    random.seed(312345)
    for k in range(0, N):
        pts, seed = sobol_seq.i4_sobol(2, seed)
        r1 = pts[0]
        r2 = pts[1]
        # uncomment if you want standard rng
        #r1 = random.random()
        #r2 = random.random()
        pt = trisample(A, B, C, r1, r2)
        xx.append(pt[0])
        yy.append(pt[1])

    plt.scatter(xx, yy)
    plt.show()

    sys.exit(0)
I'd suggest using Poisson disk sampling (short academic paper link,
pretty visualization link, wiki link, code link) to generate a configuration within the bounding box of your triangle and then cropping to the area bounded by the triangle.
I suggest starting with the short academic paper. The principle at work here is pretty easy to understand. There are many variations of this idea floating around out there, so get a handle on it and find the one that works for you.
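To illustrate the cropping step (a hedged sketch of my own, not taken from the linked references): given candidate points in the triangle's bounding box, a standard barycentric test keeps only the ones inside the triangle. Plain uniform points stand in here for a Poisson disk set; in_triangle is a helper introduced for illustration.

import numpy as np

def in_triangle(p, a, b, c):
    # barycentric point-in-triangle test
    v0, v1, v2 = c - a, b - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return v >= 0 and w >= 0 and v + w <= 1

# crop bounding-box candidates to the triangle
A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 1.0])
candidates = np.random.uniform([0, 0], [1, 1], size=(1000, 2))
inside = [p for p in candidates if in_triangle(p, A, B, C)]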
