Dirichlet process in PyMC 3

I would like to implement the Dirichlet process example referenced in
Implementing Dirichlet processes for Bayesian semi-parametric models (source: here) in PyMC 3.
In the example, the stick-breaking probabilities are computed using the pymc.deterministic
decorator:
v = pymc.Beta('v', alpha=1, beta=alpha, size=N_dp)

@pymc.deterministic
def p(v=v):
    """ Calculate Dirichlet probabilities """
    # Probabilities from betas
    value = [u*np.prod(1-v[:i]) for i,u in enumerate(v)]
    # Enforce sum to unity constraint
    value[-1] = 1-sum(value[:-1])
    return value

z = pymc.Categorical('z', p, size=len(set(counties)))
How would you implement this in PyMC 3, which uses Theano for the gradient computation?
Edit:
I tried the following solution using the theano.scan method:
with pm.Model() as mod:
    conc = Uniform('concentration', lower=0.5, upper=10)
    v = Beta('v', alpha=1, beta=conc, shape=n_dp)
    p, updates = theano.scan(fn=lambda stick, idx: stick * t.prod(1 - v[:idx]),
                             outputs_info=None,
                             sequences=[v, t.arange(n_dp)])
    t.set_subtensor(p[-1], 1 - t.sum(p[:-1]))
    category = Categorical('category', p, shape=n_algs)
    sd = Uniform('precs', lower=0, upper=20, shape=n_dp)
    means = Normal('means', mu=0, sd=100, shape=n_dp)
    points = Normal('obs',
                    means[category],
                    sd=sd[category],
                    observed=data)
    step1 = pm.Slice([conc, v, sd, means])
    step3 = pm.ElemwiseCategoricalStep(var=category, values=range(n_dp))
    trace = pm.sample(2000, step=[step1, step3], progressbar=True)
Sadly, this is really slow and does not recover the original parameters of the synthetic data.
Is there a better solution and is this even correct?

Not sure I have a good answer, but perhaps this could be sped up by instead using a Theano blackbox op, which allows you to write a distribution (or deterministic) in Python code. E.g.: https://github.com/pymc-devs/pymc3/blob/master/pymc3/examples/disaster_model_arbitrary_deterministic.py
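For what it's worth, a vectorized alternative that avoids theano.scan entirely is to express the stick-breaking weights with a cumulative product and, if a mixture of normals is the goal, to marginalize the categorical labels with a mixture likelihood. A minimal sketch along those lines (the n_dp truncation and the data placeholder follow the question; the NormalMixture likelihood is an assumption about the intended model):
import numpy as np
import pymc3 as pm
import theano.tensor as tt

n_dp = 20                      # truncation level for the stick-breaking
data = np.random.randn(100)    # placeholder for the synthetic data

def stick_breaking(v):
    # w_k = v_k * prod_{j<k} (1 - v_j); cumprod replaces the scan loop
    remaining = tt.concatenate([[1.0], tt.extra_ops.cumprod(1 - v)[:-1]])
    return v * remaining

with pm.Model() as model:
    conc = pm.Uniform('concentration', lower=0.5, upper=10)
    v = pm.Beta('v', alpha=1, beta=conc, shape=n_dp)
    w = pm.Deterministic('w', stick_breaking(v))   # sums to ~1 for large n_dp
    means = pm.Normal('means', mu=0, sd=100, shape=n_dp)
    sd = pm.Uniform('sd', lower=0, upper=20, shape=n_dp)
    # marginalizing the component labels avoids the slow categorical sampler
    obs = pm.NormalMixture('obs', w=w, mu=means, sd=sd, observed=data)
    trace = pm.sample(2000)
Because the discrete assignments are summed out, NUTS can handle all remaining parameters, which is usually much faster than alternating Slice and categorical steps.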

Related

Discrete path tracking with python gekko

I have some discrete data points representing a path, and I want to minimize the distance between an object's trajectory and these path points, along with some other constraints. I'm trying out gekko as a tool to solve this problem, and for that I made a simple test problem with data points from a parabola and a constraint on the path. My attempt to solve it is:
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
import time
#path data points
x_ref = np.linspace(0, 4, num=21)
y_ref = - np.square(x_ref) + 16
#constraint for visualization purposes
x_bound = np.linspace(0, 4, num=10)
y_bound = 1.5*x_bound + 4
def distfunc(x,y,xref,yref,p):
    '''
    Shortest distance from (x,y) to (xref, yref)
    '''
    dtemp = []
    for i in range(len(xref)):
        d = (x-xref[i])**2+(y-yref[i])**2
        dtemp.append(d)
    min_id = dtemp.index(min(dtemp))
    if min_id == 0:
        next_id = min_id+1
    elif min_id == len(xref)-1:
        next_id = min_id-1
    else:
        d2 = (x-xref[min_id-1])**2+(y-yref[min_id-1])**2
        d1 = (x-xref[min_id+1])**2+(y-yref[min_id+1])**2
        d_next = [d2, d1]
        next_id = min_id + 2*d_next.index(min(d_next)) - 1
    n1 = xref[next_id] - xref[min_id]
    n2 = yref[next_id] - yref[min_id]
    nnorm = p.sqrt(n1**2+n2**2)
    n1 = n1 / nnorm
    n2 = n2 / nnorm
    difx = x-xref[min_id]
    dify = y-yref[min_id]
    dot = difx*n1 + dify*n2
    deltax = difx - dot*n1
    deltay = dify - dot*n2
    return deltax**2+deltay**2
v_ref = 3
now = time.time()
p = GEKKO(remote=False)
p.time = np.linspace(0,10,21)
x = p.Var(value=0)
y = p.Var(value=16)
vx = p.Var(value=1)
vy = p.Var(value=0)
ax = p.Var(value=0)
ay = p.Var(value=0)
p.options.IMODE = 6
p.options.SOLVER = 3
p.options.WEB = 0
x_refg = p.Param(value=x_ref)
y_refg = p.Param(value=y_ref)
v_ref = p.Const(value=v_ref)
p.Obj(distfunc(x,y,x_refg,y_refg,p))
p.Obj( (p.sqrt(vx**2+vy**2) - v_ref)**2 + ax**2 + ay**2)
p.Equation(x.dt()==vx)
p.Equation(y.dt()==vy)
p.Equation(vx.dt()==ax)
p.Equation(vy.dt()==ay)
p.Equation(y>=1.5*x+4)
p.solve(disp=False, debug=True)
print(f'run time: {time.time()-now}')
plt.plot(x_ref, y_ref)
plt.plot(x_bound, y_bound)
plt.plot(x.value, y.value)
plt.show()
This is the result that I get. As you can see, it's not exactly the solution that one should expect. For reference to a solution that you may expect, here is what I get using the cost function below:
p.Obj((x-x_refg)**2 + (y-y_refg)**2 + ax**2 + ay**2)
However, since what I actually want is the shortest distance to the path described by these points, I expect distfunc to be closer to what I want, because the shortest distance is most likely to some interpolated point. So my question is twofold:
Is this the correct gekko expression/formulation for the objective function?
My other goal is solution speed, so is there a more efficient way of expressing this problem for gekko?
You can't define an objective function that changes based on conditions unless you insert logical conditions that are continuously differentiable such as with the if2 or if3 function. Gekko evaluates the symbolic model once and then passes that off to an executable for solution. It only calls the Python model build once because it is compiling the model to efficient byte-code for execution. You can see the model that you created with p.open_folder(). The model file ends in the apm extension: gk_model0.apm.
Model
Constants
i0 = 3
End Constants
Parameters
p1
p2
p3
p4
End Parameters
Variables
v1 = 0
v2 = 16
v3 = 1
v4 = 0
v5 = 0
v6 = 0
End Variables
Equations
v3=$v1
v4=$v2
v5=$v3
v6=$v4
v2>=(((1.5)*(v1))+4)
minimize (((((v1-0.0)-((((((v1-0.0))*((0.2/sqrt(0.04159999999999994))))+(((v2-16.0))&
*((-0.03999999999999915/sqrt(0.04159999999999994))))))*&
((0.2/sqrt(0.04159999999999994))))))^(2))+((((v2-16.0)&
-((((((v1-0.0))*((0.2/sqrt(0.04159999999999994))))+(((v2-16.0))&
*((-0.03999999999999915/sqrt(0.04159999999999994))))))&
*((-0.03999999999999915/sqrt(0.04159999999999994))))))^(2)))
minimize (((((sqrt((((v3)^(2))+((v4)^(2))))-i0))^(2))+((v5)^(2)))+((v6)^(2)))
End Equations
End Model
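A minimal sketch of the if2/if3 switching mentioned above (a toy static problem, not the path-tracking model; the bounds and expressions are made up):
from gekko import GEKKO

m = GEKKO(remote=False)
x = m.Var(value=3, lb=0, ub=5)
# if3 introduces a binary switching variable, so the mixed-integer
# APOPT solver is used for this small example
m.options.SOLVER = 1
# y = x**2 when (x - 2) < 0, and y = 10 - x otherwise
y = m.if3(x - 2, x**2, 10 - x)
m.Obj(y)
m.solve(disp=False)
print(x.value[0], y.value[0])
Unlike a plain Python if statement, the switch is part of the compiled model, so it is re-evaluated by the solver rather than only once at build time.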
One strategy is to split your problem into multiple optimization problems that are all minimal time problems where you navigate to the first way-point and then re-initialize the problem to navigate to the second way-point, and so on. If you want to preserve momentum and anticipate the turning then you'll need to use more advanced methods such as shown in the Pigeon / Eagle tracking problem (see source files) or similar to a trajectory optimization with UAVs or HALE UAVs (see references below).
Martin, R.A., Gates, N., Ning, A., Hedengren, J.D., Dynamic Optimization of High-Altitude Solar Aircraft Trajectories Under Station-Keeping Constraints, Journal of Guidance, Control, and Dynamics, 2018, doi: 10.2514/1.G003737.
Gates, N.S., Moore, K.R., Ning, A., Hedengren, J.D., Combined Trajectory, Propulsion and Battery Mass Optimization for Solar-Regenerative High-Altitude Long Endurance Unmanned Aircraft, AIAA Science and Technology Forum (SciTech), 2019.
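As a rough illustration of the way-point strategy, here is a hedged sketch of a single minimal-time segment toward one way-point (the dynamics, bounds, and target are illustrative, not the asker's model); each segment would be solved like this and its end state used as the initial condition of the next:
from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
m.time = np.linspace(0, 1, 21)            # scaled time on [0, 1]

tf = m.FV(value=1.0, lb=0.1, ub=20.0)     # segment duration to minimize
tf.STATUS = 1

x = m.Var(value=0)
v = m.Var(value=0)
a = m.MV(value=0, lb=-1, ub=1)            # acceleration as the control
a.STATUS = 1

# dynamics scaled by tf so the horizon length is a decision variable
m.Equation(x.dt() == tf * v)
m.Equation(v.dt() == tf * a)

# reach the way-point x = 5 with zero velocity at the end of the segment
end = np.zeros(21); end[-1] = 1
final = m.Param(value=end)
m.Equation(final * (x - 5) == 0)
m.Equation(final * v == 0)

m.Obj(tf)
m.options.IMODE = 6
m.solve(disp=False)
print('segment time:', tf.value[0])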

Python: how to write this code to run on GPU?

I have been trying for quite some time to get my code to run on a GPU, but with little success. I would really appreciate someone helping with the implementation.
Let me say a few words about the problem. I have a graph G with N nodes and a distribution mx on each node x. I would like to compute the distance between the distributions for every pair of nodes connected by an edge. For a given pair (x,y), I use the code ot.sinkhorn(mx, my, dNxNy) from the python POT package to compute the distance. Again, mx, my are vectors of size Nx and Ny on nodes x and y, and dNxNy is an Nx x Ny distance matrix.
Now, I discovered that there is a GPU implementation of this code, ot.gpu.sinkhorn(mx, my, dNxNy). However, this is not good enough, because mx, my and dNxNy would need to be uploaded to the GPU at every iteration, which is a massive overhead. So, the idea is to parallelise this for all edges on the GPU.
The essence of the code is as follows (mx_all holds all the distributions):
for i,e in enumerate(G.edges):
    W[i] = W_comp(mx_all,dist,e)

def W_comp(mx_all, dist, e):
    i = e[0]
    j = e[1]
    Nx = np.array(mx_all[i][1]).flatten()
    Ny = np.array(mx_all[j][1]).flatten()
    mx = np.array(mx_all[i][0]).flatten()
    my = np.array(mx_all[j][0]).flatten()
    dNxNy = dist[Nx,:][:,Ny].copy(order='C')
    W = ot.sinkhorn2(mx, my, dNxNy, 1)
Below is a minimal working example. Please ignore everything except the part between dashed === signs.
import ot
import numpy as np
import scipy as sc
def main():
    import networkx as nx
    # some example graph
    G = nx.planted_partition_graph(4, 20, 0.6, 0.3, seed=2)
    L = nx.normalized_laplacian_matrix(G)
    # this just computes all distributions (IGNORE)
    mx_all = []
    for i in G.nodes:
        mx_all.append(mx_comp(L,1,1,i))
    # some random distance matrix (IGNORE)
    dist = np.random.randint(5,size=(nx.number_of_nodes(G),nx.number_of_nodes(G)))
    # =========================================================================
    # this is what needs to be parallelised on GPU
    W = np.zeros(nx.Graph.size(G))
    for i,e in enumerate(G.edges):
        print(i)
        W[i] = W_comp(mx_all,dist,e)
    return W

def W_comp(mx_all, dist, e):
    i = e[0]
    j = e[1]
    Nx = np.array(mx_all[i][1]).flatten()
    Ny = np.array(mx_all[j][1]).flatten()
    mx = np.array(mx_all[i][0]).flatten()
    my = np.array(mx_all[j][0]).flatten()
    dNxNy = dist[Nx,:][:,Ny].copy(order='C')
    return ot.sinkhorn2(mx, my, dNxNy, 1)
# =============================================================================

# some other functions (IGNORE)
def delta(i, n):
    p0 = np.zeros(n)
    p0[i] = 1.
    return p0

# all neighbourhood densities
def mx_comp(L, t, cutoff, i):
    N = np.shape(L)[0]
    mx_all = sc.sparse.linalg.expm_multiply(-t*L, delta(i, N))
    Nx_all = np.argwhere(mx_all > (1-cutoff)*np.max(mx_all))
    return mx_all, Nx_all

if __name__ == "__main__":
    main()
Thank you!!
There are some packages that allow you to run code on your GPU.
You can use one of the following packages:
pyCuda
numba(Pro)
Theano
If you want to use numba, the Python Anaconda distribution is recommended. Anaconda Accelerate is also needed; you can install it with conda install accelerate. This example shows how the GPU is used: https://gist.githubusercontent.com/aweeraman/ae6e40f54a924f1f5832081be9521d92/raw/d6775c421aa4fa4c0d582e6c58873499d28b913a/gpu.py
It's done by adding target='cuda' to the @vectorize decorator. Note the import: from numba import vectorize. The vectorize decorator takes the signature of the function that is to be accelerated as input.
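A minimal sketch of that pattern (element-wise toy work, not the original Sinkhorn computation; it assumes a CUDA-capable GPU and the CUDA toolkit are available):
import numpy as np
from numba import vectorize

# the signature string tells numba the input/output types to compile for
@vectorize(['float32(float32, float32)'], target='cuda')
def add_gpu(a, b):
    return a + b

x = np.arange(10**6, dtype=np.float32)
y = 2 * x
print(add_gpu(x, y)[:5])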
Good luck!
Sources:
https://weeraman.com/put-that-gpu-to-good-use-with-python-e5a437168c01
https://www.researchgate.net/post/How_do_I_run_a_python_code_in_the_GPU

Enumerate through variable (porting PyMC to PyMC3)

I'm starting out with PyMC3 by translating this code from PyMC to PyMC3.
I'm not sure how to translate this segment:
v = pymc.Beta('v', alpha=1, beta=alpha, size=N_dp)

@pymc.deterministic
def p(v=v):
    """ Calculate Dirichlet probabilities """
    # Probabilities from betas
    # this line creates the error:
    value = [u*np.prod(1-v[:i]) for i,u in enumerate(v)]
    # Enforce sum to unity constraint
    value[-1] = 1-sum(value[:-1])
    return value

z = pymc.Categorical('z', p, size=len(set(counties)))
I assume I have to replace p in the last line with p(v) and remove the @pymc.deterministic, but the problem seems to be that I cannot enumerate through v: ValueError: length not known: ViewOp [id A] 'v'.
Can someone show me how to do the translation or link me to the relevant bit in the documentation? Thanks.
The Dirichlet distribution is actually built into pymc3, so that whole code block can be replaced by:
with pm.Model():
    ...
    v = pm.Beta('v', alpha=1, beta=alpha, shape=N_dp)
    p = pm.Dirichlet('p', a=v, shape=N_dp)
    ...
    trace = pm.sample(20000)

PyMC: sampling step by step?

I would like to know why the sampler is incredibly slow when sampling step by step.
For example, if I run:
mcmc = MCMC(model)
mcmc.sample(1000)
the sampling is fast. However, if I run:
mcmc = MCMC(model)
for i in arange(1000):
    mcmc.sample(1)
the sampling is slower (and the more it samples, the slower it is).
If you are wondering why I am asking this... well, I need step-by-step sampling because I want to perform some operations on the values of the variables after each step of the sampler.
Is there a way to speed it up?
Thank you in advance!
------------------ EDIT -------------------------------------------------------------
Here I present the specific problem in more details:
I have two models in competition and they are part of a bigger model that has a categorical variable functioning as a 'switch' between the two.
In this toy example, I have the observed vector 'Y', which could be explained by a Poisson or a Geometric distribution. The Categorical variable 'switch_model' selects the Geometric model when it is 0 and the Poisson model when it is 1.
After each sample, if switch_model selects the Geometric model, I want the variables of the Poisson model NOT to be updated, because they are not influencing the likelihood and therefore they are just drifting away. The opposite is true if the switch_model selects the Poisson model.
Basically what I do at each step is to 'change' the value of the non-selected model by bringing it manually one step back.
I hope that my explanation and the commented code will be clear enough. Let me know if you need further details.
import numpy as np
import pymc as pm
import pandas as pd
import matplotlib.pyplot as plt
# OBSERVED VALUES
Y = np.array([0, 1, 2, 3, 8])
# PRIOR ON THE MODELS
pi = (0.5, 0.5)
switch_model = pm.Categorical("switch_model", p = pi)
# switch_model = 0 for Geometric, switch_model = 1 for Poisson
p = pm.Uniform('p', lower = 0, upper = 1) # Prior of the parameter of the geometric distribution
mu = pm.Uniform('mu', lower = 0, upper = 10) # Prior of the parameter of the Poisson distribution
# LIKELIHOOD
@pm.observed
def Ylike(value = Y, mu = mu, p = p, M = switch_model):
    if M == 0:
        out = pm.geometric_like(value+1, p)
    elif M == 1:
        out = pm.poisson_like(value, mu)
    return out
model = pm.Model([Ylike, p, mu, switch_model])
mcmc = pm.MCMC(model)
n_samples = 5000
traces = {}
for var in mcmc.stochastics:
    traces[str(var)] = np.zeros(n_samples)
bar = pm.progressbar.progress_bar(n_samples)
bar.update(0)
mcmc.sample(1, progress_bar=False)
for var in mcmc.stochastics:
    traces[str(var)][0] = mcmc.trace(var)[-1]
for i in np.arange(1,n_samples):
    mcmc.sample(1, progress_bar=False)
    bar.update(i)
    for var in mcmc.stochastics:
        traces[str(var)][i] = mcmc.trace(var)[-1]
    if mcmc.trace('switch_model')[-1] == 0: # Geometric wins
        traces['mu'][i] = traces['mu'][i-1] # One step back for the sampler of the Poisson parameter
        mu.value = traces['mu'][i-1]
    elif mcmc.trace('switch_model')[-1] == 1: # Poisson wins
        traces['p'][i] = traces['p'][i-1] # One step back for the sampler of the Geometric parameter
        p.value = traces['p'][i-1]
print '\n\n'
traces=pd.DataFrame(traces)
traces['mu'][traces['switch_model'] == 0] = np.nan
traces['p'][traces['switch_model'] == 1] = np.nan
print traces.describe()
traces.plot()
plt.show()
The reason this is so slow is that Python's for loops are pretty slow, especially when compared to FORTRAN loops (which is what PyMC is basically written in). If you could show more detailed code, it might be easier to see what you are trying to do and to provide faster alternative algorithms.
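To see the effect in isolation, here is a rough timing sketch on a trivial placeholder model (numbers will vary by machine; the model has nothing to do with the switch example):
import time
import numpy as np
import pymc as pm

x = pm.Normal('x', mu=0, tau=1)

mcmc = pm.MCMC([x])
t0 = time.time()
mcmc.sample(1000, progress_bar=False)
print('one call, 1000 steps:', time.time() - t0)

mcmc = pm.MCMC([x])
t0 = time.time()
for i in np.arange(1000):
    mcmc.sample(1, progress_bar=False)
print('1000 calls of 1 step:', time.time() - t0)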
Actually I found a 'crazy' solution, and I suspect I know why it works. I would still like to get an expert opinion on my trick.
Basically if I modify the for loop in the following way, adding a 'reset of the mcmc' every 1000 loops, the sampling fires up again:
for i in np.arange(1,n_samples):
    mcmc.sample(1, progress_bar=False)
    bar.update(i)
    for var in mcmc.stochastics:
        traces[str(var)][i] = mcmc.trace(var)[-1]
    if mcmc.trace('switch_model')[-1] == 0: # Geometric wins
        traces['mu'][i] = traces['mu'][i-1] # One step back for the sampler of the Poisson parameter
        mu.value = traces['mu'][i-1]
    elif mcmc.trace('switch_model')[-1] == 1: # Poisson wins
        traces['p'][i] = traces['p'][i-1] # One step back for the sampler of the Geometric parameter
        p.value = traces['p'][i-1]
    if i%1000 == 0:
        mcmc = pm.MCMC(model)
In practice this trick erases the traces and the database of the sampler every 1000 steps. It looks like the sampler does not like having a long database, although I do not really understand why. (Of course 1000 steps is arbitrary: too short and it adds too much overhead, too long and the traces and database grow too large.)
I find this hack a bit crazy and definitely not elegant... do any of the experts or developers have a comment on it? Thank you!

PyMC for Model Averaging

I am interested in applying PyMC to model averaging. My goal is to estimate many linear models and average estimates across them, weighting by their posterior model probabilities. I am currently using the Bayesian Information Criterion (BIC) to approximate the likelihood of my data (therefore, my analysis is not fully Bayesian). I have successfully simulated a Markov Chain of models using one of my own scripts but I want to use PyMC because it seems like a great tool.
In my attempts thus far, I have not been forming the Markov Chain correctly: I am not visiting models with higher posterior weights more often than others. I will include the example code below. Please also see the IPython notebook here on GitHub for the math markup and code together.
import numpy as np
from pymc import stochastic, DiscreteMetropolis, MCMC
import statsmodels.api as sm
import pandas as pd
import random
def pack(alist, rank):
    binary = [str(1) if i in alist else str(0) for i in xrange(0,rank)]
    string = '0b1'+''.join(binary)
    return int(string, 2)
def unpack(integer):
    string = bin(integer)[3:]
    return [int(i) for i in xrange(len(string)) if string[i]=='1']
def make_bma():
    # Simulating Data
    size = 100
    rank = 20
    X = 10*np.random.randn(size, rank)
    error = 30*np.random.randn(size,1)
    coefficients = np.array([10, 2, 2, 2, 2, 2]).reshape((6,1))
    y = np.dot(sm.add_constant(X[:,:5], prepend=True), coefficients) + error
    # Number of allowable regressors
    predictors = [3,4,5,6,7]

    @stochastic(dtype=int)
    def regression_model():
        def logp(value):
            columns = unpack(value)
            x = sm.add_constant(X[:,columns], prepend=True)
            corr = np.corrcoef(x[:,1:], rowvar=0)
            prior = np.linalg.det(corr)
            ols = sm.OLS(y,x).fit()
            posterior = np.exp(-0.5*ols.bic)*prior
            return np.log(posterior)

        def random():
            k = np.random.choice(predictors)
            columns = sorted(np.random.choice(xrange(0,rank), size=k, replace=False))
            return pack(columns, rank)

    class ModelMetropolis(DiscreteMetropolis):
        def __init__(self, stochastic):
            DiscreteMetropolis.__init__(self, stochastic)

        def propose(self):
            '''considers a neighborhood around the previous model,
            defined as having one regressor removed or added, provided
            the total number of regressors coincides with predictors
            '''
            # Building set of neighboring models
            last = unpack(self.stochastic.value)
            last_indicator = np.zeros(rank)
            last_indicator[last] = 1
            last_indicator = last_indicator.reshape((-1,1))
            neighbors = abs(np.diag(np.ones(rank)) - last_indicator)
            neighbors = neighbors[:,np.any([neighbors.sum(axis=0) == i
                                            for i in predictors], axis=0)]
            neighbors = pd.DataFrame(neighbors)
            # Drawing one model at random from the neighborhood
            draw = random.choice(xrange(neighbors.shape[1]))
            self.stochastic.value = pack(list(neighbors[draw][neighbors[draw]==1].index), rank)

        # def step(self):
        #     logp_p = self.stochastic.logp
        #     self.propose()
        #     logp = self.stochastic.logp
        #     if np.log(random.random()) > logp_p - logp:
        #         self.reject()

    return locals()
if __name__ == '__main__':
    model = make_bma()
    M = MCMC(model)
    M.use_step_method(model['ModelMetropolis'], model['regression_model'])
    M.sample(iter=5000, burn=1000, thin=1)
    model_chain = M.trace("regression_model")[:]
    from collections import Counter
    counts = Counter(model_chain).items()
    counts.sort(reverse=True, key=lambda x: x[1])
    for f in counts[:10]:
        columns = unpack(f[0])
        print('Visits:', f[1])
        print(np.array([1. if i in columns else 0 for i in range(0,M.rank)]))
        print(M.coefficients.flatten())
        X = sm.add_constant(M.X[:, columns], prepend=True)
        corr = np.corrcoef(X[:,1:], rowvar=0)
        prior = np.linalg.det(corr)
        fit = sm.OLS(model['y'],X).fit()
        posterior = np.exp(-0.5*fit.bic)*prior
        print(fit.params)
        print('R-squared:', fit.rsquared)
        print('BIC', fit.bic)
        print('Prior', prior)
        print('Posterior', posterior)
        print(" ")
It sounds like you are trying to do something akin to reversible jump MCMC, where you are sampling from the model space in addition to the parameter space(s). PyMC does not currently do rjMCMC, though it probably ought to. The trick is to account for the change in dimension when moving among models. If you do have a modest number of models, you can use an indicator function to select from the models, all of which are fit simultaneously.
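For the second suggestion, here is a hedged sketch of the indicator-function idea in PyMC 2, with two toy competing regressions fit simultaneously (the data, priors, and sub-models are illustrative only, not the BIC-based setup from the question):
import numpy as np
import pymc as pm

np.random.seed(0)
X = np.random.randn(100, 2)
y_obs = 1.0 + 2.0 * X[:, 0] + np.random.randn(100)

ind = pm.Categorical('ind', p=[0.5, 0.5])   # model indicator
b0 = pm.Normal('b0', mu=0, tau=1e-4)
b1 = pm.Normal('b1', mu=0, tau=1e-4)        # slope used by model 0
b2 = pm.Normal('b2', mu=0, tau=1e-4)        # slope used by model 1
tau = pm.Gamma('tau', alpha=0.1, beta=0.1)

@pm.deterministic
def mu(ind=ind, b0=b0, b1=b1, b2=b2):
    # the indicator switches which regressor enters the mean
    return b0 + (b1 * X[:, 0] if ind == 0 else b2 * X[:, 1])

y = pm.Normal('y', mu=mu, tau=tau, value=y_obs, observed=True)

M = pm.MCMC([ind, b0, b1, b2, tau, mu, y])
M.sample(20000, burn=5000)
print('posterior P(model 0):', 1 - M.trace('ind')[:].mean())
The frequency with which the chain visits ind == 0 approximates that model's posterior probability, which is the weight you would use for averaging. Note that the parameters of the currently unselected model keep drifting under their priors, the same issue discussed in the step-by-step sampling question above.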
