I would like to set the maximum number of function evaluations when using Pyomo (with the BARON solver). My code is:
from __future__ import division
from pyomo.environ import *
opt = SolverFactory('baron')
m = ConcreteModel()
m.x1 = Var(bounds=(-10.0, 10.0))
m.x2 = Var(bounds=(-10.0, 10.0))
m.o = Objective(expr=(2.0 * m.x2 + m.x1 - 7.0) ** 2.0 + (2.0 * m.x1 + m.x2 - 5.0) ** 2.0)
results = opt.solve(m) # maxEvaluations=5
print(results)
where the expr corresponds to Booth's function. I would like to set the maximum number of function evaluations as the termination criterion. How can I achieve this?
If it is also possible to get more verbose output, ideally listing the running best result together with the number of function evaluations, that would be a bonus.
You can send options to solvers as a dictionary using the options keyword argument for the solve method. Options are passed through to the solver verbatim. You will need to look at the individual solver documentation to see what options it supports (for BARON, see here). For example:
solver = SolverFactory('baron')
solver.solve(model, options={'MaxIter': 5})
If you want to watch the solver process in realtime, you can tell Pyomo to not suppress the solver stdout/stderr output using the tee option:
solver.solve(model, options={'MaxIter': 5}, tee=True)
As almost all solvers are launched as separate subprocesses, there is (currently) no way for Pyomo to obtain intrusive information (such as the current incumbent variable values) during solver execution.
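Putting the pieces together with the model from the question, a minimal end-to-end sketch might look like this (assuming BARON is installed and licensed; note that MaxIter limits BARON's branch-and-reduce iterations, which is not exactly a function-evaluation limit, so check the BARON option list for whichever option best matches your criterion):
from pyomo.environ import *

m = ConcreteModel()
m.x1 = Var(bounds=(-10.0, 10.0))
m.x2 = Var(bounds=(-10.0, 10.0))
m.o = Objective(expr=(2.0 * m.x2 + m.x1 - 7.0) ** 2.0 + (2.0 * m.x1 + m.x2 - 5.0) ** 2.0)

solver = SolverFactory('baron')
# pass BARON options verbatim and stream the solver log to stdout
results = solver.solve(m, options={'MaxIter': 5}, tee=True)
print(results)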
I would like to write down the following SARIMAX model, (2,0,0)(2,0,0,12), in PyMC3 to perform Bayesian estimation of its coefficients, but I cannot figure out how to start with the seasonal part.
Has anyone tried something like this?
import pymc3 as pm
import arviz as az

with pm.Model() as ar2:
    theta = pm.Normal("theta", 0.0, 1.0, shape=2)
    sigma = pm.HalfNormal("sigma", 3)
    # data: the observed time series
    likelihood = pm.AR("y", theta, sigma=sigma, observed=data)
    trace = pm.sample(
        1000,
        tune=2000,
        random_seed=13,
    )
    idata = az.from_pymc3(trace)
Although it would be best (e.g. for performance) to have an answer that uses PyMC3 exclusively, in case one does not exist yet, there is an alternative approach that uses the SARIMAX model in Statsmodels in combination with PyMC3.
There are too many details to repeat a full answer here, but basically you wrap the log-likelihood and gradient methods associated with a Statsmodels SARIMAX model. Here is a link to an example Jupyter notebook that shows how to do this:
https://www.statsmodels.org/stable/examples/notebooks/generated/statespace_sarimax_pymc3.html
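For a flavor of what the notebook does, here is a minimal sketch of the wrapping idea, assuming mod is a statsmodels SARIMAX instance (the name is illustrative): a Theano Op whose perform() simply calls the model's loglike(). This stripped-down version omits the gradient Op that the notebook also defines, so you would need a gradient-free step method such as Slice or Metropolis:
import numpy as np
import theano.tensor as tt

class Loglike(tt.Op):
    # evaluates mod.loglike(params) on a concrete parameter vector
    itypes = [tt.dvector]  # input: parameter vector
    otypes = [tt.dscalar]  # output: scalar log-likelihood

    def __init__(self, model):
        self.model = model  # a statsmodels SARIMAX instance

    def perform(self, node, inputs, outputs):
        (theta,) = inputs
        outputs[0][0] = np.asarray(self.model.loglike(theta))

loglike = Loglike(mod)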
I'm not sure if you'll still need it; however, expanding on cfulton's answer, here is how to fix the error in the statsmodels example (https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_sarimax_pymc3.html, cell 8):
import theano.tensor as tt

with pm.Model():
    # Priors
    arL1 = pm.Uniform('ar.L1', -0.99, 0.99)
    maL1 = pm.Uniform('ma.L1', -0.99, 0.99)
    sigma2 = pm.InverseGamma('sigma2', 2, 4)

    # convert variables to a tensor vector
    # # this is wrong:
    # theta = tt.as_tensor_variable([arL1, maL1, sigma2])
    # # this is correct (the name 'v' is attached to the tensor):
    theta = tt.as_tensor_variable([arL1, maL1, sigma2], 'v')

    # use a DensityDist (use a lambda function to "call" the Op)
    # # this is wrong:
    # pm.DensityDist('likelihood', lambda v: loglike(v), observed={'v': theta})
    # # this is correct:
    pm.DensityDist('likelihood', lambda v: loglike(v), observed=theta)

    # Draw samples (ndraws and nburn as defined earlier in the notebook)
    trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True, cores=4)
I'm no PyMC3/Theano expert, but I think the error means that Theano failed to associate the tensor's name with its values. If you define the name along with the values right at the beginning, it works.
I know it's not a direct answer to your question. Nevertheless, I hope it helps.
I'm relatively new to Z3 and experimenting with it in Python. I've coded a program which returns the order in which different actions are performed, each action represented by a number. Z3 returns an integer representing the second at which each action starts.
Now I want to look at the model and see if there is an instance of time where nothing happens. To do this I made a list containing only 0's, and I want to set the indices at the times where each action is being executed to 1. For instance, if an action starts at the 5th second and takes 8 seconds to execute, indices 5 to 12 would be set to 1. Doing this for all the actions and then looking for 0's in the list would hopefully give me the instances where nothing happens.
The problem is, I would like to write something like this to code the problem:
list_for_check = [0] * total_time
m = s.model()
for action in actions:
    for index in range(m.evaluate(action.number), m.evaluate(action.number) + action.time_it_takes):
        list_for_check[index] = 1
But I get the error:
'IntNumRef' object cannot be interpreted as an integer
I've understood that Z3 doesn't return normal ints or bools in its models, but writing
if m.evaluate(action.boolean):
works, so I'm assuming if is handled in an overridden way, but this doesn't seem to be the case with range. So my question is: is there a way to use range with Z3 ints? Or is there another way to do this?
The problem might also be that action.time_it_takes is a "normal" integer, and adding a Z3 int to a "normal" int doesn't work (this happens in the second argument of range).
I've also tried using int(m.evaluate(action.number)), but it doesn't work.
Thanks in advance :)
When you call evaluate, it returns an IntNumRef, which is z3's internal representation of an integer. You need to call its as_long() method to convert it to a regular Python number. Here's an example:
from z3 import *

s = Solver()
a = Int('a')
s.add(a > 4)
s.add(a < 7)

if s.check() == sat:
    m = s.model()
    print("a is %s" % m.evaluate(a))
    print("Iterating from a to a+5:")
    av = m.evaluate(a).as_long()
    for index in range(av, av + 5):
        print(index)
When I run this, I get:
a is 5
Iterating from a to a+5:
5
6
7
8
9
which is exactly what you're trying to achieve.
The method as_long() is defined here. Note that there are similar conversion functions for bit-vectors and rationals as well. You can search the z3py API using the interface at: https://z3prover.github.io/api/html/namespacez3py.html
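For instance, here is a small sketch of the analogous conversions for a bit-vector and a rational (the variable names are made up for illustration):
from z3 import *

s = Solver()
b = BitVec('b', 8)
r = Real('r')
s.add(b == 5, r == Q(1, 3))

if s.check() == sat:
    m = s.model()
    print(m.evaluate(b).as_long())      # bit-vector value as a Python int
    print(m.evaluate(r).as_fraction())  # rational value as a Python Fraction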
In the TensorFlow CIFAR-10 tutorial, in cifar10_inputs.py line 174, it is said that you should randomize the order of the operations random_contrast and random_brightness for better data augmentation.
To do so, the first thing I think of is drawing a random variable p_order from the uniform distribution between 0 and 1, and doing:
if p_order > 0.5:
    distorted_image = tf.image.random_contrast(image)
    distorted_image = tf.image.random_brightness(distorted_image)
else:
    distorted_image = tf.image.random_brightness(image)
    distorted_image = tf.image.random_contrast(distorted_image)
However, there are two possible options for getting p_order:
1) Using numpy, which dissatisfies me, as I wanted pure TF and TF discourages its users from mixing numpy and tensorflow
2) Using TF; however, as p_order can only be evaluated in a tf.Session(), I do not really know if I should do:
with tf.Session() as sess2:
    p_order_tensor = tf.random_uniform([1,], 0., 1.)
    p_order = float(p_order_tensor.eval())
All these operations are inside the body of a function and are run from another script which has a different session/graph. Or I could pass the graph from the other script as an argument to this function, but I am confused.
Even the fact that TensorFlow functions like this one, or inference for example, seem to define the graph globally without explicitly returning it as an output is a bit hard for me to understand.
You can use tf.cond(pred, fn1, fn2, name=None) (see doc).
This function allows you to use the boolean value of pred inside the TensorFlow graph (no need to call .eval() or sess.run(), hence no need for a Session).
Here is an example of how to use it:
# the distortion bounds below are example values taken from the CIFAR-10 tutorial
def fn1():
    # contrast first, then brightness
    distorted_image = tf.image.random_contrast(image, lower=0.2, upper=1.8)
    distorted_image = tf.image.random_brightness(distorted_image, max_delta=63)
    return distorted_image

def fn2():
    # brightness first, then contrast
    distorted_image = tf.image.random_brightness(image, max_delta=63)
    distorted_image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)
    return distorted_image

# Uniform variable in [0, 1)
p_order = tf.random_uniform(shape=[], minval=0., maxval=1., dtype=tf.float32)
pred = tf.less(p_order, 0.5)
distorted_image = tf.cond(pred, fn1, fn2)
I'm not sure if this is a PyMC3 question or a Theano question. I've used PyMC2 for a long time to fit a cosmology to supernova data. This requires some messy integrals (see e.g. http://arxiv.org/abs/astro-ph/9905116).
So I use a Python package called Cosmolopy to do the integration and for some other convenience functions. Whereas this used to work fine with PyMC2, the reliance on Theano in PyMC3 means I can't figure out if there is even a way to use Cosmolopy.
Here is some example code of my current understanding of how to build a model in PyMC3
import numpy as np
import pymc3 as pm
import cosmolopy as cp

# generate some redshifts
nSNe = 100
z = np.random.uniform(low=0.0, high=1.0, size=nSNe)

# set cosmology and simulate some distance moduli and errors
cosmo = cp.fidcosmo
muSN = cp.magnitudes.distance_modulus(z, **cosmo) + np.random.normal(loc=0, scale=0.15, size=nSNe)
muSN_err = np.random.uniform(low=0.1, high=0.3, size=nSNe)

# pymc model
with pm.Model() as model:
    # omega matter is the free parameter in this simple example
    omega_matter = pm.Uniform('omega_matter', lower=0.0, upper=1.0)

    # the cosmology as a function of omega_matter
    cosmo['omega_M_0'] = omega_matter
    cosmo['omega_lambda_0'] = 1.0 - omega_matter
    mu_fit = cp.magnitudes.distance_modulus(z, **cosmo)

    # what should be fit by the MCMC
    snr = pm.Normal('snr', mu=mu_fit, sd=muSN_err, observed=muSN)
This code crashes because Cosmolopy expects a float for omega_matter but receives a theano.TensorVariable instead.
So the question is two-fold:
1) Am I just missing something syntactically with PyMC3 that would allow me to do this (possibly because I am still stuck somehow on PyMC2 model-building)?
2) If not, do I need to find a way to do the integrals in Theano?
I don't know PyMC3 well, but I know Theano well. Theano uses a symbolic compiler, and a TensorVariable is such a symbolic variable. You need to compile and execute a function to get a value out of it. I don't know where to do this in PyMC3. A quick thing to try, which will work if the variable depends only on constants and shared variables, is this call:
the_tensor_variable.eval()
This will compile a function, assuming it takes no variable input; if it compiles, it will run and return the value.
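As a tiny self-contained illustration of what .eval() does (independent of PyMC3):
import theano.tensor as tt

x = tt.constant(2.0)
y = x ** 2 + 1
print(y.eval())  # compiles a zero-input function, runs it, and prints 5.0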
I think one possible solution would be to write a custom Theano Op, following the instructions at http://deeplearning.net/software/theano/extending/
I would write a pure Python Op without support for gradient computation, in which case you would only have to implement the make_node() and perform() methods.
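A rough sketch of what such an Op could look like for the model above (the DistanceModulus Op is hypothetical, written against the old Theano Op API, and implements no gradient, so a gradient-free sampler would be required):
import numpy as np
import theano
import theano.tensor as tt
import cosmolopy as cp

class DistanceModulus(theano.Op):
    # wraps cp.magnitudes.distance_modulus so omega_matter can be symbolic

    def __init__(self, z):
        self.z = z  # fixed array of redshifts

    def make_node(self, omega_matter):
        omega_matter = tt.as_tensor_variable(omega_matter)
        return theano.Apply(self, [omega_matter], [tt.dvector()])

    def perform(self, node, inputs, outputs):
        (omega_matter,) = inputs
        cosmo = dict(cp.fidcosmo)
        cosmo['omega_M_0'] = float(omega_matter)
        cosmo['omega_lambda_0'] = 1.0 - float(omega_matter)
        outputs[0][0] = np.asarray(cp.magnitudes.distance_modulus(self.z, **cosmo))

# inside the model block, mu_fit would then become:
# mu_fit = DistanceModulus(z)(omega_matter)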
In Ruby, I need to draw from an exponential distribution with mean m. Please show me how to do it quickly and efficiently. E.g., let's have:
m = 4.2

def exponential_distribution
  rand(m * 2)
end
But of course, this code is wrong, and also, it only returns whole-number results. I'm already tired today, so please hint me towards a good solution.
If you want to do it from scratch you can use inversion.
def exponential(mean)
  -mean * Math.log(rand) if mean > 0
end
If you want to parameterize it with rate lambda instead, the mean and rate are inverses of each other, so divide by -lambda rather than multiplying by -mean.
This works by inverting the CDF: for an exponential with the given mean, F(x) = 1 - exp(-x / mean), so the inverse is x = -mean * log(1 - u) for uniform u. Technically the code should therefore use log(1.0 - rand), but since 1.0 - rand has a uniform distribution as well, you can save one arithmetic operation by just using rand.
How about using the distribution gem? Here's an example:
require 'distribution'
mean = 4.2
lambda = mean**-1
# generate a rng with exponential distribution
rng = Distribution::Exponential.rng(lambda)
# sample a value
sample = rng.call
If you need to change the value of lambda very often, it might be useful to use the p_value method directly. A good example can be found in the source code of Distribution::Exponential#rng, which basically just uses p_value internally. Here's an example of how to do it:
require 'distribution'

# use the same rng for each call
rng = Random

1.step(5, 0.1) do |mean|
  lambda = mean**-1
  # sample a value
  sample = Distribution::Exponential.p_value(rng.rand, lambda)
end