Getting optimal control with economic cost function to converge - gekko

I have been using gekko to optimize a bioreactor using example 12 (https://apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization) as a basis.
My model is slightly more complicated, with 6 states, 7 parameters, and 2 manipulated variables. When I run it for small final times (t ~ 20), the simulation converges, albeit requiring a fine time resolution (dt < 0.1). However, when I try to extend the time (e.g., t = 30), it fails quite consistently with the following error:
EXIT: Converged to a point of local infeasibility. Problem may be infeasible
I have tried the following:
Employing different solvers with m.options.SOLVER = 1,2,3
Increasing m.options.MAX_ITER to 10000
Decreasing m.options.NODES to 1 (a lower-order discretization seems to help with convergence)
Supplying a reasonable initial guess to the MVs by specifying a value in the declaration:
D = m.MV(value=0.1, lb=0.0, ub=0.1). From various related posts, it seems this should help.
I am not sure how to go about solving this. For a simplified model (3 states, 5 parameters, and 2 MVs), Gekko optimizes quite well (though it also struggles at large t), even though the rate constants of the simplified model are a subset of the full model's.
My code is as follows:
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
#Parameters and IC
full_params = [0.027,2.12e-9,7.13e-3,168,168,0.035,1e-3]
full_x0 = [5e6,0.0,0.0,0.0,1.25e5,0.0]
mu,k1,k2,k3,k33,k4, f= full_params
#Initialize model
m = GEKKO()
#Time discretization
n_steps = 201
m.time = np.linspace(0,20,n_steps)
#Define MVs
D = m.MV(value=0.1,lb=0.0,ub=0.1)
D.STATUS = 1
D.DCOST = 0.0
Tin = m.MV(value=1e7,lb=0.0,ub=1e7)
Tin.STATUS = 1
Tin.DCOST = 0.0
#Define States
T = m.Var(value=full_x0[0])
Id = m.Var(value=full_x0[1])
Is = m.Var(value=full_x0[2])
Ic = m.Var(value=full_x0[3])
Vs = m.Var(value=full_x0[4])
Vd = m.Var(value=full_x0[5])
#Define equations
m.Equation(T.dt() == mu*T -k1*(Vs+Vd)*T + D*(Tin-T))
m.Equation(Id.dt() == k1*Vd*T -(k1*Vs -mu)*Id -D*Id)
m.Equation(Is.dt() == k1*Vs*T -(k1*Vd + k2)*Is -D*Is)
m.Equation(Ic.dt() == k1*(Vs*Id + Vd*Is) -k2*Ic -D*Ic)
m.Equation(Vs.dt() == k3*Is - (k1*(T+Id+Is+Ic) + k4 + D)*Vs)
m.Equation(Vd.dt() == k33*Ic + f*k3*Is - (k1*(T+Id+Is+Ic) + k4 + D)*Vd)
#Define objective function
J = m.Var(value=0) # objective (profit)
Jf = m.FV() # final objective
Jf.STATUS = 1
m.Connection(Jf,J,pos2="end")
m.Equation(J.dt() == D*(Vs + Vd))
m.Obj(-Jf)
m.options.IMODE = 6 # optimal control
m.options.NODES = 1 # collocation nodes
m.options.SOLVER = 3
m.options.MAX_ITER = 10000
#Solve
m.solve()
For clarity, the model equations are the six ODEs written out in the m.Equation() calls above.
I would be grateful for any assistance, e.g., how to implement scaling of the parameters per https://apmonitor.com/do/index.php/Main/ModelInitialization. Thank you!

Try increasing the final time incrementally until the solver can no longer find a solution; tf=28 still succeeds. A plot of the solution reveals that Tin is driven to zero at about the time where the solution nearly fails to converge. I added a couple of additional objective forms that didn't help convergence (see Objective Method #1 and #2 below). The values of J, Vs, and Vd are large but not unmanageable for the solver. One way to think about scaling is to change the basis units, such as switching from kg/day to kg/s; Gekko also automatically scales each variable by its initial condition (see the short scaling sketch after the code).
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
#Parameters and IC
full_params = [0.027,2.12e-9,7.13e-3,168,168,0.035,1e-3]
full_x0 = [5e6,0.0,0.0,0.0,1.25e5,0.0]
mu,k1,k2,k3,k33,k4, f= full_params
#Initialize model
m = GEKKO()
#Time discretization
tf = 28
n_steps = tf*10+1
m.time = np.linspace(0,tf,n_steps)
#Define MVs
D = m.MV(value=0.1,lb=0.0,ub=0.1)
D.STATUS = 1
D.DCOST = 0.0
Tin = m.MV(value=1e7,lb=0,ub=1e7)
Tin.STATUS = 1
Tin.DCOST = 0.0
#Define States
T = m.Var(value=full_x0[0])
Id = m.Var(value=full_x0[1])
Is = m.Var(value=full_x0[2])
Ic = m.Var(value=full_x0[3])
Vs = m.Var(value=full_x0[4])
Vd = m.Var(value=full_x0[5])
#Define equations
m.Equation(T.dt() == mu*T -k1*(Vs+Vd)*T + D*(Tin-T))
m.Equation(Id.dt() == k1*Vd*T -(k1*Vs -mu)*Id -D*Id)
m.Equation(Is.dt() == k1*Vs*T -(k1*Vd + k2)*Is -D*Is)
m.Equation(Ic.dt() == k1*(Vs*Id + Vd*Is) -k2*Ic -D*Ic)
m.Equation(Vs.dt() == k3*Is - (k1*(T+Id+Is+Ic) + k4 + D)*Vs)
m.Equation(Vd.dt() == k33*Ic + f*k3*Is - (k1*(T+Id+Is+Ic) + k4 + D)*Vd)
# Original Objective
if True:
    J = m.Var(value=0) # objective (profit)
    Jf = m.FV() # final objective
    Jf.STATUS = 1
    m.Connection(Jf,J,pos2="end")
    m.Equation(J.dt() == D*(Vs + Vd))
    m.Obj(-Jf)
# Objective Method 1
if False:
    p = np.zeros_like(m.time); p[-1] = 1
    final = m.Param(p)
    J = m.Var(value=0) # objective (profit)
    m.Equation(J.dt() == D*(Vs + Vd))
    m.Maximize(J*final)
# Objective Method 2
if False:
    m.Maximize(D*(Vs + Vd))
m.options.IMODE = 6 # optimal control
m.options.NODES = 2 # collocation nodes
m.options.SOLVER = 3
m.options.MAX_ITER = 10000
#Solve
m.solve()
plt.figure(figsize=(10,8))
plt.subplot(3,1,1)
plt.plot(m.time,Tin.value,'r.-',label='Tin')
plt.legend(); plt.grid()
plt.subplot(3,1,2)
plt.semilogy(m.time,T.value,label='T')
plt.semilogy(m.time,Id.value,label='Id')
plt.semilogy(m.time,Is.value,label='Is')
plt.semilogy(m.time,Ic.value,label='Ic')
plt.legend(); plt.grid()
plt.subplot(3,1,3)
plt.semilogy(m.time,Vs.value,label='Vs')
plt.semilogy(m.time,Vd.value,label='Vd')
plt.semilogy(m.time,J.value,label='Objective')
plt.legend(); plt.grid()
plt.show()
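To illustrate the unit-scaling comment above, here is a minimal sketch (not a drop-in replacement for the script; the scale factor and the rescaled names Ts, Vss, Tin_s, and k1_s are hypothetical) of how the cell and virion counts could be expressed in millions so the states are closer to order one. It reuses m, full_x0, mu, k1, and D from the script above:
scale = 1e6                                  # hypothetical scale factor (count in millions)
Ts    = m.Var(value=full_x0[0]/scale)        # target cells, in millions
Vss   = m.Var(value=full_x0[4]/scale)        # standard virions, in millions
Tin_s = m.MV(value=1e7/scale, lb=0.0, ub=1e7/scale)
# second-order rate constants that multiply a scaled quantity must be rescaled,
# e.g. k1*(Vs+Vd)*T becomes (k1*scale)*Vss*Ts in the new units
k1_s = k1*scale
m.Equation(Ts.dt() == mu*Ts - k1_s*Vss*Ts + D*(Tin_s - Ts))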
Is there any type of constraint in the problem that would favor a decrease at the end? This may be the cause of the infeasibility at tf=30. Another way to get a feasible solution is to solve once, set m.options.TIME_SHIFT=20, and solve again so that the initial conditions of the second solve come from the prior solution at time step 20.
#Solve
m.solve()
m.options.TIME_SHIFT=20
m.solve()
This way, the solution steps forward in time and optimizes the problem in parts. This strategy, known as receding horizon control, was used to optimize a High-Altitude Long-Endurance (HALE) UAV.
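Here is a minimal sketch of how that stepping could be wrapped in a loop (hypothetical; it reuses the model m built above, and the number of cycles is arbitrary):
# first solve over the full horizon starting from the original initial conditions
m.solve(disp=False)
# advance the initial conditions by 20 time intervals on each subsequent solve
m.options.TIME_SHIFT = 20
for step in range(3):   # three additional receding-horizon solves
    m.solve(disp=False)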
Martin, R.A., Gates, N., Ning, A., Hedengren, J.D., Dynamic Optimization of High-Altitude Solar Aircraft Trajectories Under Station-Keeping Constraints, Journal of Guidance, Control, and Dynamics, 2018, doi: 10.2514/1.G003737.

Related

Modelling an input that can be used a finite number of times along a time horizon and is also binary

Happy holidays everyone! I finally have some time off to work on my project, and of course I'm stuck as usual lol.
I'm looking for guidance/examples that would let me be able to model the following:
I have an input (let's call it a 'jump') that is binary (0 or 1), and I want it to be usable only once (or possibly n times, where n < the number of time steps) over the entire time horizon. The effect this input has on the system is that it instantaneously increases the velocity of the system by some predetermined amount.
The second effect is that the set of dynamics that propagates the system forward in time changes. (In this case the 'jump' changes the dynamics from a driving system to a flying system.) In the future there will also be a 'double_jump' that does not change the dynamics but still provides an instantaneous change in velocity. Currently I'm trying to get the first part down, then I'm going to attempt to implement this; I just want to keep my bigger vision clear to anyone reading this.
Another part that is for the future of the model: I'd like the system to interact with a ball object, say by using if2/if3, so that if the system's position is within some radius of the ball's position, an impulse is imparted on the ball that depends on things like the velocities of the ball and the system. To do this properly, I imagine I need a way to define a time step that lands exactly at the interaction point, which I believe means I'll need some sort of variable time vector. Any examples for these would be much appreciated.
Okay, so points 2 and 3 are just here for context and aren't really the main point of this question. I think I'll be able to figure them out once I can wrap my head around implementing this weird 'jump' input.
My current plan is to have an MV called 'u_jump' that is non-integer, and a Var called 'jump_hist' that is essentially the 'integral' of 'u_jump', with jump_hist given an upper bound of 1. What I do right now is just pretend u_jump is an acceleration on the system by adding it to the velocity.dt() equation. This works in theory but doesn't perfectly represent the system I'm trying to control.
What would be the best example for me to learn some lessons from for implementing this? And another question: is there a way to make the IPOPT solver work for integer solutions by giving the integers a tolerance, somewhat like the MINLP solver option 'minlp_integer_tol 0.05', so that I can still get the speed of IPOPT but with the ability to incorporate integer-style variables/equations like if3(), etc.? If not, are there ways I can approach the integer solution with a non-integer solution such that, when I implement the control on a real system, the difference between the two is within some acceptable tolerance to consider it a disturbance that a feedback controller could mitigate?
Kind of a mouthful I know, my questions always are haha. Hopefully this is helpful for others in the future! Here's my code currently. Let me know if the code gives you issues or anything I could clear up in the question.
Oh, and one final note: this is currently set up as a 2D flying system. I've removed the driving dynamics (the commented-out csplines) for simplicity while implementing this 'jump' input.
Happy Holidays again everyone!
import numpy as np
import matplotlib.pyplot as plt
import math
import gekko
from gekko import GEKKO
import csv
from mpl_toolkits.mplot3d import Axes3D
class Optimizer():
    def __init__(self):
        ################# GROUND DRIVING OPTIMIZER SETTINGS ##############
        self.d = GEKKO(remote=False) # Driving on ground optimizer
        ntd = 21
        self.d.time = np.linspace(0, 1, ntd) # Time vector normalized 0-1
        # options
        self.d.options.NODES = 3
        self.d.options.SOLVER = 3
        self.d.options.IMODE = 6 # MPC mode
        self.d.options.MAX_ITER = 800
        self.d.options.MV_TYPE = 0
        self.d.options.DIAGLEVEL = 0
        # self.d.options.OTOL = 1
        # final time for driving optimizer
        self.tf = self.d.FV(value=1.0, lb=0.1, ub=10.0, name='tf')
        # allow gekko to change the tf value
        self.tf.STATUS = 1
        # time variable
        self.t = self.d.Var(value=0)
        self.d.Equation(self.t.dt()/self.tf == 1)
        # Acceleration variable
        self.a = self.d.MV(fixed_initial=False, lb = 0, ub = 1, name='a')
        self.a.STATUS = 1
        # Jumping integer variables and equations
        self.u_jump = self.d.MV(fixed_initial=False, lb=0, ub=1, integer=True)
        self.u_jump.STATUS = 1
        self.jump_hist = self.d.Var(value=0, name='jump_hist', lb=0, ub=1)
        self.d.Equation(self.jump_hist.dt() == self.u_jump*(ntd-1))
        # self.d.Equation(1.0 >= self.jump_hist)
        # pitch input throttle (rotation of system)
        self.u_p = self.d.MV(fixed_initial=False, lb = -1, ub=1)
        self.u_p.STATUS = 1
        # Final variable that allows you to set an objective function considering only final state
        self.p_d = np.zeros(ntd)
        self.p_d[-1] = 1.0
        self.final = self.d.Param(value = self.p_d, name='final')
        # Model constants and parameters
        self.Dp = self.d.Const(value = 2.7982, name='D_pitch')
        self.Tp = self.d.Const(value = 12.146, name='T_pitch')
        self.pi = self.d.Const(value = 3.14159, name='pi')
        self.g = self.d.Const(value = 0, name='Fg')
        self.jump_magnitude = self.d.Param(value = 3000, name = 'jump_mag')

    def optimize2D(self, si, sf, vi, vf, ri, omegai): # these are 1x2 vectors s or v [x, z]
        # variables and initial conditions
        # Position in 2d
        self.sx = self.d.Var(value=si[0], lb=-4096, ub=4096, name='x') # x position
        # self.sy = self.d.Var(value=si[1], lb=-5120, ub=5120, name='y') # y position
        self.sz = self.d.Var(value = si[1])
        # Pitch rotation and angular velocity
        self.pitch = self.d.Var(value = ri, name='pitch', lb=-1*self.pi, ub=self.pi)
        self.pitch_dot = self.d.Var(fixed_initial=False, name='pitch_dot')
        # Velocity in 2D
        self.v_mag = self.d.Var(value=(vi), name='v_mag')
        self.vx = self.d.Var(value=np.cos(ri), name='vx') # x velocity
        # self.vy = self.d.Var(value=(np.sin(ri) * vi), name='vy') # y velocity
        self.vz = self.d.Var(value = (np.sin(ri) * vi), name='vz')
        ## Non-linear state dependent dynamics described as csplines.
        # curvature vs vel as a cubic spline for driving state
        cur = np.array([0.0069, 0.00398, 0.00235, 0.001375, 0.0011, 0.00088])
        v_cur = np.array([0,500,1000,1500,1750,2300])
        v_cur_fine = np.linspace(0,2300,100)
        cur_fine = np.interp(v_cur_fine, v_cur, cur)
        self.curvature = self.d.Var(name='curvature')
        self.d.cspline(self.v_mag, self.curvature, v_cur_fine, cur_fine)
        # throttle vs vel as cubic spline for driving state
        ba=991.666 # Boost acceleration magnitude
        kv = np.array([0, 1410, 2300]) # velocity input
        ka = np.array([1600+ba, 0+ba, 0+ba]) # acceleration output
        kv_fine = np.linspace(0, 2300, 100) # Higher resolution
        ka_fine = np.interp(kv_fine, kv, ka) # Piecewise linear high resolution of ka
        self.throttle_acceleration = self.d.Var(fixed_initial=False, name='throttle_accel')
        self.d.cspline(self.v_mag, self.throttle_acceleration, kv_fine, ka_fine)
        # Differential equations
        # Velocity diff eqs
        self.d.Equation(self.vx.dt()/self.tf == (self.a*ba * self.d.cos(self.pitch)*self.jump_hist) + (self.a * self.throttle_acceleration * (1-self.jump_hist)) + (self.u_jump * self.jump_magnitude * self.d.cos(self.pitch + np.pi/2)))
        self.d.Equation(self.vz.dt()/self.tf == (self.a*ba * self.d.sin(self.pitch)*self.jump_hist) - (self.g * (1-self.jump_hist)) + (self.u_jump * self.jump_magnitude * self.d.sin(self.pitch + np.pi/2)))
        self.d.Equation(self.v_mag == self.d.sqrt((self.vx*self.vx) + (self.vz*self.vz)))
        self.d.Equation(2300 >= self.v_mag)
        # Position diff eqs
        self.d.Equation(self.sx.dt()/self.tf == self.vx)
        # self.d.Equation(self.sy.dt()/self.tf == self.vy)
        self.d.Equation(self.sz.dt()/self.tf == self.vz)
        # Orientation diff eqs
        self.d.Equation(self.pitch_dot.dt()/self.tf == ((self.Tp * self.u_p) + (self.Dp * self.pitch_dot * (1 - self.d.abs2(self.u_p)))) * self.jump_hist)
        self.d.Equation(self.pitch.dt()/self.tf == self.pitch_dot)
        # Objective functions
        # Final Position Objectives
        self.d.Minimize(self.final*1e2*((self.sz-sf[1])**2)) # z final position objective
        self.d.Minimize(self.final*1e2*((self.sx-sf[0])**2)) # x final position objective
        # Final Velocity Objectives
        # self.d.Obj(self.final*1e3*(self.vz-vf[1])**2)
        # self.d.Obj(self.final*1e3*(self.vx-vf[0])**2)
        # Minimum Time Objective
        self.d.Minimize(1e4*self.tf)
        # solve
        # self.d.solve('http://127.0.0.1') # Solve with local apmonitor server
        self.d.open_folder()
        self.d.solve(disp=True)
        self.ts = np.multiply(self.d.time, self.tf.value[0])
        return self.a, self.u_p, self.ts

    def getTrajectoryData(self):
        return [self.ts, self.sx, self.sz, self.vx, self.vz, self.pitch, self.pitch_dot]

    def getInputData(self):
        return [self.ts, self.a]
# Main Code
opt = Optimizer()
s_ti = [0,0]
v_ti = 0
s_tf = [1000,500]
v_tf = [00.00, 00.0]
r_ti = 0 # inital orientation of the car
omega_ti = 0.0 # initial angular velocity of car
acceleration, turning, t_star = opt.optimize2D(s_ti, s_tf, v_ti, v_tf, r_ti, omega_ti)
# Printing stuff
# print('u', acceleration.value)
# print('tf', opt.tf.value)
# print('tf', opt.tf.value[0])
# print('u jump', opt.jump)
# for i in opt.u_jump: print(i.value)
print('u_jump', opt.u_jump.value)
print('jump his', opt.jump_hist.value)
print('v_mag', opt.v_mag.value)
print('a', opt.a.value)
# Plotting stuff
ts = opt.d.time * opt.tf.value[0]
t_max = opt.tf.value[0]
x_max = np.max(opt.sx.value)
vx_max = np.max(opt.vx.value)
z_max = np.max(opt.sz.value)
vz_max = np.max(opt.vz.value)
# plot results
fig = plt.figure(2)
ax = fig.add_subplot(111, projection='3d')
# plt.subplot(2, 1, 1)
Axes3D.plot(ax, opt.sx.value, ts, opt.sz.value, c='r', marker ='o')
plt.ylim(0, t_max)
plt.xlim(0, x_max)
plt.ylabel('time')
plt.xlabel('Position x')
ax.set_zlabel('position z')
n=5 #num plots
fig = plt.figure(3)
ax = fig.add_subplot(111, projection='3d')
# plt.subplot(2, 1, 1)
Axes3D.plot(ax, opt.vx.value, ts, opt.vz.value, c='r', marker ='o')
plt.ylim(0, t_max)
plt.xlim(-1*vx_max, vx_max)
# plt.zlim(0, 2000)
plt.ylabel('time')
plt.xlabel('Velocity x')
ax.set_zlabel('vz')
plt.figure(1)
plt.subplot(n,1,1)
plt.plot(ts, opt.a, 'r-')
plt.ylabel('acceleration')
plt.subplot(n,1,2)
plt.plot(ts, np.multiply(opt.pitch, 1/math.pi), 'r-')
plt.ylabel('pitch orientation')
plt.subplot(n, 1, 3)
plt.plot(ts, opt.v_mag, 'b-')
plt.ylabel('vmag')
plt.subplot(n, 1, 4)
plt.plot(ts, opt.u_p, 'b-')
plt.ylabel('u_p')
plt.subplot(n, 1, 5)
plt.plot(ts, opt.u_jump, 'b-')
plt.plot(ts, opt.jump_hist, 'r-')
plt.ylabel('jump(b), jump hist(r)')
plt.show()
print('asdf')
One thing to try is to solve with IPOPT for initialization and then with APOPT to get the integer solution. Another thing to try is an MPCC formulation for a switching condition that does not rely on a binary variable. I've found the MPCC form to be much less reliable than a binary-variable switching condition because the solver often gets stuck at the saddle point. However, integer solutions often take much longer to solve.
self.d.options.SOLVER=3
self.d.solve(disp=True)
self.d.options.TIME_SHIFT=0
self.d.options.SOLVER=1
self.d.solve(disp=True)
Here is the solution with IPOPT:
EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is 506284.8987787149
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 7.4613000000000005 sec
Objective : 506284.8987787149
Successful solution
---------------------------------------------------
The integer solution is obtained with APOPT.
--------- APM Model Size ------------
Variable time shift OFF
Number of state variables: 1286
Number of total equations: - 1180
Number of slack variables: - 40
---------------------------------------
Degrees of freedom : 66
----------------------------------------------
Dynamic Control with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: 2.72 NLPi: 92 Dpth: 0 Lvs: 3 Obj: 5.07E+05 Gap: NaN
Iter: 2 I: -1 Tm: 0.53 NLPi: 17 Dpth: 1 Lvs: 2 Obj: 5.07E+05 Gap: NaN
Iter: 3 I: -9 Tm: 47.59 NLPi: 801 Dpth: 1 Lvs: 1 Obj: 5.07E+05 Gap: NaN
Iter: 4 I: 0 Tm: 2.26 NLPi: 35 Dpth: 1 Lvs: 3 Obj: 5.08E+05 Gap: NaN
--Integer Solution: 2.54E+07 Lowest Leaf: 5.08E+05 Gap: 1.92E+00
Iter: 5 I: 0 Tm: 3.56 NLPi: 32 Dpth: 2 Lvs: 2 Obj: 2.54E+07 Gap: 1.92E+00
Iter: 6 I: -9 Tm: 54.65 NLPi: 801 Dpth: 2 Lvs: 1 Obj: 5.08E+05 Gap: 1.92E+00
Iter: 7 I: -1 Tm: 2.18 NLPi: 83 Dpth: 2 Lvs: 0 Obj: 5.08E+05 Gap: 1.92E+00
No additional trial points, returning the best integer solution
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 113.5842 sec
Objective : 2.5419931399165962E+7
Successful solution
---------------------------------------------------
APOPT chooses not to jump to minimize the final objective. You may need to add a hard constraint that the vsum() of u_jump is 1. There are additional tutorials on MPCC and integer / binary forms of switching conditions in the Optimization course.
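Here is a minimal sketch of that hard constraint, assuming Gekko's vsum() summation over the time horizon (check the docs for your version); the names follow the question's code:
# total number of jump intervals over the horizon, limited to one
total_jumps = self.d.vsum(self.u_jump)
self.d.Equation(total_jumps <= 1.0)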
Thanks for sharing your application and keep us updated!

Discrete path tracking with python gekko

I have some discrete data points representing a path, and I want to minimize the distance between an object's trajectory and these path points, subject to some other constraints. I'm trying out Gekko as a tool to solve this problem, and to that end I made a simple test problem using data points from a parabola and a constraint on the path. My attempt to solve it is:
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
import time
#path data points
x_ref = np.linspace(0, 4, num=21)
y_ref = - np.square(x_ref) + 16
#constraint for visualization purposes
x_bound = np.linspace(0, 4, num=10)
y_bound = 1.5*x_bound + 4
def distfunc(x,y,xref,yref,p):
    '''
    Shortest distance from (x,y) to the path points (xref, yref)
    '''
    dtemp = []
    for i in range(len(xref)):
        d = (x-xref[i])**2+(y-yref[i])**2
        dtemp.append(d)
    min_id = dtemp.index(min(dtemp))
    if min_id == 0:
        next_id = min_id+1
    elif min_id == len(xref)-1:
        next_id = min_id-1
    else:
        d2 = (x-xref[min_id-1])**2+(y-yref[min_id-1])**2
        d1 = (x-xref[min_id+1])**2+(y-yref[min_id+1])**2
        d_next = [d2, d1]
        next_id = min_id + 2*d_next.index(min(d_next)) - 1
    n1 = xref[next_id] - xref[min_id]
    n2 = yref[next_id] - yref[min_id]
    nnorm = p.sqrt(n1**2+n2**2)
    n1 = n1 / nnorm
    n2 = n2 / nnorm
    difx = x-xref[min_id]
    dify = y-yref[min_id]
    dot = difx*n1 + dify*n2
    deltax = difx - dot*n1
    deltay = dify - dot*n2
    return deltax**2+deltay**2
v_ref = 3
now = time.time()
p = GEKKO(remote=False)
p.time = np.linspace(0,10,21)
x = p.Var(value=0)
y = p.Var(value=16)
vx = p.Var(value=1)
vy = p.Var(value=0)
ax = p.Var(value=0)
ay = p.Var(value=0)
p.options.IMODE = 6
p.options.SOLVER = 3
p.options.WEB = 0
x_refg = p.Param(value=x_ref)
y_refg = p.Param(value=y_ref)
v_ref = p.Const(value=v_ref)
p.Obj(distfunc(x,y,x_refg,y_refg,p))
p.Obj( (p.sqrt(vx**2+vy**2) - v_ref)**2 + ax**2 + ay**2)
p.Equation(x.dt()==vx)
p.Equation(y.dt()==vy)
p.Equation(vx.dt()==ax)
p.Equation(vy.dt()==ay)
p.Equation(y>=1.5*x+4)
p.solve(disp=False, debug=True)
print(f'run time: {time.time()-now}')
plt.plot(x_ref, y_ref)
plt.plot(x_bound, y_bound)
plt.plot(x.value,y.value)
plt.show()
This is the result that I get. As you can see, it's not exactly the solution one should expect. For a reference solution closer to what you might expect, here is what I get using the cost function below:
p.Obj((x-x_refg)**2 + (y-y_refg)**2 + ax**2 + ay**2)
However, since what I actually want is the shortest distance to a path described by these points, I expect distfunc to be closer to what I want, because the shortest distance is most likely to some interpolated point. So my question is twofold:
Is this the correct Gekko expression/formulation for the objective function?
My other goal is solution speed, so is there a more efficient way of expressing this problem for Gekko?
You can't define an objective function that changes based on conditions unless you insert logical conditions that are continuously differentiable, such as with the if2 or if3 functions. Gekko evaluates the symbolic model once and then passes it off to an executable for solution. It only calls the Python model build once because it compiles the model to efficient byte-code for execution. You can see the model that you created with p.open_folder(). The model file ends with the apm extension: gk_model0.apm.
Model
Constants
i0 = 3
End Constants
Parameters
p1
p2
p3
p4
End Parameters
Variables
v1 = 0
v2 = 16
v3 = 1
v4 = 0
v5 = 0
v6 = 0
End Variables
Equations
v3=$v1
v4=$v2
v5=$v3
v6=$v4
v2>=(((1.5)*(v1))+4)
minimize (((((v1-0.0)-((((((v1-0.0))*((0.2/sqrt(0.04159999999999994))))+(((v2-16.0))&
*((-0.03999999999999915/sqrt(0.04159999999999994))))))*&
((0.2/sqrt(0.04159999999999994))))))^(2))+((((v2-16.0)&
-((((((v1-0.0))*((0.2/sqrt(0.04159999999999994))))+(((v2-16.0))&
*((-0.03999999999999915/sqrt(0.04159999999999994))))))&
*((-0.03999999999999915/sqrt(0.04159999999999994))))))^(2)))
minimize (((((sqrt((((v3)^(2))+((v4)^(2))))-i0))^(2))+((v5)^(2)))+((v6)^(2)))
End Equations
End Model
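For reference, here is a minimal sketch of the if3 form mentioned above (a hypothetical toy switch, not taken from the question's model); if3 returns the second argument when the condition is negative and the third when it is non-negative, using a binary variable internally:
from gekko import GEKKO
m = GEKKO(remote=False)
x = m.Param(value=0.3)
# y switches between two expressions based on the sign of (x - 0.5)
y = m.if3(x - 0.5, x**2, 1 - x)   # y = x**2 if x < 0.5, else 1 - x
m.options.SOLVER = 1              # APOPT handles the binary variable from if3
m.solve(disp=False)
print(y.value[0])                 # expected 0.09 for x = 0.3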
One strategy is to split your problem into multiple optimization problems that are all minimal time problems where you navigate to the first way-point and then re-initialize the problem to navigate to the second way-point, and so on. If you want to preserve momentum and anticipate the turning then you'll need to use more advanced methods such as shown in the Pigeon / Eagle tracking problem (see source files) or similar to a trajectory optimization with UAVs or HALE UAVs (see references below).
Martin, R.A., Gates, N., Ning, A., Hedengren, J.D., Dynamic Optimization of High-Altitude Solar Aircraft Trajectories Under Station-Keeping Constraints, Journal of Guidance, Control, and Dynamics, 2018, doi: 10.2514/1.G003737.
Gates, N.S., Moore, K.R., Ning, A., Hedengren, J.D., Combined Trajectory, Propulsion and Battery Mass Optimization for Solar-Regenerative High-Altitude Long Endurance Unmanned Aircraft, AIAA Science and Technology Forum (SciTech), 2019.

GEKKO RTO vs MPC MODES

This is a question derived from this one. After posting my question I found a solution (more like a patch to force the optimizer to optimize). There is something that baffles me. John Hedengren correctly points out that a b=1.0 in the ODE leads to an infeasible solution with IMODE=6. However in my patchy work around with IMODE=3 I do get a solution.
I'm trying to understand what's happening here by reading GEKKO's documentation for IMODE=3 and IMODE=6, but it is not clear to me:
IMODE=3
RTO Real-Time Optimization (RTO) is a steady-state mode that allows
decision variables (FV or MV types with STATUS=1) or additional
variables in excess of the number of equations. An objective function
guides the selection of the additional variables to select the optimal
feasible solution. RTO is the default mode for Gekko if
m.options.IMODE is not specified.
IMODE=6
MPC Model Predictive Control (MPC) is implemented with IMODE=6 as a
simultaneous solution or with IMODE=9 as a sequential shooting method.
Why does b=1.0 work in one mode but not in the other?
This is my patchy work around with IMODE=3 and b=1.0:
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO(remote=False)
m.time = np.linspace(0,23,24)
#initialize variables
T_e = [50.,50.,50.,50.,45.,45.,45.,60.,60.,63.,\
64.,45.,45.,50.,52.,53.,53.,54.,54.,53.,52.,51.,50.,45.]
temp_low = [55.,55.,55.,55.,55.,55.,55.,68.,68.,68.,68.,55.,55.,68.,\
68.,68.,68.,55.,55.,55.,55.,55.,55.,55.]
temp_upper = [75.,75.,75.,75.,75.,75.,75.,70.,70.,70.,70.,75.,75.,\
70.,70.,70.,70.,75.,75.,75.,75.,75.,75.,75.]
TOU = [0.05,0.05,0.05,0.05,0.05,0.05,0.05,200.,200.,\
200.,200.,200.,200.,200.,200.,200.,200.,200.,\
200.,200.,200.,0.05,0.05,0.05]
b = m.Param(value=1.)
k = m.Param(value=0.05)
u = [m.MV(0.,lb=0.,ub=1.) for i in range(24)]
# Controlled Variable
T = [m.SV(60.,lb=temp_low[i],ub=temp_upper[i]) for i in range(24)]
for i in range(24):
    u[i].STATUS = 1
for i in range(23):
    m.Equation( T[i+1]-T[i]-k*(T_e[i]-T[i])-b*u[i]==0.0 )
m.Obj(np.dot(TOU,u))
m.options.IMODE = 3
m.solve(debug=True)
myu = [u[0:][i][0] for i in range(24)]
print(myu)
myt =[T[0:][i][0] for i in range(24)]
plt.plot(myt)
plt.plot(temp_low)
plt.plot(temp_upper)
plt.show()
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(myu,color='b')
ax2.plot(TOU,color='k')
plt.show()
Results:
The difference between the infeasible IMODE=6 and the feasible IMODE=3 is that the IMODE=3 case allows the temperature initial condition to be adjusted by the optimizer. The optimizer recognizes that the initial condition can be changed, so it modifies it to 75 both to stay feasible and to minimize the future energy consumption.
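If the same degree of freedom is wanted in IMODE=6, the initial condition can be freed explicitly (a sketch; check that your Gekko version provides free_initial):
T = m.SV(value=75)
m.free_initial(T)   # let the optimizer adjust T at the first time point,
                    # which is what the IMODE=3 formulation did implicitly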
from gekko import GEKKO
import numpy as np
m = GEKKO(remote=False)
m.time = np.linspace(0,23,24)
#initialize variables
T_external = [50.,50.,50.,50.,45.,45.,45.,60.,60.,63.,\
64.,45.,45.,50.,52.,53.,53.,54.,54.,\
53.,52.,51.,50.,45.]
temp_low = [55.,55.,55.,55.,55.,55.,55.,68.,68.,68.,68.,\
55.,55.,68.,68.,68.,68.,55.,55.,55.,55.,55.,55.,55.]
temp_upper = [75.,75.,75.,75.,75.,75.,75.,70.,70.,70.,70.,75.,\
75.,70.,70.,70.,70.,75.,75.,75.,75.,75.,75.,75.]
TOU_v = [0.05,0.05,0.05,0.05,0.05,0.05,0.05,200.,200.,200.,200.,\
200.,200.,200.,200.,200.,200.,200.,200.,200.,200.,0.05,\
0.05,0.05]
b = m.Param(value=1.)
k = m.Param(value=0.05)
T_e = m.Param(value=T_external)
TL = m.Param(value=temp_low)
TH = m.Param(value=temp_upper)
TOU = m.Param(value=TOU_v)
u = m.MV(lb=0, ub=1)
u.STATUS = 1 # allow optimizer to change
# Controlled Variable
T = m.SV(value=75)
m.Equations([T>=TL,T<=TH])
m.Equation(T.dt() == k*(T_e-T) + b*u)
m.Minimize(TOU*u)
m.options.IMODE = 6
m.solve(disp=True,debug=True)
import matplotlib.pyplot as plt
plt.subplot(2,1,1)
plt.plot(m.time,temp_low,'k--')
plt.plot(m.time,temp_upper,'k--')
plt.plot(m.time,T.value,'r-')
plt.ylabel('Temperature')
plt.subplot(2,1,2)
plt.step(m.time,u.value,'b:')
plt.ylabel('Heater')
plt.xlabel('Time (hr)')
plt.show()
If you went for another day (48 hours), you'd probably see that the problem would eventually be infeasible because the smaller heater b=1 wouldn't be able to meet the temperature lower constraint.
One of the advantages of using IMODE=6 is that you can write the differential equation instead of doing the discretization yourself. With IMODE=3, you are using an Euler method for the differential equation. The default discretization for IMODE>=4 is NODES=2, equivalent to your Euler finite-difference method. Setting NODES=3-6 increases the accuracy with orthogonal collocation on finite elements.
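For reference, the hand-discretized equation in the question, T[i+1] - T[i] = k*(T_e[i] - T[i]) + b*u[i], is exactly a forward-Euler step T[i+1] = T[i] + dt*(k*(T_e[i] - T[i]) + b*u[i]) with dt = 1 hr, which is why it matches the IMODE=6 model with NODES=2.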

How to constrain model variables in GEKKO

I would like to constrain the variable value u < 1 in my model. I added ub=1 to the variable definition, u = m.Var(name='u', value=0, lb=-2, ub=1), but it resulted in "No solution found" (EXIT: Converged to a point of local infeasibility. Problem may be infeasible.). I guess I have to reformulate the problem to avoid this, but I have not been able to find examples of how this should be done. How do I write a proper model to avoid infeasible solutions when constraining variable values?
I have tried to reformulate the problem by adding an equation like m.Equation(u < 1), with no success.
import numpy as np
from gekko import GEKKO
import matplotlib.pyplot as pyplt
m = GEKKO(remote=False)
t = np.linspace(0, 1000, 101) # time
d = np.ones(t.shape)
d[0:10] = 0
y_delay=0
# Add data to model
m.time = t
K = m.Const(0.01, name='K')
r = m.Const(name='r', value=0) # Reference
d = m.Param(name='d', value=d) # Disturbance
y = m.Var(name='y', value=0, lb=-2, ub=2) # State variable
u = m.Var(name='u', value=0, lb=-2, ub=1) # Output
e = m.Var(name='e', value=0)
Tc = m.FV(name='Tc', value=1200, lb=60, ub=1200) # time constant
# Update variable status
Tc.STATUS = 1 # Optimizer can adjust value
Kp = m.Intermediate(1 / K * 1 / Tc, name='Kp')
Ti = m.Intermediate(4 * Tc, name='Ti')
# Model equations
m.Equations([y.dt() == K * (u-d),
e == r-y,
u.dt() == Kp*e.dt()+Kp/Ti*e])
# Model constraints
m.Equation(y < 0.5)
m.Equation(y > -0.5)
# Model objective
m.Obj(-Tc)
# options
m.options.IMODE = 6 # Problem type: 6 = Dynamic optimization
# solve
m.solve(disp=True, debug=True)
print('Tc: %6.2f [s]' % (Tc.value[-1], ))
fig1, (ax1, ax2, ax3) = pyplt.subplots(3, sharex='all')
ax1.plot(t, y.value)
ax1.set_ylabel("y", fontsize=8), ax1.grid(True, which='both')
ax2.plot(t, e.value)
ax2.set_ylabel("e", fontsize=8), ax2.grid(True, which='both')
ax3.plot(t, u.value)
ax3.plot(t, d.value)
ax3.set_ylabel("u and d", fontsize=8), ax3.grid(True, which='both')
pyplt.show()
EXIT: Converged to a point of local infeasibility. Problem may be infeasible.
An error occured.
The error code is 2
If I change the upper bound of u to 2, the optimization problem is solved as expected.
Hard constraints on variables can lead to an infeasible solution, as you observed. I recommend that you use soft constraints by specifying the variable y as a Controlled Variable and set an upper and lower set point range with SPHI and SPLO.
y = m.CV(name='y', value=0) # Controlled variable
y.STATUS = 1
y.TR_INIT = 0
y.SPHI = 0.5
y.SPLO = -0.5
I also removed the lb and ub from y and u to not give them hard bounds that can lead to the infeasibility. You also have an objective to maximize the value of Tc with m.Obj(-Tc). It goes to the maximum limit: 1200 when the solver is able to adjust the value. As you can see from the plot, the value of y exceeds the setpoint range. It may not be possible for the controller to keep it within that range. A soft constraint (objective based) approach to constraints penalizes deviations but does not lead to an infeasible solution. If you need to increase the penalty on violations of the SPHI or SPLO, the parameters WSPHI and WSPLO can be adjusted.
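A minimal sketch of adjusting those penalty weights (the values here are arbitrary examples):
y.WSPHI = 100   # penalty weight on violating the upper set point SPHI
y.WSPLO = 100   # penalty weight on violating the lower set point SPLO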
It appears that you have a first order dynamic model and you are trying to optimize PID parameters. If you need to model saturation of the controller output (actuator) then the if3, max3, min3 or corresponding if2, max2, min2 functions may be useful. There is more information on CV objectives and tuning in the Dynamic Optimization course.
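Here is a minimal sketch of modeling that actuator saturation with max3/min3 (the names u_raw and u_sat are hypothetical; the idea is to let the PI law act on an unclipped signal and feed the clipped signal to the process):
u_raw = m.Var(value=0)                           # unclipped PI controller output
m.Equation(u_raw.dt() == Kp*e.dt() + (Kp/Ti)*e)  # PI law acts on the raw signal
u_sat = m.min3(m.max3(u_raw, 0), 1)              # saturate to [0, 1]
m.Equation(y.dt() == K*(u_sat - d))              # process sees the saturated input
# max3/min3 introduce binary switches, so use m.options.SOLVER = 1 (APOPT)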
Here is a feasible solution to your problem:
import numpy as np
from gekko import GEKKO
import matplotlib.pyplot as pyplt
m = GEKKO() # remote=False
t = np.linspace(0, 1000, 101) # time
d = np.ones(t.shape)
d[0:10] = 0
y_delay=0
# Add data to model
m.time = t
K = m.Const(0.01, name='K')
r = m.Const(name='r', value=0) # Reference
d = m.Param(name='d', value=d) # Disturbance
e = m.Var(name='e', value=0)
u = m.Var(name='u', value=0) # Output
Tc = m.FV(name='Tc', value=1200, lb=60, ub=1200) # time constant
y = m.CV(name='y', value=0) # Controlled variable
y.STATUS = 1
y.TR_INIT = 0
y.SPHI = 0.5
y.SPLO = -0.5
# Update variable status
Tc.STATUS = 1 # Optimizer can adjust value
Kp = m.Intermediate((1 / K) * (1 / Tc), name='Kp')
Ti = m.Intermediate(4 * Tc, name='Ti')
# Model equations
m.Equations([y.dt() == K * (u-d),
e == r-y,
u.dt() == Kp*e.dt()+(Kp/Ti)*e])
# Model constraints
#m.Equation(y < 0.5)
#m.Equation(y > -0.5)
# Model objective
m.Obj(-Tc)
# options
m.options.IMODE = 6 # Problem type: 6 = Dynamic optimization
m.options.SOLVER = 3
m.options.MAX_ITER = 1000
# solve
m.solve(disp=True, debug=True)
print('Tc: %6.2f [s]' % (Tc.value[-1], ))
fig1, (ax1, ax2, ax3) = pyplt.subplots(3, sharex='all')
ax1.plot(t, y.value)
ax1.plot([min(t),max(t)],[0.5,0.5],'k--')
ax1.plot([min(t),max(t)],[-0.5,-0.5],'k--')
ax1.set_ylabel("y", fontsize=8), ax1.grid(True, which='both')
ax2.plot(t, e.value)
ax2.set_ylabel("e", fontsize=8), ax2.grid(True, which='both')
ax3.plot(t, u.value)
ax3.plot(t, d.value)
ax3.set_ylabel("u and d", fontsize=8), ax3.grid(True, which='both')
pyplt.show()
Thanks for an extensive and useful answer to my question. I really appreciate it.
As you correctly observed, I am trying to optimize tuning parameters for my simple control problem. I have executed your code with soft constraints, and it certainly solves the feasibility issue. I also added the WSPHI/WSPLO parameters and set their values high to keep the solution within the constraints. Still, I would like to have a model where the control output ("u") is bounded to [0, 1]. Based on your answer, I probably must add "if" or "max/min" statements in the model to avoid an infeasible set of equations when "u" hits the bound, something like: if u < 0 then u.dt() = 0, else u.dt() = Kp*e .... Could it alternatively be possible to add a variable (a type of slack variable) to ensure feasibility of the equation set? I will also investigate the material in the Dynamic Optimization course links to get a better understanding of dynamic modelling. Thanks again for guiding me in the right direction on this issue.

Why do the trace values have periods of (unwanted) stability?

I have a fairly simple test data set I am trying to fit with pymc3.
The result generated by traceplot looks something like this.
Essentially the traces of all parameters look like there is a standard 'caterpillar' for 100 iterations, followed by a flat line for 750 iterations, followed by the caterpillar again.
The initial 100 iterations happen after 25,000 ADVI iterations, and 10,000 tune iterations. If I change these amounts, I randomly will/won't have these periods of unwanted stability.
I'm wondering if anyone has any advice about how I can stop this from happening - and what is causing it?
Thanks.
The full code is below. In brief, I am generating a set of 'phases' (-pi -> pi) with a corresponding set of values y = a(j)*sin(phase) + b(j)*cos(phase). a and b are drawn for each subject j at random, but are related to each other.
I then essentially try to fit this same model.
Edit: Here is a similar example, running for 25,000 iterations. Something goes wrong around iteration 20,000.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
%matplotlib inline
np.random.seed(0)
n_draw = 2000
n_tune = 10000
n_init = 25000
init_string = 'advi'
target_accept = 0.95
##
# Generate some test data
# Just generates:
# x a vector of phases
# y a vector corresponding to some sinusoidal function of x
# subject_idx a vector corresponding to which subject x is
#9 Subjects
N_j = 9
#Each with 276 measurements
N_i = 276
sigma_y = 1.0
mean = [0.1, 0.1]
cov = [[0.01, 0], [0, 0.01]] # diagonal covariance
x_sub = np.zeros((N_j,N_i))
y_sub = np.zeros((N_j,N_i))
y_true_sub = np.zeros((N_j,N_i))
ab_sub = np.zeros((N_j,2))
tuning_sub = np.zeros((N_j,1))
sub_ix_sub = np.zeros((N_j,N_i))
for j in range(0,N_j):
    aj,bj = np.random.multivariate_normal(mean, cov)
    #aj = np.abs(aj)
    #bj = np.abs(bj)
    xj = np.random.uniform(-1,1,size = (N_i,1))*np.pi
    xj = np.sort(xj) # for convenience
    yj_true = aj*np.sin(xj) + bj*np.cos(xj)
    yj = yj_true + np.random.normal(scale=sigma_y, size=(N_i,1))
    x_sub[j,:] = xj.ravel()
    y_sub[j,:] = yj.ravel()
    y_true_sub[j,:] = yj_true.ravel()
    ab_sub[j,:] = [aj,bj]
    tuning_sub[j,:] = np.sqrt(((aj**2)+(bj**2)))
    sub_ix_sub[j,:] = [j]*N_i
x = np.ravel(x_sub)
y = np.ravel(y_sub)
subject_idx = np.ravel(sub_ix_sub)
subject_idx = np.asarray(subject_idx, dtype=int)
##
# Fit model
hb1_model = pm.Model()
with hb1_model:
    # Hyperpriors
    hb1_mu_a = pm.Normal('hb1_mu_a', mu=0., sd=100)
    hb1_sigma_a = pm.HalfCauchy('hb1_sigma_a', 4)
    hb1_mu_b = pm.Normal('hb1_mu_b', mu=0., sd=100)
    hb1_sigma_b = pm.HalfCauchy('hb1_sigma_b', 4)
    # We fit a mixture of a sine and cosine with these two coefficients
    # allowed to be different for each subject
    hb1_aj = pm.Normal('hb1_aj', mu=hb1_mu_a, sd=hb1_sigma_a, shape = N_j)
    hb1_bj = pm.Normal('hb1_bj', mu=hb1_mu_b, sd=hb1_sigma_b, shape = N_j)
    # Model error
    hb1_eps = pm.HalfCauchy('hb1_eps', 5)
    hb1_linear = hb1_aj[subject_idx]*pm.math.sin(x) + hb1_bj[subject_idx]*pm.math.cos(x)
    hb1_linear_like = pm.Normal('y', mu = hb1_linear, sd=hb1_eps, observed=y)
with hb1_model:
    hb1_trace = pm.sample(draws=n_draw, tune = n_tune,
                          init = init_string, n_init = n_init,
                          target_accept = target_accept)
pm.traceplot(hb1_trace)
To partially answer my own question: After playing with this for a while, it looks like the problem might be due to the hyperprior standard deviation going to 0. I am not sure why the algorithm should get stuck there though (testing a small standard deviation can't be that uncommon...).
In any case, two solutions that seem to alleviate the problem (although they don't remove it entirely) are:
1) Add an offset to the definitions of the standard deviation. e.g.:
offset = 1e-2
hb1_sigma_a = offset + pm.HalfCauchy('hb1_sigma_a', 4)
2) Instead of using a HalfCauchy or HalfNormal for the SD prior, use a logNormal distribution set so that 0 is unlikely.
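A minimal sketch of option 2 (the mu and sd values here are arbitrary and would need to be chosen for the scale of your data; this line would replace the HalfCauchy prior inside the model context):
# log-normal prior on the group SD: probability mass is concentrated away from zero
hb1_sigma_a = pm.Lognormal('hb1_sigma_a', mu=np.log(0.1), sd=0.5)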
I'd look at the divergences, as explained in notes and literature on Hamiltonian Monte Carlo; see, e.g., here and here.
with hb1_model:
    np.savetxt('diverging.csv', hb1_trace['diverging'])
As a dirty solution, you can try to increase target_accept, perhaps.
Good luck!
