Fit amplitude (frequency response) of a capacitor with lmfit

I am trying to fit measured data with lmfit.
My goal is to get the parameters of the capacitor using an equivalent circuit model.
So I want to create a model with parameters (C, R1, L1, ...) and fit it to the measured data.
I know that the resonance frequency is at the global minimum of the amplitude, and that the amplitude there must equal R1. C is also known.
So I could fix the parameters C and R1. From the resonance frequency I could calculate L1 as well.
I created the model, but the fit doesn't work correctly.
Maybe someone could help me with this.
Thanks in advance.
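For reference, the model implemented in the code below corresponds to this impedance (read off from the code, with $\omega = 2\pi f$):

$$Z(\omega) = \frac{Z_1 X_P}{Z_1 + X_P}, \qquad Z_1 = R_1 + \frac{X_C R_p}{X_C + R_p} + j\omega L_1, \qquad X_C = \frac{1}{j\omega C}, \quad X_P = \frac{1}{j\omega C_p}$$

and the quantity fitted to the data is the amplitude $|Z(\omega)|$.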
import numpy as np
import matplotlib.pyplot as plt
from lmfit import minimize, Parameters, report_fit

params = Parameters()
params.add('C', value=220e-9, vary=False)
params.add('L1', value=0.00001, min=0, max=0.1)
params.add('R1', value=globalmin, vary=False)  # globalmin: measured amplitude at the resonance (global minimum)
params.add('Rp', value=10000, min=0, max=10e20)
params.add('Cp', value=0.1, min=0, max=0.1)

def get_elements(params, freq, data):
    C = params['C'].value
    L1 = params['L1'].value
    R1 = params['R1'].value
    Rp = params['Rp'].value
    Cp = params['Cp'].value
    XC = 1/(1j*2*np.pi*freq*C)
    XL = 1j*2*np.pi*freq*L1
    XP = 1/(1j*2*np.pi*freq*Cp)
    Z1 = R1 + XC*Rp/(XC+Rp) + XL
    real = np.real(Z1*XP/(Z1+XP))
    imag = np.imag(Z1*XP/(Z1+XP))
    model = np.sqrt(real**2 + imag**2)
    #model = np.sqrt(R1**2 + ((2*np.pi*freq*L1 - 1/(2*np.pi*freq*C))**2))
    #model = (np.arctan((2*np.pi*freq*L1 - 1/(2*np.pi*freq*C))/R1)) * 360/((2*np.pi))
    return data - model

out = minimize(get_elements, params, args=(freq, data))
report_fit(out)
#make reconstruction for plotting
C = out.params['C'].value
L1 = out.params['L1'].value
R1 = out.params['R1'].value
Rp = out.params['Rp'].value
Cp = out.params['Cp'].value
XC = 1/(1j*2*np.pi*freq*C)
XL = 1j*2*np.pi*freq*L1
XP = 1/(1j*2*np.pi*freq*Cp)
Z1 = R1 + XC*Rp/(XC+Rp) + XL
real = np.real(Z1*XP/(Z1+XP))
imag = np.imag(Z1*XP/(Z1+XP))
reconst = np.sqrt(real**2 + imag**2)
reconst_phase = np.arctan(imag/real)* 360/(2*np.pi)
'''
PLOTTING
'''
# plot of fitted model vs measured data (amplitude)
fig = plt.figure(figsize=(40,15))
file_title = 'Measured Data'
plt.subplot(311)
plt.xscale('log')
plt.yscale('log')
plt.xlim([min(freq), max(freq)])
plt.ylabel('Amplitude')
plt.xlabel('Frequency in Hz')
plt.grid(True, which="both")
plt.plot(freq, z12_fac, 'g', alpha = 0.7, label = 'data')
#Plot Impedance of model in magenta
plt.plot(freq, reconst, 'm', label='Reconstruction (Model)')
plt.legend()
#(PHASE)
plt.subplot(312)
plt.xscale('log')
plt.xlim([min(freq), max(freq)])
plt.ylabel('Phase in °')
plt.xlabel('Frequency in Hz')
plt.grid(True, which="both")
plt.plot(freq, z12_deg, 'g', alpha = 0.7, label = 'data')
#Plot Phase of model in magenta
plt.plot(freq, reconst_phase, 'm', label='Reconstruction (Model)')
plt.legend()
plt.savefig(file_title)
plt.close(fig)
[Image: measured data]
[Image: equivalent circuit diagram (model)]
Edit 1:
Fit-Report:
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 28
# data points = 4001
# variables = 3
chi-square = 1197180.70
reduced chi-square = 299.444897
Akaike info crit = 22816.4225
Bayesian info crit = 22835.3054
## Warning: uncertainties could not be estimated:
L1: at initial value
Rp: at boundary
Cp: at initial value
Cp: at boundary
[[Variables]]
C: 2.2e-07 (fixed)
L1: 1.0000e-05 (init = 1e-05)
R1: 0.06375191 (fixed)
Rp: 0.00000000 (init = 10000)
Cp: 0.10000000 (init = 0.1)
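(Note on the warnings: Cp is initialized at 0.1, which is exactly its upper bound (max=0.1), and Rp runs to a boundary, so the solver has little freedom there. A sketch of more physically plausible starting values and bounds for the parasitic elements; the numbers are illustrative assumptions, not values from the measurement:)
params.add('Rp', value=1e6, min=1e3, max=1e12)  # assumption: large but finite parallel resistance
params.add('Cp', value=1e-12, min=0, max=1e-9)  # assumption: parasitic capacitance in the pF range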
Edit 2:
Data can be found here:
https://1drv.ms/u/s!AsLKp-1R8HlZhcdlJER5T7qjmvfmnw?e=r8G2nN
Edit 3:
I have now simplified my model to a simple RLC series. With another set of data this works pretty well:
[Image: plot with another set of data]
def get_elements(params, freq, data):
    C = params['C'].value
    L1 = params['L1'].value
    R1 = params['R1'].value
    #Rp = params['Rp'].value
    #Cp = params['Cp'].value
    #k = params['k'].value
    #freq = np.log10(freq)
    XC = 1/(1j*2*np.pi*freq*C)
    XL = 1j*2*np.pi*freq*L1
    #XP = 1/(1j*2*np.pi*freq*Cp)
    #Z1 = R1*k + XC*Rp/(XC+Rp) + XL
    #real = np.real(Z1*XP/(Z1+XP))
    #imag = np.imag(Z1*XP/(Z1+XP))
    Z1 = R1 + XC + XL
    real = np.real(Z1)
    imag = np.imag(Z1)
    model = np.sqrt(real**2 + imag**2)
    return np.sqrt(np.real(data)**2 + np.imag(data)**2) - model

out = minimize(get_elements, params, args=(freq, data))
Report:
Chi-Square is really high...
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 4001
# variables = 2
chi-square = 5.0375e+08
reduced chi-square = 125968.118
Akaike info crit = 46988.8798
Bayesian info crit = 47001.4684
[[Variables]]
C: 3.3e-09 (fixed)
L1: 5.2066e-09 +/- 1.3906e-08 (267.09%) (init = 1e-05)
R1: 0.40753691 +/- 24.5685882 (6028.56%) (init = 0.05)
[[Correlations]] (unreported correlations are < 0.100)
C(L1, R1) = -0.174
With my original set of data I get this:
[Image: plot of original data (complex)]
Which is not bad, but also not good. That's why I want to make my model more detailed, so that it also fits in the higher frequency regions...
Report of this one:
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 4001
# variables = 2
chi-square = 109156.170
reduced chi-square = 27.2958664
Akaike info crit = 13232.2473
Bayesian info crit = 13244.8359
[[Variables]]
C: 2.2e-07 (fixed)
L1: 2.3344e-08 +/- 1.9987e-10 (0.86%) (init = 1e-05)
R1: 0.17444702 +/- 0.29660571 (170.03%) (init = 0.05)
Please note: I have also changed the input data of the model. Now I give the model complex values and then calculate the amplitude. Find this also here: https://1drv.ms/u/s!AsLKp-1R8HlZhcdlJER5T7qjmvfmnw?e=qnrZk1
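Since the model now receives complex values, one option (a sketch, not part of the original post) is to fit the real and imaginary parts simultaneously instead of only the amplitude; lmfit minimizes any 1-D float residual, so the complex residual can simply be viewed as floats:
def residual_complex(params, freq, data):
    C = params['C'].value
    L1 = params['L1'].value
    R1 = params['R1'].value
    # simple RLC series, as in Edit 3
    Z = R1 + 1j*2*np.pi*freq*L1 + 1/(1j*2*np.pi*freq*C)
    # return real and imaginary residuals as one flat float array
    return (data - Z).view(float)

out = minimize(residual_complex, params, args=(freq, data))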

Related

Is there a way to extract the variables from `lmfit` report?

I'm using the Python package lmfit to fit my dataset with this model:
def GaussianFit(results, highest_num, Peak_shot, nuni, dif=None):
    ...
    Gauss_mod = GaussianModel(prefix='gauss_')
    Const_mod = ConstantModel(prefix='const_')
    mod = Gauss_mod + Const_mod
    pars = mod.make_params(gauss_center=ig, gauss_sigma=1/12)
    out = mod.fit(y_sel, pars, x=x_sel, weights=get_weights(last_sel, Peak_shot, nuni))
    print(out.fit_report())
And the fit report looks like:
[[Model]]
(Model(gaussian, prefix='gauss_') + Model(constant, prefix='const_'))
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 101
# data points = 18
# variables = 4
chi-square = 2.1571e-05
reduced chi-square = 1.5408e-06
Akaike info crit = -237.421693
Bayesian info crit = -233.860206
[[Variables]]
gauss_amplitude: 0.02133733 +/- 0.01122602 (52.61%) (init = 0.25)
gauss_center: 0.98316682 +/- 0.02152806 (2.19%) (init = 1.041587)
gauss_sigma: 0.11847360 +/- 0.04182091 (35.30%) (init = 0.08333333)
const_c: 0.09532047 +/- 0.01831759 (19.22%) (init = 0)
gauss_fwhm: 0.27898399 +/- 0.09848070 (35.30%) == '2.3548200*gauss_sigma'
I was wondering if it is possible to extract the gauss_center and its error with two variables, instead of directly copying and pasting these results. Thanks!
out.params['gauss_center'].value will be the best-fit value for the gauss_center parameter, and out.params['gauss_center'].stderr will be its standard error.
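For example (a minimal sketch, assuming out is the result object from the fit above):
center = out.params['gauss_center'].value       # best-fit value
center_err = out.params['gauss_center'].stderr  # standard error (None if it could not be estimated)
print(f'gauss_center = {center} +/- {center_err}')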

How to set up GEKKO for parameter estimation from multiple independent sets of data?

I am learning how to use GEKKO for kinetic parameter estimation based on laboratory batch reactor data, which essentially consists of the concentration profiles of three species A, C, and P. For the purposes of my question, I am using a model that I previously featured in a question related to parameter estimation from a single data set.
My ultimate goal is to be able to use multiple experimental runs for parameter estimation, leveraging data that may be collected at different temperatures, species concentrations, etc. Due to the independent nature of individual batch reactor experiments, each data set features samples collected at different time points. These different time points (and in the future, different temperatures, for instance) are difficult for me to implement into a GEKKO model, as I previously used the experimental data collection time points as the m.time parameter for the GEKKO model. (See end of post for code.) I have solved problems like this in the past with gPROMS and Athena Visual Studio.
To illustrate my problem, I generated an artificial data set of 'experimental' data from my original model by introducing noise to the species concentration profiles, and shifting the experimental time points slightly. I then combined all data sets of the same experimental species into new arrays featuring multiple columns. My thought process here was that GEKKO would carry out the parameter estimation by using the experimental data of each corresponding column of the arrays, so that times_comb[:,0] would be related to A_comb[:,0] while times_comb[:,1] would be related to A_comb[:,1].
When I attempt to run the GEKKO model, the system does obtain a solution for the parameter estimation, but it is unclear to me whether the solution is reasonable, as I notice that the GEKKO variables A, B, C, and P are 34-element vectors, double the number of elements in each of the experimental data sets. I presume GEKKO is somehow combining both columns of the time and parameter vectors during model setup, which leads to those 34-element variables? I am also concerned that in this combination of the columns of each input parameter, the relationship between a certain time point and the collected species information is lost.
How could I improve the use of multiple data sets that GEKKO can simultaneously use for parameter estimation, considering that the time points of each data set may be different? I looked at the GEKKO documentation examples as well as the APMonitor website, but I could not find examples featuring multiple data sets that I could use for guidance, as I am fairly new to the GEKKO package.
Thank you for your time reading my question and for any help/ideas you may have.
Code below:
import numpy as np
import matplotlib.pyplot as plt
from gekko import GEKKO
#Experimental data
times = np.array([0.0, 0.071875, 0.143750, 0.215625, 0.287500, 0.359375, 0.431250,
0.503125, 0.575000, 0.646875, 0.718750, 0.790625, 0.862500,
0.934375, 1.006250, 1.078125, 1.150000])
A_obs = np.array([1.0, 0.552208, 0.300598, 0.196879, 0.101175, 0.065684, 0.045096,
0.028880, 0.018433, 0.011509, 0.006215, 0.004278, 0.002698,
0.001944, 0.001116, 0.000732, 0.000426])
C_obs = np.array([0.0, 0.187768, 0.262406, 0.350412, 0.325110, 0.367181, 0.348264,
0.325085, 0.355673, 0.361805, 0.363117, 0.327266, 0.330211,
0.385798, 0.358132, 0.380497, 0.383051])
P_obs = np.array([0.0, 0.117684, 0.175074, 0.236679, 0.234442, 0.270303, 0.272637,
0.274075, 0.278981, 0.297151, 0.297797, 0.298722, 0.326645,
0.303198, 0.277822, 0.284194, 0.301471])
#Generate second set of 'experimental data'
times_new = times + np.random.uniform(0.0,0.01)
P_obs_noisy = P_obs+np.random.normal(0,0.05,P_obs.shape)
A_obs_noisy = A_obs+np.random.normal(0,0.05,A_obs.shape)
C_obs_noisy = C_obs+np.random.normal(0,0.05,C_obs.shape)
#Combine two data sets into multi-column arrays
times_comb = np.array([times, times_new]).T
P_comb = np.array([P_obs, P_obs_noisy]).T
A_comb = np.array([A_obs, A_obs_noisy]).T
C_comb = np.array([C_obs, C_obs_noisy]).T
m = GEKKO(remote=False)
t = m.time = times_comb #using two column time array
Am = m.Param(value=A_comb) #Using the two column data as observed parameter
Cm = m.Param(value=C_comb)
Pm = m.Param(value=P_comb)
A = m.Var(1, lb = 0)
B = m.Var(0, lb = 0)
C = m.Var(0, lb = 0)
P = m.Var(0, lb = 0)
k = m.Array(m.FV,6,value=1,lb=0)
for ki in k:
    ki.STATUS = 1
k1,k2,k3,k4,k5,k6 = k
r1 = m.Var(0, lb = 0)
r2 = m.Var(0, lb = 0)
r3 = m.Var(0, lb = 0)
r4 = m.Var(0, lb = 0)
r5 = m.Var(0, lb = 0)
r6 = m.Var(0, lb = 0)
m.Equation(r1 == k1 * A)
m.Equation(r2 == k2 * A * B)
m.Equation(r3 == k3 * C * B)
m.Equation(r4 == k4 * A)
m.Equation(r5 == k5 * A)
m.Equation(r6 == k6 * A * B)
#mass balance diff eqs, function calls rxn function
m.Equation(A.dt() == - r1 - r2 - r4 - r5 - r6)
m.Equation(B.dt() == r1 - r2 - r3 - r6)
m.Equation(C.dt() == r2 - r3 + r4)
m.Equation(P.dt() == r3 + r5 + r6)
m.Minimize((A-Am)**2)
m.Minimize((P-Pm)**2)
m.Minimize((C-Cm)**2)
m.options.IMODE = 5
m.options.SOLVER = 3 #IPOPT optimizer
m.options.NODES = 6
m.solve()
k_opt = []
for ki in k:
    k_opt.append(ki.value[0])
print(k_opt)
plt.plot(t,A)
plt.plot(t,C)
plt.plot(t,P)
plt.plot(t,B)
plt.plot(times,A_obs,'bo')
plt.plot(times,C_obs,'gx')
plt.plot(times,P_obs,'rs')
plt.plot(times_new, A_obs_noisy,'b*')
plt.plot(times_new, C_obs_noisy,'g*')
plt.plot(times_new, P_obs_noisy,'r*')
plt.show()
To have multiple data sets with different times and data points, you can join the data sets as a pandas dataframe. Here is a simple example:
# data set 1
t_data1 = [0.0, 0.1, 0.2, 0.4, 0.8, 1.00]
x_data1 = [2.0, 1.6, 1.2, 0.7, 0.3, 0.15]
# data set 2
t_data2 = [0.0, 0.15, 0.25, 0.45, 0.85, 0.95]
x_data2 = [3.6, 2.25, 1.75, 1.00, 0.35, 0.20]
The merged data has NaN where the data is missing:
x1 x2
Time
0.00 2.0 3.60
0.10 1.6 NaN
0.15 NaN 2.25
0.20 1.2 NaN
0.25 NaN 1.75
Take note of where the data is missing with a 1=measured and 0=not measured.
# indicate which points are measured
z1 = (data['x1']==data['x1']).astype(int) # 0 if NaN
z2 = (data['x2']==data['x2']).astype(int) # 1 if number
The final step is to set up Gekko variables, equations, and objective to accommodate the data sets.
xm = m.Array(m.Param,2)
zm = m.Array(m.Param,2)
for i in range(2):
    m.Equation(x[i].dt() == -k * x[i])  # differential equations
    m.Minimize(zm[i]*(x[i]-xm[i])**2)   # objectives
You can also calculate the initial condition with m.free_initial(x[i]). This gives an optimal solution for one parameter value (k) over the 2 data sets. This approach can be expanded to multiple variables or multiple data sets with different times.
from gekko import GEKKO
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# data set 1
t_data1 = [0.0, 0.1, 0.2, 0.4, 0.8, 1.00]
x_data1 = [2.0, 1.6, 1.2, 0.7, 0.3, 0.15]
# data set 2
t_data2 = [0.0, 0.15, 0.25, 0.45, 0.85, 0.95]
x_data2 = [3.6, 2.25, 1.75, 1.00, 0.35, 0.20]
# combine with dataframe join
data1 = pd.DataFrame({'Time':t_data1,'x1':x_data1})
data2 = pd.DataFrame({'Time':t_data2,'x2':x_data2})
data1.set_index('Time', inplace=True)
data2.set_index('Time', inplace=True)
data = data1.join(data2,how='outer')
print(data.head())
# indicate which points are measured
z1 = (data['x1']==data['x1']).astype(int) # 0 if NaN
z2 = (data['x2']==data['x2']).astype(int) # 1 if number
# replace NaN with any number (0)
data.fillna(0,inplace=True)
m = GEKKO(remote=False)
# measurements
xm = m.Array(m.Param,2)
xm[0].value = data['x1'].values
xm[1].value = data['x2'].values
# index for objective (0=not measured, 1=measured)
zm = m.Array(m.Param,2)
zm[0].value=z1
zm[1].value=z2
m.time = data.index
x = m.Array(m.Var,2) # fit to measurement
x[0].value=x_data1[0]; x[1].value=x_data2[0]
k = m.FV(); k.STATUS = 1 # adjustable parameter
for i in range(2):
    m.free_initial(x[i])                # calculate initial condition
    m.Equation(x[i].dt() == -k * x[i])  # differential equations
    m.Minimize(zm[i]*(x[i]-xm[i])**2)   # objectives
m.options.IMODE = 5 # dynamic estimation
m.options.NODES = 2 # collocation nodes
m.solve(disp=True) # solve
k = k.value[0]
print('k = '+str(k))
# plot solution
plt.plot(m.time,x[0].value,'b.--',label='Predicted 1')
plt.plot(m.time,x[1].value,'r.--',label='Predicted 2')
plt.plot(t_data1,x_data1,'bx',label='Measured 1')
plt.plot(t_data2,x_data2,'rx',label='Measured 2')
plt.legend(); plt.xlabel('Time'); plt.ylabel('Value')
plt.show()
Including my updated code (not fully cleaned up to minimize the number of variables) incorporating the selected answer to my question, for reference. The model does a regression of 3 measured species in two separate 'datasets.'
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from gekko import GEKKO
#Experimental data
times = np.array([0.0, 0.071875, 0.143750, 0.215625, 0.287500, 0.359375, 0.431250,
0.503125, 0.575000, 0.646875, 0.718750, 0.790625, 0.862500,
0.934375, 1.006250, 1.078125, 1.150000])
A_obs = np.array([1.0, 0.552208, 0.300598, 0.196879, 0.101175, 0.065684, 0.045096,
0.028880, 0.018433, 0.011509, 0.006215, 0.004278, 0.002698,
0.001944, 0.001116, 0.000732, 0.000426])
C_obs = np.array([0.0, 0.187768, 0.262406, 0.350412, 0.325110, 0.367181, 0.348264,
0.325085, 0.355673, 0.361805, 0.363117, 0.327266, 0.330211,
0.385798, 0.358132, 0.380497, 0.383051])
P_obs = np.array([0.0, 0.117684, 0.175074, 0.236679, 0.234442, 0.270303, 0.272637,
0.274075, 0.278981, 0.297151, 0.297797, 0.298722, 0.326645,
0.303198, 0.277822, 0.284194, 0.301471])
#Generate second set of 'experimental data'
times_new = times + np.random.uniform(0.0,0.01)
P_obs_noisy = (P_obs+ np.random.normal(0,0.05,P_obs.shape))
A_obs_noisy = (A_obs+np.random.normal(0,0.05,A_obs.shape))
C_obs_noisy = (C_obs+np.random.normal(0,0.05,C_obs.shape))
#Combine two data sets into multi-column arrays using pandas DataFrames
#Set dataframe index to be combined time discretization of both data sets
exp1 = pd.DataFrame({'Time':times,'A':A_obs,'C':C_obs,'P':P_obs})
exp2 = pd.DataFrame({'Time':times_new,'A':A_obs_noisy,'C':C_obs_noisy,'P':P_obs_noisy})
exp1.set_index('Time',inplace=True)
exp2.set_index('Time',inplace=True)
exps = exp1.join(exp2, how ='outer',lsuffix = '_1',rsuffix = '_2')
#print(exps.head())
#Combine both data sets into a single data frame
meas_data = pd.DataFrame().reindex_like(exps)
#define measurement locations for each data set: time points not common to both
#data sets appear as NaN in exps and are marked 0 in meas_data
for cols in exps:
    meas_data[cols] = (exps[cols]==exps[cols]).astype(int)
exps.fillna(0, inplace=True)  #replace NaN with 0
m = GEKKO(remote=False)
t = m.time = exps.index #set GEKKO time domain to use experimental time points
#Generate two-column GEKKO arrays to store observed values of each species, A, C and P
Am = m.Array(m.Param,2)
Cm = m.Array(m.Param,2)
Pm = m.Array(m.Param,2)
Am[0].value = exps['A_1'].values
Am[1].value = exps['A_2'].values
Cm[0].value = exps['C_1'].values
Cm[1].value = exps['C_2'].values
Pm[0].value = exps['P_1'].values
Pm[1].value = exps['P_2'].values
#Define GEKKO parameters that determine if a time point contains data to be used in regression
#If the time point contains species data, meas_ variable = 1, else 0
meas_A = m.Array(m.Param,2)
meas_C = m.Array(m.Param,2)
meas_P = m.Array(m.Param,2)
meas_A[0].value = meas_data['A_1'].values
meas_A[1].value = meas_data['A_2'].values
meas_C[0].value = meas_data['C_1'].values
meas_C[1].value = meas_data['C_2'].values
meas_P[0].value = meas_data['P_1'].values
meas_P[1].value = meas_data['P_2'].values
#Define Variables for differential equations A, B, C, P, with initial conditions set by experimental observation at first time point
A = m.Array(m.Var,2, lb = 0)
B = m.Array(m.Var,2, lb = 0)
C = m.Array(m.Var,2, lb = 0)
P = m.Array(m.Var,2, lb = 0)
A[0].value = exps['A_1'][0] ; A[1].value = exps['A_2'][0]
B[0].value = 0 ; B[1].value = 0
C[0].value = exps['C_1'][0] ; C[1].value = exps['C_2'][0]
P[0].value = exps['P_1'][0] ; P[1].value = exps['P_2'][0]
#Define kinetic coefficients, k1-k6 as regression FV's
k = m.Array(m.FV,6,value=1,lb=0,ub = 20)
for ki in k:
    ki.STATUS = 1
k1,k2,k3,k4,k5,k6 = k
#If doing parameter estimation, enable free_initial conditions, else do not include them in the model to reduce DOFs (for simulation, for example)
if k1.STATUS == 1:
    for i in range(2):
        m.free_initial(A[i])
        m.free_initial(B[i])
        m.free_initial(C[i])
        m.free_initial(P[i])
#Define reaction rate variables
r1 = m.Array(m.Var,2, value = 1, lb = 0)
r2 = m.Array(m.Var,2, value = 1, lb = 0)
r3 = m.Array(m.Var,2, value = 1, lb = 0)
r4 = m.Array(m.Var,2, value = 1, lb = 0)
r5 = m.Array(m.Var,2, value = 1, lb = 0)
r6 = m.Array(m.Var,2, value = 1, lb = 0)
#Model Equations
for i in range(2):
    #Rate equations
    m.Equation(r1[i] == k1 * A[i])
    m.Equation(r2[i] == k2 * A[i] * B[i])
    m.Equation(r3[i] == k3 * C[i] * B[i])
    m.Equation(r4[i] == k4 * A[i])
    m.Equation(r5[i] == k5 * A[i])
    m.Equation(r6[i] == k6 * A[i] * B[i])
    #Differential species balances
    m.Equation(A[i].dt() == - r1[i] - r2[i] - r4[i] - r5[i] - r6[i])
    m.Equation(B[i].dt() == r1[i] - r2[i] - r3[i] - r6[i])
    m.Equation(C[i].dt() == r2[i] - r3[i] + r4[i])
    m.Equation(P[i].dt() == r3[i] + r5[i] + r6[i])
    #Minimization objective functions
    m.Obj(meas_A[i]*(A[i]-Am[i])**2)
    m.Obj(meas_P[i]*(P[i]-Pm[i])**2)
    m.Obj(meas_C[i]*(C[i]-Cm[i])**2)
#Solver options
m.options.IMODE = 5
m.options.SOLVER = 3 #IPOPT optimizer
m.options.NODES = 6
m.solve()
k_opt = []
for ki in k:
    k_opt.append(ki.value[0])
print(k_opt)
plt.plot(t,A[0],'b-')
plt.plot(t,A[1],'b--')
plt.plot(t,C[0],'g-')
plt.plot(t,C[1],'g--')
plt.plot(t,P[0],'r-')
plt.plot(t,P[1],'r--')
plt.plot(times,A_obs,'bo')
plt.plot(times,C_obs,'gx')
plt.plot(times,P_obs,'rs')
plt.plot(times_new, A_obs_noisy,'b*')
plt.plot(times_new, C_obs_noisy,'g*')
plt.plot(times_new, P_obs_noisy,'r*')
plt.show()

Tensorflow/Keras: volatile validation loss

I've been training a U-Net for single-class small lesion segmentation, and have been getting consistently volatile validation loss. I have about 20k images split 70/30 between training and validation sets, so I don't think the issue is too little data. I've tried shuffling and resplitting the sets a few times with no change in volatility, so I don't think the validation set is unrepresentative. I have tried lowering the learning rate with no effect on volatility. And I have tried a few loss functions (dice coefficient, focal Tversky, weighted binary cross-entropy). I'm using a decent amount of augmentation so as to avoid overfitting. I've also run through all my data (512x512 float64s with corresponding 512x512 int64 masks, both stored as numpy arrays) to double-check that the value range, dtypes, etc. aren't screwy... and I even removed any ROIs in the masks under 35 pixels in area, which I thought might be artifacts messing with the loss.
I'm using keras ImageDataGen.flow_from_directory... I was initially using zca_whitening and brightness_range augmentation, but I think this causes issues with flow_from_directory and the link between mask and image being lost... so I skipped this.
I've tried validation generators with and without shuffle=True. Batch size is 8.
Here's some of my code, happy to include more if it would help:
# loss
from keras.losses import binary_crossentropy
import keras.backend as K
import tensorflow as tf
epsilon = 1e-5
smooth = 1
def dsc(y_true, y_pred):
    smooth = 1.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    score = (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return score

def dice_loss(y_true, y_pred):
    loss = 1 - dsc(y_true, y_pred)
    return loss

def bce_dice_loss(y_true, y_pred):
    loss = binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
    return loss

def confusion(y_true, y_pred):
    smooth = 1
    y_pred_pos = K.clip(y_pred, 0, 1)
    y_pred_neg = 1 - y_pred_pos
    y_pos = K.clip(y_true, 0, 1)
    y_neg = 1 - y_pos
    tp = K.sum(y_pos * y_pred_pos)
    fp = K.sum(y_neg * y_pred_pos)
    fn = K.sum(y_pos * y_pred_neg)
    prec = (tp + smooth)/(tp+fp+smooth)
    recall = (tp+smooth)/(tp+fn+smooth)
    return prec, recall

def tp(y_true, y_pred):
    smooth = 1
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    y_pos = K.round(K.clip(y_true, 0, 1))
    tp = (K.sum(y_pos * y_pred_pos) + smooth) / (K.sum(y_pos) + smooth)
    return tp

def tn(y_true, y_pred):
    smooth = 1
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    y_pred_neg = 1 - y_pred_pos
    y_pos = K.round(K.clip(y_true, 0, 1))
    y_neg = 1 - y_pos
    tn = (K.sum(y_neg * y_pred_neg) + smooth) / (K.sum(y_neg) + smooth)
    return tn

def tversky(y_true, y_pred):
    y_true_pos = K.flatten(y_true)
    y_pred_pos = K.flatten(y_pred)
    true_pos = K.sum(y_true_pos * y_pred_pos)
    false_neg = K.sum(y_true_pos * (1-y_pred_pos))
    false_pos = K.sum((1-y_true_pos)*y_pred_pos)
    alpha = 0.7
    return (true_pos + smooth)/(true_pos + alpha*false_neg + (1-alpha)*false_pos + smooth)

def tversky_loss(y_true, y_pred):
    return 1 - tversky(y_true, y_pred)

def focal_tversky(y_true, y_pred):
    pt_1 = tversky(y_true, y_pred)
    gamma = 0.75
    return K.pow((1-pt_1), gamma)
model = BlockModel((len(os.listdir(os.path.join(imageroot,'train_ct','train'))), 512, 512, 1),filt_num=16,numBlocks=4)
#model.compile(optimizer=Adam(learning_rate=0.001), loss=weighted_cross_entropy)
#model.compile(optimizer=Adam(learning_rate=0.001), loss=dice_coef_loss)
model.compile(optimizer=Adam(learning_rate=0.001), loss=focal_tversky)
train_mask = os.path.join(imageroot,'train_masks')
val_mask = os.path.join(imageroot,'val_masks')
model.load_weights(model_weights_path) #I'm initializing with some pre-trained weights from a similar model
data_gen_args_mask = dict(
    rotation_range=10,
    shear_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=[0.8,1.2],
    horizontal_flip=True,
    #vertical_flip=True,
    fill_mode='nearest',
    data_format='channels_last'
)
data_gen_args = dict(
    **data_gen_args_mask
)
image_datagen_train = ImageDataGenerator(**data_gen_args)
mask_datagen_train = ImageDataGenerator(**data_gen_args)#_mask)
image_datagen_val = ImageDataGenerator()
mask_datagen_val = ImageDataGenerator()
seed = 1
BS = 8
steps = int(np.floor((len(os.listdir(os.path.join(train_ct,'train'))))/BS))
print(steps)
val_steps = int(np.floor((len(os.listdir(os.path.join(val_ct,'val'))))/BS))
print(val_steps)
train_image_generator = image_datagen_train.flow_from_directory(
    train_ct,
    target_size=(512, 512),
    color_mode="grayscale",
    classes=None,
    class_mode=None,
    seed=seed,
    shuffle=True,
    batch_size=BS)
train_mask_generator = mask_datagen_train.flow_from_directory(
    train_mask,
    target_size=(512, 512),
    color_mode="grayscale",
    classes=None,
    class_mode=None,
    seed=seed,
    shuffle=True,
    batch_size=BS)
val_image_generator = image_datagen_val.flow_from_directory(
    val_ct,
    target_size=(512, 512),
    color_mode="grayscale",
    classes=None,
    class_mode=None,
    seed=seed,
    shuffle=True,
    batch_size=BS)
val_mask_generator = mask_datagen_val.flow_from_directory(
    val_mask,
    target_size=(512, 512),
    color_mode="grayscale",
    classes=None,
    class_mode=None,
    seed=seed,
    shuffle=True,
    batch_size=BS)
train_generator = zip(train_image_generator, train_mask_generator)
val_generator = zip(val_image_generator, val_mask_generator)
# make callback for checkpointing
plot_losses = PlotLossesCallback(skip_first=0,plot_extrema=False)
%matplotlib inline
filepath = os.path.join(versionPath, model_version + "_saved-model-{epoch:02d}-{val_loss:.2f}.hdf5")
if reduce:
    cb_check = [ModelCheckpoint(filepath, monitor='val_loss',
                                verbose=1, save_best_only=False,
                                save_weights_only=True, mode='auto', period=1),
                reduce_lr,
                plot_losses]
else:
    cb_check = [ModelCheckpoint(filepath, monitor='val_loss',
                                verbose=1, save_best_only=False,
                                save_weights_only=True, mode='auto', period=1),
                plot_losses]
# train model
history = model.fit_generator(train_generator, epochs=numEp,
                              steps_per_epoch=steps,
                              validation_data=val_generator,
                              validation_steps=val_steps,
                              verbose=1,
                              callbacks=cb_check,
                              use_multiprocessing=False)
And here's how my loss looks:
[Image: training and validation loss curves]
Another potentially relevant thing: I tweaked the flow_from_directory code a bit (added npy to the whitelist). But training loss looks fine, so I'm assuming the issue isn't here.
Two suggestions:
Switch to the classic validation data format (i.e. a numpy array) instead of using a generator -- this will ensure you always use exactly the same validation data every time (a sketch follows below). If you see a different validation curve, then there is something "random" in the validation generator giving you different data at different epochs.
Use a fixed set of samples (100 or 1000 should be enough w/o any data augmentation) for both training and validation. If everything goes well, you should see your network quickly overfit to this dataset, and your training and validation curves should look very similar. If not, debug your network.
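A minimal sketch of the first suggestion, assuming the validation images and masks fit in memory (the X_val/y_val arrays and file names here are placeholders, not from the question):
import numpy as np

# load a fixed validation set once, so every epoch is evaluated on identical data
X_val = np.load('val_images.npy')  # assumed file, shape (N, 512, 512, 1)
y_val = np.load('val_masks.npy')   # assumed file, same spatial shape
history = model.fit_generator(train_generator, epochs=numEp,
                              steps_per_epoch=steps,
                              validation_data=(X_val, y_val),  # fixed arrays instead of a generator
                              verbose=1,
                              callbacks=cb_check)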

How to use gekko to control two variables while manipulating two variables for a cstr?

Attached below is my Python code:
I have a CSTR and I'm trying to control the height of the tank and the temperature while manipulating the inlet flow and the cooling temperature. The problem is that the CVs are not tracking their respective setpoints. I tried the problem with only 1 CV and 1 MV, and it worked really well.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from gekko import GEKKO
# Steady State Initial Condition
u1_ss = 280.0
u2_ss=100.0
# Feed Temperature (K)
Tf = 350
# Feed Concentration (mol/m^3)
Caf = 1
# Steady State Initial Conditions for the States
Ca_ss = 1
T_ss = 304
h_ss=94.77413303
V_ss=8577.41330293
x0 = np.empty(4)
x0[0] = Ca_ss
x0[1] = T_ss
x0[2]= h_ss
x0[3]= V_ss
#%% GEKKO nonlinear MPC
m = GEKKO(remote=False)
m.time = [0,0.02,0.04,0.06,0.08,0.1,0.12,0.15,0.2]
c1=10.0
Ac=100.0
# Density of A-B Mixture (kg/m^3)
rho = 1000
# Heat capacity of A-B Mixture (J/kg-K)
Cp = 0.239
# Heat of reaction for A->B (J/mol)
mdelH = 5e4
# E - Activation energy in the Arrhenius Equation (J/mol)
# R - Universal Gas Constant = 8.31451 J/mol-K
EoverR = 8750
# Pre-exponential factor (1/sec)
k0 = 7.2e10
# U - Overall Heat Transfer Coefficient (W/m^2-K)
# A - Area - this value is specific for the U calculation (m^2)
UA = 5e4
# initial conditions
Tc0 = 280
T0 = 304
Ca0 = 1.0
h0=94.77413303
q0=100.0
V0=8577.41330293
tau = m.Const(value=0.5)
Kp = m.Const(value=1)
m.Tc = m.MV(value=Tc0,lb=250,ub=350)
m.T = m.CV(value=T_ss)
m.h= m.CV(value=h0)
m.rA = m.Var(value=0)
m.Ca = m.CV(value=Ca_ss,lb=0,ub=1)
m.V= m.CV(value=V_ss,lb=0,ub=100000)
m.q=m.MV(value=q0,lb=0,ub=100000)
m.Equation(m.rA == k0*m.exp(-EoverR/m.T)*m.Ca)
m.Equation(m.T.dt() == m.q/m.V*(Tf - m.T) \
+ mdelH/(rho*Cp)*m.rA \
+ UA/m.V/rho/Cp*(m.Tc-m.T))
m.Equation(m.Ca.dt() == m.q/m.V*(Caf - m.Ca) - m.rA)
m.Equation(m.h.dt()==(m.q-c1*m.h**0.5)/Ac)
m.Equation(m.V.dt() == m.q- c1*m.h**0.5)
#MV tuning
m.Tc.STATUS = 1
m.Tc.FSTATUS = 0
m.Tc.DMAX = 100
m.Tc.DMAXHI = 20
m.Tc.DMAXLO = -100
m.q.STATUS = 1
m.q.FSTATUS = 0
m.q.DMAX = 10
#CV tuning
m.T.STATUS = 1
m.T.FSTATUS = 1
m.T.TR_INIT = 1
m.T.TAU = 1.0
DT = 0.5 # deadband
m.h.STATUS = 1
m.h.FSTATUS = 1
m.h.TR_INIT = 1
m.h.TAU = 1.0
m.Ca.STATUS = 1
m.Ca.FSTATUS = 0 # no measurement
m.Ca.TR_INIT = 0
m.V.STATUS = 1
m.V.FSTATUS = 0 # no measurement
m.V.TR_INIT = 0
m.options.CV_TYPE = 1
m.options.IMODE = 6
m.options.SOLVER = 3
#%% define CSTR model
def cstr(x,t,u1,u2,Tf,Caf,Ac):
    # Inputs (3):
    # Temperature of cooling jacket (K)
    Tc = u1
    q = u2
    # Tf = Feed Temperature (K)
    # Caf = Feed Concentration (mol/m^3)
    # States (4):
    # Concentration of A in CSTR (mol/m^3)
    Ca = x[0]
    # Temperature in CSTR (K)
    T = x[1]
    # Height of the tank (m)
    h = x[2]
    V = x[3]
    # Parameters:
    # Density of A-B Mixture (kg/m^3)
    rho = 1000
    # Heat capacity of A-B Mixture (J/kg-K)
    Cp = 0.239
    # Heat of reaction for A->B (J/mol)
    mdelH = 5e4
    # E - Activation energy in the Arrhenius Equation (J/mol)
    # R - Universal Gas Constant = 8.31451 J/mol-K
    EoverR = 8750
    # Pre-exponential factor (1/sec)
    k0 = 7.2e10
    # U - Overall Heat Transfer Coefficient (W/m^2-K)
    # A - Area - this value is specific for the U calculation (m^2)
    UA = 5e4
    # reaction rate
    rA = k0*np.exp(-EoverR/T)*Ca
    # Calculate concentration derivative
    dCadt = q/V*(Caf - Ca) - rA
    # Calculate temperature derivative
    dTdt = q/V*(Tf - T) \
           + mdelH/(rho*Cp)*rA \
           + UA/V/rho/Cp*(Tc-T)
    # Calculate height derivative
    dhdt = (q-c1*h**0.5)/Ac
    if x[2] >= 300 and dhdt > 0:
        dhdt = 0
    dVdt = q - c1*h**0.5
    # Return xdot:
    xdot = np.zeros(4)
    xdot[0] = dCadt
    xdot[1] = dTdt
    xdot[2] = dhdt
    xdot[3] = dVdt
    return xdot
# Time Interval (min)
t = np.linspace(0,8,401)
# Store results for plotting
Ca = np.ones(len(t)) * Ca_ss
V=np.ones(len(t))*V_ss
T = np.ones(len(t)) * T_ss
Tsp = np.ones(len(t)) * T_ss
hsp=np.ones(len(t))*h_ss
h=np.ones(len(t))*h_ss
u1 = np.ones(len(t)) * u1_ss
u2 = np.ones(len(t)) * u2_ss
# Set point steps
Tsp[0:100] = 330.0
Tsp[100:200] = 350.0
Tsp[230:260] = 370.0
Tsp[260:290] = 390.0
hsp[0:100] = 30.0
hsp[100:200] =60.0
hsp[200:250]=90.0
# Create plot
plt.figure(figsize=(10,7))
plt.ion()
plt.show()
# Simulate CSTR
for i in range(len(t)-1):
    # simulate one time period
    ts = [t[i],t[i+1]]
    y = odeint(cstr,x0,ts,args=(u1[i+1],u2[i+1],Tf,Caf,Ac))
    # retrieve measurements
    Ca[i+1] = y[-1][0]
    T[i+1] = y[-1][1]
    h[i+1] = y[-1][2]
    V[i+1] = y[-1][3]
    # insert measurement
    m.T.MEAS = T[i+1]
    m.h.MEAS = h[i+1]
    # solve MPC
    m.solve(disp=True)
    m.T.SPHI = Tsp[i+1] + DT
    m.T.SPLO = Tsp[i+1] - DT
    m.h.SPHI = hsp[i+1] + DT
    m.h.SPLO = hsp[i+1] - DT
    # retrieve new Tc and q values
    u1[i+1] = m.Tc.NEWVAL
    u2[i+1] = m.q.NEWVAL
    # update initial conditions
    x0[0] = Ca[i+1]
    x0[1] = T[i+1]
    x0[2] = h[i+1]
    x0[3] = V[i+1]

    #%% Plot the results
    plt.clf()
    plt.subplot(6,1,1)
    plt.plot(t[0:i],u1[0:i],'b--',linewidth=3)
    plt.ylabel('Cooling T (K)')
    plt.legend(['Jacket Temperature'],loc='best')
    plt.subplot(6,1,2)
    plt.plot(t[0:i],u2[0:i],'b--',linewidth=3)
    plt.ylabel('inlet flow')
    plt.subplot(6,1,3)
    plt.plot(t[0:i],Ca[0:i],'b.-',linewidth=3,label=r'$C_A$')
    plt.plot([0,t[i-1]],[0.2,0.2],'r--',linewidth=2,label='limit')
    plt.ylabel(r'$C_A$ (mol/L)')
    plt.legend(loc='best')
    plt.subplot(6,1,4)
    plt.plot(t[0:i],V[0:i],'g--',linewidth=3)
    plt.xlabel('time')
    plt.ylabel('Volume of Tank')
    plt.subplot(6,1,5)
    plt.plot(t[0:i],Tsp[0:i],'k-',linewidth=3,label=r'$T_{sp}$')
    plt.plot(t[0:i],T[0:i],'b.-',linewidth=3,label=r'$T_{meas}$')
    plt.plot([0,t[i-1]],[400,400],'r--',linewidth=2,label='limit')
    plt.ylabel('T (K)')
    plt.xlabel('Time (min)')
    plt.legend(loc='best')
    plt.subplot(6,1,6)
    plt.plot(t[0:i],hsp[0:i],'g--',linewidth=3,label=r'$h_{sp}$')
    plt.plot(t[0:i],h[0:i],'k.-',linewidth=3,label=r'$h_{meas}$')
    plt.xlabel('time')
    plt.ylabel('tank level')
    plt.legend(loc='best')
    plt.draw()
    plt.pause(0.01)

I am trying to use GEKKO in Python to control a CSTR. The CVs are the temperature and the level of the tank.

Attached is the code I wrote. When it runs, the level controlled variable is not tracking its setpoint. On the other hand, the temperature controlled variable is tracking its setpoint very well. I am manipulating the cooling temperature and the inlet flow rate, and I am trying to control the level of the tank, the temperature, and the concentration.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from gekko import GEKKO
# Steady State Initial Condition
u1_ss = 300.0
u2_ss=100.0
Ca_ss = 0.87725294
T_ss = 324.47544313
h_ss=75.82018806
# Feed Temperature (K)
Tf = 350
# Feed Concentration (mol/m^3)
Caf = 1
# Steady State Initial Conditions for the States
x0 = np.empty(2)
x0[0] = Ca_ss
x0[1] = T_ss
p0=np.empty(1)
p0[0]=h_ss
#%% GEKKO nonlinear MPC
m = GEKKO(remote=False)
m.time = [0,0.02,0.04,0.06,0.08,0.1,0.12,0.15,0.2]
c1=10
Ac=400.0
# Volume of CSTR (m^3)
V = 100
# Density of A-B Mixture (kg/m^3)
rho = 1000
# Heat capacity of A-B Mixture (J/kg-K)
Cp = 0.239
# Heat of reaction for A->B (J/mol)
mdelH = 5e4
# E - Activation energy in the Arrhenius Equation (J/mol)
# R - Universal Gas Constant = 8.31451 J/mol-K
EoverR = 8750
# Pre-exponential factor (1/sec)
k0 = 7.2e10
# U - Overall Heat Transfer Coefficient (W/m^2-K)
# A - Area - this value is specific for the U calculation (m^2)
UA = 5e4
# initial conditions
Tc0 = 300
T0 = 324.47544313
Ca0 = 0.87725294
h0=75.82018806
q0=100.0
tau = m.Const(value=0.5)
Kp = m.Const(value=1)
m.Tc = m.MV(value=Tc0,lb=250,ub=350)
m.T = m.CV(value=T_ss)
m.rA = m.Var(value=0)
m.Ca = m.CV(value=Ca_ss,lb=0,ub=1)
m.h=m.CV(value=h_ss)
m.q=m.MV(value=q0,lb=0,ub=1000)
m.Equation(m.rA == k0*m.exp(-EoverR/m.T)*m.Ca)
m.Equation(m.T.dt() == m.q/V*(Tf - m.T) \
+ mdelH/(rho*Cp)*m.rA \
+ UA/V/rho/Cp*(m.Tc-m.T))
m.Equation(m.Ca.dt() == (m.q)/V*(Caf - m.Ca) - m.rA)
m.Equation(m.h.dt()==(m.q-c1*pow(m.h,0.5))/Ac)
#MV tuning
m.Tc.STATUS = 1
m.Tc.FSTATUS = 0
m.Tc.DMAX = 100
m.Tc.DMAXHI = 20
m.Tc.DMAXLO = -100
m.q.STATUS = 1
m.q.FSTATUS = 0
m.q.DMAX = 10
#CV tuning
m.T.STATUS = 1
m.T.FSTATUS = 1
m.T.TR_INIT = 1
m.T.TAU = 1.0
DT = 0.5 # deadband
m.h.STATUS = 1
m.h.FSTATUS = 1
m.h.TR_INIT = 1
m.h.TAU = 1.0
m.Ca.STATUS = 1
m.Ca.FSTATUS = 0 # no measurement
m.Ca.TR_INIT = 0
m.options.CV_TYPE = 1
m.options.IMODE = 6
m.options.SOLVER = 3
# define CSTR model
def cstr(x,t,u1,u2,Tf,Caf):
    # Inputs (3):
    # Temperature of cooling jacket (K)
    Tc = u1
    q = u2
    # Tf = Feed Temperature (K)
    # Caf = Feed Concentration (mol/m^3)
    # States (2):
    # Concentration of A in CSTR (mol/m^3)
    Ca = x[0]
    # Temperature in CSTR (K)
    T = x[1]
    # Parameters:
    # Volume of CSTR (m^3)
    V = 100
    # Density of A-B Mixture (kg/m^3)
    rho = 1000
    # Heat capacity of A-B Mixture (J/kg-K)
    Cp = 0.239
    # Heat of reaction for A->B (J/mol)
    mdelH = 5e4
    # E - Activation energy in the Arrhenius Equation (J/mol)
    # R - Universal Gas Constant = 8.31451 J/mol-K
    EoverR = 8750
    # Pre-exponential factor (1/sec)
    k0 = 7.2e10
    # U - Overall Heat Transfer Coefficient (W/m^2-K)
    # A - Area - this value is specific for the U calculation (m^2)
    UA = 5e4
    # reaction rate
    rA = k0*np.exp(-EoverR/T)*Ca
    # Calculate concentration derivative
    dCadt = q/V*(Caf - Ca) - rA
    # Calculate temperature derivative
    dTdt = q/V*(Tf - T) \
           + mdelH/(rho*Cp)*rA \
           + UA/V/rho/Cp*(Tc-T)
    # Return xdot:
    xdot = np.zeros(2)
    xdot[0] = dCadt
    xdot[1] = dTdt
    return xdot
def tank(p,t,u2,Ac):
    q = u2
    h = p[0]
    dhdt = (q-c1*pow(h,0.5))/Ac
    if p[0] >= 300 and dhdt > 0:
        dhdt = 0
    return dhdt
# Time Interval (min)
t = np.linspace(0,10,410)
# Store results for plotting
Ca = np.ones(len(t)) * Ca_ss
T = np.ones(len(t)) * T_ss
Tsp=np.ones(len(t))*T_ss
hsp=np.ones(len(t))*h_ss
h=np.ones(len(t))*h_ss
u1 = np.ones(len(t)) * u1_ss
u2 = np.ones(len(t)) * u2_ss
# Set point steps
Tsp[0:100] = 330.0
Tsp[100:200] = 350.0
hsp[200:300] = 150.0
hsp[300:] = 190.0
# Create plot
plt.figure(figsize=(10,7))
plt.ion()
plt.show()
# Simulate CSTR
for i in range(len(t)-1):
    ts = [t[i],t[i+1]]
    y = odeint(cstr,x0,ts,args=(u1[i+1],u2[i+1],Tf,Caf))
    y1 = odeint(tank,p0,ts,args=(u2[i+1],Ac))
    Ca[i+1] = y[-1][0]
    T[i+1] = y[-1][1]
    h[i+1] = y1[-1][0]
    # insert measurement
    m.T.MEAS = T[i+1]
    m.h.MEAS = h[i+1]
    # solve MPC
    m.solve(disp=True)
    m.T.SPHI = Tsp[i+1] + DT
    m.T.SPLO = Tsp[i+1] - DT
    m.h.SPHI = hsp[i+1] + DT
    m.h.SPLO = hsp[i+1] - DT
    # retrieve new Tc and q values
    u1[i+1] = m.Tc.NEWVAL
    u2[i+1] = m.q.NEWVAL
    # update initial conditions
    x0[0] = Ca[i+1]
    x0[1] = T[i+1]
    p0[0] = h[i+1]

    plt.clf()
    # Plot the results
    plt.subplot(5,1,1)
    plt.plot(t[0:i],u1[0:i],'b--',linewidth=3)
    plt.ylabel('Cooling T (K)')
    plt.legend(['Jacket Temperature'],loc='best')
    plt.subplot(5,1,2)
    plt.plot(t[0:i],u2[0:i],'g--')
    plt.xlabel('time')
    plt.ylabel('flow in')
    plt.subplot(5,1,3)
    plt.plot(t[0:i],Ca[0:i],'r-',linewidth=3)
    plt.ylabel('Ca (mol/L)')
    plt.legend(['Reactor Concentration'],loc='best')
    plt.subplot(5,1,4)
    plt.plot(t[0:i],Tsp[0:i],'r-',linewidth=3,label=r'$T_{sp}$')
    plt.plot(t[0:i],T[0:i],'k.-',linewidth=3,label=r'$T_{meas}$')
    plt.ylabel('T (K)')
    plt.xlabel('Time (min)')
    plt.legend(loc='best')
    plt.subplot(5,1,5)
    plt.plot(t[0:i],hsp[0:i],'g--',linewidth=3,label=r'$h_{sp}$')
    plt.plot(t[0:i],h[0:i],'k.-',linewidth=3,label=r'$h_{meas}$')
    plt.xlabel('time')
    plt.ylabel('tank level')
    plt.legend(loc='best')
    plt.draw()
    plt.pause(0.01)
One problem is that the function pow is not supported by Gekko and is evaluating that part to a constant. Here is a modified version of your equation that should work better:
m.Equation(m.h.dt()==(m.q-c1*m.h**0.5)/Ac)
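(Equivalently, Gekko's built-in square root could be used; a sketch:)
m.Equation(m.h.dt() == (m.q - c1*m.sqrt(m.h))/Ac)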
One other issue is that your simulator is broken into two parts when it should be one model:
def tank(p,t,u2,Ac):
    q = u2
    h = p[0]
    dhdt = (q-c1*pow(h,0.5))/Ac
    if p[0] >= 300 and dhdt > 0:
        dhdt = 0
    return dhdt
You should add a third state to your simulator:
    # Return xdot:
    xdot = np.zeros(3)
    xdot[0] = dCadt
    xdot[1] = dTdt
    xdot[2] = dhdt
    return xdot
When you have a variable height, the volume is changing so you can't assume that it is constant in the other equations. You'll need to modify your energy balance and species balance as shown in the material on balance equations.
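For illustration, a sketch of what the balances could look like once the volume varies, derived by expanding d(V*Ca)/dt and the energy accumulation term with dV/dt = q - c1*sqrt(h) (my reading of the suggestion, not code from the answer; V = Ac*h assumes a constant tank cross-section):
# sketch: volume tied to the level through the constant cross-section Ac
m.V = m.Var(value=Ac*h0)
m.Equation(m.V == Ac*m.h)
# expanded species balance: V*dCa/dt = q*(Caf - Ca) - rA*V
m.Equation(m.V*m.Ca.dt() == m.q*(Caf - m.Ca) - m.rA*m.V)
# expanded energy balance: V*dT/dt = q*(Tf - T) + mdelH/(rho*Cp)*rA*V + UA/(rho*Cp)*(Tc - T)
m.Equation(m.V*m.T.dt() == m.q*(Tf - m.T) + mdelH/(rho*Cp)*m.rA*m.V
           + UA/(rho*Cp)*(m.Tc - m.T))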
