HTTP error when I run the APMonitor nlc example in python - gekko

I've been interested in learning MPC, and I wanted to try the nlc Python example found here:
http://apmonitor.com/wiki/index.php/Main/PythonApp
When I ran the initial demo example, I got an HTTP error. I was able to run the demo example by changing the instances of "http" to "https" in the apm.py file, similar to the problem found here:
https://github.com/olivierhagolle/LANDSAT-Download/issues/33
I've been trying to run the nlc example now and I'm getting the same kind of error (shown below). However, changing the instances of "http" to "https" no longer seems to help.
Traceback (most recent call last):
  File "C:\Users\veli95839\Documents\Python\Scripts\example_nlc\nlc.py", line 88, in <module>
    response = apm_meas(server,app,x,value)
  File "C:\Users\veli95839\Documents\Python\Scripts\example_nlc\apm.py", line 607, in load_meas
    f = urllib.request.urlopen(url_base,params_en)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 640, in http_response
    response = self.parent.error(
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 502, in _call_chain
    result = func(*args)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 503: Service Unavailable
Please let me know if anyone has experienced similar issues!
Thanks,
Claire

You may have received that error if your computer was not connected to the Internet or if the server was unavailable at the time you ran the test. You can install a local APM server (for Windows or Linux) to avoid any disruptions. Another option is to switch to Python gekko, which uses the same underlying APM engine but can run locally with remote=False. Here is the same MPC example in Python gekko.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO(remote=False)
m.time = np.linspace(0,20,41)
# Parameters
mass = 500
b = m.Param(value=50)
K = m.Param(value=0.8)
# Manipulated variable
p = m.MV(value=0, lb=0, ub=100)
p.STATUS = 1 # allow optimizer to change
p.DCOST = 0.1 # smooth out gas pedal movement
p.DMAX = 20 # slow down change of gas pedal
# Controlled Variable
v = m.CV(value=0)
v.STATUS = 1 # add the SP to the objective
m.options.CV_TYPE = 2 # squared error
v.SP = 40 # set point
v.TR_INIT = 1 # set point trajectory
v.TAU = 5 # time constant of trajectory
# Process model
m.Equation(mass*v.dt() == -v*b + K*b*p)
m.options.IMODE = 6 # control
m.solve(disp=False,GUI=True)
# get additional solution information
import json
with open(m.path+'//results.json') as f:
    results = json.load(f)
plt.figure()
plt.subplot(2,1,1)
plt.plot(m.time,p.value,'b-',label='MV Optimized')
plt.legend()
plt.ylabel('Input')
plt.subplot(2,1,2)
plt.plot(m.time,results['v1.tr'],'k-',label='Reference Trajectory')
plt.plot(m.time,v.value,'r--',label='CV Response')
plt.ylabel('Output')
plt.xlabel('Time')
plt.legend(loc='best')
plt.show()
If you'd like to get more information on either APM Python or gekko for Nonlinear or Linear Model Predictive Control, there are a series of tutorials with the Temperature Control Lab (TCLab).
Advanced Control Labs with Solutions
Digital Twin Model Development
Lab A - Single Heater Modeling
Lab B - Dual Heater Modeling
Machine Learning with Parameter and State Estimation
Lab C - Parameter Estimation
Lab D - Empirical Model Estimation
Lab E - Hybrid Model Estimation
Model Predictive Control
Lab F - Linear Model Predictive Control
Lab G - Nonlinear Model Predictive Control
Lab H - Moving Horizon Estimation with MPC
These exercises teach how to do first-principles or empirical modeling, state estimation, and predictive control. Lab H, for example, combines Moving Horizon Estimation with Model Predictive Control; a minimal sketch of that combination is shown below.
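The sketch is not the Lab H solution itself; it only illustrates, reusing the vehicle model from the example above as an assumption, how an estimator (IMODE=5) and a controller (IMODE=6) are typically paired in gekko, with the estimated gain passed to the controller each cycle.
from gekko import GEKKO
import numpy as np

mass = 500
b = 50

# Estimator (MHE): refine the gain K from recent measurements
mhe = GEKKO(remote=False)
mhe.time = np.linspace(0, 2, 11)
K_est = mhe.FV(value=0.8)
K_est.STATUS = 1        # let the estimator adjust K
p_meas = mhe.MV(value=0)
p_meas.FSTATUS = 1      # fed with measured gas pedal values
v_meas = mhe.CV(value=0)
v_meas.FSTATUS = 1      # fed with measured velocity values
mhe.Equation(mass*v_meas.dt() == -v_meas*b + K_est*b*p_meas)
mhe.options.IMODE = 5   # moving horizon estimation

# Controller (MPC): same model, uses the updated gain estimate
mpc = GEKKO(remote=False)
mpc.time = np.linspace(0, 20, 41)
K = mpc.FV(value=0.8)
K.FSTATUS = 1           # receives K_est.NEWVAL each cycle
p = mpc.MV(value=0, lb=0, ub=100)
p.STATUS = 1
v = mpc.CV(value=0)
v.STATUS = 1
v.SP = 40
mpc.Equation(mass*v.dt() == -v*b + K*b*p)
mpc.options.IMODE = 6   # control

# Each cycle: insert measurements, estimate, pass the estimate, control
# p_meas.MEAS = <measured pedal>; v_meas.MEAS = <measured velocity>
# mhe.solve(disp=False)
# K.MEAS = K_est.NEWVAL
# v.MEAS = <measured velocity>
# mpc.solve(disp=False)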

Related

Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors

I tried to compile a tflite model to an edgetpu model and ran into the following error:
Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.
ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
Compilation failed: Model failed in Tflite interpreter. Please ensure model can be loaded/run in Tflite interpreter.
Compilation child process completed within timeout period.
Compilation failed!
I define my model like this:
# (imports assumed by this snippet; tfl is the Keras layers alias)
import tensorflow as tf
from tensorflow.keras import layers as tfl

preprocess_input = tf.keras.applications.efficientnet.preprocess_input

def Model(image_size=IMG_SIZE):
    input_shape = image_size + (3,)
    inputs = tf.keras.Input(shape=input_shape)
    x = preprocess_input(inputs)
    base_model = tf.keras.applications.efficientnet.EfficientNetB0(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base_model.trainable = False
    x = base_model(x, training=False)
    x = tfl.GlobalAvgPool2D()(x)
    x = tfl.Dropout(rate=0.2)(x)
    outputs = tfl.Dense(90, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)
    return model
I convert the model to a tflite model like this:
import os  # assumed import for the directory check below

converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Defining the representative dataset from training images.
def representative_dataset_gen():
    for image, label in train_dataset.take(100):
        yield [image]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
# Using Integer Quantization.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Setting the input and output tensors to uint8.
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

if not os.path.isdir('exported'):
    os.mkdir('exported')
with open('/workspace/eff/exported/groups_1.tflite', 'wb') as f:
    f.write(tflite_model)
Environment:
Edge TPU Compiler version 16.0.384591198
Python version 3.6.9
tensorflow 1.15.3
When looking for solutions on Google, someone said you need to get rid of the preprocess_input, but I'm not sure what that means.
How can I check whether there is a dynamic-sized tensor in the model, and how can I fix it?
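For the checking part, one option is to load the converted flatbuffer back into the TFLite interpreter and inspect the tensor shapes. This is only a hedged sketch that assumes the tflite_model bytes from the converter above and a TF 2.x interpreter, where shape_signature reports -1 for dynamic dimensions:
import tensorflow as tf

# Load the converted model and inspect its input/output tensor shapes.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
for detail in interpreter.get_input_details() + interpreter.get_output_details():
    # 'shape' is the allocated shape; 'shape_signature' keeps -1 where the
    # original graph had a dynamic (unknown) dimension.
    print(detail['name'], detail['shape'], detail.get('shape_signature'))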

How to run SUMO python traci script on OMNEST using veins

I am working with SUMO, Veins, and OMNEST.
To run SUMO scenarios on OMNEST, the SUMO input files (.xml) are passed to veins_launchd, which in turn finds an unused port, starts SUMO, and bridges the connection between SUMO and OMNEST.
I want to control a vehicle's behavior (speed changes) in real time, during the simulation.
For this purpose, I have written a TraCI script in Python, which loads the SUMO config file and controls the vehicle speed in real time.
My issue is that I do not know how to make this TraCI script (Python) run on OMNEST via Veins.
Where should I give this Python file as input so that I can visualize the output in OMNEST?
My working environment is Linux.
Based on some research, I figured out two methods:
1. TraCIScenarioManager module
2. Veins_Python
Method 1: I understood that by using the TraCIScenarioManager module, OMNEST can connect directly to the running SUMO. But I don't know where I should make the necessary changes inside the Veins module to use TraCIScenarioManager instead of TraCIScenarioManagerLaunchd.
Method 2: Regarding Veins_Python, I downloaded the source from GitHub and did the configuration steps as described. I used Windows 10 with Veins 5.0, OMNeT++ 5.5.1, and Python 3.6, but I got an error while configuring Veins_Python.
I also tried with more recent versions of the software on Windows 10 (Veins 5.2, OMNEST 5.6.2, and Python 3.10) and still get the same error.
My SUMO TraCI script is:
import traci
import time
import traci.constants as tc
import pytz
import datetime
from random import randrange
import pandas as pd

def getdatetime():
    utc_now = pytz.utc.localize(datetime.datetime.utcnow())
    currentDT = utc_now.astimezone(pytz.timezone("Asia/Tokyo"))
    DATIME = currentDT.strftime("%Y-%m-%d %H:%M:%S")
    return DATIME

def flatten_list(_2d_list):
    flat_list = []
    for element in _2d_list:
        if type(element) is list:
            for item in element:
                flat_list.append(item)
        else:
            flat_list.append(element)
    return flat_list

sumoCmd = ["sumo-gui", "-c", "osm.sumocfg"]
traci.start(sumoCmd)

packVehicleData = []
packTLSData = []
packBigData = []

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    timestep = traci.simulation.getTime()
    vehicles = traci.vehicle.getIDList()
    trafficlights = traci.trafficlight.getIDList()

    for i in range(0, len(vehicles)):
        vehid = vehicles[i]
        x, y = traci.vehicle.getPosition(vehicles[i])
        coord = [x, y]
        lon, lat = traci.simulation.convertGeo(x, y)
        gpscoord = [lon, lat]
        spd = round(traci.vehicle.getSpeed(vehicles[i])*3.6, 2)

        # Packing of all the data for export to CSV/XLSX
        vehList = [getdatetime(), vehid, coord, gpscoord, spd]
        print("Vehicle: ", vehicles[i], " at datetime: ", getdatetime())
        print(vehicles[i], " >>> Position: ", coord, " | GPS Position: ", gpscoord, " |",
              " Speed: ", round(traci.vehicle.getSpeed(vehicles[i])*3.6, 2), "km/h |",
              )

        # Pack Simulated Data
        packBigDataLine = flatten_list([vehList, tlsList])
        packBigData.append(packBigDataLine)

        ##----- CONTROL Vehicles ----##
        #***SET FUNCTION FOR VEHICLES***
        # REF: https://sumo.dlr.de/docs/TraCI/Change_Vehicle_State.html
        NEWSPEED = 15  # value in m/s (15 m/s = 54 km/hr)
        if vehicles[i] == 'veh2':
            traci.vehicle.setSpeedMode('veh2', 0)
            traci.vehicle.setSpeed('veh2', NEWSPEED)

traci.close()

# Generate Excel file
columnnames = ['dateandtime', 'vehid', 'coord', 'gpscoord', 'spd']
dataset = pd.DataFrame(packBigData, index=None, columns=columnnames)
dataset.to_excel("output.xlsx", index=False)
time.sleep(5)
It would be really helpful if you could suggest a procedure or tutorial for executing my SUMO TraCI script on OMNEST using Veins.
I think what you want is not possible, since TraCI supports only a single client (which is Veins in your setup); if you want multiple clients, Veins needs to be changed. You might try to send the commands from inside Veins instead, see How to access TraCI command interface from TraCIDemoRSU11p in Veins Car2X simulator?
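If you do go down the multi-client route, the Python side would look roughly like the sketch below. This is only an assumption-laden outline: SUMO would have to be started with --num-clients 2, Veins itself would have to be modified to declare its client order, and the port here is a placeholder for whatever veins_launchd actually chose.
import traci

# Connect as a *second* TraCI client (SUMO must have been started with
# --num-clients 2; the port below is a placeholder).
traci.connect(port=8813)
traci.setOrder(2)  # Veins would also need to call setOrder(1) on its connection

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    if 'veh2' in traci.vehicle.getIDList():
        traci.vehicle.setSpeedMode('veh2', 0)
        traci.vehicle.setSpeed('veh2', 15)  # m/s

traci.close()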

Pytorch Lightning Automatic Logging - AttributeError: 'NoneType' object has no attribute '_results'

Unable to use Automatic Logging (self.log) when calling training_step() on Pytorch Lightning, what am I missing? Here is a minimal example:
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(100, 4)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y.long())
        self.log("train_loss", loss)  # <-- error
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)

pl_model = LitModel()
x = torch.rand((10, 100))
y = torch.randint(0, 4, size=(10,))
batch = (x, y)
loss = pl_model.training_step(batch, 0)
Error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-34-b9419bfca30f> in <module>
25 y = torch.randint(0,4, size=(10,))
26 batch = (x,y)
---> 27 loss = pl_model.training_step(batch, 0)
<ipython-input-34-b9419bfca30f> in training_step(self, batch, batch_idx)
14 y_hat = self(x)
15 loss = F.cross_entropy(y_hat, y.long())
---> 16 self.log("train_loss", loss)
17 return loss
18
D:\programs\anaconda3\lib\site-packages\pytorch_lightning\core\lightning.py in log(self, name, value, prog_bar, logger, on_step, on_epoch, reduce_fx, tbptt_reduce_fx, tbptt_pad_token, enable_graph, sync_dist, sync_dist_op, sync_dist_group, add_dataloader_idx, batch_size, metric_attribute, rank_zero_only)
405 on_epoch = self.__auto_choose_log_on_epoch(on_epoch)
406
--> 407 results = self.trainer._results
408 assert results is not None
409 assert self._current_fx_name is not None
AttributeError: 'NoneType' object has no attribute '_results'
This is NOT the correct usage of the LightningModule class. You can't call a hook (namely .training_step()) manually and expect everything to work.
You need to set up a Trainer as suggested by PyTorch Lightning at the very start of its tutorial - it is a requirement. The functions (or hooks) that you define in a LightningModule merely tell Lightning "what to do" in a specific situation (in this case, at each training step). It is the Trainer that actually "orchestrates" the training by instantiating the necessary environment (including the logging functionality) and feeding it into the LightningModule whenever needed.
So, do it the way Lightning suggests and it will work.
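As a rough sketch of that workflow (toy tensors only, reusing the LitModel defined above):
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

# Wrap the toy data in a DataLoader so the Trainer can feed batches in.
x = torch.rand((10, 100))
y = torch.randint(0, 4, size=(10,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=5)

model = LitModel()
trainer = pl.Trainer(max_epochs=2)   # the Trainer wires up logging internally
trainer.fit(model, train_loader)     # training_step (and self.log) now work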

Is there any good way to rewrite the edgetpu old code by using pycoral api?

I'm a beginner using coral devboard mini.
I want to start a Smart Bird Feeder project.
https://coral.ai/projects/bird-feeder/
I've been trying to execute the code by referring to that project, but I can't run bird_classify.py.
The error is as follows:
RuntimeError: Internal: Unsupported data type in custom op handler: 0
Node number 0 (edgetpu-custom-op) failed to prepare.
Originally, the samples in this project seem to be deprecated: the edgetpu library requires the old runtime version 13 instead of the current 14 (tflite is 2.5). I downloaded it directly and re-installed it in /usr/lib/python3/dist-packages, but I cannot uninstall the new version and cannot get the versions to match.
Is there a better way to do this?
Also, I've considered giving up on reproducing the sample's environment and using the pycoral API to run the code instead.
If there is a good way to rewrite the code to use pycoral, please let me know.
Thanks
#!/usr/bin/python3
"""
Coral Smart Bird Feeder

Uses ClassificationEngine from the EdgeTPU API to analyze animals in
camera frames. Sounds a deterrent if a squirrel is detected.

Users define model, labels file, storage path, deterrent sound, and
optionally can set this to training mode for collecting images for a custom
model.
"""
import argparse
import time
import re
import imp
import logging
import gstreamer
import sys
sys.path.append('/usr/lib/python3/dist-packages/edgetpu')
from edgetpu.classification.engine import ClassificationEngine
from PIL import Image
from playsound import playsound

from pycoral.adapters import classify
from pycoral.adapters import common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter


def save_data(image, results, path, ext='png'):
    """Saves camera frame and model inference results
    to user-defined storage directory."""
    tag = '%010d' % int(time.monotonic()*1000)
    name = '%s/img-%s.%s' % (path, tag, ext)
    image.save(name)
    print('Frame saved as: %s' % name)
    logging.info('Image: %s Results: %s', tag, results)


def load_labels(path):
    """Parses provided label file for use in model inference."""
    p = re.compile(r'\s*(\d+)(.+)')
    with open(path, 'r', encoding='utf-8') as f:
        lines = (p.match(line).groups() for line in f.readlines())
        return {int(num): text.strip() for num, text in lines}


def print_results(start_time, last_time, end_time, results):
    """Print results to terminal for debugging."""
    inference_rate = ((end_time - start_time) * 1000)
    fps = (1.0/(end_time - last_time))
    print('\nInference: %.2f ms, FPS: %.2f fps' % (inference_rate, fps))
    for label, score in results:
        print(' %s, score=%.2f' % (label, score))


def do_training(results, last_results, top_k):
    """Compares current model results to previous results and returns
    true if at least one label difference is detected. Used to collect
    images for training a custom model."""
    new_labels = [label[0] for label in results]
    old_labels = [label[0] for label in last_results]
    shared_labels = set(new_labels).intersection(old_labels)
    if len(shared_labels) < top_k:
        print('Difference detected')
        return True


def user_selections():
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', required=True,
                        help='.tflite model path')
    parser.add_argument('--labels', required=True,
                        help='label file path')
    parser.add_argument('--top_k', type=int, default=3,
                        help='number of classes with highest score to display')
    parser.add_argument('--threshold', type=float, default=0.1,
                        help='class score threshold')
    parser.add_argument('--storage', required=True,
                        help='File path to store images and results')
    parser.add_argument('--sound', required=True,
                        help='File path to deterrent sound')
    parser.add_argument('--print', default=False, required=False,
                        help='Print inference results to terminal')
    parser.add_argument('--training', default=False, required=False,
                        help='Training mode for image collection')
    args = parser.parse_args()
    return args


def main():
    """Creates camera pipeline, and pushes pipeline through ClassificationEngine
    model. Logs results to user-defined storage. Runs either in training mode to
    gather images for custom model creation or in deterrent mode that sounds an
    'alarm' if a defined label is detected."""
    args = user_selections()
    print("Loading %s with %s labels." % (args.model, args.labels))
    engine = ClassificationEngine(args.model)
    labels = load_labels(args.labels)
    storage_dir = args.storage

    # Initialize logging file
    logging.basicConfig(filename='%s/results.log' % storage_dir,
                        format='%(asctime)s-%(message)s',
                        level=logging.DEBUG)
    last_time = time.monotonic()
    last_results = [('label', 0)]

    def user_callback(image, svg_canvas):
        nonlocal last_time
        nonlocal last_results
        start_time = time.monotonic()
        results = engine.classify_with_image(image, threshold=args.threshold, top_k=args.top_k)
        end_time = time.monotonic()
        results = [(labels[i], score) for i, score in results]
        if args.print:
            print_results(start_time, last_time, end_time, results)
        if args.training:
            if do_training(results, last_results, args.top_k):
                save_data(image, results, storage_dir)
        else:
            # Custom model mode:
            # The labels can be modified to detect/deter user-selected items
            if results[0][0] != 'background':
                save_data(image, storage_dir, results)
            if 'fox squirrel, eastern fox squirrel, Sciurus niger' in results:
                playsound(args.sound)
                logging.info('Deterrent sounded')
        last_results = results
        last_time = end_time

    result = gstreamer.run_pipeline(user_callback)


if __name__ == '__main__':
    main()
I suggest that you follow one of the examples available from the Coral examples repository. There is an example named classify_image.py which uses the Edge TPU (tflite) and which I found works. After you install the Coral examples, you have to drill down through the directory hierarchy; in my case, from root it is /home/pi/ml-projects/coral/pycoral/tensorflow/examples/lite/examples. There are 17 files in that last examples directory. I'm using: numpy 1.19.3, pycoral 2.0.0, scipy 1.7.1, tensorflow 2.4.0, tflite-runtime 2.5.0.post1. I've installed the following edgetpu-runtime: edgetpu_runtime_20201105.zip.
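For reference, the core of that classify_image.py pattern with the PyCoral API looks roughly like the sketch below; the file names and thresholds here are placeholders, not the bird-feeder project's actual values.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Load the Edge TPU compiled model and its labels (placeholder paths).
interpreter = make_interpreter('model_edgetpu.tflite')
interpreter.allocate_tensors()
labels = read_label_file('labels.txt')

# Resize the frame to the model's expected input size and run inference.
image = Image.open('frame.png').convert('RGB').resize(
    common.input_size(interpreter), Image.ANTIALIAS)
common.set_input(interpreter, image)
interpreter.invoke()

# Report the top classes above a score threshold.
for c in classify.get_classes(interpreter, top_k=3, score_threshold=0.1):
    print('%s, score=%.2f' % (labels.get(c.id, c.id), c.score))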

PyMC: Directly changing an object's name doesn't apply when pulling out traces

Here is a bare bit of code which produces an error:
import pymc
import numpy as np
a = pymc.Normal('a', 1, 1)
b = np.empty(4, dtype=object)
for i in range(4):
    b[i] = 1*a
    b[i].__name__ = 'b_%i' % i
M = pymc.MCMC([a,b])
M.sample(10)
M.trace('b_0') # Causes a KeyError:'b_0'
I don't understand why I get a KeyError: 'b_0' when I try to extract the trace of b_0 (and all the other b's). Are the traces just not being saved? If so, is there a way to directly flick some switch to change that without having to build the object with @deterministic?
I looked through it; apparently the trace wasn't being saved. Also, the "flag variable" for keeping the trace isn't .trace, it's .keep_trace.
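As an aside, a hedged alternative (assuming PyMC 2.x) is to build each b_i as a named Deterministic with pymc.Lambda, which is tracked and traced by default:
import numpy as np
import pymc

a = pymc.Normal('a', 1, 1)
b = np.empty(4, dtype=object)
for i in range(4):
    # A named Deterministic node; traced under 'b_i' by default.
    b[i] = pymc.Lambda('b_%i' % i, lambda a=a: 1*a)

M = pymc.MCMC([a, b])
M.sample(10)
print(M.trace('b_0')[:])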
