How to run a SUMO Python TraCI script on OMNEST using Veins - OMNeT++

I am working with SUMO, Veins and OMNEST.
To run SUMO files on OMNEST, the SUMO input files (.xml) are passed to veins_launchd, which in turn finds an unused port, starts SUMO and bridges the connection between SUMO and OMNEST.
I want to control a vehicle's behavior (speed change) in real time (during the simulation).
For this purpose, I have written a TraCI script in Python, which calls the SUMO config file and controls the vehicle speed in real time.
My issue is that I do not know how to make this TraCI script (Python) run on OMNEST via Veins.
Where should I give this Python file as input so that I can visualize the output in OMNEST?
My working environment is Linux.
Based on some research, I figured out 2 methods:
1. TraCIScenarioManager module
2. Veins_Python
Method 1: I understood that by using the TraCIScenarioManager module, OMNEST can directly connect to the already running SUMO.
But I don't know where I should make the necessary changes inside the Veins module to use TraCIScenarioManager instead of TraCIScenarioManagerLaunchd.
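For what it's worth, here is a minimal sketch of what Method 1 could look like; the module and parameter names below follow the Veins example scenario and should be treated as assumptions for other setups. The manager module is declared in the scenario's .ned file, so the switch happens there, and omnetpp.ini then points the manager at a SUMO instance started manually:

// in the scenario .ned file, replace
//     manager: TraCIScenarioManagerLaunchd;
// with
//     manager: TraCIScenarioManager;

# in omnetpp.ini, connect to a SUMO instance started by hand, e.g.:
#     sumo-gui --remote-port 9999 -c osm.sumocfg
*.manager.host = "localhost"
*.manager.port = 9999
*.manager.updateInterval = 1s
*.manager.moduleType = "org.car2x.veins.nodes.Car"

Note that this still gives SUMO a single TraCI client (Veins itself), so it does not by itself allow a separate Python script to connect at the same time.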
Method 2: Regarding Veins_Python, I downloaded the source from GitHub and did the configuration steps as mentioned.
I used Windows 10 with versions Veins 5.0, OMNeT++ 5.5.1 and Python 3.6, but I got an error while configuring Veins_Python.
I also tried with more recent versions of the software on Windows 10 (Veins 5.2, OMNEST 5.6.2 and Python 3.10), but I still get the same error.
My SUMO TraCI script is:
import traci
import time
import traci.constants as tc
import pytz
import datetime
from random import randrange
import pandas as pd


def getdatetime():
    utc_now = pytz.utc.localize(datetime.datetime.utcnow())
    currentDT = utc_now.astimezone(pytz.timezone("Asia/Tokyo"))
    DATIME = currentDT.strftime("%Y-%m-%d %H:%M:%S")
    return DATIME


def flatten_list(_2d_list):
    flat_list = []
    for element in _2d_list:
        if type(element) is list:
            for item in element:
                flat_list.append(item)
        else:
            flat_list.append(element)
    return flat_list


sumoCmd = ["sumo-gui", "-c", "osm.sumocfg"]
traci.start(sumoCmd)

packVehicleData = []
packTLSData = []
packBigData = []

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()

    timestep = traci.simulation.getTime()
    vehicles = traci.vehicle.getIDList()
    trafficlights = traci.trafficlight.getIDList()

    for i in range(0, len(vehicles)):
        vehid = vehicles[i]
        x, y = traci.vehicle.getPosition(vehicles[i])
        coord = [x, y]
        lon, lat = traci.simulation.convertGeo(x, y)
        gpscoord = [lon, lat]
        spd = round(traci.vehicle.getSpeed(vehicles[i]) * 3.6, 2)

        # Packing of all the data for export to CSV/XLSX
        vehList = [getdatetime(), vehid, coord, gpscoord, spd]

        print("Vehicle: ", vehicles[i], " at datetime: ", getdatetime())
        print(vehicles[i], " >>> Position: ", coord, " | GPS Position: ", gpscoord,
              " | Speed: ", spd, "km/h |")

        # Pack simulated data (tlsList would hold traffic light data;
        # its collection loop is not shown here, so it stays empty)
        tlsList = []
        packBigDataLine = flatten_list([vehList, tlsList])
        packBigData.append(packBigDataLine)

        ##----- CONTROL Vehicles ----##
        # ***SET FUNCTION FOR VEHICLES***
        # REF: https://sumo.dlr.de/docs/TraCI/Change_Vehicle_State.html
        NEWSPEED = 15  # value in m/s (15 m/s = 54 km/h)
        if vehicles[i] == 'veh2':
            traci.vehicle.setSpeedMode('veh2', 0)
            traci.vehicle.setSpeed('veh2', NEWSPEED)

traci.close()

# Generate Excel file
columnnames = ['dateandtime', 'vehid', 'coord', 'gpscoord', 'spd']
dataset = pd.DataFrame(packBigData, index=None, columns=columnnames)
dataset.to_excel("output.xlsx", index=False)
time.sleep(5)
It would be really helpful if you could suggest a procedure or tutorial for executing my SUMO TraCI script on OMNEST using Veins.

I think what you want is not possible, since TraCI supports only a single client (which is Veins in your setup); if you want multiple clients, Veins needs to be changed. You might try to send the commands from inside Veins instead, see How to access TraCI command interface from TraCIDemoRSU11p in Veins Car2X simulator?
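If you do experiment with SUMO's own multi-client support (which the stock veins_launchd setup does not enable, so changes on the Veins side would be needed, e.g. starting SUMO with --num-clients 2), a second Python client could attach roughly like this; the port number and client order below are assumptions for illustration:

import traci

# Attach as the second TraCI client to a SUMO instance started with
# --num-clients 2 --remote-port 8813 (both values are assumptions)
traci.init(port=8813)
traci.setOrder(2)  # each client must declare a unique execution order

while traci.simulation.getMinExpectedNumber() > 0:
    # simulationStep() only advances once every client has stepped
    traci.simulationStep()
    if 'veh2' in traci.vehicle.getIDList():
        traci.vehicle.setSpeedMode('veh2', 0)
        traci.vehicle.setSpeed('veh2', 15)  # m/s

traci.close()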

Related

Is there any good way to rewrite old edgetpu code using the pycoral API?

I'm a beginner using the Coral Dev Board Mini.
I want to start the Smart Bird Feeder project:
https://coral.ai/projects/bird-feeder/
I've been trying to execute the code by referring to the project page, but I can't run bird_classify.py.
The error is as follows:
RuntimeError: Internal: Unsupported data type in custom op handler: 0
Node number 0 (edgetpu-custom-op) failed to prepare.
Originally, the samples in this project seem to be deprecated, and the edgetpu API requires the old runtime version 13 instead of the current 14 (tflite is 2.5). I downloaded it directly and re-installed it in /usr/lib/python3/dist-packages, but I cannot uninstall the new version and cannot match the versions.
Is there a better way to do this?
Also, I've decided to give up on reproducing the same environment as the sample and to use the pycoral API instead.
If there is a good way to rewrite the code to use pycoral, please let me know.
Thanks
#!/usr/bin/python3
"""
Coral Smart Bird Feeder

Uses ClassificationEngine from the EdgeTPU API to analyze animals in
camera frames. Sounds a deterrent if a squirrel is detected.

Users define model, labels file, storage path, deterrent sound, and
optionally can set this to training mode for collecting images for a custom
model.
"""
import argparse
import time
import re
import imp
import logging
import gstreamer
import sys

sys.path.append('/usr/lib/python3/dist-packages/edgetpu')
from edgetpu.classification.engine import ClassificationEngine
from PIL import Image
from playsound import playsound

from pycoral.adapters import classify
from pycoral.adapters import common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter


def save_data(image, results, path, ext='png'):
    """Saves camera frame and model inference results
    to user-defined storage directory."""
    tag = '%010d' % int(time.monotonic() * 1000)
    name = '%s/img-%s.%s' % (path, tag, ext)
    image.save(name)
    print('Frame saved as: %s' % name)
    logging.info('Image: %s Results: %s', tag, results)


def load_labels(path):
    """Parses provided label file for use in model inference."""
    p = re.compile(r'\s*(\d+)(.+)')
    with open(path, 'r', encoding='utf-8') as f:
        lines = (p.match(line).groups() for line in f.readlines())
        return {int(num): text.strip() for num, text in lines}


def print_results(start_time, last_time, end_time, results):
    """Print results to terminal for debugging."""
    inference_rate = ((end_time - start_time) * 1000)
    fps = (1.0 / (end_time - last_time))
    print('\nInference: %.2f ms, FPS: %.2f fps' % (inference_rate, fps))
    for label, score in results:
        print(' %s, score=%.2f' % (label, score))


def do_training(results, last_results, top_k):
    """Compares current model results to previous results and returns
    true if at least one label difference is detected. Used to collect
    images for training a custom model."""
    new_labels = [label[0] for label in results]
    old_labels = [label[0] for label in last_results]
    shared_labels = set(new_labels).intersection(old_labels)
    if len(shared_labels) < top_k:
        print('Difference detected')
        return True


def user_selections():
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', required=True,
                        help='.tflite model path')
    parser.add_argument('--labels', required=True,
                        help='label file path')
    parser.add_argument('--top_k', type=int, default=3,
                        help='number of classes with highest score to display')
    parser.add_argument('--threshold', type=float, default=0.1,
                        help='class score threshold')
    parser.add_argument('--storage', required=True,
                        help='File path to store images and results')
    parser.add_argument('--sound', required=True,
                        help='File path to deterrent sound')
    parser.add_argument('--print', default=False, required=False,
                        help='Print inference results to terminal')
    parser.add_argument('--training', default=False, required=False,
                        help='Training mode for image collection')
    args = parser.parse_args()
    return args


def main():
    """Creates camera pipeline, and pushes pipeline through ClassificationEngine
    model. Logs results to user-defined storage. Runs either in training mode to
    gather images for custom model creation or in deterrent mode that sounds an
    'alarm' if a defined label is detected."""
    args = user_selections()
    print("Loading %s with %s labels." % (args.model, args.labels))
    engine = ClassificationEngine(args.model)
    labels = load_labels(args.labels)
    storage_dir = args.storage

    # Initialize logging file
    logging.basicConfig(filename='%s/results.log' % storage_dir,
                        format='%(asctime)s-%(message)s',
                        level=logging.DEBUG)

    last_time = time.monotonic()
    last_results = [('label', 0)]

    def user_callback(image, svg_canvas):
        nonlocal last_time
        nonlocal last_results
        start_time = time.monotonic()
        results = engine.classify_with_image(image, threshold=args.threshold, top_k=args.top_k)
        end_time = time.monotonic()
        results = [(labels[i], score) for i, score in results]
        if args.print:
            print_results(start_time, last_time, end_time, results)
        if args.training:
            if do_training(results, last_results, args.top_k):
                save_data(image, results, storage_dir)
        else:
            # Custom model mode:
            # The labels can be modified to detect/deter user-selected items
            if results[0][0] != 'background':
                save_data(image, results, storage_dir)
            if 'fox squirrel, eastern fox squirrel, Sciurus niger' in results:
                playsound(args.sound)
                logging.info('Deterrent sounded')
        last_results = results
        last_time = end_time

    result = gstreamer.run_pipeline(user_callback)


if __name__ == '__main__':
    main()
I suggest that you follow one of the examples available from the Coral examples. There is an example named classify_image.py which uses the edgetpu (tflite) runtime and which I found works. After you install the Coral examples, you have to drill down through the directory hierarchy; in my case, from root it is /home/pi/ml-projects/coral/pycoral/tensorflow/examples/lite/examples. There are 17 files in that last examples directory. I'm using: numpy 1.19.3, pycoral 2.0.0, scipy 1.7.1, tensorflow 2.4.0, tflite-runtime 2.5.0.post1. I've installed the following edgetpu runtime: edgetpu_runtime_20201105.zip.
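As a starting point for the rewrite asked about above, here is a minimal sketch of how the ClassificationEngine call in the question's code could map onto the pycoral API (the model, label and image paths are placeholders):

from PIL import Image
from pycoral.adapters import classify
from pycoral.adapters import common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# ClassificationEngine(args.model) becomes an interpreter
interpreter = make_interpreter('model_edgetpu.tflite')
interpreter.allocate_tensors()

# load_labels() can be replaced by pycoral's label reader
labels = read_label_file('labels.txt')

# classify_with_image(image, threshold, top_k) becomes
# set_input / invoke / get_classes
image = Image.open('frame.png').convert('RGB').resize(
    common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)
interpreter.invoke()
classes = classify.get_classes(interpreter, top_k=3, score_threshold=0.1)
results = [(labels.get(c.id, c.id), c.score) for c in classes]
print(results)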

HTTP error when I run the APMonitor nlc example in Python

I've been interested in learning MPC control, and I wanted to try the nlc Python example found here:
http://apmonitor.com/wiki/index.php/Main/PythonApp
When I ran the initial demo example, I got an HTTP error. I was able to run the demo by changing the instances of "http" to "https" in the apm.py file, similar to the problem found here:
https://github.com/olivierhagolle/LANDSAT-Download/issues/33
I've now been trying to run the nlc example and I'm getting the same kind of error (shown below). However, changing the instances of "http" to "https" no longer seems to help.
Traceback (most recent call last):
  File "C:\Users\veli95839\Documents\Python\Scripts\example_nlc\nlc.py", line 88, in <module>
    response = apm_meas(server,app,x,value)
  File "C:\Users\veli95839\Documents\Python\Scripts\example_nlc\apm.py", line 607, in load_meas
    f = urllib.request.urlopen(url_base,params_en)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 640, in http_response
    response = self.parent.error(
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 502, in _call_chain
    result = func(*args)
  File "C:\Users\veli95839\Documents\Python\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 503: Service Unavailable
Please let me know if anyone has experienced similar issues!
Thanks,
Claire
You may have received that error if your computer was not connected to the Internet or the server was unavailable at the time you ran the test. You can install a local APM server (for Windows or Linux) to avoid any disruptions. Another option is to switch to Python gekko, which uses the same underlying APM engine but can run locally with remote=False. Here is the same MPC example in Python gekko:
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO(remote=False)
m.time = np.linspace(0,20,41)
# Parameters
mass = 500
b = m.Param(value=50)
K = m.Param(value=0.8)
# Manipulated variable
p = m.MV(value=0, lb=0, ub=100)
p.STATUS = 1 # allow optimizer to change
p.DCOST = 0.1 # smooth out gas pedal movement
p.DMAX = 20 # slow down change of gas pedal
# Controlled Variable
v = m.CV(value=0)
v.STATUS = 1 # add the SP to the objective
m.options.CV_TYPE = 2 # squared error
v.SP = 40 # set point
v.TR_INIT = 1 # set point trajectory
v.TAU = 5 # time constant of trajectory
# Process model
m.Equation(mass*v.dt() == -v*b + K*b*p)
m.options.IMODE = 6 # control
m.solve(disp=False,GUI=True)
# get additional solution information
import json
with open(m.path+'//results.json') as f:
    results = json.load(f)
plt.figure()
plt.subplot(2,1,1)
plt.plot(m.time,p.value,'b-',label='MV Optimized')
plt.legend()
plt.ylabel('Input')
plt.subplot(2,1,2)
plt.plot(m.time,results['v1.tr'],'k-',label='Reference Trajectory')
plt.plot(m.time,v.value,'r--',label='CV Response')
plt.ylabel('Output')
plt.xlabel('Time')
plt.legend(loc='best')
plt.show()
If you'd like more information on either APM Python or gekko for nonlinear or linear model predictive control, there is a series of tutorials with the Temperature Control Lab (TCLab):
Advanced Control Labs with Solutions
Digital Twin Model Development
Lab A - Single Heater Modeling
Lab B - Dual Heater Modeling
Machine Learning with Parameter and State Estimation
Lab C - Parameter Estimation
Lab D - Empirical Model Estimation
Lab E - Hybrid Model Estimation
Model Predictive Control
Lab F - Linear Model Predictive Control
Lab G - Nonlinear Model Predictive Control
Lab H - Moving Horizon Estimation with MPC
These exercises teach how to do first principles or empirical modeling, state estimation, and predictive control. Here is an example of combined Moving Horizon Estimation and Model Predictive Control from Lab H.

How to run particular code on the GPU using PyTorch?

I am using image processing code in Python OpenCV. Since the process takes a lot of time for, say, 30 images, I tried to process the images in parallel using multiprocessing. The multiprocessing part works well on the CPU, but I want to use it on the GPU (CUDA).
I use torch.multiprocessing for running tasks in parallel, so I am using torch.device('cuda') for our class to run the whole thing on that particular device. When I run the code it shows the device in use as "cuda", but it is not using any GPU processing.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torch.multiprocessing import Process, Pool, Manager, set_start_method
import sys
import os


class RoadShoulderWidth(nn.Module):
    def __init__(self):
        super(RoadShoulderWidth, self).__init__()
        pass

    # Want to run the method below in parallel for 30 images.
    @staticmethod
    def get_dim(image, road_shoulder_width_list):
        ..... code

    def get_road_shoulder_width(self, _root_dir, _img_path_list):
        manager = Manager()
        road_shoulder_width_list = manager.list()
        processes = []
        for img_path in _img_path_list[:30]:
            img = cv2.imread(_root_dir + '/' + img_path)
            img = img[72 * 5:72 * 6, 0:1280]
            # Do work
            p = Process(target=self.get_dim, args=(img, road_shoulder_width_list))
            p.start()
            processes.append(p)
        for p in processes:
            p.join()
        return road_shoulder_width_list
Use the code below to run your class:
if __name__ == '__main__':
    root_dir = '/home/nikhil_m/r'
    img_path_list = os.listdir(root_dir)
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print('Using device:', device)
    dataloader_kwargs = {'pin_memory': True}
    set_start_method('fork')
    obj = RoadShoulderWidth().to(device)
    val = obj.get_road_shoulder_width(str(root_dir), img_path_list)
    print(val)
    print(torch.cuda.is_available())
Can anybody suggest how to fix this?
Your class RoadShoulderWidth is an nn.Module subclass, which lets you use .to(device). This only means that all other nn.Module objects or nn.Parameters that are members of your RoadShoulderWidth object are moved to the device. Since, as in your example, there are none, nothing happens.
In general, PyTorch does not move code to the GPU, but data. If all the data of a PyTorch operation is on the GPU (e.g. for a + b, both a and b are on the GPU), then the operation is executed on the GPU. You can move the data with a.to(device), given a is a torch.Tensor object.
PyTorch can only execute its own operations on the GPU; it is not able to execute OpenCV code on the GPU.
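To make the distinction concrete, here is a minimal sketch (array shapes and values are arbitrary placeholders) of moving data, not code, to the GPU:

import numpy as np
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Both operands live on the GPU, so the addition runs on the GPU
a = torch.randn(1000, 1000, device=device)
b = torch.randn(1000, 1000, device=device)
c = a + b
print(c.device)

# An OpenCV/NumPy image must first be converted to a tensor and moved;
# the OpenCV processing itself still happens on the CPU
img = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for cv2.imread(...)
img_t = torch.from_numpy(img).to(device)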

`ProcessPoolExecutor` works on Ubuntu, but fails with `BrokenProcessPool` when running Jupyter 5.0.0 notebook with Python 3.5.3 on Windows 10

I'm running Jupyter 5.0.0 notebook with Python 3.5.3 on Windows 10. The following example code fails to run:
from concurrent.futures import as_completed, ProcessPoolExecutor
import time
import numpy as np

def do_work(idx1, idx2):
    time.sleep(0.2)
    return np.mean([idx1, idx2])

with ProcessPoolExecutor(max_workers=4) as executor:
    futures = set()
    for idx in range(32):
        future = executor.submit(do_work, idx, idx * 2)
        futures.add(future)
    for future in as_completed(futures):
        print(future.result())
... and throws BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
The code works perfectly fine on Ubuntu 14.04.
I understand that Windows doesn't have os.fork, so multiprocessing is handled differently and doesn't always play nicely with interactive mode and Jupyter.
What are some workarounds to make ProcessPoolExecutor work in this case?
There are some similar questions, but they relate to multiprocessing.Pool:
multiprocessing.Pool in jupyter notebook works on linux but not windows
Closer inspection shows that a Jupyter notebook can run external Python modules which are parallelized using ProcessPoolExecutor. So, one solution is to put the parallelizable part of your code in a module and call it from the Jupyter notebook (a sketch of this appears at the end of this answer).
That said, this can be generalized as a utility. The following can be stored as a module, say winprocess.py, and imported by Jupyter.
import inspect
import types

def execute_source(callback_imports, callback_name, callback_source, args):
    # Re-create the imports, then the function itself, inside the worker process
    for callback_import in callback_imports:
        exec(callback_import, globals())
    exec('import time' + "\n" + callback_source)
    callback = locals()[callback_name]
    return callback(*args)

def submit(executor, callback, *args):
    # Serialize the function source plus its module imports, and submit a
    # plain module-level function (execute_source) that rebuilds and runs it
    callback_source = inspect.getsource(callback)
    callback_imports = list(imports(callback.__globals__))
    callback_name = callback.__name__
    future = executor.submit(
        execute_source,
        callback_imports, callback_name, callback_source, args
    )
    return future

def imports(callback_globals):
    # Yield 'import x [as y]' lines for every module visible to the callback
    for name, val in list(callback_globals.items()):
        if isinstance(val, types.ModuleType) and val.__name__ != 'builtins' and val.__name__ != __name__:
            import_line = 'import ' + val.__name__
            if val.__name__ != name:
                import_line += ' as ' + name
            yield import_line
Here is how you would use this:
from concurrent.futures import as_completed, ProcessPoolExecutor
import time
import numpy as np
import winprocess

def do_work(idx1, idx2):
    time.sleep(0.2)
    return np.mean([idx1, idx2])

with ProcessPoolExecutor(max_workers=4) as executor:
    futures = set()
    for idx in range(32):
        future = winprocess.submit(
            executor, do_work, idx, idx * 2
        )
        futures.add(future)
    for future in as_completed(futures):
        print(future.result())
Notice that winprocess.submit is used instead of calling executor.submit directly, and the original executor is passed to the submit function as a parameter.
What happens here is that the notebook function's code and imports are serialized and passed to the module for execution. The code is not executed until it is safely inside a new process, so it does not trip up trying to spawn a new process from the Jupyter notebook itself.
Imports are handled in such a way as to maintain aliases. The import magic can be removed if you make sure to import everything needed for the executed function inside the function itself.
Also, this solution only works if you pass all necessary variables as arguments to the function. The function should be static, so to speak, but I think that's a requirement of ProcessPoolExecutor as well. Finally, make sure you don't call other functions defined elsewhere in the notebook: only external modules will be imported, so other notebook functions won't be available.
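For completeness, here is a minimal sketch of the simpler workaround mentioned at the top of this answer (the file name workers.py is an assumption): keep the worker function in an external module so Windows child processes can import it by name, and call it from the notebook.

# workers.py
import time
import numpy as np

def do_work(idx1, idx2):
    time.sleep(0.2)
    return np.mean([idx1, idx2])

# In the notebook:
from concurrent.futures import as_completed, ProcessPoolExecutor
from workers import do_work

with ProcessPoolExecutor(max_workers=4) as executor:
    futures = {executor.submit(do_work, idx, idx * 2) for idx in range(32)}
    for future in as_completed(futures):
        print(future.result())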

SUMO and OMNeT++

I just edited my map and I can run it with 100 cars using SUMO. Now I want to launch this map and these cars with OMNeT++. I created a launch file, then went to the ini file and set the launch file:
[Config DSRUU-City]
description = "DSRUU"
*.numHosts = 100
*.manager.launchConfig = xmldoc("..\\_maps\\Mapsprojec\\City.launchd.xml")
*.**.nic.phy80211p.decider = xmldoc("..\\_maps\\Mapsprojec\\config.xml")
*.**.nic.phy80211p.analogueModels = xmldoc("..\\_maps\\Mapsprojec\\config.xml")
*.playgroundSizeX = 3000m
*.playgroundSizeY = 3000m
*.playgroundSizeZ = 50m
**.roiRects = "0,100-2200,2000"  # x,y-X,Y
My problem: when I run the simulation, I can't see the cars as nodes inside OMNeT++, although I can see them in the SUMO GUI at the same time. How can I solve this issue, so that I see the cars as nodes in OMNeT++ and as cars in SUMO? Thanks in advance.
Make sure to enable the drawing of annotations in your omnetpp.ini by adding (or modifying) the line:
*.annotations.draw = true
