Reduce the simulation time of a large scenario - OMNeT++

I simulated a large scenario (using the whole map of a city) with OMNeT++ 5.0, Veins 4.5 and SUMO 0.29.0.
The first scenario I simulated took one week to run.
I need to know the computing time used by Veins, compared to SUMO and OMNeT++. I already asked about SUMO and was told that SUMO takes 1 second to load the network and 0.3 seconds to run the simulation.
Is there a tool to reduce the simulation time of my scenario?

Related

How to calculate the average waiting time of nodes in OMNeT++?

I'm using Instant Veins 5.2, SUMO 1.11.0 and OMNeT++ 5.7 to simulate V2I communication between vehicles and RSUs. Is there any way to obtain results for the average waiting time of the vehicle nodes during a planned accident? The problem is that the waiting-time results are identical for every simulation run.
What I am trying to achieve is to verify whether traffic movement on the road map improves after applying different ranges in each simulation.
Thank you in advance.

How can I reduce computation time in an OPL script using CPLEX?

I'm an engineering student and a new user of CPLEX and OPL. I modelled an electric vehicle scheduling problem using OPL in CPLEX.
It takes around 20 minutes to give me an optimal solution for an instance of 4 service trips, 2 depots and 2 charging stations.
I'm currently trying to run a real example with 100 service trips, 1 depot and 1 charging station, but it is taking forever to get an answer (it has been running for the last 17 hours).
Any suggestions on how to speed up the process?
You could set a time limit; then, instead of a proven optimum, you'll get the best solution found within that limit. You could also use the profiler to see where the time is spent.
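For illustration, here is roughly what that looks like from the C++ Concert API (a sketch, not a full program; the 30-minute limit and 1% gap are arbitrary example values -- in OPL you can set the same limits with cplex.tilim and cplex.epgap in an execute block):

#include <ilcplex/ilocplex.h>
#include <iostream>

int main() {
    IloEnv env;
    try {
        IloModel model(env);
        // ... build the EV scheduling model here ...

        IloCplex cplex(model);
        // Stop after 30 minutes; CPLEX then returns the best feasible
        // solution found so far, together with the remaining gap.
        cplex.setParam(IloCplex::Param::TimeLimit, 1800);
        // Alternatively (or additionally): accept any solution proven
        // to be within 1% of the optimum.
        cplex.setParam(IloCplex::Param::MIP::Tolerances::MIPGap, 0.01);

        if (cplex.solve()) {
            env.out() << "objective: " << cplex.getObjValue()
                      << ", relative gap: " << cplex.getMIPRelativeGap() << std::endl;
        }
    } catch (IloException& e) {
        std::cerr << "CPLEX error: " << e << std::endl;
    }
    env.end();
    return 0;
}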
A few links to help you:
Performance Tuning Using CPLEX Optimization Studio IDE
MIP Tuning
CPO Introduction

How to get lane statistics (Scalars or Vectors) using Veins

I am trying to collect the following data for each single lane using Veins: throughput, density, mean speed, delay and collisions. I know TraCI has the Simulation Value Retrieval, which can provide some of the information I need, and the Lane Value Retrieval can also help. But I have no clue where I should put the custom code so that the statistics are recorded properly. For example, if I want to collect the density and the mean speed of each lane every minute of simulation time, which class should I put my code in? TraCIScenarioManager?
Any suggestion is appreciated.
I think putting the code in TraCIScenarioManager is entirely reasonable. If you want per-vehicle statistics, I'd recommend putting them in the vehicles' application code, the way Veins already collects some statistics out of the box.
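To sketch the idea (an outline only, not drop-in code: getLaneIds() and Lane::getMeanSpeed() are assumed wrappers -- check TraCICommandInterface.h in your Veins version and, if they are missing, extend it with the corresponding TraCI lane-variable retrieval commands):

#include <map>
#include <string>
#include "veins/modules/mobility/traci/TraCIScenarioManager.h"
#include "veins/modules/mobility/traci/TraCICommandInterface.h"

using namespace omnetpp; // needed with OMNeT++ 5

// Samples per-lane mean speed once per simulated minute.
// The namespace is "Veins" in Veins 4.x and "veins" in 5.x.
class LaneStatsManager : public Veins::TraCIScenarioManager {
  public:
    ~LaneStatsManager() {
        cancelAndDelete(sampleTimer);
        for (auto& kv : meanSpeedVectors) delete kv.second;
    }

  protected:
    cMessage* sampleTimer = nullptr;
    std::map<std::string, cOutVector*> meanSpeedVectors; // one output vector per lane

    virtual void initialize(int stage) override {
        Veins::TraCIScenarioManager::initialize(stage);
        if (stage == 1) {
            // First sample at t = 60s; make sure this falls after the TraCI
            // connection is up (see the connectAt/updateInterval parameters).
            sampleTimer = new cMessage("laneSample");
            scheduleAt(simTime() + 60, sampleTimer);
        }
    }

    virtual void handleSelfMsg(cMessage* msg) override {
        if (msg != sampleTimer) {
            Veins::TraCIScenarioManager::handleSelfMsg(msg);
            return;
        }
        for (const std::string& laneId : getCommandInterface()->getLaneIds()) {
            double v = getCommandInterface()->lane(laneId).getMeanSpeed(); // assumed wrapper
            cOutVector*& vec = meanSpeedVectors[laneId];
            if (!vec) vec = new cOutVector(("meanSpeed-" + laneId).c_str());
            vec->record(v);
        }
        scheduleAt(simTime() + 60, sampleTimer); // next sample in one minute
    }
};
Define_Module(LaneStatsManager);

Density can be derived the same way, from a lane's last-step vehicle number divided by its length.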

Veins Framework Tutorial Accident rate

Currently, I am doing some research scenarios with the Veins framework. I modified the Veins example (the one from the tutorial) to use my network file and ran the simulation for 3000 steps.
From the OMNeT++ console, I can see that a lot of accidents are scheduled and performed. May I know how these accidents are being scheduled? At what rate? For example, 2 accidents per minute, or 5 accidents per 15 SUMO steps?
In omnetpp.ini, under the mobility settings, it is possible to specify for which nodes an accident occurs, when it starts, and how long it lasts, for example:
*.node[*0].veinsmobility.accidentCount = 1
*.node[*0].veinsmobility.accidentStart = 10s
*.node[*0].veinsmobility.accidentDuration = 1s
Another thing to be aware of is that accidentStart is relative to the time at which a node enters the simulation. If there aren't many accidents, you can specify each of them manually this way.
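For example, to place a single 30-second accident on one specific vehicle (the node index and times here are arbitrary):

*.node[3].veinsmobility.accidentCount = 1
*.node[3].veinsmobility.accidentStart = 75s
*.node[3].veinsmobility.accidentDuration = 30s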

Algorithm to distribute heartbeats?

I am building a sensor network where a large number of sensors report their status to a central hub. The sensors need to report status at least once every 3 hours, but I want to make sure that the hub does not get inundated with too many reports at any given time. To mitigate this, I let the hub tell each sensor its 'next report time'.
Now I am looking for any standard algorithms for load-balancing these updates, such that the sensors don't exceed the set interval between reports and the hub can calculate each next report time so that its load (of receiving reports) is evenly spread over the day.
Any help will be appreciated.
If you know how many sensors there are, just divide every three-hour chunk into that many time slots and assign one to each sensor (either randomly or deterministically, as you need).
If you don't, you can still divide every three-hour chunk into some large number of time slots and assign them to sensors. In your assignment algorithm, you just have to make sure that all the slots have one assigned sensor before any of them have two, all of them have two before any of them have three, and so on.
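A minimal sketch of that slot scheme (the slot count and the round-robin fill order are illustrative choices):

#include <cstdint>

struct Hub {
    static constexpr int64_t windowSeconds = 3 * 60 * 60; // 3-hour reporting window
    int slotCount;    // e.g. the number of sensors, or any large fixed number
    int nextSlot = 0; // round-robin cursor: fills every slot once before reusing any

    explicit Hub(int slots) : slotCount(slots) {}

    // Called when a sensor checks in; returns the absolute time (in seconds)
    // at which that sensor should send its next report.
    int64_t nextReportTime(int64_t now) {
        int64_t windowStart = (now / windowSeconds + 1) * windowSeconds; // start of next window
        int64_t slotLength = windowSeconds / slotCount;
        int64_t t = windowStart + nextSlot * slotLength;
        nextSlot = (nextSlot + 1) % slotCount; // even load before any slot doubles up
        return t;
    }
};

Note that because a sensor's next slot falls somewhere in the following window, the gap between two reports can approach two window lengths; shrink windowSeconds if you need a hard 3-hour bound.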
Easiest solution: Is there any reason why the hub cannot poll the sensors according to its own schedule?
Otherwise you may want to devise a system where the hub can decide whether or not to accept a report based on its own load. If a sensor has its connection denied, make it wait a random period of time and retry. Over time the sensors should space themselves out more or less optimally.
Ethernet's CSMA/CD uses a similar randomized exponential backoff, and TCP's retransmission timer backs off exponentially as well.
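A sketch of that randomized retry (the base delay, growth factor and cap are arbitrary example values):

#include <algorithm>
#include <cstdint>
#include <random>

// Returns how long a denied sensor should wait before retrying, for
// attempt = 0, 1, 2, ... since its last accepted report. The ceiling
// doubles with each failed attempt (exponential backoff) and the actual
// delay is drawn uniformly below it ("full jitter"), so retries spread out.
int64_t retryDelaySeconds(int attempt, std::mt19937& rng) {
    const int64_t base = 60;   // first retry: up to 1 minute
    const int64_t cap = 3600;  // never wait more than 1 hour
    int64_t ceiling = std::min(cap, base << std::min(attempt, 6));
    std::uniform_int_distribution<int64_t> jitter(1, ceiling);
    return jitter(rng);
}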
I would use a base of 90 minutes with a randomized variation of ±30 minutes, so that the intervals fall randomly between 60 and 120 minutes. Adjust these numbers if you want to get closer to the 3-hour limit, but I would personally stay well under it.
