I used Veins 7.4.1, OMNeT++ 5.4.1, and SUMO 0.30.0. I have tried several times to increase the number of vehicles in Veins, but the simulation result shows a different number. I tried to increase the number of vehicles via *.manager.numVehicles in omnetpp.ini. Could you please guide me? Thanks in advance.
I have read the links below over and over, so I understand that we can decrease the number of vehicles defined in erlangen.rou.xml.
But how can we increase it? The number of vehicles grows with the simulation time: since the simulation time in Veins is 200 s, we would first have to increase the simulation time and then change the number of vehicles. These links are:
About vehicle number in VEINS
How is the number of vehicles determined? In Sumo route file or in omnetpp.ini?
The number of vehicles and other parameters can be tuned using the "flow" tag. For example, the period attribute controls how frequently vehicles are inserted, and the number attribute sets the total number of cars. So if you decrease the period value, you can get a larger number of cars even within a shorter simulation time.
<routes>
    <vType id="vtype0" accel="2.6" decel="4.5" sigma="0.5" length="2.5" minGap="2.5" maxSpeed="14" color="1,1,0"/>
    <route id="route0" edges="-39539626 -5445204#2 -5445204#1 113939244#2 -126606716 23339459 30405358#1 85355912 85355911#0 85355911#1 30405356 5931612 30350450#0 30350450#1 30350450#2 4006702#0 4006702#1 4900043 4900041#1"/>
    <flow id="flow0" type="vtype0" route="route0" begin="0" period="3" number="195"/>
</routes>
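A note on how period, number, and the simulation time interact: with the flow above, SUMO inserts one vehicle every 3 s, so a 200 s run only admits about 67 of the 195 vehicles; inserting all 195 takes roughly 194 × 3 = 582 s. Extending the run is a standard OMNeT++ option in omnetpp.ini:

sim-time-limit = 600s

As far as I know, *.manager.numVehicles from the original question does something different: Veins' TraCIScenarioManager uses it to try to keep that many vehicles driving at any one time, rather than raising the total defined in the route file.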
I'm using Instant Veins 5.2, SUMO 1.11.0, and OMNeT++ 5.7 to simulate V2I communication between vehicles and RSUs. Is there any way to get results for the average waiting time of the vehicle nodes during a planned accident? The problem is that the waiting-time results are the same for every simulation run.
What I am trying to achieve is to verify whether traffic flow on the road map improves after applying a different transmission range in each simulation.
Thank you in advance.
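If every run produces identical numbers, one likely cause is that each run reuses the same RNG seeds, for example by re-running the same run number. A minimal sketch using standard OMNeT++ options in omnetpp.ini (nothing here is specific to Veins):

repeat = 10
seed-set = ${repetition}

Also note that SUMO itself is deterministic unless you start it with a different --seed (or --random) per run, so waiting times that come purely from SUMO's car-following behaviour will not change just because the OMNeT++ seeds do.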
I am currently doing research on multi-hop broadcast technology for the Internet of Vehicles. I want to implement it using only Veins (5.0) and SUMO, but I have encountered a problem:
1. I used Veins' example (TraCIDemo11p.cc) and modified the selection of relay nodes, but I cannot count the packet loss rate and delay. Can the packet loss rate and delay be counted after only modifying the example, or do I need to use INET? It's been two weeks now; I would really appreciate it if you could help me fix this.
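Counting these should be possible with Veins alone, without INET. Here is a minimal sketch assuming a Veins 5.x application derived from DemoBaseApplLayer; the class name MyMultiHopApp and the two counters are illustrative, while onWSM(), recordScalar(), simTime(), and getCreationTime() are stock Veins/OMNeT++ API:

#include "veins/modules/application/ieee80211p/DemoBaseApplLayer.h"

using namespace veins;

class MyMultiHopApp : public DemoBaseApplLayer {
protected:
    long sentPackets = 0; // increment wherever you call sendDown()
    long receivedPackets = 0;

    void onWSM(BaseFrame1609_4* frame) override
    {
        receivedPackets++;
        // end-to-end delay: now minus when the sender created the frame object
        simtime_t delay = simTime() - frame->getCreationTime();
        recordScalar("delay", delay.dbl()); // or emit() a signal to record a vector
    }

    void finish() override
    {
        DemoBaseApplLayer::finish();
        recordScalar("sentPackets", sentPackets);
        recordScalar("receivedPackets", receivedPackets);
    }
};
Define_Module(MyMultiHopApp);

The packet loss rate can then be computed offline from the .sca result files as 1 - (sum of receivedPackets / sum of sentPackets) over all nodes.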
I have set up my environment with OMNeT++, SUMO, and Veins on Ubuntu. I want to reduce packet loss among vehicles in an emergency situation and improve packet delivery time and cost. My project is about choosing a suitable processing position among cluster heads (nodes), roadside units (RSUs), and the cloud, and there are certain tasks I need to implement in my Veins project. I have configured 50 nodes and 4 RSUs, set the data rate to about 6 Mbps, and set the packet size to up to 2 MB.
Therefore, how can I change the behavior of the vehicles (nodes), roadside units (RSUs), and the cloud in order to implement the following parameters?
processing rate of clusters (nodes) = 3 Mbps.
processing rate of RSUs = 7 Mbps.
processing rate of cloud = 10 Mbps.
the range of clusters (nodes) = 60 m.
the range of RSU = 120 m.
the range of cloud = 500 m.
If you could help me set these parameters up, I would appreciate it.
Thank you
If you are talking about the transmission rate, you can set the bit rate in the ini file (check the Veins example). If you mean processing delay, it is usually simulated by scheduling self-messages (check the tictoc example). As for transmission range, Veins uses a free-space propagation model whose parameters are set in the ini file, so you can change them to obtain the required range. Finally, I recommend reading more about Veins and how it deals with the parameters you asked about; there are a lot of answered questions about these topics on Stack Overflow.
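To make that concrete: the bit rate and range-related parameters can be set in omnetpp.ini much as in the Veins example configuration (the parameter names below are from the Veins example omnetpp.ini and may differ slightly between Veins versions; the values just echo the question and are not tuned):

*.**.nic.mac1609_4.bitrate = 6Mbps
*.**.nic.phy80211p.maxTXPower = 10mW
*.**.nic.phy80211p.sensitivity = -89dBm
*.connectionManager.maxInterfDist = 2600m

A processing rate can be modeled by holding each packet for bits/rate seconds with a self-message. A minimal sketch follows; the module, gate, and member names are all illustrative, not Veins API:

#include <omnetpp.h>
using namespace omnetpp;

// Holds each incoming packet for bits / processingRateBps before forwarding it,
// e.g. a 2 MB (16e6 bit) packet at 3 Mbps is held for about 5.3 s.
class ProcessingNode : public cSimpleModule {
    cMessage* processingDone = new cMessage("processingDone");
    cPacket* pendingPacket = nullptr;
    double processingRateBps = 3e6; // cluster-head rate from the question

protected:
    void handleMessage(cMessage* msg) override
    {
        if (msg == processingDone) {
            send(pendingPacket, "out"); // processing finished, forward the result
            pendingPacket = nullptr;
        }
        else if (processingDone->isScheduled()) {
            delete msg; // busy; a real implementation would queue instead of dropping
        }
        else {
            pendingPacket = check_and_cast<cPacket*>(msg);
            simtime_t delay = pendingPacket->getBitLength() / processingRateBps;
            scheduleAt(simTime() + delay, processingDone);
        }
    }

public:
    ~ProcessingNode() override { cancelAndDelete(processingDone); delete pendingPacket; }
};
Define_Module(ProcessingNode);

Using the same pattern with 7e6 and 10e6 for the RSU and cloud modules gives the three processing rates from the question.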
Currently I am doing some research scenarios with the Veins framework. I modified the Veins example (the one from the tutorial), made it use my own network file, and ran the simulation for 3000 steps.
From the OMNeT++ console I can see that a lot of accidents are scheduled and performed. May I know how these accidents are scheduled, and at what rate? For example, 2 accidents per minute, or 5 accidents per 15 SUMO steps?
In omnetpp.ini, under the mobility settings, you can specify for which nodes an accident will occur, when, and for how long, for example:
*.node[*0].veinsmobility.accidentCount = 1
*.node[*0].veinsmobility.accidentStart = 10s
*.node[*0].veinsmobility.accidentDuration = 1s
Another thing to keep in mind is that accidentStart is relative to the time when a node enters the simulation. Also, the index pattern *.node[*0] above matches every node whose index ends in 0, so each of those nodes gets exactly one accident. If there aren't many accidents, you can specify them manually, as sketched below.
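A sketch of what specifying them manually could look like; the node indices and times here are made up for illustration, only the veinsmobility parameter names are from Veins:

*.node[5].veinsmobility.accidentCount = 1
*.node[5].veinsmobility.accidentStart = 30s
*.node[5].veinsmobility.accidentDuration = 20s
*.node[12].veinsmobility.accidentCount = 2
*.node[12].veinsmobility.accidentStart = 60s
*.node[12].veinsmobility.accidentDuration = 10s

Each accidentStart is still measured from the moment that particular node enters the simulation, not from simulation time 0.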
I am building a sensor network where a large number of sensors report their status to a central hub. The sensors need to report their status at least once every 3 hours, but I want to make sure that the hub does not get inundated with too many reports at any given time. To mitigate this, I let the hub tell each sensor its 'next report time'.
Now I am looking for standard algorithms for load-balancing these updates, such that the sensors don't exceed the set interval between reports and the hub can calculate the next report times so that its load (of receiving reports) is evenly divided over the day.
Any help will be appreciated.
If you know how many sensors there are, just divide every three-hour chunk into that many time slots and (either randomly or programmatically, as you need) assign one to each sensor.
If you don't, you can still divide every three-hour chunk into some large number of time slots and assign them to sensors. In your assignment algorithm, you just have to make sure that all the slots have one assigned sensor before any of them have two, all of them have two before any of them have three, and so on.
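A minimal sketch of that assignment; all the numbers and names are illustrative. Sensor i simply gets slot i modulo the slot count, which fills every slot once before any slot gets a second sensor:

#include <cstdio>
#include <vector>

int main()
{
    const int slots = 180;                 // one slot per minute of a 3-hour window
    const int sensors = 500;               // assumed fleet size
    const int windowSeconds = 3 * 60 * 60; // report window length

    // round-robin assignment: every slot gets one sensor before any gets two
    std::vector<std::vector<int>> assignment(slots);
    for (int s = 0; s < sensors; ++s)
        assignment[s % slots].push_back(s);

    // the hub would tell sensor 42 to report this many seconds into each window
    int sensor = 42;
    int offset = (sensor % slots) * (windowSeconds / slots);
    std::printf("sensor %d reports %d s into each 3-hour window\n", sensor, offset);
}

With 500 sensors in 180 slots, no slot holds more than 3 sensors, so the hub never receives more than 3 reports per minute.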
Easiest solution: Is there any reason why the hub cannot poll the sensors according to its own schedule?
Otherwise you may want to devise a system where the hub can decide whether or not to accept a report based on its own load. If a sensor has its connection denied, make it wait a random period of time and retry. Over time the sensors should space themselves out more or less optimally.
IIRC some facet of TCP/IP uses a similar method: this is essentially the randomized (often exponential) backoff used by Ethernet's CSMA/CD and by TCP retransmission timers.
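A minimal sketch of the deny-and-retry idea; tryReport() is a stub standing in for a real report attempt, and all the constants are illustrative:

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <thread>

static std::mt19937 rng{std::random_device{}()};

// stub: pretend the hub accepts roughly one attempt in four
bool tryReport()
{
    return std::uniform_int_distribution<int>(0, 3)(rng) == 0;
}

int main()
{
    int maxWait = 1; // seconds
    while (!tryReport()) {
        int wait = std::uniform_int_distribution<int>(1, maxWait)(rng);
        std::printf("denied, retrying in %d s\n", wait);
        std::this_thread::sleep_for(std::chrono::seconds(wait));
        maxWait = std::min(maxWait * 2, 3600); // grow the window, capped at an hour
    }
    std::printf("report accepted\n");
}

The doubling window is the exponential part; under load, the random draws naturally spread the sensors out.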
I would use a base of 90 minutes with a randomized variation of plus or minus 30 minutes, so that the intervals fall randomly between 60 and 120 minutes. Adjust these numbers if you want to get closer to the 3-hour limit, but I would personally stay well under it.