I am using OMNeT++ 5.5.1, Veins 5.0 and SUMO 1.7.0 to simulate a highway having six lanes in total (3 lanes in each direction).
I am simulating the network with different numbers of vehicles (e.g., 50, 100, 150, 200) that I generate from SUMO.
The simulation runs fine when the number of vehicles is 50 or 100. However, when I set the number of vehicles to 150, I get an error: "Cannot open output scalar file ..." (as attached in the screenshot below).
The error occurs when a vehicle (node[84]) leaves the network.
On debugging, I found that this error originates in the callFinish() method of cModule.cc:
https://github.com/omnetpp/omnetpp/blob/omnetpp-5.x/src/sim/cmodule.cc#L1428
and it throws an exception that is caught here:
https://github.com/omnetpp/omnetpp/blob/omnetpp-5.x/src/sim/cmodule.cc#L1443
Can anyone suggest how to correct it? Many thanks.
Best Regards,
Yasir Saleem
Thanks to @ChristophSommer for pointing me towards memory corruption. That hint helped me solve the issue.
Cause of the problem: For logging statistics, I was creating a CSV file (ofstream), and I realized that the CSV file was being opened by all the vehicles and RSUs (instead of by a single node), while the ofstream was being closed by only one node (the RSU). This was causing the problem.
Solution: I fixed it by making the ofstream variable global and having only one node (the RSU) open and close the ofstream file.
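A minimal sketch of that fix, stripped of all OMNeT++/Veins boilerplate (the function and file names are illustrative, not from the actual code): one shared stream, opened and closed exactly once, while any node may append rows.

```cpp
// Sketch: a single shared CSV stream for all nodes.
// In the real simulation, openStatsCsv()/closeStatsCsv() would be called
// from the RSU's initialize()/finish(), while every vehicle calls logStatsRow().
#include <fstream>
#include <string>

static std::ofstream statsCsv;  // one global stream, shared by all nodes

void openStatsCsv(const std::string& path) {
    if (!statsCsv.is_open())    // open only once (e.g., by the RSU)
        statsCsv.open(path);
}

void logStatsRow(const std::string& row) {
    if (statsCsv.is_open())     // every vehicle/RSU may append rows
        statsCsv << row << '\n';
}

void closeStatsCsv() {
    if (statsCsv.is_open())     // close only once (e.g., in the RSU's finish())
        statsCsv.close();
}
```

The guards make repeated open/close calls harmless, so even if several modules accidentally call them, the stream is only ever opened and closed once.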
I'm somewhat new to Veins and I'm trying to record collision statistics within the sample "RSUExampleScenario" provided in the VM. I found this question, which describes what line to add to the .ini file, and I have added it, but I'm unable to find the "ncollisions" value in the results folder, which makes me think I either ran the wrong .ini line or am looking in the wrong place.
Thanks!
Because collision statistics take time to compute (essentially: trying to decode every transmission twice: once while considering interference by other nodes as usual, then trying again while ignoring all interference), Veins 5.1 requires you to explicitly turn collision statistics on. As discussed in https://stackoverflow.com/a/52103375/4707703, this can be achieved by adding a line *.**.nic.phy80211p.collectCollisionStatistics = true to omnetpp.ini.
After altering the Veins 5.1 example simulation this way and running it again (e.g., by running ./run -u Cmdenv -c Default from the command line), the ncollisions field in the resulting .sca file should now (sometimes) have non-zero values.
You can quickly verify this by running (from the command line)
opp_scavetool export --filter 'module("**.phy80211p") and name("ncollisions")' results/Default-\#0.sca -F CSV-R -o collisions.csv
The resulting collisions.csv should now contain a line with (among other information) param,,,*.**.nic.phy80211p.collectCollisionStatistics,true (indicating that the simulation was executed with the required configuration) as well as many lines with (among other information) scalar,RSUExampleScenario.node[10].nic.phy80211p,ncollisions,,,1 (indicating that node[10] could have received one more message, had it not been for interference caused by other transmissions in the simulation).
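As a quick sanity check of the exported file, the ncollisions scalars can also be summed programmatically. A small sketch (assuming the row layout shown above, with the scalar value in the last comma-separated field; the function name is illustrative):

```cpp
// Sketch: sum all ncollisions scalar values from an opp_scavetool CSV-R
// export, assuming rows shaped like the example above:
//   scalar,RSUExampleScenario.node[10].nic.phy80211p,ncollisions,,,1
#include <istream>
#include <sstream>
#include <string>

long sumNcollisions(std::istream& csv) {
    long total = 0;
    std::string line;
    while (std::getline(csv, line)) {
        if (line.rfind("scalar,", 0) != 0)
            continue;  // only scalar rows carry result values
        if (line.find(",ncollisions,") == std::string::npos)
            continue;  // only the ncollisions scalar
        std::size_t lastComma = line.rfind(',');
        total += std::stol(line.substr(lastComma + 1));  // value is the last field
    }
    return total;
}
```

A non-zero total confirms that collisions were both simulated and recorded.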
I would just like to know whether the value of "ReceivedBroadcasts" for each of the nodes in Veins also includes the packets lost, or whether it just gives the number of successful packet receptions. That is, if I want to calculate the packet loss ratio, would it be TotalLostPackets/ReceivedBroadcasts or TotalLostPackets/(ReceivedBroadcasts + TotalLostPackets)?
Thanks for your help
Your best bet to find out exactly how statistics are logged is to look into the source code of Mac1609_4.
You will find that the ReceivedBroadcasts scalar logs the value of the variable statsReceivedBroadcasts, which is increased via a method called from handleLowerMsg, i.e., only when the MAC layer successfully decoded data.
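Since ReceivedBroadcasts counts only successfully decoded frames, lost packets have to be added back into the denominator. A one-function sketch of the resulting ratio:

```cpp
// Sketch: packet loss ratio when ReceivedBroadcasts excludes lost packets,
// i.e. lost / (received + lost). Returns 0 when no packets were observed.
double packetLossRatio(long receivedBroadcasts, long totalLostPackets) {
    long total = receivedBroadcasts + totalLostPackets;
    return total == 0 ? 0.0 : static_cast<double>(totalLostPackets) / total;
}
```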
I am trying to insert regular vehicles and platoon vehicles at a specific time step in a scenario using SUMO and Plexe. I am using SUMO 1.2.0, Veins 5.0, OMNeT++ 5.5.1 and Plexe-3.0a2. As the Plexe documentation points out, I have to change the traffic manager in my .ini file to SumoTrafficManager in order to insert the vehicles and the platoons from the .rou file that I have created. For testing purposes I used the platooning example provided by Plexe, using the option for SUMO traffic. The problem is that I get the SUMO error
Error: tcpip::Socket::recvAndCheck # recv: peer shutdown
and OMNeT++ exits with code 139. The error occurs only when the first car is inserted.
Note: All the other configurations of the example work perfectly.
Why does this error occur and how can I resolve it?
I got the answer from the SUMO mailing list, so I am posting it here as well. Currently there is a bug when inserting platoons and regular human-driven cars the standard SUMO way (via the .rou file). But there is a way to work around this issue by letting the TrafficManager module handle the insertion of the platoon cars while the regular human-driven cars are inserted the SUMO way. To make it work you must use the classic PlatoonsTrafficManager and add the following lines to the .ini file:
*.manager.moduleType = "vtypeauto=org.car2x.plexe.PlatoonCar vtypehuman=HumanCar"
*.manager.moduleName = "vtypeauto=node vtypehuman=human"
That way you separate the module types, so the simulation can handle them differently. A good example for testing is the Human example that is provided. By modifying the .ini file to pass only the platoon-related variables to the TrafficManager, and then adding some lines to the .rou file (like flows or vehicles) for the human cars to be injected, you get the desired result.
I am simulating a scenario where I want to add and/or delete polygons dynamically. However, when I try to add a polygon, the system gives me the error below:
<!> ASSERT: Condition 'result == RTYPE_OK' does not hold in function 'query' at veins/modules/mobility/traci/TraCIConnection.cc:119 -- in module (TraCIDemo11p) RSUExampleScenario.node[1].appl (id=14), at t=1.1s, event #12
I debugged the code and saw that the TraCIConnection does not return RTYPE_OK. If I remove the assert statement, the code works fine. However, I want to understand the logic behind this.
I also see that the SUMO console gives an error message. The code that I use to add the polygon is:
traci->addPolygon(polyId, polyType, color, filled, layer, points);
SUMO: 0.32, OMNeT++: 5.4.1, Veins: 4.7
Any suggestion is appreciated. I am a starter on GUI related things. Sorry if the question does not make sense. Thanks.
Most likely SUMO refuses to add the polygon you requested. Maybe the ID you chose already exists in the simulation.
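If duplicate IDs are indeed the cause, one simple way to avoid clashes when adding polygons repeatedly is to derive every ID from a process-wide counter. A tiny sketch (the helper name and "poly_" prefix are illustrative, not part of the Veins API):

```cpp
// Sketch: generate polygon IDs that cannot collide with each other.
// The counter lives for the whole process, so every call yields a fresh ID.
#include <string>

std::string nextPolygonId() {
    static int counter = 0;  // shared across all callers
    return "poly_" + std::to_string(counter++);
}
```

The resulting ID would then be passed as polyId to addPolygon.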
To find out why SUMO complains, you can change its source code to include debug output -- or you can run SUMO in a debugger.
To run SUMO in a debugger, the simplest solution is to switch from using TraCIScenarioManagerLaunchd to TraCIScenarioManager (probably by changing veins/nodes/Scenario.ned) and launching SUMO in a debugger manually (e.g. by running lldb sumo -- --remote-port 9999 -c erlangen.sumo.cfg)
Has anybody encountered data errors, trigger errors or upload errors in ChipScope?
I'm using ChipScope (from ISE 14.7) with the IP core flow. So I created 15 different ICON IP cores as ngc files and wrapped them all in a VHDL module. This module chooses by generic which ngc file should be instantiated, so I can easily choose the number of active VIO/ILA cores.
Currently my project has 2 VIO cores and 5 ILA cores, utilizing about 190 BlockRAMs on a Kintex-7 325T (>400 BlockRAMs in total). When a trigger event occurs, I sometimes get the warning Did not find trigger mark in buffer. Data buffer may be corrupted. or Data upload error.
This error is independent of the trigger mode (normal trigger event, immediate trigger, startup trigger). It seems to happen mostly on Unit 4 (91-bit data * 32k depth + 3 trigger ports on each of 4 units). The upload progress bar can stop at any percentage from 1 to 95%, as far as I have noticed.
Additionally I get hundreds of these warnings:
Xst - Edge .../TransLayer_ILA2_ControlBus<14> has no source ports and will not be translated to ABC.
My Google research says: ignore them :)
There is also a bug in XST: this warning has no ID and can't be filtered :(
So far, I have tried the following to fix this problem:
Reduced / increased the JTAG speed -> no effect (programming the device is not affected)
Recompiled the IP core / generated a new ngc file
Reduced the ILA window size
So what can it be?
P.S. All timings are met.
I found the problem and a solution.
The problem: I had changed one ILA CoreGenerator file's name and its contents (modified the internal name with an editor). But I missed one parameter, so CoreGen generated some sources under the old name. That old name was still in use by another ILA core, so one of the cores got overwritten.
Solution:
I opened every ILA xco file and every cgp file and checked all the names.