Veins: onWSM is not getting called for the RSU (OMNeT++) - debugging

I am a newbie working with Veins 5.2 and OMNeT++ 5.6.2.
I am working on a simulation scenario in which an emergency vehicle (an ambulance, for example) is passing by and the RSU has to send a stop message (beacon or WSM) to the normal passenger cars/vehicles. I am creating my application file by referring to the example scenario.
Please find below a snippet of my code.
void MyApplLayer::handleSelfMsg(cMessage* msg)
{
    BaseFrame1609_4* wsm1 = new BaseFrame1609_4();
    sendDown(wsm1);
}

void MyApplLayer::onWSM(BaseFrame1609_4* frame)
{
    // this function is called for all the car nodes in my simulation
}

void TraCIDemoRSU11p::onWSM(BaseFrame1609_4* frame)
{
    // this function is never called for the RSU in TraCIDemoRSU11p.cc
}
handleSelfMsg() and onWSM() are getting called for all the cars in my application layer, but onWSM() is not getting invoked in TraCIDemoRSU11p.cc. As per the requirement, the RSU has to broadcast a message to stop the vehicle; for this purpose I am trying to get the onWSM() method invoked for the RSU (in TraCIDemoRSU11p.cc).
Could it be that I am missing something in the omnetpp.ini file? Could someone please suggest a way forward? Any kind of suggestions/ideas are much appreciated.
I found following post during research:
RSU not receiving WSMs in Veins 4.5
The above post suggests checking whether handleLowerMsg() is getting called or not.
The handleLowerMsg() function (inside DemoBaseApplLayer) does call onWSM(), but only for the car nodes in my scenario.
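For reference, this is roughly how the Veins 5.2 demo applications prepare a frame before sending it down; a minimal sketch, assuming MyApplLayer derives from DemoBaseApplLayer (whether this alone makes the RSU receive the frame also depends on the rest of the setup, e.g. communication range and the omnetpp.ini configuration):

void MyApplLayer::handleSelfMsg(cMessage* msg)
{
    BaseFrame1609_4* wsm1 = new BaseFrame1609_4();
    // DemoBaseApplLayer helper: fills in the broadcast recipient address,
    // channel, priority, bit length, etc. before the frame goes down to the MAC
    populateWSM(wsm1);
    sendDown(wsm1);
}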

Related

OMNeT++/Veins: Why is the WSM message sent/broadcast ONLY for the first scheduled accident?

I am running some simulations using the Veins framework (with the RSU scenario). I defined two accidents for node[*0] as shown in the configuration below. However, node[*0] sends a WSM to neighboring nodes (i.e., vehicles and the RSU) only for the first accident. When it arrives at the second accident, it stops for a while but does not send any message. I further tested this by scheduling more accidents, but the message is still sent/broadcast only for the first accident. My question is: why is node[*0] not sending a WSM when it arrives at each of the other scheduled accidents? I would greatly appreciate it if someone could help me clarify this issue. Thank you.
*.node[*0].veinsmobility.accidentCount = 2
*.node[*0].veinsmobility.accidentStart = 60s
*.node[*0].veinsmobility.accidentDuration = 40s
*.node[*0].veinsmobility.accidentInterval = 70s
The application used in the Veins example simulation is https://github.com/sommer/veins/blob/veins-5.2/src/veins/modules/application/traci/TraCIDemo11p.cc#L96. As you can see, it uses a bool member to track whether it has already sent a message and, if it has, refuses to send another.
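A simplified sketch of that pattern (paraphrased from TraCIDemo11p in Veins 5.2, not a verbatim copy): the flag is set on the first transmission and never cleared, which is why later accidents do not trigger another message. Resetting it once the vehicle drives again would re-arm the application for the next scheduled accident.

void TraCIDemo11p::handlePositionUpdate(cObject* obj)
{
    DemoBaseApplLayer::handlePositionUpdate(obj);

    // vehicle is (almost) standing still, i.e. it has reached an accident site
    if (mobility->getSpeed() < 1) {
        if (simTime() - lastDroveAt >= 10 && sentMessage == false) {
            findHost()->getDisplayString().setTagArg("i", 1, "red");
            sentMessage = true;   // never reset in the demo code
            // ... build the WSM, populateWSM(wsm), sendDown(wsm) ...
        }
    }
    else {
        lastDroveAt = simTime();
        // sentMessage = false;   // resetting here would allow one message per accident
    }
}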

Send and Receive BSM message

I am working on an OMNeT++, Veins, and SUMO project. The goal of the project is to calculate when two vehicles will crash into each other. To do this, they need to communicate their position, direction, and speed.
I could use the manager to get all of the vehicles at once, but the goal is to use messages.
However, it is difficult to find out how exactly these messages work; the documentation is very minimal. So I was hoping someone here could help me out.
void InteractingVehicle::onBSM(veins::DemoSafetyMessage* bsm)
{
    EV_DEBUG_C("test") << "TEST" << std::endl;
}

void InteractingVehicle::handlePositionUpdate(cObject* obj)
{
    veins::DemoSafetyMessage* bsm = new veins::DemoSafetyMessage();
    populateWSM(bsm);
    sendDown(bsm->dup());
}
This doesn't do much, but it should at least show "TEST" in debug mode, which it doesn't.
Am I forgetting something?
Any help would be much appreciated.
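As a starting point, the fields that DemoSafetyMessage carries (and that populateWSM() fills in for this message type in Veins 5.x) can be read back in onBSM; a minimal sketch, assuming InteractingVehicle derives from veins::DemoBaseApplLayer:

void InteractingVehicle::onBSM(veins::DemoSafetyMessage* bsm)
{
    // position and speed of the sender, as stored by populateWSM() on the sending side
    veins::Coord senderPos = bsm->getSenderPos();
    veins::Coord senderSpeed = bsm->getSenderSpeed();
    EV_DEBUG << "BSM from " << senderPos << " moving at " << senderSpeed << std::endl;
    // ... compare against the own curPosition/curSpeed (DemoBaseApplLayer members)
    //     to estimate whether the two trajectories intersect ...
}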

Populating messages with data in veins

I am working on a project where an RSU sends beacons to the cars in its range. When this beacon is received by a car, it should send its id back to the RSU. I made a custom message file with just the vehicle id in it. This is how I am handling the beacons now.
void MyVeinsApp::onBSM(DemoSafetyMessage* bsm)
{
    findHost()->getDisplayString().setTagArg("i", 1, "green");
    if (sentMessage == false) {
        sendDown(bsm);
        //scheduleAt(simTime() + 2 + uniform(0.01, 0.2), wsm->dup());
        sentMessage = true;
    }
}
This does not work for me at all. Is there any way I can send messages from the cars to the RSU?
I am not an expert, but I recently started working on a project similar to yours. Your message includes a parameter, let's say vehicle_id, and upon receiving a beacon you have to send the message to the RSU with that id included. To do so, you first have to fill the message with the vehicle id, like:
bsm->setVehicle_id(findHost()->getIndex());
When you create a new message file with variables inside it and then build it, the generated code also provides get() and set() functions for handling those parameters.
Now, for the RSU to simply read the message variable you have sent, it must call the get() function, like:
RSU_vehicle_id=wsm->getVehicle_id();
So now you have a variable that contains the received vehicle node id.
I highly suggest you spend a couple of days just understanding the principles behind the Veins tutorial and how it handles all these aspects.
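Putting the pieces together, a minimal sketch; the message class VehicleIdMessage (generated from a custom .msg file declaring an int vehicle_id field) and the RSU application class MyRSUApp are hypothetical names for this example:

// Car side: reply to a received beacon with the own vehicle id.
void MyVeinsApp::onBSM(DemoSafetyMessage* bsm)
{
    VehicleIdMessage* reply = new VehicleIdMessage();   // hypothetical generated class
    populateWSM(reply);                                 // fill channel, recipient, etc.
    reply->setVehicle_id(findHost()->getIndex());       // generated setter
    sendDown(reply);
}

// RSU side: read the id back out of the received frame.
void MyRSUApp::onWSM(BaseFrame1609_4* frame)
{
    if (VehicleIdMessage* idMsg = dynamic_cast<VehicleIdMessage*>(frame)) {
        int RSU_vehicle_id = idMsg->getVehicle_id();    // generated getter
        EV_INFO << "received id " << RSU_vehicle_id << std::endl;
    }
}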

initialize method in TraCIDemoRSU11p

I'm using omnetpp-5.4.1, veins-4.7.1, and sumo-0.30.0. I'm going to do fuzzy clustering by the RSU in Veins. I created a new module called FCM in veins/modules/application/traci that inherits from TraCIDemo11p, and I wrote the clustering code in it.
Because I want the RSU to start the clustering, I used the initialize method of TraCIDemoRSU11p to call the method inside FCM at the start of the simulation.
void TraCIDemoRSU11p::initialize(int stage) {
    BaseWaveApplLayer::initialize(stage);
    std::cout << "starting clustering";
    FCM* fcm_clustering;
    fcm_clustering->clustering();
}
When I run the program, it does not get past the start: it says "finish with error" and stops running.
What can I do to have the RSU call the clustering at the beginning of the simulation?
Please help me to solve my problem.
Thanks.
You have defined a pointer fcm_clustering, but you didn't initialize it. Therefore, an attempt to use it ends up with a memory violation.
Try to create an FCM object, for example:
FCM * fcm_clustering = new FCM();
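Applied to the initialize method above, a minimal sketch; it assumes FCM can be used as a plain C++ helper object here (if FCM is itself an OMNeT++ module, it has to be instantiated through its module type/NED definition rather than with new), and guards the call so it runs only once instead of in every initialization stage:

void TraCIDemoRSU11p::initialize(int stage) {
    BaseWaveApplLayer::initialize(stage);
    if (stage == 0) {
        std::cout << "starting clustering";
        FCM* fcm_clustering = new FCM();   // allocate the object before using it
        fcm_clustering->clustering();
        delete fcm_clustering;             // clean up the helper object
    }
}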

Does an OMNeT++ / Veins simulation get very slow if both vehicles and RSUs broadcast messages periodically?

Let me give a brief context first:
I have a scenario where the RSUs will broadcast a fixed message 'RSUmessage' about every TRSU seconds. I have implemented the following code for RSU broadcast (also, these fixed messages have Psid = -100 to differentiate them from others):
void TraCIDemoRSU11p::handleSelfMsg(cMessage* msg) {
    if (WaveShortMessage* wsm = dynamic_cast<WaveShortMessage*>(msg)) {
        if (wsm->getPsid() == -100) {
            sendDown(RSUmessage->dup());
            scheduleAt(simTime() + trsu + uniform(0.02, 0.05), RSUmessage);
        }
    }
    else {
        BaseWaveApplLayer::handleSelfMsg(msg);
    }
}
A car can receive these messages from other cars as well as from RSUs. RSUs discard messages received from cars. The cars will receive multiple such messages, do some comparison work, and periodically broadcast a similar type of message, 'aggregatedMessage', per interval Tcar. aggregatedMessage also has Psid = -100, so that it can easily be differentiated from other messages.
I am scheduling the car events using self messages. (Though it could have been done inside handlePositionUpdate, I believe.) The handleSelfMsg of a car is the following:
void TraCIDemo11p::handleSelfMsg(cMessage* msg) {
    if (WaveShortMessage* wsm = dynamic_cast<WaveShortMessage*>(msg)) {
        wsm->setSerial(wsm->getSerial() + 1);
        if (wsm->getPsid() == -100) {
            sendDown(aggregatedMessage->dup());
            //sendDelayedDown(aggregatedMessage->dup(), simTime()+uniform(0.1,0.5));
            scheduleAt(simTime() + tcar + uniform(0.01, 0.05), aggregatedMessage);
        }
        // send this message on the service channel until the counter is 3 or higher;
        // this code only runs when channel switching is enabled
        else if (wsm->getSerial() >= 3) {
            // stop service advertisements
            stopService();
            delete (wsm);
        }
        else {
            scheduleAt(simTime() + 1, wsm);
        }
    }
    else {
        BaseWaveApplLayer::handleSelfMsg(msg);
    }
}
PROBLEM: With this setting, the simulation is very, very slow. I get about 50 simulation seconds in 5-6 hours or more in Express mode in the OMNeT++ GUI. (Number of RSUs: 64, number of vehicles: 40, on a map of around 1 km x 1 km.)
Also, I am referring to this post. The OP says that he got a speed-up by removing the sending of a message after each RSU reception. In my case I cannot remove that, because I need to send out the broadcast messages after each interval.
Question: I think this slowness occurs because every node tries to sendDown messages at the beginning of each simulated second. Is it the case that OMNeT++ slows down when all vehicles and nodes schedule and send messages at the same time? (It makes sense that it slows down, but by what degree?) There are only around 100 nodes overall in the simulation; surely it cannot be this slow.
What I tried: I used sendDelayedDown(wsm->dup(), simTime()+uniform(0.1,0.5)); to spread the sending of the messages throughout the first half of each simulated second. This seems to stop messages from piling up at the beginning of each simulated second and sped things up a bit, but not by much overall.
Can anybody please let me know whether this is normal behavior or whether I am doing something wrong?
Also I apologize for the long post. I could not explain my problem without giving the context.
It seems you are flooding your network with messages: every message from an RSU gets duplicated and transmitted again by every car which has received it. Hence, the computational time increases quadratically with the number of nodes (senders of messages) in your network, since every sent message has to be handled by every node in range to receive it. The limit of 3 transmissions per message does not seem to help much and, as the comment in the code indicates, it is not used at all if there is no channel switching.
Therefore, if you cannot improve/change your code to simply send fewer messages, you have to live with that. Your little tweak of sending the messages in a delayed manner only distributes them over one second but does not solve the flooding problem.
However, there are still some hints you can follow to improve the performance of your simulation:
Compile in release mode: make MODE=Release
Run your simulation in the terminal environment Cmdenv: ./run -u Cmdenv ...
If you absolutely need to use the graphical environment, you should at least speed up the animation using the slider in the upper part of the interface.
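As one illustration of "sending fewer messages" (a hedged sketch, not part of the original answer; it assumes frames are received through an onWSM handler as in Veins 4.6+, and the constant maxRebroadcasts is made up for this example), the serial counter already carried in the WSM can be used to cap how often a frame is rebroadcast, independently of channel switching:

void TraCIDemo11p::onWSM(WaveShortMessage* wsm)
{
    const int maxRebroadcasts = 3;   // made-up cap for this example
    if (wsm->getPsid() == -100 && wsm->getSerial() < maxRebroadcasts) {
        WaveShortMessage* copy = wsm->dup();
        copy->setSerial(copy->getSerial() + 1);
        sendDown(copy);
    }
    // frames that have already been rebroadcast maxRebroadcasts times are not forwarded again
}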
Removing the simtime-resolution parameter from the omnetpp.ini file solves the problem.
It seems the simulation kernel has an issue when the channel delay does not match the simulation-time resolution.
You can verify the solution by cloning the following repository. Note that you need a working installation of the OMNeT++ framework. Specifically, I tested this fix with OMNeT++ 5.6.2.
https://github.com/Ryuuba/flooding
