The MAC layer in Veins (OMNeT++)

I am using OMNeT++ 5.4.1, Veins 4.7.1, and SUMO 0.25.0 to simulate vehicle frame transmission.
Regarding the behaviour of EDCA in the WAVE MAC layer, my understanding is that the waiting time before sending can be obtained by the following calculation:
waiting time = AIFS[AC] + backoff
AIFS[AC] = SIFS + AIFSN[AC] * slotlength
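For example, with the 802.11p timing values that I believe Veins uses (SIFS = 32 µs, slot length = 13 µs) and the usual AIFSN defaults of 2/3/6/9 for AC_VO/AC_VI/AC_BE/AC_BK (these numbers are my assumption, not taken from my configuration), I would expect roughly:
/* Rough sketch of the AIFS part of the formula above; values are assumed, not read from Veins. */
#include <stdio.h>

int main(void)
{
    const double sifs = 32e-6;           /* SIFS for 802.11p, in seconds */
    const double slot = 13e-6;           /* slot length for 802.11p, in seconds */
    const int aifsn[4] = { 2, 3, 6, 9 }; /* AC_VO, AC_VI, AC_BE, AC_BK (assumed defaults) */

    for (int ac = 0; ac < 4; ++ac) {
        double aifs = sifs + aifsn[ac] * slot;
        /* the full waiting time additionally includes a random backoff of 0..CW slots */
        printf("AC %d: AIFS = %.0f us\n", ac, aifs * 1e6);
    }
    return 0;
}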
However, in the startContent() function of Mac1609_4.cc, the following is written:
if (idleTime > possibleNextEvent) {
    DBG_MAC << "Could have already send if we had it earlier" << std::endl;
    //we could have already sent. round up to next boundary
    simtime_t base = idleSince + DIFS;
    possibleNextEvent = simTime() - simtime_t().setRaw((simTime() - base).raw() % SLOTLENGTH_11P.raw()) + SLOTLENGTH_11P;
}
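As far as I can tell, the raw()/modulo arithmetic in that last line just rounds the current time up to the next slot boundary measured from idleSince + DIFS; in plain C (with times in integer nanoseconds, purely for illustration) it is equivalent to:
/* Round "now" up to the next slot boundary after "base" (all times in integer nanoseconds). */
long long next_boundary(long long now, long long base, long long slot)
{
    return now - (now - base) % slot + slot;
}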
Even in simulation, transmission is performed without waiting for the time calculated above right after a transmission request occurs.
So it seems to me that the original EDCA (CSMA/CA) behaviour is not being followed and the busy state of the channel is not being sensed.
Do I not understand this MAC layer well enough? Please let me know if I have missed something.
Thank you.

Related

Errors when running the LEACH protocol in OMNeT++ (Castalia)

I am running LEACH protocol simulations in Castalia (OMNeT++) with the following simulation parameters:
sim-time-limit = 100s
SN.field_x = 70
SN.field_y = 70
SN.numNodes = 10
SN.deployment = "[1..9]->uniform"
SN.node[*].Communication.RoutingProtocolName = "LeachRouting"
SN.node[*].Communication.Routing.netBufferSize = 1000
SN.node[0].Communication.Routing.isSink = true
SN.node[*].Communication.Routing.slotLength = 0.2
SN.node[*].Communication.Routing.roundLength = 20s
SN.node[*].Communication.Routing.percentage = 0.05
SN.node[*].Communication.Routing.powersConfig = xmldoc("powersConfig.xml")
SN.node[*].ApplicationName = "ThroughputTest"
SN.node[*].Application.packet_rate = 1
SN.node[*].Application.constantDataPayload = 200
After running simulations, I checked the Castalia trace file and found the following errors:
SN.node[1].Communication.Radio Failed packet (WC_SIGNAL_START) from node 6, radio not in RX state
SN.node[1].Communication.Radio Failed packet (WC_SIGNAL_END) from node 6, NO interference
Do these errors occur because of the simulation parameters, or is there another reason?
The messages you see are not errors per se; this could be normal behaviour. They just tell you that when a packet from node 6 arrived at node 1, node 1 did not have its radio in RX mode (listening), so it could not receive the packet.
This is a problem only if you lose most of your info-carrying packets, or you have no way to recover from such losses. You do not provide enough information to tell whether that is the case.
The MAC plays a crucial role here: it is the MAC that puts the radio in RX, TX, or Sleep mode. The MAC is absent from your list of simulation parameters, so if we assume the default, that is BypassMAC, which never puts the radio to Sleep. In that case, the only way for this message to appear is for node 1 to be transmitting at the same time it is receiving the packet from node 6.
These are normal messages, not errors. You can check Radio.cc to see why these messages are generated and adjust your code accordingly.
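If you want a MAC that actually manages the radio state, you could select one explicitly in your omnetpp.ini, for example (assuming Castalia's usual parameter name; check which MAC modules are available in your version):
SN.node[*].Communication.MACProtocolName = "TMAC"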

How to get collisions in Veins 4.7.1

I want to get the number of collisions in Veins. I am using Instant Veins 4.7.1 and I just modified the scenario to get a high density of vehicles without an RSU. The application only sends beacons (BasicSafetyMessages), with the interval and transmission power set as follows:
*.**.nic.mac1609_4.txPower = 50mW
*.node[*].appl.sendBeacons = true
*.node[*].appl.beaconInterval = 0.1s
I modified the following part of Mac1609_4.cc:
else if (msg->getKind() == Decider80211p::BITERROR || msg->getKind() == Decider80211p::COLLISION) {
    statsSNIRLostPackets++;
    DBG_MAC << "A packet was not received due to biterrors" << std::endl;
    if (msg->getKind() == Decider80211p::COLLISION)
        statsCollisions++;
    else if (msg->getKind() == Decider80211p::BITERROR)
        statsBitErrors++;
}
but all the lost packets I get are due to bit errors and none due to collisions. Is there a default configuration in Veins that prevents me from getting collisions?
Veins can collect collision statistics natively. However, this is disabled by default, since it increases simulation time. To enable it, just add the following line to your omnetpp.ini:
*.**.nic.phy80211p.collectCollisionStatistics = true
This enables collision statistics in Decider80211p on all nodes in your scenario, which then record the ncollisions scalar.

How can I measure how long this Linux interrupt handler takes to run?

I am trying to debug a custom Linux serial driver that is having issues with missing receive data. It has one interrupt for 4 serial ports, and the baud rate is 115200. First, I would like to know how to measure how long the interrupt handler takes to run; I have used perf, but the results are only percentages, not seconds. Second, does anyone see any issues with the code below that could be changed to speed things up?
void serial_interrupt(int irq, void *dev_id)
{
    ...
    // Need to loop through each port to see which port caused the interrupt.
    list_for_each(lpNode, &serial_ports)
    {
        struct serial_port_module *ser_dev = list_entry(lpNode, struct serial_port_module, port_list);
        lnIsr = ioread8(ser_dev->membase + ser_dev->chan_num * PORT_OFFSET + SERIAL_ISR);
        if (lnIsr & IPM512_RX_INT)
        {
            while (serialdata_is_data_available(ser_dev)) // equals a ioread8()
            {
                lcIn = ioread8(ser_dev->membase + ser_dev->chan_num * PORT_OFFSET + SERIAL_RBR);
                kfifo_in(&ser_dev->rx_fifo, &lcIn, sizeof(lcIn));
                // Notify if anyone is doing a blocking read.
                wake_up_interruptible(&ser_dev->read_queue);
            }
        }
    }
}
Use the ftrace API to try to track down your latency issues. It's worth the time to get to know it: https://www.kernel.org/doc/Documentation/trace/ftrace.txt
If this is too heavyweight, what about adding some simple instrumentation yourself? getnstimeofday(struct timespec *ts) is relatively lightweight; with a little code you could output to a sysfs debug file the worst-case execution times, some stats on the latency of calls to this function, and the worst-case number of bytes available per interrupt. If that last number gets near your hardware FIFO size, you're in trouble.
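A rough sketch of that kind of instrumentation inside your handler (serial_isr_worst_ns is a made-up name; on recent kernels ktime_get() is a convenient alternative to getnstimeofday()):
#include <linux/ktime.h>

static s64 serial_isr_worst_ns;   /* worst-case handler duration seen so far, in ns */

void serial_interrupt(int irq, void *dev_id)
{
    ktime_t start = ktime_get();
    s64 elapsed_ns;

    /* ... existing handler body ... */

    elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), start));
    if (elapsed_ns > serial_isr_worst_ns)
        serial_isr_worst_ns = elapsed_ns;   /* expose via sysfs/debugfs to read it out */
}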
One optimization would be to read the data in batches into a local buffer while data is available, then push the whole buffer into the kfifo, and only then wake up any readers. For example (reusing the names from your handler):
unsigned char buf[64];
int cnt = 0;

while (serialdata_is_data_available(ser_dev) && cnt < (int)sizeof(buf))
{
    buf[cnt++] = ioread8(ser_dev->membase + ser_dev->chan_num * PORT_OFFSET + SERIAL_RBR);
}
kfifo_in(&ser_dev->rx_fifo, buf, cnt);
wake_up_interruptible(&ser_dev->read_queue);
But the execution time of code this simple is not likely to be the issue. You are probably suffering from missed interrupts or unexpected latency in the interrupt handling.

Display issue in the log module of the OMNeT++ simulator

I use veins-4a2. First, I executed a scenario with only vehicles. Now I have added an RSU to my example. I need every RSU that receives data to display a message in the OMNeT++ module log. As I did for nodes when they receive data, I added the EV line to the onData() function of TraCIDemoRSU11p, like this:
void TraCIDemoRSU11p::onData(WaveShortMessage* wsm) {
    findHost()->getDisplayString().updateWith("r=16,green");
    annotations->scheduleErase(1, annotations->drawLine(wsm->getSenderPos(), mobi->getCurrentPosition(), "blue"));
    EV << " I am an RSU and I have received a data ! \n";
    //if (!sentMessage) sendMessage(wsm->getWsmData());
}
My problem is that "I am an RSU and I have received a data!" is not displayed in the log module.
When an RSU receives data, this is what is displayed in the OMNeT++ log module:
** Event #4802 t=9.004337832007 RSUExampleScenario.node[4].nic.phy80211p (PhyLayer80211p, id=161), on `data' (Mac80211Pkt, id=669)
node[4]::PhyLayer80211p: AirFrame encapsulated, length: 1326
Make sure execution is actually reaching the onData() function.
You can use ASSERT or the exit() function for that.
Print the message with DBG, EV, or cout:
DBG << "Test_DBG: I am an RSU and I have received a data!\n";
EV << "Test_EV: I am an RSU and I have received a data!\n";
std::cout << "Test_cout: I am an RSU and I have received a data!\n";
After adding the print statement, add some code to terminate the simulation:
// terminate the simulation with error code 3
exit(3);
or use ASSERT
ASSERT2(0,"Test: I'm RSU");
If the simulation terminates with this error, you can be sure that onData() is executed; if not, onData() is not being called anywhere in your code.
(Sorry, I don't have the reputation to just add a comment.) Good luck!
I don't know if you are aware of how onData() works.
In the default Veins, onData() is only called when a packet named "data" arrives at a car/node or RSU (through handleLowerMsg()).
In your case that is an RSU, so the following are needed:
The cars/nodes need appl.sendData set to true (see the example after this list)
Calls that send packets named "data"
Communication range between the cars/nodes and the RSU; the default is a diameter of 1 km.
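For example, in the veins-4a2 example scenario something like this in omnetpp.ini should be enough to make the cars send data packets (check the exact parameter name in your version's application .ned file):
*.node[*].appl.sendData = true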
A good test is to create a small grid with randomTrips.py and place the RSU in the center, where all nodes can reach it.
(Too long for a comment, so I made a new answer.) Good luck!

Is measuring js execution time a way to tell how quickly the app is responding to requests?

I have something like a microtime() function at the very start of my node.js / express app.
function microtime (get_as_float) {
    // Returns either a string or a float containing the current time in seconds and microseconds
    //
    // version: 1109.2015
    // discuss at: http://phpjs.org/functions/microtime
    // + original by: Paulo Freitas
    // * example 1: timeStamp = microtime(true);
    // * results 1: timeStamp > 1000000000 && timeStamp < 2000000000
    var now = new Date().getTime() / 1000;
    var s = parseInt(now, 10);
    return (get_as_float) ? now : (Math.round((now - s) * 1000) / 1000) + ' ' + s;
}
The code of the actual app looks something like this:
application.post('/', function(request, response) {
    t1 = microtime(true);
    //code
    //code
    response.send(something);
    console.log("Time elapsed: " + (microtime(true) - t1));
});
Time elapsed: 0.00599980354309082
My question is, does this mean that the time from when a POST request hits the server to when a response is sent out is, give or take, ~0.005 s?
I've measured it client-side but my internet is pretty slow so I think there's some lag that has nothing to do with the application itself. What's a quick and easy way to check how quickly the requests are being processed?
Shameless plug here. I've written an agent that tracks the time usage for every Express request.
http://blog.notifymode.com/blog/2012/07/17/profiling-express-web-framwork-with-notifymode/
In fact, when I first started writing the agent I took the same approach, but I soon realized that it is not accurate. My implementation tracks the time difference between the request and the response by substituting the Express router, which allowed me to add tracker functions. Feel free to give it a try.
