Where can I insert packet content in OMNeT++?

I am using OMNeT++ with the INET framework.
I want to see the transmitted data after it has gone through the channel, with errors, noise and all other channel impairments applied.
I found out that I can define the packet content in the .msg file, but that content is not affected by channel impairments.
My question is: where can I insert the data (packet content) to be transmitted so that I can see the effect of channel impairments on it?

When a packet (for example an IPv4 packet) is transmitted between two nodes over a channel with impairments, only the following outcomes are possible at the destination node:
The packet is received without errors; the link layer delivers it to the upper layers.
The packet is received without errors but with extra delay; the link layer still delivers it to the upper layers.
The packet is received with errors; the link layer discards the whole packet.
So there is no way for an upper layer to receive a corrupted packet. The channel does not modify the packet's content at all; it only marks the packet as having a bit error, and the receiving link layer reacts to that flag.
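As a minimal sketch of that last point (the module, class and gate names here are made up for illustration, not taken from INET): a link-layer module in OMNeT++ can only test the error flag set by the channel's ber/per parameters and drop the frame; it never sees altered payload bytes.

#include <omnetpp.h>
using namespace omnetpp;

// Hypothetical link-layer module, for illustration only.
class ErrorAwareMac : public cSimpleModule
{
  protected:
    virtual void handleMessage(cMessage *msg) override
    {
        cPacket *pkt = check_and_cast<cPacket *>(msg);
        if (pkt->hasBitError()) {
            // The channel (e.g. a cDatarateChannel with ber/per set) only
            // flags the packet as corrupted; the payload itself is never
            // altered, so the only option here is to drop the frame.
            EV_WARN << "Dropping corrupted frame " << pkt->getName() << "\n";
            delete pkt;
            return;
        }
        send(pkt, "upperLayerOut");   // hypothetical gate name
    }
};

Define_Module(ErrorAwareMac);

If you want upper layers to see corrupted payload bytes, you would have to implement that corruption yourself (for example in a custom decider or a dedicated error-insertion module), because the built-in channels never touch the content.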

Related

How to insert customized field chunks (in simuLTE LteHandoverManager) into the empty DATA packet sent at the SCTP layer (in the INET framework)?

In simuLTE, the LteHandoverManager tries to send a self-constructed packet, "X2HandoverControlMsg", to the SCTP layer for further processing. After receiving the packet, however, the SCTP layer discards it and sends a self-defined "DATA" packet to the IP layer.
The problem is: I want to send customized packets constructed in the LteHandoverManager and have them pass through the layers, including SCTP, IP and PPP, all the way down to the destination node. Does anybody know whether it is possible to do that, and how?

CAN BUS - ACK field (singular or multiple response?)

I have several ECAN modules within PIC18 and PIC24 devices (on OpenCan), each with a CAN transceiver attached to the CAN bus network. When one module sends a message and it is received by the other modules (within ECAN), will all ECAN modules do the CRC check and, if it passes, drive the dominant bit, or will just one of many make this response? In other words, does a PIC ECAN send the ACK response even if the message is not assigned to that module?
CAN controllers generate dominant ACK bits if they receive the frame without any errors. ID filtering takes place after that. So yes, the CAN controller generates ACK even for the frames it's not interested in.
If a transmitter detects dominant ACK bit, it concludes that at least one node in the bus has received the frame correctly. However, it's not possible to determine if this receiver was the intended one.
As far as I understand, ACK bit makes it possible for a transmitter to self-check. A transmitter can think "If no one hears my message, then I should be the one having problems." if it samples recessive ACK bits. The reception of the message by the intended node should be checked by higher layer protocols, like CANopen.
The transmitter node transmits a CAN message and monitors the bus for a dominant bit in the ACK slot. A receiver that receives the message correctly overwrites the ACK slot and makes it dominant; if it does not receive the message correctly, it leaves the slot recessive. This means the transmitter cannot tell that one particular node failed to receive the message: it will still detect the dominant bit written by the other nodes and assume that all nodes received the message correctly. Even so, if one node does not receive the data correctly it signals this with an error frame, and the message is retransmitted by the transmitter. A toy sketch of the ACK-slot behaviour is shown below.
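To make the wired-AND behaviour of the ACK slot concrete, here is a toy model (plain C++, not hardware or driver code; the scenario values are invented for illustration): every node that received the frame with a valid CRC drives the slot dominant, regardless of whether its acceptance filter matches the ID.

#include <iostream>
#include <vector>

// Toy model of the CAN ACK slot: the transmitter sends the slot recessive (1);
// any receiver with a correct CRC drives it dominant (0). ID filtering happens
// only after this, so "uninterested" nodes still acknowledge.
bool ackSlotIsDominant(const std::vector<bool>& crcOkPerReceiver)
{
    for (bool crcOk : crcOkPerReceiver)
        if (crcOk)
            return true;   // at least one node pulls the bus dominant
    return false;          // no receiver got the frame: ACK error at the transmitter
}

int main()
{
    // Three receivers; suppose only one of them passes ID filtering, but all
    // three received the frame correctly, so the transmitter still sees ACK.
    std::cout << ackSlotIsDominant({true, true, true}) << "\n";    // prints 1
    // No receiver got a valid frame: the slot stays recessive (ACK error).
    std::cout << ackSlotIsDominant({false, false, false}) << "\n"; // prints 0
}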
Check whether you can successfully transmit CAN messages. The problem you could have is in receiving messages: when you send a message to the PIC, the message is not received and the message-received flag is never set. Check with a scope that a message is actually being sent, and check whether your PIC stores it. Check which mode it is in (I assume mode 0) and whether it is configured to receive all messages, even ones with errors.
Check on the scope whether the PIC sends and receives the ACK response. When a message is then transmitted back to the PIC, check whether it sends an ACK response or receives the message.
CAN is a broadcast network, so a node does not really know how many other nodes share the bus with it.
Consequently, all nodes do the CRC check and ACK, whether or not the message is "assigned" to (i.e., supposed to be received at the application layer by) the listening node.
There is no conflict: if there is a CRC or ACK error, every listening node sends an (active or passive) error frame, and these have the same form from every node.
I recommend this excellent article:
http://www.copperhilltechnologies.com/can-bus-guide-error-flag/

What happens in linux wifi driver after a packet is sent (life of packet)?

I am working on a low-latency application that sends UDP packets from a master to a slave. The master acts as an access point, sending the data directly to the slave. Mostly it works well, but sometimes data arrives late at the slave. In order to narrow down the possible sources of the delay, I want to timestamp the packets when they are sent out on the master device.
To achieve that I need a hook where I can take a timestamp right after a packet is sent out.
According to http://www.xml.com/ldd/chapter/book/ch14.html#t7 there should be an interrupt after a packet is sent out but I can't really find where the tx interrupt is serviced.
This is the driver:
drivers/net/wireless/bcmdhd/dhd_linux.c
I call dhd_start_xmit(..) from another driver to send out my packet. dhd_start_xmit(..) calls dhd_sendpkt(..), and then dhd_bus_txdata(..) (in bcmdhd/dhdpcie.c) is called, where the data is queued. That's basically where I lose track of what happens after the queue is scheduled in dhd_bus_schedule_queue(..).
Question
Does someone know what happens right after a packet is physically sent out in this particular driver, and can maybe point me to the relevant piece of code?
Of course, any other advice on how to tackle the problem is also welcome.
Thanks
For any network hardware and network driver, these steps happen:
1. The driver has a transmit descriptor in a format understood by the hardware.
2. The driver fills the descriptor with the packet currently being transmitted and hands it to the hardware queue for transmission.
3. After successful transmission, the hardware generates an interrupt.
4. This interrupt calls the transmission-completion function in the driver, which frees the memory of the transmitted packet and resets many things, including the descriptor.
Here, at line 1829, you can see the packet being freed:
PKTFREE(dhd->osh, pkt, TRUE);
Thanks
The packet is freed in the function
static void BCMFASTPATH
dhd_prot_txstatus_process(dhd_pub_t *dhd, void * buf, uint16 msglen)
in the file dhd_msgbuf.c
with
PKTFREE(dhd->osh, pkt, TRUE);
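If the goal is just a TX-completion timestamp, one option is to add a small hook in that completion path, for example right before the PKTFREE() call. The sketch below only uses standard kernel APIs (ktime_get(), ktime_to_ns(), trace_printk()); the placement follows the answer above, and the helper name and everything else is an assumption about how you would wire it in, not actual bcmdhd code.

#include <linux/kernel.h>
#include <linux/ktime.h>

/* Sketch only: log a timestamp when a TX completion is processed, e.g. called
 * just before PKTFREE() in dhd_prot_txstatus_process(). trace_printk() is much
 * cheaper than printk() and is usually preferred for latency debugging. */
static inline void log_tx_completion(const void *pkt)
{
    ktime_t now = ktime_get();

    trace_printk("bcmdhd tx complete: pkt=%p t=%lld ns\n",
                 pkt, ktime_to_ns(now));
}

Reading the timestamps back via the ftrace buffer (/sys/kernel/debug/tracing/trace) avoids disturbing the timing you are trying to measure.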

Veins - Unexpected behavior with lost packets in certain vehicles

I'm working with the Veins framework on top of the OMNeT++ simulator, and I'm facing a weird situation where a few nodes lose all received packets.
To put everybody in context: I'm simulating 100 nodes (4 flows of 25 nodes), all under coverage (apparently), each sending 10 packets per second. Depending on the moment the nodes enter the network (i.e., are created by SUMO), some of them (usually just 1, but it can be 2, 3, 4...) enter a mode where all packets are marked as lost (SNIRLostPackets) because they receive a packet while another packet is already being received (according to the Decider, the NIC is already synced to another frame).
That is not supposed to happen in 802.11 unless there are hidden nodes and the senders don't see each other at the moment of sending their respective frames (both see the channel as idle), right?
So this behavior is not expected at all and it destroys the final lost-packet statistics. I tuned the transmission power and the interference range, but nothing changes.
It happens too often to ignore it and I would like to know if anybody has experienced this behavior and how it was solved.
Thank you
OK, apparently the issue arises in a special case where a packet starts being received correctly, but by the end of the reception the node has changed to a TX state.
The packet is then marked as "received while sending", but the node has already registered this frame as the next correct reception, so it keeps discarding all subsequent receptions without end.
It seems to be a bug, and a possible workaround is adding these lines
if (!frame->getWasTransmitting()) {
    curSyncFrame = 0;
}
in the processSignalEnd function (Decider80211p file), inside the "(frame->getWasTransmitting() || phy11p->getRadioState() == Radio::TX)" case.
I'm not quite sure whether this case should be able to happen at all, as a node should not send a packet while receiving.
Hope it helps.

How to send data packets like ACK and ENQ using golang

I am writing an interface to a clinical lab machine, which uses the ASTM protocol for communication (http://makecircuits.com/blog/2010-06-25-astm-protocol.html).
To start with, I am trying to use golang's TCP server to receive data, but I am not able to figure out how to send an ACK back to the lab machine. I am a newbie in golang. Can anyone suggest how I can proceed?
The protocol in the link you supplied is for RS232. Data sent over TCP/IP is a stream (the receiver has to know where the data ends). Normally, when adapting an RS232 protocol to TCP/IP, a header is added to each message containing the length of the message (often two bytes). So if you want to send an ASCII ACK, you send three bytes: a two-byte length and one byte of data. When writing this you must flush the buffer, because such a small packet will not be transmitted until you do.
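For illustration, here is a minimal sketch of that framing (shown in C++ since the byte layout is language-agnostic; the big-endian length order and the helper name are assumptions, so check what your lab machine actually expects). In Go you would build the same three bytes and pass them to net.Conn.Write; explicit flushing only matters if you wrap the connection in a bufio.Writer.

#include <cstdint>
#include <vector>

// Frame a payload with a two-byte (big-endian, assumed) length prefix, as
// described above. frameMessage({0x06}) returns {0x00, 0x01, 0x06}: the three
// bytes to write for a single ASCII ACK (0x06).
std::vector<uint8_t> frameMessage(const std::vector<uint8_t>& payload)
{
    std::vector<uint8_t> out;
    uint16_t len = static_cast<uint16_t>(payload.size());
    out.push_back(static_cast<uint8_t>(len >> 8));    // high byte of length
    out.push_back(static_cast<uint8_t>(len & 0xff));  // low byte of length
    out.insert(out.end(), payload.begin(), payload.end());
    return out;
}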
