Delay before TX in RS485

I want to understand why we need to add a delay before and after sending TX data when using RS485, and how to calculate that delay time.
Thanks for any replies.
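For context, a common rule of thumb: in half-duplex RS485 the transceiver's driver-enable (DE) pin must be asserted before the first start bit and held until the last stop bit has fully left the UART's shift register, otherwise the start or end of the frame gets clipped. The delays are therefore usually expressed in character times, which follow directly from the baud rate and frame format. A minimal sketch of that arithmetic in C, assuming an 8N1 frame (the constants are illustrative, not from any particular transceiver datasheet):

#include <stdio.h>

/* One UART character at 8N1 = 1 start + 8 data + 1 stop = 10 bits. */
#define BITS_PER_CHAR 10UL

/* Microseconds needed to shift one character out at a given baud rate. */
static unsigned long char_time_us(unsigned long baud)
{
    return (BITS_PER_CHAR * 1000000UL + baud - 1) / baud;  /* round up */
}

int main(void)
{
    unsigned long baud = 9600;
    unsigned long t = char_time_us(baud);   /* ~1042 us at 9600 baud */

    /* Typical scheme: assert DE, wait at least the transceiver's
       driver-enable time (a datasheet value, often microseconds),
       transmit, then keep DE asserted until the UART reports its shift
       register empty before releasing the bus. For comparison, Modbus
       RTU requires 3.5 character times of bus silence between frames. */
    printf("1 char time at %lu baud:  %lu us\n", baud, t);
    printf("3.5 char times:           %lu us\n", t * 7 / 2);
    return 0;
}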

Related

How can I set the direction so that only Tx messages appear, and filter out TxRq?

I am writing a CAPL script to mimic a CAN message on the network. While I am getting the intended message, the direction is both Tx & TxRq. How can I filter out TxRq and send only the Tx message?
I tried setting the direction with CANID.dir = 1 (tx), but I am not getting the intended result.
According to your screenshot, which I assume was taken from the Trace window, you are looking at duplicate entries that cause confusion.
The entries marked as Tx are telling you that there is, indeed, a CAN frame with direction "outgoing" from your measurement system (your Vector Node).
The entries marked as TxRq are send requests. You may change the settings for send requests in the Vector Hardware Configuration tool.
TxRq is not a "direction" per se, just a way to indicate a different type of log in the Trace window. Remember that the only directions available are Tx and Rx (all frames either go out of your node or come into your node).
More about send requests: the following is an extract from a Vector knowledge base entry (I don't remember which one):
By default, this box [the one in the Vector Hardware Configuration] is unchecked because most users do not require this feature. This feature displays the TxRq messages with a time stamp in the Trace window of CANoe. These are requests to send messages by CANoe that have not yet been transmitted onto the CAN bus. If they had been transmitted on the CAN bus, they would be Tx messages.
(Screenshot: Vector Network Hardware Configuration)
If you get a similar issue, go to the Simulation Setup, select the channel on which your simulated node sits, open Network Hardware, and untick "Activate TxRq".

Can the application layer send event-based emergency messages in the SCH interval in VANETs?

In Veins, when sending a BSM or WSA, a random initial time is chosen to make sure that the transmission time falls in the CCH. This is implemented in BaseWaveApplLayer::computeAsynchronousSendingTime(). Then, based on beaconInterval/wsaInterval, they are sent periodically.
For event-driven emergency messages, which are not periodic, should the application layer
wait for the control channel, or
send whenever it has a packet, with the MAC layer queuing the packet until CCH time and sending it then?
Which is the best way to implement this in Veins?
I see the second approach as the more apt one. When a WSM is scheduled in the SCH by the application layer, Veins adds it to an AC queue, sends the backlogged messages during the next CCH, and then sends the messages generated during that CCH (FIFO behaviour). This approach increases end-to-end delay for all messages under heavy load. How is the delay actually defined for messages that are generated during the SCH? (A sketch of this queuing behaviour follows below.)
Any insights on best approach in this case?
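For what it's worth, here is a minimal sketch of the second approach in plain C (this is not Veins/OMNeT++ code; the queue, interval values, and function names are invented for illustration). Packets generated during the SCH are enqueued with their creation time and drained FIFO at the start of the next CCH, so their end-to-end delay, measured from creation to actual transmission, includes at least the residual SCH time as queuing delay:

#include <stdio.h>

#define QCAP 32

typedef struct { int id; double created_s; } Msg;

static Msg q[QCAP];
static int head = 0, tail = 0;

static void app_send(int id, double now_s)          /* app layer, any time */
{
    q[tail++ % QCAP] = (Msg){ id, now_s };
}

static void drain_at_cch_start(double cch_start_s)  /* MAC, CCH start */
{
    while (head < tail) {
        Msg m = q[head++ % QCAP];
        /* Delay as usually measured: creation -> actual send time. */
        printf("msg %d queued for %.3f s\n", m.id, cch_start_s - m.created_s);
    }
}

int main(void)
{
    /* IEEE 1609.4 sync interval: 100 ms, alternating 50 ms CCH/SCH. */
    app_send(1, 0.060);          /* generated during the SCH         */
    app_send(2, 0.095);
    drain_at_cch_start(0.100);   /* flushed at the next CCH boundary */
    return 0;
}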

Broadcasting and Fetching Data over UART

I have a computer (used as a server) and several boards with ATmega microcontrollers.
The computer connects to the boards over UART and RS485 (with a USB-to-RS485 converter); I have limitations that prevent me from using Modbus. I want to broadcast a message from the server over the bus and fetch the ID of each board (board IDs are 4 digits).
When the boards receive the broadcast message, they all try to send their own ID, and the server receives garbage IDs. I think this is a collision problem, caused by all the boards trying to send data at the same time.
After searching for this problem, I found a workaround: store a constant in each board that holds a dedicated delay, and when a board receives the broadcast message, it sends its ID after that delay. This works fine and I no longer see collisions, but it has some problems (a sketch of this workaround follows below):
The delay values of two boards might turn out to be the same.
It only works well for a small number of boards.
It adds an extra step when installing a board on the bus.
Does anybody know this problem, and can you help me solve it with a better solution?
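For reference, the workaround described above amounts to something like this on each board (delay_slot_ms is the per-board constant; delay_ms() and uart_send_id() are hypothetical helpers):

#include <stdint.h>

void delay_ms(uint16_t ms);     /* hypothetical: busy-wait or timer      */
void uart_send_id(void);        /* hypothetical: transmit our 4-digit ID */

void on_broadcast_received(uint16_t delay_slot_ms)
{
    delay_ms(delay_slot_ms);    /* per-board constant, e.g. from EEPROM  */
    uart_send_id();             /* collides if two boards share a slot   */
}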
You mention Modbus in your question, although some of your other stated facts deviate from it (like 4-digit device numbers, where Modbus only has 1-255). Also, Modbus does not support responses to broadcast messages. I therefore doubt a bit that you are actually using Modbus.
A scheme you could use (and that is classically used in multiple-access networks) would be:
1. Once a broadcast is received, have each client scan the bus for responses for a time frame based on its station ID. If the client sees activity, have it wait a minimum bus time (the time a module needs to answer your broadcast message based on current bus timing, plus the round trip for the master acknowledging the broadcast answer) plus an additional time based on its module ID, then go back to (1).
2. If a client sees the bus unoccupied for the specified time, it sends back a broadcast answer.
3. Have the master acknowledge the broadcast response from this client with the shortest possible message.
4. If a client that has sent a broadcast response does not receive a proper ack, it goes back to (1).
This is not 100% safe, and absolutely not according to the Modbus specification, but it could work.
* is a transmission, - is a "wait"
**** (Bus master broadcast)
--------- station 100 waits 100ms
------------------ station 200 waits 200ms
**** Station 100 sends broadcast response
------------------ station 200 sees bus active and waits another 200ms
*** master acknowledges broadcast response of 100
------------------ station 200 sees bus active again and waits 200ms from last seen activity
**** Station 200 has seen bus quiet for 200ms and sends broadcast response
*** master acks brc response of 200
This can take quite a bit of time, and the waiting times need to be finely adjusted against the transmission time of broadcast responses and response acks, but it can work, and it actually is implemented that way in a lot of CSMA/CD networks.
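For illustration, the client-side wait-and-retry loop might look like this in C (bus_active(), millis(), send_id_response(), and ack_received() are hypothetical HAL functions, and the timing constants are placeholders to be tuned against real bus timing):

#include <stdint.h>
#include <stdbool.h>

bool     bus_active(void);          /* hypothetical: traffic on the bus?  */
uint32_t millis(void);              /* hypothetical: monotonic ms counter */
void     send_id_response(void);    /* hypothetical: our broadcast answer */
bool     ack_received(uint32_t timeout_ms);

#define MY_STATION_ID   200u        /* unique per board               */
#define MS_PER_ID       1u          /* per-ID backoff scale           */
#define MIN_BUS_TIME_MS 5u          /* answer + master-ack round trip */

void answer_broadcast(void)
{
    for (;;) {
        /* (1) Wait a window scaled by our station ID, restarting the
           countdown whenever the bus is seen occupied. */
        uint32_t deadline = millis() + MY_STATION_ID * MS_PER_ID;
        while (millis() < deadline) {
            if (bus_active())
                deadline = millis() + MIN_BUS_TIME_MS
                                    + MY_STATION_ID * MS_PER_ID;
        }
        /* (2) Bus stayed quiet for the whole window: answer. */
        send_id_response();
        /* (3)/(4) Wait for the master's short ack; without it, go
           back to (1) and try again. */
        if (ack_received(MIN_BUS_TIME_MS))
            return;
    }
}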
It will probably take longer, but here is another way to do it. First, design your protocol so that each command contains (or can contain) an ID, and boards only respond to commands addressed to their ID. Then, on your host, iterate through each of the possible IDs and send a simple command to each of them. If you get a response, you know there is a board with that ID. If you don't get a response within some period of time, you know there is no board there.
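That host-side polling loop could be sketched like this (uart_send_poll() and uart_wait_reply() are hypothetical serial helpers for the USB-to-RS485 port; 4-digit IDs mean up to 10000 probes, so the per-ID timeout dominates the scan time):

#include <stdio.h>
#include <stdbool.h>

void uart_send_poll(int id);                  /* hypothetical: "board <id>, are you there?" */
bool uart_wait_reply(int id, int timeout_ms); /* hypothetical: true if that board answered  */

int main(void)
{
    for (int id = 0; id <= 9999; id++) {      /* all possible 4-digit IDs */
        uart_send_poll(id);
        if (uart_wait_reply(id, 20))          /* short per-ID timeout     */
            printf("board %04d present\n", id);
        /* no reply within the timeout -> no board with this ID */
    }
    return 0;
}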

What happens in a Linux Wi-Fi driver after a packet is sent (life of a packet)?

I am working on a low-latency application, sending UDP packets from a master to a slave. The master acts as an access point, sending the data directly to the slave. Mostly it works well, but sometimes data arrives late at the slave. To narrow down the possible sources of the delay, I want to timestamp the packets in the master device when they are sent out.
To achieve that, I need a hook where I can take a timestamp right after a packet is sent out.
According to http://www.xml.com/ldd/chapter/book/ch14.html#t7 there should be an interrupt after a packet is sent out, but I can't really find where the TX interrupt is serviced.
This is the driver:
drivers/net/wireless/bcmdhd/dhd_linux.c
I call dhd_start_xmit(..) from another driver to send out my packet. dhd_start_xmit(..) calls dhd_sendpkt(..), and then dhd_bus_txdata(..) (in bcmdhd/dhdpcie.c) is called, where the data is queued. That's basically where I lose track of what happens after the queue is scheduled in dhd_bus_schedule_queue(..).
Question
Does someone know what happens right after a packet is physically sent out in this particular driver, and can you maybe point me to the relevant piece of code?
Of course, any other advice on how to tackle the problem is also welcome.
Thanks
For any network hardware and network driver, these steps happen:
1. The driver has a transmit descriptor, in a format understandable by the hardware.
2. The driver fills the descriptor with the packet currently being transmitted and hands it to the hardware queue for transmission.
3. After successful transmission, an interrupt is generated by the hardware.
4. This interrupt calls the transmission-completion function in the driver, which frees the memory of the transmitted packet and resets many things, including the descriptor.
Here, at line 1829, you can see the packet being freed:
PKTFREE(dhd->osh, pkt, TRUE);
Thanks
The packet is freed in the function
static void BCMFASTPATH
dhd_prot_txstatus_process(dhd_pub_t *dhd, void * buf, uint16 msglen)
in the file dhd_msgbuf.c
with
PKTFREE(dhd->osh, pkt, TRUE);
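Since dhd_prot_txstatus_process() runs when the hardware reports the transmission complete, one place to take the TX timestamp you are after would be right there. A hedged sketch of such a patch (this is illustrative, not actual bcmdhd code; ktime_get_ns() and pr_debug() are standard kernel APIs):

/* Hypothetical addition inside dhd_prot_txstatus_process(), just
   before the existing PKTFREE() call: record when the packet was
   reported sent, in monotonic nanoseconds. */
u64 tx_done_ns = ktime_get_ns();
pr_debug("dhd: tx complete for pkt %p at %llu ns\n", pkt, tx_done_ns);
PKTFREE(dhd->osh, pkt, TRUE);   /* existing free of the sent packet */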

How can I calculate the flow rate of a TCP session?

In my project I need to calculate the flow rate of a TCP session. Should I use total_bytes_of_session/total_time_elapsed, or use the TCP window and TCP RTT to calculate it?
Thanks!
In my project I need to calculate the flow rate of a TCP session
I assume you mean the flow rate of a session that has just ended and for which you have acquired data?
should I use total_bytes_of_session/total_time_elapsed
Yes, if you have that data.
or use the TCP window and TCP RTT to calculate it?
You don't have that data and you can't get it, so you can't use any calculation that relies on it.
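A minimal sketch of the first calculation in C (the byte and time values are made-up examples, not measured data):

#include <stdio.h>

int main(void)
{
    double total_bytes = 1500000.0;  /* bytes observed in the session   */
    double elapsed_s   = 2.5;        /* first packet -> last packet     */

    double rate_Bps  = total_bytes / elapsed_s;  /* bytes per second    */
    double rate_Mbps = rate_Bps * 8.0 / 1e6;     /* megabits per second */

    printf("%.0f B/s = %.2f Mbit/s\n", rate_Bps, rate_Mbps);  /* 600000 B/s = 4.80 Mbit/s */
    return 0;
}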
