I'm using the wireless example, and I want to get the simulation time and save it in a parameter so I can calculate the time between sending a packet and its arrival. Does anyone have a solution for this?
There is a statistic for this that is collected automatically in the application models (e.g. BasicUdpApp). It's called endToEndDelay.
The proper way to do this (and what INET already does) is to add a tag to the packet at creation time containing a simtime_t value set to the current simulation time, then read the same tag when the packet arrives and compute the difference. Putting values into the "parameters" of a message would NOT work, because packets can be fragmented/defragmented in the network, so their identity is not kept and the attached parameters are destroyed.
But again, this is already present in INET 4.2.
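For illustration only, here is the same pattern sketched in Go rather than INET's C++ (the Packet struct and its field names are invented for this sketch): stamp the creation time into per-packet metadata on the sending side, and compute the difference on arrival.

package main

import (
    "fmt"
    "time"
)

type Packet struct {
    Payload   []byte
    CreatedAt time.Time // plays the role of the simtime_t tag
}

func main() {
    p := Packet{Payload: []byte("ping"), CreatedAt: time.Now()} // stamp at creation
    // ... the packet would travel through the network here ...
    delay := time.Since(p.CreatedAt) // read the stamp on arrival
    fmt.Println("end-to-end delay:", delay)
}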
In serial communication with devices such as a digital multimeter (e.g. BK Precision 2831E), why do I need to send a query command once but read the output twice? For instance, I sent a query command for the measured voltage and received an echo but no voltage value.
I then sent the query command twice, which returned the echo and the measured voltage. In essence, to read out the measured voltage, I had to send the same query command twice in succession.
I do not understand this behavior. Can anyone kindly help me with the reasoning?
I have attached sample code below:
def readoutmm(portnumber_multimeter):
    import serial  # pyserial; the unused "import time" was removed

    ser2 = serial.Serial(
        port="com" + str(portnumber_multimeter),
        baudrate=9600,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
    )
    ser2.write(b'fetc?\n')     # query command
    voltage = ser2.readline()  # first read returns the echo of the command
    voltage = ser2.readline()  # second read returns the measured voltage
    voltage = float(voltage)
    ser2.close()
    packet = [voltage]
    return packet
This is actually quite common with devices based on RS232/RS485 protocols.
From the manual of the machine you mentioned, I quote:
The character received by the multimeter will be sent back to the controller again. The controller will
not send the next character until the last returned character is received correctly from the meter. If
the controller fails to receive the character sent back from the meter, the possible reasons are listed
as follows:
The serial interface is not connected correctly.
Check if the same baud rate is selected for both the meter and the controller.
When the meter is busy with executing a bus command, it will not accept any character from
the serial interface at the same time. So the character sent by controller will be ignored.
In order to make sure the whole command is sent and received correctly, the character without a return character should be sent again by the controller.
On a lot of devices this is actually a setting which you can turn on and off.
Now, as for your question:
why do I need to send a query command once but read the output twice?
You are supposed to read every character back before sending a new one, to validate that each character was received correctly. But in your code you are actually sending all the characters before reading a single one of them.
In scenarios where you have a reliable connection, your method will work as well, but as a consequence you'll need to read twice: once to consume the echo that confirms the command was received, and a second time to retrieve the actual data.
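To show the character-by-character variant described above, here is a sketch in Go (the port name, settings, and the github.com/tarm/serial package are assumptions; the same logic works with any serial library): each command character is written and its echo read back before the next one is sent, so only the measurement itself remains to be read afterwards.

package main

import (
    "bufio"
    "fmt"
    "log"
    "time"

    "github.com/tarm/serial" // assumed library; any serial package works
)

func main() {
    // Port name and settings are assumptions; adjust them for your setup.
    cfg := &serial.Config{Name: "COM4", Baud: 9600, ReadTimeout: time.Second}
    port, err := serial.OpenPort(cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer port.Close()

    // Send the query one character at a time, reading each echoed character
    // back before sending the next one, as the manual describes.
    for _, c := range []byte("fetc?\n") {
        if _, err := port.Write([]byte{c}); err != nil {
            log.Fatal(err)
        }
        echo := make([]byte, 1)
        if _, err := port.Read(echo); err != nil || echo[0] != c {
            log.Fatalf("echo mismatch for %q: got %q (err: %v)", c, echo, err)
        }
    }

    // With the echo already consumed, a single read returns the measurement.
    line, err := bufio.NewReader(port).ReadString('\n')
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("voltage:", line)
}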
Do keep in mind that read buffers may be limited to a certain size. If you are experiencing unexpected behavior while querying large amounts of data and sending a lot of commands, it might be because these buffers are full.
TL;DR: I want a channel with two extra fields that tell the producer whether it is allowed to send to the channel and, if so, what value the consumer expects. Although I know how to do this with shared memory, I believe that approach goes against Go's ideology of "Do not communicate by sharing memory; instead, share memory by communicating."
Context:
I wish to have a server S that runs (among others) three goroutines:
A listener that just receives UDP packets and forwards them to the demultiplexer.
A demultiplexer that takes network packets and, based on some data, sends each into one of several channels.
A processing task that listens to one specific channel and processes the data received on that channel.
To check whether some devices on the network are still alive, the processing task will periodically send out nonces over the network and then wait for k seconds. During those k seconds, other participants of my protocol that received the nonce will send a reply containing (among other information) the nonce. The demultiplexer will receive the packets from the listener, parse them, and send them to the processing_channel. After the k seconds have elapsed, the processing task processes the messages pushed onto the processing_channel by the demultiplexer.
I want the demultiplexer not to just blindly send any response (of the correct type) it receives onto the processing_channel, but instead to check whether the processing task is currently even expecting any messages and, if so, which nonce value it expects. I made this design decision in order to drop unwanted packets as soon as possible.
My approach:
In other languages, I would have a class with the following fields (in pseudocode):
class ActivatedChannel {
    boolean flag_expecting_nonce;
    int expected_nonce;
    LinkedList chan;
}
The demultiplexer would then, upon receiving a packet of the correct type, simply acquire the lock for the ActivatedChannel object processing_channel, check whether the flag is set and the nonce matches, and if so add the message to the LinkedList chan!
Problem:
This approach makes use of locks and shared memory, which does not align with Go's "Do not communicate by sharing memory; instead, share memory by communicating" mantra. Hence, I would like to know:
... whether my approach is "bad" regarding Go in the sense that it relies on shared memory.
... how to achieve the outlined result in a more Go-like way.
Yes, the approach you describe doesn't align with Go's idiomatic way of doing things, and you have rightly pointed out that with it you are communicating by sharing memory.
To achieve this in an idiomatic way, one approach is for your demultiplexer to "remember" all the processing_channels that are expecting a nonce and the corresponding nonce each one expects. Whenever a processing task is ready to receive a reply, it sends a signal to the demultiplexer saying that it is expecting one.
Since the demultiplexer is at the center of all the communication, it can maintain a mapping between each processing_channel and the nonce it expects. It can also maintain a "registry" of all the processing_channels that are expecting a reply.
In this approach, we are sharing memory by communicating.
For communicating that a processing_channel is expecting a reply, the following struct can be used:
type ChannelState struct {
    ChannelId        string // unique identifier for the processing channel
    IsExpectingNonce bool
    ExpectedNonce    int
}
In this approach, no locks are used.
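A minimal sketch of what this could look like, reusing the ChannelState struct above (Packet and the channel names are invented for this sketch): the registry lives only inside the demultiplexer goroutine, and processing tasks announce what they expect by sending a ChannelState over a control channel.

type Packet struct {
    ChannelId string
    Nonce     int
    Payload   []byte
}

// demultiplexer owns the registry exclusively, so no lock is needed:
// every update arrives as a message on the control channel.
func demultiplexer(packets <-chan Packet, control <-chan ChannelState,
    outputs map[string]chan<- Packet) {
    registry := make(map[string]ChannelState)
    for {
        select {
        case state := <-control:
            registry[state.ChannelId] = state // a processing task announced its expectation
        case p := <-packets:
            state, ok := registry[p.ChannelId]
            if !ok || !state.IsExpectingNonce || state.ExpectedNonce != p.Nonce {
                continue // drop unwanted packets as early as possible
            }
            outputs[p.ChannelId] <- p // forward to the processing task
        }
    }
}

When the k seconds are over, the processing task can send another ChannelState with IsExpectingNonce set to false, closing the window without any shared state.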
In a LIN simulated slave node, what is the difference between output and linUpdateResponse?
From output docs:
To reconfigure response data of LIN frame. In that case RTR selector has to be set to 0. The LIN hardware responds to the next request of the specified frame with the newly configured data.
So I can reconfigure the output, and the next time the (real?) hardware talks, I've successfully overridden it, right?
From linUpdateResponse docs:
Updates the response data of a specific LIN frame.
letting me set the data length (DLC) and data content for a specific frame ID.
How are they different, and are there examples available? I can't quite understand how to use the latter with the example provided.
For LIN slave nodes, there is not really a difference between output and linUpdateResponse.
Both modify the internal state of the (simulated) slave and change the frame that will be sent the next time the master asks for the frame.
As you have posted, when using output you have to set the RTR selector.
But apart from that, there is no difference.
I, personally, think that linUpdateResponse is more convenient to use.
As part of CanTp protocol-related tests, I have been trying to test N_As and N_Ar timeout errors, where N_AsMax = 1000 ms and N_ArMax = 1000 ms.
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
It would be a great help if you could share a possible way to test these timing parameters using CANalyzer or CANoe.
CanTp is a protocol that extends the maximum data length (in bytes) of a CAN data frame beyond the traditional 8 bytes; please refer to ISO 15765-2. Here you can have Single Frames, or Multi-Frames, which are trains of related frames, each one carrying a portion of the overall PDU. A flow control frame is sent, usually by the receiver, to instruct the transmitter on the protocol to be used for frame splitting.
According to docs,
N_Ar [is the] Time for transmission of the CAN frame (any N-PDU) on the receiver side (see ISO 15765-2).
N_As [is the] Time for transmission of the CAN frame (any N-PDU) on the sender side (see ISO 15765-2).
In addition, the following requirements are relevant:
[SWS_CanTp_00075] ⌈If the transmit confirmation is not received after
a maximum time (equal to N_As), the CanTp module shall act as if it
had received an unsuccessful transmission confirmation and any late
confirmation shall be ignored. The CanTp module shall cancel
(internally) the failed transmission. ⌋ ( )
[SWS_CanTp_00311] ⌈In case of N_Ar timeout occurrence (no confirmation
from CAN driver for any of the FC frame sent) the CanTp module shall
abort reception and notify the upper layer of this failure by calling
the indication function PduR_CanTpRxIndication() with the result
E_NOT_OK. ⌋ ( )
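To make the first requirement concrete, here is a generic illustration in Go (not AUTOSAR code; the channel-based confirmation is an assumption for this sketch) of what the N_As rule amounts to: wait up to N_As for the transmit confirmation, otherwise treat the transmission as failed and ignore any late confirmation.

import (
    "errors"
    "time"
)

const nAsMax = 1000 * time.Millisecond // N_AsMax from the question

// waitTxConfirmation waits up to N_As for the driver's transmit confirmation.
func waitTxConfirmation(confirm <-chan struct{}) error {
    select {
    case <-confirm:
        return nil // confirmation arrived in time
    case <-time.After(nAsMax):
        // Act as if an unsuccessful confirmation was received; a late
        // value on the channel is simply never read.
        return errors.New("N_As timeout: cancel the transmission internally")
    }
}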
Coming back to your question:
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
Yes, by means of the osek_tp.dll file that you should find in your local CANoe installation (I'm using CANoe v10.0). Examples of how to use it are well documented in the help document AN-IND-1-012_CAPL_Callback_Interface.pdf, which again should be distributed in your CANoe installation folder.
According to that document,
Basically, the OSEK_TP.DLL implements fault injection functionality
that has to be enabled explicitly in order to prevent unintentional
usage. Once activated, it is possible to setup a specific fault on a
connection that is executed during the next data transfer.
I'd urge you to give it a read, and refer to the linked documentation as well. I hope this points you in the right direction.
Additional info:
Transmitting data over ISO-TP in CANoe using CAPL
I have a computer (used as a server) and several boards with ATmega microcontrollers.
The computer connects to the boards over UART/RS-485 (with a USB-to-RS485 converter). (I have a limitation that prevents me from using Modbus.) I want to broadcast a message from the server over the bus and fetch the ID of each board (board IDs are 4 digits).
When the boards receive the broadcast message, they all try to send their own ID at once, and the server receives garbage IDs. I think this is a collision problem caused by all boards sending data at the same time.
After searching for this problem, I found a workaround: store a constant in each board that defines a unique delay, and when a board receives the broadcast message, it sends its ID after that delay. This way it works fine and I don't see collisions, but it has some problems:
The delay values of two boards might be the same.
It is only a good fit for a small number of boards.
It requires extra setup when installing a board on the bus.
Does anybody know this problem and can help me solve it with a better solution?
You mention Modbus in your question, although some of your other stated facts deviate from it (like 4-digit device numbers, while Modbus only has 1-255). Also, Modbus does not support responses to broadcast messages. I thus doubt a bit that you are actually using Modbus.
A scheme you could use (and that is classically used in multiple-access networks) would be:
1. Once a broadcast is received, have each client scan the bus for responses for a time frame based on its station ID. If the client sees one, have it wait a minimum bus time (the time a module needs to answer the broadcast message at the current bus timing, plus the round trip for the master acknowledging the broadcast answer) plus an additional time based on its module ID, then go back to (1).
2. If a client sees the bus unoccupied for the specified time, it sends back a broadcast answer.
3. Have the master acknowledge the broadcast response from this client with the shortest possible message.
4. If a client that has sent a broadcast response does not receive a proper ack, go back to (1).
This is not 100% secure and absolutely not according to the Modbus specification, but it could work; a sketch of the client-side logic follows the timeline below.
* is a transmission, - is a "wait"
**** (Bus master broadcast)
--------- station 100 waits 100ms
------------------ station 200 waits 200ms
**** Station 100 sends broadcast response
------------------ station 200 sees bus active and waits another 200ms
*** master acknowledges broadcast response of 100
------------------ station 200 sees bus active again and waits 200ms from last seen activity
**** Station 200 has seen bus quiet for 200ms and sends broadcast response
*** master acks brc response of 200
This can take quite a bit of time and needs the waiting times finely adjusted against the transmission time of broadcast responses and response acks, but can work, and actually is implemented that way in a lot of CSMA/CD networks.
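Here is that per-client backoff logic sketched in Go for readability (on an ATmega you would write the equivalent in C). busIdleFor and send are hypothetical placeholders for your RS-485 driver, and slot is an assumed backoff unit.

import "time"

const slot = time.Millisecond // assumed backoff unit per station ID

func answerBroadcast(myID int, busIdleFor func(time.Duration) bool, send func([]byte)) {
    wait := time.Duration(myID) * slot // e.g. station 100 waits 100 slots
    for {
        if busIdleFor(wait) { // bus stayed quiet for our whole window: claim it
            send([]byte{byte(myID >> 8), byte(myID)}) // broadcast answer carrying our ID
            return // waiting for the master's ack (and retrying) is omitted here
        }
        // Someone else transmitted: loop and restart the wait from the
        // last seen bus activity, as in the timeline above.
    }
}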
It will probably take longer, but here is another way to do it, sketched below. First, design your protocol so that each command contains (or can contain) an ID, and boards only respond to commands addressed to their ID. Then, on your host, iterate through each of the possible IDs and send a simple command to each. If you get a response, you know there is a board with that ID. If you don't get a response within some period of time, you know there is no board there.
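A sketch of the host side of this polling approach in Go (the command format and the github.com/tarm/serial package are assumptions; use your own framing and driver):

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/tarm/serial" // assumed library
)

func main() {
    // A short read timeout doubles as the "no board there" detector.
    cfg := &serial.Config{Name: "/dev/ttyUSB0", Baud: 9600, ReadTimeout: 200 * time.Millisecond}
    port, err := serial.OpenPort(cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer port.Close()

    buf := make([]byte, 64)
    for id := 0; id <= 9999; id++ { // 4-digit IDs, as in the question
        fmt.Fprintf(port, "PING %04d\n", id) // hypothetical command format
        if n, _ := port.Read(buf); n > 0 {   // 0 bytes read means timeout
            fmt.Printf("board %04d is present: %q\n", id, buf[:n])
        }
    }
}

With a 200 ms timeout, sweeping all 10000 IDs takes roughly 33 minutes in the worst case, which is the "takes longer" trade-off mentioned above.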