Serial Port Communication Protocol - serial-communication

In serial communication with devices such as a digital multimeter (e.g. a BK Precision 2831E), why do I need to send a query command once but read the output twice? For instance, I sent a query command for the measured voltage and received an echo but no voltage value.
I then sent the query command twice, which returned the echo and the measured voltage. In essence, to read out the measured voltage, I had to send the same query command twice in succession.
I do not understand this behavior. Can anyone kindly help me out with the reasoning?
I have attached a sample code here below:
import serial

def readoutmm(portnumber_multimeter):
    ser2 = serial.Serial(
        port="com" + str(portnumber_multimeter),
        baudrate=9600,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE
    )
    ser2.write(b'fetc?\n')        # query command
    voltage = ser2.readline()     # first read: returns the echo of the command
    voltage = ser2.readline()     # second read: returns the measured voltage
    voltage = float(voltage)
    ser2.close()
    packet = [voltage]
    return packet

This is actually quite common with devices based on RS232/RS485 protocols.
From the manual of the machine you mentioned, I quote:
The character received by the multimeter will be sent back to the controller again. The controller will not send the next character until the last returned character is received correctly from the meter. If the controller fails to receive the character sent back from the meter, the possible reasons are listed as follows:
- The serial interface is not connected correctly.
- Check if the same baud rate is selected for both the meter and the controller.
- When the meter is busy with executing a bus command, it will not accept any character from the serial interface at the same time. So the character sent by controller will be ignored. In order to make sure the whole command is sent and received correctly, the character without a return character should be sent again by the controller.
On a lot of devices this is actually a setting which you can turn on and off.
Now, as for your question:
why do I need to send a query command once but read the output twice?
You are supposed to read every character back before sending a new one, to validate that each character was received correctly. But your code actually sends all the characters before reading a single one of them.
In scenarios where you have a reliable connection, your method will work as well, but as a consequence you'll need to read twice: once to validate that the command was received, and a second time to retrieve the actual data.
Do keep in mind that read buffers might be limited to a certain size. If you are experiencing unexpected behavior while querying large amounts of data and sending a lot of commands, it might be because these buffers are full.
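To illustrate, here is a minimal sketch of the character-by-character approach the manual describes, using pyserial. The resend-on-mismatch handling, port name, and one-second timeout are my assumptions; adapt them to your meter.

import serial

def write_with_echo_check(ser, command):
    # Send one character at a time and verify each echoed byte,
    # as the manual describes.
    for byte in command:
        ser.write(bytes([byte]))
        echo = ser.read(1)                    # wait for the meter's echo
        if echo != bytes([byte]):
            # Echo missing or wrong: resend the character once,
            # per the manual's advice, before giving up.
            ser.write(bytes([byte]))
            if ser.read(1) != bytes([byte]):
                raise IOError("echo mismatch for byte %r" % byte)

ser = serial.Serial("com3", 9600, timeout=1)  # hypothetical port
write_with_echo_check(ser, b'fetc?\n')
voltage = float(ser.readline())               # only the value is left to read
ser.close()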

Related

Question about implementing Raft's Client interaction

I'm actually learning MIT 6.824 (https://www.youtube.com/channel/UC_7WrbZTCODu1o_kfUMq88g) and trying to implement its labs. There's a paragraph in the Raft paper describing client semantics:
Our goal for Raft is to implement linearizable semantics (each operation appears to execute instantaneously, exactly once, at some point between its invocation and its response). However, as described so far Raft can execute a command multiple times: for example, if the leader crashes after committing the log entry but before responding to the client, the client will retry the command with a new leader, causing it to be executed a second time. The solution is for clients to assign unique serial numbers to every command. Then, the state machine tracks the latest serial number processed for each client, along with the associated response. If it receives a command whose serial number has already been executed, it responds immediately without re-executing the request.
Now I have passed MIT lab 3A, but I have responses map[string]string in my kvserver, which maps a client's request id to its response. The problem is that this map keeps growing as clients keep sending requests, which is problematic in a real project. How does Raft handle this in a real project? Also, MIT lab 3 says one client will execute one command at a time, so I could probably optimize by deleting the response to the client's previous request. But how does Raft handle this in a real project where clients' behavior is less constrained?
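The paper's scheme only needs the latest serial number and response per client, which is exactly the optimization hinted at above. Here is a hypothetical Python sketch of that bookkeeping (the lab itself is in Go; all names below are illustrative):

class KVServer:
    def __init__(self):
        self.store = {}          # the replicated key/value state
        self.last_applied = {}   # client_id -> (serial_no, response)

    def apply(self, client_id, serial_no, op):
        last = self.last_applied.get(client_id)
        if last is not None and last[0] == serial_no:
            return last[1]       # duplicate: reply without re-executing
        response = self.execute(op)
        # Overwrite the previous entry: with at most one outstanding
        # request per client, older responses can never be requested
        # again, so the table stays bounded by the number of clients.
        self.last_applied[client_id] = (serial_no, response)
        return response

    def execute(self, op):
        kind, key, value = op
        if kind == "put":
            self.store[key] = value
            return "OK"
        return self.store.get(key, "")

If a client may have several requests in flight at once, a single entry per client is no longer enough; the server then has to track a window of recent serial numbers per client instead of just the latest one.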

Can AT+CMGS be cancelled without ESC?

Is it possible to have AT+CMGS commands cancelled by some control code other than ESC?
I need it because ESC is intercepted by the attached equipment for its own use and never gets to the modem. And, I can't change that.
Unfortunately, CTRL-Z will send even an empty message, or else I could backspace enough to clear the message and do CTRL-Z to abort.
The relevant "AT command set" manual is no help.
According to the TS 127 005 specification, it seems that there's no way to configure the character used to abort SMS sending.
Anyway, I can suggest a workaround based on three different commands:
+CMGW - Write Message To Memory
+CMGD - Delete Message
+CMSS - Send Message From Storage
So basically, instead of using +CMGS, which sends the message in one step:

1. Write the SMS to memory with +CMGW (same syntax as +CMGS). After closing the SMS contents with the CTRL-Z character, its answer is

+CMGW: <index>

where <index> is the message location index in the current memory storage.

2. Actually send it with

AT+CMSS=<index>

3. Delete the SMS with

AT+CMGD=<index>
Since memory slots are limited, you will have to delete it anyway. If you realize during the +CMGW phase that the message you are composing is wrong, store it anyway with CTRL-Z and simply skip the actual sending.
As you can see, the entire procedure is performed without using the ESC character (0x1B), can easily be automated, and doesn't take much more time to execute.
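Here is a hypothetical pyserial sketch of the three-step sequence. The port name, baud rate, phone number, and response parsing are assumptions; a real implementation needs more robust error handling.

import re
import serial

CTRL_Z = b'\x1a'   # message terminator; ESC (0x1B) is never needed

def send_sms_without_esc(port, number, text):
    with serial.Serial(port, 115200, timeout=5) as ser:
        # 1. Store the message instead of sending it directly.
        ser.write(b'AT+CMGW="' + number.encode() + b'"\r')
        ser.read_until(b'> ')                  # wait for the text prompt
        ser.write(text.encode() + CTRL_Z)
        reply = ser.read_until(b'OK\r\n').decode(errors='replace')
        index = re.search(r'\+CMGW: (\d+)', reply).group(1)
        # 2. Send the stored message.
        ser.write(('AT+CMSS=%s\r' % index).encode())
        ser.read_until(b'OK\r\n')
        # 3. Free the memory slot.
        ser.write(('AT+CMGD=%s\r' % index).encode())
        ser.read_until(b'OK\r\n')

send_sms_without_esc('com4', '+15551234567', 'hello')   # hypothetical values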

INET framework getting time

I'm using the wireless example, and I want to get the simulation time and save it so that I can calculate the time between a packet being sent and it arriving. Does anyone have a solution for this?
There is a statistic for this automatically collected in the application models (e.g. BasicUdpApp). It's called endToEndDelay.
The proper way to do this (and what is already done in INET) is to attach a tag containing a simtime_t variable to the packet when it is created, store the current simtime there, then read the same tag when the packet arrives and compute the difference. Putting values into the "parameters" of a message would NOT work, because packets can be fragmented/defragmented in the network, so their identity is not kept and the attached parameters are destroyed.
But again, this is already present in INET 4.2.
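To make the tag idea concrete, here is a tiny language-agnostic sketch in Python. INET itself does this in C++ with a simtime_t tag; the names below are illustrative, not the INET API.

class Packet:
    def __init__(self):
        # Tags travel with the packet payload across fragmentation,
        # unlike message "parameters".
        self.tags = {}

def on_packet_created(packet, now):
    packet.tags["creation_time"] = now            # stamp the send time

def on_packet_arrived(packet, now):
    return now - packet.tags["creation_time"]     # end-to-end delay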

MPI_Ssend: How does it guarantee reuse of the sender buffer?

I understand that MPI_Bsend will save the sender's buffer in a local buffer managed by the MPI library, hence it's safe to reuse the sender's buffer for other purposes.
What I do not understand is how MPI_Ssend guarantees this.
Send in synchronous mode.
A send that uses the synchronous mode can be started whether or not a matching receive was posted. However, the send will complete successfully only if a matching receive is posted, and the receive operation has started to receive the message sent by the synchronous send. Thus, the completion of a synchronous send not only indicates that the send buffer can be reused, but also indicates that the receiver has reached a certain point in its execution, namely that it has started executing the matching receive
As per the above, MPI_Ssend will return (i.e. allow further program execution) once a matching receive has been posted and it has started to receive the message sent by the synchronous send. Consider the following case:
I send a huge int array, say data[1 million], via MPI_Ssend. Another process starts receiving it (but might not have done so completely), which allows MPI_Ssend to return and execute the next program statement. The next statement changes the buffer at the very end: data[1 million] = /* new value */. Then MPI_Ssend finally reaches the end of the buffer and sends this new value, which was not what I wanted.
What am I missing in this picture?
TIA
MPI_Ssend() is both blocking and synchronous. Blocking means the call does not return until the send buffer can safely be reused; by the time the next statement executes, the MPI library is done reading from data, so the scenario above cannot occur.
From the MPI 3.1 standard (chapter 3.4, page 37):
Thus, the completion of a synchronous send not only indicates that the send buffer can be reused, but it also indicates that the receiver has reached a certain point in its execution, namely that it has started executing the matching receive.
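To see the blocking guarantee in action, here is a minimal sketch using mpi4py (my assumption, since the question is language-agnostic); run it with mpiexec -n 2 python ssend_demo.py:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = np.zeros(1_000_000, dtype='i')

if rank == 0:
    data[:] = 42
    # Ssend blocks until the send buffer may be reused, i.e. the MPI
    # library has finished reading from data.
    comm.Ssend([data, MPI.INT], dest=1, tag=0)
    data[-1] = 7    # too late to affect the message already handed over
elif rank == 1:
    comm.Recv([data, MPI.INT], source=0, tag=0)
    assert data[-1] == 42    # the receiver sees the original value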

How to test N_As, N_Ar timeout parameters in the CanTp protocol using a CAPL script or any other possible way?

As part of CanTp protocol related tests, I have been trying to test N_As and N_Ar timeout errors, where N_AsMax = 1000ms and N_ArMax = 1000ms.
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
It would be a great help if you could share a possible way to test these timing parameters using CANalyzer or CANoe.
CanTp is a protocol that extends the maximum data length (in bytes) of a CAN data frame beyond the traditional 8 bytes; please refer to ISO 15765-2. Here you can have Single Frames, or Multi-Frames, which are trains of related frames, each carrying a portion of the overall PDU. A flow control frame is sent, usually by the receiver, to address and instruct the transmitter on how the frames are to be split.
According to the docs:
N_Ar [is the] Time for transmission of the CAN frame (any N-PDU) on the receiver side (see ISO 15765-2).
N_As [is the] Time for transmission of the CAN frame (any N-PDU) on the sender side (see ISO 15765-2).
In addition, the following requirements are relevant:
[SWS_CanTp_00075] ⌈If the transmit confirmation is not received after a maximum time (equal to N_As), the CanTp module shall act as if it had received an unsuccessful transmission confirmation and any late confirmation shall be ignored. The CanTp module shall cancel (internally) the failed transmission.⌋ ( )
[SWS_CanTp_00311] ⌈In case of N_Ar timeout occurrence (no confirmation from CAN driver for any of the FC frame sent) the CanTp module shall abort reception and notify the upper layer of this failure by calling the indication function PduR_CanTpRxIndication() with the result E_NOT_OK.⌋ ( )
Coming back to your question:
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
Yes, by means of the osek_tp.dll file, which you should have in your local CANoe installation (I'm using CANoe v10.0). Examples of how to use it are well documented in the help document AN-IND-1-012_CAPL_Callback_Interface.pdf, which again should be distributed in your CANoe install folder.
According to that document,
Basically, the OSEK_TP.DLL implements fault injection functionality that has to be enabled explicitly in order to prevent unintentional usage. Once activated, it is possible to setup a specific fault on a connection that is executed during the next data transfer.
I'd urge you to give it a read, and to refer to the linked documentation as well. I hope this points you in the right direction.
Additional info:
Transmitting data over ISO-TP in CANoe using CAPL
