On my channel display I have a parameter RESETSEQ(1217), so each time the channel stops and starts the sequence number is set to 1217, causing all sorts of sequence errors.
Review of the documentation indicates that RESETSEQ is for a pending sequence reset, and a value of 0 (zero) indicates that there is no pending sequence reset. So far so good... BUT I have not been able to find anything that indicates how to set this parameter to 0 (zero) so that the channel display shows RESETSEQ(NO), so I am stuck resetting this RCVR channel each day.
When a reset command is issued from a sender channel, it tells the receiver what message number to use as the sequence number. When you reset from the receiver, it doesn't have the same effect of making a change on the sender. It seems your sender was at some point trying to send messages with sequence number 1217, and that request is still considered outstanding. Since then, the sender may have recycled, and that number appears to no longer be valid (explaining why you now want to set it to zero).
Do you have access to the sender channel? If a new command were issued from the sender to reset the channel, I would expect this outstanding request to be overridden. You may have to do a stop on both channel ends to clear up the problem. Refer to this Technote should you encounter an in-doubt situation:
I understand that MPI_Bsend will save the sender's buffer in a local buffer managed by the MPI library, hence it's safe to reuse the sender's buffer for other purposes.
What I do not understand is how MPI_Ssend guarantees this.
Send in synchronous mode.
A send that uses the synchronous mode can be started whether or not a matching receive was posted. However, the send will complete successfully only if a matching receive is posted, and the receive operation has started to receive the message sent by the synchronous send. Thus, the completion of a synchronous send not only indicates that the send buffer can be reused, but also indicates that the receiver has reached a certain point in its execution, namely that it has started executing the matching receive
As per the above, MPI_Ssend will return (i.e. allow further program execution) once a matching receive has been posted and the receiver has started to receive the message sent by the synchronous send. Consider the following case:
I send a huge array of int, say data[1 million], via MPI_Ssend. Another process starts receiving it (but might not have done so completely), which allows MPI_Ssend to return and execute the next program statement. The next statement changes the buffer at the very end: data[1 million] = /* new value */. Then MPI_Ssend finally reaches the end of the buffer and sends this new value, which is not what I wanted.
What am I missing in this picture?
TIA
MPI_Ssend() is both blocking and synchronous.
From the MPI 3.1 standard (chapter 3.4, page 37):
Thus, the completion of a synchronous send not only indicates
that the send buffer can be reused, but it also indicates that the
receiver has reached a certain point in its execution, namely that it
has started executing the matching receive.
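In other words, because MPI_Ssend() is blocking, it does not return until the send buffer is no longer needed by the library; modifying the buffer after the call returns cannot retroactively change the message that was sent, so the scenario described in the question cannot occur. Here is a minimal sketch illustrating that (array size, tag and ranks are arbitrary; run with two processes):

    // Sketch: after MPI_Ssend() returns, changing the buffer does not affect
    // the message already sent; rank 1 always receives the original value.
    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int N = 1000000;
        std::vector<int> data(N, 42);

        if (rank == 0) {
            // Blocks until the matching receive has started AND the send
            // buffer is safe to reuse.
            MPI_Ssend(data.data(), N, MPI_INT, 1, 0, MPI_COMM_WORLD);
            data[N - 1] = -1;   // safe: the sent message still contains 42
        }
        else if (rank == 1) {
            MPI_Recv(data.data(), N, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("last element received: %d\n", data[N - 1]);   // prints 42
        }

        MPI_Finalize();
        return 0;
    }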
I have several ECAN modules within PIC18 and PIC24 devices (on OpenCan) with CAN transceivers attached to the CAN bus network. In the event that one module sends a message and it is received by the other modules (within ECAN), will all ECAN modules do the CRC check and, if it passes, drive the dominant bit, or does just one of many make this response? In other words, does the PIC ECAN send an ACK response even if the message is not assigned to that module?
CAN controllers generate a dominant ACK bit if they receive the frame without any errors. ID filtering takes place after that. So yes, the CAN controller generates an ACK even for frames it's not interested in.
If a transmitter detects a dominant ACK bit, it concludes that at least one node on the bus has received the frame correctly. However, it's not possible to determine whether this receiver was the intended one.
As far as I understand, the ACK bit makes it possible for a transmitter to self-check: if it samples a recessive ACK bit, it can think "If no one hears my message, then I should be the one having problems." Whether the intended node actually received the message should be checked by higher-layer protocols, like CANopen.
The transmitter node transmits a CAN message and monitors the bus for a dominant bit in the ACK slot. A receiver that receives the message correctly will overwrite the ACK bit and make it dominant; a receiver that does not receive the message correctly will not overwrite the ACK slot. From the ACK slot alone, however, the transmitter cannot tell that one particular node missed the message: it will detect the dominant bit written by the other nodes and assume that all nodes have received the message correctly. (If a receiver actively detects an error in the frame, it signals an error frame and the message is retransmitted by the transmitter.)
Check whether you can successfully transmit CAN messages. The problem you could have is in receiving messages: when you send a message to the PIC, the message is not received and the message-received flag is never set. Check with a scope that a message is actually being sent, and check whether your PIC stores it. Check which mode it is in (I assume mode 0) and whether it is configured to receive all messages, even ones with errors.
Check on the scope whether the PIC sends and receives the ACK response. When a message is then transmitted back to the PIC, check whether it sends an ACK response or receives the message!
CAN is a broadcast network, so a node does not really know how many other nodes share the bus with it.
Accordingly, all nodes do the CRC check and ACK, whether or not the message is "assigned" to (i.e. supposed to be received at the application layer by) the listening node.
There is no conflict, since if there is a CRC or ACK error, every listening node sends an (active or passive) error frame, which has the same form from every node.
I recommend this excellent article:
http://www.copperhilltechnologies.com/can-bus-guide-error-flag/
I am implementing a PON in OMNeT++ and I am trying to avoid the runtime error that occurs when transmitting while another transmission is ongoing. The only way to avoid this is by using sendDelayed() (or scheduleAt() + send(), but I'd rather not do it that way).
Even though I have used sendDelayed(), I am still getting this runtime error. My question is: when exactly does the kernel check whether the channel is free when I'm using sendDelayed(msg, startTime, out)? Does it check at simTime() + startTime or at simTime()?
I read the Simulation Manual, but it is not clear about the case I'm asking about.
The busy state of the channel is checked only when you schedule the message (i.e. at simTime(), as you asked). At that point it is checked whether the message is scheduled to be delivered at a time after channel->getTransmissionFinishTime() (i.e. you can query when the currently ongoing transmission will finish, and you must schedule the message for that time or later). But please be aware that this check only catches the most common errors. If you schedule, for example, TWO messages for the same time using sendDelayed(), the kernel will only check that each starts after the currently transmitted message has finished, but will NOT detect that you have scheduled two or more messages for the same time after that point.
Generally, when you transmit over a channel that has a non-zero datarate (i.e. it takes time to transmit the message), you always have to take care of what happens when messages arrive faster than the channel can send them. In this case you should either throw away the message or queue it. If you queue it, you obviously have to put it into a data structure (a queue) and then schedule a self-timer to be executed at the time when the channel gets free (and the message is delivered at the other side). At that point, you should get the next packet from the queue, put it on the channel, and schedule the next self-timer for the time when that message is delivered.
For this reason, using just sendDelayed() is NOT the correct solution, because you are trying to implement a queue implicitly by postponing the message. The problem in this case is: once you have scheduled a message with sendDelayed(), what delay will you use if another packet arrives, and then another, within a short timeframe? As you can see, you are implicitly creating a queue here by postponing the events. You are just using the simulation's main event queue to store the packets, and it is much more convoluted and error-prone.
Long story short: create a queue and schedule a self-event to manage the queue contents properly, or drop the packets if that suits your needs.
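For illustration, here is a minimal sketch of that queue-plus-self-message pattern (the module and gate names are made up, and an OMNeT++ 5.x API with a datarate channel on the "out" gate is assumed):

    // Hypothetical module: queues packets while the output channel is busy
    // and drains the queue when the current transmission finishes.
    #include <omnetpp.h>
    using namespace omnetpp;

    class TxQueueModule : public cSimpleModule
    {
      private:
        cQueue txQueue;                   // packets waiting for the channel
        cMessage *txDoneEvent = nullptr;  // self-message: "transmission finished"

      protected:
        virtual void initialize() override {
            txDoneEvent = new cMessage("txDone");
        }

        virtual void handleMessage(cMessage *msg) override {
            if (msg == txDoneEvent) {
                // The channel has just become free: send the next queued packet.
                if (!txQueue.isEmpty())
                    startTransmission(check_and_cast<cPacket *>(txQueue.pop()));
            }
            else {
                cPacket *pkt = check_and_cast<cPacket *>(msg);
                cChannel *chan = gate("out")->getTransmissionChannel();
                if (chan->isBusy() || txDoneEvent->isScheduled())
                    txQueue.insert(pkt);          // busy: queue instead of sendDelayed()
                else
                    startTransmission(pkt);
            }
        }

        void startTransmission(cPacket *pkt) {
            send(pkt, "out");
            // Wake up exactly when this transmission ends, so the next packet
            // can be put on the channel without overlapping transmissions.
            scheduleAt(gate("out")->getTransmissionChannel()->getTransmissionFinishTime(),
                       txDoneEvent);
        }

      public:
        virtual ~TxQueueModule() { cancelAndDelete(txDoneEvent); }
    };

    Define_Module(TxQueueModule);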
We are using WSAEventSelect to bind a socket to an event. From MSDN:
The FD_CLOSE network event is recorded when a close indication is
received for the virtual circuit corresponding to the socket. In TCP
terms, this means that the FD_CLOSE is recorded when the connection
goes into the TIME WAIT or CLOSE WAIT states. This results from the
remote end performing a shutdown on the send side or a closesocket.
FD_CLOSE being posted after all data is read from a socket. An
application should check for remaining data upon receipt of FD_CLOSE
to avoid any possibility of losing data. For more information, see the
section on Graceful Shutdown, Linger Options, and Socket Closure and
the shutdown function.
It seems the first highlighted sentence means that FD_CLOSE will only be posted after all data has been read from the socket. But the second sentence requires an application to check whether there is still data in the socket when it receives FD_CLOSE.
Isn't that a contradiction? How should I understand it?
Unfortunately there is a lot of speculation and very little official word. My understanding is the following:
FD_CLOSE being posted after all data is read from a socket.
Edit: My original response here appears to be false. I believe this statement to be referring to a specific type of socket closure, but there doesn't seem to be agreement on exactly what. It is expected that this should be true, but experience shows that it will not always be.
An application should check for remaining data upon receipt of FD_CLOSE to avoid any possibility of losing data.
There may be data still available at the point your application code receives the FD_CLOSE event. In fact, reading around indicates that new data may become available at the socket after you have received the FD_CLOSE. You should check for this data in order to avoid losing it. I've seen some people implement recv loops until the recv call fails (indicating the socket is actually closed) or even restart the event loop waiting for more FD_READs. I think in the general case you can simply attempt a recv with a sufficiently large buffer and assume nothing more will arrive.
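For illustration, here is a minimal sketch of that drain-on-FD_CLOSE approach (the function name and buffering are mine; it assumes a non-blocking socket that was registered with WSAEventSelect for FD_READ | FD_CLOSE):

    // Sketch: once FD_CLOSE has been signalled, keep calling recv() until it
    // reports no more data, so nothing already queued on the socket is lost.
    #include <winsock2.h>
    #include <vector>

    void drainOnClose(SOCKET sock, std::vector<char> &received)
    {
        char buf[4096];
        for (;;) {
            int n = recv(sock, buf, sizeof(buf), 0);
            if (n > 0) {
                received.insert(received.end(), buf, buf + n);  // leftover data
            }
            else if (n == 0) {
                break;  // graceful close: nothing more will arrive
            }
            else if (WSAGetLastError() == WSAEWOULDBLOCK) {
                // Nothing queued right now. You may loop back to the event wait
                // for further FD_READ notifications, or stop here if you accept
                // the small risk discussed above.
                break;
            }
            else {
                break;  // hard error (e.g. WSAECONNRESET): give up
            }
        }
        closesocket(sock);
    }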
I have a question about the high water mark (HWM) for a ZeroMQ PUB/SUB connection. Effectively I want to set the HWM value to zero, i.e. if a message can't be delivered, just drop it.
Unfortunately it seems that the HWM value can only be set as low as 1.
"0" means infinity according to the API docs and my testing.
In my opinion, using "0" to mean "infinite" in the API was a mistake :/ and it's probably unlikely to change.
Is there a workaround that doesn't require recompilation of ZeroMQ?
The problem I'm encountering is that with a non-zero HWM, when a connection fails, at least one message sits in the queue and is sent when the connection is re-established. By that time the message is no longer valid and shouldn't be trusted.
I've thought about discarding messages on the receiving side by including a time stamp generated at the sending side. Unfortunately the system clocks are not connected to the internet and drift significantly. Syncing the clocks with an additional REQ/REP socket would introduce other complicated startup states on the sender and seems like an unnecessary workaround.
Yes: one may try to hunt the same deer from the opposite forest.
Based on the need declared in the initial problem motivation, the solution may be derived from setting not the ZMQ_???HWM, but another parameter:
ZMQ_CONFLATE: Keep only last message
Default value: 0 (false)
Applicable socket types: ZMQ_PULL, ZMQ_PUSH, ZMQ_SUB, ZMQ_PUB, ZMQ_DEALER
If set, a socket shall keep only one message in its inbound/outbound queue, this message being the last message received / the last message to be sent. Ignores the ZMQ_RCVHWM and ZMQ_SNDHWM options. Does not support multi-part messages; in particular, only one part of it is kept in the socket internal queue.
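On the receiving side that could look like the following sketch (the endpoint is made up; the option is set before connecting so it takes effect for that connection):

    // Sketch: a SUB socket that keeps only the newest message instead of
    // queueing stale ones (ZeroMQ C API).
    #include <zmq.h>

    int main()
    {
        void *ctx = zmq_ctx_new();
        void *sub = zmq_socket(ctx, ZMQ_SUB);

        int on = 1;
        zmq_setsockopt(sub, ZMQ_CONFLATE, &on, sizeof(on));  // keep only the last message
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);           // subscribe to everything

        zmq_connect(sub, "tcp://127.0.0.1:5556");

        char buf[256];
        zmq_recv(sub, buf, sizeof(buf), 0);   // always the most recent message

        zmq_close(sub);
        zmq_ctx_term(ctx);
        return 0;
    }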
and some help may come from a "preventive-measure" hidden in:
ZMQ_IMMEDIATE: Queue messages only to completed connections
Applicable socket types: all, only for connection-oriented transports.
By default queues will fill on outgoing connections even if the connection has not completed. This can lead to "lost" messages on sockets with round-robin routing (REQ, PUSH, DEALER).
If this option is set to 1, messages shall be queued only to completed connections. This will cause the socket to block if there are no other connections, but will prevent queues from filling on pipes awaiting connection.
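On the sending side, a corresponding sketch (endpoint again made up) combines both options:

    // Sketch: a PUB socket that does not queue to incomplete connections and
    // keeps only the newest outbound message per connection.
    #include <zmq.h>

    int main()
    {
        void *ctx = zmq_ctx_new();
        void *pub = zmq_socket(ctx, ZMQ_PUB);

        int on = 1;
        zmq_setsockopt(pub, ZMQ_IMMEDIATE, &on, sizeof(on));  // queue only to completed connections
        zmq_setsockopt(pub, ZMQ_CONFLATE,  &on, sizeof(on));  // keep only the last outbound message

        zmq_bind(pub, "tcp://*:5556");

        const char payload[] = "latest-state";
        zmq_send(pub, payload, sizeof(payload) - 1, 0);   // single-part only with CONFLATE

        zmq_close(pub);
        zmq_ctx_term(ctx);
        return 0;
    }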