Broadcasting and fetching data over UART - collisions

I have a computer (used as a server) and several boards with ATmega microcontrollers.
The computer connects to the boards over UART on an RS485 bus (through a USB-to-RS485 converter); a constraint on my side means I cannot use Modbus. I want to broadcast a message from the server over the bus and fetch the ID of each board (board IDs are 4 digits).
When the boards receive the broadcast message they all try to send their own ID, and the server receives garbage IDs. I think this is a collision problem caused by all boards transmitting at the same time.
After researching this problem I found a workaround: store a constant in each board that holds a board-specific delay, and when a board receives the broadcast message it waits for that delay before sending its ID. This works and I no longer see collisions, but it has some problems:
Two boards might end up with the same delay value.
It only works well for a small number of boards.
It adds an extra configuration step when installing a board on the bus.
Does anybody know this problem and can help me solve it with a better solution?

You are mentioning Modbus in your question, although some of your other stated facts deviate from it (like 4-digit device numbers, while Modbus addresses only go up to 247). Also, Modbus does not support responses to broadcast messages. I thus doubt a bit that you are actually using Modbus.
A scheme you could use (and that is classically used in multiple-access networks) would be:
1. Once a broadcast is received, have each client scan the bus for responses for a time frame based on its station ID. If the client sees a response, have it wait a minimum bus time (the time a module needs to answer your broadcast message based on current bus timing, plus the round trip for the master acknowledging the broadcast answer) plus an additional time based on its module ID, then go back to (1).
2. If a client sees the bus unoccupied for the specified time, it sends back a broadcast answer.
3. Have the master acknowledge the broadcast response from this client with the shortest possible message.
4. If a client that has sent a broadcast response does not receive a proper ack, it goes back to (1).
This is not 100% secure and absolutely not according to the Modbus specification, but could work.
* is a transmission, - is a "wait"
**** (Bus master broadcast)
--------- station 100 waits 100ms
------------------ station 200 waits 200ms
**** Station 100 sends broadcast response
------------------ station 200 sees bus active and waits another 200ms
*** master acknowledges broadcast response of 100
------------------ station 200 sees bus active again and waits 200ms from last seen activity
**** Station 200 has seen bus quiet for 200ms and sends broadcast response
*** master acks brc response of 200
This can take quite a bit of time and needs the waiting times finely adjusted against the transmission time of broadcast responses and response acks, but can work, and actually is implemented that way in a lot of CSMA/CD networks.
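Purely as an illustration of this scheme, here is a minimal C sketch of the client-side logic under stated assumptions: millis(), bus_is_active(), send_id_frame() and ack_received() are hypothetical helpers standing in for whatever your firmware provides, and the timing constants are arbitrary.

/* Hedged sketch of the broadcast-response back-off scheme described above.
 * Assumed (hypothetical) helpers:
 *   millis()          - free-running millisecond counter
 *   bus_is_active()   - true while RX activity is seen on the RS485 bus
 *   send_id_frame(id) - transmits this board's 4-digit ID
 *   ack_received(id)  - true once the master has acked our response
 */
#include <stdint.h>
#include <stdbool.h>

#define SLOT_MS    10u   /* per-station spacing, tune to bus timing          */
#define MIN_BUS_MS 20u   /* broadcast answer + master-ack round trip, est.   */

uint32_t millis(void);
bool     bus_is_active(void);
void     send_id_frame(uint16_t id);
bool     ack_received(uint16_t id);

void answer_broadcast(uint16_t my_id)
{
    for (;;) {
        uint32_t quiet_needed = MIN_BUS_MS + ((uint32_t)my_id % 100u) * SLOT_MS;
        uint32_t quiet_since  = millis();

        /* Step 1: watch the bus; restart the wait whenever someone else talks. */
        while (millis() - quiet_since < quiet_needed) {
            if (bus_is_active())
                quiet_since = millis();   /* back off again from the last activity */
        }

        /* Step 2: bus was idle long enough - send our broadcast response. */
        send_id_frame(my_id);

        /* Steps 3-4: wait briefly for the master's ack, otherwise retry. */
        uint32_t sent_at = millis();
        while (millis() - sent_at < MIN_BUS_MS) {
            if (ack_received(my_id))
                return;                   /* master acked us, we are done */
        }
        /* No ack seen - fall through and go back to step 1. */
    }
}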

It will probably take longer, but here is another way to do it. First, design your protocol so that each command contains (or can contain) an ID, and boards only respond to commands addressed to their ID. Then, on your host, iterate through each of the possible IDs and send a simple command to each of them. If you get a response, you know there is a board with that ID. If you don't get a response after some period of time, you know there is no board there.
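For illustration only, here is a minimal C sketch of that host-side polling loop; rs485_send() and rs485_recv_timeout() are hypothetical wrappers around the serial port behind the USB-to-RS485 converter, and the query frame format is a placeholder.

/* Hedged sketch: enumerate boards by polling every possible 4-digit ID.
 * rs485_send() and rs485_recv_timeout() are hypothetical serial-port wrappers. */
#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

bool rs485_send(const char *frame, size_t len);
int  rs485_recv_timeout(char *buf, size_t len, unsigned timeout_ms);

int main(void)
{
    char frame[16], reply[16];

    for (int id = 0; id <= 9999; id++) {
        int n = snprintf(frame, sizeof frame, "?%04d\n", id);  /* placeholder query format */
        rs485_send(frame, (size_t)n);

        /* If a board with this ID exists it answers; otherwise we time out. */
        if (rs485_recv_timeout(reply, sizeof reply, 50) > 0)
            printf("found board %04d\n", id);
    }
    return 0;
}

With a 50 ms timeout per ID, a full scan of 10000 IDs takes several minutes, so in practice you would narrow the ID range or combine this with the back-off scheme above.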

How to solve the problem of a high-data-rate flow hitting switches that do not have a flow rule for that flow, in OpenFlow/Mininet?

I am using iperf traffic generation and a hard timeout, added as an extension to the simple_switch_13.py code, in Mininet with the RYU SDN controller. I am using a linear topology with 8 switches and set the hard timeout to 5 seconds.
I am working with only one flow. I started iperf traffic between two hosts (let's say h1 to h7; the terms are the same as those used in the Mininet linear topology) for 10 seconds. When the flow starts, ARP packets are generated in the network. Then an ARP reply from h7 is sent to h1, which creates seven packet-in messages (from s7, s6, ..., s1); the respective flow rules are installed in the switches and the reply finally reaches h1. Then h1 sends the TCP flow to h7, which also creates seven packet-in messages (from s1, s2, ..., s7); the respective flow rules are installed and the flow reaches h7. So far everything works fine.
But once the timeout (5 seconds) expires, the flow rules in the switches are deleted. Because the flow is still in the network, what should happen is that the controller receives one packet-in message and the switch buffers the rest of the packets, so that once the respective flow rule is installed in the switch's flow table the buffered packets use it to pass through. But that is not happening: the controller is getting a lot of packet-in messages before the flow rule gets installed (every packet that reaches the switch is sent to the controller). What might be the reason for so many packet-in messages? Are the switch buffers not working correctly (although I am getting packet-in messages with a buffer_id)? How can I solve this issue?
This also happens with an idle timeout, and with a UDP flow at the start (i.e. when h1 begins communicating with h7): the switches along the path generate a lot of packet-in messages.
After doing a lot of research I understand that it is not a problem with the hard timeout or the idle timeout. It happens when a high-data-rate flow hits a switch that does not have a flow rule for it: the switch then sends a lot of packet-in messages for the same flow instead of queuing (or storing) the rest of the flow's packets after sending one packet-in for that flow. How can I solve this issue in Mininet?

CAN BUS - ACK field (single or multiple responders?)

I have several ECAN modules (within PIC18 and PIC24 devices, using CANopen) with CAN transceivers attached to the CAN bus network. When one module sends a message and it is received by the other modules, will all of the ECANs run the CRC check and, if it passes, drive the dominant ACK bit, or does just one of the many make this response? In other words, does a PIC ECAN generate the ACK response even when the message is not addressed to that module?
CAN controllers generate dominant ACK bits if they receive the frame without any errors. ID filtering takes place after that. So yes, the CAN controller generates ACK even for the frames it's not interested in.
If a transmitter detects dominant ACK bit, it concludes that at least one node in the bus has received the frame correctly. However, it's not possible to determine if this receiver was the intended one.
As far as I understand, the ACK bit makes it possible for a transmitter to self-check. If a transmitter samples a recessive ACK bit, it can conclude: "if no one hears my message, then I should be the one having problems." The reception of the message by the intended node should be checked by higher-layer protocols, like CANopen.
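Purely as an illustration of such a higher-layer check (a simplified sketch, not CANopen itself), the transmitter can require an explicit application-level reply frame; can_send(), can_receive_timeout() and the two IDs below are hypothetical placeholders.

/* Hedged sketch: application-level confirmation on top of the CAN ACK bit.
 * can_send() / can_receive_timeout() are hypothetical driver wrappers;
 * REQUEST_ID and REPLY_ID are placeholder identifiers. */
#include <stdint.h>
#include <stdbool.h>

#define REQUEST_ID 0x120u
#define REPLY_ID   0x121u

bool can_send(uint16_t id, const uint8_t *data, uint8_t len);
bool can_receive_timeout(uint16_t id, uint8_t *data, uint8_t *len, unsigned timeout_ms);

/* Returns true only if the *intended* node answered, which the raw CAN ACK
 * bit alone cannot tell us. */
bool send_and_confirm(const uint8_t *payload, uint8_t len)
{
    uint8_t reply[8];
    uint8_t reply_len;

    if (!can_send(REQUEST_ID, payload, len))
        return false;   /* transmission failed (e.g. no dominant ACK seen at all) */

    /* The dominant ACK only proves that "somebody" got the frame; wait for the
     * addressed node's explicit reply to confirm it was the right somebody. */
    return can_receive_timeout(REPLY_ID, reply, &reply_len, 100);
}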
The transmitting node sends the CAN message and monitors the bus for a dominant bit in the ACK slot. A receiver that receives the message correctly overwrites the ACK slot with a dominant bit; if it does not receive the message correctly, it does not overwrite the ACK slot. Note that the transmitter cannot tell that one particular node missed the message: as long as any other node writes a dominant bit, the transmitter will see it and assume that all nodes have received the message correctly. If a node does not receive the data correctly, it signals an error and the message is retransmitted by the transmitter.
Check whether you can successfully transmit CAN messages. The problem you could have is on the receiving side: when you send a message to the PIC it may not be received, and the message-received flag is never set. Check with a scope that a message is actually being sent, then check whether your PIC stores it. Check which mode the module is in (I assume mode 0) and whether it is configured to receive all messages, even ones with errors.
Check on the scope whether the PIC sends and receives the ACK response. When a message is then transmitted back to the PIC, check whether it sends an ACK response or receives the message.
CAN is a broadcast network so a node does not really know how many other nodes share the bus with it.
In that manner, all nodes perform the CRC check and the ACK regardless of whether the message is "assigned" to (i.e. supposed to be received at the application layer by) the listening node or not.
There is no conflict: if there is an error with the CRC or the ACK, all listening nodes send an (active or passive) error frame, which has the same form from every node.
I recommend referring to this excellent article:
http://www.copperhilltechnologies.com/can-bus-guide-error-flag/

Are SNMP requests sequential - is there a chance they can arrive in multiples?

I am writing an SNMP agent and plan to have it process SNMP requests one by one, meaning that once a request arrives at port 161 it will not accept any further requests until the response (or a timeout) completes.
I am not sure how most SNMP clients behave, but are SNMP requests synchronous and sequential, or is there any way they can arrive in bulk at the same time?
I think SNMP queries can easily come in bursts due to multiple independent managers polling your agent and/or a single anxious manager retrying the same command if your agent is not quick enough to respond.
When it comes to writing SNMP agents, the other consideration would be to estimate the maximum possible time the agent needs to gather the data required to respond. I believe it should not be the average over OIDs, but the maximum. In other words, if your agent serves 100 OIDs and querying one "slow" OID makes the entire (synchronous) agent block and stop serving the others, that situation might undermine the credibility of your agent on the network...
On top of that, if you happen to hit the same slow OID multiple times in a row (e.g. manager retries), the delays might accumulate, effectively blocking out other queries.
To summarize: I think high-performance SNMP agent should have the following traits:
Support massively concurrent SNMP commands processing
Have non-blocking data source access for gathering managed objects data
Have some form of caching or rate limiting to protect computationally expensive data sources from cocky SNMP managers
On the other hand, if your SNMP agent is serving a small piece of static data on a low-power hardware and you do not expect too many managers ever talking to you, perhaps you could get away with a simplistic synchronous SNMP agent...
BTW, the BSD sockets interface holds a queue of unprocessed UDP packets, so your agent would have a chance to catch up.
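To illustrate just that last point, here is a minimal sketch (plain BSD sockets, not a real SNMP implementation) of a one-by-one UDP loop; handle_request() is a hypothetical placeholder, and while it is busy, further datagrams simply wait in the kernel's socket receive buffer.

/* Hedged sketch: sequential UDP server loop on port 161.
 * Datagrams that arrive while handle_request() (hypothetical) is running
 * queue up in the kernel socket buffer until the next recvfrom() call. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

void handle_request(const unsigned char *pdu, ssize_t len,
                    const struct sockaddr_in *peer, int sock);  /* hypothetical */

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(161);
    bind(sock, (struct sockaddr *)&addr, sizeof addr);

    for (;;) {
        unsigned char pdu[1500];
        struct sockaddr_in peer;
        socklen_t peer_len = sizeof peer;

        /* Blocks until the next queued datagram is available. */
        ssize_t n = recvfrom(sock, pdu, sizeof pdu, 0,
                             (struct sockaddr *)&peer, &peer_len);
        if (n > 0)
            handle_request(pdu, n, &peer, sock);  /* one request at a time */
    }
}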
The premise of your question is flawed, as there is no concept of "coming in bulk at a single time": no matter in which order the UDP datagrams carrying the SNMP packets are received, and no matter how long a duration lies between the receipt of each packet by your network interface, your operating system will present the SNMP packets to you in receipt order, in sequence. You have one listen port and one read buffer, so this synchronicity is already how network data processing works and you shouldn't worry about it.
I would say though, that if you are waiting for some resource to become available while processing an SNMP request (as suggested by your use of the word "timeout"), you probably ought to get on and start processing your other pending SNMP requests in the meantime, or you risk your whole stack grinding to a halt. It's not fair to make a manager wait some unknown duration for a response to request B just because some other manager made a request A that is experiencing a delay in being serviced. That being said, you probably do want some upper limit on how many requests can be serviced at any one time, to prevent potential DDoSsing — choosing this value can only be done by you, with your knowledge of the use case and the ecosystem.
Get requests carry one OID per request; a GetBulk request can ask for several OIDs in one request. Also, an SNMP client can use an async mode, sending multiple requests at minimal intervals and then waiting for the replies.
Packets can also arrive out of order due to network delays and equal-cost routes. You can experiment by sending requests with snmpget, snmpbulkget and snmpbulkwalk, and use tcpdump to see what is on the wire.
So, in general, your agent has to be ready to accept bursts of requests.
For simplicity, if the request rate is low and your agent can reply fast enough, you can use one-by-one processing. Some requests may fail in this case, but clients can retry and eventually get a reply from the agent.

Veins - Unexpected behavior with lost packets in certain vehicles

I'm working with the Veins framework on top of the OMNeT++ simulator, and I'm facing a weird situation where a few specific nodes lose all received packets.
To put everybody in context, I'm simulating 100 nodes (4 flows of 25 nodes), all under coverage (apparently), each sending 10 packets per second. Depending on the moment the nodes enter the network (i.e. are created by SUMO), some of them (usually just 1, but it can be 2, 3, 4...) enter a mode where all packets are marked as lost (SNIRLostPackets) because they receive a packet while another packet is already being received (according to the Decider, the NIC is already synced to another frame).
That is not supposed to happen in 802.11 unless there are hidden nodes and the senders don't see each other at the moment of sending their respective frames (both see the channel as idle), right?
So this behavior is not expected at all, and it distorts the final lost-packet statistics. I tuned the transmission power and the interference range, but nothing changes.
It happens too often to ignore it and I would like to know if anybody has experienced this behavior and how it was solved.
Thank you
(OK, apparently the issue arises in a special case where a packet starts being received correctly, but by the end of the reception the node has switched to a TX state.
The packet is then marked as "received while sending", but the node has already marked this frame as the next correct reception, so it keeps discarding all subsequently received frames without end.
It seems to be a bug, and a possible workaround is adding these lines
if (!frame->getWasTransmitting()) {
    curSyncFrame = 0;  // forget the frame we were synced to, so the next reception can start cleanly
}
in the processSignalEnd function (Decider80211p file), inside the "(frame->getWasTransmitting() || phy11p->getRadioState() == Radio::TX)" case.
I'm not quite sure whether this case should happen at all, as a node should not send a packet while receiving.
Hope it helps.

VoIP SIP partial number dialing

When using an old-style analogue or ISDN telephone, dialing a number has no explicit end: there is no signal that the number is complete and finished. However, adapters and the like enable old phones to be used for VoIP via SIP.
As I understand it, the SIP request headers contain the whole client address or number.
How then is a SIP session established without knowing if the dialed number is complete?
SIP (per se) doesn't say anything about when calls are made or how dialing works; that's entirely up to the device or program. Most ATAs act like traditional POTS phones connected to a switch and dial once a completed dial-plan entry has been matched (like 1-212-345-6789 or 911 or 411), or once a certain time since the last digit has elapsed (though most of those calls will end up forwarded to the "you've dialed an invalid number, please try again" message or beeps). True IP phones often function closer to the cellphone (or cordless phone) model, with a "call" or "dial" button.
In many devices the dial-plan is programmable, sometimes by the user, other times (more often) by the service provider (Vonage, etc), in a few by either party.
Depending on the dial-plan, it may do more or less validation of the number being dialed in the matching (like checking for valid area-code digits or not, etc).
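To make the dial-plan idea concrete, here is a small hedged sketch of matching a dialed string against simplified patterns ('x' meaning any digit); the pattern list and matches_plan() are invented for illustration and do not follow any particular ATA's dial-plan syntax.

/* Hedged sketch: check a dialed string against simplified dial-plan patterns.
 * 'x' matches any single digit; any other character must match literally.
 * The pattern list is invented for illustration only. */
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static const char *plan[] = { "911", "411", "1xxxxxxxxxx", NULL };

static bool matches_pattern(const char *pattern, const char *dialed)
{
    if (strlen(pattern) != strlen(dialed))
        return false;
    for (size_t i = 0; pattern[i] != '\0'; i++) {
        if (pattern[i] == 'x') {
            if (!isdigit((unsigned char)dialed[i]))
                return false;
        } else if (pattern[i] != dialed[i]) {
            return false;
        }
    }
    return true;
}

/* Returns true once the dialed digits completely match a dial-plan entry,
 * at which point such a device would place the call immediately. */
bool matches_plan(const char *dialed)
{
    for (size_t i = 0; plan[i] != NULL; i++)
        if (matches_pattern(plan[i], dialed))
            return true;
    return false;
}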
By guessing. If no additional digit arrives within a certain number of seconds, the call is made. Often you can speed this up by terminating your number with a # or similar.
glglgl's guess is correct: a SIP device only initiates a call once it has acquired the full number it needs to use. SIP uses URIs in call requests, which are very similar to email addresses; in the same way that sending an email to a partial address is likely to fail, initiating a call with a partial SIP URI is also likely to fail.
As to how SIP devices recognise when the user has completed the number, it is normally done with a timeout (for example, no more keys pressed within 10 seconds) or by the user pressing a "Send" key, which, as glglgl also alludes to, will often be the # key on phones connected to an ATA. IP phones typically have a "Send" or "Dial" button.
Some ATAs do allow you to adjust the timeout to detect when a user has completed dialling. I know the original Sipura ATAs (now owned by Cisco) allowed the delay to be configured in their internal dialplan.
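Purely as an illustration of the inter-digit-timeout approach (not any particular ATA's firmware), here is a minimal sketch; get_digit_timeout() and start_sip_call() are hypothetical placeholders, and the 5-second timeout and '#' terminator are arbitrary choices.

/* Hedged sketch: collect digits until '#' or an inter-digit timeout,
 * then hand the complete number to the SIP stack.
 * get_digit_timeout() and start_sip_call() are hypothetical placeholders. */
#include <stddef.h>

#define INTER_DIGIT_TIMEOUT_MS 5000

int  get_digit_timeout(unsigned timeout_ms);  /* returns a digit char, or -1 on timeout */
void start_sip_call(const char *number);      /* builds the SIP URI and sends the INVITE */

void collect_and_dial(void)
{
    char number[32] = {0};
    size_t len = 0;

    for (;;) {
        int d = get_digit_timeout(INTER_DIGIT_TIMEOUT_MS);

        /* Timeout or explicit '#' terminator: the number is considered complete. */
        if (d < 0 || d == '#')
            break;

        if (len < sizeof number - 1)
            number[len++] = (char)d;
    }

    if (len > 0)
        start_sip_call(number);  /* e.g. maps to sip:<number>@provider */
}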
