EtherCAT vs. ADS (Automation Device Specification) - TwinCAT

What is the main difference between ADS and EtherCAT, and where exactly do they sit in the OSI model?

EtherCAT can be used for real-time applications, but ADS cannot. This comes down to how each of them is set up.
OSI layer
The OSI layer model for EtherCAT looks like this (Wikipedia):
ISO/OSI Layer   | EtherCAT
--------------- | ---------------------------------------------------------------
7. Application  | Mailbox Acyclic Data Access, Cyclic Data Exchange, HTTP*, FTP*
6. Presentation | —
5. Session      | —
4. Transport    | TCP*
3. Network      | IP*
2. Data link    | Mailbox/Buffer Handling, Process Data Mapping, Extreme Fast Auto-Forwarder, Ethernet MAC
1. Physical     | 100BASE-TX, 100BASE-FX
where the *'s are optional.
The ADS protocol runs on top of the TCP/IP or UDP/IP protocols.
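As a minimal sketch of what that means in practice (assuming the third-party pyads library; the AMS NetId, port constant and variable name below are placeholders), reading a single PLC variable over ADS is ordinary client programming on top of TCP:

```python
# Minimal sketch: read one PLC variable over ADS, which is carried over TCP
# (port 48898 via the AMS router) underneath. NetId and variable name are made up.
import pyads

plc = pyads.Connection("192.168.0.10.1.1", pyads.PORT_TC3PLC1)  # placeholder AMS NetId
plc.open()
try:
    # Acyclic request/response over the TCP connection, not a real-time cycle.
    value = plc.read_by_name("MAIN.nCounter", pyads.PLCTYPE_INT)
    print("MAIN.nCounter =", value)
finally:
    plc.close()
```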
EtherCAT
From Wikipedia:
With EtherCAT, the standard Ethernet packet or frame (according to IEEE 802.3) is no longer received, interpreted, and copied as process data at every node. The EtherCAT slave devices read the data addressed to them while the telegram passes through the device, processing data "on the fly". In other words, real-time data and messages are prioritized over more general, less time-sensitive or heavy load data.
Similarly, input data are inserted while the telegram passes through. A frame is not completely received before being processed; instead processing starts as soon as possible. Sending is also conducted with a minimum delay of small bit times. Typically the entire network can be addressed with just one frame.[2]
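As a purely conceptual sketch of that "on the fly" idea (this is not real EtherCAT master/slave code, and the slave layout below is invented), one frame can be pictured as carrying a byte range per slave, which each slave reads and overwrites as the telegram passes through the ring:

```python
# Conceptual sketch only: one shared telegram, each "slave" owns a byte range in it.
frame = bytearray(16)                       # the cyclic process-data telegram

slaves = [
    {"name": "drive",  "offset": 0, "size": 4},   # invented layout
    {"name": "io",     "offset": 4, "size": 2},
    {"name": "sensor", "offset": 6, "size": 2},
]

def sample_inputs(slave):
    return bytes([len(slave["name"])] * slave["size"])   # dummy input data

def pass_frame(frame, slaves):
    """Emulate the telegram travelling once through the ring of slaves."""
    reads = {}
    for s in slaves:
        lo, hi = s["offset"], s["offset"] + s["size"]
        reads[s["name"]] = bytes(frame[lo:hi])   # slave reads the data addressed to it...
        frame[lo:hi] = sample_inputs(s)          # ...and inserts its input data on the fly
    return reads

print(pass_frame(frame, slaves))
print(frame.hex())                               # one frame served every node
```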
ADS
ADS runs on top of TCP, which is not fast enough for real-time purposes, or UDP, which is not reliable enough. This is due to the following reasons.
For TCP, this is due to the fact that several messages need to go back and forth (Wikipedia):
The server must be listening (passive open) for connection requests from clients before a connection is established. Three-way handshake (active open), retransmission, and error detection adds to reliability but lengthens latency.
For UDP it is the fact that there is no way to check if a message was delivered (Wikipedia):
UDP is suitable for purposes where error checking and correction are either not necessary or are performed in the application; UDP avoids the overhead of such processing in the protocol stack. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for packets delayed due to retransmission, which may not be an option in a real-time system.[1]
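The difference is visible directly at the socket level: a TCP connect() blocks until the three-way handshake completes and every segment may be retransmitted behind the scenes, while a UDP sendto() just hands the datagram to the stack with no setup and no delivery confirmation. A minimal sketch (host and port are placeholders):

```python
import socket

HOST, PORT = "192.168.1.50", 9000   # placeholder endpoint

# TCP: connect() does not return until the three-way handshake has completed,
# and every send() may later trigger retransmissions.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect((HOST, PORT))
tcp.sendall(b"hello over TCP")
tcp.close()

# UDP: no handshake, no acknowledgements; sendto() returns as soon as the
# datagram is handed to the stack, whether or not it ever arrives.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", (HOST, PORT))
udp.close()
```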
TCP/UDP difference
To remember the difference between TCP and UDP, the following joke might help:
RedneckBob on April 1, 2014
"Hi, I'd like to hear a TCP joke."
"Hello, would you like to hear a TCP joke?"
"Yes, I'd like to hear a TCP joke."
"OK, I'll tell you a TCP joke."
"Ok, I will hear a TCP joke."
"Are you ready to hear a TCP joke?"
"Yes, I am ready to hear a TCP joke."
"Ok, I am about to send the TCP joke. It will last 10 seconds, it has
two characters, it does not have a setting, it ends with a punchline."
"Ok, I am ready to get your TCP joke that will last 10 seconds, has
two characters, does not have an explicit setting, and ends with a
punchline."
"I'm sorry, your connection has timed out. Hello, would you like to
hear a TCP joke?"
shawabawa3 on April 2, 2014
I'd reply with a UDP joke, but you might not get it

Related

How do tools like iperf measure UDP?

Given that UDP packets don't actually send acks, how does a program like iperf measure their one-way performance, i.e., how can it confirm that the packets actually reached:
within a time frame
intact, and uncorrupted
By contrast, it seems intuitive to me that TCP packets, which have an ACK sent back for each of them, allow rigorous benchmarking of their movement across a network to be done very reliably from a client.
1/ "how can it confirm that the packets actually reached [...] intact, and uncorrupted"
UDP is an unfairly despised protocol, but come on, this is going way too far here! :-)
UDP has a checksum, just like TCP:
https://en.wikipedia.org/wiki/User_Datagram_Protocol#Checksum_computation
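That checksum is the standard 16-bit one's-complement Internet checksum, computed over a pseudo-header plus the UDP header and payload. A minimal sketch of the checksum core (the pseudo-header assembly is omitted here):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum with end-around carry (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                           # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# A receiver recomputes this over pseudo-header + UDP header + payload;
# a mismatch means the datagram was corrupted in transit.
print(hex(internet_checksum(b"example payload")))
```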
2/ "how can it confirm that the packets actually reached [...] within a time frame"
It does not, because this is not what UDP is about, nor TCP by the way.[*]
As can be seen from its source code here:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L55
...what it does though, is check for out-of-order packets. A "pcount" is set on the sending side and checked at the receiving side here:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L99
...and compute a somewhat bogus jitter:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L110
(real life is more complicated than this: you not only have jitter, but also drift)
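For reference, that kind of running jitter estimate is the RFC 3550 style low-pass filter over the change in one-way transit time between consecutive packets. A minimal sketch, assuming timestamps in seconds and reasonably synchronized clocks:

```python
def update_jitter(jitter, prev_transit, send_ts, recv_ts):
    """RFC 3550-style smoothed jitter estimate for a UDP stream."""
    transit = recv_ts - send_ts                  # one-way transit (needs synced clocks)
    if prev_transit is None:
        return jitter, transit                   # nothing to compare against yet
    d = abs(transit - prev_transit)              # inter-packet delay variation
    jitter += (d - jitter) / 16.0                # low-pass filter with gain 1/16
    return jitter, transit

# usage: fold in (send_ts, recv_ts) pairs as packets arrive
jitter, prev = 0.0, None
for send_ts, recv_ts in [(0.00, 0.010), (0.02, 0.033), (0.04, 0.049)]:
    jitter, prev = update_jitter(jitter, prev, send_ts, recv_ts)
print(f"estimated jitter: {jitter * 1000:.3f} ms")
```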
[*]:
For semi-guaranteed, soft "within a time frame" / real-time layer 3 and above protocols, look at RTP, RTSP and such. But neither TCP nor UDP inherently has this.
For real, serious, hard real-time guarantees, you've got to go to layer 2 protocols such as Ethernet-AVB:
https://en.wikipedia.org/wiki/Audio_Video_Bridging
...which were designed because IP and above simply cannot. make. hard. real. time. guaranteed. delivery. Period.
EDIT:
This is another debate, but...
The first thing you need for "within a time frame" is a shared wall clock on the sending and receiving systems (otherwise, how could you tell that a given received packet is out of date?).
From layer 3 (IP) and above, the NTP precision target is about 1 ms. It can be better than that on a LAN (but across IP networks, you're just taking a chance and hoping for the best).
On layer 2, i.e. the "LAN", PTP (Precision Time Protocol, IEEE 1588) targets the sub-microsecond range. That's about 1000 times more accurate. The same goes for the derived IEEE 802.1AS, "Timing and Synchronization for Time-Sensitive Applications" (gPTP), used in Ethernet AVB.
Conclusion on this sub-topic:
TCP/IP, though very handy and powerful, is not designed to "guarantee delivery within a time frame". Be it TCP or UDP. Get this idea out of your head.
The obvious way would be to connect to a server that participates in the testing.
The client starts by (for example) connecting to an NTP server to get an accurate time base.
Then the UDP client sends a series of packets to the server. In its payload, each packet contains:
a serial number
a timestamp when it was sent
a CRC
The server then looks these over and notes whether any serial numbers are missing (after some reasonable timeout) and compares the time it received each packet to the time the client sent the packet. After some period of time, the server sends a reply indicating how many packets it received, the mean and standard deviation of the transmission times, and an indication of how many appeared to be corrupted based on the CRCs they contained.
Depending on taste, you might also want to set up a simultaneous TCP connection from the client to the server to coordinate testing of the UDP channel and (possibly) return results.
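A minimal sketch of such a probe payload (the field layout, server address and CRC choice are invented for illustration; a real tool would also handle timeouts, clock synchronization and the result report):

```python
import socket, struct, time, zlib

HDR = struct.Struct("!Id")            # sequence number (uint32) + send timestamp (double)
ADDR = ("192.0.2.10", 6000)           # placeholder server address

def make_probe(seq: int) -> bytes:
    body = HDR.pack(seq, time.time())
    return body + struct.pack("!I", zlib.crc32(body))   # append CRC32 of the body

def check_probe(pkt: bytes):
    body, (crc,) = pkt[:-4], struct.unpack("!I", pkt[-4:])
    if zlib.crc32(body) != crc:
        return None                                      # corrupted in transit
    seq, sent = HDR.unpack(body)
    return seq, time.time() - sent                       # sequence number, transit time

# sender side: the server tallies missing sequence numbers, transit times
# and CRC failures, then reports back over a separate channel.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(100):
    sock.sendto(make_probe(seq), ADDR)
```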

How to find out whether the packet transferred over UDP is lost or dropped?

I am new to networking. I have a small doubt.
I am sending an alarm using SNMP to a target, but the alarm is not received at the target within the specified amount of time. I feel that the data may be lost or dropped.
Now my question is : on what basis should I conclude that there is a loss or drop?
Or will there be any other reason for the trap not to be received?
Let me assume your definition of "lost" means that one of the pieces of network equipment (switch, firewall, ...) didn't forward it to the next hop, and "dropped" means your network board didn't deliver it to your application (e.g. input buffer full, ...).
Under those assumptions, you have no way to know, in your application, whether the packet has been "lost" or "dropped". If you want to be sure, you can install a network sniffer such as Wireshark on your computer to make sure your packet is delivered (but maybe not processed by your application), or configure your network appliances (if you can) to log packet drops (meaning "loss" across the network).

Are the send and receive packet buffers the same preallocated memory?

I have a Windows app consuming large amounts of incoming UDP traffic and sending a small number of UDP 'keep alive' packets. I'm seeing a small number of drops on both incoming and outgoing traffic. I was surprised that the small amount of outgoing data was experiencing drops, so I captured the packets using NetMon and I can see them all being sent out of the server, yet of 3 frames sent only 2 arrive at the Linux server.
I'd like to know the following:
1. Does NetMon clone from the socket buffer, meaning the data may be dropped at the packet buffer and not actually be sent out of the server?
2. Is the packet buffer memory the same for both send and receive (i.e. if the receive packet buffers are using all of the preallocated buffer memory, could this cause packet loss on the small amount of outgoing traffic)?
First thing: the send and receive packet buffers have separate memory.
Second thing: NetMon works at a lower network layer, not the socket layer.
Third thing: keep in mind that UDP is an unreliable protocol and you cannot ensure that all packets sent from one end will be received on the other end. If you need reliability, you should consider TCP or some other reliable protocol.
By the way, are the sender and receiver on the same LAN or across the Internet? How are they connected? If you can describe that, then maybe someone can suggest something else to debug the issue further.

How can TCP be tuned for high-performance one-way transmission?

My (network) client sends 50 to 100 KB data packets every 200 ms to my server. There are up to 300 clients. The server sends nothing to the clients. The server (dedicated) and the clients are on a LAN. How can I tune the TCP configuration for better performance? The server runs Windows Server 2003 or 2008, the clients Windows 2000 and up.
E.g. TCP window size: does changing this parameter help? Anything else? Any special socket options?
[EDIT]: actually, in different modes packets can be up to 5 MB.
I did a study on this a couple of years ago with 1700 data points. The conclusion was that the single best thing you can do is configure an enormous socket receive buffer (e.g. 512k) at the receiver. Do that to the listening socket, so it will be inherited by the accepted sockets and will already be set while they are handshaking. That in turn allows TCP window scaling to be negotiated during the handshake, which allows the client to know about a window size > 64k. The enormous window size basically lets the client transmit at the maximum possible rate, subject only to congestion avoidance rather than closed receive windows.
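In socket terms, that means setting SO_RCVBUF on the listening socket before accept(), so the accepted connections inherit the large buffer and window scaling is already negotiated during the handshake. A minimal sketch (the port number is arbitrary):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Set the big receive buffer on the *listening* socket: accepted sockets
# inherit it, so TCP window scaling can be negotiated during the handshake.
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 512 * 1024)
srv.bind(("0.0.0.0", 5001))
srv.listen(128)

conn, addr = srv.accept()      # this connection already advertises the large window
data = conn.recv(65536)
```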
What OS?
IPv4 or v6?
Why so large a dump; why can't it be broken down?
Assuming a solid, stable, low bandwidth-delay-product link, you can adjust things like in-flight sizing, initial window size, and MTU (depending on the data, IP version, and mode [TCP/UDP]).
You could also round-robin or balance inputs, so you have less interrupt time from the NIC; binding is an option as well.
5 MB /packet/? That's a pretty poor design. I would think it'd lead to a lot of segment retransmissions, and a LOT of kernel/stack memory being used in sequence reconstruction / retransmits (accept wait time, etc.).
(Is that even possible?)
Since all clients are on the LAN, you might try enabling "jumbo frames" (you need to run a netsh command for that; I'd have to google for the precise command, but there are plenty of how-tos).
On the application layer, you could use TransmitFile, which is the Windows sendfile equivalent and which works very well under Windows Server 2003 (it is artificially rate-limited under "non server", but that won't be a problem for you). Note that you can use a memory mapped file if you generate the data on the fly.
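TransmitFile itself is a Win32 API; as a rough cross-platform analogue for sketching the idea, Python's socket.sendfile() hands the file to the kernel where a sendfile-style call exists and falls back to plain send() elsewhere. Host, port and file name below are placeholders:

```python
import socket

HOST, PORT = "192.168.1.20", 7000     # placeholder receiver
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))

# Let the OS push the file contents out of the socket; on platforms with a
# sendfile-style syscall this avoids copying the data through user space.
with open("payload.bin", "rb") as f:
    sock.sendfile(f)

sock.close()
```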
As for tuning parameters, increasing the send buffer will likely not give you any benefit, though increasing the receive buffer may help in some cases because it reduces the likelihood of packets being dropped if the receiving application does not handle the incoming data fast enough. A bigger TCP window size (registry setting) may help, as this allows the sender to send out more data before having to block until ACKs arrive.
Yanking up the program's working set quota may be worth considering; it costs you nothing and may be an advantage, since the kernel needs to lock pages when sending them. Being allowed to have more pages locked might make things faster (or might not, but it won't hurt either; the defaults are ridiculously low anyway).

Ensuring packet order in UDP

I'm using 2 computers with an application to send and receive UDP datagrams. There is no flow control and ICMP is disabled. Frequently, when I send a file as UDP datagrams via the application, I get two packets arriving out of order and therefore packet loss.
I've disabled any kind of firewall and there is no hardware switch connected between the computers (they are directly wired).
Is there a way to make sure Winsock and send() will send the packets in the order they were handed over?
Or does the OS take care of that?
Or is some network device configuration needed?
UDP is a lightweight protocol that by design doesn't handle things like packet sequencing. TCP is a better choice if you want robust packet delivery and sequencing.
UDP is generally designed for applications where packet loss is acceptable or preferable to the delay which TCP incurs when it has to re-request packets. UDP is therefore commonly used for media streaming.
If you're limited to using UDP you would have to develop a method of identifying the out of sequence packets and resequencing them.
UDP does not guarantee that your packets will arrive in order. (It does not even guarantee that your packets will arrive at all.) If you need that level of robustness you are better off with TCP. Alternatively you could add sequence markers to your datagrams and rearrange them at the other end, but why reinvent the wheel?
is there a way to make sure winsock and send() will send the packets the same way they got there?
It's called TCP.
Alternatively, try a reliable UDP protocol such as UDT. I'm guessing you might be on a small embedded platform, so you may want a more compact protocol like Bell Labs' RUDP.
there is no flow control (ICMP disabled)
You can implement your own flow control using UDP:
Send one or more UDP packets
Wait for an acknowledgement (sent as another UDP packet from receiver to sender)
Repeat as above
See Sliding window protocol for further details.
[This would be in addition to having a sequence number in the packets which you send.]
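A minimal sketch of the simplest form of this, a stop-and-wait scheme (effectively a window of one packet; the address, timeout and retry count below are invented). A real sliding window keeps several unacknowledged packets in flight instead of one:

```python
import socket, struct

ADDR = ("192.0.2.20", 7100)                 # placeholder receiver
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.2)                        # how long to wait for each ACK

def send_reliable(seq: int, payload: bytes, retries: int = 5) -> bool:
    """Send one numbered datagram and wait for a matching ACK, retransmitting on timeout."""
    pkt = struct.pack("!I", seq) + payload
    for _ in range(retries):
        sock.sendto(pkt, ADDR)
        try:
            ack, _ = sock.recvfrom(1024)
            if len(ack) >= 4 and struct.unpack("!I", ack[:4])[0] == seq:
                return True                 # receiver confirmed this sequence number
        except socket.timeout:
            continue                        # no ACK in time: retransmit
    return False

# usage: the receiver is expected to echo back the 4-byte sequence number as the ACK
for seq, chunk in enumerate([b"part-1", b"part-2", b"part-3"]):
    if not send_reliable(seq, chunk):
        raise RuntimeError(f"packet {seq} was never acknowledged")
```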
There is no point in trying to create your own TCP-like wrapper. We love the speed of UDP, and that would just slow things down. Your problem can be overcome if you design your protocol so that every UDP datagram is independent of the others. Our packets can arrive in any order so long as the header packet arrives first. The header says how many packets are supposed to arrive. Also, UDP has become a lot more reliable since this post was created over a decade ago. Don't try to
This question is 12 years old, and it seems almost a waste to answer it now, even though the suggestions I would make have already been posed. I dealt with this issue back in 2002, in a program that was using UDP broadcasts to communicate with other running instances on the network. If a packet got lost, it wasn't a big deal. But if I had to send a large packet, greater than 1020 bytes, I broke it up into multiple packets. Each packet contained a header that described what packet number it was, along with a header that told me it was part of a larger overall packet. So, the structure was created, the payload was simply dropped into the (correct) place in the buffer, and the bytes were subtracted from the overall total that was needed. I knew all the packets had arrived once the needed byte total reached zero. Once all of the packets arrived, that packet got processed. If another advertisement packet came in, then everything that had been building up was thrown away. That told me that one of the fragments didn't make it. But again, this wasn't critical data; the code could live without it. But I did implement an AdvReplyType in every packet, so that if it was a critical packet, I could reply to the sender with an ADVERTISE_INCOMPLETE_REQUEST_RETRY packet type, and the whole process could start over again.
This whole system was designed for LAN operation, and in all of my debugging/beta testing, I rarely ever lost a packet, but on larger networks I would often get them out of order...but I did get them. Being that it's now 12 years later, and UDP broadcasting seems to be frowned upon by a lot of IT Admins, UDP doesn't seem like a good, solid system any longer. ChrisW mentioned a Sliding Window Protocol; this is sort of what I built...without the sliding part! More of a "Fixed Window Protocol". I just wasted a few more bytes in the header of each of the payload packets to tell how many total bytes are in this Overlapped Packet, which packet this was, and the unique MsgID it belonged to so that I didn't have to get the initial packet telling me how many packets to expect. Naturally, I didn't go as far as implementing RFC 1982, as that seemed like overkill for this. As long as I got one packet, I'd know the total length, unique Message Id, and which packet number this one was, making it pretty easy to malloc() a buffer large enough to hold the entire Message. Then, a little math could tell me where exactly in the Message this packet fits into. Once the Message buffer was filled in...I knew I got the whole message. If a packet arrived that didn't belong to this unique Message ID, then we knew this was a bust, and we likely weren't going to ever get the remainder of the old message.
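A minimal sketch of that kind of fragment-and-reassemble scheme (the header layout and field sizes below are invented for illustration, not the author's original format):

```python
import struct

FRAG_HDR = struct.Struct("!IHHI")   # msg_id, frag_index, frag_count, total_bytes
MAX_PAYLOAD = 1020                  # same fragment size the answer mentions

def fragment(msg_id: int, data: bytes):
    """Split a message into independently routable UDP payloads."""
    chunks = [data[i:i + MAX_PAYLOAD] for i in range(0, len(data), MAX_PAYLOAD)] or [b""]
    return [FRAG_HDR.pack(msg_id, i, len(chunks), len(data)) + c
            for i, c in enumerate(chunks)]

def reassemble(fragments):
    """Rebuild the message once every fragment of one msg_id has arrived (any order)."""
    buf, remaining, current = None, None, None
    for pkt in fragments:
        msg_id, idx, count, total = FRAG_HDR.unpack(pkt[:FRAG_HDR.size])
        if current != msg_id:                       # a new message throws away the old one
            current, remaining = msg_id, total
            buf = bytearray(total)
        chunk = pkt[FRAG_HDR.size:]
        buf[idx * MAX_PAYLOAD: idx * MAX_PAYLOAD + len(chunk)] = chunk
        remaining -= len(chunk)
        if remaining == 0:
            return bytes(buf)                       # all bytes accounted for
    return None                                     # some fragment never arrived

msg = b"x" * 5000
assert reassemble(fragment(42, msg)) == msg
```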
The only real reason I mention this today is that I believe there is still a time and a place to use a protocol like this: where TCP actually involves too much overhead, on slow or spotty networks; but those are also the networks where you have the greatest likelihood of, and fear of, packet loss. So, again, I'd also say that "reliability" cannot be a requirement, or you're right back to TCP. If I had to write this code today, I probably would have just implemented a multicast system, and the whole process probably would have been a lot easier on me. Maybe. It has been 12 years, and I've probably forgotten a huge portion of the implementation details.
Sorry, if I woke a sleeping giant here, that wasn't my intention. The original question intrigued me, and reminded me of this turn-of-the-century Windows C++ code I had written. So, please try to keep the negative comments to a minimum--if at all possible! (Positive comments, of course...always welcome!) J/K, of course, folks.

Resources