Are the send/receive packet buffers the same preallocated memory? (Windows)

I have a Windows app consuming large amounts of incoming UDP traffic and sending a small number of UDP 'keep alive' packets. I'm seeing a small number of drops on both incoming and outgoing traffic. I was surprised that the small amount of outgoing data was experiencing drops, so I captured the packets using NetMon and see them all leaving the server: 3 frames are sent but only 2 arrive at the Linux server.
I'd like to know the following:
1. Is NetMon capturing a clone of the socket buffer, meaning the data may be dropped at the packet buffer and never actually sent out of the server?
2. Is the packet buffer memory the same for both send and receive (i.e. if the receive packet buffers are using all of the preallocated buffer memory, could this cause packet loss on the small amount of outgoing traffic)?

First thing: The send and receive packet buffers have separate memory.
Second thing: NetMon works at a lower network layer, not the socket layer.
Third thing: Keep in mind that UDP is an unreliable protocol and you cannot ensure that all packets sent from one end will be received on the other end. If you need reliability, you should consider TCP or some other reliable protocol.
By the way, are the sender and receiver on the same LAN, or connected over the Internet? How are they connected? If you can describe the setup, maybe someone can suggest something else to debug the issue further.

Related

EtherCAT vs. ADS (Automation Device Specification)

What is the main difference between ADS and EtherCAT and where is their exact position on the OSI model?
EtherCAT can be used for real-time applications, but ADS can't. That is due to how the two are set up.
OSI layer
The OSI layer model for EtherCAT looks like this (Wikipedia):
ISO/OSI Layer   | EtherCAT
7. Application  | Mailbox Acyclic Data Access; Cyclic Data Exchange; HTTP*, FTP*
6. Presentation | —
5. Session      | —
4. Transport    | TCP*
3. Network      | IP*
2. Data link    | Mailbox/Buffer Handling; Process Data Mapping; Extreme Fast Auto-Forwarder; Ethernet MAC
1. Physical     | 100BASE-TX, 100BASE-FX
where the *'s are optional.
The ADS protocol runs on top of the TCP/IP or UDP/IP protocols.
EtherCAT
Wikipedia
With EtherCAT, the standard Ethernet packet or frame (according to IEEE 802.3) is no longer received, interpreted, and copied as process data at every node. The EtherCAT slave devices read the data addressed to them while the telegram passes through the device, processing data "on the fly". In other words, real-time data and messages are prioritized over more general, less time-sensitive or heavy load data.
Similarly, input data are inserted while the telegram passes through. A frame is not completely received before being processed; instead processing starts as soon as possible. Sending is also conducted with a minimum delay of small bit times. Typically the entire network can be addressed with just one frame.
ADS
ADS runs on top of TCP, which is not fast enough, or UDP, which is not reliable enough, for real-time purposes. This is for the following reasons.
For TCP, this is due to the fact that several calls need to be made back and forth (Wikipedia):
The server must be listening (passive open) for connection requests from clients before a connection is established. Three-way handshake (active open), retransmission, and error detection adds to reliability but lengthens latency.
For UDP it is the fact that there is no way to check if a message was delivered (Wikipedia):
UDP is suitable for purposes where error checking and correction are either not necessary or are performed in the application; UDP avoids the overhead of such processing in the protocol stack. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for packets delayed due to retransmission, which may not be an option in a real-time system.
TCP/UDP difference
To remember the difference between TCP and UDP the following joke might help:
RedneckBob on April 1, 2014
"Hi, I'd like to hear a TCP joke."
"Hello, would you like to hear a TCP joke?"
"Yes, I'd like to hear a TCP joke."
"OK, I'll tell you a TCP joke."
"Ok, I will hear a TCP joke."
"Are you ready to hear a TCP joke?"
"Yes, I am ready to hear a TCP joke."
"Ok, I am about to send the TCP joke. It will last 10 seconds, it has
two characters, it does not have a setting, it ends with a punchline."
"Ok, I am ready to get your TCP joke that will last 10 seconds, has
two characters, does not have an explicit setting, and ends with a
punchline."
"I'm sorry, your connection has timed out. Hello, would you like to
hear a TCP joke?"
shawabawa3 on April 2, 2014
I'd reply with a UDP joke, but you might not get it

How to set the UDP packet reassembly timeout in Windows 10

I am currently developing an image acquisition application in Visual C++ that receives image data from a UDP hardware device with limited capabilities (i.e. no UDP checksum). The device has a Gbit connection to a dedicated switch and the PC uses a dedicated NIC and a 10 Gbit connection to this switch.
The transmitted image data consists of packets with a size ranging from 6528 to 19680 bytes. These packets are fragmented by the hardware device and reconstructed by the network stack on the PC.
Sometimes a packet (call it packet #4711) is lost and the PC side tries to reconstruct it for a long time. Within this timespan a new packet with the same packet id is sent by the hardware device because of an overflow of the 16-bit packet id. Now the PC receives new fragments for (a new) packet #4711, uses them to complete the old, still unassembled packet, and assembles a damaged packet. To top it off, the remaining fragments of the new #4711 packet are stored and combined with the next #4711 (which will be received a few seconds later). So the longer the system runs, the more packet ids will be compromised, until no communication is possible at all.
We cannot calculate the UDP checksum on the hardware device because of its limited capabilities.
We cannot use IPv6 (which would offer bigger packet ids) because there is no support for the hardware device.
We will have to implement our own protocol on top of UDP and "manually" fragment and reconstruct the data, but we could avoid this if we could find a way to reduce the packet reconstruction timeout on Windows to 500ms or less.
I searched Google and Stackoverflow for information, but there are not many results and none of them was of much help.
Hence the question: Is there a way to reduce the reconstruction timeout for IPv4 UDP fragments on Windows 10 via Registry, Windows API or any other magic or do you have a better suggestion?
Since Windows 2000 the IP packet reassembly timeout is hardcoded; there is no official way of modifying it because of strict RFC 2460 compatibility.
Details can be read here:
https://blogs.technet.microsoft.com/nettracer/2010/06/03/why-doesnt-ipreassemblytimeout-registry-key-take-effect-on-windows-2000-or-later-systems/
Currently the only possibility seems to be using raw sockets, which have been restricted since Windows 7 and are not available with every socket provider. This would make the application much more complex.
We will alter our software protocol so that no packets larger than 1400 bytes are sent at all. This forces us to handle fragmentation in our own software, but it prevents IP packet fragmentation and all of its pitfalls. Perhaps this is the correct way to handle such problems.
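Not part of the original answer, but a rough sketch of that approach may help: the sender splits each image into application-level fragments that, together with a small header, stay at or below 1400 bytes, so the IP layer never has to fragment anything. The header layout, field names, and the exact size limit are assumptions made for illustration.

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical application-level fragment header; field names and sizes
// are illustrative, not part of the asker's actual protocol.
#pragma pack(push, 1)
struct FragHeader {
    uint32_t messageId;    // identifies the logical image/message
    uint16_t fragIndex;    // index of this fragment within the message
    uint16_t fragCount;    // total number of fragments in the message
    uint32_t totalLength;  // total payload length of the whole message
};
#pragma pack(pop)

// Keep each datagram (header + payload) at or below 1400 bytes.
constexpr size_t kMaxPayload = 1400 - sizeof(FragHeader);

// Split one logical message into datagrams small enough to avoid IP fragmentation.
std::vector<std::vector<uint8_t>> fragment(uint32_t messageId,
                                           const uint8_t* data, size_t length) {
    const size_t count = (length + kMaxPayload - 1) / kMaxPayload;
    std::vector<std::vector<uint8_t>> datagrams;
    for (size_t i = 0; i < count; ++i) {
        const size_t offset = i * kMaxPayload;
        const size_t chunk = std::min(kMaxPayload, length - offset);

        FragHeader hdr{};
        hdr.messageId   = messageId;
        hdr.fragIndex   = static_cast<uint16_t>(i);
        hdr.fragCount   = static_cast<uint16_t>(count);
        hdr.totalLength = static_cast<uint32_t>(length);

        std::vector<uint8_t> dgram(sizeof(hdr) + chunk);
        std::memcpy(dgram.data(), &hdr, sizeof(hdr));
        std::memcpy(dgram.data() + sizeof(hdr), data + offset, chunk);
        datagrams.push_back(std::move(dgram));  // one entry per sendto() call
    }
    return datagrams;
}

Each returned datagram would then go out in its own sendto() call, and the receiver can reassemble the image from messageId, fragIndex and totalLength without ever relying on IP-level reassembly.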

How do tools like iperf measure UDP?

Given that UDP packets don't actually send acks, how does a program like iperf measure their one-way performance, i.e., how can it confirm that the packets actually reached:
within a time frame
intact, and uncorrupted
By contrast, it seems intuitive to me that TCP packets, which have an ack signal sent back, allow rigorous benchmarking of their movement across a network to be done very reliably from the client.
1/ "how can it confirm that the packets actually reached [...] intact, and uncorrupted"
UDP is an unfairly despised protocol, but come on, this is going way too far here! :-)
UDP has a checksum, just like TCP:
https://en.wikipedia.org/wiki/User_Datagram_Protocol#Checksum_computation
2/ "how can it confirm that the packets actually reached [...] within a time frame"
It does not, because this is not what UDP is about, nor TCP by the way.[*]
As can be seen from its source code here:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L55
...what it does, though, is check for out-of-order packets. A "pcount" is set on the sending side and checked on the receiving side here:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L99
...and calculates a somewhat bogus jitter:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L110
(real life is more complicated than this, you not only have jitter, but also drift)
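To make the bookkeeping concrete, here is a hedged sketch of the idea (not iperf's actual code; the struct and function names are invented): the receiver counts gaps in the sequence counter as losses, counts late arrivals as reordering, and keeps an RFC 1889-style smoothed jitter estimate from the change in transit time between consecutive packets.

#include <cmath>
#include <cstdint>

// Receiver-side bookkeeping; struct and field names are invented, not iperf's.
struct UdpStats {
    int64_t expected    = 1;    // next sequence number we expect
    int64_t lost        = 0;
    int64_t outOfOrder  = 0;
    double  jitter      = 0.0;  // smoothed estimate, RFC 1889 style
    double  prevTransit = 0.0;
};

// Called for every received datagram carrying (seq, sendTime);
// recvTime is read from the local clock.
void onPacket(UdpStats& s, int64_t seq, double sendTime, double recvTime) {
    if (seq == s.expected) {
        s.expected = seq + 1;
    } else if (seq > s.expected) {
        s.lost += seq - s.expected;  // a gap: count the skipped packets as lost
        s.expected = seq + 1;
    } else {
        ++s.outOfOrder;              // a "lost" packet showed up late
        --s.lost;
    }

    // Only the packet-to-packet change in transit time matters, so a constant
    // clock offset between the two hosts cancels out of the jitter estimate.
    const double transit = recvTime - sendTime;
    const double d = std::fabs(transit - s.prevTransit);
    s.prevTransit = transit;
    s.jitter += (d - s.jitter) / 16.0;  // exponential smoothing per RFC 1889
}

Because only the difference between consecutive transit times feeds the estimate, jitter is measurable without a shared wall clock, while absolute one-way delay is not (which is where the discussion below comes in).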
[*]:
For semi-guaranteed, soft "within a time frame" / real-time layer 3 and above protocols, look at RTP, RTSP and such. But neither TCP nor UDP inherently has this.
For a real, serious, hard real-time guarantee, you've got to go to layer 2 protocols such as Ethernet AVB:
https://en.wikipedia.org/wiki/Audio_Video_Bridging
...which were designed because IP and above simply cannot. make. hard. real. time. guaranteed. delivery. Period.
EDIT:
This is another debate, but...
The first thing you need for "within a time frame" is a shared wall clock on the sending and receiving systems (else, how could you tell that a received packet is out of date?).
From layer 3 (IP) and above, the NTP precision target is about 1 ms. It can be better than that on a LAN (but across IP networks, it's just taking a chance and hoping for the best).
On layer 2, aka the "LAN", the layer 2 PTP (Precision Time Protocol), IEEE 1588, targets the sub-microsecond range. That's 1000 times more accurate. The same goes for the derived IEEE 802.1AS, "Timing and Synchronization for Time-Sensitive Applications (gPTP)", used in Ethernet AVB.
Conclusion on this sub-topic:
TCP/IP, though very handy and powerful, is not designed to "guarantee delivery within a time frame". Be it TCP or UDP. Get this idea out of your head.
The obvious way would be to connect to a server that participates in the testing.
The client starts by (for example) connecting to an NTP server to get an accurate time base.
Then the UDP client sends a series of packets to the server. In its payload, each packet contains:
a serial number
a timestamp when it was sent
a CRC
The server then looks these over and notes whether any serial numbers are missing (after some reasonable timeout) and compares the time it received each packet to the time the client sent the packet. After some period of time, the server sends a reply indicating how many packets it received, the mean and standard deviation of the transmission times, and an indication of how many appeared to be corrupted based on the CRCs they contained.
Depending on taste, you might also want to set up a simultaneous TCP connection from the client to the server to coordinate testing of the UDP channel and (possibly) return results.
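As a hedged illustration of that scheme (not from iperf or any specific tool; the struct layout, the CRC choice, and the clock source are all assumptions), the probe payload could look roughly like this:

#include <chrono>
#include <cstdint>
#include <cstring>

// Hypothetical probe payload; layout and field names are illustrative only.
#pragma pack(push, 1)
struct Probe {
    uint64_t serial;    // monotonically increasing sequence number
    uint64_t sentUsec;  // send timestamp, microseconds since the epoch
    uint32_t crc;       // CRC over the two fields above, verified by the server
};
#pragma pack(pop)

// Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320); any agreed checksum works.
uint32_t crc32(const uint8_t* data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

Probe makeProbe(uint64_t serial) {
    Probe p{};
    p.serial   = serial;
    p.sentUsec = std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();
    p.crc = crc32(reinterpret_cast<const uint8_t*>(&p),
                  sizeof(p) - sizeof(p.crc));  // CRC covers everything before it
    return p;                                  // hand the struct to sendto()
}

The server would recompute the CRC over the first two fields to flag corruption, use the serial to spot gaps, and compare sentUsec against its own clock for the timing statistics.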

How to explain this incredibly slow socket connection?

I was trying to set up a bandwidth test between two PCs, with only a switch between them. All network hardware is gigabit. On one machine I put a program to open a socket, listen for connections, and accept, followed by a loop to read data and measure bytes received against the 'performance counter'. On the other machine, the program opened a socket, connected to the first machine, and proceeded into a tight loop to pump data into the connection as fast as possible, in 1K blocks per send() call. With just that setup, things seem acceptably fast; I could get about 30 to 40 MBytes/sec through the network - distinctly faster than 100BaseT, within the realm of plausibility for gigabit h/w.
Here's where the fun begins: I tried to use setsockopt() to set the size of the buffers (SO_SNDBUF, SO_RCVBUF) on each end to 1K. Suddenly the receiving end reports it's getting a mere 4,000 or 5,000 bytes a second. Instrumenting the transmit side of things, it appears that the send() calls take 0.2 to 0.3 seconds each, just to send 1K blocks. Removing the setsockopt() from the receive side didn't seem to change things.
Now clearly, trying to manipulate the buffer sizes was a Bad Idea. I had thought that maybe forcing the buffer size to 1K, with send() calls of 1K, would be a way to force the OS to put one packet on the wire per send call, with the understanding that this would prevent the network stack from efficiently combining the data for transmission - but I didn't expect throughput to drop to a measly 4-5K/sec!
I don't have the time or the resources to chase this down and really understand it the way I'd like to, but would really like to know what could make a send() take 0.2 seconds. Even if it's waiting for acks from the other side, 0.2 seconds is just unbelievable. What gives?
Nagle?
Windows networks with small messages
The explanation is simply that a 1k buffer is an incredibly small buffer size, and your sending machine is probably sending one packet at a time. The sender must wait for the acknowledgement from the receiver before emptying the buffer and accepting the next block to send from your application (because the TCP layer may need to retransmit data later).
A more interesting exercise would be to vary the buffer size from its default for your system (query it to find out what that is) all the way down to 1k and see how each buffer size affects your throughput.
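For example, a minimal Winsock sketch of that experiment might look like the following (error handling trimmed; the timed transfer inside the loop is left as a placeholder):

#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

// Query the default SO_SNDBUF/SO_RCVBUF, then shrink the send buffer step by
// step and measure throughput at each size. Error handling is mostly omitted.
void bufferSizeExperiment(SOCKET s) {
    int sndDefault = 0, rcvDefault = 0;
    int len = sizeof(int);
    getsockopt(s, SOL_SOCKET, SO_SNDBUF, reinterpret_cast<char*>(&sndDefault), &len);
    len = sizeof(int);
    getsockopt(s, SOL_SOCKET, SO_RCVBUF, reinterpret_cast<char*>(&rcvDefault), &len);
    std::printf("defaults: SO_SNDBUF=%d SO_RCVBUF=%d\n", sndDefault, rcvDefault);

    // Halve the send buffer repeatedly, down to the 1K that caused the problem.
    for (int size = sndDefault; size >= 1024; size /= 2) {
        setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   reinterpret_cast<const char*>(&size),
                   static_cast<int>(sizeof(size)));
        // ... run the timed 1K-block send loop here and record bytes/second ...
    }
}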

Ensuring packet order in UDP

I'm using 2 computers with an application to send and receive UDP datagrams. There is no flow control and ICMP is disabled. Frequently when I send a file as UDP datagrams via the application, two packets arrive with their order swapped, and therefore I see packet loss.
I've disabled any kind of firewall and there is no hardware switch connected between the computers (they are directly wired).
Is there a way to make sure Winsock and send() will send the packets in the same order they were passed in?
Or is the OS doing that?
Or is some network device configuration needed?
UDP is a lightweight protocol that by design doesn't handle things like packet sequencing. TCP is a better choice if you want robust packet delivery and sequencing.
UDP is generally designed for applications where packet loss is acceptable or preferable to the delay which TCP incurs when it has to re-request packets. UDP is therefore commonly used for media streaming.
If you're limited to using UDP you would have to develop a method of identifying the out of sequence packets and resequencing them.
UDP does not guarantee that your packets will arrive in order. (It does not even guarantee that your packets will arrive at all.) If you need that level of robustness you are better off with TCP. Alternatively you could add sequence markers to your datagrams and rearrange them at the other end, but why reinvent the wheel?
is there a way to make sure winsock and send() will send the packets in the same order they were passed in?
It's called TCP.
Alternatively, try a reliable UDP protocol such as UDT. I'm guessing you might be on a small embedded platform, so you may want a more compact protocol like Bell Labs' RUDP.
there is no flow control (ICMP disabled)
You can implement your own flow control using UDP:
Send one or more UDP packets
Wait for an acknowledgement (sent as another UDP packet from receiver to sender)
Repeat as above
See Sliding window protocol for further details.
[This would be in addition to having a sequence number in the packets which you send.]
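As a rough sketch of the simplest version of this (a stop-and-wait scheme, i.e. a sliding window of size 1; the 4-byte sequence header, the echoed-ack format, the 200 ms timeout and the retry count are all assumptions), the sender side could look like:

#include <winsock2.h>
#include <cstdint>
#include <cstring>
#pragma comment(lib, "ws2_32.lib")

// Stop-and-wait: a sliding window of size 1. The 4-byte sequence header,
// the echoed-ack format, the 200 ms timeout and the 5 retries are assumptions.
// Assumes len <= 1496 so the datagram fits in the frame buffer.
bool sendReliable(SOCKET s, const sockaddr_in& peer,
                  uint32_t seq, const char* data, int len) {
    char frame[1500];
    std::memcpy(frame, &seq, sizeof(seq));        // sequence number header
    std::memcpy(frame + sizeof(seq), data, len);  // followed by the payload

    for (int attempt = 0; attempt < 5; ++attempt) {
        sendto(s, frame, static_cast<int>(sizeof(seq)) + len, 0,
               reinterpret_cast<const sockaddr*>(&peer), sizeof(peer));

        // Wait up to 200 ms for an ack that echoes the same sequence number.
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(s, &fds);
        timeval tv{0, 200 * 1000};
        if (select(0, &fds, nullptr, nullptr, &tv) > 0) {
            uint32_t ack = 0;
            int n = recvfrom(s, reinterpret_cast<char*>(&ack), sizeof(ack), 0,
                             nullptr, nullptr);
            if (n == static_cast<int>(sizeof(ack)) && ack == seq)
                return true;                      // acknowledged; caller sends seq+1
        }
        // Timeout or unexpected ack: fall through and retransmit.
    }
    return false;                                 // give up after 5 attempts
}

The receiver would echo each sequence number back as the acknowledgement and deliver datagrams to the application in sequence order, which also takes care of the original reordering problem.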
There is no point in trying to create your own TCP-like wrapper. We love the speed of UDP and that is just going to slow things down. Your problem can be overcome if you design your protocol so that every UDP datagram is independent of the others. Our packets can arrive in any order as long as the header packet arrives first. The header says how many packets are supposed to arrive. Also, UDP has become a lot more reliable since this post was created over a decade ago. Don't try to
This question is 12 years old, and it seems almost a waste to answer it now, even though the suggestions I would make have already been posed. I dealt with this issue back in 2002, in a program that was using UDP broadcasts to communicate with other running instances on the network. If a packet got lost, it wasn't a big deal. But if I had to send a large packet, greater than 1020 bytes, I broke it up into multiple packets. Each packet contained a header that described what packet number it was, along with a header that told me it was part of a larger overall packet. So, the structure was created, and the payload was simply dropped into the (correct) place in the buffer, and the bytes were subtracted from the overall total that was needed. I knew all the packets had arrived once the needed byte total reached zero. Once all of the packets arrived, that packet got processed. If another advertisement packet came in, then everything that had been building up was thrown away. That told me that one of the fragments didn't make it. But again, this wasn't critical data; the code could live without it. But I did implement an AdvReplyType in every packet, so that if it was a critical packet, I could reply to the sender with an ADVERTISE_INCOMPLETE_REQUEST_RETRY packet type, and the whole process could start over again.
This whole system was designed for LAN operation, and in all of my debugging/beta testing, I rarely ever lost a packet, but on larger networks I would often get them out of order...but I did get them. Being that it's now 12 years later, and UDP broadcasting seems to be frowned upon by a lot of IT Admins, UDP doesn't seem like a good, solid system any longer. ChrisW mentioned a Sliding Window Protocol; this is sort of what I built...without the sliding part! More of a "Fixed Window Protocol". I just wasted a few more bytes in the header of each of the payload packets to tell how many total bytes are in this Overlapped Packet, which packet this was, and the unique MsgID it belonged to so that I didn't have to get the initial packet telling me how many packets to expect. Naturally, I didn't go as far as implementing RFC 1982, as that seemed like overkill for this. As long as I got one packet, I'd know the total length, unique Message Id, and which packet number this one was, making it pretty easy to malloc() a buffer large enough to hold the entire Message. Then, a little math could tell me where exactly in the Message this packet fits into. Once the Message buffer was filled in...I knew I got the whole message. If a packet arrived that didn't belong to this unique Message ID, then we knew this was a bust, and we likely weren't going to ever get the remainder of the old message.
The only real reason I mention this today, is that I believe there still is a time and a place to use a protocol like this. Where TCP actually involves too much overhead, on slow, or spotty networks; but it's those where you also have the most likelihood of and possible fear of packet loss. So, again, I'd also say that "reliability" cannot be a requirement, or you're just right back to TCP. If I had to write this code today, I probably would have just implemented a Multicast system, and the whole process probably would have been a lot easier on me. Maybe. It has been 12 years, and I've probably forgotten a huge portion of the implementation details.
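The original code is long gone, but a sketch of the reassembly math described above might look like this (the header layout, the 1020-byte chunk size and all names are assumptions; duplicate or malformed fragments are not handled):

#include <cstdint>
#include <cstring>
#include <map>
#include <vector>

// Hypothetical fragment header matching the scheme described above;
// names, sizes and the 1020-byte chunk are illustrative.
#pragma pack(push, 1)
struct PayloadHeader {
    uint32_t msgId;         // unique message id
    uint32_t totalBytes;    // total size of the reassembled message
    uint16_t packetIndex;   // which fragment of the message this is
    uint16_t payloadBytes;  // payload bytes carried by this fragment
};
#pragma pack(pop)

constexpr size_t kChunk = 1020;  // payload bytes per fragment

struct Pending {
    std::vector<uint8_t> buffer;  // allocated once from totalBytes
    size_t bytesMissing = 0;      // counts down as fragments arrive
};

std::map<uint32_t, Pending> g_pending;

// Returns true when the message identified by hdr.msgId is complete.
bool onFragment(const PayloadHeader& hdr, const uint8_t* payload) {
    Pending& p = g_pending[hdr.msgId];
    if (p.buffer.empty()) {             // first fragment seen for this message
        p.buffer.resize(hdr.totalBytes);
        p.bytesMissing = hdr.totalBytes;
    }
    // A little math tells us exactly where this fragment belongs.
    const size_t offset = static_cast<size_t>(hdr.packetIndex) * kChunk;
    std::memcpy(p.buffer.data() + offset, payload, hdr.payloadBytes);
    p.bytesMissing -= hdr.payloadBytes;

    if (p.bytesMissing == 0) {
        // Whole message received: hand p.buffer to the application,
        // then g_pending.erase(hdr.msgId).
        return true;
    }
    return false;
}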
Sorry if I woke a sleeping giant here; that wasn't my intention. The original question intrigued me, and reminded me of this turn-of-the-century Windows C++ code I had written. So, please try to keep the negative comments to a minimum--if at all possible! (Positive comments, of course...always welcome!) J/K, of course, folks.

Resources