How to set the UDP packet reassembly timeout in Windows 10

I am currently developing an image acquisition application in Visual C++ that receives image data from a UDP hardware device with limited capabilities (i.e. no UDP checksum). The device has a GBit connection to a dedicated switch and the PC uses a dedicated NIC and a 10GBit connection to this switch.
The transmitted image data consists of packets with a size ranging from 6528 to 19680 bytes. These packets are fragmented by the hardware device and reconstructed by the network stack on the PC.
Sometimes a packet (call it packet #4711) loses a fragment and the PC side keeps trying to reassemble it for a long time. Within this timespan a new packet with the same packet id is sent by the hardware device because the 16-bit packet id overflows. The PC now receives fragments for the new packet #4711, uses them to complete the old, still unassembled packet, and assembles a damaged packet. To top it off, the remaining fragments of the new #4711 packet are stored and combined with the next #4711 (which will be received a few seconds later). So the longer the system runs, the more packet ids are compromised, until no communication is possible at all.
We cannot calculate the UDP checksum on the hardware device because of its limited capabilities.
We cannot use IPv6 (which would offer bigger packet ids) because the hardware device does not support it.
We will have to implement our own protocol on top of UDP and "manually" fragment and reconstruct the data, but we could avoid this if we could find a way to reduce the packet reassembly timeout on Windows to 500 ms or less.
I searched Google and Stackoverflow for information, but there are not many results and none of them was of much help.
Hence the question: Is there a way to reduce the reconstruction timeout for IPv4 UDP fragments on Windows 10 via Registry, Windows API or any other magic or do you have a better suggestion?

Since Windows 2000 the timeout is hardcoded; there is no official way of modifying the IP packet reassembly timeout because of strict RFC 2460 compliance.
Details can be read here:
https://blogs.technet.microsoft.com/nettracer/2010/06/03/why-doesnt-ipreassemblytimeout-registry-key-take-effect-on-windows-2000-or-later-systems/
Currently the only possibility seems to be using raw sockets, which have been restricted since Windows 7 and are not available with every socket provider. This would make the application much more complex.
We will alter our software protocol so that no packets larger than 1400 bytes are sent at all. This forces us to handle fragmentation in our software, but it prevents IP packet fragmentation and all of its traps. Perhaps this is the correct way to handle such problems (a sketch of such a fragment header is shown below).
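For illustration only, an application-level fragment header along those lines could look like the following minimal sketch; the field names, sizes, and helper function are assumptions, not the actual protocol:

    #include <algorithm>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Illustrative application-level fragment header; the field names and sizes
    // are assumptions, not the device's actual protocol.
    #pragma pack(push, 1)
    struct FragmentHeader {
        uint32_t imageId;        // wide id, so it cannot wrap within the reassembly window
        uint16_t fragmentIndex;  // position of this fragment within the image packet
        uint16_t fragmentCount;  // total number of fragments for this image packet
        uint16_t payloadBytes;   // payload bytes following this header
    };
    #pragma pack(pop)

    constexpr size_t kMaxPayload = 1400 - sizeof(FragmentHeader);

    // Split one logical packet (6528..19680 bytes) into datagrams of at most 1400 bytes.
    std::vector<std::vector<char>> fragmentPacket(uint32_t imageId,
                                                  const char* data, size_t size) {
        const uint16_t count =
            static_cast<uint16_t>((size + kMaxPayload - 1) / kMaxPayload);
        std::vector<std::vector<char>> datagrams;
        for (uint16_t i = 0; i < count; ++i) {
            const size_t offset = static_cast<size_t>(i) * kMaxPayload;
            const uint16_t chunk =
                static_cast<uint16_t>(std::min(kMaxPayload, size - offset));
            const FragmentHeader h{imageId, i, count, chunk};
            std::vector<char> d(sizeof(h) + chunk);
            std::memcpy(d.data(), &h, sizeof(h));
            std::memcpy(d.data() + sizeof(h), data + offset, chunk);
            datagrams.push_back(std::move(d));
        }
        return datagrams;
    }

The receiver can then reassemble on (imageId, fragmentIndex) and discard any image whose fragments stop arriving within its own timeout, which sidesteps the 16-bit IP identification wrap entirely.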

Related

High UDP communication latency because of audio rendering (Windows, C++)

I am trying to communicate with an external robot at 1 kHz using the UDP protocol with the Winsock library. I am using Windows 10 Pro Version 21H2. On the hardware side, I use a PC with an Intel Core i9 10900X, 32 GB of RAM and an Intel I219 NIC.
Up to a certain point it works pretty well: I measured the time spent on the communication (sending and receiving the packets sequentially takes between 200 and 500 microseconds), and I also measured with Wireshark the number of packets exchanged (1000 packets sent per second and 1000 packets received per second). The sending throughput is 2 Mbps and the receiving throughput is 3 Mbps.
The issue starts when any audio is rendered (even the sound played when changing the volume on Windows); this leads to noticeable latency (about 10 to 15 milliseconds).
Stopping the Windows Audio service solves the issue, but in our application we need to render sound permanently.
[Graph: round-trip time and frequency vs. UDP query index, using the PCI NIC]
A temporary workaround was to use a USB/Ethernet adapter instead of the NIC. With this type of device we see no latency, but we have already experienced issues in the past with performance drops due to thermal throttling.
[Graph: round-trip time and frequency vs. UDP query index, using the USB/Ethernet adapter]
I also tried reducing the audio process priority: no difference. I also tried setting the affinity mask of the audio service to different cores than my application: no difference either.
My question: is there a way to increase the audio latency in order to prioritize the UDP communication, or to reduce the latency of the UDP communication, so that we can meet our 1 kHz rate requirement?
This problem is due to the Receive Side Throttle feature some NICs support.
In order to fix it, you need to set the registry value
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile\NetworkThrottlingIndex to 0xffffffff and reboot Windows.
This registry key is private and internal to the Windows OS; it is not supposed to be used publicly and is not officially supported by Microsoft.
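If you prefer to set it from code rather than regedit, a minimal sketch using the Win32 registry API could look like this (run elevated; a reboot is still required; error handling is reduced to a message):

    #include <windows.h>
    #include <cstdio>

    int main() {
        HKEY key = nullptr;
        LONG rc = RegCreateKeyExW(
            HKEY_LOCAL_MACHINE,
            L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Multimedia\\SystemProfile",
            0, nullptr, 0, KEY_SET_VALUE, nullptr, &key, nullptr);
        if (rc != ERROR_SUCCESS) { std::printf("open failed: %ld\n", rc); return 1; }

        DWORD value = 0xFFFFFFFF;  // value described above: disables the multimedia network throttling
        rc = RegSetValueExW(key, L"NetworkThrottlingIndex", 0, REG_DWORD,
                            reinterpret_cast<const BYTE*>(&value), sizeof(value));
        RegCloseKey(key);
        if (rc != ERROR_SUCCESS) { std::printf("set failed: %ld\n", rc); return 1; }
        std::printf("NetworkThrottlingIndex set; reboot for it to take effect.\n");
        return 0;
    }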

How do tools like iperf measure UDP?

Given that UDP packets don't actually send acks, how does a program like iperf measure their one-way performance, i.e., how can it confirm that the packets actually reached:
within a time frame
intact, and uncorrupted
By contrast, it intuitively seems to me that TCP packets, which have an ACK signal sent back, allow rigorous benchmarking of their movement across a network to be done very reliably from a client.
1/ "how can it confirm that the packets actually reached [...] intact, and uncorrupted"
UDP is an unfairly despised protocol, but come on, this is going way too far here! :-)
UDP has a checksum, just like TCP:
https://en.wikipedia.org/wiki/User_Datagram_Protocol#Checksum_computation
2/ "how can it confirm that the packets actually reached [...] within a time frame"
It does not, because this is not what UDP is about, nor TCP by the way.[*]
As can be seen from its source code here:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L55
...what it does, though, is check for out-of-order packets. A "pcount" is set on the sending side and checked on the receiving side here:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L99
...and it also computes a somewhat bogus jitter:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L110
(real life is more complicated than this: you not only have jitter, but also clock drift)
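As a simplified illustration of that receive-side bookkeeping (not iperf's actual code; the struct and clock assumptions are mine), a sequence-number tracker with an RFC 3550-style jitter estimate could look like this:

    #include <cstdint>
    #include <cmath>

    // Sketch: track a sequence number carried in the payload to count loss and
    // reordering, and keep a smoothed jitter estimate in the style of RFC 3550.
    struct UdpStats {
        uint64_t expected = 1;     // next pcount we expect
        uint64_t lost = 0;
        uint64_t outOfOrder = 0;
        double jitter = 0.0;       // smoothed jitter, seconds
        double lastTransit = 0.0;  // previous (recvTime - sendTime)
        bool haveTransit = false;

        void onPacket(uint64_t pcount, double sendTime, double recvTime) {
            if (pcount < expected) {
                ++outOfOrder;              // arrived late, was already counted as lost
                if (lost > 0) --lost;
            } else {
                lost += pcount - expected; // everything skipped over is (so far) lost
                expected = pcount + 1;
            }
            double transit = recvTime - sendTime;   // needs roughly synchronized clocks
            if (haveTransit)
                jitter += (std::fabs(transit - lastTransit) - jitter) / 16.0;
            lastTransit = transit;
            haveTransit = true;
        }
    };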
[*]:
For semi-guaranteed, soft "within a time frame" / real-time layer 3 and above protocols, look at RTP, RTSP and such. But neither TCP nor UDP inherently has this.
For real, serious hard real-time guarantees, you've got to go to layer 2 protocols such as Ethernet-AVB:
https://en.wikipedia.org/wiki/Audio_Video_Bridging
...which were designed because IP and above simply cannot. make. hard. real. time. guaranteed. delivery. Period.
EDIT:
This is another debate, but...
The first thing you need for "within a time frame" is a shared wall clock on the sending and receiving systems (otherwise, how could you tell that a received packet is out of date?).
From layer 3 (IP) and above, NTP's precision target is about 1 ms. It can be better than that on a LAN (but across IP networks, you are just taking a chance and hoping for the best).
On layer 2, aka the "LAN", PTP (Precision Time Protocol, IEEE 1588) targets the sub-microsecond range. That's about 1000 times more accurate. The same goes for the derived IEEE 802.1AS, "Timing and Synchronization for Time-Sensitive Applications" (gPTP), used in Ethernet AVB.
Conclusion on this sub-topic:
TCP/IP, though very handy and powerful, is not designed to "guarantee delivery within a time frame". Be it TCP or UDP. Get this idea out of your head.
The obvious way would be to connect to a server that participates in the testing.
The client starts by (for example) connecting to an NTP server to get an accurate time base.
Then the UDP client sends a series of packets to the server. In its payload, each packet contains:
a serial number
a timestamp when it was sent
a CRC
The server then looks these over and notes whether any serial numbers are missing (after some reasonable timeout) and compares the time it received each packet to the time the client sent the packet. After some period of time, the server sends a reply indicating how many packets it received, the mean and standard deviation of the transmission times, and an indication of how many appeared to be corrupted based on the CRCs they contained.
Depending on taste, you might also want to set up a simultaneous TCP connection from the client to the server to coordinate testing of the UDP channel and (possibly) return results.
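To make that concrete, a test payload along those lines could look like the sketch below. This is an illustration, not any tool's actual wire format; the field sizes and the CRC-32 routine are assumptions:

    #include <cstdint>
    #include <cstring>

    // Hypothetical test payload for a client/server UDP benchmark.
    #pragma pack(push, 1)
    struct ProbePacket {
        uint64_t serial;      // detects loss and reordering
        uint64_t sendTimeNs;  // client clock (NTP-disciplined), nanoseconds
        uint32_t crc;         // CRC-32 over serial, sendTimeNs and payload
        uint8_t  payload[1000];
    };
    #pragma pack(pop)

    // Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320).
    uint32_t crc32(const uint8_t* data, size_t len) {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; ++i) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; ++bit)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

The server recomputes the CRC over the same fields to detect corruption, tracks missing serial numbers, and compares sendTimeNs against its own (synchronized) clock for the transit-time statistics.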

Is send/receive packet buffer the same preallocated memory

I have a Windows app consuming large amounts of incoming UDP traffic and sending a small number of UDP 'keep alive' packets. I'm seeing a small number of drops on both incoming and outgoing traffic. I was surprised that the small amount of outgoing data was experiencing drops, so I captured the packets using NetMon and can see them all being sent out of the server; of 3 frames sent, only 2 arrive at the Linux server.
I'd like to know the following:
1. Does NetMon capture a clone of the socket buffer, meaning the data may be dropped at the packet buffer and never actually be sent off the server?
2. Is the packet buffer memory the same for both send and receive (i.e. if the receive packet buffers are using all of the preallocated buffer memory, could this cause packet loss on the small amount of outgoing traffic)?
First thing: the send and receive packet buffers have separate memory.
Second thing: NetMon works at a lower network layer, not the socket layer.
Third thing: keep in mind that UDP is an unreliable protocol and you cannot ensure that all packets sent from one end will be received on the other end. If you need reliability, you should consider TCP or some other reliable protocol.
By the way, are the sender and receiver on the same LAN or connected across the Internet? How are they connected? If you can describe that, then maybe someone can suggest something else to debug the issue further.
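On the first point: since the two buffers are separate, they are also sized separately. A minimal Winsock sketch (the buffer sizes are only examples, not recommendations):

    #include <winsock2.h>
    #include <cstdio>
    // link with Ws2_32.lib

    // The send and receive buffers are independent per-socket settings
    // and can be tuned independently.
    void tune_buffers(SOCKET s) {
        int rcvbuf = 4 * 1024 * 1024;  // large buffer for the heavy incoming traffic
        int sndbuf = 64 * 1024;        // a small buffer is enough for keep-alives
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                       reinterpret_cast<const char*>(&rcvbuf), sizeof(rcvbuf)) != 0)
            std::printf("SO_RCVBUF failed: %d\n", WSAGetLastError());
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       reinterpret_cast<const char*>(&sndbuf), sizeof(sndbuf)) != 0)
            std::printf("SO_SNDBUF failed: %d\n", WSAGetLastError());
    }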

How can TCP be tuned for high-performance one-way transmission?

My (network) client sends 50 to 100 KB data packets every 200 ms to my server. There are up to 300 clients. The server sends nothing to the clients. The server (dedicated) and the clients are on a LAN. How can I tune the TCP configuration for better performance? The server runs Windows Server 2003 or 2008, the clients Windows 2000 and up.
E.g. the TCP window size: does changing this parameter help? Anything else? Any special socket options?
[EDIT]: actually, in different modes packets can be up to 5 MB.
I did a study on this a couple of years ago with 1700 data points. The conclusion was that the single best thing you can do is configure an enormous socket receive buffer (e.g. 512k) at the receiver. Do that to the listening socket, so it will be inherited by the accepted sockets and will already be set while they are handshaking. That in turn allows TCP window scaling to be negotiated during the handshake, which allows the client to learn about a window size > 64k. The enormous window size basically lets the client transmit at the maximum possible rate, subject only to congestion avoidance rather than closed receive windows.
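A minimal Winsock sketch of that setup, assuming WSAStartup() has already been called and with error handling omitted (the port and buffer size are only examples):

    #include <winsock2.h>
    #include <ws2tcpip.h>
    // link with Ws2_32.lib

    // Set SO_RCVBUF on the listener *before* listen()/accept(), so accepted
    // sockets inherit it and window scaling can be negotiated during the handshake.
    SOCKET make_listener(unsigned short port) {
        SOCKET ls = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        int rcvbuf = 512 * 1024;  // the "enormous" receive buffer, e.g. 512k
        setsockopt(ls, SOL_SOCKET, SO_RCVBUF,
                   reinterpret_cast<const char*>(&rcvbuf), sizeof(rcvbuf));

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        bind(ls, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(ls, SOMAXCONN);
        return ls;  // sockets returned by accept() inherit the 512k buffer
    }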
What OS?
IPv4 or v6?
Why such a large dump; why can't it be broken down?
Assuming a solid, stable, low bandwidth-delay product, you can adjust things like in-flight sizing, initial window size, and MTU (depending on the data, IP version, and mode [TCP/UDP]).
You could also round-robin or balance inputs, so you have less interrupt time from the NIC; binding is an option as well.
5 MB per packet? That's a pretty poor design. I would think it'd lead to a lot of segment retransmissions, and a LOT of kernel/stack memory being used in sequence reconstruction / retransmits (accept wait time, etc.).
(Is that even possible?)
Since all clients are on the LAN, you might try enabling "jumbo frames" (you need to run a netsh command for that; I would have to google for the precise command, but there are plenty of how-tos).
On the application layer, you could use TransmitFile, which is the Windows sendfile equivalent and which works very well under Windows Server 2003 (it is artificially rate-limited under the non-server editions, but that won't be a problem for you). Note that you can use a memory-mapped file if you generate the data on the fly.
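For reference, a minimal TransmitFile sketch could look like the following; the blocking, zero-flags call and the lack of error reporting are simplifications:

    #include <winsock2.h>
    #include <mswsock.h>
    #include <windows.h>
    // link with Ws2_32.lib and Mswsock.lib

    // Push a whole file over an already connected TCP socket from kernel mode,
    // avoiding a user-space read/send loop.
    bool send_file(SOCKET connectedSocket, const wchar_t* path) {
        HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                                  OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
        if (file == INVALID_HANDLE_VALUE) return false;
        // 0 bytes-to-write means "the entire file"; blocking call on a blocking socket.
        BOOL ok = TransmitFile(connectedSocket, file, 0, 0, nullptr, nullptr, 0);
        CloseHandle(file);
        return ok == TRUE;
    }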
As for tuning parameters, increasing the send buffer will likely not give you any benefit, though increasing the receive buffer may help in some cases because it reduces the likelihood of packets being dropped if the receiving application does not handle the incoming data fast enough. A bigger TCP window size (registry setting) may help, as this allows the sender to send out more data before having to block until ACKs arrive.
Raising the program's working set quota may be worth considering; it costs you nothing and may be an advantage, since the kernel needs to lock pages when sending them. Being allowed to have more pages locked might make things faster (or might not, but it won't hurt either; the defaults are ridiculously low anyway).

Ensuring packet order in UDP

I'm using two computers with an application to send and receive UDP datagrams. There is no flow control and ICMP is disabled. Frequently, when I send a file as UDP datagrams via the application, two packets swap their order and are therefore treated as packet loss.
I've disabled any kind of firewall, and there is no hardware switch connected between the computers (they are directly wired).
Is there a way to make sure Winsock and send() will send the packets in the same order they were handed over?
Or does the OS already do that?
Or is some network device configuration needed?
UDP is a lightweight protocol that by design doesn't handle things like packet sequencing. TCP is a better choice if you want robust packet delivery and sequencing.
UDP is generally designed for applications where packet loss is acceptable or preferable to the delay which TCP incurs when it has to re-request packets. UDP is therefore commonly used for media streaming.
If you're limited to using UDP, you would have to develop a method of identifying out-of-sequence packets and resequencing them.
UDP does not guarantee that your packets will arrive in order. (It does not even guarantee that your packets will arrive at all.) If you need that level of robustness you are better off with TCP. Alternatively you could add sequence markers to your datagrams and rearrange them at the other end, but why reinvent the wheel?
is there a way to make sure Winsock and send() will send the packets in the same order they were handed over?
It's called TCP.
Alternatively, try a reliable UDP protocol such as UDT. I'm guessing you might be on a small embedded platform, so you may want a more compact protocol like Bell Labs' RUDP.
there is no flow control (ICMP disabled)
You can implement your own flow control using UDP:
Send one or more UDP packets
Wait for an acknowledgement (sent as another UDP packet from receiver to sender)
Repeat as above
See Sliding window protocol for further details (a minimal stop-and-wait sketch is shown after this answer).
[This would be in addition to having a sequence number in the packets which you send.]
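As a starting point, here is a minimal stop-and-wait sender (effectively a window of 1) on top of Winsock UDP. The packet layout, the 100 ms timeout and the retry count are illustrative assumptions; a real sliding-window protocol would keep several packets in flight:

    #include <winsock2.h>
    #include <cstdint>
    #include <cstring>
    // link with Ws2_32.lib

    #pragma pack(push, 1)
    struct Datagram { uint32_t seq; uint16_t len; char data[1400]; };
    struct Ack      { uint32_t seq; };
    #pragma pack(pop)

    // Send one chunk and wait for its ACK, retransmitting on timeout.
    bool send_reliable(SOCKET s, const sockaddr_in& peer,
                       uint32_t seq, const char* buf, uint16_t len) {
        Datagram d{seq, len, {}};
        std::memcpy(d.data, buf, len);

        DWORD timeoutMs = 100;  // per-attempt ACK timeout
        setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
                   reinterpret_cast<const char*>(&timeoutMs), sizeof(timeoutMs));

        for (int attempt = 0; attempt < 5; ++attempt) {
            sendto(s, reinterpret_cast<const char*>(&d),
                   static_cast<int>(sizeof(d.seq) + sizeof(d.len) + len), 0,
                   reinterpret_cast<const sockaddr*>(&peer), sizeof(peer));
            Ack ack{};
            int n = recv(s, reinterpret_cast<char*>(&ack), sizeof(ack), 0);
            if (n == sizeof(ack) && ack.seq == seq)
                return true;     // acknowledged, move on to the next sequence number
            // timeout or wrong ACK: retransmit
        }
        return false;            // give up after 5 attempts
    }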
There is no point in trying to create your own TCP-like wrapper. We love the speed of UDP, and that is just going to slow things down. Your problem can be overcome if you design your protocol so that every UDP datagram is independent of the others. Our packets can arrive in any order so long as the header packet arrives first. The header says how many packets are supposed to arrive. Also, UDP has become a lot more reliable since this post was created over a decade ago. Don't try to reinvent TCP.
This question is 12 years old, and it seems almost a waste to answer it now, especially since the suggestions I have have already been posed. I dealt with this issue back in 2002, in a program that was using UDP broadcasts to communicate with other running instances on the network. If a packet got lost, it wasn't a big deal. But if I had to send a large packet, greater than 1020 bytes, I broke it up into multiple packets. Each packet contained a header that described what packet number it was, along with a header that told me it was part of a larger overall packet. So the structure was created, the payload was simply dropped into the (correct) place in the buffer, and the bytes were subtracted from the overall total that was needed. I knew all the packets had arrived once the needed byte total reached zero. Once all of the packets arrived, that packet got processed. If another advertisement packet came in, then everything that had been building up was thrown away. That told me that one of the fragments didn't make it. But again, this wasn't critical data; the code could live without it. I did, however, implement an AdvReplyType in every packet, so that if it was a critical packet, I could reply to the sender with an ADVERTISE_INCOMPLETE_REQUEST_RETRY packet type, and the whole process could start over again.
This whole system was designed for LAN operation, and in all of my debugging/beta testing I rarely ever lost a packet, but on larger networks I would often get them out of order...but I did get them. Being that it's now 12 years later, and UDP broadcasting seems to be frowned upon by a lot of IT admins, UDP doesn't seem like a good, solid system any longer. ChrisW mentioned a Sliding Window Protocol; this is sort of what I built...without the sliding part! More of a "Fixed Window Protocol". I just wasted a few more bytes in the header of each of the payload packets to tell how many total bytes are in this Overlapped Packet, which packet this was, and the unique MsgID it belonged to, so that I didn't have to get the initial packet telling me how many packets to expect. Naturally, I didn't go as far as implementing RFC 1982, as that seemed like overkill for this. As long as I got one packet, I knew the total length, the unique MsgID, and which packet number this one was, making it pretty easy to malloc() a buffer large enough to hold the entire Message. Then a little math could tell me exactly where in the Message this packet fits. Once the Message buffer was filled in...I knew I got the whole message. If a packet arrived that didn't belong to this unique Message ID, then we knew this was a bust, and we likely weren't going to ever get the remainder of the old message.
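Since the original code isn't shown, here is a reconstruction of that header scheme under my own assumptions (I carry an explicit byte offset instead of a packet index, and I ignore duplicate fragments for brevity):

    #include <cstdint>
    #include <cstring>
    #include <cstdlib>

    // Every fragment carries the total message length, its own offset, and the
    // unique message id, so any single fragment is enough to size the buffer.
    #pragma pack(push, 1)
    struct FragHeader {
        uint32_t msgId;       // unique id of the overall message
        uint32_t totalBytes;  // total payload bytes of the overall message
        uint32_t offset;      // where this fragment's payload goes in the buffer
        uint16_t payloadLen;  // payload bytes carried by this fragment
    };
    #pragma pack(pop)

    struct Reassembly {
        uint32_t msgId = 0;
        uint32_t remaining = 0;  // bytes still missing; 0 means complete
        char*    buffer = nullptr;
    };

    // Returns true once the whole message has been assembled in r.buffer.
    bool on_fragment(Reassembly& r, const FragHeader& h, const char* payload) {
        if (r.buffer == nullptr || r.msgId != h.msgId) {
            free(r.buffer);                       // a new msgId means the old one is a bust
            r.buffer = static_cast<char*>(malloc(h.totalBytes));
            r.msgId = h.msgId;
            r.remaining = h.totalBytes;
        }
        std::memcpy(r.buffer + h.offset, payload, h.payloadLen);
        r.remaining -= h.payloadLen;              // assumes no duplicate fragments
        return r.remaining == 0;
    }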
The only real reason I mention this today is that I believe there is still a time and a place to use a protocol like this: where TCP actually involves too much overhead, on slow or spotty networks; but those are also the networks where you have the greatest likelihood of, and fear of, packet loss. So, again, I'd also say that "reliability" cannot be a requirement, or you're right back to TCP. If I had to write this code today, I probably would have just implemented a multicast system, and the whole process probably would have been a lot easier on me. Maybe. It has been 12 years, and I've probably forgotten a huge portion of the implementation details.
Sorry, if I woke a sleeping giant here, that wasn't my intention. The original question intrigued me, and reminded me of this turn-of-the-century Windows C++ code I had written. So, please try to keep the negative comments to a minimum--if at all possible! (Positive comments, of course...always welcome!) J/K, of course, folks.
