I am trying to port a libpcap-based program to macOS; it appears to have been written for Windows and Linux. The read timeout passed to pcap_open_live() is set to -1 (the same value is passed to PacketOpen), and on macOS this causes an error when opening the interface: BIOCSRTIMEOUT: Invalid argument. I am unable to find any documentation on what a -1 read timeout actually does. Additionally, is there a way to do the same thing on a BPF-based libpcap?
What does to_ms = -1 do with WinPcap and Linux libpcap?
Nothing predictable. To quote the tip-of-the-master-branch pcap(3pcap) man page:
packet buffer timeout

If, when capturing, packets are delivered as soon as they arrive, the application capturing the packets will be woken up for each packet as it arrives, and might have to make one or more calls to the operating system to fetch each packet.

If, instead, packets are not delivered as soon as they arrive, but are delivered after a short delay (called a "packet buffer timeout"), more than one packet can be accumulated before the packets are delivered, so that a single wakeup would be done for multiple packets, and each set of calls made to the operating system would supply multiple packets, rather than a single packet. This reduces the per-packet CPU overhead if packets are arriving at a high rate, increasing the number of packets per second that can be captured.

The packet buffer timeout is required so that an application won't wait for the operating system's capture buffer to fill up before packets are delivered; if packets are arriving slowly, that wait could take an arbitrarily long period of time.

Not all platforms support a packet buffer timeout; on platforms that don't, the packet buffer timeout is ignored. A zero value for the timeout, on platforms that support a packet buffer timeout, will cause a read to wait forever to allow enough packets to arrive, with no timeout.

NOTE: the packet buffer timeout cannot be used to cause calls that read packets to return within a limited period of time, because, on some platforms, the packet buffer timeout isn't supported, and, on other platforms, the timer doesn't start until at least one packet arrives. This means that the packet buffer timeout should NOT be used, for example, in an interactive application to allow the packet capture loop to "poll" for user input periodically, as there's no guarantee that a call reading packets will return after the timeout expires even if no packets have arrived.
Nothing is said there about a negative timeout; I'll update it to explicitly say that a negative value should not be used. (Not on Windows, not on macOS, not on Linux, not on *BSD, not on Solaris, not on AIX, not on HP-UX, not on Tru64 UNIX, not on IRIX, not on anything.)
By setting the timeout to -1, they probably intended to put the pcap_t into "non-blocking mode", where an attempt to read will return immediately if there are no packets waiting to be read, rather than waiting for a packet to arrive. So, instead, provide a timeout of, for example, 100 (meaning 1/10 second) and use pcap_setnonblock() after the pcap_open_live() call to put the pcap_t into non-blocking mode. That should work on all platforms.
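For example, a minimal sketch of that approach (the interface name "en0" is just a placeholder, and error handling is kept to the bare minimum):

#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Use a positive packet buffer timeout (here 100 ms) instead of -1. */
    pcap_t *handle = pcap_open_live("en0", 65535, 1, 100, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* Emulate what the -1 timeout was probably meant to do:
       make reads return immediately when no packets are waiting. */
    if (pcap_setnonblock(handle, 1, errbuf) == -1) {
        fprintf(stderr, "pcap_setnonblock: %s\n", errbuf);
        pcap_close(handle);
        return 1;
    }

    /* ... run pcap_dispatch()/pcap_next_ex() as before ... */

    pcap_close(handle);
    return 0;
}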
In the context of a DFU driver, I'm trying to respond to a USB control IN transfer with a packet of length zero (not a ZLP terminating a transfer that is a multiple of the max packet size, just zero bytes of data). However, the host returns a timeout condition. I tried both the dfu-util tool with its corresponding protocol and a minimal working example with pyusb that just issues a control IN transfer of some length, with the device returning no data.
My key question is: do I achieve this by responding with a NAK, or should I set the endpoint valid but without any data? The specs are rather vague about this, in my opinion.
Here are some technical details since I'm not sure where the problem is:
Host: Linux Kernel 5.16.10, dfu-util and pyusb (presumably) both using libusb 0.1.12
Device: STM32L1 with ChibiOS 21.11.1 USB stack (sends NAK in the above situation, I also tried to modify it to send a zero-length packet without success)
It sounds like you are programming the firmware of a device, and you want your device to give a response that is 0 bytes long when the host starts a control read transfer.
You can't simply send a NAK handshake: that is what the device does when the data isn't ready yet, and it causes the host to try again later to read the data.
Instead, you must actually send a 0-length IN packet to the host. When the host receives this packet, it sees that the packet is shorter than the maximum packet size, so it knows the data phase of the control transfer is done, and it moves on to the status stage.
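For the device side, here is a rough sketch of how that might look in a ChibiOS control-request hook (the hook signature and the usbSetupTransfer() helper are assumptions based on the ChibiOS HAL USB driver, and DFU_UPLOAD (0x02) is the bRequest value from the DFU 1.1 spec; adapt it to your actual stack):

/* Sketch, not drop-in code: end a DFU_UPLOAD data stage with a
 * zero-length IN packet instead of NAKing forever. */
#include "hal.h"

#define DFU_UPLOAD 0x02U

static bool dfu_requests_hook(USBDriver *usbp) {
  uint8_t bRequest = usbp->setup[1];

  if (bRequest == DFU_UPLOAD /* && nothing left to upload */) {
    /* Queue a 0-byte data stage: the host sees a packet shorter than
       the max packet size, ends the data phase and proceeds to the
       status stage instead of timing out on NAKs. */
    usbSetupTransfer(usbp, NULL, 0, NULL);
    return true;   /* request handled here */
  }
  return false;    /* let the default handler deal with everything else */
}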
On Windows XP, when I call WSASend repeatedly on a non-blocking socket, it eventually fails with WSAENOBUFS.
I've two cases here:
Case 1:
I am calling WSASend on a non-blocking socket. Here is pseudo-code:
while (1)
{
    result = WSASend(...); // Buffer size 1024 bytes
    if (result == -1)
    {
        if (WSAGetLastError() == WSAENOBUFS)
        {
            // Wait for some time before calling WSASend again
            Sleep(1000);
        }
    }
}
In this case WSASend returns successfully around 88000 times. Then it fails with WSAENOBUFS and never recovers, even when retried after some time as shown in the code.
Case 2:
In order to solve this problem, I referred to this and, as suggested there, just before the above code I called setsockopt with SO_SNDBUF and set the buffer size to 0 (zero).
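That is, something along these lines (a sketch; sock stands for the socket in question):

int zero = 0;
setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (const char *)&zero, sizeof(zero));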
In this case, WSASend returns successfully around 2600 times and then fails. But after waiting, it succeeds again for about 2600 calls and then fails again.
Now I have these questions for both cases:
Case 1:
What factors determine the number 88000 here?
If the failure was because the TCP buffer was full, why didn't it recover after some time?
Case 2:
Again, what factors determine the number 2600 here?
As described in the Microsoft KB article, if it sends directly from the application buffer instead of from the internal TCP buffers, why would it fail with WSAENOBUFS?
EDIT:
With asynchronous sockets (on Windows XP), the behavior is even stranger. If I ignore WSAENOBUFS and continue writing to the socket, I eventually get a disconnection, WSAECONNRESET. I'm not sure at the moment why that happens.
The values are undocumented and depend on what's installed on your machine that may sit between your application and the network driver. They're likely linked to the amount of memory in the machine. The limits (most probably non-paged pool memory and the I/O page-lock limit) are likely MUCH higher on Vista and above.
The best way to deal with the problem is to add application-level flow control to your protocol, so that you don't assume you can just send at whatever rate you feel like. See this blog posting for details of how non-blocking and async I/O can cause resource usage to balloon, and how you have no control over it unless you have your own flow control.
In summary, never assume that you can just write data to the wire as fast as you like using non-blocking/async APIs. Remember that, due to how TCP/IP's internal flow control works, you COULD be using an uncontrollable amount of local machine resources, and the client is the only thing that has any control over how fast those resources are released back to the OS on the server machine.
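To make that concrete, here is a minimal flow-control sketch (the cap, the counter and the function names are all illustrative, not part of any Windows API): track how many bytes you have handed to WSASend() that have not completed yet, and stop submitting new sends once a cap is reached.

#include <windows.h>

#define MAX_PENDING_BYTES (256 * 1024)      /* arbitrary example cap */

static volatile LONG g_pendingBytes = 0;    /* bytes queued but not yet completed */

/* Ask before issuing another overlapped/non-blocking WSASend(). */
BOOL can_send_now(DWORD len)
{
    return (g_pendingBytes + (LONG)len) <= MAX_PENDING_BYTES;
}

/* Call after WSASend() has been successfully queued. */
void on_send_queued(DWORD len)
{
    InterlockedExchangeAdd(&g_pendingBytes, (LONG)len);
}

/* Call from the send-completion handler (IOCP packet / completion routine). */
void on_send_completed(DWORD len)
{
    InterlockedExchangeAdd(&g_pendingBytes, -(LONG)len);
}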
I'm using I/O completion ports on Windows for serial port communication (we will potentially have lots and lots of serial port usage). I've done the usual, creating the IOCP, spinning up the I/O threads, and associating my CreateFile() handle with the IOCP (CreateFile() was called with FILE_FLAG_OVERLAPPED). That's all working fine. I've set the COMMTIMEOUTS all to 0 except ReadIntervalTimeout which is set to MAXDWORD in order to be completely async.
In my I/O thread, I've noticed that GetQueuedCompletionStatus() blocks indefinitely; I'm using an INFINITE timeout. So I put a ReadFile() call right after I associate my handle with the IOCP. Now that causes GetQueuedCompletionStatus() to return immediately for some reason with 0 bytes transferred, but with no errors (it returns TRUE and GetLastError() reports 0). I obviously want it to block if there's nothing for it to do. If I put another ReadFile() after GetQueuedCompletionStatus(), then another thread in the pool will pick it up, again with 0 bytes transferred and no errors.
In the examples I've seen and followed, I don't see anyone setting the hEvent on the OVERLAPPED structure when using IOCP. Is that necessary? I don't care to ever block IOCP threads -- so I'll never be interested in CreateEvent(...) | 1.
If it's not necessary, what could be causing the problem? GetQueuedCompletionStatus() needs to block until data arrives on the serial port.
Are there any good IOCP serial port examples out there? I haven't found a complete serial port + IOCP example out there. Most of them are for sockets. In theory, it should work for serial ports, files, sockets, etc.
I figured it out -- I wasn't calling SetCommMask() with EV_RXCHAR | EV_TXEMPTY and then WaitCommEvent() with the OVERLAPPED struct. After I did that, my IOCP threads behaved as expected: GetQueuedCompletionStatus() returned when a new character appeared on the port, and I could then call ReadFile().
So to answer the original question: "no, you don't need to set hEvent for IOCP with serial ports."
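For reference, a condensed sketch of that sequence (error handling trimmed to the bare minimum; "COM1" and the event mask are just examples):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the port for overlapped I/O and attach it to a completion port. */
    HANDLE hPort = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE, 0,
                               NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    HANDLE hIocp = CreateIoCompletionPort(hPort, NULL, 0, 0);

    COMMTIMEOUTS to = {0};
    to.ReadIntervalTimeout = MAXDWORD;    /* fully asynchronous reads */
    SetCommTimeouts(hPort, &to);

    /* Ask the driver to report "character received" / "tx empty" events. */
    SetCommMask(hPort, EV_RXCHAR | EV_TXEMPTY);

    OVERLAPPED ov = {0};
    DWORD events = 0;
    WaitCommEvent(hPort, &events, &ov);   /* pends; completes via the IOCP */

    DWORD bytes; ULONG_PTR key; OVERLAPPED *pov;
    if (GetQueuedCompletionStatus(hIocp, &bytes, &key, &pov, INFINITE)) {
        /* A comm event arrived; now it is safe to issue ReadFile() with
           another OVERLAPPED and drain whatever data is available. */
        printf("comm event signalled, mask = 0x%lx\n", events);
    }

    CloseHandle(hPort);
    CloseHandle(hIocp);
    return 0;
}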
I have a problem - I don't know the amount of data being sent to my UDP server.
The current code is this - testing in irb:
require 'socket'
sock = UDPSocket.new
sock.bind('0.0.0.0', 41588)
sock.read             # Returns nothing
sock.recvfrom(1024)   # Requires the length of data to read - I don't know this
I could set the recvfrom length to 65535 or some other large number, but this seems like an unnecessary hack.
recvfrom and recvfrom_nonblock both throw away anything beyond the specified length.
Am I setting the socket up incorrectly?
Note that UDP is a datagram protocol, not a stream protocol like TCP. Each read from a UDP socket dequeues one full datagram. You might pass these flags to recvfrom(2):
MSG_PEEK
    This flag causes the receive operation to return data from the beginning of the receive queue without removing that data from the queue. Thus, a subsequent receive call will return the same data.

MSG_WAITALL
    This flag requests that the operation block until the full request is satisfied. However, the call may still return less data than requested if a signal is caught, an error or disconnect occurs, or the next data to be received is of a different type than that returned.

MSG_TRUNC
    Return the real length of the packet, even when it was longer than the passed buffer. Only valid for packet sockets.
If you really don't know how large a packet you might get (the protocol limit is 65507 bytes, see here) and don't care about doubling the number of system calls, do the MSG_PEEK first, then read the exact number of bytes from the socket.
Or you can set an approximate maximum buffer size, say 4096, and then use MSG_TRUNC to check whether you lost any data.
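A quick C-level sketch of the peek-then-read approach (this relies on Linux recv(2) behaviour, where MSG_TRUNC on a UDP socket makes the call return the datagram's real length; recv_whole_datagram is just an illustrative helper name):

#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Receive one UDP datagram of unknown size from sockfd.
 * Peek first with MSG_TRUNC to learn its real length, then receive it
 * into a buffer of exactly that size. Returns the length, or -1 on error. */
ssize_t recv_whole_datagram(int sockfd, char **out)
{
    char probe;

    /* With MSG_PEEK | MSG_TRUNC the datagram stays queued and (on Linux)
       the return value is its full length, even with a 1-byte buffer. */
    ssize_t len = recv(sockfd, &probe, 1, MSG_PEEK | MSG_TRUNC);
    if (len < 0)
        return -1;

    char *buf = malloc(len > 0 ? (size_t)len : 1);
    if (buf == NULL)
        return -1;

    ssize_t got = recv(sockfd, buf, (size_t)len, 0);  /* now dequeue it */
    if (got < 0) {
        free(buf);
        return -1;
    }

    *out = buf;
    return got;
}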
Also note that UDP datagrams are rarely larger than 1472 bytes - the Ethernet payload size of 1500 minus 20 bytes of IPv4 header and 8 bytes of UDP header - nobody likes fragmentation.
Edit:
Socket::MSG_PEEK is there; for the others you can use the integer values:
MSG_TRUNC 0x20
MSG_WAITALL 0x100
Look into your system headers (/usr/include/bits/socket.h on Linux) to be sure.
Looking at the documentation for Ruby's recvfrom(), the argument is a maximum length. Just provide 65535 (the maximum length of a UDP datagram); the returned data should be the sent datagram at whatever size it happens to be, and you should be able to determine its size the way you would for any string-like thing in Ruby.
I am working on a problem with an SNMP extension agent on Windows, which passes traps to snmp.exe via the SnmpExtensionTrap callback.
We added a couple of fields to the agent recently, and I am starting to see that some traps are getting lost. When I intercept the call in the debugger and reduce the length of some strings, the same traps that would otherwise have been lost go through.
I cannot seem to find any reference to a size limit or anything similar for the data passed via SnmpExtensionTrap. Does anyone know of one?
I would expect the trap size to be limited by the UDP packet size, since SNMP runs over the datagram-oriented UDP protocol.
The maximum size of a UDP packet is 64 KB, but you'll have to take into account the SNMP overhead plus any limitations of the transport you're running over (e.g. Ethernet).