I'm writing firmware for a USB 2.0 full speed device that communicates with a WinUSB host, with one Bulk Pipe in each direction. When should the device send a zero-length packet (ZLP) to terminate an IN transfer, and how does it know that it should?
Section 5.8.3 of the USB 2.0 spec says:
A bulk transfer is complete when the endpoint does one of the following:
Has transferred exactly the amount of data expected
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet [ZLP]
I interpret this to mean that a ZLP should be sent when the transfer size is an integer multiple of the max packet size, and the "expected" size of the transfer is greater than the actual size (i.e. what is available to be sent). But how does the recipient know what's expected?
For instance, I'm using the WinUSBNet wrapper in C#. When I read from the pipe like this
int bytesRead;
byte[] buffer = new byte[128];
try
{
    bytesRead = m_PipeIN.Read(buffer);          // blocks until the IN transfer completes
    buffer = buffer.Take(bytesRead).ToArray();  // keep only the bytes actually received
}
catch (Exception)
{
    // handle a failed or timed-out read here
}
the library calls WinUsb_ReadPipe() like this:
WinUsb_ReadPipe(InterfaceHandle(ifaceIndex),
pipeID,
pBuffer + offset,
(uint)bytesToRead,
out bytesRead,
IntPtr.Zero);
Suppose the device has exactly 128 bytes to send, and max packet size is 64 bytes. How does the device determine what the host is "expecting", thus whether it should send a ZLP to terminate the transfer?
(Similar to this question, but that one is about control pipes. I'm asking about bulk pipes.)
Explanation of the spec:
Case 1
Has transferred exactly the amount of data expected
This means that if the host is expecting X amount of bytes, and you send exactly X amount of bytes, the transfer stops right there. MPS and ZLP don't play into it.
Case 2
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet [ZLP]
This means that if the host is expecting X bytes but you want to send only Y bytes, where Y < X, the transfer is complete as soon as you send a "short" packet, i.e. a packet smaller than MPS. If Y is a multiple of MPS, that short packet has to be a ZLP.
Example 1 (no ZLP)
MPS = 512, the host expects 8192 bytes.
You want to send only 1500 bytes. The payload would go over in 3 packets like this:
Packet 0: [512 bytes] MPS
Packet 1: [512 bytes] MPS
Packet 2: [476 bytes] short packet
When the host gets the short packet, it knows the transfer is complete, and won't continue asking for more packets for the transfer.
Example 2 (with ZLP)
MPS = 512, the host expects 8192 bytes.
You want to send only 2048 bytes. The payload would go over in 4 packets like this:
Packet 0: [512 bytes] MPS
Packet 1: [512 bytes] MPS
Packet 2: [512 bytes] MPS
Packet 3: [512 bytes] MPS
At this point, the host has received 4 MPS-sized packets so it doesn't know the transfer is complete. So it will continue to request packets from the device.
Packet 4: [0 bytes] short packet (ZLP)
When the host gets the short packet, it knows the transfer is complete, and won't continue asking for more packets for the transfer.
Determining Transfer Size
You may be wondering how to determine the "expected" amount of bytes since BULK transfers do not have a length like CTRL transfers do. This is determined entirely by the higher-level protocol that specifies how to do transfers on the BULK pipes. The host and device both follow this protocol and thus they are in sync about how much data to transfer at any given time.
This protocol is typically specified by a class specification, like the mass-storage class protocol, or it could be some very simple protocol of your own design.
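To make that concrete, here is a minimal sketch of the device-side decision in C. The usb_ep_write()/usb_ep_write_zlp() helpers are hypothetical, and host_expected stands for whatever transfer size the higher-level protocol has agreed on; it is not something the device can query from the bus:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical stack helpers: queue one data packet / queue a zero-length packet. */
extern void usb_ep_write(const uint8_t *buf, size_t len);
extern void usb_ep_write_zlp(void);

/* Sketch: send 'len' bytes as one bulk IN transfer.  'host_expected' stands
 * for the transfer size the higher-level protocol has agreed on. */
static void send_bulk_in(const uint8_t *data, size_t len,
                         size_t mps, size_t host_expected)
{
    size_t sent = 0;
    while (sent < len) {
        size_t chunk = (len - sent > mps) ? mps : (len - sent);
        usb_ep_write(data + sent, chunk);   /* queue one packet */
        sent += chunk;
    }
    /* A short final packet already terminated the transfer.  Only when the
     * data was an exact multiple of the max packet size AND the host
     * expects more does the device need a ZLP to say "that's all". */
    if (len % mps == 0 && len < host_expected)
        usb_ep_write_zlp();
}

In the 128-byte example with a 64-byte max packet size: if the protocol says the host reads exactly 128 bytes, the two full packets complete the transfer on their own; if the host posted a larger read (say a 4096-byte buffer to WinUsb_ReadPipe), the device has to follow them with a ZLP.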
Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet [ZLP]
A ZLP has to be sent when the length of the payload data is exactly an integer multiple of wMaxPacketSize.
The USB spec defines that if the last packet of a bulk transfer has
the exact size of the endpoint max packet size, the whole transfer
must be terminated by a zero length urb.
If apps don't sent this in such a situation libusb times out and the
initial urb is never sent resulting in a broken application.
All Kernel drivers use the URB_ZERO_PACKET to comply to the spec
correctly.
source: http://libusb.org/ticket/6
In the case where the data length is exactly an integer multiple of wMaxPacketSize, the first ending condition (packet size < wMaxPacketSize) does not apply, because in this case the packet size equals wMaxPacketSize.
So to signal that the last packet has been sent you need a ZLP; otherwise the other side would expect more data.
There are several other situations in which ZLPs are sent; see e.g. the USB in a NutShell website.
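On the Linux host side, the URB_ZERO_PACKET flag mentioned in the ticket is what a kernel driver sets on a bulk OUT URB whose length is an exact multiple of the endpoint's max packet size. A minimal sketch (every name passed in here is a placeholder):

#include <linux/usb.h>

/* Sketch: queue a bulk OUT URB and ask the host controller driver to append
 * a ZLP when the buffer length is an exact multiple of wMaxPacketSize. */
static int queue_bulk_out(struct usb_device *udev, struct urb *urb,
                          unsigned int ep_addr, void *buf, int len,
                          usb_complete_t complete_fn, void *ctx,
                          int wMaxPacketSize)
{
    usb_fill_bulk_urb(urb, udev, usb_sndbulkpipe(udev, ep_addr),
                      buf, len, complete_fn, ctx);
    if ((len % wMaxPacketSize) == 0)
        urb->transfer_flags |= URB_ZERO_PACKET;   /* HCD appends the ZLP */
    return usb_submit_urb(urb, GFP_KERNEL);
}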
Related
In the context of a DFU driver, I'm trying to respond to a USB control IN transfer with a packet of length zero (not a ZLP terminating a multiple-of-max-packet-size transfer, just zero bytes of data). However, the host returns with a timeout condition. I tried both the dfu-util tool with the corresponding protocol and a minimal working example with pyusb that just issues a control IN transfer of some length while the device returns no data.
My key question is: Do I achieve this by responding with a NAK or should I set the endpoint valid but without any data? The specs are rather vague about this, imo.
Here are some technical details since I'm not sure where the problem is:
Host: Linux Kernel 5.16.10, dfu-util and pyusb (presumably) both using libusb 0.1.12
Device: STM32L1 with ChibiOS 21.11.1 USB stack (it sends NAK in the above situation; I also tried to modify it to send a zero-length packet, without success)
It sounds like you are programming the firmware of a device, and you want your device to give a response that is 0 bytes long when the host starts a control read transfer.
You can't simply keep responding with NAK: that is what the device does when the data isn't ready yet, and it causes the host to try again later to read the data.
Instead, you must actually send a 0-length IN packet to the host. When the host receives this packet, it sees that the packet is shorter than the maximum packet size, so it knows the data phase of the control transfer is done, and it moves on to the status stage.
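For reference, here is a sketch of what that can look like in firmware. It assumes ChibiOS's usbSetupTransfer() accepts a zero length to arm an empty data stage (worth verifying against your stack version), and the request matching is just a placeholder:

#include "hal.h"

#define DFU_UPLOAD 2U   /* bRequest value from the DFU 1.1 spec, used as the example */

/* Sketch of a requests hook that answers a control IN request with zero
 * bytes of data.  The request matching is a placeholder; the point is that
 * the data stage must be armed (even with length 0), not NAKed. */
static bool my_requests_hook(USBDriver *usbp) {
  if (((usbp->setup[0] & USB_RTYPE_DIR_MASK) == USB_RTYPE_DIR_DEV2HOST) &&
      (usbp->setup[1] == DFU_UPLOAD)) {
    /* Arm an empty IN data stage: the stack sends a zero-length data packet
       and then completes the status stage on its own. */
    usbSetupTransfer(usbp, NULL, 0, NULL);
    return true;
  }
  return false;   /* let the default handler deal with everything else */
}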
I use Embarcadero RAD Studio & Indy and I have a problem.
I am using the IdUDPServer component and trying to send a file via UDP messages.
The file is successfully transferred if it is not large.
However, the error message “Package Size Too Big” appears if the file is large.
My code:
FileAPP->ReadBuffer(Buffer, Size);  // read the whole file into Buffer
IdUDPServer1->SendBuffer("10.6.1.255", 34004, RawToBytes(&Buffer, Size));  // send it as a single datagram
I see two ways to solve this problem:
1) Compose small messages and send these messages in a loop.
This is an easy way to solve the problem, however I will need to fix the remote application.
This is currently difficult. I do not have much time.
2) I want to find a condition in the source code where this error message is generated.
Maybe I can fix the condition and it will work for my specific task.
Please help me if you know the solution.
"Package Size Too Big" means you tried to send a datagram that is too large for UDP to handle. UDP has an effective max limit of 65507 bytes of payload data per datagram. You can't send a large file in a single datagram, it just won't fit.
So yes, you will have to break up your file into smaller chunks that are sent in separate datagrams. And because UDP is connectionless and does not guarantee delivery, you will also have to solve the issues of lost packets and out-of-order packets. You will have to put a rolling sequence number inside your packets so that the receiver can reassemble them in the proper order, and you will have to implement a re-transmission mechanism to re-send packets that the receiver did not receive.
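A minimal sketch of what that sender side can look like, using plain BSD sockets rather than Indy for brevity (the 4-byte sequence header and the 1400-byte chunk size are arbitrary choices; acknowledgements and re-transmission are still left to you):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

#define CHUNK 1400  /* payload per datagram, kept under the Ethernet MTU */

/* Sketch: send 'file' in CHUNK-sized datagrams, each prefixed with a
 * big-endian sequence number so the receiver can put them back in order. */
static int send_file_chunked(int sock, const struct sockaddr_in *dest, FILE *file)
{
    unsigned char pkt[4 + CHUNK];
    uint32_t seq = 0;
    size_t n;
    while ((n = fread(pkt + 4, 1, CHUNK, file)) > 0) {
        uint32_t be_seq = htonl(seq++);
        memcpy(pkt, &be_seq, 4);
        if (sendto(sock, pkt, 4 + n, 0,
                   (const struct sockaddr *)dest, sizeof(*dest)) < 0)
            return -1;
    }
    return 0;
}

The receiver takes the first 4 bytes of each datagram as the sequence number and slots the payload into place; a gap in the sequence tells it which chunks to ask for again.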
I would suggest not using TIdUDPClient/TIdUDPServer to implement file transfers manually, if possible. Indy has TIdTrivialFTP/TIdTrivialFTPServer components that implement the standardized UDP-based TFTP protocol, which handles these kind of details for you. By default, TFTP has a limit of ~32MB per file (512 bytes per datagram * 65535 datagrams max), but you can increase that limit using the TIdTrivialFTP.RequestedBlockSize property to request permission from a TFTP server to send more bytes per datagram. RequestedBlockSize is set to 1500 bytes by default, which can accommodate files up to ~93.7MB in size (though in practice it should not be set higher than 1468 bytes so TFTP datagrams won't exceed the MTU size of Ethernet packets, thus limiting the max file size closer to ~91.7MB). The RequestedBlockSize can be set as high as 65464 bytes per datagram to handle files up to ~3.9GB (at the risk of fragmenting UDP packets at the IP layer).
It is also possible for TFTP to handle unlimited file sizes with smaller datagram sizes, though TIdTrivialFTP/TIdTrivialFTPServer do not currently implement this, because this behavior is not standardized in the TFTP protocol; it is commonly implemented as an extension in third-party implementations.
Otherwise, you should consider using TCP instead of UDP for your file transfers. TCP doesn't suffer from these problems.
If I have a message definition which contains only repeated strings, I can find the length of the packed buffer via the get_packed_size call. However, if I am on the receiving side of the exchange, how do I know how many bytes to read to form a complete message? (Since there is a variable number of entries, it isn't known a priori.)
Sender:
length = <name>_get_packed_size(&message)
buffer = malloc(length)
<name>_pack(&message, buffer)
write(fd, buffer, length)
Receiver:
read(fd, buffer, ???) // what is '???' if 'fd' is a stream socket?
If I am in datagram mode, I can issue the read for something like 64K bytes and just get the entire message. However, if I am in stream mode, how do I do this without short-changing the message or reading part of the next message?
See this answer for a typical solution to this common problem: https://stackoverflow.com/a/5586945/618259
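The typical technique is length-prefix framing: the sender writes a fixed-size length header before each packed message, and the receiver reads the header first and then loops until exactly that many payload bytes have arrived (a stream socket is free to return them in pieces). A rough sketch in C; read_exactly() is a small helper written here, not part of protobuf-c:

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Read exactly 'len' bytes from a stream socket, looping over short reads. */
static int read_exactly(int fd, void *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, (char *)buf + got, len - got);
        if (n <= 0)
            return -1;                        /* error or peer closed */
        got += (size_t)n;
    }
    return 0;
}

/* Sender sketch: 4-byte big-endian length header, then the packed bytes. */
static int send_message(int fd, const uint8_t *buf, uint32_t length)
{
    uint32_t be_len = htonl(length);
    if (write(fd, &be_len, sizeof be_len) != (ssize_t)sizeof be_len) return -1;
    if (write(fd, buf, length) != (ssize_t)length) return -1;
    return 0;
}

/* Receiver sketch: read the header, then exactly that many payload bytes. */
static int32_t recv_message(int fd, uint8_t *buf, uint32_t max_len)
{
    uint32_t be_len;
    if (read_exactly(fd, &be_len, sizeof be_len) < 0) return -1;
    uint32_t len = ntohl(be_len);
    if (len > max_len) return -1;             /* message larger than buffer */
    if (read_exactly(fd, buf, len) < 0) return -1;
    return (int32_t)len;                      /* now call <name>__unpack() on buf */
}

The same framing works over pipes and serial links, and it keeps each unpack call scoped to exactly one message.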
I'm processing MIDI on the iPad and everything is working fine: I can log everything that comes in and all works as expected. However, when trying to receive long messages (i.e. SysEx), I only get one packet with a maximum of 256 bytes and nothing afterwards.
Using the code provided by Apple:
MIDIPacket *packet = &packetList->packet[0];
for (int i = 0; i < packetList->numPackets; ++i) {
    // ... process packet->data / packet->length here ...
    packet = MIDIPacketNext(packet);
}
packetList->numPackets is always 1. After I get that first message, no other callback methods are called until a 'new' SysEx message is sent. I don't think my MIDI processing method would be called with the full packetList (which could potentially be any size); I would have thought I would receive the data as a stream. Is this correct?
After digging around the only thing I could find was this: http://lists.apple.com/archives/coreaudio-api/2010/May/msg00189.html, which mentions the exact same thing but was not much help. I understand I probably need to implement buffering, but I can't even see anything past the first 256 bytes so I'm not sure where to even start with it.
My gut feeling here is that the system is either cramming the entire sysex message into one packet, or breaking it up into multiple packets. According to the CoreMidi documentation, the data field of the MIDIPacket structure has some interesting properties:
A variable-length stream of MIDI messages. Running status is not allowed. In the case of system-exclusive messages, a packet may only contain a single message, or portion of one, with no other MIDI events.
The MIDI messages in the packet must always be complete, except for system-exclusive.
(This is declared to be 256 bytes in length so clients don't have to create custom data structures in simple situations.)
So basically, you should look at the declared length field of the MIDIPacket and see if it is larger than 256. According to the spec, 256 bytes is just the standard allocation, but that array can hold more if necessary. You might find that the entire message has been crammed into that array.
Otherwise, it seems that the system is breaking the sysex messages up into multiple packets. Since the spec says that running status is not allowed, then it would have to send multiple packets, each with a leading 0xF0 byte. You would then need to create your own internal buffer to store the contents of these messages, stripping away the status bytes or header as necessary, and appending the data to your buffer until you read a 0xF7 byte which denotes the end of the sequence.
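A rough sketch of that buffering, assuming the fragments arrive in order across successive packets and that continuation packets carry raw data bytes (no repeated status byte). sysexBuf and the handler call are your own code; only MIDIPacket's length and data fields come from CoreMIDI:

#include <CoreMIDI/CoreMIDI.h>
#include <stdbool.h>
#include <stddef.h>

// Sketch: accumulate SysEx bytes across packets until the 0xF7 terminator.
// sysexBuf/sysexLen/inSysex are your own state, kept between read callbacks.
static Byte   sysexBuf[65536];
static size_t sysexLen = 0;
static bool   inSysex  = false;

static void accumulateSysex(const MIDIPacket *packet)
{
    for (UInt16 i = 0; i < packet->length; i++) {
        Byte b = packet->data[i];
        if (b == 0xF0) {                     // start of a SysEx message
            inSysex = true;
            sysexLen = 0;
        }
        if (inSysex && sysexLen < sizeof sysexBuf)
            sysexBuf[sysexLen++] = b;
        if (b == 0xF7) {                     // end of the SysEx message
            inSysex = false;
            // handleCompleteSysex(sysexBuf, sysexLen);   // your handler
        }
    }
}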
I had a similar issue on iOS. You are right, the MIDI packet count is always 1.
In my case, when receiving multiple MIDI events with the same timestamp (MIDI events received at the same time), iOS does not split those events into multiple packets as you might expect.
But fortunately nothing is lost! Instead of receiving multiple packets, each with its own byte count, you receive a single packet containing multiple events, with the byte count increased accordingly.
So here what you have to do is:
In your MIDI IN callback, parse all packets received (always 1 on iOS); then, for each packet, check the packet's length as well as the MIDI status, and loop through the packet to retrieve all of the MIDI events it contains.
For instance, if the packet contains 9 bytes and the MIDI status is a Note On (a 3-byte message), the packet holds more than a single Note On: parse the first Note On (bytes 0 to 2), then check the next MIDI status starting at byte 3, and so on.
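A compact sketch of that inner loop, assuming for simplicity that every event in the packet is a 3-byte channel voice message; a real parser must also handle 1- and 2-byte messages and SysEx:

#include <CoreMIDI/CoreMIDI.h>

// Sketch: walk one MIDIPacket that may contain several complete events.
// Assumes every event is a 3-byte channel voice message (e.g. Note On).
static void parsePacket(const MIDIPacket *packet)
{
    UInt16 i = 0;
    while (i + 3 <= packet->length) {
        Byte status = packet->data[i];
        Byte data1  = packet->data[i + 1];
        Byte data2  = packet->data[i + 2];
        // handleEvent(status, data1, data2);   // your handler
        (void)status; (void)data1; (void)data2;
        i += 3;
    }
}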
Hope this helps ...
Jerome
There is a good reference of how to walk through a MIDI packet in this file of a GitHub project : https://github.com/krevis/MIDIApps/blob/master/Frameworks/SnoizeMIDI/SMMessageParser.m
(Not mine, but it helped me solve the problems that got me to this thread)
I have a problem - I don't know the amount of data being sent to my UDP server.
The current code is this - testing in irb:
require 'socket'
sock = UDPSocket.new
sock.bind('0.0.0.0', 41588)
sock.read # Returns nothing
sock.recvfrom(1024) # Requires length of data to be read - I don't know this
I could set recvfrom to 65535 or some other large number but this seems like an unnecessary hack.
recvfrom and recvfrom_nonblock both throw away anything after that length specified.
Am I setting the socket up incorrectly?
Note that UDP is a datagram protocol, not a stream protocol like TCP. Each read from a UDP socket dequeues one full datagram. You might pass these flags to recvfrom(2):
MSG_PEEK
This flag causes the receive operation to return
data from the beginning of the receive queue without
removing that data from the queue. Thus, a subsequent
receive call will return the same data.
MSG_WAITALL
This flag requests that the operation block until the
full request is satisfied. However, the call may still
return less data than requested if a signal is caught,
an error or disconnect occurs, or the next data to be
received is of a different type than that returned.
MSG_TRUNC
Return the real length of the packet, even when it was
longer than the passed buffer. Only valid for packet sockets.
If you really don't know how large a packet you might get (the protocol limit is 65507 bytes) and don't care about doubling the number of system calls, do the MSG_PEEK first, then read the exact number of bytes from the socket.
Or you can set an approximate max buffer size, say 4096, then use MSG_TRUNC to check if you lost any data.
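The suggestion maps onto recvfrom(2) roughly like this in C (MSG_TRUNC reporting the full datagram length is Linux-specific; in Ruby you would do the same dance with Socket::MSG_PEEK and the integer flag values mentioned below):

#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: peek at the next datagram to learn its real size, then allocate an
 * exact-sized buffer and dequeue it. */
static ssize_t recv_whole_datagram(int sock, char **out)
{
    char probe;
    ssize_t len = recv(sock, &probe, 1, MSG_PEEK | MSG_TRUNC);
    if (len < 0)
        return -1;
    char *buf = malloc((size_t)len ? (size_t)len : 1);
    if (buf == NULL)
        return -1;
    len = recv(sock, buf, (size_t)len, 0);   /* dequeues the datagram */
    *out = buf;
    return len;
}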
Also note that UDP datagrams are rarely larger than 1472 bytes - the Ethernet payload size of 1500 minus 20 bytes of IPv4 header minus 8 bytes of UDP header - because nobody likes fragmentation.
Edit:
Socket::MSG_PEEK is there; for the others you can use the integer values:
MSG_TRUNC 0x20
MSG_WAITALL 0x100
Look into your system headers (/usr/include/bits/socket.h on Linux) to be sure.
Looking at the documentation for Ruby's recvfrom(), the argument is a maximum length. Just provide 65535 (max length of a UDP datagram); the returned data should be the sent datagram of whatever size it happens to be, and you should be able to determine the size of it the way you would for any stringlike thing in Ruby.