Is it possible to determine when a message has been entirely read when reading from a Named Pipe in Overlapped I/O? - winapi

It is very easy to determine whether a message has been entirely read when reading from a pipe with the read mode set to PIPE_READMODE_MESSAGE in synchronous I/O. (If the ReadFile function returns FALSE and GetLastError() returns ERROR_MORE_DATA, the message is incomplete and subsequent reads are necessary to retrieve the full message.)
Now, if the Named Pipe operates in Overlapped I/O instead and a read operation is pending (ReadFile function returns FALSE and GetLastError() returns ERROR_IO_PENDING), how do I know if I retrieved the full message when the operation completes? All I can determine is the number of bytes that were actually transferred by calling the GetOverlappedResult function, but it does not tell me whether or not the full message has been read…
Am I missing something here?

I think the easiest way is to know how long the message you are expecting is. Your protocol may give you that information.
For example, the protocol could always deliver a WORD in the first 2 bytes that tells you the length of the complete message.
So I use overlapped I/O with ReadFile to get the 2 bytes containing that WORD. Once I have received them, I read the rest with ReadFile using the now-known message length, and so I get all the data.
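A minimal sketch of that approach, assuming a byte-mode pipe whose handle (hPipe, hypothetical) was opened with FILE_FLAG_OVERLAPPED; error handling is trimmed for brevity:

#include <windows.h>

// Issue one overlapped read and wait for it to finish. A real program would
// wait on ov.hEvent in its main loop instead of blocking here.
static BOOL DoOverlappedRead(HANDLE hPipe, void *buf, DWORD toRead, DWORD *got)
{
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    BOOL ok = ReadFile(hPipe, buf, toRead, NULL, &ov);
    if (!ok && GetLastError() != ERROR_IO_PENDING) {
        CloseHandle(ov.hEvent);
        return FALSE;
    }
    ok = GetOverlappedResult(hPipe, &ov, got, TRUE);
    CloseHandle(ov.hEvent);
    return ok;
}

// Read a message framed as a 2-byte WORD length prefix followed by that many bytes.
BOOL ReadLengthPrefixedMessage(HANDLE hPipe, BYTE *buffer, WORD bufferSize, WORD *messageLength)
{
    WORD length = 0;
    DWORD got = 0;

    // 1) Fetch the 2-byte header (a fully robust version would also loop here
    //    in case the two header bytes arrive separately).
    if (!DoOverlappedRead(hPipe, &length, sizeof(length), &got) || got != sizeof(length))
        return FALSE;
    if (length > bufferSize)
        return FALSE;                  // caller's buffer is too small for this message

    // 2) Fetch the body, accumulating partial reads until the known length is reached.
    DWORD total = 0;
    while (total < length) {
        if (!DoOverlappedRead(hPipe, buffer + total, length - total, &got) || got == 0)
            return FALSE;
        total += got;
    }
    *messageLength = length;
    return TRUE;
}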

Related

writing partial data with libwebsockets

I'm using libwebsockets v2.4.
The doc seems unclear to me about what I have to do with the return value of the lws_write() function.
If it returns -1, it's an error and I'm invited to close the connection. That's fine with me.
But when it returns a value that is strictly less than the buffer length I passed, should I consider that I have to write the remaining bytes later (in another WRITABLE callback occurrence)? Is it even possible for this situation to occur?
Also, should I call lws_send_pipe_choked() before using lws_write(), considering that I always use lws_write() in the context of a WRITABLE callback?
My understanding is that lws_write() always returns the requested buffer length unless an error occurs.
If you look at lws_issue_raw() (from which the result is returned by lws_write()) in output.c (https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L157), you can see that if the length written by lws_ssl_capable_write() is less than the provided length, then lws allocates a buffer on wsi->trunc_alloc to hold the remaining bytes, so that they can be sent in the future.
Concerning your second question, I think it is safe to call lws_write() in the context of a WRITABLE callback without checking whether the pipe is choked. However, if you happen to loop on lws_write() inside the callback, lws_send_pipe_choked() must be called to protect the subsequent calls to lws_write(). If you don't, you might stumble upon this assertion https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L83 and the user code will crash.
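A rough sketch of that looping pattern inside the WRITABLE callback; the my_frame/my_session queue is a hypothetical application-side structure, while lws_write(), lws_send_pipe_choked() and lws_callback_on_writable() come from the library:

#include <libwebsockets.h>

/* Hypothetical per-connection state: a queue of ready-to-send text frames,
 * each already laid out with LWS_PRE bytes of headroom before the payload. */
struct my_frame { unsigned char data[LWS_PRE + 512]; size_t len; };
struct my_session { struct my_frame frames[8]; int head, count; };

static int callback_my_protocol(struct lws *wsi, enum lws_callback_reasons reason,
                                void *user, void *in, size_t len)
{
    struct my_session *s = (struct my_session *)user;

    switch (reason) {
    case LWS_CALLBACK_SERVER_WRITEABLE:
        while (s->count > 0) {
            struct my_frame *f = &s->frames[s->head];
            int n = lws_write(wsi, f->data + LWS_PRE, f->len, LWS_WRITE_TEXT);
            if (n < 0)
                return -1;                     /* error: close the connection */
            s->head = (s->head + 1) % 8;
            s->count--;

            /* Before writing another frame in the same callback, check whether
             * the connection is choked; if so, ask to be called back later. */
            if (s->count > 0 && lws_send_pipe_choked(wsi)) {
                lws_callback_on_writable(wsi);
                break;
            }
        }
        break;
    default:
        break;
    }
    return 0;
}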

Put data back in socket buffer

Short question, didn't seem to find anything useful here or on Google: in the Winsock2 API, is it possible to put data back into the socket's internal buffer after you have retrieved it using recv(), for example, so that it seems it was never actually read from the buffer?
No, it is not possible to inject data back into the socket's internal buffer. Either use the MSG_PEEK flag to read data without removing it from the socket's buffer, or else read the socket data into your own buffer, and then do whatever you want with your buffer. You could have your reading I/O logic always look for data in your buffer first, and then read more data from the socket only when your buffer does not have enough data to satisfy the read operation. Any data you inject back into your buffer will be seen by subsequent read operations.
You can use the MSG_PEEK flag in your recv() call.
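A minimal sketch of the MSG_PEEK approach on an already-connected SOCKET (hypothetical names, error handling trimmed):

#include <winsock2.h>

/* Peek at incoming data without removing it from the socket's internal
 * buffer; a later recv() without MSG_PEEK returns the same bytes again.
 * Returns the number of bytes consumed, or a non-positive value otherwise. */
int peek_then_consume(SOCKET sock)
{
    char buf[512];
    int peeked = recv(sock, buf, sizeof(buf), MSG_PEEK);
    if (peeked <= 0)
        return peeked;                 /* 0 = connection closed, SOCKET_ERROR = error */

    /* Inspect buf[0..peeked-1] here, e.g. to decide whether a complete
     * message has arrived. Only consume the bytes once you are ready: */
    return recv(sock, buf, peeked, 0);
}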

MIDIPacketList, numPackets is always 1

I'm processing MIDI on the iPad and everything is working fine: I can log everything that comes in and all works as expected. However, when trying to receive long messages (i.e. SysEx), I only get one packet with a maximum of 256 bytes and nothing afterwards.
Using the code provided by Apple:
MIDIPacket *packet = &packetList->packet[0];
for (int i = 0; i < packetList->numPackets; ++i) {   // visit every packet in the list
    // ... process packet->data[0 .. packet->length - 1] here ...
    packet = MIDIPacketNext(packet);
}
packetList->numPackets is always 1. After I get that first message, no other callback methods are called until a 'new' SysEx message is sent. I don't think my MIDI processing method would be called with the full packetList (which could potentially be any size); I would have thought I would receive the data as a stream. Is this correct?
After digging around, the only thing I could find was this: http://lists.apple.com/archives/coreaudio-api/2010/May/msg00189.html, which mentions exactly the same thing but was not much help. I understand I probably need to implement buffering, but I can't even see anything past the first 256 bytes, so I'm not sure where to even start.
My gut feeling here is that the system is either cramming the entire SysEx message into one packet or breaking it up into multiple packets. According to the CoreMIDI documentation, the data field of the MIDIPacket structure has some interesting properties:
A variable-length stream of MIDI messages. Running status is not allowed. In the case of system-exclusive messages, a packet may only contain a single message, or portion of one, with no other MIDI events.
The MIDI messages in the packet must always be complete, except for system-exclusive.
(This is declared to be 256 bytes in length so clients don't have to create custom data structures in simple situations.)
So basically, you should look at the declared length field of the MIDIPacket and see if it is larger than 256. According to the spec, 256 bytes is just the standard allocation, but that array can hold more if necessary. You might find that the entire message has been crammed into that array.
Otherwise, it seems that the system is breaking the SysEx messages up into multiple packets. Since the spec says that running status is not allowed, it would have to send multiple packets, each with a leading 0xF0 byte. You would then need to create your own internal buffer to store the contents of these messages, stripping away the status bytes or header as necessary, and appending the data to your buffer until you read a 0xF7 byte, which denotes the end of the sequence.
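A rough sketch of that reassembly idea inside a MIDIReadProc, assuming a single input source; the sysexBuffer state and handleCompleteSysex() are hypothetical:

#include <CoreMIDI/CoreMIDI.h>
#include <stdbool.h>

// Hypothetical reassembly state for one input source.
static Byte   sysexBuffer[64 * 1024];
static size_t sysexLength = 0;
static bool   inSysex = false;

static void handleCompleteSysex(const Byte *data, size_t length) { /* hypothetical consumer */ }

static void midiReadProc(const MIDIPacketList *packetList, void *readProcRefCon, void *srcConnRefCon)
{
    const MIDIPacket *packet = &packetList->packet[0];
    for (UInt32 i = 0; i < packetList->numPackets; ++i) {
        for (UInt16 j = 0; j < packet->length; ++j) {
            Byte b = packet->data[j];
            if (b == 0xF0) {                      // start of a SysEx message
                inSysex = true;
                sysexLength = 0;
            }
            if (inSysex && sysexLength < sizeof(sysexBuffer))
                sysexBuffer[sysexLength++] = b;   // accumulate across packets/callbacks
            if (b == 0xF7 && inSysex) {           // end of the SysEx message
                inSysex = false;
                handleCompleteSysex(sysexBuffer, sysexLength);
            }
        }
        packet = MIDIPacketNext(packet);
    }
}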
I had a similar issue on iOS. You are right, the number of MIDI packets is always 1.
In my case, when receiving multiple MIDI events with the same timestamp (MIDI events received at the same time), iOS does not split those events into multiple packets, as one might expect.
But fortunately, nothing is lost! Instead of receiving multiple packets with their correct number of bytes, you will receive a single packet with multiple events in it, and the number of bytes will be increased accordingly.
So here is what you have to do:
In your MIDI IN callback, parse all packets received (always 1 on iOS); then for each packet you must check the length of the packet as well as the MIDI status, and loop through that packet to retrieve all the MIDI events it contains.
For instance, if the packet contains 9 bytes and the MIDI status is a Note On (a 3-byte message), that means your packet contains more than a single Note On; you must then parse the first Note On (bytes 0 to 2), then check the following MIDI status from byte 3, and so on.
Hope this helps ...
Jerome
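That per-packet walk might look roughly like the following sketch, which only handles 3-byte channel messages such as Note On; handleEvent() is a hypothetical callback:

#include <CoreMIDI/CoreMIDI.h>

static void handleEvent(Byte status, Byte data1, Byte data2) { /* hypothetical consumer */ }

// Walk a single MIDIPacket that may contain several complete channel
// messages sharing one timestamp (e.g. three Note On events in 9 bytes).
static void parsePacket(const MIDIPacket *packet)
{
    UInt16 i = 0;
    while (i + 2 < packet->length) {
        Byte status = packet->data[i];
        if (status < 0x80)
            break;                      // not a status byte; a fuller parser would handle this
        // Messages in the 0x80-0xBF and 0xE0-0xEF ranges carry two data bytes;
        // a complete parser would also handle 1- and 2-byte messages and SysEx.
        handleEvent(status, packet->data[i + 1], packet->data[i + 2]);
        i += 3;
    }
}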
There is a good reference on how to walk through a MIDI packet in this file from a GitHub project: https://github.com/krevis/MIDIApps/blob/master/Frameworks/SnoizeMIDI/SMMessageParser.m
(Not mine, but it helped me solve the problems that got me to this thread)

Handling streamed data via pipes

A Win32 application (the "server") is sending a continuous stream of data over a named pipe. GetNamedPipeInfo() tells me that input and output buffer sizes are automatically allocated as needed. The pipe is operating in byte mode (although it is sending data units that are bigger than 1 byte (doubles, to be precise)).
Now, my question is this: Can I somehow verify that my application (the "client") is not missing any data when reading from the pipe? I know that those read/write operations are buffered, but I suppose the buffers will not grow indefinitely if the client doesn't fetch the data quickly enough. How do I know if I missed something? Does the server (or the pipe?) silently discard data that is not read in time by the client?
BTW, can I rely on proper alignment of the data the client reads using ReadFile()? As far as I understood, ReadFile() may return with less bytes read than specified, i.e. NumberOfBytesRead <= NumberOfBytesToRead. Do I have to check every time that NumberOfBytesRead is a multiple of sizeof(double)?
The write operation will block if there is no more room in the pipe's buffers. This is from my (old) copy of the SDK manual:
When an application uses the WriteFile function to write to a pipe, the write operation may not finish if the pipe buffer is full. The write operation is completed when a read operation (using the ReadFile function) makes more buffer space available.
Sorry, I didn't find out how to comment on your post, Neil.
The write operation will block if there is no more room in the pipe's buffers.
I just discovered that Sysinternals' FileMon can also monitor pipe operations. For testing purposes I connected the client to the named pipe and did no read operations, just waited. The server writes a few hundred kB to the pipe every 4-5 seconds, even though nobody is fetching the data from the pipe on the client side. No blocking write operation ... and so far no buffer-size limit seems to have been reached.
This is either a very big buffer ... or the server is doing something in addition to just using WriteFile() and waiting for the client to read.
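Regarding the alignment sub-question: since the pipe is in byte mode, ReadFile() may indeed return fewer bytes than requested, so a client typically accumulates bytes until complete sizeof(double) units are available. A minimal sketch, assuming a handle opened for synchronous I/O (hPipe and consumeDouble() are hypothetical):

#include <windows.h>
#include <string.h>

static void consumeDouble(double value) { (void)value; /* hypothetical consumer */ }

// Keep reading from a byte-mode pipe, passing complete doubles to the consumer
// and carrying any trailing partial bytes over to the next ReadFile() call.
void readDoubles(HANDLE hPipe)
{
    BYTE  buffer[4096];
    DWORD carried = 0;                         // bytes left over from the previous read

    for (;;) {
        DWORD got = 0;
        if (!ReadFile(hPipe, buffer + carried, sizeof(buffer) - carried, &got, NULL))
            break;                             // pipe closed or error

        DWORD available = carried + got;
        DWORD offset = 0;
        while (available - offset >= sizeof(double)) {
            double value;
            memcpy(&value, buffer + offset, sizeof(double));   // avoid unaligned access
            consumeDouble(value);
            offset += sizeof(double);
        }

        carried = available - offset;          // 0..7 leftover bytes
        memmove(buffer, buffer + offset, carried);
    }
}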

What does CancelIo() do with bytes that have already been read?

What happens if I ReadFile() 10 bytes (in overlapped mode without a timeout) but invoke CancelIo() after 5 bytes have been read? The documentation for CancelIo() says that it cancels any pending I/O, but what happens to the 5 bytes already read? Are they lost? Are they re-enqueued so the next time I ReadFile() I'll get them again?
I'm looking for the specification to indicate one way or another. I don't want to rely on empirical evidence.
According to http://groups.google.ca/group/microsoft.public.win32.programmer.kernel/browse_thread/thread/4fded0ac7e4ecfb4?hl=en
It depends on how the driver writer implemented the device. The exact semantics of cancel on an operation are not defined to that level.
Either it doesn't matter because you are using overlapped I/O or you can just call SetFilePointer manually when you know you've cancelled I/O.
You don't have to rely on undocumented behavior if you just force the issue.
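One way to avoid guessing is to call GetOverlappedResult() after CancelIo() and look at both the error code and the reported byte count; whether partially transferred bytes show up there or are re-enqueued is, as the quote above says, up to the driver. A minimal sketch (hFile is a hypothetical handle opened with FILE_FLAG_OVERLAPPED):

#include <windows.h>

// Start an overlapped read, cancel it, and inspect what actually completed.
void cancelAndInspect(HANDLE hFile)
{
    BYTE buffer[10];
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    if (!ReadFile(hFile, buffer, sizeof(buffer), NULL, &ov) &&
        GetLastError() == ERROR_IO_PENDING) {
        CancelIo(hFile);                       // cancel pending I/O issued by this thread on hFile

        DWORD transferred = 0;
        if (GetOverlappedResult(hFile, &ov, &transferred, TRUE)) {
            // The read completed before the cancellation took effect;
            // 'transferred' bytes are in 'buffer'.
        } else if (GetLastError() == ERROR_OPERATION_ABORTED) {
            // The read was cancelled; whether 'transferred' reflects bytes
            // already copied into 'buffer' is driver-dependent.
        }
    }
    CloseHandle(ov.hEvent);
}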
