What happens when the recv function in Winsock is called and not all the data has been received? - winapi

From what I've read about Winsock, recv has a tendency not to receive all the data from a sender in a single call. When people say this, do they mean, for example, that I send 300 bytes from a client, I call recv on the server, and it's possible that it only receives some 200 bytes on its first call, with the buffer filled with those 200 bytes? What happens to the last 100 bytes?
Also, let's say a buffer is too small, like 512 bytes or something, and the client sends 600. Will the first recv call fill the buffer to capacity and then just drop the last 88 bytes? If I call recv again with a different buffer, will the first 88 bytes of that buffer be the rest of the data?
And thirdly, if one and two are true, and I receive a whole packet of data in separate buffers, will I have to splice them together into one buffer to start parsing my data out of it?

I'm assuming TCP here.
Is it possible that it could only receive some 200 bytes on its first call, with the buffer filled with those 200 bytes?
Yes.
What happens to the last 100 bytes?
You'll receive them on a subsequent call.
Also, let's say a buffer is too small, like 512 bytes or something, and the client sends 600. Will the first recv call fill the buffer to capacity and then just drop the last 88 bytes?
No. The remaining 88 bytes stay queued in the socket's receive buffer.
If I call recv again with a different buffer, will the first 88 bytes be the rest of the data?
Yes.
And thirdly, if one and two are true, and I receive a whole packet of data in separate buffers, will I have to splice them together into one buffer to start parsing my data out of it?
That's up to you. Do you need the data in one contiguous buffer? If so, yes.
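The standard fix is a read loop that keeps calling recv until the expected byte count has arrived. A minimal sketch, written with POSIX-style recv; on Winsock the loop shape is identical (with SOCKET and int types), and recv_all is a hypothetical helper name, not a library function:

```c
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly len bytes from a connected stream socket.
   Returns 0 on success, -1 on error or if the peer closed early. */
int recv_all(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)          /* 0 = peer closed, <0 = error */
            return -1;
        got += (size_t)n;    /* partial read: loop for the rest */
    }
    return 0;
}
```

This works because TCP is a byte stream: whatever recv didn't hand you this time stays queued for the next call, so the loop simply drains the stream until it has the count it wants.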

Related

Synchronisation for audio decoders

There's the following setup (it's basically a pair of TWS earbuds and a smartphone):
2 audio sink devices (or buds), both connected to the same source device. One of these devices is primary (and is responsible for handling the connection), the other is secondary (and simply sniffs data).
The source device transmits a stream of encoded data and the sink devices need to decode and play it in sync with each other. The problem is that there's a considerable delay between the receivers (~5 ms @ 300 kbps, ~10 ms @ 600 kbps and @ 900 kbps).
It seems that the synchronisation mechanism which is already implemented simply doesn't work, so my only option seems to be implementing another one.
It's possible to send messages between the buds (but because this uses the same radio interface as the sink-to-source communication, only a small number of bytes at a relatively large interval can be transferred, i.e. 48 bytes per 300 ms, maybe a few times more, but probably not by much) and to control the decoder library.
I tried the following simple algorithm: every 50 milliseconds the secondary sends a message to the primary containing the number of decoded packets. The primary receives it and updates the state of its decoder accordingly. The decoder on the primary only decodes if the difference between the number of the frame it has already decoded and the one received from the peer is between 0 and 100 (every frame is 2.(6) ms), and the cycle continues.
This actually only makes things worse: now the latency is about 200 ms or even higher.
Is there something that could be done to improve my synchronisation method, or would I be better off using something else? If so, what would work best in such a case? Fixing the existing implementation would probably be the best way, but it seems to be closed-source, so I cannot modify it.

Boost asio async_write indicating fewer bytes returned than read from a serial port

I am using async_write to write a series of bytes to a serial port slave fd, and x bytes are reported as transferred by the async_write completion handler. In a second thread, I am running a tight loop doing a read() on the serial port master fd; the serial port is created via a pseudo-tty. If the number of bytes transferred by async_write exceeds 793, I see multiple reads occurring before the completion handler fires. The total number of bytes seen by the system read is always 25 bytes larger than the number of bytes passed to async_write. And an extra 25 bytes is added each time 793 bytes are read: if the buffer is 2000 bytes, the total bytes from read() will be 2050. In these extra-bytes situations, it appears that bytes from the previous write are still in the buffer returned to read().
Any help would be appreciated.

Buffer limitation of the Windows TCP stack, using Winsock in particular

While working on a Windows application which communicates through Winsock, I've encountered the following scenario:
Alice initiates a TCP session with Bob.
Bob accepts, and the session is established.
Bob sends loads of data (~1000 MB) sequentially.
Bob moves on to do other things.
Meanwhile, Alice slowly reads the data, N bytes at a time (where N is the size of Alice's buffer, which is allocated only once, as the data is written to a file between reads; this buffer is allocated by the application).
When debugging this, I found that Bob's send() never blocks, even when I paused Alice before her first read.
The question is: what guarantees that the entire data (~1000 MB) will be kept available for Alice to read?
Is there a known/configurable parameter that limits this buffer's length?
Alice has a socket receive buffer, and Bob has a socket send buffer. Both exist for the lifetime of the respective sockets. Data is removed from Bob's buffer when Alice's TCP acknowledges it, and from Alice's buffer when Alice reads it. Once both buffers fill, TCP flow control takes over: Bob's send() will block (or fail with WSAEWOULDBLOCK on a non-blocking socket) until Alice reads and frees space, so the whole 1000 MB is never buffered at once.
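Both buffer sizes are per-socket and adjustable through the SO_SNDBUF/SO_RCVBUF socket options. A minimal sketch of querying and enlarging the receive buffer (POSIX form shown; on Winsock the option value is passed as a char *, and show_and_grow_rcvbuf plus the 1 MiB request are illustrative assumptions, since the OS may round or cap the value):

```c
#include <stdio.h>
#include <sys/socket.h>

/* Query a socket's receive buffer size, then ask the OS to enlarge it.
   Returns 0 on success, -1 on failure. */
int show_and_grow_rcvbuf(int sock)
{
    int size = 0;
    socklen_t len = sizeof size;
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len) != 0)
        return -1;
    printf("current SO_RCVBUF: %d bytes\n", size);

    int wanted = 1 << 20;    /* request 1 MiB; the OS may round or cap it */
    return setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof wanted);
}
```

Reading the value back with getsockopt after the setsockopt call shows what the stack actually granted, which is the honest way to learn the real limit on your platform.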

Go: How to receive a whole UDP Datagram

My problem: the net.Read... methods copy only as many bytes as the size of the given byte array or slice. Of course I don't want to allocate the maximum UDP datagram size of 64 kB every time.
Is there a Go way to determine the size of the datagram (which is in the datagram header), or to read again until the datagram has been read completely?
Try ReadFromUDP:
func (c *UDPConn) ReadFromUDP(b []byte) (n int, addr *UDPAddr, err error)
ReadFromUDP reads a UDP packet from c, copying the payload into b. It returns the number of bytes copied into b and the return address that was on the packet.
The packet size should be available from n, which you can then use to define a custom slice (or other data structure) to store the datagrams in. This relies on the datagram size not changing during the session, which it really shouldn't.
Usually in a UDP protocol packet sizes are known in advance, and they're usually much smaller, on the order of 1.5 kB or less.
What you can do is preallocate a maximum-size static buffer for all reads, then, once you know the size of the datagram you've read from the socket, allocate a byte array with the actual size and copy the data into it. I don't think you can do extra reads of the same datagram.
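For comparison, at the C sockets level a too-small buffer really does truncate the datagram and discard the remainder; on Linux you can learn the true size before committing a buffer by peeking with MSG_PEEK | MSG_TRUNC. A sketch under those Linux-specific flag semantics (next_datagram_size is a hypothetical helper name):

```c
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Return the full length of the next queued datagram without consuming it.
   MSG_TRUNC (Linux-specific) makes recv() report the real datagram length
   even though our buffer is only 1 byte; MSG_PEEK leaves it queued. */
ssize_t next_datagram_size(int sock)
{
    char probe;    /* 1-byte buffer: we only want the length */
    return recv(sock, &probe, sizeof probe, MSG_PEEK | MSG_TRUNC);
}
```

After the peek, a normal recv with a buffer of exactly that size reads the whole datagram; this is the underlying behaviour Go's ReadFromUDP wraps when it reports n.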

Is WriteFile atomic?

I'm designing a system that will write time series data to a file. The data is blocks of 8 bytes divided into two 4-byte parts: time and payload.
According to MSDN the WriteFile function is atomic ( http://msdn.microsoft.com/en-us/library/aa365747(VS.85).aspx ), if the data written is less than a sector in size.
Since the file will only contain these blocks (there is no "structure" of the file so it's not possible to reconstruct a damaged file), added one after each other, it's vital that the whole block, or nothing is written to the file at all times.
So the question is: have I understood correctly that a WriteFile of less than a sector in size is always either written completely to disk or not written at all, no matter what happens during the actual call to WriteFile?
WriteFile is atomic as long as the write does not cross a sector boundary in the file. So if the sector size is 512 bytes, writing 20 bytes starting at file offset 0 will be atomic, but the same data written at file offset 500 will not be atomic. In your case the writes should be atomic, since the sector size should be a multiple of 8.
This MSDN blog has more information on how to do an atomic multi-sector write without using transacted NTFS.
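The boundary condition in the answer above is pure arithmetic, so it can be checked before issuing the write. A sketch (write_is_single_sector is a hypothetical helper, and real code should obtain the sector size from GetDiskFreeSpace rather than hard-coding 512):

```c
#include <stdint.h>

/* Returns 1 if a write of len bytes at file offset off stays within a
   single sector of the given size (so it can be atomic per the MSDN
   guarantee), 0 if it crosses a sector boundary. */
int write_is_single_sector(uint64_t off, uint32_t len, uint32_t sector)
{
    if (len == 0 || len > sector)
        return 0;
    /* first and last byte of the write must land in the same sector */
    return off / sector == (off + len - 1) / sector;
}
```

For the 8-byte records above, keeping every record 8-byte aligned is enough: since any realistic sector size is a multiple of 8, an aligned 8-byte write can never straddle a boundary.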
