I am trying to create an app that opens a listening TCP socket, reads all bytes written to the socket, and then does something with those bytes. The server does not know the length of the data that will be written to the socket, and there is no terminating character in the data that is sent. What I would like to do is simply read all the available bytes, delay a short while, try to read more bytes, and so on. Is there an easy way to do this? I would like to do something like read 100 bytes with a timeout of 1 second; if the read times out, take all the data that has arrived so far and then start reading again.
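A minimal sketch of that read-with-timeout loop, written here in Ruby (the question doesn't name a language), using IO.select for the 1-second timeout; process() is just a placeholder for whatever you do with the bytes, and the port number is made up:

require 'socket'

server = TCPServer.new(12345)          # hypothetical listening port
client = server.accept

data = ''.b                            # binary buffer for everything received so far
loop do
  # Wait up to 1 second for more bytes to become readable.
  if IO.select([client], nil, nil, 1)
    begin
      data << client.readpartial(100)  # grab whatever is available, up to 100 bytes
    rescue EOFError
      break                            # peer closed the connection
    end
  else
    # Timed out: nothing new for 1 second, so hand off what we have and keep listening.
    process(data) unless data.empty?   # process() is a placeholder for your handling
    data = ''.b
  end
end
process(data) unless data.empty?       # whatever was left when the peer disconnected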
Related
Short question, didn't seem to find anything useful here or on Google: in the Winsock2 API, is it possible to put data back into the socket's internal buffer after you have retrieved it using recv(), for example, so that it seems it was never actually read from the buffer?
No, it is not possible to inject data back into the socket's internal buffer. Either use the MSG_PEEK flag to read data without removing it from the socket's buffer, or else read the socket data into your own buffer, and then do whatever you want with your buffer. You could have your reading I/O logic always look for data in your buffer first, and then read more data from the socket only when your buffer does not have enough data to satisfy the read operation. Any data you inject back into your buffer will be seen by subsequent read operations.
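The question is about the Winsock2 C API, but the keep-your-own-buffer pattern is language-independent; here is a rough sketch of the idea in Ruby (the class and method names are made up for illustration):

require 'socket'

class BufferedReader
  def initialize(sock)
    @sock = sock
    @buf  = ''.b                 # our own staging buffer in front of the socket
  end

  # "Put back" data so the next read sees it again.
  def unread(data)
    @buf.prepend(data)
  end

  def read(n)
    # Refill from the socket only when our buffer cannot satisfy the request.
    while @buf.bytesize < n
      chunk = @sock.recv(4096)
      break if chunk.nil? || chunk.empty?   # peer closed the connection
      @buf << chunk
    end
    @buf.slice!(0, n)            # may be shorter than n if the peer closed early
  end
end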
You can use the MSG_PEEK flag in your recv() call.
I'm writing a TFTP server in Ruby and I don't understand a couple of things.
First, I read through the entire RFC and I understand the TFTP part of the packet (2-byte opcode, etc.), but I don't know where the TIDs go. Also, I've never done anything in Ruby at the byte level, so I don't know how to build a packet that's 2 bytes of this, then 1 byte of that, and so on.
If someone could show me an example of how to construct a read request packet in Ruby, that'd be sweet. Say I'm on the client side and I pick port 20000 (my local TID), and I want to read the file named /Users/pachun/documents/hello.txt on the server, whose TID is 69 right now because it's the first request. How would I construct that packet in Ruby?
Check out this project:
https://github.com/spiceworks/net-tftp
The code there should answer your questions about how to construct the byte sequences used by the TFTP protocol.
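As a rough illustration (not taken from that project; the server hostname below is made up), a read request (RRQ) is just a 2-byte opcode of 1, the filename, a zero byte, the mode string, and another zero byte. In Ruby you can build that with Array#pack and send it from your chosen local port (20000 in the question) to the server's well-known port 69; the server replies from a freshly chosen port, which becomes its TID for the rest of the transfer:

require 'socket'

filename = '/Users/pachun/documents/hello.txt'
mode     = 'octet'

# 'n' = 2-byte big-endian opcode, 'a*' = string as-is, 'x' = a null byte.
rrq = [1, filename, mode].pack('na*xa*x')

sock = UDPSocket.new
sock.bind('0.0.0.0', 20000)                    # 20000 becomes your local TID
sock.send(rrq, 0, 'tftp.example.com', 69)      # hypothetical server host; 69 is only used for this first packet

reply, addr = sock.recvfrom(516)               # DATA packets are at most 4 + 512 bytes
server_tid = addr[1]                           # send ACKs and later packets to this port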
Hi all,
I am writing a serial port communication program. How do I achieve the following?
Knowing the number of bytes available for reading.
Flushing the buffers.
Note: I am creating the file with the overlapped (asynchronous I/O) option.
thanks in advance
~ Johnnie
You are trying to query the number of bytes available first, and then read them. The standard way would be to just allocate a buffer (say 1000 chars), then call ReadComm(), which tells you how many bytes were actually read (i.e. less than or equal to 1000).
You can flush the serial I/O buffers using FlushFileBuffers() (http://msdn.microsoft.com/en-us/library/aa364439%28VS.85%29.aspx), but since you want asynchronous I/O, you probably only want to do that when you have written to a file and then want to move the file (certainly not on every call to WriteComm()).
More info:
http://msdn.microsoft.com/en-us/library/ms810467.aspx
I have a problem - I don't know the amount of data being sent to my UDP server.
The current code is this - testing in irb:
require 'socket'
sock = UDPSocket.new
sock.bind('0.0.0.0',41588)
sock.read # Returns nothing
sock.recvfrom(1024) # Requires length of data to be read - I don't know this
I could set recvfrom to 65535 or some other large number but this seems like an unnecessary hack.
recvfrom and recvfrom_nonblock both throw away anything beyond the specified length.
Am I setting the socket up incorrectly?
Note that UDP is a datagram protocol, not a stream like TCP. Each read from a UDP socket dequeues one full datagram. You might pass these flags to recvfrom(2):
MSG_PEEK
This flag causes the receive operation to return data from the beginning of the receive queue without removing that data from the queue. Thus, a subsequent receive call will return the same data.
MSG_WAITALL
This flag requests that the operation block until the full request is satisfied. However, the call may still return less data than requested if a signal is caught, an error or disconnect occurs, or the next data to be received is of a different type than that returned.
MSG_TRUNC
Return the real length of the packet, even when it was longer than the passed buffer. Only valid for packet sockets.
If you really don't know how large a packet you might get (the protocol limit is 65507 bytes, see here) and don't care about doubling the number of system calls, do the MSG_PEEK first, then read the exact number of bytes from the socket.
Or you can set an approximate max buffer size, say 4096, then use MSG_TRUNC to check if you lost any data.
Also note that UDP datagrams are rarely larger than 1472 bytes - the Ethernet data size of 1500 minus 20 bytes of IPv4 header minus 8 bytes of UDP header - nobody likes fragmentation.
Edit:
Socket::MSG_PEEK is there; for the others you can use the integer values:
MSG_TRUNC 0x20
MSG_WAITALL 0x100
Look into your system headers (/usr/include/bits/socket.h on Linux) to be sure.
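For example, the peek-then-read approach could look roughly like this in Ruby, continuing from the irb session in the question:

# Peek at the waiting datagram without dequeuing it, using a worst-case buffer size...
peeked, _ = sock.recvfrom(65535, Socket::MSG_PEEK)
# ...then dequeue it, now that we know exactly how big it is.
data, addr = sock.recvfrom(peeked.bytesize)

Note that peeking with a full-size buffer costs about as much as just reading with one, so this mainly helps if you want to know the size before committing to an allocation or a parse.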
Looking at the documentation for Ruby's recvfrom(), the argument is a maximum length. Just provide 65535 (the maximum length of a UDP datagram); the returned data will be the sent datagram at whatever size it happens to be, and you can determine its size the way you would for any string-like thing in Ruby.
A Win32 application (the "server") is sending a continuous stream of data over a named pipe. GetNamedPipeInfo() tells me that input and output buffer sizes are automatically allocated as needed. The pipe is operating in byte mode (although it is sending data units that are bigger than 1 byte (doubles, to be precise)).
Now, my question is this: Can I somehow verify that my application (the "client") is not missing any data when reading from the pipe? I know that those read/write operations are buffered, but I suppose the buffers will not grow indefinitely if the client doesn't fetch the data quickly enough. How do I know if I missed something? Does the server (or the pipe?) silently discard data that is not read in time by the client?
BTW, can I rely on proper alignment of the data the client reads using ReadFile()? As far as I understand, ReadFile() may return with fewer bytes read than specified, i.e. NumberOfBytesRead <= NumberOfBytesToRead. Do I have to check every time that NumberOfBytesRead is a multiple of sizeof(double)?
The write operation will block if there is no more room in the pipe's buffers. This is from my (old) copy of the SDK manual:
When an application uses the WriteFile function to write to a pipe, the write operation may not finish if the pipe buffer is full. The write operation is completed when a read operation (using the ReadFile function) makes more buffer space available.
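Regarding the alignment part of the question: reads from a byte-mode pipe can indeed stop at any byte boundary, so the client has to accumulate bytes and only consume whole doubles. A rough sketch of that buffering, in Ruby for consistency with the rest of this digest (the pipe name is made up):

pipe = File.open('\\\\.\\pipe\\mydata', 'rb')   # client end of the byte-mode pipe (hypothetical name)
buf  = ''.b
loop do
  begin
    buf << pipe.readpartial(4096)               # may return any number of bytes
  rescue EOFError
    break                                       # the writer closed the pipe
  end
  whole = buf.bytesize - (buf.bytesize % 8)     # bytes forming complete 8-byte doubles
  next if whole.zero?
  values = buf.slice!(0, whole).unpack('d*')    # native-format doubles
  # ... process values ...
end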
Sorry, didn't find out how to comment on your post, Neil.
The write operation will block if there is no more room in the pipe's buffers.
I just discovered that Sysinternals' FileMon can also monitor pipe operations. For testing purposes I connected the client to the named pipe and did no read operations, just waiting. The server writes a few hundred kB to the pipe every 4 to 5 seconds, even though nobody is fetching the data from the pipe on the client side. No blocking write operation ... And so far no limit on buffer size seems to have been reached.
This is either a very big buffer ... or the server is doing some magic in addition to just calling WriteFile() and waiting for the client to read.