Using TCPSocket, I have to socket.puts "foobar" and then socket.close before the socket on the other side can socket.read the message.
Is there a way to send or receive a message through a socket without closing it, meaning I can send another message without creating a new socket?
P.S. Something like a WebSocket.
If you traverse up the superclass chain you will eventually see that you inherit from IO. Most IO objects in Ruby buffer their data to make writing to and reading from disk more efficient.
In your case, the buffer wasn't large enough (or enough time didn't pass) for it to flush out. However, when you closed the socket, this forced the buffer to flush.
You have a few options:
You can manually force the buffer to flush using IO#flush.
You can set the buffer to always sync after a write/read by setting IO#sync= to true. You can check the status of your IO object's syncing using IO#sync; I'm guessing you'd see socket.sync #=> false
Use BasicSocket#send which will call POSIX send(2); since sockets are initialized with O_FSYNC set, the send will be synchronous and atomic.
It should not be necessary to close the connection in order for the other party to read it. send should transfer the data over the connection immediately. Make sure the other party is reading from the socket.
We are using WSAEventSelect to bind a socket to an event. From MSDN:
The FD_CLOSE network event is recorded when a close indication is received for the virtual circuit corresponding to the socket. In TCP terms, this means that the FD_CLOSE is recorded when the connection goes into the TIME WAIT or CLOSE WAIT states. This results from the remote end performing a shutdown on the send side or a closesocket.
FD_CLOSE being posted after all data is read from a socket. An application should check for remaining data upon receipt of FD_CLOSE to avoid any possibility of losing data. For more information, see the section on Graceful Shutdown, Linger Options, and Socket Closure and the shutdown function.
It seems the first highlighted sentence means that FD_CLOSE will only be posted after all data has been read from the socket, while the second sentence requires the application to check whether there is still data in the socket when it receives FD_CLOSE.
Isn't that a contradiction? How should it be understood?
Unfortunately there is a lot of speculation and very little official word. My understanding is the following:
FD_CLOSE being posted after all data is read from a socket.
Edit: My original response here appears to be false. I believe this statement to be referring to a specific type of socket closure, but there doesn't seem to be agreement on exactly what. It is expected that this should be true, but experience shows that it will not always be.
An application should check for remaining data upon receipt of FD_CLOSE to avoid any possibility of losing data.
There may be data still available at the point your application code receives the FD_CLOSE event. In fact, reading around indicates that new data may become available at the socket after you have received the FD_CLOSE. You should check for this data in order to avoid losing it. I've seen some people implement recv loops until the recv call fails (indicating the socket is actually closed) or even restart the event loop waiting for more FD_READs. I think in the general case you can simply attempt a recv with a sufficiently large buffer and assume nothing more will arrive.
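For illustration, a minimal sketch of that drain-on-FD_CLOSE approach might look like the following (Winsock; the socket is assumed to be non-blocking, and process_data is a hypothetical placeholder for whatever the application does with the bytes):

#include <winsock2.h>

void process_data(const char* data, int len);    // hypothetical application hook

void on_fd_close(SOCKET sock)
{
    char buf[4096];
    for (;;)
    {
        int n = recv(sock, buf, sizeof(buf), 0);
        if (n > 0)
        {
            process_data(buf, n);                // consume the remaining bytes
        }
        else if (n == 0)
        {
            break;                               // peer finished sending; nothing lost
        }
        else if (WSAGetLastError() == WSAEWOULDBLOCK)
        {
            // Nothing buffered right now. As noted above, more data may still
            // arrive after FD_CLOSE, so some applications keep waiting for
            // further FD_READ events here instead of closing immediately.
            break;
        }
        else
        {
            break;                               // hard error: treat the socket as dead
        }
    }
    closesocket(sock);
}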
This is my case: I have a server listening for connections, and a client that I'm programming now. The client has nothing to receive from the server, but it has to send a status update every 3 minutes.
I have the following at the moment:
#include <winsock2.h>

WSADATA ws;
WSAStartup(0x101, &ws);                          // requests Winsock 1.1
SOCKET sock = socket(AF_INET, SOCK_STREAM, 0);
SOCKADDR_IN sa = {0};
sa.sin_family = AF_INET;
sa.sin_port = htons(PORT_NET);                   // PORT_NET defined elsewhere
sa.sin_addr.s_addr = inet_addr("127.0.0.1");
connect(sock, (SOCKADDR*)&sa, sizeof(sa));       // return values unchecked for brevity
char buffer[128] = {0};
send(sock, buffer, sizeof(buffer), 0);           // flags is an int, so 0 rather than NULL
What should my approach be? Can I avoid looping on recv?
That's rather dependent on what behaviour you want and your program structure.
By default a socket will block on any read or write operation, which means that if you try to have your server's main thread poll the connection, you're going to end up with it 'freezing' for 3 minutes or until the client closes the connection.
The absolute simplest functional solution (no multithreading) is to set the socket to non-blocking and poll it in the main thread. It sounds like you want to avoid doing that, though.
The most obvious way around that is to make a dedicated thread for every connection, plus the main listener socket. Your server listens for incoming connections and spawns a thread for each stream socket it creates. Each connection thread then blocks on its socket until it receives data, and either handles it itself or shunts it onto a shared queue.
That's a bulky and complex solution: multiple threads which need opening and closing, and shared resources which need protecting.
Another option is to set the socket to non-blocking (under Win32 use setsockopt to set a timeout, under *nix pass it the O_NONBLOCK flag). That way it will return control if there's no data available to read. However, that means you need to poll the socket at reasonable intervals ("reasonable" being entirely up to you and how quickly you need the server to act on new data).
Personally, for the lightweight use you're describing, I'd use a combination of the above: a single dedicated thread which polls a socket (or an array of non-blocking sockets) every few seconds, sleeping in between, and simply pushes the data onto a queue for the main thread to act upon during its main loop.
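A rough sketch of that combined approach might look like this (Winsock; push_to_queue is a hypothetical, thread-safe application hook, and the 2-second interval is arbitrary):

#include <winsock2.h>
#include <windows.h>

void push_to_queue(const char* data, int len);  // hypothetical: hands bytes to the main loop

DWORD WINAPI poll_thread(LPVOID param)
{
    SOCKET sock = (SOCKET)param;

    u_long nonblocking = 1;
    ioctlsocket(sock, FIONBIO, &nonblocking);   // make recv return instead of blocking

    char buf[512];
    for (;;)
    {
        int n = recv(sock, buf, sizeof(buf), 0);
        if (n > 0)
            push_to_queue(buf, n);              // let the main thread act on it later
        else if (n == 0)
            break;                              // peer closed the connection
        else if (WSAGetLastError() != WSAEWOULDBLOCK)
            break;                              // real error

        Sleep(2000);                            // "reasonable interval" between polls
    }
    return 0;
}

The thread could then be started with something like CreateThread(NULL, 0, poll_thread, (LPVOID)sock, 0, NULL), and the queue is whatever protected structure your main loop already drains.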
There are a lot of ways to get into a mess with asynchronous programs, so it's probably best to keep it simple and get it working, until you're comfortable with the control flow.
I have created a client/server program. The client starts an instance of a Writer class and the server starts an instance of a Reader class. The Writer then writes DATA_SIZE bytes of data asynchronously to the Reader every USLEEP milliseconds.
Every successive async_write request by the Writer is issued only once the "on write" handler from the previous request has been called.
The problem is, if the Writer (client) is writing more data into the socket than the Reader (server) is capable of receiving, this seems to be the behaviour:
The Writer starts writing into (I think) a system buffer, and even though the data has not yet been received by the Reader, it keeps calling the "on write" handler without an error.
When the buffer is full, boost::asio won't fire the "on write" handler anymore, until the buffer gets smaller.
In the meantime, the Reader is still receiving small chunks of data.
The fact that the Reader keeps receiving bytes after I close the Writer program seems to confirm this theory.
What I need to achieve is to prevent this buffering, because the data needs to be "real time" (as much as possible).
I'm guessing I need to use some combination of the socket options that asio offers, like no_delay or send_buffer_size, but I'm just guessing here, as I haven't had success experimenting with these.
I think that the first solution one can think of is to use UDP instead of TCP. This will indeed be the case, as I'll need to switch to UDP for other reasons in the near future anyway, but I would first like to find out how to do it with TCP, just for the sake of having it straight in my head in case I have a similar problem some other day in the future.
NOTE1: Before I started experimenting with asynchronous operations in the asio library, I had implemented this same scenario using threads, locks and asio::sockets, and I did not experience such buffering at that time. I had to switch to the asynchronous API because asio does not seem to allow timed interruptions of synchronous calls.
NOTE2: Here is a working example that demonstrates the problem: http://pastie.org/3122025
EDIT: I've done one more test. In NOTE1 I mentioned that when I was using asio::iosockets I did not experience this buffering, so I wanted to be sure and created this test: http://pastie.org/3125452 It turns out that the buffering is there even with asio::iosockets, so there must have been something else that made it go smoothly, possibly a lower FPS.
TCP/IP is definitely geared towards maximizing throughput, as the intention of most network applications is to transfer data between hosts. In such scenarios it is expected that a transfer of N bytes will take T seconds, and clearly it doesn't matter if the receiver is a little slow to process the data. In fact, as you noticed, the TCP/IP protocol implements a sliding window which allows the sender to buffer some data so that it is always ready to be sent, but leaves the ultimate throttling control up to the receiver. The receiver can go full speed, pace itself or even pause the transmission.
If you don't need throughput and instead want to guarantee that the data your sender is transmitting is as close to real time as possible, then you need to make sure the sender doesn't write the next packet until it receives an acknowledgement from the receiver that the previous data packet has been processed. So instead of blindly sending packet after packet until you are blocked, define a message structure for control messages to be sent from the receiver back to the sender.
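For example, a minimal sketch of that send-then-wait-for-acknowledgement pattern with boost::asio could look roughly like this (assuming an already-connected tcp::socket, a one-byte ACK from the receiver, and a placeholder on_ready_for_next supplied by the application):

#include <boost/asio.hpp>
#include <array>
#include <cstddef>
#include <vector>

using boost::asio::ip::tcp;

// Hypothetical writer that only sends the next packet after a 1-byte ACK.
class AckedWriter
{
public:
    explicit AckedWriter(tcp::socket& socket) : socket_(socket) {}

    void send(std::vector<char> packet)
    {
        packet_ = std::move(packet);
        boost::asio::async_write(socket_, boost::asio::buffer(packet_),
            [this](boost::system::error_code ec, std::size_t)
            {
                if (!ec)
                    wait_for_ack();
            });
    }

private:
    void wait_for_ack()
    {
        boost::asio::async_read(socket_, boost::asio::buffer(ack_),
            [this](boost::system::error_code ec, std::size_t)
            {
                if (!ec)
                    on_ready_for_next();
            });
    }

    void on_ready_for_next()
    {
        // Placeholder: build the next packet and call send() again.
    }

    tcp::socket& socket_;
    std::vector<char> packet_;
    std::array<char, 1> ack_;
};

On the Reader side, after it has actually processed a packet it writes the single ACK byte back before posting its next read, which is what paces the Writer.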
Obviously, with this approach your trade-off is that each sent packet is closer to the sender's real time, but you are limiting how much data you can transfer, while slightly increasing the total bandwidth used by your protocol (i.e. the additional control messages). Also keep in mind that "close to real time" is relative, because you will still face delays in the network as well as the receiver's ability to process data. So you might also take a look at the design constraints of your specific application to determine how "close" you really need to be.
If you need to be very close, but at the same time you don't care if packets are lost because old packet data is superseded by new data, then UDP/IP might be a better alternative. However, a) if you have reliable delivery requirements, you might end up reinventing a portion of TCP/IP's wheel, b) keep in mind that certain networks (corporate firewalls) tend to block UDP/IP while allowing TCP/IP traffic, and c) even UDP/IP won't be exactly real time.
I have a problem with a socket library that uses WSAAsyncSelect to put the socket into asynchronous mode. In asynchronous mode the socket is placed into non-blocking mode (WSAEWOULDBLOCK is returned on any operation that would block) and Windows messages are posted to a notification window to inform the application when the socket is ready to be read from, written to, etc.
My problem is this: when receiving an FD_READ event I don't know how many bytes to try to recv. If I pass a buffer that's too small, then Winsock will automatically post another FD_READ event telling me there's more data to read. If data is arriving very fast, this can saturate the message queue with FD_READ messages, and since WM_TIMER and WM_PAINT messages are only posted when the message queue is empty, an application could stop painting if it's receiving a lot of data and using asynchronous sockets with too small a buffer.
How large should I make the buffer, then? I tried using ioctlsocket(FIONREAD) to get the number of bytes to read and making a buffer exactly that large, but KB192599 explicitly warns that that approach is fraught with inefficiency.
How do I pick a buffer size that's big enough, but not crazy big?
As far as I could ever work out, the value set using setsockopt with the SO_RCVBUF option is an upper bound on the FIONREAD value. So rather than call ioctlsocket, it should be OK to call getsockopt to find out the SO_RCVBUF setting, and use that as the (attempted) value for each recv.
Based on your comment to Aviad P.'s answer, it sounds like this would solve your problem.
(Disclaimer: I have always used FIONREAD myself. But after reading the linked-to KB article I will probably be changing...)
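For example, something along these lines (Winsock; error handling omitted, and the 64 KB fallback is arbitrary):

#include <winsock2.h>
#include <vector>

// Size the receive buffer from the socket's own SO_RCVBUF setting.
std::vector<char> make_recv_buffer(SOCKET sock)
{
    int rcvbuf = 0;
    int len = sizeof(rcvbuf);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char*)&rcvbuf, &len);
    if (rcvbuf <= 0)
        rcvbuf = 64 * 1024;                     // arbitrary fallback if the query fails
    return std::vector<char>(rcvbuf);
}

// Then, on each FD_READ:
//     int n = recv(sock, buffer.data(), (int)buffer.size(), 0);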
You can set your buffer to be as big as you can without impacting performance, relying on the TCP PUSH flag to make your reads return before filling the buffer if the sender sent a smaller message.
The TCP PUSH flag is set at a logical message boundary (normally after a send operation, unless explicitly set to false). When the receiving end sees the PUSH flag on a TCP packet, it returns any blocking reads (or asynchronous reads, doesn't matter) with whatever's accumulated in the receive buffer up to the PUSH point.
So if your sender is sending reasonably sized messages, you're OK; if it isn't, then you limit your buffer size such that even if you read into all of it, you don't negatively impact performance (subjective).
Non-forking (aka single-threaded or select()-based) web servers like lighttpd or nginx are gaining more and more popularity.
While there is a multitude of documents explaining forking servers (at various levels of detail), documentation for non-forking servers is sparse.
I am looking for a bird's-eye view of how a non-forking web server works. (Pseudo-)code or a state machine diagram, stripped down to the bare minimum, would be great.
I am aware of the following resources and found them helpful: The World of SELECT(), the thttpd source code, and Lighttpd's internal states.
However, I am interested in the principles, not implementation details.
Specifically:
Why is this type of server sometimes called non-blocking, when select() essentially blocks?
Processing of a request can take some time. What happens with new requests during this time when there is no specific listener thread or process? Is the request processing somehow interrupted or time sliced?
Edit: As I understand it, while a request is being processed (e.g. a file is read or a CGI script runs), the server cannot accept new connections. Wouldn't this mean that such a server could miss a lot of new connections if a CGI script runs for, let's say, 2 seconds or so?
Basic pseudocode:
setup
while true
    select/poll/kqueue
    with fd needing action do
        read/write fd
        if fd was read and well formed request in buffer
            service request
    other stuff
Though select() & friends block, socket I/O is not blocking. You're only blocked until you have something fun to do.
Processing an individual request normally involves reading a file descriptor from a file (static resource) or process (dynamic resource) and then writing to the socket. This can be done handily without keeping much state.
So service request above typically means opening a file, adding it to the list for select, and noting that stuff read from there goes out to a certain socket. Substitute FastCGI for file when appropriate.
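To make the pseudocode slightly more concrete, here is a heavily stripped-down sketch of such a loop using select() on POSIX sockets (accept and read only, no real request parsing and no write multiplexing):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>
#include <vector>

// One listening socket and many client sockets, all multiplexed in one thread.
void run_server(int listen_fd)
{
    std::vector<int> clients;

    for (;;)
    {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;
        for (int fd : clients)
        {
            FD_SET(fd, &readfds);
            maxfd = std::max(maxfd, fd);
        }

        // The thread blocks here, not on any individual socket.
        select(maxfd + 1, &readfds, nullptr, nullptr, nullptr);

        if (FD_ISSET(listen_fd, &readfds))
            clients.push_back(accept(listen_fd, nullptr, nullptr));   // new connection

        for (auto it = clients.begin(); it != clients.end(); )
        {
            if (FD_ISSET(*it, &readfds))
            {
                char buf[4096];
                ssize_t n = read(*it, buf, sizeof(buf));
                if (n <= 0)                      // closed or error: drop the client
                {
                    close(*it);
                    it = clients.erase(it);
                    continue;
                }
                // "service request": parse buf, queue or write the response, etc.
            }
            ++it;
        }
    }
}

A real server would also watch for writability, keep per-connection state, and set the client sockets non-blocking, but the shape of the loop stays the same.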
EDIT:
Not sure about the others, but nginx has 2 processes: a master and a worker. The master does the listening and then feeds the accepted connection to the worker for processing.
select() PLUS nonblocking I/O essentially allows you to manage/respond to multiple connections as they come in a single thread (multiplexing), versus having multiple threads/processes handle one socket each. The goal is to minimize the ratio of server footprint to number of connections.
It is efficient because it takes a very high number of active socket connections to saturate that single thread (since we can do non-blocking I/O on multiple file descriptors).
The rationale is that it takes very little time to acknowledge bytes are available, interpret them, then decide on the appropriate bytes to put on the output stream. The actual I/O work is handled without blocking this server thread.
This type of server is always waiting for a connection, by blocking on select(). Once it gets one, it handles the connection, then revisits the select() in an infinite loop. In the simplest case, this server thread does NOT block any other time besides when it is setting up the I/O.
If there is a second connection that comes in, it will be handled the next time the server gets to select(). At this point, the first connection could still be receiving, and we can start sending to the second connection, from the very same server thread. This is the goal.
Search for "multiplexing network sockets" for additional resources.
Or try Unix Network Programming by Stevens, Fenner, Rudoff