How to pause an IOCP TCP socket server? - Windows

My program is similar to an HTTP proxy: it waits for messages on one interface and forwards them to another. The application uses only IOCP, on both the client and server sides. Sometimes the client is slower than the server (by a factor of 10 or 100), and the application cannot buffer that much data.
How can I suspend an established TCP connection and then resume it without losing any messages? I tried to delay posting a new recv IOCP operation, but some messages are lost.
C++/Windows 7+

I tried to delay posting a new recv IOCP operation
That should in fact do the trick. Your server-side TCP connection's receive buffer will then fill up to the receive buffer size set on your socket, at which point the socket pushes back on the sending side of the connection, which - by the standard means of TCP flow control - simply stops sending more packets until the receiver signals that it has processed more data.
How long the sending side is willing to wait before disconnecting (its timeout) is then up to the sender.
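In IOCP terms, "delaying the post" simply means not issuing another WSARecv until the slow side has drained. A minimal sketch of that pattern follows, assuming a per-connection in-flight byte counter; the Connection layout, the thresholds, and forward_to_client are illustrative assumptions, not code from the question:

```cpp
#include <winsock2.h>
#include <windows.h>
#include <atomic>

struct Connection {
    SOCKET sock;
    WSAOVERLAPPED recv_ov;
    char buf[4096];
    std::atomic<size_t> bytes_in_flight{0};          // bytes queued toward the slow client
    static constexpr size_t HIGH_WATER = 256 * 1024; // pause threshold (arbitrary)
    bool recv_paused = false;
};

void forward_to_client(Connection* c, const char* data, DWORD len); // your existing send path

// Post (or re-post) the overlapped receive. Called at accept time and
// again whenever the connection is allowed to resume.
bool post_recv(Connection* c) {
    WSABUF wb{ static_cast<ULONG>(sizeof(c->buf)), c->buf };
    DWORD flags = 0;
    ZeroMemory(&c->recv_ov, sizeof(c->recv_ov));
    int rc = WSARecv(c->sock, &wb, 1, nullptr, &flags, &c->recv_ov, nullptr);
    return rc == 0 || WSAGetLastError() == WSA_IO_PENDING;
}

// Receive completion: forward the data, then either re-post or pause.
// While no WSARecv is outstanding, the kernel receive buffer fills up and
// TCP flow control throttles the fast sender for us - no data is dropped.
void on_recv_complete(Connection* c, DWORD bytes) {
    c->bytes_in_flight += bytes;
    forward_to_client(c, c->buf, bytes);
    if (c->bytes_in_flight < Connection::HIGH_WATER)
        post_recv(c);
    else
        c->recv_paused = true;
}

// Send-to-client completion: resume reading once enough data has drained.
void on_client_send_complete(Connection* c, DWORD bytes) {
    c->bytes_in_flight -= bytes;
    if (c->recv_paused && c->bytes_in_flight < Connection::HIGH_WATER / 2) {
        c->recv_paused = false;
        post_recv(c);
    }
}
```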
"some messages are lost"
That can only happen with TCP if you get disconnected, which both the sender and the receiver will notice. So data isn't simply lost. Whether the sending application can know how much of the data the receiving application (your proxy, in this case) successfully processed depends on the network protocol you run on top of TCP.

Related

What is the correct way to send large data over a TCP network

I was reading this post, and it said there could be an issue with deadlocks if you send too much data without receiving. Is it bad to send a whole file in a single send call? If so, what is the correct way of doing it?
I have tried sending large files using single send calls, waiting until I receive them on the other end. Sometimes the connection hangs. Could this be a deadlock, or improper use?
TL;DR: there are no problems doing large sends with TCP itself, but there are problems in the specific example you cite.
As for TCP in general:
Using a large send is not a problem. The network layer of your OS will take care of everything. All you need to do is make sure the data actually gets handed to the OS, i.e. check the result of send and retry with everything not already covered by the previous send. send will simply block your application if it currently cannot send (write buffer full). But this requires that the server actually receive the data. If it doesn't, the server-side read buffer will fill up, which causes the TCP window to shrink and ultimately the send to stall until the server reads the previously sent data.
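A sketch of that check-and-retry loop, using plain blocking POSIX-style sockets (send here is the ordinary BSD call):

```cpp
#include <sys/socket.h>
#include <sys/types.h>
#include <cerrno>
#include <cstddef>

// Hand every byte to the OS, retrying with the remainder after short writes.
// Returns true once the whole buffer is accepted, false on a socket error.
bool send_all(int sock, const char* data, size_t len) {
    size_t done = 0;
    while (done < len) {
        ssize_t n = send(sock, data + done, len - done, 0);
        if (n < 0) {
            if (errno == EINTR) continue; // interrupted by a signal: retry
            return false;                 // real error
        }
        done += static_cast<size_t>(n);   // short write: loop with the rest
    }
    return true;
}
```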
As for your specific linked example:
In your specific linked example there is an application protocol on top of TCP which changes the semantics. It is no longer plain TCP, where the client could send without receiving; it actually requires the client to receive data as well. To cite the relevant part:
The server sends one byte for every 3 bytes received.
Thus, if you send a large amount of data, the server will send a matching amount of data back, one third the size of what you sent. This data emitted by the server is put into the read buffer of your client socket. If you don't recv, this read buffer fills up. That causes the client network stack to advertise a TCP window of 0 to the server, and the server stops sending data.
If the TCP window is 0, the server cannot send any more data on this socket, which means the server will be stuck in send. If the server cannot handle recv and send on the same socket in parallel, it will be stuck in send and never call recv again, which will fill up the server-side read buffer and ultimately drive the TCP window for data from client to server to 0 as well.
At that point both client and server are stuck in send: neither is receiving the data sent by the other, so the TCP window stays 0 in both directions - deadlock.
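One way the client in the cited example could avoid the deadlock is to keep draining its read buffer while it sends, so the server's replies never back up into a zero window. A sketch, assuming a non-blocking POSIX socket; the function name and buffer size are illustrative:

```cpp
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <cerrno>
#include <cstddef>

// Send a large buffer while also consuming whatever the server sends back,
// so neither side's window ever drops to 0. A real client would parse the
// received bytes instead of discarding them.
bool send_while_draining(int sock, const char* data, size_t len) {
    size_t off = 0;
    char sink[4096];
    while (off < len) {
        pollfd pfd{ sock, POLLIN | POLLOUT, 0 };
        if (poll(&pfd, 1, -1) < 0) return false;
        if (pfd.revents & POLLIN) {
            ssize_t r = recv(sock, sink, sizeof(sink), 0);
            if (r <= 0) return false;     // peer closed or error
        }
        if (pfd.revents & POLLOUT) {
            ssize_t n = send(sock, data + off, len - off, 0);
            if (n < 0 && errno != EWOULDBLOCK && errno != EAGAIN) return false;
            if (n > 0) off += static_cast<size_t>(n);
        }
    }
    return true;
}
```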

Are TCP Packets Received when Requested

I have a server process running that listens on a port. I can establish the connection with this port, but when I try to send data, the client reports that data has been sent, while the server never receives it.
I am using Wireshark to trace the data, and I can't find the data packet I sent, which means it was never received. So here's my question. Does this mean that:
The packet has never reached the network adapter on the server side?
or,
The server process never called the receiving API (recv() or equivalent)?
In other words, are TCP packets transmitted only when the receiving side calls the receiving API, or are they transmitted automatically whenever they are sent, and the receiving API only reads the buffered data?

How to drop inactive/disconnected peers in ZMQ

I have a client/server setup in which clients send a single request message to the server and get a bunch of data messages back.
The server is implemented using a ROUTER socket and the clients using a DEALER. The communication is asynchronous.
The clients are typically iPads/iPhones and they connect over wifi so the connection is not 100% reliable.
The issue I'm concerned about is the case where a client connects to the server and sends a request for data, but the communication goes down (e.g. out of wifi coverage) before the response messages are delivered back.
In this case the messages will be queued up on the server side waiting for the client to reconnect. That is fine for a short time but eventually I would like to drop the messages and the connection to release resources.
By checking activity/timeouts, both the server and the client applications could detect that the connection is gone. The client can shut down its socket and thereby free resources, but how can this be done on the server?
Per the ZMQ FAQ:
How can I flush all messages that are in the ZeroMQ socket queue?
There is no explicit command for flushing a specific message or all messages from the message queue. You may set ZMQ_LINGER to 0 and close the socket to discard any unsent messages.
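Using the libzmq C API, that advice looks roughly like this (a minimal sketch; sock stands for whatever ZMQ socket handle you hold):

```cpp
#include <zmq.h>

// A LINGER of 0 makes zmq_close() discard any messages still queued
// for peers that will never come back, releasing the resources.
void close_discarding_queue(void* sock) {
    int linger = 0;
    zmq_setsockopt(sock, ZMQ_LINGER, &linger, sizeof(linger));
    zmq_close(sock);
}
```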
Per this mailing list discussion from 2013:
There is no option to drop old messages [from an outgoing message queue].
Your best bet is to implement heartbeating and, when a client stops responding without explicitly disconnecting, restart your ROUTER socket. Messy, I know; this is really something that should have a companion option to HWM. Pieter Hintjens (the creator of ZMQ) is clearly on board - but that was in 2011, so it looks like nothing ever came of it.
This is a bit late, but setting TCP keepalive to a reasonable value will cause dead sockets to close after the timeouts have expired.
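With libzmq 3.2 or later, the keepalive knobs can be set per socket via zmq_setsockopt; a sketch, with the timing values purely illustrative:

```cpp
#include <zmq.h>

// Turn on TCP keepalive so the OS detects dead peers and closes the
// underlying connections. Values: probe after 60 s idle, every 10 s,
// give up after 3 failed probes.
void enable_tcp_keepalive(void* sock) {
    int on = 1, idle = 60, intvl = 10, cnt = 3;
    zmq_setsockopt(sock, ZMQ_TCP_KEEPALIVE, &on, sizeof(on));
    zmq_setsockopt(sock, ZMQ_TCP_KEEPALIVE_IDLE, &idle, sizeof(idle));
    zmq_setsockopt(sock, ZMQ_TCP_KEEPALIVE_INTVL, &intvl, sizeof(intvl));
    zmq_setsockopt(sock, ZMQ_TCP_KEEPALIVE_CNT, &cnt, sizeof(cnt));
}
```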
Heartbeating is necessary for either side to determine the other side is still responding.
The only thing I'm not sure about is how to heartbeat many thousands of clients without spending all available CPU just on dealing with the heartbeats.

How can I know whether a message sent over a websocket succeeded or not

I developed a chat server using websockets in Cowboy, but I want to know whether a message the server sends to a client actually arrives. How can I know?
WebSocket is a rather thin abstraction layer on top of a conventional TCP socket; after the initial handshake the difference is minimal. So the question becomes: how do I know if a chunk of data was received by the remote peer? The short answer: only if the peer acknowledges it explicitly by means of an application-level protocol. The remote client will send TCP ACK packets for every data packet you send it, but this fact is well hidden from the application, for good reasons. Receiving an ACK packet only means that the remote TCP stack has dealt with the data; it says nothing about how (or whether) the client application has processed it.
Add a special "acknowledge receipt" message type to your chat protocol. Include a monotonically increasing sequence number in all of your outgoing messages, and include the SN of the last received message in each ACK message, so you know exactly how much data the client has already processed.
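A minimal sketch of such a scheme (all types and names here are invented for illustration; the framing and transport are whatever your websocket layer already provides):

```cpp
#include <cstdint>
#include <map>
#include <string>

enum class MsgType : uint8_t { Chat = 1, Ack = 2 };

struct Message {
    MsgType type;
    uint64_t seq;      // sequence number (Chat) / last processed SN (Ack)
    std::string body;
};

class AckTracker {
    uint64_t next_seq_ = 1;
    std::map<uint64_t, Message> unacked_;  // sent but not yet confirmed
public:
    // Stamp an outgoing chat message and remember it until acknowledged.
    Message stamp(std::string body) {
        Message m{MsgType::Chat, next_seq_++, std::move(body)};
        unacked_.emplace(m.seq, m);
        return m;
    }
    // An Ack confirms everything up to and including its SN: the client
    // application has processed that much, so stop tracking (or resending) it.
    void on_ack(const Message& ack) {
        unacked_.erase(unacked_.begin(), unacked_.upper_bound(ack.seq));
    }
    bool fully_acked() const { return unacked_.empty(); }
};
```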

Why might an EventMachine outbound data buffer stop sending and just fill up forever (while other connections can still send)

I have an EventMachine server sending TCP data down to a Mac client (via GCDAsyncSocket). It always works flawlessly for a while, but inevitably the server suddenly stops sending data on a connection-by-connection basis. The connection is still maintained, and the server still receives data from the client, but it doesn't go the other way.
When this happens, I've discovered via connection#get_outbound_data_size that the connection send buffer is filling up infinitely (via #send_data) and not being sent to the client.
Are there specific (and hopefully fixable) reasons why this might occur? The reactor keeps humming along, and other active connections to the server continue working fine (though they sometimes fall into buffer hell as well).
I see at least one reason: the remote client no longer reads data from its side of the TCP connection (with a recv() call or whatever).
The scenario is then: the receiving TCP buffer on the client side becomes full, and the OS can no longer accept TCP packets from its peer, since it has nowhere left to queue them. As a consequence, the sending TCP buffer on the server side becomes full too, as your application continues to write to the socket. Soon your server can no longer write to the socket, because the send() system call will either:
block indefinitely (waiting for the buffer to empty enough for the new data), or
return with an EWOULDBLOCK error (if you configured your socket as non-blocking).
I usually run into this kind of situation in a TEST environment, when I put a breakpoint in my code on the client side.
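For reference, this is roughly what a reactor's write path does underneath, and why the outbound buffer can grow without bound once the peer stops reading: on EWOULDBLOCK the bytes have nowhere to go but a userspace queue. A sketch assuming a non-blocking POSIX socket; all names are illustrative:

```cpp
#include <sys/socket.h>
#include <sys/types.h>
#include <algorithm>
#include <cerrno>
#include <deque>
#include <vector>

struct OutboundBuffer {
    std::deque<char> pending; // userspace queue (what get_outbound_data_size reports)
};

// Called whenever the event loop reports the socket writable.
// Returns false only on a fatal socket error.
bool flush(int sock, OutboundBuffer& out) {
    while (!out.pending.empty()) {
        size_t chunk_len = std::min<size_t>(out.pending.size(), 4096);
        std::vector<char> chunk(out.pending.begin(), out.pending.begin() + chunk_len);
        ssize_t n = send(sock, chunk.data(), chunk.size(), 0);
        if (n < 0) {
            if (errno == EWOULDBLOCK || errno == EAGAIN)
                return true;  // kernel buffer full: the peer isn't reading, try later
            return false;     // real error
        }
        out.pending.erase(out.pending.begin(), out.pending.begin() + n);
    }
    return true;
}
```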
There was a patch applied to GCDAsyncSocket on March 23 that prevents reads from stopping. Did this patch solve your problem?
