Socket Connect calls in a loop - zeromq

Is it OK to call connect on a socket inside a loop?
Even if that is OK:
If I put the connect call before the loop, would it automatically reconnect on a recv or send call if the connection was lost?
Would connecting inside the loop slow it down noticeably?
What are the best practices here?
In my case I'm currently asking about a SUB client, but soon I will need to use some REP/REQ on another socket as well.
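
For reference, a minimal pyzmq sketch of the "connect once before the loop" approach for a SUB client (the endpoint is a placeholder); ZeroMQ re-establishes a dropped TCP connection behind the scenes, so recv simply blocks until messages flow again:

import zmq

context = zmq.Context()
sub = context.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")       # connect once, outside the loop
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to everything

while True:
    # If the underlying connection drops, ZeroMQ reconnects automatically;
    # this call just blocks until a message arrives again.
    message = sub.recv_string()
    print(message)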

Related

Get latency/ping of socket.io without creating a socket

I want to determine the ping of a socket.io server without actually creating a socket. Why? Because I have a bunch of socket.io servers, and I want the client to connect to the one with the lowest latency, and I don't want to manage all those sockets. Plus, I think creating a socket and closing it on every single server just for a simple ping does not make sense and would cause performance problems.
Ideally, the client would create a websocket just for each ping (I think I know how to do that). But on the server side, what is the best way to receive those websocket messages (made with ws://), since you don't typically make/receive direct ws requests with socket.io (as all of that is handled under the hood)?
Use the Ping class.
using System.Net.NetworkInformation;

// Send a single ICMP echo request and report the round-trip time.
Ping ping = new Ping();
PingReply reply = ping.Send("www.google.com");
Console.WriteLine(reply.RoundtripTime.ToString() + "ms");
As done here, too: Making a "ping" inside of my C# application

ZeroMQ REQ/REP server error handling

I am trying to use the ZeroMQ REQ/REP pattern and cannot figure out how to handle server-side errors. Look at the code from here:
import time
import zmq

port = "5556"  # placeholder port for this example
context = zmq.Context()
socket = context.socket(zmq.REP)

socket.bind("tcp://*:%s" % port)
while True:
    # Wait for next request from client
    message = socket.recv()
    print "Received request: ", message
    time.sleep(1)
    socket.send("World from %s" % port)
My problem is what happens if the client calls socket.send() and then hangs or crashes. Wouldn't the server just get stuck on socket.send() or socket.recv() forever?
Note that this is not a problem with plain TCP sockets: with TCP sockets I can simply break the connection. With ZMQ, connections are managed implicitly for me, and I don't know whether it is possible to break a 'session' or 'connection' and start over.
You can terminate ZMQ sockets much the same way you terminate TCP sockets.
socket.close()
If you need to wait on a message for only a finite amount of time, you can set a receive timeout on the socket (for example via the RCVTIMEO socket option, or by polling with a timeout) and then handle the timeout error case the same way you would when a TCP socket times out or disconnects. If you need to manage several sockets, any of which may be in an error state, the Poller class will let you accomplish this.
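As a rough illustration (assuming pyzmq; the endpoint is a placeholder), a server-side receive with a bounded wait using Poller:

import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5556")

poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

# Wait at most 1000 ms for a request instead of blocking forever.
events = dict(poller.poll(timeout=1000))
if socket in events:
    message = socket.recv()
    socket.send(b"World")
else:
    # Timed out: treat it like a TCP timeout/disconnect and carry on.
    pass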
The ZMQ Z-guide offers lots of good hints on how to structure your services to handle different scenarios.
I think chapter 4 can be of interest to you, especially the Lazy Pirate Pattern.
Check out the examples of Lazy Pirate Server and Lazy Pirate Client.
In general:
Make sure you use setsockopt() on the socket so that send and recv cannot block forever. (Temporary blocking, whether on the client or the server, is OK, but infinite blocking is bad because your application cannot do anything else.)
In the event that any of the I/O calls fails:
If you are the client, close() the current socket and create a new one to establish a fresh connection (see the sketch after this list).
If you are the server, there is nothing else to do; you simply keep waiting for the next request. You will want to explore the Poller class.
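
A rough pyzmq sketch of the client side of that advice (bounded send/recv via socket options, then close and reconnect on error); the endpoint and timeout values are placeholders:

import zmq

context = zmq.Context()

def make_socket():
    # Fresh REQ socket with bounded send/recv and no lingering on close.
    s = context.socket(zmq.REQ)
    s.setsockopt(zmq.RCVTIMEO, 2000)  # fail recv after 2 s instead of blocking forever
    s.setsockopt(zmq.SNDTIMEO, 2000)
    s.setsockopt(zmq.LINGER, 0)
    s.connect("tcp://localhost:5556")
    return s

socket = make_socket()
try:
    socket.send(b"Hello")
    reply = socket.recv()
except zmq.Again:
    # Timed out: discard this socket and start over with a new connection.
    socket.close()
    socket = make_socket()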

Why might an EventMachine outbound data buffer stop sending and just fill up forever (while other connections can still send)

I have an EventMachine server sending TCP data down to a Mac client (via GCDAsyncSocket). It always works flawlessly for a while, but inevitably the server suddenly stops sending data on a connection-by-connection basis. The connection is still maintained, and the server still receives data from the client, but it doesn't go the other way.
When this happens, I've discovered via connection#get_outbound_data_size that the connection send buffer is filling up infinitely (via #send_data) and not being sent to the client.
Are there specific (and hopefully fixable) reasons why this might occur? The reactor keeps humming along, and other active connections to the server continue working fine (though they sometimes fall into buffer hell as well).
I see one reason at least: when the remote client no longer reads data from its side of the TCP connection (with a recv() call or whatever).
Then the scenario is: the receiving TCP buffer on the client side becomes full, and the OS can no longer accept TCP packets from its peer, since it cannot queue them. As a consequence, the sending TCP buffer on the server side fills up too as your application continues to send packets on the socket. Soon your server is no longer able to write to the socket, because the send() system call will either:
block indefinitely (waiting for the buffer to drain enough for the new packet), or
return an EWOULDBLOCK error (if you configured your socket as non-blocking).
I usually run into this kind of situation in a test environment, when I put a breakpoint in my code on the client side.
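Not EventMachine-specific, but here is a tiny raw-socket sketch (Python, purely for illustration) of that same mechanism: the peer never reads, so the sender's buffer eventually fills and writes start failing (or, on a blocking socket, hang):

import socket

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

sender = socket.create_connection(listener.getsockname())
receiver, _ = listener.accept()   # the "client": never calls recv()

sender.setblocking(False)
sent = 0
try:
    while True:
        sent += sender.send(b"x" * 65536)
except BlockingIOError:
    # Both TCP buffers are full; a blocking send() would hang here instead.
    print("buffers full after", sent, "bytes")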
There was a patch applied to GCDAsyncSocket on March 23 that prevents reads from stopping. Did this patch solve your problem?

boost pion comet-like httpserver

I'm trying to efficiently implement comet-like functionality using the HTTPServer class of boost::pion.
Basically, in my handleURI function, I would like to postpone returning results to the client until the server is ready to respond (for instance, until another user has sent a message to the first user, to take a simple comet 'hello world' application as an example).
What should I do? Put the state on the stack and exit silently, without creating an HTTPResponseWriter?
Cheers!
Set up a timeout ASIO event for your connection so that you can reap it after 20 minutes or something reasonable like that. I don't know about Boost Pion, but in ASIO you would register a read handler that catches when the connection closes, and a timeout handler that alerts you when the connection has actually timed out. Enable TCP keep-alives on your socket to detect when the socket should be reaped in the event that it just vanishes (though TCP keep-alives are not a guarantee, so don't rely exclusively on them; not all clients support them). As for the timer, check out the following timer example:
https://github.com/sean-/Boost.Examples/blob/master/asio/timer/timer.cc

WinSock best accept() practices

Imagine you have a server which can handle only one client at a time. The server uses WSAAsyncSelect to be notified of new connections. In this case, what is the best way of handling FD_ACCEPT messages:
A > Accept the connection attempt right away but queue the client until its turn?
B > Do not accept the next connection attempt until we are done serving the currently connected client?
What do you guys think is the most efficient?
Here I describe the cons that I'm aware of for both options. Hopefully this will help you decide.
A)
Upon a new client connection, it could send tons of data, filling your receive buffer, which causes unnecessary packets to be transmitted (see this). If you don't plan to receive any data from the client, shut down receiving on that socket; then, if the client sends any data after that, the connection is reset. Moreover, if your protocol has strict rules, disconnect the client.
If the connection stays idle for too long, the system might disconnect it. To solve this, use setsockopt to set SO_KEEPALIVE on each client socket.
B)
If you don't accept the connection within a certain period (I believe the default is around 60 seconds), the attempt will time out. In a normal (or most common) situation this indicates the server is overloaded and thus unable to answer in time. However, if you also design the client, make its socket non-blocking, try to connect, and then manage the timeout as you wish.
Ask yourself: what do you want the user experience to be at the other end? Do you want them to be stuck? Do you want them to time out? Do you want them to get a polite message?
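Not WinSock/WSAAsyncSelect, but a rough, portable sketch of option A (accept right away, queue the client until its turn) using Python's selectors module; the address and the per-client "work" are placeholders:

import selectors
import socket
from collections import deque

sel = selectors.DefaultSelector()
server = socket.socket()
server.bind(("127.0.0.1", 9000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

pending = deque()  # clients accepted immediately, waiting for their turn

while True:
    # Accept new connections as soon as they arrive (option A) and queue them.
    for _key, _mask in sel.select(timeout=1.0):
        client, _addr = server.accept()
        pending.append(client)
    # Serve exactly one queued client per pass through the loop.
    if pending:
        client = pending.popleft()
        client.sendall(b"your turn\n")  # placeholder for the real per-client work
        client.close()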

Resources