As the Go net/http package documentation says, http.Server will not close until all handlers have finished after Shutdown() is called. If a handler takes too long, the context expires and Shutdown() returns an error. What should I do to force handlers to return immediately once server.Shutdown() has been called? Will the Context().Done() channel of an http.Request be closed after server.Shutdown() has been called?
No. The documentation explains exactly what Shutdown does, and states explicitly that it does not interrupt active connections:
Shutdown gracefully shuts down the server without interrupting any active connections. Shutdown works by first closing all open listeners, then closing all idle connections, and then waiting indefinitely for connections to return to idle and then shut down. If the provided context expires before the shutdown is complete, Shutdown returns the context's error, otherwise it returns any error returned from closing the Server's underlying Listener(s).
QueuedThreadPool: WebSocketClient#122503328{STOPPING,8<=8<=200,i=5,q=7} Couldn't stop Thread[WebSocketClient#122503328-1556,5,main]
QueuedThreadPool: WebSocketClient#122503328{STOPPING,8<=8<=200,i=4,q=7} Couldn't stop Thread[WebSocketClient#122503328-1557,5,main]
QueuedThreadPool: WebSocketClient#122503328{STOPPING,8<=8<=200,i=4,q=7} Couldn't stop Thread[WebSocketClient#122503328-1560,5,main]
QueuedThreadPool: WebSocketClient#122503328{STOPPING,8<=8<=200,i=4,q=7} Couldn't stop Thread[WebSocketClient#122503328-1561,5,main]
QueuedThreadPool: WebSocketClient#122503328{STOPPING,8<=8<=200,i=4,q=7} Couldn't stop Thread[WebSocketClient#122503328-1563,5,main]
The above warning log is seen repeatedly and system performance is impacted when we try to stop the client using the WebSocketClient stop() method. The stop timeout is set to 0.
This occurs when the server application on a third-party machine is down and the connection is refused, since no server is listening on the destination port. A ConnectException is seen in the onError callback.
This warning is seen even if disconnect and close are done from a different thread than the client thread.
That's because you are using a Thread from the ThreadPool to attempt to stop that same thread pool.
Don't call .stop() from a Jetty thread.
Schedule the stop on a new thread of your own making.
What is the correct behavior on close of a systemd AF_UNIX socket-activated daemon?
The daemon.socket unit file creates the socket and passes it to my daemon, which accept()s new connections. What is supposed to happen when my daemon exits?
The usual approach is to close() and unlink() the socket. However, that does what it says: the UNIX socket is no longer available in the filesystem, even though daemon.socket still reports as active, which effectively disables socket re-activation.
How do I create a restartable, systemd socket-activated daemon that listen()s on its socket? Is the correct approach to leave the socket open?
After experimentation and reading the relevant manpages and the cups code, here is what I know:
There are two modes of operation:
inetd mode, where systemd accept()s the connection and passes the accepted FD to a newly spawned subprocess. This method is used by default by ssh.socket, whereby every connection spawns a new ssh process.
Non-accepting mode, where the bound socket itself is forwarded to the daemon, which then handles all accept() calls.
Per man systemd.socket:
A daemon listening on an AF_UNIX socket may, but does not need to, call close(2) on the received socket before exiting. However, it must not unlink the socket from a file system.
So a received bound socket must not be unlink()ed. On the other hand, even stopping the .socket service doesn't remove the inode. To that effect, the RemoveOnStop= directive exists.
It would appear the "recommended" way is to let systemd create the socket, pass it to the daemon, and in the daemon at most close() it. Once closed, it is of course only closed in the daemon, so the socket is still available in the system, meaning that after a close() the service will be activated again.
Presumably a call to sd_listen_fds() in a still running service would pass the FD anew. I have not tested this.
TL;DR: In a systemd socket-activated service, don't unlink(); you may close(); RemoveOnStop= additionally deletes the socket file when the .socket unit is stopped.
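For reference, a minimal C++ sketch of that recommended pattern (the API is plain C from sd-daemon.h, link with -lsystemd; error handling is reduced to the essentials, and the connection handling is a placeholder): the daemon takes the already-bound socket from sd_listen_fds(), accept()s on it, and at most close()s it on exit, never unlink()ing it.

#include <systemd/sd-daemon.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // systemd passes listening sockets starting at SD_LISTEN_FDS_START (fd 3).
    int n = sd_listen_fds(0);
    if (n != 1) {
        std::fprintf(stderr, "expected exactly one socket from systemd, got %d\n", n);
        return 1;
    }
    int listen_fd = SD_LISTEN_FDS_START;

    // Normal accept() loop; the bound socket itself was created by systemd.
    for (;;) {
        int conn = accept(listen_fd, nullptr, nullptr);
        if (conn < 0)
            break;
        // ... handle the connection ...
        close(conn);
    }

    // Optional per the man page quoted above: close() the received socket,
    // but never unlink() the socket path; removing it is systemd's job
    // (see RemoveOnStop=).
    close(listen_fd);
    return 0;
}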
I have a small SSL client that I've programmed in Boost 1.55 Asio, and I'm trying to figure out why boost::asio::ssl::stream::async_shutdown() always fails. The client is very similar (almost identical) to the SSL client examples in the Boost documentation, in that it goes through a boost::asio::ip::tcp::resolver::async_resolve() -> boost::asio::ssl::stream::async_connect() -> boost::asio::ssl::stream::async_handshake() callback sequence. All of this works as expected and the async_handshake() callback gets an all-clear boost::system::error_code.
From the async_handshake() callback, I call async_shutdown() (I don't transfer any data - this object is more for testing the handshake):
void ClientCertificateFinder::handle_handshake(const boost::system::error_code& e)
{
    if ( !e )
    {
        m_socket.async_shutdown( boost::bind( &ClientCertificateFinder::handle_shutdown_after_success,
                                              this,
                                              boost::asio::placeholders::error ) );
    }
    else
    {
        m_handler( e, IssuerNameList() );
    }
}
handle_shutdown_after_success() is then called, but always with an error? The error is value=2 in asio.misc, which is 'End of file'. I've tried this with a variety of ssl servers, and I always seem to get this asio.misc error. That this isn't an underlying openssl error suggests to me that I might be misusing asio in some way...?
Anyone know why this might be happening? I was under the impression that shutting down the connection with async_shutdown() was The Right Thing To Do, but I guess I could just call boost::asio::ssl::stream::lowest_layer().close() to close the socket out from under OpenSSL if that's the expected way to do this (and indeed the asio SSL examples seem to indicate that this is the right way of shutting down).
For a cryptographically secure shutdown, both parties must execute shutdown operations on the boost::asio::ssl::stream by either invoking shutdown() or async_shutdown() and running the io_service. If the operation completes with an error_code that does not have an SSL category and was not cancelled before part of the shutdown could occur, then the connection was securely shut down and the underlying transport may be reused or closed. Simply closing the lowest layer may make the session vulnerable to a truncation attack.
The Protocol and Boost.Asio API
In the standardized TLS protocol and the non-standardized SSLv3 protocol, a secure shutdown involves parties exchanging close_notify messages. In terms of the Boost.Asio API, either party may initiate a shutdown by invoking shutdown() or async_shutdown(), causing a close_notify message to be sent to the other party, informing the recipient that the initiator will not send more messages on the SSL connection. Per the specification, the recipient must respond with a close_notify message. Boost.Asio does not automatically perform this behavior, and requires the recipient to explicitly invoke shutdown() or async_shutdown().
The specification permits the initiator of the shutdown to close their read side of the connection before receiving the close_notify response. This is used in cases where the application protocol does not wish to reuse the underlying protocol. Unfortunately, Boost.Asio does not currently (1.56) provide direct support for this capability. In Boost.Asio, the shutdown() operation is considered complete upon error or if the party has sent and received a close_notify message. Once the operation has completed, the application may reuse the underlying protocol or close it.
Scenarios and Error Codes
Once an SSL connection has been established, the following error codes occur during shutdown:
One party initiates a shutdown and the remote party closes or has already closed the underlying transport without shutting down the protocol:
The initiator's shutdown() operation will fail with an SSL short read error.
One party initiates a shutdown and waits for the remote party to shutdown the protocol:
The initiator's shutdown operation will complete with an error value of boost::asio::error::eof.
The remote party's shutdown() operation completes with success.
One party initiates a shutdown then closes the underlying protocol without waiting for the remote party to shutdown the protocol:
The initiator's shutdown() operation will be cancelled, resulting in an error of boost::asio::error::operation_aborted. This is the result of a workaround noted in the details below.
The remote party's shutdown() operation completes with success.
These various scenarios are captured in detail below. Each scenario is illustrated with a swim-lane-like diagram, indicating what each party is doing at the same point in time.
PartyA invokes shutdown() after PartyB closes connection without negotiating shutdown.
In this scenario, PartyB violates the shutdown procedure by closing the underlying transport without first invoking shutdown() on the stream. Once the underlying transport has been closed, PartyA attempts to initiate a shutdown().
PartyA                              | PartyB
------------------------------------+-------------------------------------
ssl_stream.handshake(...);          | ssl_stream.handshake(...);
...                                 | ssl_stream.lowest_layer().close();
ssl_stream.shutdown();              |
PartyA will attempt to send a close_notify message, but the write to the underlying transport will fail with boost::asio::error::eof. Boost.Asio will explicitly map the underlying transport's eof error to an SSL short read error, as PartyB violated the SSL shutdown procedure.
if ((error.category() == boost::asio::error::get_ssl_category())
    && (ERR_GET_REASON(error.value()) == SSL_R_SHORT_READ))
{
    // Remote peer failed to send a close_notify message.
}
PartyA invokes shutdown() then PartyB closes connection without negotiating shutdown.
In this scenario, PartyA initiates a shutdown. However, while PartyB receives the close_notify message, PartyB violates the shutdown procedure by never explicitly responding with a shutdown() before closing the underlying transport.
PartyA                              | PartyB
------------------------------------+-------------------------------------
ssl_stream.handshake(...);          | ssl_stream.handshake(...);
ssl_stream.shutdown();              | ...
                                    | ssl_stream.lowest_layer().close();
As Boost.Asio's shutdown() operation is considered complete once a close_notify has been both sent and received or an error occurs, PartyA will send a close_notify then wait for a response. PartyB closes the underlying transport without sending a close_notify, violating the SSL protocol. PartyA's read will fail with boost::asio::error::eof, and Boost.Asio will map it to an SSL short read error.
PartyA initiates shutdown() and waits for PartyB to respond with a shutdown().
In this scenario, PartyA will initiate a shutdown and wait for PartyB to respond with a shutdown.
PartyA                              | PartyB
------------------------------------+-------------------------------------
ssl_stream.handshake(...);          | ssl_stream.handshake(...);
ssl_stream.shutdown();              | ...
...                                 | ssl_stream.shutdown();
This is a fairly basic shutdown, where both parties send and receive a close_notify message. Once the shutdown has been negotiated by both parties, the underlying transport may either be reused or closed.
PartyA's shutdown operation will complete with an error value of boost::asio::error::eof.
PartyB's shutdown operation will complete with success.
PartyA initiates shutdown() but does not wait for PartyB to respond.
In this scenario, PartyA will initiate a shutdown and then immediately close the underlying transport once close_notify has been sent. PartyA does not wait for PartyB to respond with a close_notify message. This type of negotiated shutdown is allowed per the specification and fairly common amongst implementations.
As mentioned above, Boost.Asio does not directly support this type of shutdown. Boost.Asio's shutdown() operation will wait for the remote peer to send its close_notify. However, it is possible to implement a workaround while still upholding the specification.
PartyA                              | PartyB
------------------------------------+-------------------------------------
ssl_stream.handshake(...);          | ssl_stream.handshake(...);
ssl_stream.async_shutdown(...);     | ...
const char buffer[] = "";           | ...
async_write(ssl_stream,             | ...
    boost::asio::buffer(buffer),    | ...
    [](...) { ssl_stream            | ...
      .lowest_layer().close(); });  | ...
io_service.run();                   | ...
...                                 | ssl_stream.shutdown();
PartyA will initiate an asynchronous shutdown operation and then initiate an asynchronous write operation. The buffer used for the write must be of a non-zero length (the null character is used above); otherwise, Boost.Asio will optimize the write into a no-op. When the shutdown() operation runs, it will send a close_notify to PartyB, causing SSL to close the write side of PartyA's SSL stream, and then asynchronously wait for PartyB's close_notify. However, as the write side of PartyA's SSL stream has been closed, the async_write() operation will fail with an SSL error indicating the protocol has been shut down.
if ((error.category() == boost::asio::error::get_ssl_category())
    && (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(error.value())))
{
    ssl_stream.lowest_layer().close();
}
The failed async_write() operation will then explicitly close the underlying transport, causing the async_shutdown() operation that is waiting for PartyB's close_notify to be cancelled.
Although PartyA performed a shutdown procedure permitted by the SSL specification, the shutdown() operation was explicitly cancelled when the underlying transport was closed. Hence, the shutdown() operation's error code will have a value of boost::asio::error::operation_aborted.
PartyB's shutdown operation will complete with success.
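Putting that workaround together, a minimal sketch might look like the following. The function name and structure are mine, not from the question or from Boost; lifetime management of the stream is omitted, and the completion handlers only do what the diagram above describes.

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>
#include <openssl/err.h>

typedef boost::asio::ssl::stream<boost::asio::ip::tcp::socket> ssl_stream_type;

void initiate_one_sided_shutdown(ssl_stream_type& ssl_stream)
{
    // Sends our close_notify, then waits for the peer's close_notify. We do
    // not intend to wait: the pending read is cancelled when the transport
    // is closed below, so this handler should see operation_aborted.
    ssl_stream.async_shutdown(
        [](const boost::system::error_code& /*ec*/) {});

    // A one-byte write that can never succeed, because the shutdown above
    // already closed our write side: it fails with SSL_R_PROTOCOL_IS_SHUTDOWN,
    // and only then do we close the underlying transport.
    static const char buffer[] = "";
    boost::asio::async_write(ssl_stream, boost::asio::buffer(buffer),
        [&ssl_stream](const boost::system::error_code& ec, std::size_t)
        {
            if (ec.category() == boost::asio::error::get_ssl_category()
                && ERR_GET_REASON(ec.value()) == SSL_R_PROTOCOL_IS_SHUTDOWN)
            {
                boost::system::error_code ignored;
                ssl_stream.lowest_layer().close(ignored);
            }
        });
}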
In summary, Boost.Asio's SSL shutdown operations are a bit tricky. The inconsistencies between the initiator's and remote peer's error codes during proper shutdowns can make handling a bit awkward. As a general rule, as long as the error code's category is not an SSL category, the protocol was securely shut down.
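Applied to the question above, the eof the asker sees is therefore the expected outcome of a successful, initiator-side shutdown. A sketch of how handle_shutdown_after_success might classify the result under that rule (the body is mine, not the asker's code):

void ClientCertificateFinder::handle_shutdown_after_success(const boost::system::error_code& e)
{
    // asio.misc 'End of file' (boost::asio::error::eof) is the expected result
    // when we initiated the shutdown and the server answered with its own
    // close_notify, so it counts as success here, not as a failure.
    if (!e || e.category() != boost::asio::error::get_ssl_category())
    {
        // Secure shutdown: the underlying transport may be reused or closed.
    }
    else
    {
        // SSL-category error (e.g. short read): the peer broke the shutdown
        // procedure; treat the close as insecure/abortive.
    }
}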
I can't find much documentation to say whether this is supposed to happen or not:
Some thread opens a TCP (or other stream) socket
Thread 1 starts a blocking recv()
Thread 2 calls shutdown() on the socket with SHUT_RDWR (or SHUT_RD I think)
Thread 1 is now "woken up" from its blocking call, and returns zero, as it would if the other party closed its socket.
This behaviour appears on modern Linux and FreeBSD systems. I haven't tested it with any others.
A comment on a Microsoft MSDN help page here: http://msdn.microsoft.com/en-us/library/windows/desktop/ms740481%28v=vs.85%29.aspx suggests that this behaviour is "responsible" in Windows; it also states that this is "not currently the case" but this may be out of date.
Is this behaviour specified anywhere? Can I rely on it?
I don't think you can rely on it. shutdown() initiates socket shutdown, but the details depend on the particular circumstances. Some protocols may indeed close the connection and socket immediately, which would wake up processes sleeping on that socket. In other cases, shutdown just kicks the protocol state machine into action, and it may take some time until it gets to the point where it makes sense to wake up anyone. For instance, an established TCP connection would have to transition through a few states until it reaches the CLOSED state. You will eventually wake up, but you can't rely on it happening right away.
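For what it's worth, here is a small C++ sketch that reproduces the behaviour described in the question, using a connected Unix stream socket pair in place of the TCP connection (build with -pthread). As noted above, this is observed behaviour on Linux/FreeBSD, not something you can rely on everywhere.

#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <thread>
#include <chrono>

int main()
{
    // A connected stream socket pair stands in for the TCP connection;
    // nothing is ever written, so recv() on fds[0] blocks indefinitely.
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) {
        perror("socketpair");
        return 1;
    }

    std::thread reader([&] {
        char buf[64];
        ssize_t n = recv(fds[0], buf, sizeof(buf), 0);   // Thread 1: blocking recv()
        // On Linux/FreeBSD this returns 0 after the shutdown() below,
        // as if the peer had closed the connection.
        std::printf("recv returned %zd\n", n);
    });

    std::this_thread::sleep_for(std::chrono::seconds(1));
    shutdown(fds[0], SHUT_RDWR);   // Thread 2: shut down the socket the reader is blocked on

    reader.join();
    close(fds[0]);
    close(fds[1]);
    return 0;
}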
I was just wondering if it is possible to interrupt a call to the Windows socket connect() function?
The problem is that my code requires this to be done in a different thread (so the GUI thread keeps running). But when the program is closed there may still be threads calling connect() that are waiting for a WSAETIMEDOUT error.
Any ideas?
Update/Hint: I can't call close(), since I only have a valid handle once connect() returns; that is not the case when using blocking sockets and attempting a TCP connect to a firewalled location (for example).
If the socket is in blocking mode, the only way to abort the connect() call is to close the socket from a different thread context than the one that is calling connect(). connect() will then return an error, and the thread can exit itself normally.
If the socket is in non-blocking or overlapped mode, connect() will return immediately with a WSAEWOULDBLOCK error, and you then have to call select(), WSAAsyncSelect(FD_CONNECT), or WSAEventSelect(FD_CONNECT) to detect when the connection has been established before continuing with your socket work. Since the calling thread is not blocked on connect(), it is free to periodically check for any termination/abort signals from the rest of your code, and if detected then close the socket (if needed) and exit itself normally.
If you write your socket code in non-blocking or overlapped mode, then you do not really need to use a thread. You can do your socket work within the main thread without blocking your UI, and then you can just close the socket when needed. It takes a little more work to code that way, but it does work. Or you can continue using a thread; it will keep your socket code separate from your UI code and thus a bit more manageable.
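As a rough illustration of the non-blocking approach, here is a hedged Winsock 2 sketch. The abort flag, function name, and 100 ms polling interval are illustrative, and WSAStartup() is assumed to have been called elsewhere.

#include <winsock2.h>
#include <ws2tcpip.h>
#include <atomic>

#pragma comment(lib, "ws2_32.lib")

std::atomic<bool> g_abort{false};   // set from the GUI thread on shutdown

// Returns a connected socket, or INVALID_SOCKET on failure or abort.
SOCKET connect_abortable(const sockaddr_in& addr)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET)
        return INVALID_SOCKET;

    // Put the socket in non-blocking mode so connect() returns immediately.
    u_long nonblocking = 1;
    ioctlsocket(s, FIONBIO, &nonblocking);

    if (connect(s, reinterpret_cast<const sockaddr*>(&addr), sizeof(addr)) == 0)
        return s;   // connected immediately (e.g. loopback)
    if (WSAGetLastError() != WSAEWOULDBLOCK) {
        closesocket(s);
        return INVALID_SOCKET;
    }

    // Poll in short slices so an abort request is noticed promptly.
    for (;;) {
        if (g_abort.load()) {
            closesocket(s);
            return INVALID_SOCKET;
        }

        fd_set writable, failed;
        FD_ZERO(&writable); FD_SET(s, &writable);
        FD_ZERO(&failed);   FD_SET(s, &failed);
        timeval slice{0, 100 * 1000};   // 100 ms

        int r = select(0, nullptr, &writable, &failed, &slice);
        if (r == SOCKET_ERROR || FD_ISSET(s, &failed)) {
            closesocket(s);
            return INVALID_SOCKET;      // connection attempt failed
        }
        if (FD_ISSET(s, &writable))
            return s;                   // connected (switch back to blocking mode if desired)
        // r == 0: this time slice elapsed; loop and check the abort flag again.
    }
}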