TCP idle connection performance

How do the server and client keep a TCP connection open? Is an idle connection heavy on the system, CPU-wise and/or network-wise?

I found a good article about it: http://www.tcpipguide.com/free/t_TCPConnectionManagementandProblemHandlingtheConnec-3.htm
It explains that when a TCP connection is idle, nothing at all happens on the network; when either side needs to send data again, transmission simply resumes on the existing connection.
Some people think "keepalive" messages are necessary to limit the number of idle connections held open and to ensure that no 'broken' connections are kept around.
Others think keepalive messages are a waste of resources and can even cause accidental server disconnections.
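If you do want the OS to probe idle connections for you, TCP keepalive can be enabled per socket. Below is a minimal sketch in Python; the address is a placeholder, and the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT tuning options are Linux-specific, so they are guarded and should be treated as illustrative assumptions.

    import socket

    # Connect as usual; the address is a placeholder.
    sock = socket.create_connection(("example.com", 80))

    # Ask the kernel to send keepalive probes while the socket is idle.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

    # Linux-specific tuning (not portable, hence the guard):
    # start probing after 60 s idle, probe every 10 s, give up after 5 failed probes.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)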

Related

Tracking 'TCP/IP failed to establish an outgoing connection' bug that happens rarely

We're seeing TCP/IP warnings and a few connection failures on our server, but they happen rarely, roughly once a month. For now I built a small monitoring application that tracks TIME_WAIT states in netstat and looks for anomalies on the system we are monitoring, following Microsoft's documentation on handling port exhaustion. Should I be tracking anything else? Is there a faster way to resolve this problem? I am not sure where the bug originates, but this seems to be the only approach available.
The error in question:
TCP/IP failed to establish an outgoing connection because the selected
local endpoint was recently used to connect to the same remote
endpoint. This error typically occurs when outgoing connections are
opened and closed at a high rate, causing all available local ports to
be used and forcing TCP/IP to reuse a local port for an outgoing
connection. To minimize the risk of data corruption, the TCP/IP
standard requires a minimum time period to elapse between successive
connections from a given local endpoint to a given remote endpoint.
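For reference, a minimal sketch of the kind of monitoring described above: it counts TIME_WAIT sockets by parsing netstat output. The threshold and polling interval are arbitrary placeholders, and the parsing assumes typical netstat output where the connection state is the last column on each line.

    import subprocess, time

    THRESHOLD = 5000   # placeholder: alert when this many TIME_WAIT sockets exist
    INTERVAL = 60      # placeholder: seconds between samples

    def count_time_wait():
        # 'netstat -an' prints one line per socket; the state is the last column.
        out = subprocess.run(["netstat", "-an"], capture_output=True, text=True).stdout
        return sum(1 for line in out.splitlines() if line.strip().endswith("TIME_WAIT"))

    while True:
        n = count_time_wait()
        if n > THRESHOLD:
            print(f"WARNING: {n} sockets in TIME_WAIT (possible port exhaustion)")
        time.sleep(INTERVAL)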

How to make HTTP/2 requests with a persistent connection? (Any language)

How do I connect to https://api.push.apple.com using HTTP/2 with a persistent connection?
A persistent connection is needed to avoid rapid connection and disconnection:
APNs treats rapid connection and disconnection as a denial-of-service attack
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/APNsProviderAPI.html
Is writing a client in C using https://nghttp2.org the only solution?
(If this question should be asked on another StackExchange site, please tell me.)
Non-persistent connections are a relic of the past. They were used in HTTP/1.0, but HTTP/1.1 already moved to a model where connections are persistent by default, and HTTP/2 (which is also multiplexed) continues that model of connections being persistent by default.
Regardless of the language you use to develop your application, any HTTP/2 compliant client will, by default, use persistent connections.
You only need to use the HTTP/2 client library in a way that you don't explicitly close the connection after every request you make.
These libraries typically employ a connection pool that keeps connections open until an idle timeout fires.
When your application makes HTTP requests, the library will pick an open connection and send the request. When the response arrives the library will not close the connection but instead put it back into the pool for the next usage.
Just study how the library you want to use allows you to make multiple requests without closing the connection.
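As an illustration, here is a minimal sketch using Python's httpx library, which speaks HTTP/2 when the client is created with http2=True (and the 'h2' extra is installed). The URL is a placeholder test endpoint rather than APNs, since a real APNs request also needs provider authentication; the point is simply that one client object is reused, so requests share a pooled, persistent connection.

    import httpx

    # One client = one connection pool; create it once and reuse it.
    with httpx.Client(http2=True) as client:
        for i in range(3):
            # Placeholder endpoint; all requests reuse the same pooled connection.
            r = client.get("https://nghttp2.org/httpbin/get", params={"n": i})
            print(r.http_version, r.status_code)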
I also ran into this problem!
If the connection has been idle for a long time (about 1 hour), poll() no longer catches any socket status change. It always returns 0, even after on_frame_send_callback has been invoked.
Can anyone figure out the problem?

WebSphere MQ DISC vs KAINT on SVRCONN channels

We have a major problem with many of our applications making improper connections (SVRCONN) to the queue manager and not issuing MQDISC when the connection is no longer required. This causes a lot of idle, stale connections, prevents applications from making new connections, and leads to CONNECTION BROKEN (2009) errors. We had been restricting application connections with the clientidle parameter in our Windows MQ installation on version 7.0.1.8, but now that we have migrated to MQ v7.5.0.2 on Linux we are deciding on the best option available in the new version. The clientidle setting no longer exists in the ini file for v7.5, but SVRCONN channels have DISCINT and KAINT. I have been going through the advantages and disadvantages of both for our scenario of applications connecting through SVRCONN channels and leaving connections open without issuing a disconnect. Which of these channel attributes is better suited for us? Any suggestions? Does either of them take precedence over the other?
First off, KAINT controls TCP functions, not MQ functions. That means for it to take effect, the TCP Keepalive function must be enabled in the qm.ini TCP stanza. Nothing wrong with this, but the native HBINT and DISCINT are more responsive than delegating to TCP. This addresses the problem that the OS hasn't recognized that a socket's remote partner is gone and cleaned up the socket. As long as the socket exists and MQ's channel is idle, MQ won't notice. When TCP cleans the socket up, MQ's exception callback routine sees it immediately and closes the channel.
Of the remaining two, DISCINT controls the interval after which MQ will terminate an idle but active socket whereas HBINT controls the interval after which MQ will shut down an MCA attached to an orphan socket. Ideally, you will have a modern MQ client and server so you can use both of these.
The DISCINT should be a value longer than the longest expected interval between messages if you want the channel to stay up during the Production shift. So if a channel should have message traffic at least once every 5 minutes by design, then a DISCINT longer than 5 minutes would be required to avoid channel restart time.
The HBINT actually flows a small heartbeat message over the channel, but it only does so if HBINT seconds have passed without a message. This catches the case where the socket is dead but TCP hasn't yet cleaned it up. HBINT allows MQ to discover this before the OS does and take care of it, including tearing down the socket.
In general, really low values for HBINT can cause a lot of unnecessary traffic. For example, HBINT(5) would flow a heartbeat in every five-second interval in which no other channel traffic is passed. Chances are you don't need to terminate orphan channels within 5 seconds of the loss of the socket, so a larger value is perhaps more useful. That said, HBINT(5) would cause zero extra traffic on a system with a sustained message rate of one per second - until the app died, in which case the orphan socket would be killed pretty quickly.
For more detail, please go to the SupportPacs page and look for Morag's "Keeping Channels Running" presentation.
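To make that concrete, the sketch below shows one possible way to set these attributes with MQSC; the channel name and interval values are placeholders rather than recommendations, and, as noted above, KAINT only has an effect if keepalive is also enabled in the qm.ini TCP stanza.

    * Placeholder channel name and values; size DISCINT/HBINT to your expected traffic.
    ALTER CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) DISCINT(3600) HBINT(300)

    * For KAINT to have any effect, the qm.ini TCP stanza must enable keepalive:
    *   TCP:
    *     KeepAlive=Yes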

Performance Implication of Creating New TCP Connection Per Message

In the past, our server application was designed so that a client creates one TCP connection, keeps this connection established indefinitely and sends messages when needed. These messages may come in high volume bursts or with longer idle periods in between. We are switching to a different connection protocol now, where the client will create a new connection per message and then disconnect after sending.
I have a few questions:
I know there is some overhead in the three-way handshake to establish the connection. But is this overhead significant (CPU, memory, bandwidth, etc.)?
Is there any difference in latency between sending a message over an established TCP connection that has been idle for minutes and creating a new connection and then sending the message?
Are there any other factors or considerations I should take into account when trying to determine the performance impact of switching to this new connection protocol compared to the old one?
Any help at all is greatly appreciated.
Opening and closing a lot of TCP sessions may impact connection tracking firewalls and load balancers, causing them to slow down or even fail and reject the connection. Some, like the Linux iptables conntrack, have moderate default values for the number of tracked connections.
The program might run out of available local port numbers if it cycles through connections too quickly. A closed socket sits in the TIME_WAIT state for a while before it can be considered fully closed, and the operating system only reclaims it after that timer expires. If too many sockets are opened too quickly, the operating system may not have had time to clean them up.
The handshake adds about an extra 80 bytes to your bandwidth cost. The TCP connection close also involves FIN or RST packets, although these flags may be combined with the data packet.
Latency on an established TCP session might be a tiny bit higher if the Nagle algorithm is turned on for the message sender. Nagle causes the OS to wait for more data before sending a partially filled packet. A TCP session that is being closed flushes all pending data immediately. You can get the same effect on an open session by disabling Nagle with the TCP_NODELAY socket option.
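As a rough illustration of that last point, the sketch below keeps one long-lived connection and sets TCP_NODELAY so small messages go out immediately instead of waiting on Nagle; the host, port, and length-prefixed framing are placeholders for illustration only.

    import socket

    # Placeholder endpoint; in the old design this connection stays open indefinitely.
    sock = socket.create_connection(("server.example", 9000))

    # Disable Nagle so small messages are not held back waiting for more data.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    def send_message(payload: bytes) -> None:
        # Trivial length-prefixed framing, purely for illustration.
        sock.sendall(len(payload).to_bytes(4, "big") + payload)

    send_message(b"hello")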

How long can an XMPP session last?

Without a timeout? Or is there even a timeout?
There is no limit on the lifetime of a connected JID. For command-line bots, it is good practice to send periodic ping packets to the server, just to make sure the open socket doesn't drop after some period of inactivity.
In case your client is connected from a browser, suppose the user refreshes the browser without disconnecting from the Jabber server. The user can still use the saved (via cookie/session) jid, sid, rid combination to reconnect to the previously opened session. However, the BOSH connection manager will drop the connection after "X" seconds of inactivity.
XMPP does not say anything about having a timeout or not. So, in theory, your XMPP session could last as long as the TCP connection stays established.
You are free to implement a timeout in your client or server though...
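For the periodic-ping practice mentioned above, here is a minimal sketch assuming Python's slixmpp library and its XEP-0199 (XMPP Ping) plugin; the JID, password, and interval are placeholders, and the enable_keepalive call and its parameters should be treated as assumptions to verify against the library version you use.

    import slixmpp

    # Placeholder credentials.
    xmpp = slixmpp.ClientXMPP("bot@example.com", "secret")

    # XEP-0199 ping plugin; keepalive pings the server periodically
    # (interval/timeout values are arbitrary placeholders).
    xmpp.register_plugin('xep_0199')
    xmpp['xep_0199'].enable_keepalive(interval=300, timeout=30)

    xmpp.connect()
    xmpp.process(forever=True)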
