TCP and Apache KeepAliveTimeout - Windows

A few weeks ago I wrote a small program which created a socket to an Apache web server and made a request.
Back then I did not know that this web server had a KeepAliveTimeout of 5 seconds.
After my first request I waited one minute, then tried to reuse the socket for another request to the web server, but got an error.
From Beej's Guide to Network Programming I learned that if recv returns 0, then the other side has closed its connection:
Wait! recv() can return 0. This can mean only one thing: the remote side has closed
the connection on you! A return value of 0 is recv()'s way of letting you know this
has occurred.
My questions are now:
What does Apache send when the KeepAliveTimeout expires - a FIN or an RST packet?
I know that using one TCP connection for two unrelated HTTP requests, as in this scenario, might
not be the best thing. But in order to understand TCP better, the next question is:
After my first successful HTTP request, and before sending the next HTTP request over the same socket, is there any way to learn about this KeepAliveTimeout socket termination by the server other than receiving 0 from the next recv() call?

It will send a FIN. If you write a request to the server after that, send() will return -1 with errno/WSAGetLastError() = ECONNRESET.
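To see this on the wire, here is a minimal sketch in Go (the question uses raw sockets on Windows, but the behaviour is the same): once the server's KeepAliveTimeout expires it sends its FIN, and a blocking read reports it as io.EOF, Go's analogue of recv() returning 0. The host name and request line are placeholders.

package main

import (
    "fmt"
    "io"
    "log"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "example.com:80")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // First request over a keep-alive connection.
    fmt.Fprint(conn, "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")

    // Keep reading: the response arrives first; then, roughly
    // KeepAliveTimeout seconds later, the server's FIN ends the
    // stream and Read returns io.EOF.
    buf := make([]byte, 4096)
    for {
        n, err := conn.Read(buf)
        _ = n // response bytes would be parsed here
        if err == io.EOF {
            fmt.Println("server closed the connection (FIN received)")
            return
        }
        if err != nil {
            log.Fatal(err)
        }
    }
}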

is there any way to learn about this KeepAliveTimeout socket termination by the server
Yes: by reading the relevant response header parameter, namely Keep-Alive: timeout=delta-seconds:
'timeout' Parameter
A host sets the value of the timeout parameter to the time that the host will allow an idle connection to remain open before it is closed. A connection is idle if no data is sent or received by a host.
The value of the timeout parameter is a single integer in seconds.
A host MAY keep an idle connection open for longer than the time that it indicates, but it SHOULD attempt to retain a connection for at least as long as indicated.
As you can see, it's up to the host to decide. Since the host only SHOULD try to keep the connection open for as long as promised, and isn't required to do so in order to conform to the spec, the server may still decide to close the connection early and reuse it to serve another pending client.
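As a sketch of how a client could use that parameter (Go; the helper name keepAliveTimeout is made up for this example), parse the delta-seconds out of the header value and stop reusing the socket once that much idle time has passed:

import (
    "strconv"
    "strings"
    "time"
)

// keepAliveTimeout extracts delta-seconds from a Keep-Alive header
// value such as "timeout=5, max=100".
func keepAliveTimeout(headerValue string) (time.Duration, bool) {
    for _, part := range strings.Split(headerValue, ",") {
        part = strings.TrimSpace(part)
        if strings.HasPrefix(part, "timeout=") {
            if n, err := strconv.Atoi(strings.TrimPrefix(part, "timeout=")); err == nil {
                return time.Duration(n) * time.Second, true
            }
        }
    }
    return 0, false
}

keepAliveTimeout("timeout=5, max=100") returns 5 seconds, matching the Apache default discussed above; a client that has been idle for longer than that should open a fresh connection instead of reusing the old socket.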

Related

Golang ssh client timeout not working as expected

I am writing a Golang SSH/SFTP client which connects to an SFTP server that is slow to connect and slow to write files, using the golang.org/x/crypto/ssh package. I need to set a connection timeout and an SO timeout (as we do with Java's JSch library).
To achieve the connection timeout I first used ssh.ClientConfig.Timeout, but it only worked for nanosecond and microsecond values, not for milliseconds and above, where I needed to set 5 seconds. From the API doc I also assume that ssh.ClientConfig.Timeout applies only to TCP socket connection creation and does not cover the SSH handshake.
So I then tried net.Conn.SetDeadline(), but it applied to the end-to-end sequence of creating the connection, writing the file, and closing the connection. Since this was also not right, I tried net.Conn.SetWriteDeadline(), which looks like an SO timeout (applied at the TCP packet level), but the timeout error does not appear as soon as the duration elapses; instead it surfaces after the server's late reply, or when the subsequent write operation starts.
So can someone please show the correct way of setting a connection timeout and an SO timeout with the Golang ssh package, or tell me whether this is supported at all?
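No answer is recorded here, but one common pattern, sketched below on the assumption that "SO timeout" means a per-operation deadline, is to dial with net.DialTimeout (the connection timeout) and wrap the net.Conn so every Read/Write refreshes its own deadline before handing the connection to ssh.NewClientConn. The 5s/10s values and the timeoutConn name are illustrative:

import (
    "net"
    "time"

    "golang.org/x/crypto/ssh"
)

// timeoutConn refreshes a deadline before every Read/Write, so any
// single stalled operation fails after c.timeout.
type timeoutConn struct {
    net.Conn
    timeout time.Duration
}

func (c timeoutConn) Read(b []byte) (int, error) {
    c.Conn.SetReadDeadline(time.Now().Add(c.timeout))
    return c.Conn.Read(b)
}

func (c timeoutConn) Write(b []byte) (int, error) {
    c.Conn.SetWriteDeadline(time.Now().Add(c.timeout))
    return c.Conn.Write(b)
}

func dialSSH(addr string, config *ssh.ClientConfig) (*ssh.Client, error) {
    // Connection timeout: covers the TCP dial only.
    raw, err := net.DialTimeout("tcp", addr, 5*time.Second)
    if err != nil {
        return nil, err
    }
    // "SO timeout": every subsequent read/write must complete within
    // 10s, which also bounds the SSH handshake done by NewClientConn.
    conn := timeoutConn{raw, 10 * time.Second}
    c, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
    if err != nil {
        return nil, err
    }
    return ssh.NewClient(c, chans, reqs), nil
}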

Connection timeout setting using RestTemplate with CloseableHttpClient

So I read this article https://www.baeldung.com/httpclient-timeout and it says that the connection timeout incurs its own penalty per IP if the DNS name of the underlying service that HttpClient tries to connect to has multiple IPs configured.
So if I have a connection timeout set to 100ms and the called service's DNS name has 5 IPs mapped to it, then I am looking at a maximum connection timeout of 500ms, assuming the one that works is the last IP.
Is there a way to put a cap on this connection timeout regardless of the underlying service topology? As a client, I will always be agnostic to it.
As far as I understand, you don't actually have code that runs into the 5-or-more-IPs situation; this is curiosity. So here is my experience:
You're using RestTemplate, which by default uses SimpleClientHttpRequestFactory.
And as the definition of connection timeout goes:
The connection timeout is the timeout in making the initial
connection; i.e. completing the TCP connection handshake and getting
connected to the requested server.
So, as far as theory goes:
Regardless of the underlying service topology, RestTemplate will try to make the connection within the connection timeout value.
And in order to figure out the almost-exact timeout in your case, you should run some latency tests and print the time RestTemplate takes to get a 200 OK.
Also, SimpleClientHttpRequestFactory internally uses HttpURLConnection, which has a default timeout of infinite (0/-1).
Yes, it has also been observed in rare cases that the connection keeps trying unless Thread.interrupt() is explicitly called to end it.
Thus it becomes vital to set your read-timeout and connection-timeout values; this way you cap the connection to the limits you defined.
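On the capping question itself, here is the idea sketched in Go, since that's the language used elsewhere on this page (a Java equivalent would need a custom connection socket factory): share a single deadline across every resolved IP instead of granting each IP its own full timeout. The dialCapped name and values are placeholders:

import (
    "context"
    "net"
    "time"
)

// dialCapped bounds the *total* connect time across all resolved IPs
// with one shared context deadline, so five IPs cannot turn a 100ms
// budget into 500ms.
func dialCapped(host, port string, total time.Duration) (net.Conn, error) {
    ctx, cancel := context.WithTimeout(context.Background(), total)
    defer cancel()

    ips, err := net.DefaultResolver.LookupIPAddr(ctx, host)
    if err != nil {
        return nil, err
    }

    var d net.Dialer
    var lastErr error
    for _, ip := range ips {
        // DialContext gives up as soon as the shared deadline expires.
        conn, err := d.DialContext(ctx, "tcp", net.JoinHostPort(ip.String(), port))
        if err == nil {
            return conn, nil
        }
        lastErr = err
    }
    return nil, lastErr
}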
Hope this helps.

java.net.SocketException: Connection reset on reaching 3000 users in JMeter

All required changes have been made to the respective files:
stalecheck=true,
keepalive is checked in the HTTP Request Defaults,
retrycount=1,
hc.parameters file changes,
socket timeout is 240000.
Still, we see "java.net.SocketException: Connection reset" in the response data, although I can see valid requests being passed to the server.
The issue did not appear until we reached 3000 users; everything worked smoothly up to that point.
"Connection reset" can have many causes; possible reasons are:
One of the server components is not able to handle the load, so it closes connections on its side.
On the JMeter side, check that you are running in non-GUI mode and that neither the JMeter JVM nor the injector machine is overloaded, either of which could explain this. See:
https://jmeter.apache.org/usermanual/get-started.html#non_gui

JMeter TCPSampler - how to handle a custom protocol with a periodic keep alive?

I am relatively new to JMeter however I have been doing Performance testing for almost a decade.
I am working with a proprietary TCP protocol that sends a keep alive periodically - through the existing TCP connection.
I am struggling to understand how I can fork the JMeter 'thread group' to handle a TCP Keep alive received over the same TCP session.
Any ideas?
Thank you, brains trust!
edit: I'm using the TCPsampler and have read the help page. I'll try to provide some more detail shortly about what's happening and how the protocol is written.
edit2: Unfortunately, because it's a proprietary protocol I cannot reveal its exact nature, but that is largely irrelevant to the problem I'm facing.
Basically, I use the first TCP sampler to start/authenticate the session with the server. It is configured with the following options:
1. TCPClient classname: LengthPrefixedBinaryTCPClientImpl (my protocol is implemented this standard way)
2. Re-use connection ON.
3. Close connection OFF.
4. Set NoDelay OFF.
5. SO_Linger: nothing
6. Text to send: my hex code for the protocol (this is correct)
I get the response from the first TCP request and then I want to start interacting; however, during the session the server sends a keep alive mid-stream, so occasionally when I send a request I get an unexpected keep-alive response instead (it's an open stream of data).
This is what I would like to solve.
I attempted to use a recursive test fragment, so that on a keep-alive response it would send the request again; however, test fragments cannot recurse (it throws a Java error when you attempt to run).
I hope this gives more context! Thank you for your patience (I'm a newbie SO user!)
Please check whether the options below help with your scenario:
If "Re-use connection" is selected, connections are shared between
Samplers in the same thread, provided that the exact same host name
string and port are used. Different hosts/port combinations will use
different connections, as will different threads. If both of "Re-use
connection" and "Close connection" are selected, the socket will be
closed after running the sampler. On the next sampler, another socket
will be created. You may want to close a socket at the end of each
thread loop.
If an error is detected - or "Re-use connection" is not selected - the
socket is closed. Another socket will be reopened on the next sample.
The following properties can be used to control its operation:
tcp.status.prefix: text that precedes a status number
tcp.status.suffix: text that follows a status number
tcp.status.properties: name of the property file used to convert status codes to messages
tcp.handler: name of the TCP Handler class (default TCPClientImpl); only used if not specified in the GUI
For more details: https://jmeter.apache.org/usermanual/component_reference.html#TCP_Sampler
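For the mid-stream keep-alive problem itself, the usual approach is a read loop that discards keep-alive frames and only returns the next real response. In JMeter that logic would live in a custom TCPClient implementation in Java, but the idea is sketched here in Go to match the rest of this page. The frame layout (4-byte big-endian length prefix, first payload byte 0x00 marking a keep-alive) is hypothetical; substitute your protocol's own markers:

import (
    "encoding/binary"
    "io"
    "net"
)

// readResponse skips interleaved keep-alive frames and returns the
// payload of the next real response frame.
func readResponse(conn net.Conn) ([]byte, error) {
    for {
        var hdr [4]byte
        if _, err := io.ReadFull(conn, hdr[:]); err != nil {
            return nil, err
        }
        n := binary.BigEndian.Uint32(hdr[:])
        payload := make([]byte, n)
        if _, err := io.ReadFull(conn, payload); err != nil {
            return nil, err
        }
        if n > 0 && payload[0] == 0x00 { // hypothetical keep-alive marker
            continue // discard and wait for the real response
        }
        return payload, nil
    }
}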

Open multiple keep-alives to a server in Go

I have an application that opens keep-alive connections to a few remote servers (that I control). It sends a heartbeat packet to keep each connection alive before the timeout.
This is how I created my transport:
// Keep-alive connection to the servers
tr := &http.Transport{}
client := &http.Client{Transport: tr}
If I use &http.Transport{MaxIdleConnsPerHost: n} with n > 2, then I'm able to maintain multiple keep-alives per remote server. However, these additional keep-alives are created by Go itself when concurrent requests have to be made, and they are terminated automatically after the timeout expires.
My question is: how can I create the additional keep-alives myself, say 5 per remote server, when I initialize my transport (when I start Go), and keep them all alive? This would greatly speed up subsequent requests, and speed is very important.
Based on input from the go-nuts group: to manually open multiple keep-alives to one server, make that many simultaneous requests. Go then keeps these connections alive until the remote server times them out (5 seconds by default in Apache).
Note that the number of connections kept alive cannot exceed MaxIdleConnsPerHost, which is 2 by default.
You can verify this behaviour using netstat -p tcp.
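A minimal warm-up sketch of this, assuming a raised MaxIdleConnsPerHost and a placeholder URL: issue the requests concurrently and drain each body so the connections land back in the idle pool:

package main

import (
    "io"
    "net/http"
    "sync"
)

func main() {
    // Keep up to 5 idle keep-alive connections per server.
    tr := &http.Transport{MaxIdleConnsPerHost: 5}
    client := &http.Client{Transport: tr}

    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            resp, err := client.Get("http://example.com/")
            if err != nil {
                return
            }
            // Drain and close the body so the connection is returned
            // to the idle pool instead of being torn down.
            io.Copy(io.Discard, resp.Body)
            resp.Body.Close()
        }()
    }
    wg.Wait()
}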
