What is the use of the Keep-Alive option in JMeter and how does it work?
I did a performance test using JMeter 3.0.
In my recorded script the Keep-Alive option is checked.
So I kept the Keep-Alive option checked in my real test script.
With the Keep-Alive option checked, I got errors within 75 concurrent VUs.
Error message : XXX.XXXX.XXXX:XXX server refused to respond
If I uncheck the Keep-Alive option, I can go up to 500 VUs without errors.
In this case, do we need to use the Keep-Alive option or not?
Keep-alive is an HTTP feature that keeps a persistent connection open across round trips, so that a new connection is not initiated for every single request. This feature has many benefits, but one of the trade-offs is that it holds resources on the server side, and that can be an issue under heavy load.
In your case, my guess is that you've simply consumed all the resources on the server with 75 open connections and that it can't serve further requests. This error does not necessarily mean your server can't serve more than 75 connections, because it all depends on your HTTP server's configuration.
Example of an Apache config:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 100
See also: Keep-alive on Wikipedia.
Ugh! Just ran into this. According to JMeter's documentation:
http://svn.apache.org/repos/asf/jmeter/tags/v5_1_RC2/docs/usermanual/component_reference.html
JMeter sets the Connection: keep-alive header. This does not work properly with the default HTTP implementation, as connection re-use is not under user-control. It does work with the Apache HttpComponents HttpClient implementations.
In other words, JMeter will send the header, but with the default implementation it will not re-use connections.
To use KeepAlive effectively, you need to select 'HttpClient4' from the Implementation dropdown on the Advanced tab.
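If you would rather not edit every sampler by hand, the default implementation can also (as far as I know) be set globally through a JMeter property; the snippet below assumes the jmeter.httpsampler key still exists in your JMeter version, so double-check it against your jmeter.properties:

# user.properties - applies to HTTP samplers whose Implementation field is left blank
jmeter.httpsampler=HttpClient4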
HTTP keep-alive, i.e. HTTP persistent connection, is a mechanism that allows a single TCP connection to remain open for multiple HTTP requests/responses.
I want to make my proxy server's client connection keep-alive, so that the proxy client doesn't go through a TCP close handshake every time.
Please look at this example in netty.
Adding the keepAlive option to this example doesn't seem to work properly, because it creates a client and connects every time the server gets a request, and then closes the client when the response arrives.
Then how can I make my proxy client keepAlive? Is there any reference/example for it?
Using the SO_KEEPALIVE socket option doesn't mean that the server (or the other peer in the connection) will ignore an explicit request to close the connection. It helps in cases like:
Idle sessions timing-out/getting killed by the other end due to non-activity
Idle or long-running requests being disconnected by a firewall in-between after a certain time passes (e.g. 1 hour, for resource clean-up purposes).
If the client's logic is not to re-use the same socket for different requests (i.e. if its application logic uses a new socket for each request), there's nothing you can do about that on your proxy.
The same argument is valid for the "back-end" side of your proxy as well. If the server you're proxying to doesn't allow the socket to be re-used, and explicitly closes a connection after a request is served, that wouldn't work as you wanted either.
If you are not closing the connection on your side then the proxy is. Different proxy servers will behave in different ways.
Try sending Connection: Keep-Alive as a header.
If that doesn't work, try also sending Proxy-Connection: Keep-Alive as a header.
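For illustration, the request your proxy forwards upstream would then carry headers roughly like this (method, path and host are placeholders):

GET /some/path HTTP/1.1
Host: upstream.example.com
Connection: Keep-Alive
Proxy-Connection: Keep-Alive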
I activated HTTP/2 support on my server. Now I have a problem with AJAX/jQuery scripts such as uploads or API handling.
After PHP's max_input_time of 60 seconds I get: [HTTP/2.0 504 Gateway Timeout 60034ms]
With HTTP/1, only a few connections were started simultaneously, and when one finished another started.
With HTTP/2, they all start at once.
When, for example, 100 images are uploaded, it takes too long for all of them to finish.
I don't wish to change max_input_time. I hope to limit the number of simultaneous connections in the scripts.
Thank you.
HTTP/2 intentionally allows multiple requests in parallel. This differs from HTTP/1.1, which only allowed one request at a time per connection (something browsers compensated for by opening six parallel connections). The downside to drastically increasing that limit is that you can have more requests in flight at once, contending for bandwidth.
You’ve basically two choices to resolve this:
Change your application to throttle uploads rather than expecting the browser or the protocol to do this for you.
Limit the maximum number of concurrent streams allowed by your web server. In Apache, for example, this is controlled by the H2MaxSessionStreams directive, while in Nginx it is controlled by the http2_max_concurrent_streams setting. Streams beyond that limit will have to wait.
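A rough sketch of what that looks like in both configs (the value 20 is only an illustrative number, not a recommendation):

# Apache (mod_http2)
H2MaxSessionStreams 20

# Nginx
http2_max_concurrent_streams 20;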
Case/Assumption:
There is a server that is written by someone else.
This server has an endpoint GET /api/watch.
This endpoint is plain HTTP/1.1
This endpoint will write events like
{type:"foo", message:"bar"}
to the response stream once they appear (one event per line and then a flush).
Sometimes this server writes events every second to the output, sometimes every 15 minutes.
Between my client and this server there is a third-party load balancer which considers a connection stale if there is no activity on it for more than 60 seconds, and which then drops the connection without closing it.
The client is written in plain Go and simply makes a GET request to this endpoint.
Once the LB has marked the connection as stale, the client (the same happens with curl, too) is not notified that the connection was dropped and keeps waiting for data to arrive in the response to the GET request.
So: What are my possibilities to deal with this situation?
What is not possible:
Modify the server.
Use another server.
Use a different endpoint, or change how this one works.
Modify the Load Balancer.
Use another LB.
Leave the LB out of the connection.
15 minutes is an incredibly long quiet period for basic HTTP - possibly better suited to WebSockets. Short of a protocol change, you should be able to adjust the timeout period on the load balancer (hard to say since you didn't specify what the LB is) to better suit your use case, though not all load balancers will allow timeouts as high as 15 minutes.

If you can't change the protocol and can't turn the timeout up high enough, you would have to send keepalive messages from the server every so often (just short of the timeout period, so maybe 55s with your current config, or a little less than the highest timeout period you're able to set on the LB). This would have to be something the client knew to discard, like {"type": "keepalive"} - something easily identifiable on the client side as a "fake" message for keepalive purposes.
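On the client side, discarding those keepalive events could look roughly like the Go sketch below. It assumes the stream really is one JSON object per line and that the keepalive events have exactly the shape {"type": "keepalive"}; both are assumptions you would need to confirm, and the URL is a placeholder.

package main

import (
	"bufio"
	"encoding/json"
	"log"
	"net/http"
)

type event struct {
	Type    string `json:"type"`
	Message string `json:"message"`
}

func main() {
	// Placeholder URL; point this at the real /api/watch endpoint behind the LB.
	resp, err := http.Get("http://backend.example.internal/api/watch")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		var ev event
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			log.Printf("skipping unparseable line: %v", err)
			continue
		}
		if ev.Type == "keepalive" {
			continue // "fake" message, only there to keep the LB from dropping the connection
		}
		log.Printf("event: type=%s message=%s", ev.Type, ev.Message)
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}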
How do I connect to https://api.push.apple.com using HTTP/2 with a persistent connection?
A persistent connection is needed to avoid rapid connection and disconnection:
APNs treats rapid connection and disconnection as a denial-of-service attack
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/APNsProviderAPI.html
Is writing a client in C using https://nghttp2.org the only solution?
(If this question should be asked on another Stack Exchange site, please do tell me.)
Non-persistent connections are a relic of the past. They were used in HTTP/1.0, but HTTP/1.1 already moved to a model where connections are persistent by default, and HTTP/2 (which is also multiplexed) continues that model.
Independently of the language you use to develop your applications, any HTTP/2-compliant client will, by default, use persistent connections.
You only need to use the HTTP/2 client library in a way that you don't explicitly close the connection after every request you make.
Typically these libraries employ a connection pool that keeps the connections open, typically until an idle timeout fires.
When your application makes HTTP requests, the library will pick an open connection and send the request. When the response arrives the library will not close the connection but instead put it back into the pool for the next usage.
Just study how the library you want to use allows you to make multiple requests without closing the connection.
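As a concrete illustration that C/nghttp2 is not the only option, here is a rough Go sketch. Recent Go versions negotiate HTTP/2 automatically over TLS; the APNs specifics (client certificate or JWT authentication and the actual /3/device/... POST) are deliberately omitted, so these requests would be rejected by APNs, but the connection-reuse pattern is the point:

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	// One shared client means one shared connection pool; the HTTP/2
	// connection stays open across requests instead of being re-dialed.
	client := &http.Client{}

	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://api.push.apple.com/")
		if err != nil {
			log.Fatal(err)
		}
		// Drain and close the body so the connection goes back to the pool.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		log.Println(resp.Proto, resp.Status) // expect "HTTP/2.0" once negotiated
	}
}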
I also ran into this issue!
If the connection is idle for a long time (about 1 hour), then the poll function catches no socket status change. It always returns 0, even though on_frame_send_callback was invoked.
Can anyone figure out the problem?
We have a web server and a client, both written in Go, that interact with each other. We want HAProxy to load balance requests between several instances of the server, but it's not working. The client always connects to the same server while that server is still up.
If I look at the output of "netstat -anp", I can see that there is a persistent connection established between the client and the server through HAProxy. I tried setting the Connection header in the response to "close", but that didn't work at all.
Needless to say, I'm completely confused by this. My first question is: is this a problem with the client, the server, or HAProxy? How does one force the client to disconnect? Am I missing something here? Curl works fine, so I know that HAProxy does load balance, but curl also completely shuts down when finished, which is why I suspect the persistent connection is what's causing my issues, since the client and server are long-running.
Just as an FYI, I'm using go-martini on the server.
Thanks.
HTTP/1.1 uses KeepAlive by default. Since the connections aren't closed, HAProxy can't balance the requests between different backends.
You have a couple options:
Force the connection to close after each request in your code. Setting Request.Close = true on either the client or the server will send a Connection: close header, telling both sides to close the TCP connection (see the sketch at the end of this answer).
Alternatively, you could have HAProxy alter the requests by setting option http-server-close, so the backend connection is closed after each request, or option httpclose to shut down both sides after each request.
http-server-close is usually the best option, since that still maintains persistent connections for the client, while proxying each request individually.
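A minimal client-side sketch of the first option in Go (the URL is just a placeholder for whatever sits behind your HAProxy frontend):

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://haproxy.example.local/api/ping", nil)
	if err != nil {
		log.Fatal(err)
	}
	// Sends "Connection: close" and stops the transport from reusing the
	// TCP connection, so HAProxy can pick a different backend next time.
	req.Close = true

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body)
	log.Println(resp.Status)
}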