How to handle Socket Exception when response time is high - jmeter

We are executing a test of an upload scenario where we know the response time will exceed 5 minutes, so we have configured the timeout in HTTP Request Defaults as well as in the HTTP Request sampler as 3600000 milliseconds. But we are still getting a SocketException in the upload transaction. Could you please suggest how to handle this?
Thanks,

SocketException doesn't necessarily mean "timeout"; it indicates that JMeter is not able to create or access a socket connection. There are many possible reasons, the most common being:
Network configuration of your server doesn't allow as many connections as you're trying to open. Check the maximum number of open connections at both the application-server and operating-system level.
Your application server is overloaded and cannot handle such a big load. Make sure it has enough headroom to operate in terms of CPU, RAM and especially Network metrics (these can be monitored using JMeter PerfMon Plugin)
You might be experiencing the behaviour described in JMeterSocketClosed article
Basically the same as points 1 and 2 but this time you need to check JMeter health, make sure you're following JMeter Best Practices and maybe even consider going for distributed testing
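To see the distinction the answer draws between a SocketException and a timeout, here is a minimal, self-contained Java sketch (hostnames and ports are local stand-ins, not part of the original question). Connecting to a port with no listener fails immediately with a ConnectException, a subclass of SocketException, no matter how large the timeout is; only a SocketTimeoutException is governed by the timeout fields configured in JMeter:

```java
import java.io.IOException;
import java.net.*;

public class SocketErrorDemo {
    /**
     * Attempt a TCP connection and report which failure class occurred.
     * A SocketException (e.g. "Connection refused") is a connection-level
     * failure that no timeout setting can prevent; a SocketTimeoutException
     * is what the JMeter timeout fields actually control.
     */
    static String classify(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return "connected";
        } catch (SocketTimeoutException e) {
            return "timeout";
        } catch (SocketException e) {
            return "socket-error";
        } catch (IOException e) {
            return "io-error";
        }
    }

    public static void main(String[] args) throws IOException {
        // Reserve a free local port, then close it so nothing listens there;
        // connecting to it is then refused immediately, regardless of timeout.
        int port;
        try (ServerSocket ss = new ServerSocket(0)) {
            port = ss.getLocalPort();
        }
        System.out.println(classify("127.0.0.1", port, 3600000));
    }
}
```

This is why raising the timeout to 3600000 ms, as in the question, cannot make a SocketException go away: the failure happens before any timeout clock is relevant.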

Related

What is the maximum rest client connection supported by Quarkus Rest server by default

We have a Quarkus REST service, and the client is using org.apache.http.impl.conn.PoolingHttpClientConnectionManager with the following settings:
connMgr.setMaxTotal(20);
connMgr.setDefaultMaxPerRoute(6);
How can we check from the service side whether it supports a maximum of 20 connections?
What is the maximum number of connections allowed in Quarkus by default?
As far as I know, Quarkus is not limited by default; you can configure a limit with this property. However, I would not suggest ever going over 100 connections, because then you might run into memory or CPU limits. This is why you replicate or scale your backends: instead of having one JVM handling 150 connections, you have three smaller JVMs handling 50 connections each, so you gain some high availability and fault tolerance.
If you want to test the behaviour of your application under concurrency, you can always run a load test with JMeter or other tools, which will allow you to simulate the load you want; you can then check the response time of your backend and whether you run into resource bottlenecks or other issues.
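Conceptually, the maxTotal / maxPerRoute limits in the question behave like a pair of counting semaphores: a lease must fit under both the global cap and the per-route cap. A rough plain-Java sketch of that idea (a hypothetical illustration, not the actual HttpClient implementation, which blocks and queues rather than refusing):

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch of what PoolingHttpClientConnectionManager's limits
// amount to: a global cap (maxTotal) plus a per-route cap (maxPerRoute).
public class PoolLimits {
    private final Semaphore total;
    private final Semaphore perRoute; // the real pool keeps one of these per route

    public PoolLimits(int maxTotal, int maxPerRoute) {
        this.total = new Semaphore(maxTotal);
        this.perRoute = new Semaphore(maxPerRoute);
    }

    /** Returns true if a connection lease for this route fits under both caps. */
    public boolean tryLease() {
        if (!perRoute.tryAcquire()) return false; // per-route cap (6) exhausted
        if (!total.tryAcquire()) {                // global cap (20) exhausted
            perRoute.release();
            return false;
        }
        return true;
    }

    public void release() {
        total.release();
        perRoute.release();
    }
}
```

With the question's settings, a 7th concurrent request to the same route (or a 21st across all routes) has to wait for a lease, regardless of what the Quarkus server would accept.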

Interpreting JMeter Results

I have been running some load tests against APIs using JMeter, the results are below:
I am trying to understand what could be causing the two different patterns of slow behaviour I am seeing:
Pattern 1: Time to connect is low, Latency is high
Pattern 2: Time to connect is high, Latency is low
*Note: the majority of calls are returning in around 45-50ms.
My current thoughts are as follows:
Pattern 1: This is "server processing time", so for some reason the back-end server is taking longer than usual to respond. We will need to do a deeper dive to figure out why.
Pattern 2: This pattern shows a long time to establish a TCP connection. Is there a way to determine whether this is a problem on the outgoing side, e.g. JMeter itself running out of threads to make API connections, or whether the API server is running out of connections and unable to accept more?
How should I interpret these results? Are there any additional data points I could pull or tools I could use to better understand the findings?
Both Connect Time and Latency are network-related metrics, the formula is:
Elapsed Time = Connect Time + Latency + Server Response time
It looks like your server itself is not the bottleneck; the problem is either at the network level or connected with JMeter, which might lack the resources to send requests fast enough.
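Taking the formula above at face value, the server-side share of each sample can be derived from metrics JMeter already records. A minimal sketch with hypothetical sampler readings (the millisecond values below are illustrative, not from the question's results):

```java
public class SampleBreakdown {
    // Per the formula above: elapsed = connect + latency + serverTime,
    // so the server-side share follows from the other three metrics.
    static long serverTime(long elapsedMs, long connectMs, long latencyMs) {
        return elapsedMs - connectMs - latencyMs;
    }

    public static void main(String[] args) {
        // Pattern 1 -- low connect time, high latency:
        System.out.println(serverTime(500, 5, 450));   // prints 45
        // Pattern 2 -- high connect time, low latency:
        System.out.println(serverTime(500, 430, 25));  // prints 45
    }
}
```

Note how both hypothetical samples leave the same ~45 ms of server work, matching the observation that the server itself behaves consistently while the slowness lives in connect time or latency.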
With regards to additional information sources:
Generate HTML Reporting Dashboard and look into "Over Time" charts. It should allow you to correlate increasing load with the increasing connect time / latency.
Consider setting up monitoring of essential health metrics of JMeter load generator(s) and the application under test. You can use JMeter PerfMon Plugin for this.
Make sure to follow JMeter Best Practices, as JMeter's default configuration is suitable for test development and debugging; you need to fine-tune JMeter for high loads.

How to prevent Jmeter5.5 from stopping internet?

I am doing some benchmark testing. For this, I have to increase the number of users to 3000, keeping the ramp-up time at 100 and the loop count at 1.
Somehow JMeter is giving the errors below:
Response code: Non HTTP response code: org.apache.http.conn.HttpHostConnectException
Response message: Non HTTP response message: Connection to https://the-homepage-I-am-testing.net refused
There is no such thing as "JMeter5.5" yet, the latest version as of now is JMeter 5.1
Looking into the error details, it looks like a bottleneck on the application-under-test side, so you need to inspect what's causing these connection refusals:
inspect your application logs for any suspicious entries; it might be a matter of an insufficient thread-pool maximum setting
inspect your application server / database configuration as it might be the case you have the above limitation on the middleware level
inspect the OS configuration for the same, as it might be the case that the maximum number of open file handles is too low, so the OS cannot allocate a socket to serve the connection
make sure to monitor baseline health metrics, application under test should have enough headroom to operate in terms of CPU, RAM, etc. - you can use JMeter PerfMon Plugin for this
Looking into your configuration, it doesn't necessarily produce 3000 concurrent users, as you have only 1 iteration. Given the 100-second ramp-up, it might be the case that some users have already finished test execution while others have not yet started. Double-check that you really deliver the anticipated load using, for example, the Active Threads Over Time listener.
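The point about loop count 1 can be put into back-of-the-envelope numbers. Assuming threads start at a steady rate over the ramp-up and each exits after its single iteration (the iteration duration below is a hypothetical input, not a value from the question):

```java
public class RampUpMath {
    /**
     * Rough steady-state concurrency for loop count 1: threads start at
     * (users / rampUpSeconds) per second and each one exits after its single
     * iteration, so at most startRate * iterationSeconds run at once
     * (capped by the total user count).
     */
    static long peakConcurrency(int users, int rampUpSeconds, double iterationSeconds) {
        double startRate = (double) users / rampUpSeconds; // 3000 / 100 = 30 threads/s
        return Math.min(users, Math.round(startRate * iterationSeconds));
    }

    public static void main(String[] args) {
        // If one iteration takes ~2 s, only ~60 users are active at any moment:
        System.out.println(peakConcurrency(3000, 100, 2.0));
        // An iteration would have to last the full 100 s ramp-up for all
        // 3000 users to ever be active together:
        System.out.println(peakConcurrency(3000, 100, 100.0));
    }
}
```

So with a fast-responding application the test may never come close to 3000 simultaneous connections, which is exactly what the Active Threads Over Time listener would reveal.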

Factors to consider when using the JMeter tool

I would like to know which factors I need to consider when using JMeter. Most of the time the internet speed varies, because of which I don't get accurate response times, and there are also server-side operations [CPU utilization, etc.].
Do I need to consider all these points when evaluating the performance of the application?
In regards to varying internet speed, JMeter is smart enough to detect it and report it as the Latency metric. As per the JMeter glossary:
Latency.
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client
So you should be able to subtract latency from the overall response time and calculate the time required to process your request on the server side. However, it is much better if the JMeter load generators live on the same intranet. If you need to test your application's behavior when virtual users sit on different network types, that can also be simulated.
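The "different network types" simulation is commonly done by throttling JMeter's outgoing bandwidth via its characters-per-second properties; a sketch of a user.properties fragment, assuming the standard httpclient cps settings (the target speed here is an illustrative example):

```properties
# user.properties -- throttle JMeter's outgoing bandwidth to mimic a
# slower network type; cps = characters (bytes) per second.
# For roughly 2 Mbit/s (a 3G-like link): 2,000,000 bits / 8 = 250000 cps
httpclient.socket.http.cps=250000
httpclient.socket.https.cps=250000
```

Leaving the properties at 0 (the default) means no throttling, i.e. full line speed.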
In regards to other factors that matter:
Application under test health. You should be monitoring baseline server-side health metrics to identify whether the application server(s) get overloaded during the load test; for example, if you see high response times, the reason could be as simple as a lack of free RAM, a slow hard drive, or similar.
JMeter load generator(s) health. The same approach applies to the JMeter engine(s). If the JMeter hosts are overloaded, they cannot generate and send requests fast enough, which will be reported as reduced throughput.
You can use the PerfMon JMeter Plugin for both. See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for a detailed description of the plugin's installation and usage.
Your tests need to be realistic and represent virtual users as closely as possible to real ones. So make sure you:
Add Timers to your test plan to represent "think time"
If you are testing a web-based application, consider adding and properly configuring the HTTP Cookie Manager, HTTP Header Manager and HTTP Cache Manager. Also don't forget to configure HTTP Request Defaults to "Retrieve all embedded resources" and to use a "Parallel pool" for this.

It is not possible to download large files from the Jetty server

I ran a few download tests against a Jetty 9 server, making multiple simultaneous downloads of a single file of approximately 80 MB. With a smaller number of downloads, when the 55-second mark is not reached, they all usually complete; however, for any download still in progress after 55 seconds, the network flow simply stops and nothing more is transferred.
I have already tried setting Jetty's timeout and buffer, but this has not worked. Has anyone had this problem, or does anyone have suggestions on how to solve it? The same tests against IIS and Apache work very well. I use JMeter for the testing.
Marcus, maybe you are just hitting Jetty bug 472621?
Edit: The mentioned bug is a separate timeout in Jetty that applies to the total operation, not just idle time. So by setting the http.timeout property you essentially define a maximum time any download is allowed to take, which in turn may cause timeout errors for slow clients and/or large downloads.
Cheers,
momo
A timeout means your client isn't reading fast enough.
JMeter isn't reading the response data fast enough, so the connection sits idle long enough that it idle times out and disconnects.
We test with 800MB and 2GB files regularly.
Using HTTP/1.0, HTTP/1.1, and HTTP/2 protocols.
Using normal (plaintext) connections, and secured TLS connections.
With responses being delivered in as many Transfer-Encodings and Content-Encodings as we can think of (compressed, gzip, chunked, ranged, etc.).
We do all of these tests using our own test infrastructure, often spinning up many Amazon EC2 nodes to perform a load test that can sufficiently exercise the server (a typical test is 20 client nodes to 1 server node).
When testing large responses, you'll need to be aware of the protocol (HTTP/1.x vs HTTP/2) and how the persistence behavior of that protocol can change request/response latency. In the real world you won't have multiple large requests one after another on the same persisted connection via HTTP/1 (on HTTP/2, the multiple requests would be parallel and sent at the same time).
Be sure to set up your JMeter to use HTTP/1.1 and not to use persisted connections (see the JMeter documentation for help on that).
Also be aware of the bandwidth available for your testing; it's very common to blame a server (any server) for not performing fast enough when the test itself is sloppily set up and has expectations that far exceed the bandwidth of the network itself.
Next, don't test from a single machine; this sort of load test needs multiple machines (1 for the server, and 4+ for the clients).
Lastly, when load testing, you'll want to become intimately aware of your networking configurations on your server (and to a lesser extent, your client test machines) to maximize your network configuration for high load. Default configurations for OS's are rarely sufficient to handle proper load testing.
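The "client isn't reading fast enough" failure mode described above comes down to how promptly the response body is drained. A minimal Java sketch of the intended behavior (the buffer size and the stand-in input are illustrative assumptions):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainResponse {
    /**
     * Drain a (potentially large) response body in fixed-size chunks.
     * If the reader stalls -- e.g. the client does heavy per-chunk work --
     * the connection sits idle and the server's idle timeout can fire,
     * which is the failure mode described above.
     */
    static long drain(InputStream body) throws IOException {
        byte[] buf = new byte[64 * 1024];
        long total = 0;
        int n;
        while ((n = body.read(buf)) != -1) {
            total += n; // keep reading promptly; defer expensive processing
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the ~80 MB download from the question:
        byte[] fake = new byte[80 * 1024 * 1024];
        System.out.println(drain(new ByteArrayInputStream(fake)));
    }
}
```

In JMeter terms, anything that slows the sampler's consumption of the response (an overloaded load generator, verbose listeners, response-data storage) has the same stalling effect as a slow read loop here.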
