I am doing some benchmark testing. For this, I have to increase the number of users to 3000, keeping the ramp-up time at 100 seconds and the loop count at 1.
Somehow JMeter is giving the errors below:
Response code: Non HTTP response code: org.apache.http.conn.HttpHostConnectException
Response message: Non HTTP response message: Connection to https://the-homepage-I-am-testing.net refused
There is no such thing as "JMeter 5.5" yet; the latest version as of now is JMeter 5.1.
Looking into the error details, it looks like a bottleneck on the application under test side, so you need to inspect what is causing these connection refusals:
inspect your application logs for any suspicious entries; it might be a matter of a maximum thread pool setting which is not sufficient
inspect your application server / database configuration, as the same kind of limit may be set at the middleware level
inspect the OS configuration for the same reason, as the maximum number of open file handles may be too low for the OS to allocate a socket to serve the connection (see the sketch after this list)
make sure to monitor baseline health metrics; the application under test should have enough headroom to operate in terms of CPU, RAM, etc. You can use the JMeter PerfMon Plugin for this
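As a quick illustration of the OS-level check above, on a Linux host you could look at the file-descriptor and socket limits with something like the following (a minimal sketch; exact commands and healthy values depend on your distribution and application):

    # maximum open file descriptors for the current shell/user
    ulimit -n
    # system-wide file handle limit
    cat /proc/sys/fs/file-max
    # range of local ports available for outgoing connections
    sysctl net.ipv4.ip_local_port_range
    # summary of sockets currently in use
    ss -s

If these numbers are close to being exhausted under load, raising them (e.g. via limits.conf or sysctl) is usually the first thing to try.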
Also, looking into your configuration, it doesn't necessarily produce 3000 concurrent users, as you have only 1 iteration. Given the 100-second ramp-up, it may be that some users have already finished test execution while others have not yet been started. Double check that you really deliver the anticipated load using, for example, the Active Threads Over Time listener.
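For illustration only: with 3000 users, a 100-second ramp-up and 1 loop, roughly 30 new users start every second. If a single iteration takes, say, 10 seconds (an assumed value), then on average only about 30 × 10 = 300 users are active at any given moment, which is roughly a tenth of the intended concurrency.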
Getting a 503 error while running JMeter with a thread group of 400 users. Is it because of server issues? When I run the thread group with 100 users and a ramp-up period of 25 seconds it works fine, but with 400 users it gives a 503 error.
Given that you don't experience any issues with 100 users but do with 400 users, most probably it's a server issue connected with overload, so congratulations on finding the bottleneck.
You can either report it as is or perform a somewhat deeper investigation in order to find the cause. Suggested steps:
Instead of kicking off 400 users at once, try increasing the load gradually while looking at the Response Times vs Threads and Transaction Throughput vs Threads charts (see the sketch after these steps). Ideally, response time should remain the same and throughput should grow as the number of threads increases. When response time starts increasing and throughput starts decreasing, you have reached the saturation point, and at this stage you can state that this is the maximum number of users your application can support
Check your application logs and configuration, as the application might not be properly tuned for high loads; you can use 15 Simple ASP.NET Performance Tuning Tips as a reference or look for a similar guide for your application's technology stack
Ensure that your application has enough headroom to operate in terms of CPU, RAM, Network, etc., as it might simply be a lack of resources; this can be checked using, for example, the JMeter PerfMon Plugin
Repeat your test with profiler tool telemetry in place; this way you will be able to localize the problem and state where the problematic piece of code or inefficient algorithm lives.
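As a sketch of the gradual approach from step 1, you could externalise the thread count and ramp-up as properties and stretch the ramp-up so the load grows slowly; the property names users and rampup below are just examples, adjust them to your plan:

    Thread Group > Number of Threads: ${__P(users,100)}
    Thread Group > Ramp-up period:    ${__P(rampup,25)}

    jmeter -n -t test.jmx -Jusers=400 -Jrampup=1200 -l results.jtl

With a 1200-second ramp-up the 400 users arrive over 20 minutes, which makes it much easier to see at which load the 503 errors begin.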
If the server isn't down or being restarted, then yes, a 503 indicates overload.
Common causes are a server that is down for maintenance or one that is overloaded.
You need to find out what stops the server from serving 400 concurrent requests/users.
Notice that if you are testing on a test environment which isn't equal or similar to the production environment, it may not reflect the load that the production server can endure.
We are executing a test of an upload scenario where we are aware that the response time will be more than 5 minutes. Hence we have configured the timeout in HTTP Request Defaults as well as in the HTTP Request as 3600000 milliseconds. But we are still getting a SocketException in the upload transaction. Could you please suggest how to handle this?
SocketException doesn't necessarily mean "timeout"; it indicates that JMeter is not able to create or access a socket connection. There are many possible reasons; the most common are:
The network configuration of your server doesn't allow as many connections as you're trying to open; check the maximum number of open connections at the application server and operating system level.
Your application server is overloaded and cannot handle such a big load. Make sure it has enough headroom to operate in terms of CPU, RAM and especially network metrics (these can be monitored using the JMeter PerfMon Plugin)
You might be experiencing the behaviour described in the JMeterSocketClosed article (see the properties sketch after this list)
Basically the same as points 1 and 2, but this time you need to check JMeter's own health; make sure you're following JMeter Best Practices and maybe even consider going for distributed testing
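If point 3 applies, a common mitigation from that article is to let the HTTP client retry a dropped connection once and to discard idle connections sooner. A minimal user.properties sketch (the values are illustrative; check the documentation for your JMeter version):

    # retry a request once if the connection was dropped by the other side
    httpclient4.retrycount=1
    # close connections that have been idle longer than this (milliseconds)
    httpclient4.idletimeout=3000
    # maximum lifetime of a kept-alive connection (milliseconds)
    httpclient4.time_to_live=60000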
I am running a load test using JMeter with 200 users for approximately 1 hour. The observation is that a few threads are stuck even after the duration completes, like 60 out of 200. When I take a thread dump I observe that these threads are in a RUNNABLE state. Any suggestions for resolving this issue? I do not see anything meaningful in the JMeter log file.
You will also find an unexpected increase in response time at the end of the test duration.
This is because of insufficient ramp-down time for the threads. Some of your threads were still active and had made requests to the server but had not yet received the response when they were closed forcefully. If your JMeter test is stopped forcefully, all the active threads are closed immediately, so the requests generated by those threads will show higher response times.
You can use the Ultimate Thread Group to give the threads a graceful shutdown time (ramp-down time), just like the ramp-up time.
Here is an example setting:
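For a 200-user, roughly one-hour test, a single Ultimate Thread Group row along these lines would give the threads time to finish gracefully (the numbers are illustrative):

    Start Threads Count: 200
    Initial Delay, sec:  0
    Startup Time, sec:   60
    Hold Load For, sec:  3480
    Shutdown Time, sec:  60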
This is not normal behaviour for a JMeter test; most probably it indicates that either the JMeter engine is overloaded (not properly configured for high loads) or the machine where JMeter is running is overloaded (i.e. it lacks RAM and starts swapping intensively).
Make sure to follow JMeter Best Practices (run your test in non-GUI mode, remove all Listeners and test elements you don't need, increase the JMeter heap size, etc.); see the command sketch after this list
Make sure to monitor the essential health metrics of the machine where JMeter is running (CPU, RAM, network and disk IO, swap file usage). You can use the JMeter PerfMon Plugin for this if you don't have any better software
It might be the case that you'll have to switch to Distributed Testing. 200 virtual users doesn't seem like a "high" load to me, but it depends on what exactly these users are doing; if they're uploading or downloading large files it may be enough to cause the problems
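For reference, on Linux/macOS a non-GUI run with an increased heap might look like this (the 4 GB heap, the test_plan.jmx name and the report folder are only placeholders):

    # HEAP is picked up by the jmeter startup script
    HEAP="-Xms1g -Xmx4g" ./jmeter -n -t test_plan.jmx -l results.jtl -e -o report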
Going forward, consider adding the thread dump and the JMeter log file contents to your question; as it stands it doesn't contain any clues, so we can only come up with "blind shot" answers
You may want to check your HTTP timeouts.
I usually set the Connect Timeout to 5000 milliseconds and the Response Timeout to 30000.
Your values may vary for your specific environment/application.
This way, if things go bad on the server under test, all requests terminate within the timeout (with errors).
You also have to consider that if you are retrieving an HTML page with all its embedded objects and the web server is stuck, you need to wait for multiple timeouts to expire before the operation terminates.
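For illustration, with a 30000 ms response timeout and an assumed page that references 10 embedded objects fetched sequentially from a stuck server, the sampler could take up to roughly 10 × 30 s = 5 minutes to fail completely (less if you use the parallel download pool for embedded resources).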
I have come across a situation where I can't decide which scenario would be best. I have written my test in JMeter as follows:
I have one test plan that runs the tests consecutively.
I have 4 thread groups and each thread group has the following properties:
Number of threads: 8000
Ramp-up period: 60 sec
Loop count: 10
Same user on each iteration: true
I was having connection errors and connection timeout errors.
So, to make it work, when I test from localhost (the same machine) I have to set a response timeout of 1800000 ms, whereas when I run the same test against a remote server I have to set a response timeout of 3600000 ms.
Can someone please advise:
Is it a good idea to include a response timeout? Is there any other issue I should look for instead of including a response timeout? Is it a sign of another issue?
Can I improve the test without using a response timeout?
First of all, a couple of questions for you:
Do you really think that a real user will wait for 1 hour to get a response from the application?
How did you come up with these 8000 users?
Now the recommendations:
Never have JMeter and the system under test on the same machine; they are both resource intensive and will start struggling for CPU cycles, memory pages, etc., and you won't be able to tell whether JMeter is not capable of sending requests fast enough or your application cannot respond properly
Follow JMeter Best Practices
Although the number of virtual users you can simulate with JMeter is very high, it is limited by the machine/operating system resources, so make sure JMeter has enough headroom to operate in terms of CPU, RAM, network and disk IO. These metrics can be checked using the JMeter PerfMon Plugin. Once you notice that any of the monitored metrics starts exceeding, say, 80% of the total available capacity, note how many users are online; this is how many you can simulate from this machine for this test. If you need more, go for Distributed Testing (see the command sketch after this list)
The same goes for the system under test: if you hit its resource consumption limits you need either to upgrade the hardware or to deploy your application in clustered mode (if it supports such a mode)
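If you do end up needing distributed testing, the controller machine can drive remote JMeter engines roughly like this (the host names are placeholders; each engine must be running jmeter-server and have the same plugins and test data available):

    # run the plan from the controller against two remote load generators
    jmeter -n -t plan.jmx -R engine1.example.com,engine2.example.com -l results.jtl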
I want to test 400 concurrent users, which would allow us to pass our load testing scenario. I am using the configuration settings below in Apache JMeter, which gives us lots of errors.
Number of Threads (Users): 400
Ramp-Up Time: 1
Loop Count: Forever (for a duration of 1 minute)
We are not telepathic enough to tell what's wrong with your setup without seeing the configuration and the nature of the errors.
Several generic hints:
Run your test with 1-2 users/iterations to ensure it works fine and does what it is supposed to do. Check the request and response details using the View Results Tree listener
Make sure to run your real test in command-line non-GUI mode and to disable all Listeners while the test is running.
It is better to increase and decrease the load gradually, so consider using a longer ramp-up time and increasing the test duration accordingly, e.g.:
During the first minute virtual users arrive
They then hold the load for another minute
During the last minute virtual users leave
This way you will be able to tell what the load was when the errors started occurring, what the maximum number of users your application can support is, where the saturation point is, whether the application recovers when the load gets back to normal, etc. See the JMeter Ramp-Up - The Ultimate Guide article for more details, and the schedule sketch below.
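One way to get that arrive / hold / leave shape for 400 users is a single Ultimate Thread Group row, for example (a sketch; the Concurrency Thread Group can model the first two phases as well):

    Start Threads Count: 400
    Initial Delay, sec:  0
    Startup Time, sec:   60
    Hold Load For, sec:  60
    Shutdown Time, sec:  60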
It might be the case that you have found the bottleneck, i.e. your application fails to support 400 concurrent users. Now you need to find the reason, which may be:
incorrect middleware configuration (wrong web server, database or load balancer settings)
your application simply lacks resources (CPU, RAM, network, swap, etc.); you can check this using the JMeter PerfMon Plugin
if the infrastructure configuration is OK and there is enough headroom for the application to operate, most probably the reason is in the application code; you need to inspect what it is doing using APM or profiler tools and report the issue.