I'm running a JMeter load test from the command prompt for about one hour, but the test is not ending at the one-hour mark; it is taking more time than one hour.
I'm getting errors in the JMeter logs for the failed transactions:
Non HTTP response code: java.net.SocketException
Non HTTP response message: Unexpected end of file from server
Please advise. Is it because some threads are hanging, and how can I rectify it?
Thank you
Most probably you didn't set the Connect and Response (read) timeouts in the HTTP Request Defaults.
So if your server hangs, JMeter will wait indefinitely for the response.
Besides that, ensure you set correct values in your Scheduler.
If the issue persists, please give more details with a screenshot.
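The usual place to set these is the "Advanced" tab of HTTP Request Defaults (Connect and Response timeouts, in milliseconds). If you would rather script it, here is a minimal sketch of a JSR223 PreProcessor (Groovy) that applies the same timeouts to each HTTP sampler it is attached to; the 5000/30000 values and the HTTPSamplerBase setter names are assumptions to verify against your JMeter version:
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase;

// "sampler" is the standard JSR223 binding for the current sampler;
// only HTTP samplers expose the connect/response timeout setters
if (sampler instanceof HTTPSamplerBase) {
    HTTPSamplerBase http = (HTTPSamplerBase) sampler;
    http.setConnectTimeout("5000");    // example: fail if the TCP connection takes longer than 5 s
    http.setResponseTimeout("30000");  // example: fail if the full response takes longer than 30 s
}
In practice the GUI fields are usually enough; the script is only handy when you want to drive the values from properties or variables.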
By default JMeter "asks" threads to stop and waits for their graceful shutdown. If threads cannot be stopped in a timely fashion you have the following options:
Implement a "hard stop", i.e. instead of "asking" threads to stop you need to "tell" them to stop. It can be done using i.e. Test Action sampler with Stop now action. (remember that in this case JMeter won't wait for graceful shutdown and you will get extra errors connected with abnormal threads termination)
The fact that JMeter threads are stuck may indicate a problem with your application, so I would recommend checking baseline health metrics on the application under test side, for example with the JMeter PerfMon Plugin. If there is enough headroom but the application still works slowly or doesn't work at all, it might indicate a bottleneck in your application code or infrastructure.
Also make sure you are following JMeter Best Practices and that your JMeter load generator(s) have enough CPU, RAM, etc., as in some cases JMeter can report "false negative" results due to a lack of hardware resources.
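Here is a minimal sketch of the scripted "hard stop" mentioned above: a JSR223 element (Groovy) that forces the test to stop once it has been running longer than the planned duration plus a grace period. The one-hour and five-minute figures are example assumptions; ctx, props and log are the standard JSR223 bindings:
// planned duration (1 h) plus a 5-minute grace period - adjust to your scenario
long plannedMs = 60 * 60 * 1000L;
long graceMs   = 5 * 60 * 1000L;

// TESTSTART.MS is a standard JMeter property holding the test start time in milliseconds
long startedMs = Long.parseLong(props.get("TESTSTART.MS").toString());

if (System.currentTimeMillis() - startedMs > plannedMs + graceMs) {
    log.warn("Test overran its planned duration - forcing an immediate stop");
    ctx.getEngine().stopTest(true); // hard stop, no graceful shutdown
}
Attached as, for instance, a JSR223 PostProcessor at test plan level it runs after every sampler, so it acts as a watchdog for a test that refuses to end.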
I ran a performance test with a load of 25 users, with JMeter integrated with Jenkins. Once the build is triggered, all my thread groups complete, but the console process keeps running for close to 2 hours.
image here
In your image I see 500 users, not 25.
Also I see that 498 users have finished and 2 are still running.
I cannot state for sure why they are still running; it's more a question for you. I can think of the following reasons:
You're not following JMeter Best Practices
JMeter lacks resources and got stuck, e.g. in endless garbage collection (which is a subset of point 1)
JMeter is waiting for a response from the server and the server fails to respond. By default JMeter's HTTP Request samplers don't have any timeout defined, which means JMeter will wait forever, so make sure to set reasonable connect/response timeouts. The setting lives under the "Advanced" tab of the HTTP Request sampler (or better, use HTTP Request Defaults so you can set the timeouts for all the HTTP Request samplers in one shot)
If nothing helps, you can check what exactly the threads are doing and where they're stuck by taking a thread dump
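If running jcmd or jstack against the JMeter process is not an option, a rough alternative is to dump all thread stacks from inside JMeter itself with a JSR223 element using plain Java APIs. This is only an illustrative sketch; the output goes to jmeter.log via the standard log binding:
import java.util.Map;

// Log the name, state and stack trace of every live thread in the JMeter JVM
Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
StringBuilder dump = new StringBuilder("==== JMeter thread dump ====\n");
for (Thread t : stacks.keySet()) {
    dump.append(t.getName()).append(" [").append(t.getState()).append("]\n");
    for (StackTraceElement frame : stacks.get(t)) {
        dump.append("    at ").append(frame).append("\n");
    }
}
log.info(dump.toString());
Threads stuck in socket reads typically show up in java.net or HttpClient frames, which points back at missing timeouts.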
I am running a load test using JMeter with 200 users for approximately 1 hour. The observation is that a few threads are stuck even after the duration completes: around 60 out of 200 get stuck. When I take a thread dump, I see that these threads are in a Runnable state. Any suggestions for resolving this issue? I do not see anything meaningful in the JMeter log file.
You will also find an unexpected increase in response time towards the end of the test.
This happens because the threads have insufficient ramp-down time: some of your threads were still active and had sent requests to the server, but were closed forcefully before receiving the response. If your JMeter test is stopped forcefully, all active threads are closed immediately, so the requests generated by those threads will show a higher response time.
You can use the Ultimate Thread Group to give the threads a graceful shutdown (ramp-down) time, just like the ramp-up time.
Here is an example setting:
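(The original screenshot is not available; the values below are an illustrative schedule for a 200-user, roughly one-hour test, using the Ultimate Thread Group row fields. Adjust them to your own scenario.)
Start Threads Count: 200
Initial Delay, sec:  0
Startup Time, sec:   300    (ramp-up)
Hold Load For, sec:  3000   (steady state)
Shutdown Time, sec:  300    (graceful ramp-down)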
This is not normal behaviour for a JMeter test; most probably it indicates that either the JMeter engine is overloaded (not properly configured for high loads) or the machine where JMeter is running is overloaded (e.g. lacks RAM and starts intensive swapping)
Make sure to follow JMeter Best Practices (run your test in non-GUI mode, remove all Listeners and test elements you don't need, increase JMeter heap size, etc.)
Make sure to monitor the essential health metrics of the machine where JMeter is running (CPU, RAM, Network and Disk IO, swap usage). You can use the JMeter PerfMon Plugin for this if you don't have any better software; a quick self-check of the JMeter JVM itself is sketched after this list
It might be the case that you'll have to switch to Distributed Testing. 200 virtual users doesn't seem like a "high" load to me, but it depends on what exactly these users are doing; if they're uploading/downloading large files it may be enough to cause these problems
Going forward, consider adding the thread dump and the jmeter.log file contents to your question, as it currently doesn't contain any clues and we can only come up with "blind shot" answers
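As a quick way to see whether the JMeter JVM itself is short on heap (as opposed to the machine as a whole), you can log its memory usage from a JSR223 element. This is only an illustrative self-check using plain Java APIs, not a replacement for proper monitoring:
// Log the current heap usage of the JMeter JVM itself
Runtime rt = Runtime.getRuntime();
long mb = 1024L * 1024L;
long usedMb = (long) ((rt.totalMemory() - rt.freeMemory()) / mb); // casts keep the figures whole when run as Groovy
long maxMb  = (long) (rt.maxMemory() / mb);
log.info("JMeter heap: " + usedMb + " MB used of " + maxMb + " MB max");
If the used value sits close to the maximum for long periods, the engine is most likely spending its time in garbage collection rather than sending requests.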
You may want to check your HTTP timeouts.
I usually set the Connect Timeout to 5000 milliseconds and the Response Timeout to 30000.
Your values may vary for your specific environment/ application.
This way, if things go bad on the server under test, all requests terminate within the timeout (with errors).
You also have to consider that, if you are retrieving an HTML page with all its embedded objects and the web server is stuck, you may need to wait for multiple timeouts to expire before the operation terminates; for example, a page with 10 embedded resources fetched one after another could take up to roughly 10 x 30 seconds to fail.
I want to test 400 concurrent users, which would allow us to pass our load testing scenario. I am using the configuration below in Apache JMeter, and it throws lots of errors.
Number of Threads (Users): 400
Ramp-Up Time: 1
Loop Count: Forever (for a period of 1 minute)
We are not telepathic enough to tell what's wrong with your setup without seeing the configuration and the nature of errors.
Several generic hints:
Run your test with 1-2 users/iterations to ensure it works fine and does what it is supposed to do. Check the request and response details using the View Results Tree listener
Make sure to run your test in command-line non-GUI mode and disable all the Listeners while your test is running.
It is better to increase and decrease the load gradually, so consider using a longer ramp-up time and increasing the test duration accordingly (an example schedule is sketched after these hints). For instance:
During the first minute virtual users arrive
They then hold the load for another minute
During the last minute virtual users leave
This way you will be able to tell what the load was when the errors started occurring, what the maximum number of users your application can support is, where the saturation point is, whether it recovers when the load gets back to normal, etc. See the JMeter Ramp-Up - The Ultimate Guide article for more details.
It might be the case that you have found the bottleneck, i.e. your application fails to support 400 concurrent users. Now you need to find the reason, which may be:
incorrect middleware configuration (wrong web server, database, load balancer settings)
your application simply lacks resources (CPU, RAM, Network, Swap, etc.). You can check this using JMeter PerfMon Plugin
if the infrastructure configuration is OK and there is enough headroom for the application to operate, most probably the reason is in the application code; you need to inspect what it is doing using APM or profiler tools and report the issue.
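For illustration, the one-minute ramp-up / hold / ramp-down profile described in the hints above could be expressed as a single Ultimate Thread Group row; the values are an example for the 400-user case:
Start Threads Count: 400
Initial Delay, sec:  0
Startup Time, sec:   60
Hold Load For, sec:  60
Shutdown Time, sec:  60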
I am new to JMeter.
I am doing load testing on a web application using the recording feature in JMeter.
The issue is: say I configure 100 threads with a 100 s ramp-up time in the Thread Group for 50 consecutive web requests (the sequence of the web application flow).
If the server is not responding at the 25th request (out of 50) of the 45th thread (out of 100), JMeter gets stuck at that point and does not send requests for the remaining 55 threads.
What should I do? Is there any other method to initiate the threads?
JMeter may stop sending requests for several reasons, among them:
1. You need to check JMeter's memory footprint.
2. The server you are targeting will only accept a limited number of concurrent requests.
If each request takes x amount of time to process, then with n concurrent threads the server is busy for that time.
If your target server can only process, say, 40 requests at a time (assuming a capacity of 40), then the 41st request only gets a chance once at least one of the previous requests has been processed and its thread released.
Too many threads might cause STUCK or BLOCKED threads on the server side; in that case you see neither a response nor an error code. If you stop the threads, all the remaining ones are reported as failed requests.
JMeter shouldn't normally behave the way you described. Check the jmeter.log file; it usually has enough information to get to the bottom of the problem.
It looks like you're trying to run the load test using the JMeter GUI. If that's the case, please don't: JMeter is not designed for producing high load in GUI mode.
Run your test in command-line mode
Delete or disable Listeners if any
Increase the JVM heap size; JMeter ships with a very small value by default (see the example after this list).
Follow other recommendations from 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article
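For reference, the heap is controlled by the HEAP variable in the JMeter startup scripts (jmeter.bat / jmeter.sh). The exact default differs between JMeter versions and the figures below are only an example, so size it to your load and the RAM actually available: e.g. set HEAP=-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m in jmeter.bat, or export the same value as an environment variable before launching jmeter.sh (recent versions respect a pre-set HEAP; older ones may require editing the script itself).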
I am running a 24-hour load test in JMeter with 3256 threads, but even after 28 hours some of the threads keep running and do not ramp down. There are several errors in the run.
Even when I choose to stop the threads, the "Shutting down all the threads, please be patient" pop-up appears and stays forever, and no threads ramp down.
For your information: Number of Threads: 3256; Ramp-up Period: 300; Loop Count: 192.
Considering all the think/wait time in the script, the scenario should run for 24 hours.
How can I close all the threads forcefully?
The following options are available:
JMeter listens for shutdown messages on port 4445. There are 2 scripts in the /bin folder of your JMeter installation:
shutdown.cmd (.sh) - sends a graceful shutdown request to all threads
stoptest.cmd (.sh) - forces the threads to stop
Use the Test Action sampler's "Stop Now" option for "All Threads"
Use a Beanshell Sampler with the following code:
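// flag a hard stop: the test terminates immediately, without waiting for the other threads to finish gracefully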
SampleResult.setStopTestNow(true);
However, this way you can get lots of errors caused by the forced shutdown of the test threads, and these errors will end up in your test results.
Actually, I think the behaviour you're experiencing is caused by a lack of resources on the load generator (JMeter) side. Try following the recommendations from the JMeter Performance and Tuning Tips guide to see if it helps (you don't need to wait the whole 24 hours; it is enough to wait until all the threads are ramped up).
If adjusting the JMeter parameters doesn't help, it looks like you'll need to consider distributed testing and generate the load from more than one host.
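If you do go the distributed route, the basic flow is to start the jmeter-server script on every load generator and then point the controller at them with the -R option, e.g. jmeter -n -t plan.jmx -R generator1,generator2 -l results.jtl (the plan name and host names here are placeholders). Keep in mind that each generator runs the full thread count defined in the plan, so size the total load accordingly.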