Ubuntu: JMeter execution not ending properly at test completion
I have two questions:
1) I am running a 4500-user test from two machines (2250 + 2250). The duration of the test is 1 hr 30 min.
At the end the test was not closing, and it did not print a consolidated summary in the console.
2) After stopping the test with Ctrl+C, if I open the .jtl report from the master system it shows 2250 users in the Active Threads Over Time listener.
Pressing Ctrl+C is not enough to terminate the running process.
End your test by executing shutdown.sh (or .cmd, depending on your environment) to signal the running process to end its execution properly and gracefully. See here: http://jmeter.apache.org/usermanual/get-started.html
Edit: you can use stoptest.cmd|sh to force a stop.
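For example, a minimal sketch assuming a default JMeter installation on Linux (both scripts live in JMeter's bin directory and signal the running non-GUI instance):

./bin/shutdown.sh   # graceful: lets threads finish their current samplers
./bin/stoptest.sh   # forced: stops threads immediately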
Looking into your execution log file, my expectation is that you are kicking off 9000 threads. When it comes to remote testing, JMeter remote engines are absolutely independent, so each engine executes the Test Plan it receives from the master.
If you define 4500 threads in the Thread Group(s), it means that every JMeter slave will kick off 4500 virtual users.
The fact that JMeter doesn't respond to the shutdown event indicates that the JMeter engines are overloaded, so they cannot even shut down gracefully in a reasonable time. Make sure you monitor at least baseline CPU and memory stats of the JMeter load generators to ensure that they have enough headroom to operate: if they lack resources, it will result in increased response times (and the impossibility of shutting the test down, as you have seen).
So my recommendations are:
Make sure the load you deliver matches your expectations, as I am under the impression you are delivering twice as many users as intended (see the property-based sketch after this list)
Make sure you have monitoring of CPU, RAM, network and disk capacity on the JMeter load generators; you can use e.g. the JMeter PerfMon Plugin for this.
Make sure you're following JMeter Best Practices and the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article
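A common way to keep the total delivered load under control is to parameterize the Thread Group and pass the per-engine thread count from the master; a minimal sketch (the property name threadsPerEngine is an arbitrary choice):

# In the Thread Group, set "Number of Threads (users)" to: ${__P(threadsPerEngine,1)}
# Then start the distributed run from the master, passing the property
# to every remote engine with -G (2250 per engine = 4500 users in total):
jmeter -n -t plan.jmx -R engine1,engine2 -GthreadsPerEngine=2250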
Related
I am running a load test using JMeter with 200 users for approximately 1 hr. The observation is that a few threads are stuck even after the duration completes: around 60 out of 200 get stuck. When I take a thread dump, I observe that these threads are in a Runnable state. Any suggestions for resolving this issue? I do not see anything meaningful in the JMeter log file.
You will also find an unexpected increase in response times at the end of the test.
This is caused by insufficient thread ramp-down time. Some of your threads were still active and had made requests to the server but had not yet received the response when the threads were closed forcefully. If a JMeter test is stopped forcefully, all the active threads are closed immediately, so the requests generated by those threads will show higher response times.
You can use the Ultimate Thread Group to give threads a graceful shutdown (ramp-down) time, just like the ramp-up time.
Here is an example setting:
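For instance, one schedule row could look like this (field names are from the Ultimate Thread Group plugin; the values are an illustrative sketch for a 200-user, roughly 1 hr test):

Start Threads Count: 200
Initial Delay, sec:  0
Startup Time, sec:   60     # ramp-up
Hold Load For, sec:  3300   # steady state
Shutdown Time, sec:  60     # ramp-down: in-flight requests get time to complete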
This is not normal behaviour for a JMeter test. Most probably it indicates that either the JMeter engine is overloaded (not properly configured for high loads) or the machine where JMeter is running is overloaded (i.e. lacks RAM and starts swapping intensively).
Make sure to follow JMeter Best Practices (run your test in non-GUI mode, remove all Listeners and test elements you don't need, increase the JMeter heap size, etc.); a heap-size sketch follows this list
Make sure to monitor the essential health metrics of the machine where JMeter is running (CPU, RAM, network and disk IO, swap file usage). You can use the JMeter PerfMon Plugin for this if you don't have any better software
It might be the case that you'll have to switch to Distributed Testing. 200 virtual users doesn't seem like a "high" load to me, but it depends on what exactly these users are doing; if they're uploading/downloading large files it may be enough to cause the problems
Going forward, consider adding the thread dump and the jmeter.log file contents to your question; as it stands it doesn't contain any clues, so we can only come up with "blind shot" answers
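For the heap-size point above, a minimal sketch assuming a Linux installation (recent JMeter versions read the HEAP variable in the startup script; adjust the values to your machine):

HEAP="-Xms1g -Xmx4g" ./bin/jmeter -n -t test.jmx -l results.jtl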
You may want to check your HTTP timeouts.
I usually set the Connect Timeout to 5000 milliseconds and the Response Timeout to 30000.
Your values may vary for your specific environment/application.
This way, if things go bad on the server under test, all requests terminate within the timeout (with errors).
You also have to consider that, if you are retrieving an HTML page with all its embedded objects and the web server is stuck, you need to wait for multiple timeouts to expire before the operation terminates.
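These timeouts can be set once for the whole plan in an HTTP Request Defaults configuration element (Advanced tab); a sketch with the values above:

Connect (ms):  5000    # max time to establish the TCP connection
Response (ms): 30000   # max time to wait for the complete response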
I have come across a situation where I can't decide which scenario would be best. I have written my test in JMeter as follows:
I have one test plan that runs tests consecutively.
I have 4 thread groups and each thread group has the following properties:
No. of threads: 8000
Ramp-up period: 60 sec
Loop count: 10
Same user on each iteration: true
I was getting connection errors and connection timeout errors.
So, to make it work, when I test from localhost (the same machine) I have to set a response timeout of 1800000 ms, whereas when I run the same test against a remote server I have to set a response timeout of 3600000 ms.
Can someone please advise:
Is it a good idea to include a response timeout? Is there any other issue I should look for instead of setting a response timeout? Is it a symptom of another issue?
Can I improve the test without using a response timeout?
First of all, a couple of questions for you:
Do you really think that a real user will wait for 1 hour to get a response from the application?
How did you arrive at these 8000 users?
Now recommendations:
Never have JMeter and the system under test on the same machine; they are both resource-intensive and will start competing for CPU cycles, memory pages, etc., and you won't be able to tell whether JMeter is not capable of sending requests fast enough or your application cannot respond properly
Follow JMeter Best Practices
Although the number of virtual users you can simulate with JMeter is very high, it is limited by the machine/operating system resources, so make sure JMeter has enough headroom to operate in terms of CPU, RAM, network and disk IO. These metrics can be checked using the JMeter PerfMon Plugin. Once you notice that any of the monitored metrics starts exceeding, say, 80% of the total available capacity, note how many users are online: this is how many you can simulate from this machine for this test. If you need more, go for Distributed Testing (a basic sketch follows this list)
The same goes for the system under test: if you hit its resource consumption limits you need either to upgrade the hardware or to deploy your application in clustered mode (if it supports such a mode)
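If you do end up needing Distributed Testing, the basic flow looks like this (a sketch assuming default ports and two example generator hosts):

# on each load generator:
./bin/jmeter-server

# on the controller, pointing at the generators:
jmeter -n -t plan.jmx -R 192.168.0.10,192.168.0.11 -l results.jtl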
I want to test with 400 concurrent users, which should allow us to pass our load-testing scenario. I am using the configuration below in Apache JMeter, which throws lots of errors.
Number of Threads (Users): 400
Ramp-Up Time: 1
Loop Count: Forever, for a duration of 1 minute
We are not telepathic enough to tell what's wrong with your setup without seeing the configuration and the nature of the errors.
Several generic hints:
Run your test with 1-2 users/iterations first to ensure it works and does what it is supposed to do. Check request and response details using the View Results Tree listener.
Make sure to run your real test in command-line non-GUI mode and disable all the Listeners while the test is running.
It is better to increase and decrease the load gradually, so consider using a longer ramp-up time and increasing the test duration accordingly. For example:
During the first minute virtual users arrive
They then hold the load for another minute
During the last minute virtual users leave
This way you will be able to tell what the load was when the errors started occurring, what the maximum number of users your application can support is, where the saturation point is, whether it recovers when the load gets back to normal, etc. See the JMeter Ramp-Up - The Ultimate Guide article for more details, and the schedule sketch below.
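One way to express this minute-up/minute-hold/minute-down profile is the Ultimate Thread Group plugin; a sketch of one schedule row (values are illustrative):

Start Threads Count: 400
Startup Time, sec:   60   # virtual users arrive during the first minute
Hold Load For, sec:  60   # load is held for another minute
Shutdown Time, sec:  60   # virtual users leave during the last minute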
It might be the case that you have found the bottleneck, i.e. your application fails to support 400 concurrent users. Now you need to find the reason, which may be:
incorrect middleware configuration (wrong web server, database, load balancer settings)
your application simply lacks resources (CPU, RAM, Network, Swap, etc.). You can check this using JMeter PerfMon Plugin
if the infrastructure configuration is OK and there is enough headroom for the application to operate, most probably the reason is in the application code; you need to inspect what it is doing using APM or profiler tools and report the issue.
I am running a 24 hr load test in JMeter with 3256 threads, but even after 28 hrs some of the threads keep running and do not ramp down. There are several errors in the run.
Even when I choose to stop the threads, the "Shutting down all the threads, please be patient" pop-up appears and stays forever, and no threads ramp down.
For your information: Number of threads: 3256; Ramp-up period: 300; Loop count: 192.
Considering all the think/wait time in the script, the scenario should run for 24 hrs.
How can I close all the threads forcefully?
The following options are available:
JMeter listens for shutdown messages on port 4445. There are 2 scripts in the /bin folder of your JMeter installation:
shutdown.cmd(sh) - sends a graceful shutdown request to all threads
stoptest.cmd(sh) - force-stops the threads
Use the Test Action sampler's "Stop Now" option for "All Threads"
Use a Beanshell Sampler with the following code:
SampleResult.setStopTestNow(true); // stops the whole test immediately, without waiting for samplers to complete
However, that way you can get lots of errors in your test results, caused by the forced shutdown of test threads.
Actually, I think the behaviour you're experiencing is caused by a lack of resources on your load generator (JMeter) side. Try following the recommendations from the JMeter Performance and Tuning Tips guide to see if it helps (you don't need to wait the full 24 h; it is enough to wait until all threads have ramped up).
If adjusting the JMeter parameters doesn't help, it looks like you'll need to consider distributed testing and generate the load from more than one host.
I have created a test plan for creating a user profile.
I want to run my test plan for 100 users. When I run it for 10 users it runs successfully with a ramp-up time of 2 sec, but when I try it with 100 users or more it fails, even though I give a ramp-up time of 40 sec for 100 users.
I am not able to understand what the problem may be.
In my test plan the thread users are differentiated by id.
Thanks in advance.
It's a broad question; this behaviour can be caused by:
Your application under test can't handle the load of 100 threads. Check the logs for errors and make sure that the application/web server and/or database configuration allows 100+ concurrent connections. You can also check the "Latency" metric to see whether the problem is in the infrastructure or in the application itself.
Your load generator machine can't create 100 concurrent threads. If so, you'll need to consider JMeter Distributed Testing.
Your script isn't optimized, i.e. it uses memory-consuming listeners like "View Results Tree", graph listeners, or regular expression extractors. Try following the JMeter Performance and Tuning Tips guide and see whether that resolves your issue.
Agree with Dmitri, the reason could be one of the above three.
One more thing you can try:
You can run JMeter in GUI mode to validate your script, and after validation run it in non-GUI mode, which will save a lot of memory and CPU processing (the GUI is basically the heaviest part of JMeter).
You can run your JMeter script in non-GUI mode like this (test.jmx stands for your test plan; -H and -P are only needed if you go through a proxy):
jmeter -n -t test.jmx -H proxy_host -P proxy_port
Generally, on a single dual-core machine with 2 GB RAM (the load generator in your case), a 100-user test can be carried out successfully.
Some more things you can look at to find the actual bottleneck:
1. Check the application server logs (the server on which your application is hosted). If there are any failures there, look at the performance counters on the server (CPU, memory, network, etc.) to see whether anything is overloaded (if the server runs Windows, check with perfmon; if Linux, try sar - see the sketch after this list). If something is overloaded, the reason is that your app server can't take the load of 100 users; try tuning it further.
2. Check the load generator's system performance counters (JVM heap usage, CPU, memory, etc.). If the JVM heap size is small, try increasing it; but if the other counters are overloaded, try distributed load testing.
3. Remove unwanted/heavy listeners and assertions from the script.
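For the monitoring steps above, a quick sketch of commands on Linux (sar comes from the sysstat package, jstat ships with the JDK; the intervals and PID are illustrative):

sar -u 5 12                       # CPU utilisation, every 5 s, 12 samples
sar -r 5 12                       # memory usage
sar -n DEV 5 12                   # network throughput per interface
jstat -gcutil <jmeter_pid> 5000   # JVM heap/GC utilisation of the JMeter process, every 5 s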
maybe this will help :)