I know load tests should be run in non-GUI mode.
But when I run the tests with the following command:
jmeter -n -t server_load_test.jmx -l log_100u_5s_01.jtl
In non-GUI mode:
I get an average response time between 3 and 4 seconds, which of course is not acceptable.
In GUI mode:
I get an average response time of 100ms.
The test is really simple, it is just an HTTP request (GET) with 100 users in 5 seconds.
I would not have said anything if it was the other way around.
Which one should I trust?
My question is more: what is going on and how do I find the problem?
Non-GUI mode consumes way fewer resources than GUI mode, so I would recommend looking not only at Average Response Time but also at other important metrics; for example, check the delivered load in both scenarios, i.e.:
Active Threads Over Time and Transactions per Second listeners output (both are available via the JMeter Plugins project)
Generate the HTML Reporting Dashboard and compare the output for both scenarios (see the example after this list)
Use a 3rd-party analysis solution like JAnalyser or BM.Sense
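For the dashboard option, a minimal sketch using the question's own file names (the -e/-o flags need JMeter 3.0 or newer and the output folder must not already exist):
# non-GUI run that also generates the HTML Reporting Dashboard from the fresh results
jmeter -n -t server_load_test.jmx -l log_100u_5s_01.jtl -e -o report_100u_5s_01
You can also build the dashboard later from an existing results file with jmeter -g log_100u_5s_01.jtl -o report_100u_5s_01.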
My expectation is that in GUI mode you have a much slower ramp-up, hence you can run into the situation where some threads have already finished their work while others have not yet started. In non-GUI mode the ramp-up is faster, so you have more online users and the load delivered to your application under test is much higher.
Try increasing loop count and test duration to see how it goes.
Related
When I run JMeter from the Windows CLI, after some random time the tests get stopped or stuck. I can press Ctrl+C (one time) just to refresh the run, but part of the requests will be lost for the time it was stuck.
Take a look at the jmeter.log file; normally it should be possible to figure out what's wrong by looking at the messages there. If you don't see any suspicious entries, you can increase JMeter's logging verbosity by changing values in the log4j2.xml file or via the -L command-line parameter (see the sketch after this list).
Take a thread dump and see what exactly the threads are doing when they're "stuck".
If you're using HTTP Request samplers, be aware that JMeter will wait for the result forever; if the application fails to respond at all, your test will never end, so you need to set reasonable timeouts.
Make sure to follow JMeter Best Practices
Take a look at resource consumption (CPU, RAM, etc.); if your machine is overloaded and cannot produce the required load, you will need to switch to distributed testing.
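A minimal sketch of the logging and thread-dump hints above (the logging category and the process id are placeholders, adjust them to your case):
# raise logging verbosity for a single run without touching log4j2.xml
jmeter -n -t test_plan.jmx -l results.jtl -Ljmeter.protocol.http=DEBUG
# when the test looks stuck, capture a thread dump of the JMeter JVM (jstack ships with the JDK)
jstack <jmeter_pid> > jmeter_threads.txt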
There are several approaches to debugging a JMeter test which can be combined into a general, systematic approach that is capable of diagnosing most problems.
The first thing that I would suggest is running the test within the JMeter GUI to visualize the test execution. For this you may want to add a View Results Tree listener which will provide you with real time results from each request generated:
Another way you can monitor your test execution in real time within the JMeter GUI is with the Log Viewer. If any exceptions are encountered during your test execution you will see detailed output in this window. This can be found under the Options menu:
Beyond this, JMeter writes output files which are often very useful in debugging your load tests. Both the .log file and the .jtl file provide a time-stamped history of every action your test performs. From there you can likely track down the offending request or error if your test unexpectedly hangs.
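As a rough sketch, assuming the default CSV .jtl layout where the 8th column is the success flag, failed samples can be pulled out on the command line:
# keep the header line plus any sample whose success column is "false"
awk -F',' 'NR==1 || $8=="false"' resultsfile.jtl | head -n 20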
If you do decide to move your test into the cloud using a service that hosts your test, you may be able to ascertain more information through that platform. Here is a comprehensive example of how to debug JMeter load tests that covers the above approaches as well as more advanced concepts. If the problem is related to a performance bottleneck, a cloud load test provider can give your test additional network and machine resources beyond what your local machine offers.
I have an issue when running load tests using JMeter - the response time peaks every ~5 minutes. These response time peaks repeat in every run and for different processes, or even single endpoints.
Below is the response time graph for one of the endpoints I am testing. The graph shows merged results of 4 different runs and the response time peaks are present in all of them - repeating every ~5 minutes.
The test configuration is 100 users, ramp-up time 3500s and thread duration 3600s.
Response time graph
This can also be observed in response time vs threads graph:
Response time vs threads
This looks like some JMeter misconfiguration, but I couldn't find any relevant info for such repeating peaks.
Re-run your test with:
Monitoring your operating system metrics (CPU, RAM, Network, Disk, etc.), as the peaks might be caused by some periodic third-party activity. The majority of operating systems have monitoring toolchains out of the box; if yours doesn't, you can consider using the JMeter PerfMon Plugin
Doing the same for the JVM metrics using a tool like JVisualVM or equivalent (the aforementioned JMeter PerfMon Plugin can read the statistics via JMX), as the pattern you're getting looks like a full GC (see the sketch after this list)
Doing the same for the system under test, as it might be the case that JMeter works just fine and the problem is on the application side.
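A quick way to check the full GC theory, assuming you can reach the JMeter JVM with the standard JDK tools (the pid is a placeholder):
# heap occupancy and GC counters sampled every 5 seconds; a full GC every ~5 minutes shows up as FGC increments
jstat -gcutil <jmeter_pid> 5000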
If you confirm that the problem is in JMeter:
Make sure to follow JMeter Best Practices
If you have followed them already and the issue is still there, you might have to switch to Distributed Testing.
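For reference, a hedged sketch of starting a distributed run from the controller machine (the host names are placeholders and each of them must be running jmeter-server):
jmeter -n -t test_plan.jmx -R host1,host2 -l results.jtl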
I want to test 400 concurrent users, which would allow us to pass our load testing scenario. I am using the configuration settings below in Apache JMeter, which throws lots of errors.
Number of Thread (Users): 400
Ramp-Up Time: 1
Loop Count: Forever (for a duration of 1 minute)
We are not telepathic enough to tell what's wrong with your setup without seeing the configuration and the nature of the errors.
Several generic hints:
Run your test with 1-2 users/iterations to ensure it works fine and does what it is supposed to do. Check request and response details using the View Results Tree listener.
Make sure to run your test in command-line non-GUI mode and disable all the Listeners while your test is running.
It is better to increase and decrease the load gradually, so consider using a longer ramp-up time and increasing the test duration accordingly. For example:
During the first minute virtual users arrive
They then hold the load for another minute
During the last minute virtual users leave
This way you will be able to tell what the load was when the errors started occurring, what the maximum number of users your application can support is, where the saturation point is, whether it recovers when the load gets back to normal, etc. See the JMeter Ramp-Up - The Ultimate Guide article for more details.
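To experiment with these numbers without editing the test plan every time, one option is to read them from JMeter properties; a sketch assuming the Thread Group fields use the __P() function, e.g. ${__P(users)}, ${__P(rampup)} and ${__P(duration)}:
# 400 users arriving over 60 seconds, total test duration of 180 seconds
jmeter -n -t test_plan.jmx -l results.jtl -Jusers=400 -Jrampup=60 -Jduration=180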
It might be the case that you have found the bottleneck, i.e. your application fails to support 400 concurrent users; now you need to find the reason, which may be:
incorrect middleware configuration (wrong web server, database, load balancer settings)
your application simply lacks resources (CPU, RAM, Network, Swap, etc.). You can check this using the JMeter PerfMon Plugin (see the quick check after this list)
if the infrastructure configuration is OK and there is enough headroom for the application to operate, most probably the reason is in the application code; inspect what it is doing using an APM or profiler tool and report the issue.
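A quick, hedged check on a Linux application server while the test is running (a stop-gap rather than a replacement for the PerfMon Plugin):
# memory and swap usage
free -m
# CPU, run queue, swap-in/out and I/O wait, sampled every 5 seconds
vmstat 5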
I'm using JMeter 2.13 and something interesting is happening for which I need some help. I did two tests on a website with these settings:
test 1
users: 500
ramp-up time: 60
result: smooth connections
test 2
users: 500
ramp-up time: 120
result: Java crashes
All I know is that the Apache JMeter Java GUI crashes. I don't know how to troubleshoot what's causing the crash. I know there are some elements in the GUI you can configure to observe JMeter's health stats, e.g. threads, load, etc.
Also, on the settings where it crashes, a single request is made every 0.24 seconds.
On test 1, where it worked, this equates to a single request being made every 0.12 seconds.
If the calculations are correct then, theoretically speaking, it shouldn't crash, right? (because the difference is negligible)
The answer is simple: don't use GUI mode to run a JMeter test. Ever. Use GUI only for test development and debugging.
Running JMeter in non-GUI mode is fairly simple:
jmeter -n -t /path/to/your/testplan.jmx -l /path/to/resultsfile.jtl
Once the test has finished, you can open resultsfile.jtl with your favourite listener and analyse the test results.
For more JMeter-side performance tips and tricks see JMeter Performance and Tuning Tips guide.
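If the GUI crash was an out-of-memory error, the non-GUI JVM may also need more headroom for 500 users; a hedged sketch using the JVM_ARGS variable honoured by the jmeter startup scripts (the 4 GB heap is an assumption about your load generator's RAM):
# give the JMeter JVM a larger heap for this run, then execute the plan in non-GUI mode
JVM_ARGS="-Xms1g -Xmx4g" jmeter -n -t /path/to/your/testplan.jmx -l /path/to/resultsfile.jtl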
Here is the scenario
We are load testing a web application. The application is deployed on two VM servers with a hardware load balancer distributing the load.
There are two tools used here:
1. HP Load Runner (an expensive tool).
2. JMeter - free
JMeter was used by the development team to test with a huge number of users. It also does not have any licensing limit like LoadRunner.
How are the tests run?
A URL is invoked with some parameters; the web application reads the parameters, processes the results and generates a PDF file.
When running the test with LoadRunner we found that for a load of 1000 users spread over a period of 60 seconds, our application took 4 minutes to generate 1000 files.
Now when we pass the same URL through JMeter, 1000 users with a ramp-up time of 60 seconds, the application takes 1 minute and 15 seconds to generate 1000 files.
I am baffled as to why there is this huge difference in performance.
LoadRunner has the rstat daemon installed on both servers.
Any clues?
You really have four possibilities here:
You are measuring two different things. Check your timing record structure.
Your request and response information is different between the two tools. Check with Fiddler or Wireshark.
Your test environment's initial conditions are different, yielding different results. Testing 101 stuff, but quite often overlooked when tracking down issues like this.
You have an overloaded load generator in your LoadRunner environment which is causing all virtual users to slow down. For example, you may be logging everything, resulting in your file system becoming a bottleneck for the test. Deliberately underload your generators, reduce your logging levels and watch how you are using memory for correlations so you don't create a physical-memory-oversubscribed condition, which results in high swap activity.
As to the comment above about JMeter being faster: I have benchmarked both, and for very complex code the C-based solution for LoadRunner is faster in execution from iteration to iteration than the Java-based solution in JMeter. (Method: a complex algorithm for creating data files on the fly for upload for batch mortgage processing. P3, 800 MHz, 2 GB of RAM. LoadRunner: 1.8 million iterations per hour ungoverned for a single user. JMeter: 1.2 million.) Once you add in pacing, it is the response time of the server which is the determining factor for both.
It should be noted that LoadRunner tracks its internal API time to directly address accusations of the tool influencing the test results. If you open the results set database (.mdb or Microsoft SQL Server instance, as appropriate) and take a look at the [event meter] table, you will find a reference to "Wasted Time." The definition of wasted time can be found in the LoadRunner documentation.
Most likely the culprit is in HOW the scripts are structured.
Things to consider:
Think / wait time: when recording, JMeter does not automatically put in waits.
Items being requested: is JMeter ONLY requesting/downloading HTML pages while LoadRunner gets all embedded files?
Invalid responses: are all 1000 JMeter responses valid? If you have 1000 threads from a single desktop, I would suspect you killed JMeter and not all your responses were valid.
Don't forget that the testing application measures itself, since the arrival of the response is based on the testing machine's time. So from this perspective the answer could simply be that JMeter is faster.
The second thing to mention is the wait times mentioned by BlackGaff.
Always check results with the View Results Tree listener in JMeter.
And always put the testing application onto separate hardware to see real results, since the testing application itself loads the machine it runs on.