JMeter - Saturation point

How can I know the critical point where the system breaks?
Analyzing the results is the toughest part of JMeter for me. I fail to judge it because the results and listeners show something different every time.
Can anyone suggest what I should do so that I can confidently say "this website crashes at 500 users" or "it gives no response after a certain point"?
I also have a problem configuring the threads: what combination should I enter in the Thread Group?
I have to report this further and be able to explain it.

Reporting is JMeter's Achilles' heel. You can use the JMeter Plugins project, which provides:
Ultimate Thread Group - simplifies load scenario definition
Active Threads Over Time - displays the number of active threads as the test runs
Server Hits Per Second - shows how many requests per second your threads generated
You can also consider the Taurus tool, which simplifies configuring and executing JMeter tests and has rich reporting capabilities.
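For example, a minimal Taurus run over an existing JMX script might look like the lines below (existing.jmx is a placeholder name; the -report option, which additionally uploads results to an interactive online report, is optional):

# run an existing JMeter test plan through Taurus; a live console dashboard is shown by default
bzt existing.jmx -report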

Related

JMeter CLI stops tests after some time. Any ideas?

When I run JMeter from the Windows command line, after some random time the tests stop or get stuck. I can press Ctrl+C (once) to get the run going again, but part of the requests are lost during the time it was stuck.
Take a look at the jmeter.log file; normally it should be possible to figure out what's wrong from the messages there. If you don't see any suspicious entries, you can increase JMeter's logging verbosity by changing values in the log4j2.xml file or via the -L command-line parameter (see the sketch after this list).
Take a thread dump and see what exactly the threads are doing when they're "stuck".
If you're using HTTP Request samplers, be aware that by default JMeter will wait for the response forever; if the application fails to respond at all, your test will never end, so set reasonable timeouts.
Make sure to follow JMeter Best Practices.
Take a look at resource consumption (CPU, RAM, etc.); if your machine is overloaded and cannot produce the required load, you will need to switch to distributed testing.
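A rough sketch of the first two points, assuming a non-GUI run of a plan called test.jmx and using <pid> as a stand-in for the JMeter process id (the logger names and levels are only examples):

# start the test with increased logging verbosity for selected categories
jmeter -n -t test.jmx -l results.jtl -Ljmeter.engine=DEBUG -Lorg.apache.http=DEBUG

# take a thread dump of the running JMeter JVM (requires a JDK on the load generator)
jstack <pid> > jmeter-threads.txt

For the timeouts, the simplest approach is to set the Connect and Response timeout fields once in an HTTP Request Defaults configuration element so that they apply to every HTTP Request sampler in the plan.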
There are several approaches to debugging a JMeter test which can be combined into a general systematic approach capable of diagnosing most problems.
The first thing I would suggest is running the test within the JMeter GUI to visualize the test execution. For this you may want to add a View Results Tree listener, which will show you real-time results from each request generated.
Another way to monitor your test execution in real time within the JMeter GUI is the Log Viewer. If any exceptions are encountered during your test execution, you will see detailed output in this window. It can be found under the Options menu.
Beyond this, JMeter writes output files which are often very useful in debugging your load tests. Both the .log file and the .jtl file provide a time-stamped history of every action your test performs. From there you can usually track down the offending request or error if your test unexpectedly hangs.
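If the .jtl is written in the default CSV format, a quick way to see whether anything failed is to search the two files (a hedged sketch; the exact .jtl column layout depends on your saveservice settings):

# warnings and errors that JMeter itself logged
grep -iE "warn|error|exception" jmeter.log | head

# samplers whose success flag is false in a CSV results file
grep ",false," results.jtl | head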
If you do decide to move your test into the cloud using a service that hosts your test, you may be able to get more information through that platform. Here is a comprehensive example of how to debug JMeter load tests that covers the above approaches as well as more advanced concepts. Using a cloud load testing provider can also give your test additional network and machine resources beyond what your local machine offers, which helps if the problem is a resource bottleneck on the load generator.

Apache JMeter Concurrent Users Performance Testing

I want to test 400 concurrent users so we can pass our load testing scenario. I am using the configuration below in Apache JMeter, and it throws lots of errors.
Number of Threads (Users): 400
Ramp-Up Time: 1
Loop Count: Forever (for a duration of 1 minute)
We are not telepathic enough to tell what's wrong with your setup without seeing the configuration and the nature of the errors.
Several generic hints:
Run your test with 1-2 users/iterations to ensure it works fine and does what it is supposed to do. Check request and response details using the View Results Tree listener.
Make sure to run your test in command-line non-GUI mode and disable all Listeners while your test is running.
It is better to increase and decrease the load gradually, so consider using a longer ramp-up time and increase the test duration accordingly, i.e.:
During the first minute virtual users arrive
They then hold the load for another minute
During the last minute virtual users leave
This way you will be able to tell what the load was when errors started occurring, what the maximum number of users your application can support is, where the saturation point is, whether it recovers when the load goes back to normal, etc. See the JMeter Ramp-Up - The Ultimate Guide article for more details; a minimal command-line sketch follows below.
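A minimal non-GUI sketch of such a run, assuming the Thread Group fields reference JMeter properties via the __P() function (the property names threads, rampup and duration are placeholders chosen for this example):

# 400 users arriving over 60 seconds, total test length 180 seconds, results written to results.jtl
jmeter -n -t test.jmx -Jthreads=400 -Jrampup=60 -Jduration=180 -l results.jtl

Inside the Thread Group you would then put ${__P(threads)}, ${__P(rampup)} and ${__P(duration)} into the Number of Threads, Ramp-Up Period and Duration fields; the gradual ramp-down at the end is easier to model with the Ultimate Thread Group from JMeter Plugins.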
It might be the case that you have found the bottleneck, i.e. your application fails to support 400 concurrent users; now you need to find the reason, which may be:
incorrect middleware configuration (wrong web server, database or load balancer settings)
your application simply lacks resources (CPU, RAM, network, swap, etc.) - you can check this using the JMeter PerfMon Plugin (see the sketch after this list)
if the infrastructure configuration is OK and there is enough headroom for the application to operate, most probably the reason is in the application code; you need to inspect what it is doing using APM or profiler tools and report the issue.
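A rough sketch of collecting PerfMon metrics, assuming you copy the plugin's ServerAgent to the machine under test (the archive name below is an example; check the jmeter-plugins site for the current version):

# on the application server: unpack and start the PerfMon ServerAgent (listens on port 4444 by default)
unzip ServerAgent-2.2.3.zip -d serveragent
cd serveragent && ./startAgent.sh

In the test plan you would then add the jp@gc - PerfMon Metrics Collector listener, point it at the server's host and port, and pick the metrics (CPU, Memory, Network I/O, etc.) you want to chart.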

JMeter Execution Report Analysis

I executed a JMeter script via BlazeMeter and got a 2% error rate. Is that an acceptable rate?
In the detailed report I observed that 1353 requests failed. When I run the script for just one user, none of the requests fail. So are these failures due to a performance issue?
Following is a summary of the report. Kindly help me to analyse this.
Most probably your application simply cannot handle the load of 50 concurrent users. With regards to "acceptable" - we don't know. If you're load testing the fan page of your local hip hop star, even a 90% error rate may be acceptable. If you're testing an algorithm which will be deployed on a Mars rover and will have to work without errors or modifications for 20 years, it is not.
Normally the maximum response time, minimum throughput, acceptable number of errors, etc. are defined in an SLA or NFR. If you don't have those, are performing some form of stress testing of your application and want to figure out the root cause of the performance bottleneck, take the next steps:
Check your application log file(s); they should contain some information regarding the failures.
Check the status message and code in the .jtl results file. Sometimes it also makes sense to "tell" JMeter to save response data for failed samplers by adding the next lines to the user.properties file (a command-line alternative is sketched after this list):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data.on_error=true
Make sure you add the load gradually; this way you will be able to correlate the increasing error rate with the increasing number of users and determine the exact point in time when the first error occurred.
Get used to monitoring whether your application under test has enough headroom to operate in terms of CPU, RAM, network, disk, etc. This can be done using the JMeter PerfMon Plugin.
If you are able to read and understand the code in the language your application is written in, it would be beneficial to run your test with profiler tool telemetry enabled; this is probably the most efficient way to identify the performance problem in your application.
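As an alternative to editing user.properties, the same two properties from point 2 can be passed per run on the command line with -J, for example:

jmeter -n -t test.jmx -l results.jtl -Jjmeter.save.saveservice.output_format=xml -Jjmeter.save.saveservice.response_data.on_error=true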

JMeter: all other threads are stuck when one thread gets no response

I am new to JMeter.
I am doing load testing on a web application using the recording feature in JMeter.
The issue is: say I configure 100 threads with a 100 s ramp-up time in the Thread Group, for 50 consecutive web requests (the sequence of the web application flow).
If the server does not respond at the 25th request (out of 50) of the 45th thread (out of 100), the test gets stuck at that point and does not send requests for the remaining 55 threads.
What should I do? Is there any other method to initiate the threads?
It is not sending the threads for a number of possible reasons:
1. You need to check JMeter's memory footprint.
2. The server you are targeting will only accept a certain number of threads (connections).
There are other causes as well.
If each thread takes x amount of time to process, then with n threads the processor is busy for n times x.
If your target server can only process, say, 40 requests at a time (assuming a capacity of 40), then the 41st request only gets its chance once at least one of the previous requests has been processed and has released its thread.
Too many threads may also cause STUCK or BLOCKED threads on the server side; in that case we see either no response or an error code. Try stopping the threads and you will see all the remaining ones as failed requests.
JMeter shouldn't normally behave like you describe. Check the jmeter.log file; it usually has enough information to get to the bottom of the problem.
It looks like you're trying to run the load test using the JMeter GUI. If that's the case, please don't: JMeter is not designed for producing high load in GUI mode.
Run your test in command-line non-GUI mode
Delete or disable Listeners, if any
Increase the JVM heap size; JMeter ships with a very small value by default (see the sketch after this list)
Follow the other recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article
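A sketch of the first and third points, assuming a Linux/macOS load generator: the JMeter start-up script honours a HEAP environment variable, and the sizes below are only examples that must fit within the machine's physical RAM:

# give the JMeter JVM a larger heap for this run, then execute the plan without the GUI
HEAP="-Xms1g -Xmx4g" jmeter -n -t test.jmx -l results.jtl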

Why the difference in output when load testing with JMeter vs HP LoadRunner?

Here is the scenario.
We are load testing a web application. The application is deployed on two VM servers with a hardware load balancer distributing the load.
There are two tools used here:
1. HP LoadRunner (an expensive tool).
2. JMeter - free.
JMeter was used by the development team to test with a huge number of users. It also does not have any licensing limit like LoadRunner.
How are the tests run?
A URL is invoked with some parameters; the web application reads the parameters, processes the results and generates a PDF file.
When running the test we found that for a load of 1000 users spread over a period of 60 seconds, our application took 4 minutes to generate the 1000 files.
Now when we pass the same URL through JMeter, with 1000 users and a ramp-up time of 60 seconds, the application takes 1 minute and 15 seconds to generate the 1000 files.
I am baffled as to why there is this huge difference in performance.
LoadRunner has the rstat daemon installed on both servers.
Any clues?
You really have four possibilities here:
You are measuring two different things. Check your timing record structure.
Your request and response information is different between the two tools. Check with Fiddler or Wireshark (see the capture sketch after this list).
Your test environment initial conditions are different yielding different results. Test 101 stuff, but quite often overlooked in tracking down issues like this.
You have an overloaded load generator in your LoadRunner environment which is causing all virtual users to slow down. For example, you may be logging everything, so your file system becomes a bottleneck for the test. Deliberately underload your generators, reduce your logging levels and watch how you use memory for correlations so you don't create a physical-memory oversubscription that results in high swap activity.
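Regarding the second possibility, a hedged sketch of capturing what each tool actually sends so the requests can be compared (port 80 is assumed; adjust the filter for HTTPS or other ports, and open the resulting .pcap files in Wireshark):

# run once while LoadRunner drives the load and once while JMeter does, then diff the captures
tcpdump -i any -w jmeter-run.pcap 'tcp port 80'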
As to the comment above about JMeter being faster: I have benchmarked both, and for very complex code the C-based solution in LoadRunner executes faster from iteration to iteration than the Java-based solution in JMeter. (Method: a complex algorithm for creating data files on the fly for upload for batch mortgage processing. P3, 800 MHz, 2 GB of RAM. LoadRunner: 1.8 million iterations per hour ungoverned for a single user. JMeter: 1.2 million.) Once you add in pacing, it is the response time of the server that is the determining factor for both.
It should be noted that LoadRunner tracks its internal API time to directly address accusations of the tool influencing the test results. If you open the results database (.mdb or Microsoft SQL Server instance, as appropriate) and look at the [event meter] table, you will find a reference to "Wasted Time". The definition of wasted time can be found in the LoadRunner documentation.
Most likely the culprit is in HOW the scripts are structured.
Things to consider:
Think / wait time: when recording, JMeter does not automatically put in waits.
Items being requested: is JMeter ONLY requesting/downloading HTML pages while LoadRunner gets all embedded files?
Invalid responses: are all 1000 JMeter responses valid? If you have 1000 threads from a single desktop, I would suspect you killed JMeter and not all your responses were valid.
Don't forget that the testing application measures itself, since the arrival of the response is timed on the testing machine's clock. So from this perspective the answer could simply be that JMeter is faster.
The second thing to mention is the wait times mentioned by BlackGaff.
Always check results with the View Results Tree listener in JMeter.
And always put the testing application on separate hardware to see real results, since the testing application itself loads its host machine.
