1) I cannot understand the Error % in the summary result in Listeners. 2) For example, the first time I run a test plan its error % is 90%, and when I run the same test plan again it shows 100% error. The error % varies every time I run my test plan.
Error% denotes the percent of requests with errors.
100% error means all the requests sent from JMeter have failed.
You should add a View Results Tree listener and then check the individual requests and responses. Such a high error percentage means that either your server is not available or all of your requests are invalid.
So use the View Results Tree listener to identify the actual issue.
Error % means how many requests failed or resulted in an error over the test duration. It is calculated based on the # Samples field.
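For example (hypothetical numbers): if 45 of 50 samples fail, either because an assertion fails or because the server returns an error status, the listener reports 45 / 50 = 90% error; if all 50 fail, it reports 100%.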
Regarding points 2 and 3: can you please give more details about your test plan, such as the number of threads, ramp-up and duration?
Such a high error percentage needs further analysis. Check whether you have missed correlating some requests (i.e., dynamic values that are passed from one request to another), or check the resource utilization of your target system to see whether it can handle the load you are generating.
I need to verify the TPS of one API.
The requirement is 6 TPS.
I ran a 6-user load with 1 s pacing for 1 hour.
An output snapshot is attached.
From the output, how do I verify that the API achieved 6 TPS?
Thanks in advance.
In your case the TPS for the Get_id transaction is 1.9 per second, so my expectation is that you either need to remove the pacing or increase the number of users, or both.
You can reach 6 TPS with 6 users only if the response time is 1 second (or less). Looking at your results it can be as high as 5.6 seconds, so either your server cannot serve 6 transactions per second or you just need to add more users.
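To put rough numbers on it (assuming the 1 s pacing is a fixed delay added after each iteration): each user completes one transaction roughly every response time + pacing seconds, so throughput ≈ users / (response time + pacing). With 6 users, 1 s pacing and an average response time of around 2 s that gives about 6 / 3 ≈ 2 TPS, which is in line with the observed 1.9. To reach 6 TPS under those conditions you would need roughly 6 TPS × 3 s ≈ 18 concurrent users, or a much lower response time.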
If you want to check the throughput automatically and fail the test when the expected number of transactions per second is not met, you can consider running your JMeter test using the Taurus tool as a wrapper. Taurus provides a flexible and powerful pass/fail criteria subsystem which can check multiple metrics and return a non-zero exit status code if the throughput is lower than you expect.
More information: Failure Criteria
I tried one API with 100 users. For 50 users I get a success response, but for the remaining 50 I get a 500 Internal Server Error. Why do half of the API calls fail? Please suggest a solution.
As per the 500 Internal Server Error description:
The HyperText Transfer Protocol (HTTP) 500 Internal Server Error server error response code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request.
So you need to look at your server logs to get the reason for the failure. Most probably the server becomes overloaded and cannot handle 100 users. Try increasing the load gradually and inspect the relationship between:
Number of users and number of requests per second
Number of users and response time
My expectation is that:
during the first phase of the test the response time will remain the same and the number of requests per second will grow proportionally to the number of users;
at some stage you will see that the number of requests per second stops growing. The moment right before that is known as the saturation point;
after that, the response time will start growing;
after that, errors will start occurring.
You might want to collect and report the aforementioned metrics and indicate what the current bottleneck is. If you also need to understand the reason, that is a whole different (and much bigger) story.
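If it helps, here is a rough sketch (my own, not an official JMeter tool, and assuming a CSV-format JTL with the default timeStamp, elapsed and allThreads columns) that groups results by the number of active threads and prints requests/second and average response time per concurrency level, so you can see where throughput stops growing:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;

public class SaturationCheck {
    public static void main(String[] args) throws Exception {
        // usage: java SaturationCheck results.jtl
        // users -> {sample count, total elapsed ms, first timestamp, last timestamp}
        Map<Integer, long[]> perUsers = new TreeMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String[] header = in.readLine().split(",");
            int tsCol = Arrays.asList(header).indexOf("timeStamp");
            int elapsedCol = Arrays.asList(header).indexOf("elapsed");
            int threadsCol = Arrays.asList(header).indexOf("allThreads");
            String line;
            while ((line = in.readLine()) != null) {
                // simplification: assumes no commas inside fields such as responseMessage
                String[] f = line.split(",");
                long ts = Long.parseLong(f[tsCol]);
                long elapsed = Long.parseLong(f[elapsedCol]);
                int users = Integer.parseInt(f[threadsCol]);
                long[] agg = perUsers.computeIfAbsent(users, k -> new long[]{0, 0, Long.MAX_VALUE, 0});
                agg[0]++;
                agg[1] += elapsed;
                agg[2] = Math.min(agg[2], ts);
                agg[3] = Math.max(agg[3], ts);
            }
        }
        System.out.println("users, requests/sec, avg response time (ms)");
        perUsers.forEach((users, a) -> {
            double seconds = Math.max(1.0, (a[3] - a[2]) / 1000.0);
            System.out.printf("%d, %.2f, %.1f%n", users, a[0] / seconds, (double) a[1] / a[0]);
        });
    }
}

The point at which the requests/second column flattens out while the number of users keeps increasing is your saturation point.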
In a LoadRunner report, failed transactions are excluded when calculating the average response time, but in JMeter failed transactions are included as well. I am a bit confused here. What is the best way to calculate the average response time? Should it include failed transactions or not? A detailed explanation would be highly appreciated.
It depends on where exactly your "transaction" failed.
If it reached the server, made a "hit" (or several hits), kicked off request processing and failed with a non-successful status code - I believe it should be included, as your load testing tool did trigger the request and it is the application under test which failed to respond properly or on time.
If the "transaction" didn't start at all due to missing test data or incorrect configuration of the load testing tool - it shouldn't be included. However, that also means your test is not correct and needs to be fixed.
So for well-behaved tests I would include everything in the report and maybe prepare 3 views:
Everything (with passed and failed transactions)
Successes only
Failures only
In JMeter you can use the Filter Results Tool to remove failed transactions from the final report; the tool can be installed using the JMeter Plugins Manager.
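As an alternative to filtering, here is a small sketch (my own, assuming a CSV-format JTL with the default elapsed and success columns) that produces the three views described above, i.e. the average response time over everything, over successes only and over failures only:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;

public class AvgByOutcome {
    public static void main(String[] args) throws Exception {
        // usage: java AvgByOutcome results.jtl
        long allCount = 0, allSum = 0, okCount = 0, okSum = 0, koCount = 0, koSum = 0;
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String[] header = in.readLine().split(",");
            int elapsedCol = Arrays.asList(header).indexOf("elapsed");
            int successCol = Arrays.asList(header).indexOf("success");
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(","); // simplification: assumes no commas inside fields
                long elapsed = Long.parseLong(f[elapsedCol]);
                boolean ok = Boolean.parseBoolean(f[successCol]);
                allCount++; allSum += elapsed;
                if (ok) { okCount++; okSum += elapsed; } else { koCount++; koSum += elapsed; }
            }
        }
        System.out.printf("All:       %d samples, avg %.1f ms%n", allCount, avg(allSum, allCount));
        System.out.printf("Successes: %d samples, avg %.1f ms%n", okCount, avg(okSum, okCount));
        System.out.printf("Failures:  %d samples, avg %.1f ms%n", koCount, avg(koSum, koCount));
    }

    private static double avg(long sum, long count) {
        return count == 0 ? 0 : (double) sum / count;
    }
}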
A failed transaction can be faster than one which passes. For example, a 4xx or 5xx status message may arrive back at the client almost instantaneously. Get enough of these errors and your average response time will drop considerably. In fact, if I were an unscrupulous tester, castigated for the level of failure in my tests, I might include a lot of "fast responses" in my data set to deliberately skew the response time so my stakeholders don't yell at me anymore.
Not that this ever happens.
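To make the effect concrete with made-up numbers: 90 successful samples at 2,000 ms plus 10 near-instant 5xx responses at 50 ms give an overall average of (90 × 2000 + 10 × 50) / 100 = 1,805 ms, which looks noticeably better than the 2,000 ms your real users actually experience.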
Getting the following error in JMeter while running a list of APIs (number of threads: 140, ramp-up period: 1).
Response code:500
Response message: Internal Server Error
How should I overcome this error response code in order to get an accurate response?
What should I do to decrease the number of responses with this response code?
In general a 500 is an unhandled condition on the part of a developer - usually on the backend, but the same kind of unhandled condition can exist on the performance testing tool's front end.
Ask yourself: are you validating the responses that come back from the server for appropriate content? I am not just suggesting that an HTTP 200 is valid. You need to check the response content to ensure it is what you expect and is valid for the business process, for you can have a completely valid HTTP 200-class page whose content will send your business process off the rails. If you do not handle the exception for the unexpected response, then one or two steps further down the business process you are pretty much guaranteed to hit a 500, because your request is completely out of context with the state of the application at that point.
Testing 101: for every step there is an expected, positive result which allows the business process to continue. Check for that result and branch your code when you do not find that the result is true.
Or, if this is a single-step business process, then you are likely handing the service poor data and the developer has not fully fleshed out the graceful handling of your poor data.
The general advice in JMeter is ramp-up = number of threads, in your case 140.
Start with Ramp-up = number of threads and adjust up or down as needed.
Currently you are starting a new thread every 1/140 of a second, which is almost simultaneous. The reason for the change is:
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test
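For example: with 140 threads and a 1 second ramp-up, a new thread starts every 1/140 ≈ 0.007 s, which is effectively all at once; with ramp-up = 140 seconds JMeter starts one thread per second, since the delay between thread starts is ramp-up / number of threads.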
A status code 500 comes from the server/API; it is not a JMeter issue. Sometimes concurrent requests are rejected by the server because it is too weak to handle that number of requests. In my case, I asked my server team to scale up the servers so that we could test the underlying API. It is also worth mentioning that sometimes JMeter itself runs out of memory; you can tweak the set HEAP=-Xms512m -Xmx512m property in the JMeter startup script (e.g. jmeter.bat). Also, listeners consume a lot of resources, so try not to use them during the load test.
We're running a load test right now in JMeter, watching the Aggregate Report page. While we watch, Samples are increasing by nearly 500/second; the number is going up very fast. However, Throughput on the same page stays pegged at 18/second and our error rate is not increasing.
How can JMeter be sending so many samples if our server is only handling 18/second and the number of errors is not increasing (we only have 20 errors out of millions of samples)?
Do requests equate to samples (they seem to)? Are we missing something?
If you add a "View Results Tree" Listener you can see EACH request and response - and you should check if the responses are what you actually want.
And in the "View Results in Table" Listener compare the Bytes for each response. Does it match the size in all cases?
In cases of errors or incorrect responses - these will be different.
Requests DO equal samples.
Throughput is the number of requests per unit of time (seconds, minutes, hours) sent to your server during the test. In the Aggregate Report the time unit is chosen automatically, so the value may be displayed per second, per minute or per hour.
Remember that almost all errors are user-defined. Following JoseK's recommendation, install the View Results Tree listener to see what your responses actually are. If they are green but fail your own criteria, add assertions to turn them into errors.
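As a minimal sketch (my own, not from the original answer): a JSR223 Assertion can mark an otherwise "green" sample as failed when the response body does not contain the value the business process needs. The snippet below uses Java-style syntax, which Groovy accepts; "orderId" is a hypothetical marker of a valid response, so replace it with whatever proves the step really succeeded:

// full response body of the current sampler
String body = SampleResult.getResponseDataAsString();
// "orderId" is a made-up token for illustration only
if (!body.contains("orderId")) {
    AssertionResult.setFailure(true);
    AssertionResult.setFailureMessage("HTTP status was fine, but the expected business payload was not found");
}

Once the assertion fails, the sample is counted as an error in every listener and report.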