Response code 500 in JMeter when running with threads - jmeter

Getting the following error in JMeter while running a list of APIs (number of threads: 1-140, ramp-up period: 1).
Response code: 500
Response message: Internal Server Error
How should I overcome this error response code in order to get an accurate response?
What should I do to decrease the number of responses with this response code?

In general a 500 is an unhandled exception on the part of a developer, usually on the backend but sometimes on the performance testing tool front end.
Ask yourself: are you validating that the responses coming back from the server contain appropriate content? I am not just suggesting that an HTTP 200 is valid. You need to check the response content to ensure it is what you expect for the business process, because you can have a completely valid HTTP 200-class page whose content will send your business process off the rails. If you do not handle the exception from the unexpected response, then one or two steps down the road in the business process you are pretty much guaranteed to find a 500, because your request is completely out of context with the state of the application at that point.
Testing 101: for every step there is an expected, positive result which allows the business process to continue. Check for that result and branch your code when you do not find it.
Or, if this is a single-step business process, then you are likely handing the service poor data and the developer has not fully fleshed out the graceful handling of that poor data.
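The "check for the expected result and branch" advice can be sketched in plain Java (the `orderId` marker is a hypothetical example of business content a next step might depend on; substitute whatever your own process requires):

```java
// Minimal sketch: validate response content, not just the status code.
public class ResponseCheck {

    // True only when the step returned a 2xx status AND the content
    // the next step of the business process depends on.
    static boolean stepSucceeded(int statusCode, String body, String expectedMarker) {
        return statusCode >= 200 && statusCode < 300
                && body != null
                && body.contains(expectedMarker);
    }

    public static void main(String[] args) {
        // A 200 page can still carry an error message in the body:
        String body = "<html>We are sorry, please try again later</html>";
        if (!stepSucceeded(200, body, "orderId")) {
            System.out.println("Branch: abort or restart the business process");
        }
    }
}
```

In JMeter the equivalent check is typically a Response Assertion (or a JSR223 Assertion) on each sampler, combined with an If Controller to branch when the assertion fails.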

The general advice in JMeter is ramp-up = number of threads, in your case 140.
Start with ramp-up = number of threads and adjust up or down as needed.
Currently you are starting a new thread every 1/140 of a second, which is almost simultaneous. The reason for the change is:
Ramp-up needs to be long enough to avoid too large a workload at the start of a test.
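The arithmetic behind that advice is simple: JMeter spaces thread starts evenly across the ramp-up period, so the interval between starts is ramp-up / threads. A small sketch with the numbers from the question:

```java
// Sketch of JMeter's thread-start spacing: ramp-up seconds / thread count.
public class RampUp {

    // Interval in seconds between consecutive thread starts.
    static double startIntervalSeconds(int rampUpSeconds, int threads) {
        return (double) rampUpSeconds / threads;
    }

    public static void main(String[] args) {
        // Current plan: 140 threads over 1 second -> a new thread every ~7 ms.
        System.out.printf("%.4f s%n", startIntervalSeconds(1, 140));
        // Suggested: ramp-up = threads -> one new thread per second.
        System.out.printf("%.1f s%n", startIntervalSeconds(140, 140));
    }
}
```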

Status code 500 comes from the server/API and is not an issue with JMeter. Sometimes concurrent requests are rejected by the server because it is too weak to handle that number of requests. In my case, I asked my server team to scale up the servers so that we could test the underlying API. It's worth mentioning that sometimes JMeter itself also runs out of memory; you can tweak the HEAP=-Xms512m -Xmx512m property in the JMeter startup script. Also, listeners consume a lot of resources, so try not to use them during the test run.

Related

Should average response time include failed transactions or not?

In the LoadRunner report, failed transactions are excluded when calculating average response time, but JMeter includes failed transactions as well. I am a bit confused here. What is the best way to calculate average response time? Should it include failed transactions or not? Detailed explanations will be highly appreciated.
It depends on where exactly your "transaction" failed.
If it reached the server, made a "hit" (or several hits), kicked off request processing, and failed with a non-successful status code - I believe it should be included, as your load testing tool triggered the request and it is the application under test which failed to respond properly or on time.
If the "transaction" didn't start due to missing test data or incorrect configuration of the load testing tool - it shouldn't be included. However, that means your test is not correct and needs to be fixed.
So for well-behaved tests I would include everything in the report and maybe prepare 3 views:
Everything (with passed and failed transactions)
Successes only
Failures only
In JMeter you can use the Filter Results Tool to remove failed transactions from the final report; the tool can be installed using the JMeter Plugins Manager.
A failed transaction can be faster than one which passes. For example, a 4xx or 5xx status message may arrive almost instantaneously back at the client. Get enough of these errors and your average response time will drop considerably. In fact, if I were an unscrupulous tester, castigated for the level of failure in my tests, I might include a lot of "fast responses" in my data set to deliberately skew the response time so my stakeholders don't yell at me anymore.
Not that this ever happens.
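The skew is easy to demonstrate with made-up numbers (illustrative values only): three passing samples around 1 second each, plus three near-instant 5xx failures, cut the reported average roughly in half.

```java
import java.util.stream.DoubleStream;

// Illustration of how fast failures drag down an average response time.
public class AverageViews {

    // Plain arithmetic mean over a set of sample times (ms).
    static double avg(double... samples) {
        return DoubleStream.of(samples).average().orElse(0);
    }

    public static void main(String[] args) {
        // "Successes only" view: (900 + 1000 + 1100) / 3 = 1000 ms.
        System.out.println(avg(900, 1000, 1100));
        // "Everything" view: three 5 ms failures halve the average.
        System.out.println(avg(900, 1000, 1100, 5, 5, 5));
    }
}
```

This is exactly why keeping separate "everything", "successes only", and "failures only" views is useful.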

Tomcat unexpected maximum response time for a request when load testing is done using jmeter

I have a Spring Boot application with a POST endpoint which takes the request, sends it to another service, gets the response back, saves it to a MongoDB database, and returns the response to the user. The application is deployed on Spring Boot's embedded Tomcat. I am using JMeter to see the max response time, throughput, etc.
When I ran a test from JMeter with 500 threads for 10 minutes, I got a maximum time of around 3500 ms.
When I repeat the test from JMeter, the maximum time drops to around 900 ms.
Again, if I run the test after a long time, the maximum goes back up to around 3500 ms.
I am not able to find any information regarding this behavior of Tomcat.
Could you please help me understand this behavior of Tomcat?
What do you mean by "unexpected"? Lower response time when you repeat the test can be explained by your application implementation: when you start a load test against an application which has just been deployed, its performance might not be optimal, and when you repeat the test the cache is "warmed up" so you get better performance.
Another explanation could be JIT optimization: the JVM analyzes your application's usage pattern and makes internal improvements to the bytecode to better serve the given load pattern.
A third possible explanation is MongoDB caching: if 500 users are sending the same requests, the database may keep the result sets in memory, so when you repeat the test it doesn't actually access the storage but returns the results directly from memory, which is fast and cheap. Consider properly parameterizing your JMeter test so each thread (virtual user) uses its own credentials and performs a different query than the other threads. Keep in mind, though, that the test needs to be repeatable, so don't use unique data each time; it's better to have a sufficient set of pre-defined test data.
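The "varied but repeatable" idea can be sketched as follows (the user names are hypothetical placeholders; in JMeter itself this is typically a CSV Data Set Config with recycle-on-EOF enabled):

```java
// Sketch: each virtual user draws from a fixed, pre-defined data set,
// so queries differ between threads but repeat identically across runs.
public class TestData {

    // Hypothetical pre-defined data set (would normally come from a CSV).
    static final String[] USERS = {"user1", "user2", "user3"};

    // Deterministic mapping from thread number to test data: every run
    // of the test uses the same data, keeping results comparable.
    static String userFor(int threadNumber) {
        return USERS[threadNumber % USERS.length];
    }

    public static void main(String[] args) {
        for (int t = 0; t < 5; t++) {
            System.out.println("thread " + t + " -> " + userFor(t));
        }
    }
}
```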

Why is JMeter Result is different to User Experience Result?

We are currently conducting performance tests on both web apps that we have; one is running within a private network and the other is accessible to all. For both apps, a single page load of the landing or initial page takes only 2-3 seconds from a user's point of view, but when we use Blaze and JMeter, the results are between 15-20 seconds. Am I missing something? The 15-20 second result comes from the Load time/Sample time in JMeter, and from the Elapsed column if exported to .csv. Please help, as I'm stuck.
We have tried conducting tests on multiple PCs within the office premises, along with a PC remotely accessed at another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user.
Where a delta exists, it is certain that two different items are being timed. It would help to understand, on your front end, whether you are timing to a standard metric, such as w3c domComplete, time to interactive, first contentful paint, or some other point, and then compare where that comes into play in the drilldown on the Performance tab of Chrome. Odds are that a lot is occurring that is not visible to you but is being captured by JMeter.
You might also look for other threads on here about how JMeter operates compared to a "real browser". There are differences which could come into play and affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components where you do not have permission to test their servers.
I can think of 2 possible causes:
Clear your browser history, especially the browser cache. It might be the case that you're getting HTTP Status 304 for all requests in the browser because responses are being returned from the browser cache and no actual requests are being made, while JMeter always uses a "clean" session.
Pay attention to Connect Time and Latency metrics as it might be the case the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
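Using that decomposition with illustrative numbers (not measured values) shows how a fast server can still produce a slow Elapsed time when the network path is poor:

```java
// Rough illustration of the decomposition quoted above; all values in ms
// and purely hypothetical.
public class ElapsedBreakdown {

    // Elapsed time per the answer's formula.
    static long elapsed(long connectTimeMs, long latencyMs, long serverProcessingMs) {
        return connectTimeMs + latencyMs + serverProcessingMs;
    }

    public static void main(String[] args) {
        // Server processing is only 200 ms, but a slow connect (e.g. a long
        // SSL handshake) plus high latency inflates Elapsed to ~15 s,
        // matching the kind of gap described in the question.
        System.out.println(elapsed(3000, 12000, 200) + " ms");
    }
}
```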
In general given:
the same machine
clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page

Does JMeter show the correct average response time for the first page it hits for many virtual users?

I'm load testing a system with 500 virtual users. I've set the "Ramp-Up period (in seconds)" option to zero. So, as I understand it, JMeter will hit the system with 500 virtual users all at the same time. Please correct me if I'm wrong here.
Now, the summary report shows the average response time for the first page is ~100 seconds, which is more than a minute and a half of wait time! But while JMeter was running, I manually went to the same page/URL using a browser and didn't have to wait that long. It was not even close; the page response was almost immediate for me.
My question is: is there any known issue for the average response time of the first page? Is it JMeter which is taking long to trigger that many users?
Thanks in advance.
--Ishtiaque
There is no issue in JMeter related to first-page response time.
The Summary Report shows all response time details in milliseconds; regarding that value of "100" seconds, have you converted milliseconds to seconds?
Also, in order to make sure that all 500 users hit the server concurrently, use a Synchronizing Timer.
Hope this will help.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connections, it will start 500 threads as well.
Ramping over a period of time is more realistic loadwise, but still not really indicative of server capability until the threads have all started and settled in.
Databases can also require a settling in period which can affect response times.
An alternative to ramping is introducing a random wait at the start of each thread before firing the first sample. You can then choose not to ramp over time, but you should still expect resources on the client to suddenly come under load, and change the settings if you hit limits. This will make the entire run much more representative of typical behaviour. However, you need to determine whether your use cases are typical.
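The random-initial-wait idea can be sketched like this (the 30-second window is an arbitrary choice for illustration; in JMeter itself a Random Timer or a JSR223 sleep at the top of the thread group achieves the same thing):

```java
import java.util.Random;

// Sketch: give each thread a random delay before its first sample so
// 500 threads don't all fire at the same instant.
public class StaggeredStart {

    // Random delay in [0, maxDelayMs).
    static long initialDelayMs(Random rnd, long maxDelayMs) {
        return (long) (rnd.nextDouble() * maxDelayMs);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed only for a repeatable demo
        long delay = initialDelayMs(rnd, 30_000);
        System.out.println("sleeping " + delay + " ms before first sample");
        // Thread.sleep(delay); // ...then fire the first request
    }
}
```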
Although the heap size was increased, I noticed the reported time was still longer than the actual response time. Later I realised it was the probe effect (the extra time a tool adds due to test execution).

Within a thread, are JMeter HTTP request/responses done sequentially?

I'm trying to understand the basics of JMeter. I've got a "plus1" Java servlet that adds one to a request parameter and returns the result, so it's a fast test servlet just so I can understand load testing.
Here's my test plan:
Thread Group: 1 thread, ramp up 1 s, loop count 10000
HTTP Request to localhost
Graph Results
Summary Report
When I run this, the summary report shows a throughput number of 200/sec or so.
The key question is: with no controllers in the test plan, is JMeter running the test plan (sending a single request) and waiting for the response before looping?
When I introduce a more computationally intensive page for the request, the throughput number goes down as I would expect.
In short, yes.
There is an argument for having a sampler that would make a request and not wait for the response but it's an edge case. In most cases you would want a testing tool to wait to see what happens and verify things. It's also more realistic, most users will wait for a response, in fact they generally have to, before making subsequent calls.
If you want to run a capacity test then the best approach, I think, is to spread the load over multiple threads and actually throttle the throughput of each one - you can do this using a Constant Throughput Timer. E.g. you could have 500 threads each running at 60 requests per minute, which would give a total load of 500 requests/sec. This way, your test load is predictable and stable - it won't be linked to the speed of response from the server. Note: with multiple threads you'll want a ramp-up period, and you might find you have to spread the test over multiple machines (known as 'distributed' testing if you're going to google it).
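Both numbers above follow from simple arithmetic, sketched here: a single sequential thread is bounded by round-trip time, while a throttled multi-thread plan gives a predictable total rate.

```java
// Throughput arithmetic behind the answers above (illustrative).
public class ThroughputMath {

    // A single thread running samplers sequentially is bounded by the
    // average round-trip time of each request.
    static double singleThreadRps(double avgResponseMs) {
        return 1000.0 / avgResponseMs;
    }

    // Total load from threads each throttled to a per-minute rate,
    // as with a Constant Throughput Timer.
    static double throttledTotalRps(int threads, double requestsPerMinutePerThread) {
        return threads * requestsPerMinutePerThread / 60.0;
    }

    public static void main(String[] args) {
        // ~5 ms per round trip -> ~200 req/s, matching the reported summary.
        System.out.println(singleThreadRps(5));
        // 500 threads at 60 req/min each -> 500 req/s total.
        System.out.println(throttledTotalRps(500, 60));
    }
}
```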
