JMeter response times much larger than the requests' latencies - caching

JMeter machines with versions: 2.13 r13365067, 2.11.20140918 |
Java: OpenJDK 1.7.0_79 |
OS: Debian 8.1
I'm having a problem where some HTTP requests seem to take far too long to process on a load injector that isn't really under load.
Examples from the result files of tests with 20 vUs (with caching, on a weaker load injector, JMeter v2.11) and 40 vUs (without caching, on a much higher-spec'd load injector, JMeter v2.13):
<time_stamp>,3257,<request_name>,200,<thread_name>,true,28537,20,20,437
<time_stamp>,5158,<request_name>,304,<thread_name>,true,138,40,40,0
Memory is at 75% in the first case and below 50% in the second. CPU doesn't seem to spike (measured at 1-second intervals) and goes up to 20% at most in both examples.
I checked the JVM's garbage collection, and the GC doesn't seem to be at its limits at the time of these requests either (in fact, at no point during the test).
I noticed this in the case where I had caching enabled (via the Cache Manager with "Use Cache-Control/Expires headers..." checked) and, as in the second example above, got an unrealistic response time of 5158 ms.
This only happens at some steps during an iteration and affects more than one thread, but not all of them.
It seems like JMeter is somehow taking too long to process the result, but I can't see how my load injectors could be under heavy enough load to cause processing times of several seconds.
Clearly this is messing up the performance statistics, so I would like to know why this is happening.
Hope someone can help.
EDIT:
#First example: Case where ResponseTime >> Latency > 0; happens on both JMeter machines (JMeter v2.11, JMeter v2.13).
#Second example: Case where ResponseTime >> Latency = 0; happens only on the machine with JMeter v2.13.
2nd EDIT:
Turns out it doesn't matter what JMeter version I run (or on which node).
Regex'd my result file:
For the same requested resources, cached (latency = 0): with the header check ("Use Cache-Control/Expires headers...") enabled, about 10% took one second or several seconds; without the header check it is 6%.
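For reference, a rough sketch of how such a scan over the result file could look. It assumes the column layout of the two example lines above (second field = elapsed time in ms, last field = latency in ms) and labels without embedded commas; the indices would need adjusting to the actual Save Service configuration.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class SlowCachedSamples {
        public static void main(String[] args) throws IOException {
            int cached = 0, slowCached = 0;
            try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] fields = line.split(",");
                    long elapsed, latency;
                    try {
                        elapsed = Long.parseLong(fields[1]);                  // 2nd column: elapsed/response time in ms
                        latency = Long.parseLong(fields[fields.length - 1]);  // last column: latency in ms
                    } catch (NumberFormatException e) {
                        continue;                                             // skip a header line or malformed row
                    }
                    if (latency == 0) {                                       // served from the Cache Manager
                        cached++;
                        if (elapsed >= 1000) {                                // took a second or more anyway
                            slowCached++;
                        }
                    }
                }
            }
            System.out.printf("%d of %d cached samples (latency=0) took >= 1 s%n", slowCached, cached);
        }
    }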

You should run the same JMeter version on all nodes. If that doesn't solve the problem, monitor your JMeter instance's resource utilisation with JConsole.

Related

Is there any time shift between jmeter and influxdb?

Just starting out with JMeter and experimenting a bit, I found something that looks kind of odd to me. I connected JMeter to InfluxDB and measured the average response time of a single request in an infinite loop. When I stopped the test I realized that the last time in the results CSV created by JMeter is not the same as the one recorded by InfluxDB. Specifically, JMeter's last measurement is 13 s later than the one registered by InfluxDB. Any ideas on what could be happening?
I've tried to Google it but haven't found any related documentation or reports of this problem.
JMeter sends aggregated metrics; that is, it doesn't send each and every SampleResult but collects the results within a "window" whose default value is 5 seconds, controllable via the backend_influxdb.send_interval JMeter property.
The metrics that are sent are described here.
You can try shrinking the 5-second window by amending the aforementioned backend_influxdb.send_interval JMeter property (in user.properties, or via the -J command-line option) and setting it to, say, 1 second so that JMeter sends the data more often. However, this creates extra overhead, so make sure that JMeter has enough headroom to operate and that the increased metrics-sending rate doesn't affect the overall throughput.

JMeter requests delayed without timer

Within my JMeter test plan I have a Transaction Controller which contains multiple requests. There are no timers between the HTTP samplers, and the controller is configured to generate a parent sample.
When I run the test, most of the samples are OK, but there are a couple of outliers for which the response time of the parent sample is enormous even though the HTTP response times inside the controller are quite low. After checking I found that there are gaps of a couple of minutes between HTTP requests even though no timers are configured.
E.g. the first HTTP sampler started at 04:08:34 and its load time was 358 ms. The second sampler started at 04:11:41 - so it took more than 3 minutes to start. Then there were a couple more similar requests, and the overall parent sample time is more than 6 minutes even though the sum of all HTTP sampler load times is less than 1 second.
Does anyone have an idea why it occasionally takes so long to start the next HTTP request? Could it be caused by low resources (like memory) on the machines from which the test is executed (it's distributed testing)?
If you don't have Timers, a Flow Control Action sampler or the Inter-Thread Communication Plugin, JMeter executes samplers as fast as it can, immediately one after another.
The only reasons I can think of are:
lack of resources (CPU, RAM, network, disk) on the JMeter side; I would recommend ensuring that JMeter has enough headroom to operate using, for example, the JMeter PerfMon Plugin
the delay can also be caused by PreProcessors, PostProcessors and Assertions, so you could take a thread dump (e.g. with jstack) - it will show where exactly JMeter "hangs" (see the sketch below)
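A thread dump will show what JMeter is busy with during such a gap, but catching the right moment can be fiddly. As a complementary diagnostic (a sketch only, not an official recipe), a JSR223 PostProcessor scoped to the samplers inside the controller can log exactly where the per-thread gaps occur; prev, vars and log are standard JSR223 bindings, the 10-second threshold is arbitrary, and the code is plain Java syntax, which the default Groovy engine accepts.

    // JSR223 PostProcessor sketch: remember when each sample ended and warn when the
    // next sample in the same thread started much later, i.e. measure the inter-sampler gaps.
    String prevEnd = vars.get("lastSampleEndMillis");
    if (prevEnd != null) {
        long gap = prev.getStartTime() - Long.parseLong(prevEnd);
        if (gap > 10000) {                                 // arbitrary threshold: 10 s
            log.warn("Gap of " + gap + " ms before sample '" + prev.getSampleLabel() + "'");
        }
    }
    vars.put("lastSampleEndMillis", String.valueOf(prev.getEndTime()));

Once you know which sampler the gap precedes, the thread dump (or the PreProcessors/PostProcessors/Assertions attached to that sampler) is the place to look next.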

How to get high rps with JMeter load testing https endpoint

I'm trying to test my https endpoint with JMeter. I want to make at least 10000 requests per second, but when I set the number of threads to 10000 I get way less rps, around 500.
I've tried setting the number of threads to 1000 and to 100; surprisingly, I get the same number of rps. I'm using the HTTP Sampler and "Use Keep-Alive" is set to true. When I look at the statistics I see that with 100 threads it makes use of Keep-Alive and connect_time is around 100 ms, but when the number of threads is higher connect_time grows - it's as if it stops reusing the connections.
I know this isn't a server issue, because I've tested that same endpoint with Yandex.Tank and phantom, and it can easily sustain 10 000 requests per second; the problem is that it can't use response data to make further requests, which is why I have to use JMeter for this task.
This can be done by using the "Stepping Thread Group". It will allow you to send 10000 requests per second for up to a specified time. Refer to the image below.
Stepping Thread Group
Download the jar from the link below.
https://jmeter-plugins.org/wiki/SteppingThreadGroup/
I assume you are trying to achieve this using one machine. Try with multiple machines or JMeter distributed mode.
https://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf
https://www.blazemeter.com/blog/how-to-perform-distributed-testing-in-jmeter/
https://blazemeter.com/blog/3-common-issues-when-running-jmeter-scripts-and-how-solve-them/
I am assuming that the issue is with the machine, which is not able to generate that much load. Usually I have used a maximum of around 300 threads per machine, but it depends on the machine's configuration. Just check whether the machine is the bottleneck and whether multiple machines are able to generate more load, assuming the server itself does not have any issues.
Hope this helps.
Update: usually 200-500 threads can be handled by modern machines.
Please check the links below for some more info:
1. How do threads and number of iterations impact a test, and what is JMeter's max thread limit
2. https://www.blazemeter.com/blog/what%e2%80%99s-the-max-number-of-users-you-can-test-on-jmeter/

Validate newly created server support the same load

We are creating a new hosted server for one of our APIs on managed containers (Kubernetes), and we're trying to validate that it can handle at least the same request load as before.
We've started with one of the APIs, where we would need to handle at least 140k requests per minute, all endpoints combined.
To verify this, I created a simple JMeter test as follows:
-Test Plan
---Thread Group Endpoint1
-----HTTP Request -> a GET request with query params for /path1
---Thread Group Endpoint2
-----HTTP Request -> a GET request with query params for /path2
For a local test, I used the following setup:
Thread Groups Endpoint1 and Endpoint2 are set to 200 threads (users), ramp-up period of 1s, loop count = forever and duration 60s.
Using a Summary Report listener when running the test gets me a total of ~9300 # Samples.
Using this approach, is it safe to just increase the number of threads (users) for the Thread Groups until I reach the desired 140k requests per minute?
Note: I have only used JMeter a little before, so I'm aware that the entire approach may be wrong; any suggestions and steering onto the right path are more than welcome.
Your approach is viable as long as it represents real-life application usage. If the application has 2 endpoints with an evenly distributed load, your setup is just fine. If there are more endpoints and some of them are used more than others, consider defining the workload accordingly, either using different Thread Groups or another distribution mechanism such as the Throughput Controller.
Increasing the number of threads is also fine; however, consider increasing the load gradually, i.e. increase the ramp-up time so your test has:
Arrivals phase
Time to hold the load
Ramp-down phase
This way you will be able to correlate various metrics like increasing response time, throughput, number of errors, etc. with the increasing load. You will also be able to state the number of threads/requests per second at which the system reached its saturation/breaking point, and whether it recovers when the load goes back down.
Also make sure you're following the JMeter Best Practices, as 2300-2500 requests per second is not something JMeter can support out of the box; you will need to do some tuning, at the very least increasing the JVM heap size allocated to JMeter.
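To put rough numbers on "increase the threads until 140k per minute": by Little's law, the number of concurrent threads needed is roughly the target throughput multiplied by the average response time. The sketch below plugs in the figures from the question (140k requests per minute, ~9300 samples in 60 s, assuming 200 threads in each of the two Thread Groups, i.e. 400 in total); the derived response time is only a back-of-the-envelope estimate and will certainly change as the load grows.

    public class ThreadSizingSketch {
        public static void main(String[] args) {
            double targetPerMinute = 140_000;              // requirement from the question
            double targetRps = targetPerMinute / 60.0;     // ~2333 requests per second

            // Observed locally: ~9300 samples in 60 s with (assumed) 2 x 200 = 400 threads
            double observedRps = 9300.0 / 60.0;            // ~155 requests per second
            double estAvgResponseSec = 400 / observedRps;  // ~2.6 s per request, via Little's law

            // Threads needed to hit the target IF the response time stayed the same (it usually won't)
            double threadsNeeded = targetRps * estAvgResponseSec;
            System.out.printf("target %.0f req/s, est. response time %.2f s, threads needed ~%.0f%n",
                    targetRps, estAvgResponseSec, threadsNeeded);
        }
    }

The point is not the exact number but that thread count, response time and throughput are linked, so watch response times as you scale the threads up.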
You may not be able to achieve the desired 140k requests per minute using a single JMeter machine; in that case you'll need a distributed load testing approach.
Refer to: http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.html
Also, keeping the ramp-up period at 1 second will lead to a spike and an unrealistic load on the system, which will not give proper results unless you've pre-warmed your server; you should increase the load gradually according to the real/estimated traffic pattern.

JMeter Load test

I want to load test a URL by hitting it a few hundred times in the same millisecond. I tried JMeter, but I could only hit it with 2 requests in the same millisecond. This seems to be a problem of my machine not being able to create threads fast enough. Is there any solution to this issue?
In JMeter you can use a Synchronizing Timer set to 100; this way all threads will wait until there are 100 available and then hit the server:
http://jmeter.apache.org/usermanual/component_reference.html#Synchronizing_Timer
Another solution is to increase the number of Threads so that you hit this throughput.
In the upcoming version (2.8) of JMeter you will be able to create threads on demand (created only once they are needed).
Anyway, hitting a URL a few hundred times in the same millisecond is a high load, so you will have to tune JMeter correctly.
Regards
Philippe
JMeter uses a blocking HTTP client; in order to hit the server at exactly the same time with 100 requests you need 100 threads in JMeter. Even given that, you still don't have 100 cores to actually run such code at the same time. And even if you had 100 cores, it takes some time to start a thread, so you would have to start them in advance and synchronize them on some sort of barrier. And that is not supported in JMeter.
Why do you really want to hit your server "in the same millisecond"? An ordinary load test just calls the server with as many connections as possible, but not necessarily at exactly the same time. Moreover, sometimes you even add random sleeps between requests to simulate so-called think time.
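For reference, the "start the threads in advance and synchronize them on a barrier" idea above is essentially what the Synchronizing Timer from the first answer does. Below is a minimal plain-Java illustration of the pattern (not JMeter code), using Java 11+'s built-in HttpClient and a hypothetical target URL.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CountDownLatch;

    public class BurstSketch {
        public static void main(String[] args) throws InterruptedException {
            int burstSize = 100;                                   // number of requests to release at once
            CountDownLatch ready = new CountDownLatch(burstSize);  // counts threads parked at the barrier
            CountDownLatch go = new CountDownLatch(1);             // the "fire" signal
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")).build(); // hypothetical URL

            for (int i = 0; i < burstSize; i++) {
                new Thread(() -> {
                    try {
                        ready.countDown();                         // report "in position"
                        go.await();                                // wait for the common release signal
                        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
                        System.out.println(response.statusCode());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();
            }

            ready.await();                                         // wait until all threads are parked
            go.countDown();                                        // release them (approximately) together
        }
    }

Even with such a barrier, the requests will not literally land in the same millisecond once thread scheduling, TLS handshakes and network jitter come into play.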
As per Philippe's answer, JMeter does in fact support synchronised requests. But maybe for what you want something like Apache Bench using -c100 (or tuned to whatever works) is a better option? It's pretty basic stuff, but the overhead is a lot smaller, which might help in this situation.
But I would also steal from Tomasz's answer and echo his concern that perhaps this is not really the best way to approach load testing. If you're trying to replicate real-life traffic, do you really need such a high level of concurrency?
You need to use jmeter-server and a number of client machines for load generation. A single machine is not enough to generate the load by itself.
