In JMeter, how can I achieve 100 RPS for an HTTP request whose average response time is 20 seconds? The more threads I added, the longer the response time became.
What thread count should I ideally use in this situation? If I add a Constant Throughput Timer or a Throughput Shaping Timer, what should the thread count and the timer settings be?
I have tried thread counts of 100, 250, 500 and 2500, but I am not getting anything close to 100 RPS. I see roughly 20 requests per minute.
This sounds like a bottleneck, so:
First of all, ensure that you're following JMeter Best Practices, as it might be the case that your JMeter configuration is not suitable for kicking off that many threads.
Second, I would recommend checking whether your system under test has enough headroom in terms of resources (CPU, RAM, network, disk, etc.); you can do this using e.g. the JMeter PerfMon Plugin.
If there is no lack of resources but application performance is still not satisfactory, then, similarly to point 1, check the infrastructure configuration (web/application server settings, database settings, etc.), as in the majority of cases default configurations are not good enough for high loads. Refer to the documentation for your infrastructure software for tuning tips.
And finally, there could be problems in your application code itself. Re-run your test with profiler telemetry enabled; this way you will be able to tell for sure where your application spends time and/or resources.
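As a rough sanity check on the thread-count part of the question: Little's Law says the concurrency you need is at least the target rate multiplied by the response time. A minimal sketch of that arithmetic, using the numbers from the question (plain Python, nothing JMeter-specific):

```python
# Little's Law: required concurrency = arrival rate * time in system.
target_rps = 100        # desired throughput (from the question)
avg_response_s = 20.0   # observed average response time in seconds

min_threads = target_rps * avg_response_s
print(f"Minimum threads for {target_rps} RPS at {avg_response_s:.0f} s: {min_threads:.0f}")

# What is actually being observed: roughly 20 requests per minute.
observed_rps = 20 / 60
print(f"Observed throughput: {observed_rps:.2f} RPS")
```

So on paper you would need at least 2000 threads, but since the throughput stays around 20 requests per minute no matter how many threads are added, the application is the bottleneck, and no Constant Throughput Timer or Throughput Shaping Timer setting will get it to 100 RPS until that bottleneck is fixed.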
Related
I have to achieve around 2000 TPS in JMeter but am only able to get around 1000. I have tried the following things:
- Increasing the configuration
- Distributed load testing
- Increasing heap memory
- Increasing or keeping the threads as per the standard formula (followed JMeter best practices)
Is there any other way I can do this?
Normally, in order to get more throughput you need to add more threads (virtual users) in the Thread Group.
If you cannot add more threads due to lack of resources (CPU, RAM, etc.) - consider switching to Distributed Testing.
Of course the system under test must be able to respond fast enough, because if the application you're testing isn't capable of serving more than 1000 requests per second - you won't be able to get more unless you identify and fix the bottleneck.
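If you do go the Distributed Testing route, the same Little's Law arithmetic tells you roughly how many threads each load generator has to carry. A minimal sketch, where the response time and the number of generators are assumptions to replace with your own values:

```python
import math

# Required threads = target throughput * average response time (Little's Law),
# split across the load generators used for distributed testing.
target_tps = 2000        # target from the question
avg_response_s = 0.5     # assumption: replace with your measured average
generators = 3           # assumption: number of JMeter load generator machines

total_threads = math.ceil(target_tps * avg_response_s)
per_generator = math.ceil(total_threads / generators)

print(f"Total threads needed: {total_threads}")
print(f"Threads per load generator: {per_generator}")
```

If a single machine cannot drive its share without running out of CPU, RAM or network, add more generators rather than piling more threads onto one box.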
I have seen some questions about the limits of JMeter, like "What is the highest number of threads?" and "What are the physical limits of JMeter?". As some answers indicate, there's no specific limit to JMeter, but rather to JMeter configurations used on specific hardware setups. However, folks do indicate there's a limit and give tips on how to optimize.
My question is more basic - "how can I tell if I'm hitting the limits of my client (JMeter + hardware)?"
I'm not talking about OOM errors (like those described in this blog post), which are pretty obvious, but rather whether JMeter is lagging. In the Aggregate Report I can see throughput, and I could also count the number of responses received in the CSV output and divide by time. Should I just check whether that's equal to my desired QPS? Achieving a desired QPS in JMeter generally seems trickier than just blasting the server with users, though, and the math from number of users to QPS seems a bit tricky.
Finally, how can I tell if it's my server lagging or jmeter lagging? I'm wondering if I can test with some simple static webpage first to confirm jmeter's behavior, and then test my actual server. Any recommendations for a simple static page that can take a high amount of QPS?
Apologies if that's too many questions, but feel free to ask for more details or only answer the primary "how to tell if I'm hitting limits" question.
JMeter doesn't have many "limits", or at least they're too high to worry about: you can kick off as many as 2,147,483,647 threads, provided the underlying hardware/OS allows it and JMeter is properly configured.
The easiest check is switching to the Distributed Mode of JMeter execution: if you're "hitting the limits of JMeter", then when you add another instance of JMeter as an extra load generator the throughput should go up, given the server is capable of handling the load.
Another option is, first of all, making sure that you're following JMeter Best Practices and setting up monitoring of baseline resource usage (CPU, RAM, network, disk, etc.) on the machine where JMeter is running. If any of the monitored metrics exceeds e.g. 90% of the maximum available capacity - you're "hitting the limits" of the machine where JMeter is running.
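On the "count the number of responses in the CSV output and divide by time" idea from the question: that is essentially how the Aggregate Report computes throughput, and it is easy to reproduce offline. A minimal sketch, assuming results were saved as CSV (.jtl) with the default header; the file name is a placeholder:

```python
import csv

# Compute achieved throughput from a JMeter CSV results file (.jtl).
# Assumes the default CSV format with a header row containing
# "timeStamp" (epoch milliseconds) and "success" columns.
RESULTS = "results.jtl"  # placeholder path

timestamps, ok = [], 0
with open(RESULTS, newline="") as f:
    for row in csv.DictReader(f):
        timestamps.append(int(row["timeStamp"]))
        ok += row["success"] == "true"

duration_s = (max(timestamps) - min(timestamps)) / 1000 or 1
print(f"Samples: {len(timestamps)}  (successful: {ok})")
print(f"Test duration: {duration_s:.1f} s")
print(f"Achieved throughput: {len(timestamps) / duration_s:.1f} requests/s")
```

If this number matches your target while the client-side CPU, RAM and network stay well below capacity, the load generator is keeping up; if it is lower and the JMeter machine is saturated, you are hitting the client's limits rather than the server's.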
I was able to confirm my setup worked by checking that the Aggregate Report's throughput metric reached my desired QPS. Initially, when I did not reach the desired throughput and was testing against my server, I was not able to confirm whether the problem was my server or my load testing setup.
To confirm the load testing setup worked, I swapped the load test to hit a very simple 'hello world' service with an excess of resources. There, the desired throughput was met.
For reference on the actual setup, I ran JMeter on a 5.2xlarge EC2 instance which had 8 vCPUs, up to 10 Gbps network bandwidth, and 16 GiB of RAM, and reached 1K QPS. I have yet to see how much further I can push this particular setup.
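For the "simple static page" question: any hello-world endpoint on a host with spare capacity works as a sanity target for the JMeter setup. The sketch below is only a functional stand-in (Python's built-in server will not sustain very high QPS); nginx serving a static file, or a similarly over-provisioned 'hello world' service, is a better baseline for the real numbers:

```python
# Minimal hello-world HTTP service to sanity-check the JMeter setup.
# Not tuned for very high QPS; use nginx with a static file if you need
# a target that can absorb tens of thousands of requests per second.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello world"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), Hello).serve_forever()
```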
I am currently using the JMeter command line to trigger a load test with a master (2 GB memory & 1 core) and a slave machine (2 GB memory & 1 core).
How many threads are supported by JMeter for the above configuration?
Do we need to change anything in the heap size to get the maximum number of threads?
Can anyone help in this regard?
We don't know: it might be the case that even 1 thread is not supported, or it might be the case that 2,147,483,647 users are supported.
The number of virtual users you can simulate varies and depends on different factors like:
The nature of the test (what protocols are in scope, what exactly the test is doing, etc.). For a simple GET request with a small response you will be able to simulate more users; for a complex POST request with a lot of calculated or encrypted parameters uploading several large files, the number of users will be much lower.
The size of the requests and responses
The number of pre/post processors, assertions, etc.
So the only way of telling how many users you can simulate is to measure it:
Make sure that monitoring of essential OS health metrics (CPU, RAM, etc. usage) is in place. If you don't have any solution in mind, you can consider using the JMeter PerfMon Plugin.
Make sure to follow JMeter Best Practices.
Start with 1 user and gradually increase the load, at the same time looking at CPU, RAM, network, disk usage, etc.
When any of the monitored metrics starts exceeding, say, 80% of the maximum available capacity, take a look at how many threads are online just before that moment using e.g. the Active Threads Over Time listener.
This is how many users you can simulate for this particular test on this particular hardware/software combination.
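If you prefer to do the last two steps offline rather than with a listener, the active thread count is also written into the CSV results. A minimal sketch, assuming the default CSV format with a header and thread counts enabled (the allThreads column); the file name and bucket size are placeholders:

```python
import csv
from collections import defaultdict

# Rough offline equivalent of the "Active Threads Over Time" listener:
# for each bucket, report the highest number of active threads recorded
# in the "allThreads" column of the CSV results file (.jtl).
RESULTS = "results.jtl"  # placeholder path
BUCKET_S = 10            # bucket width in seconds

threads_per_bucket = defaultdict(int)
with open(RESULTS, newline="") as f:
    for row in csv.DictReader(f):
        bucket = int(row["timeStamp"]) // 1000 // BUCKET_S * BUCKET_S
        threads_per_bucket[bucket] = max(threads_per_bucket[bucket],
                                         int(row["allThreads"]))

start = min(threads_per_bucket)
for bucket in sorted(threads_per_bucket):
    print(f"t+{bucket - start:>5}s  active threads: {threads_per_bucket[bucket]}")
```

Cross-reference the moment when CPU or RAM first goes past ~80% with this output and you have the number of threads this particular machine can drive for this particular test.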
I need to load test my website with 10k req/sec for 1 hour using JMeter. I am confused about the values of loop count, number of threads, ramp-up period and duration.
Also, will my laptop (i5, 8 GB) be able to do that? If not, what is the alternative?
PS: I checked every question/answer on Stack Overflow for this but I couldn't find any help. Please don't mark it as a duplicate question.
You can use a "Constant Throughput Timer", define the target throughput and calculate the throughput based on "all active threads".
Define a maximum number of users in your script so that it will be enough for 10K req/sec.
Also, if you are using a Windows machine, I think you will face this issue: https://www.baselogic.com/2011/11/23/solved-java-net-bindexception-address-use-connect-issue-windows/
I would recommend using distributed testing, i.e. more than 1 machine.
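One detail worth spelling out: the Constant Throughput Timer's "Target throughput" field is expressed in samples per minute, and the timer can only slow threads down, never speed them up, so the Thread Group still has to supply enough threads. A minimal sketch of the arithmetic, where the response time is an assumption to replace with your measured value:

```python
# Constant Throughput Timer works in samples per *minute* and only delays
# threads, so the Thread Group must provide enough of them on its own.
target_rps = 10_000        # from the question: 10k requests per second
avg_response_s = 0.2       # assumption: replace with your measured average

ctt_target_per_minute = target_rps * 60
min_threads = int(target_rps * avg_response_s)

print(f"Constant Throughput Timer target: {ctt_target_per_minute} samples/min")
print(f"Minimum threads at {avg_response_s} s response time: {min_threads}")
```

That thread count is the main reason a single laptop is unlikely to be enough and why the answer points to distributed testing.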
The easiest way of configuring JMeter to send X requests per second is using either the Precise Throughput Timer or the Throughput Shaping Timer in combination with the Concurrency Thread Group. The number of threads needs to be sufficient, and the exact number mainly depends on your application response time: if the response time is 1 second you will need 10k threads, if it's 500 ms you will need 5k threads, if it is 2 seconds you will need 20k threads, and so on.
Only you can answer whether your laptop can kick off the required number of virtual users, as there are too many factors to consider: the nature of the test, the size of the requests/responses, the number of pre/post processors and assertions, etc. Make sure to follow JMeter Best Practices and monitor CPU, RAM, network, etc. usage using e.g. the JMeter PerfMon Plugin, because if your laptop is overloaded, JMeter won't be able to send requests fast enough and you will not be able to reach 10k requests per second even if the server supports it. If your laptop's hardware specification is too low for the test scenario, you will have to go for Distributed Testing.
You have a number of issues in play:
Test design. Use more than one load generator. In fact, use no fewer than three, evenly matched in hardware. Take one and load only one user of each type. This is your control set. If this set degrades at the same rate as your other load generators, then you have a common issue, likely the site. If the control set does not degrade, but the other load generators do, then you likely have an overloaded generator. On the commercial test tool side of the fence, generating all load from one host has never been considered good practice in performance testing.
10K requests per second. This is substantial. I have worked on some top-20 eCommerce sites and I can tell you that even they do not receive this kind of traffic at the origin servers. Why? Cache! Either this traffic hits a Content Delivery Network where the load is spread across the country, OR there is a cache node directly in front of the load balancer(s) for the site (think Varnish Cache or equivalent), OR both, for a multi-staged cache. You might want to look for an objective reference in production to pin this to as a validation point, if and only if (IFF) your goal is to represent end user behavior. Running a count of requests grouped by second from the HTTP access logs should be able to validate this number (see the sketch after this list). Also, check the cache plan for fixed assets - it could be poorly managed, and load would drop significantly just by better managing the site's cache settings towards the client. If your goal is simply to saturate a SOAP/REST interface to the point of destruction then you might have a better path.
If you are looking to take a particular set of SOAP or REST remote procedure calls to the point of destruction, consider a classical stress test. Start your test at zero load and increase with the smallest step interval possible over the longest possible period of time. The physical analogy to this would be the classical hospital-style stress test where a nurse comes around every minute and increases the speed OR the incline on the treadmill OR both until some end-of-test condition is achieved. For a hospital-style test that is moving into oxygen debt, an inability to keep pace, etc. For your application/interface it could be the doubling of response times from what is acceptable, the saturation of resources in the finite resource pool (CPU, DISK, MEMORY, NETWORK) on the back-end hosts, etc.
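On the access-log validation mentioned in the second point: counting requests grouped by second is only a few lines. A minimal sketch, assuming a common/combined-format access log where the timestamp is the bracketed field and has one-second resolution; the file name is a placeholder:

```python
import re
from collections import Counter

# Count requests per second from an HTTP access log to see what the
# origin servers actually receive in production.
# Assumes common/combined log format, e.g. [10/Oct/2023:13:55:36 +0000],
# so lines sharing the same bracketed timestamp fall in the same second.
LOG = "access.log"  # placeholder path

per_second = Counter()
with open(LOG) as f:
    for line in f:
        match = re.search(r"\[([^\]]+)\]", line)
        if match:
            per_second[match.group(1)] += 1

peak_second, peak_count = max(per_second.items(), key=lambda kv: kv[1])
print(f"Seconds observed: {len(per_second)}")
print(f"Peak: {peak_count} requests/s at {peak_second}")
```

If the production peak at the origin is nowhere near 10K requests per second, revisit the caching discussion above before sizing the test.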
Below is the graph which I received after the performance test execution.
I am confused by the fluctuating response time graph.
NOTE: 1) The throughput graph is also fluctuating. 2) I did not receive any errors during the test.
This normally indicates that either the application under test or the JMeter engine is overloaded and hence cannot handle/produce a stable load pattern.
Your response time is around 1.5 minutes, which seems a little bit high to me, so I would suggest monitoring the application under test and checking:
whether it has enough headroom to operate in terms of CPU, RAM, network IO, etc., as it might be the case that the application is short on RAM and starts swapping, and disk IO is much slower than RAM; this can be checked using e.g. the JMeter PerfMon Plugin
whether it is properly configured for high loads, as its middleware (database, application server, load balancer, etc.) needs to be tuned; a spike-like response time pattern may indicate intensive GC activity
in any case, ensure that JMeter itself is properly configured for high load and isn't short on resources, because if JMeter isn't able to send/receive requests fast enough you will get false-negative results
A single chart never tells the full story; you need to correlate information from all possible sources, collect log files, etc.
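One way to correlate the charts with the raw data is to bucket the results file by minute and look at throughput and average response time side by side; lining that up with PerfMon data and GC logs usually shows whether the dips belong to the server or to the load generator. A minimal sketch, assuming CSV results with the default header; the file name is a placeholder:

```python
import csv
from collections import defaultdict

# Per-minute throughput and average response time from a JMeter CSV
# results file (.jtl), to quantify the fluctuation seen in the graphs.
RESULTS = "results.jtl"  # placeholder path

buckets = defaultdict(lambda: [0, 0])   # minute -> [sample count, total elapsed ms]
with open(RESULTS, newline="") as f:
    for row in csv.DictReader(f):
        minute = int(row["timeStamp"]) // 60_000
        buckets[minute][0] += 1
        buckets[minute][1] += int(row["elapsed"])

start = min(buckets)
for minute in sorted(buckets):
    count, total_ms = buckets[minute]
    print(f"minute {minute - start:>3}: {count / 60:6.1f} req/s, "
          f"avg response {total_ms / count / 1000:6.1f} s")
```

If the slow minutes coincide with high CPU or GC activity on the server, the problem is on the application side; if they coincide with the JMeter machine being saturated, the results cannot be trusted.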