I need to load test my website at 10k req/sec for 1 hour using JMeter. I am confused about the values of loop count, number of threads, ramp-up period, and duration.
Also, will my laptop (i5, 8 GB) be able to do that? If not, what is the alternative?
PS: I checked every question/answer on Stack Overflow for this, but I couldn't find any help. Please don't mark it as a duplicate question.
You can use a "Constant Throughput Timer", define the target throughput, and calculate the throughput based on "all active threads".
Define a maximum number of users in your script that is large enough to reach 10K req/sec.
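For reference, the timer's "Target throughput" field is specified in samples per minute, not per second, so the 10K req/sec goal has to be converted; a quick sketch of the arithmetic (plain Java, not JMeter API code):

    // Convert the per-second goal into the samples-per-minute value the
    // Constant Throughput Timer expects (figures taken from the question).
    public class ThroughputTimerValue {
        public static void main(String[] args) {
            int targetRequestsPerSecond = 10_000;
            int targetSamplesPerMinute = targetRequestsPerSecond * 60;
            System.out.println(targetSamplesPerMinute + " samples/minute"); // 600000
        }
    }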
Also, if you are using a Windows machine, I think you will run into this issue: https://www.baselogic.com/2011/11/23/solved-java-net-bindexception-address-use-connect-issue-windows/
I would recommend using distributed testing, i.e. more than one machine.
The easiest way of configuring JMeter to send X requests per second is to use either the Precise Throughput Timer or the Throughput Shaping Timer in combination with the Concurrency Thread Group. The number of threads needs to be sufficient; the exact number depends mainly on your application's response time: if the response time is 1 second you will need 10k threads, if it's 500 ms you will need 5k threads, if it's 2 seconds you will need 20k threads, and so on.
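This is essentially Little's Law: required threads ≈ target throughput × average response time. A small illustrative calculation (the response times are just the examples above):

    // Estimate how many threads are needed for a target throughput given an
    // average response time (Little's Law: concurrency = rate x time).
    public class ThreadEstimate {
        static long threadsNeeded(double targetRequestsPerSecond, double avgResponseTimeSeconds) {
            return (long) Math.ceil(targetRequestsPerSecond * avgResponseTimeSeconds);
        }

        public static void main(String[] args) {
            System.out.println(threadsNeeded(10_000, 0.5)); //  5000 threads at 500 ms
            System.out.println(threadsNeeded(10_000, 1.0)); // 10000 threads at 1 s
            System.out.println(threadsNeeded(10_000, 2.0)); // 20000 threads at 2 s
        }
    }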
Only you can answer whether your laptop can kick off the required number of virtual users, as there are too many factors to consider: the nature of the test, the size of the requests/responses, the number of pre/post-processors and assertions, etc. Make sure to follow JMeter Best Practices and monitor CPU, RAM, network, etc. usage, e.g. with the JMeter PerfMon Plugin. If your laptop is overloaded, JMeter won't be able to send requests fast enough, and you will not reach 10k requests per second even if the server supports it. If your laptop's hardware specification is too low for the test scenario, you will have to go for Distributed Testing.
You have a number of issues in play
Test design: use more than one load generator. In fact, use no fewer than three, evenly matched in hardware. Take one and load only one user of each type. This is your control set. If this set degrades at the same rate as your other load generators, then you have a common issue, likely the site. If the control set does not degrade but the other load generators do, then you likely have an overloaded generator. On the commercial test tool side of the fence, generating all load from one host has never been considered a good practice in performance testing.
10K requests per second. This is substantial. I have worked on some top-20 eCommerce sites and I can tell you that even they do not receive this type of traffic at the origin servers. Why? Cache! Either this is hitting a Content Delivery Network where the load is spread across the country, OR there is a cache node directly in front of the load balancer(s) for the site (think Varnish Cache or equivalent), OR both for a multi-staged cache. You might want to look for an objective reference in production to pin this to as a validation point, if and only if (IFF) your goal is to represent end-user behavior. Running a count of requests grouped by second from the HTTP access logs should be able to validate this number. Also, check the cache plan for fixed assets - it could be poorly managed, and the load would drop significantly just by better managing the site's cache settings towards the client. If your goal is simply to saturate a SOAP/REST interface to the point of destruction, then you might have a better path.
If you are looking to take a particular SOAP or REST set of remote procedure calls to the point of destruction, consider a classical stress test. Start your test at zero load and increase it with the smallest step interval possible over the longest possible period of time. The physical analogy would be the classical hospital-style stress test, where a nurse comes around every minute and increases the speed OR the incline on the treadmill OR both until some end-of-test condition is achieved. For a hospital-style test that is moving into oxygen debt, an inability to keep pace, etc.; for your application/interface it could be a doubling of response times from what is acceptable, saturation of a resource in the finite resource pool (CPU, disk, memory, network) on the back-end hosts, etc.
I am currently using the JMeter command line to trigger a load test with a master (2 GB memory & 1 core) and a slave machine (2 GB memory & 1 core).
How many threads are supported by JMeter for the above configuration?
Do we need to change anything in the heap size to get the maximum number of threads?
Can anyone help in this regard?
We don't know; it might be the case that even 1 thread is not supported, or it might be the case that 2,147,483,647 users are supported.
The number of virtual users you can simulate varies and depends on different factors like:
The nature of the test (what protocols are in scope, what exactly the test is doing, etc.). For a simple GET request with a small response you will be able to simulate more users; for a complex POST request with many calculated encrypted parameters that uploads several large files, the number of users will be much lower
The size of request and response
The number of pre/post processors, assertions, etc.
So the only way of telling how many users you can simulate is to measure it:
Make sure monitoring of essential OS health metrics (CPU, RAM, etc. usage) is in place. If you don't have any solution in mind, you can consider using the JMeter PerfMon Plugin
Make sure to follow JMeter Best Practices
Start with 1 user and gradually increase the load at the same time looking at the CPU, RAM, Network, disk usage, etc.
When any of the monitored metrics starts exceeding, say, 80% of the maximum available capacity, take a look at how many threads are online just before that moment, e.g. using the Active Threads Over Time listener
This is how many users you can simulate for this particular test on this particular hardware/software combination
I have a scenario with 5K HTTP requests. When I start JMeter with it, JMeter simply hangs after about 170 users. I followed all the guidelines for successful stress testing (no listeners, headless, increased heap space).
I must say that some of those requests are a little big; the overall file is ~115 MB.
When I only take a subset of the requests (~100), the simulation works better (faster initialization of users, holds more than 170 users, etc).
My question is, first: as I understand it, JMeter loads the scenario tree and every thread plays it, so there should not be any duplication - what exactly causes this extensive load? And second, what can I do about it?
PS: when I look at the system bottlenecks, I notice both CPU and memory are at very high values with the long file, while both metrics stay low with the shorter version. Can anyone explain?
PS2: the requests have about 7 seconds of delay between them
First, I need to let you know that if you are using a single system to do the load testing, the maximum your hardware or the network port can handle at a time is 1 Gig of data, and your firewall (if any) would likewise receive/pass no more than 1 Gig of data. Try doing the same load test with distributed load testing in JMeter (a master-slave distributed system). Even then, I don't think it would run for 4k requests (if these requests are heavy).
Best possible solutions:
Try a distributed system, as mentioned above.
Try running the load test in non-GUI (CLI) mode.
Increase the ramp-up time as needed.
Increase the RAM of your system and allocate the maximum available heap space to JMeter.
Drastic change: either use the BlazeMeter cloud, or move the complete load testing setup to Amazon servers, which are more reliable and scalable.
Hello,
What is the maximum number of virtual users that can be tested in a JMeter distributed test? Is it possible to reach one million virtual users?
Thank you.
It depends on many factors; technically the limit on the JMeter end is very high (I think it should be 2^31 − 1, or 2,147,483,647, virtual users), but in practice it depends on:
The nature of your application: use cases, whether it is more about consuming or creating content, average request and response size, response time, etc.
The nature of your test: again, request and response size, the need to use pre/post-processors and assertions
Hardware specifications of your load generators
Number of load generators
So I would recommend the following approach:
Start with a single JMeter instance
Make sure you have an optimal JMeter configuration and have amended your test according to JMeter best practices
Make sure you have monitoring of baseline OS health metrics on that machine
Start with 1 virtual user and gradually increase the number of running users until you start running out of hardware resources (CPU, RAM, network, or disk IO will be close to maximum)
Note the number of active users at this stage (you can use e.g. the Active Threads Over Time listener) - this is how many users you can simulate for that particular test scenario. The number might be different for another application or another test scenario.
Multiply the number you get by the number of load generators you have - if the result is > 1M, you are good to go.
If you won't be able to simulate that many users, there is a workaround, but personally I don't really like it. The idea is that real users don't hammer an application non-stop; they need some time to "think" between actions. Normally you should be simulating these "think times" using JMeter Timers. But if you lack load generators you can consider the following:
Given that 1 virtual user needs 15 seconds to think between operations and the response time of your application is 5 seconds, each user will be able to execute only 3 requests per minute. So 1M users will execute 3M requests per minute, which gives us 50,000 requests per second - still high, but much more likely to be achievable.
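The same arithmetic spelled out (the think time and response time are the assumed figures from the example above):

    // Effective load produced by users who pause ("think") between requests.
    public class ThinkTimeMath {
        public static void main(String[] args) {
            double thinkTimeSec = 15;
            double responseTimeSec = 5;
            long users = 1_000_000;
            double requestsPerUserPerMinute = 60 / (thinkTimeSec + responseTimeSec); // 3
            double requestsPerSecond = users * requestsPerUserPerMinute / 60;        // 50,000
            System.out.printf("%,.0f requests/second%n", requestsPerSecond);
        }
    }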
While performance testing an application, I was unable to get any further with handling a large number of threads using JMeter, so I would like to know the maximum number of threads that are allowed in JMeter. Is JMeter capable of handling 150,000 threads?
There is no fixed upper limit; it strongly depends on what your test is doing, what the response size is, etc.
Also keep in mind that real users don't hammer the application non-stop; they need some time to "think" between operations, plus they have to wait for the response before they start "thinking" about the next action.
For example, given that users "think" for 10 seconds and the response time is 2 seconds, each virtual user will execute 5 requests per minute.
In the above scenario, 150,000 users will execute 750,000 requests per minute, which is 12,500 requests per second - i.e. you can produce the same request rate with roughly 10x fewer users to simulate.
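A small helper that reproduces this calculation (the think time and response time are the assumed figures from the example):

    // Requests per second generated by N users who each wait for a response
    // and then "think" before sending the next request.
    public class EffectiveThroughput {
        static double requestsPerSecond(long users, double thinkTimeSec, double responseTimeSec) {
            // each user completes one request per (think + response) cycle
            return users / (thinkTimeSec + responseTimeSec);
        }

        public static void main(String[] args) {
            System.out.println(requestsPerSecond(150_000, 10, 2)); // 12500.0
        }
    }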
So:
First of all, make sure that your JMeter configuration is optimal; the default settings are good for test development and debugging, but not very good for load test execution. You need to:
tune Java parameters (Heap size, GC, etc.)
disable all listeners
make sure that you have only those assertions and post processors which are absolutely required
store only those metrics you need and don't save any excessive results, especially response data
See the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article for a comprehensive explanation of the above points and some more tips.
Even if you apply the above tweaks, I don't think you'll be able to generate a load of 150,000 users from a single host (unless you have a supercomputer in your QA lab), so I expect you'll need to consider JMeter Distributed Testing, where one master machine orchestrates multiple load generators (aka "slaves") acting as a single instance - this way you will be able to increase the load by a factor equal to the number of slaves.
Yes, of course. It depends a lot on the machine running JMeter, but if mileage counts, I can give you some hints.
JMeter allows you to run multiple processes on the same box, and it's usually pretty reliable generating up to 200-300 threads per JMeter instance. If you need more than that, I'd recommend using multiple JMeter instances.
Use the link below for a better description of how JMeter can handle 150,000 threads across multiple instances:
https://blazemeter.com/blog/how-run-load-test-50k-concurrent-users
Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good).
How would you decide on which hardware to deploy it? What rules of thumb exist when determining how many machines you need?
How would you translate parameters such as concurrent users, simultaneous connections, daily hits, and DB read/write ratio into a decision on how much, and which, hardware you need?
Any resources on this issue would be very helpful...
Specifically - any hard numbers from real world experience and case studies would be great.
Capacity Planning is quite a detailed and extensive area. You'll need to accept an iterative model with a "Theoretical Baseline > Load Testing > Tuning & Optimizing" approach.
Theory
The first step is to decide on the business requirements: how many users are expected at peak usage? Remember - these numbers are usually inaccurate by some margin.
As an example, let's assume that all the peak traffic (in the worst case) will come over 4 hours of the day. So if the website expects 100K hits per day, we don't divide that over 24 hours, but over 4 hours instead. So my site now needs to support a peak traffic of 25K hits per hour.
This breaks down to 417 hits per minute, or 7 hits per second. This is on the front end alone.
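A sketch of that arithmetic (the figures are the illustrative ones from the example above):

    // Back-of-the-envelope peak traffic estimate from daily hits.
    public class PeakTrafficEstimate {
        public static void main(String[] args) {
            long hitsPerDay = 100_000;
            double peakWindowHours = 4;                          // assume all peak traffic falls in 4 hours
            double hitsPerHour = hitsPerDay / peakWindowHours;   // 25,000
            double hitsPerMinute = hitsPerHour / 60;             // ~417
            double hitsPerSecond = hitsPerMinute / 60;           // ~7
            System.out.printf("%.0f/hour, %.0f/minute, %.1f/second%n",
                    hitsPerHour, hitsPerMinute, hitsPerSecond);
        }
    }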
Add to this the number of internal transactions such as database operations, any file i/o per user, any batch jobs which might run within the system, reports etc.
Tally all these up to get the number of transactions per second, per minute etc that your system needs to support.
This gets further complicated when you have requirements such as "average response time must be 3 seconds", which means you have to factor in network latency, firewalls, proxies, etc.
Finally, when it comes to choosing hardware, check out the published datasheets from each vendor, such as Sun, HP, IBM, Microsoft, etc. These detail the maximum transactions per second under test conditions. We usually assume 50% of those peaks under real conditions :)
But ultimately the choice of the hardware is usually a commercial decision.
Also, you need to keep a minimum of 2 servers at each tier - web / app / even DB - for failover clustering.
Load testing
It's recommended to have a separate reference testing environment throughout the project lifecycle and post-launch so you can come back to run dedicated performance tests on the app. Scale this to be a smaller version of production, so if Prod has 4 servers and Ref has 1, then you test for 25% of the peak transactions etc.
Tuning & Optimizing
Too often, people throw some expensive hardware together and expect it all to work beautifully. You'll need to tune the hardware and OS for various parameters such as TCP timeouts, etc. - these are published by the software vendors and have to be applied once the software is finalized. Set these tuning params in the Ref environment, test, and then decide which ones you need to carry over to production.
Determine your expected load.
Set up a machine and run some tests against it with a load testing tool.
How close are you? If you only accomplished 10% of the peak load, with some margin for error, then you know you are going to need some load balancing. Design and implement a solution and test again. Make sure your solution is flexible enough to scale.
Trial and error is pretty much the way to go. It really depends on the individual app and usage patterns.
Test your app with a sample load and measure performance and load metrics. DB queries, disk hits, latency, whatever.
Then get an estimate of the expected load when deployed (go ask the domain expert) (you have to consider average load AND spikes).
Multiply the two and add some just to be sure. That's a really rough idea of what you need.
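A back-of-the-envelope version of that estimate (the throughput numbers and the safety margin are invented for illustration):

    // Machines needed = expected peak load x safety margin / measured capacity of one box.
    public class RoughSizing {
        static int machinesNeeded(double expectedPeakRps, double measuredRpsPerMachine, double safetyFactor) {
            return (int) Math.ceil(expectedPeakRps * safetyFactor / measuredRpsPerMachine);
        }

        public static void main(String[] args) {
            // e.g. one test box handled 150 req/s, expected peak is 800 req/s, 30% headroom
            System.out.println(machinesNeeded(800, 150, 1.3)); // 7
        }
    }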
Then implement it, keeping in mind you usually won't scale linearly and you probably won't get the expected load ;)