Straight to the point:
I have used JMeter to load test a website hosted on an Apache web server.
When I load test from a single computer (2,000 users), the error rate, connect time, and response time are all higher than when I load test from 4 computers (500 users each, distributed testing).
My question is: what causes distributed testing to give different, better results than single-machine testing?
Many factors can be involved:
Your machine configuration: is it powerful enough? Is it correctly configured (OS, network stack, memory, ...)?
Your JMeter configuration: is it running with the defaults? Did you change anything for the 2,000-user run?
You should read this:
http://www.dzone.com/links/see_how_to_make_jmeter_run_thousands_of_threads_w.html
http://jmeter.apache.org/usermanual/best-practices.html
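As a hypothetical illustration of the JMeter-configuration point: recent JMeter versions default to a 1 GB heap, which is often too small for a 2,000-thread run. The startup script honours the `JVM_ARGS` environment variable, so the heap can be raised without editing the script (the plan file name below is a placeholder):

```shell
# Raise the JMeter JVM heap for a 2,000-thread non-GUI run
# (heap values are illustrative; plan.jmx is a placeholder test plan).
JVM_ARGS="-Xms2g -Xmx2g" jmeter -n -t plan.jmx -l results.jtl
```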
This is for an e-commerce project where a large number of users will log in. I have been given a benchmark: 8,000 concurrent users need to log in, and the response time should be within 3 minutes.
#abi, hi.
Let me provide a couple of notes here.
Depending on your connection bandwidth, from my experience as a performance test engineer, I'd say a single JMeter instance usually holds up to 1k (1,000) users, or 2k (2,000) in the best case.
Considering you have a requirement for an 8k (8,000 user) load, you need to launch JMeter in distributed mode (master <-> slaves).
For this setup I'd recommend going with 1 master node and 4 slaves. For that you will need 5 machines (AWS/Azure, whatever) in the same sub-network.
For more technical details on the distributed setup, please take a look:
at the public JMeter documentation
and at this step-by-step setup manual
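To make the master/slave layout above concrete, here is a minimal sketch, assuming 4 slave machines whose hostnames (slave1..slave4) and the plan file name are placeholders:

```shell
# On each of the 4 slave machines, start the JMeter server process:
jmeter-server

# On the master, run the plan in non-GUI mode against all 4 slaves:
jmeter -n -t login_load.jmx -R slave1,slave2,slave3,slave4 -l results.jtl
```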
Also, when I was doing the setup for a 10k load on one of my recent projects, I made a couple of notes for myself in a Google Doc. Let me know if it opens fine for you.
One last note: if you need to run load/performance tests on APIs that require authorization, I'd recommend splitting the authorization step (IdP bypass) and the performance scenario itself into different thread groups, since the IdP in dev/staging environments usually does not hold much load.
So first you authorize without any load (1st thread group).
And in the 2nd thread group you start calling the target APIs under test.
It depends on:
Your machine specifications (CPU, RAM, NIC, hard drive, etc.)
The nature of your test plan (number of requests, size of requests/responses, number of pre/post processors, assertions, timers, etc.)
The response time of your application
So if your test is a simple GET request which returns a small text response, you might simulate 10,000 users on a mid-range modern laptop. If your test involves heavy requests, large responses, file uploads, etc., it might be 1,000 users.
Make sure to follow the recommendations from JMeter Best Practices.
Make sure to monitor resource usage on your system (CPU, RAM, swap, etc.). You can use the JMeter PerfMon Plugin for this.
Make sure that your test behaves like a real browser.
Start with 1 virtual user and gradually increase the load until you reach 8,000 virtual users or JMeter starts running out of resources, whichever comes first. If you can simulate 8,000 users from a single machine, you're good to go. If not, you will have to consider Distributed Testing.
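One way to do that gradual increase is to parameterise the thread count. A sketch, assuming the Thread Group reads its number of threads from a `threads` property via `${__P(threads,1)}` (plan.jmx is a placeholder):

```shell
# Step the load up across successive non-GUI runs, one results file per step.
for users in 1 100 500 1000 2000 4000 8000; do
  jmeter -n -t plan.jmx -Jthreads="$users" -l "results_${users}.jtl"
done
```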
JMeter results differ between my local machine and a remote server machine.
The JMeter batch file is run in each environment separately.
I run the same website test in JMeter, but the load times measured on the local machine and on the remote server are different.
Both machines have adequate internet speed.
Several factors can influence that:
1) The network path from the machine running JMeter to your server. You should always consider that.
If, say, you're testing a microservice hosted in the Amazon cloud (AWS), and the downstream consumers of its data are also running in the same cloud, it doesn't make much sense to run JMeter on your local machine; you have to run it in AWS as well (as your consumers do) to get realistic timings.
The round trip over the external network path would add hundreds of milliseconds; moreover, it's pretty unpredictable, and deviations may be huge.
2) GUI vs non-GUI mode. The rule of thumb: the GUI is for development/debugging only; it takes quite a toll on performance.
3) Available resources on the machine. You didn't mention them at all, but mind that the Java Runtime Environment is far from lean, so if the machine is not dedicated to running JMeter, and especially if it is not very powerful, the results may vary.
4) In addition to the resource question: by default, the scripts that launch JMeter are quite restrictive in their resource allocation, and if you overwhelm the instance with a lot of threads, the timings may get distorted.
These are general factors. If you want advice more specific to your case, show how and where (i.e., on what type of machine) you run your tests, and where your target sits in terms of network path.
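As a sketch of point 2, the same plan can be run in non-GUI mode (file and directory names are placeholders); the `-e -o` flags, available since JMeter 3.0, additionally generate an HTML dashboard from the results:

```shell
# Non-GUI run: no GUI overhead, results to a .jtl file plus an HTML report.
jmeter -n -t plan.jmx -l results.jtl -e -o report/
```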
I have created a test plan for creating a user profile.
I want to run my test plan for 100 users. When I run it for 10 users it runs successfully with a ramp-up time of 2 seconds, but when I try 100 users or more it fails. I am giving a ramp-up time of 40 seconds for the 100 users.
I am not able to understand what the problem may be.
In my test plan the thread users are differentiated by id.
Thanks in advance.
It's a broad question; this behavior can be caused by:
Your application under test not being able to handle a load of 100 threads. Check the logs for errors and make sure that the application/web server and/or database configuration allows 100+ concurrent connections. You can also check the "Latency" metric to see whether the problem lies in the infrastructure or in the application itself.
Your load generator machine not being able to create 100 concurrent threads. If so, you'll need to consider JMeter Distributed Testing.
Your script not being optimized, e.g. using memory-consuming listeners like "View Results Tree", any graph listeners, or regular expression extractors. Try following the JMeter Performance and Tuning Tips guide and see whether that resolves your issue.
Agree with Dmitri; the reason could be one of the above three.
One more thing you can try:
run JMeter in GUI mode to validate your script, and after validation run it in non-GUI mode, which saves a lot of memory and CPU processing (the GUI is basically the heaviest part of JMeter).
You can run your JMeter script in non-GUI mode like this:
jmeter -n -t your_test.jmx -l results.jtl
(add -H proxy_host -P proxy_port if you need to go through a proxy)
Generally, on a single dual-core machine with 2 GB RAM (the load generator in your case), a 100-user test can be carried out successfully.
Some more things you can look at to find the actual bottleneck:
1. Check the application server logs (the server on which your application is hosted).
If there are any failures there, look at the performance counters on the server (CPU, memory, network, etc.) to see whether anything is overloaded
(if the server is Windows, check with perfmon; if Linux, try sar).
If something is overloaded, the reason is that your app server can't take the load of 100 users;
try tuning it further.
2. Check the load generator's performance counters (JVM heap usage, CPU, memory, etc.).
If the JVM heap size is too small, try increasing it; if other counters are overloaded, try distributed load testing.
3. Remove unwanted/heavy listeners and assertions from the script.
Maybe this will help :)
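For points 1 and 2, a few quick resource checks on a Linux box might look like this (a sketch; `sar` comes from the sysstat package, and `<jmeter_pid>` is a placeholder for the JMeter process id):

```shell
sar -u 5 3                        # CPU utilisation: 3 samples, 5 s apart
sar -r 5 3                        # memory usage, same sampling
jstat -gcutil <jmeter_pid> 5000   # JVM heap/GC stats for the JMeter process
```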
Recently I received from my client a PC with LoadRunner 11.03 (perhaps patch 3) installed, and I observed a web application's performance with a long-running test.
In the multiple-user test it did not seem to generate proper load: my web server's performance monitors never reached any limit (CPU usage, network bandwidth, disk usage per minute, memory usage). Only the number of waiting threads looked slightly bad, and even that was not obvious.
It looked like sequential behavior rather than parallel access.
(No errors occurred.)
So I thought the problem was not with the servers; rather, the client machine has some problem that prevents it from generating parallel access for some reason.
I don't have a proper HP Passport ID, so I can't access the LoadRunner patches website.
Please let me know whether LoadRunner patches, especially patch 4 or higher, would change the behavior described above.
OK, it sounds like you are just running a script in VuGen. If that is the case, I am guessing (based on what you wrote; correct me if I'm wrong) that you are running the script in the Virtual User Generator and not in the Controller. LoadRunner is actually a suite of multiple applications. The Virtual User Generator is the script development application, a development environment like Eclipse. It is single-threaded, and running a script there is meant only to test the script individually.
To run a multi-threaded test you need to use the Controller application: develop a test scenario, assign multiple virtual users (the LoadRunner term for concurrent threads) to each script you want to run, and execute the test from the Controller. You can configure machines to be Load Generators (another application, set up to run as a process or service) and push the test out from the Controller to the Generators.
Here is the scenario:
We are load testing a web application. The application is deployed on two VM servers with a hardware load balancer distributing the load.
There are two tools used here:
1. HP LoadRunner (an expensive tool).
2. JMeter - free.
JMeter was used by the development team to test with a huge number of users. It also does not have any licensing limit, unlike LoadRunner.
How are the tests run?
A URL is invoked with some parameters; the web application reads the parameters, processes them, and generates a PDF file.
When running the test we found that for a load of 1,000 users spread over a period of 60 seconds, our application took 4 minutes to generate 1,000 files.
Now, when we pass the same URL through JMeter, 1,000 users with a ramp-up time of 60 seconds,
the application takes 1 minute and 15 seconds to generate 1,000 files.
I am baffled as to why there is this huge difference in performance.
LoadRunner has the rstat daemon installed on both servers.
Any clues?
You really have four possibilities here:
You are measuring two different things. Check your timing record structure.
Your request and response information is different between the two tools. Check with Fiddler or Wireshark.
Your test environment initial conditions are different yielding different results. Test 101 stuff, but quite often overlooked in tracking down issues like this.
You have an overloaded load generator in your LoadRunner environment, which is causing all virtual users to slow down. For example, you may be logging everything, resulting in your file system becoming a bottleneck for the test. Deliberately underload your generators, reduce your logging levels, and watch how you are using memory for correlations so you don't create a physical-memory oversubscription that results in high swap activity.
As to the comment above about JMeter being faster: I have benchmarked both, and for very complex code the C-based solution in LoadRunner executes faster from iteration to iteration than the Java-based solution in JMeter. (Method: a complex algorithm for creating data files on the fly for upload for batch mortgage processing. P3, 800 MHz, 2 GB of RAM: LoadRunner, 1.8 million iterations per hour ungoverned for a single user; JMeter, 1.2 million.) Once you add in pacing, it is the response time of the server which is the determining factor for both.
It should be noted that LoadRunner tracks its internal API time to directly address accusations of the tool influencing test results. If you open the results database (.mdb or Microsoft SQL Server instance, as appropriate) and look at the [event meter] table, you will find a reference to "Wasted Time." The definition of wasted time can be found in the LoadRunner documentation.
Most likely the culprit is in HOW the scripts are structured.
Things to consider:
Think/wait time: when recording, JMeter does not automatically insert waits.
Items being requested: is JMeter ONLY requesting/downloading HTML pages while LoadRunner gets all embedded files?
Invalid responses: are all 1,000 JMeter responses valid? If you have 1,000 threads from a single desktop, I would suspect you killed JMeter and not all your responses were valid.
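One way to check the last point: in JMeter's default CSV results format the 8th column is `success`, so failed samples can be counted straight from the `.jtl` file. A sketch with illustrative data (a real run writes the results file itself):

```shell
# Create a tiny sample results file in JMeter's default CSV layout:
printf '%s\n' \
  'timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success' \
  '1,10,home,200,OK,tg1-1,text,true' \
  '2,12,home,500,Server Error,tg1-2,text,false' \
  '3,11,home,200,OK,tg1-3,text,true' > results.jtl

# Count samples whose "success" column (8th by default) is false:
awk -F',' 'NR > 1 && $8 == "false" { n++ } END { print n+0, "failed samples" }' results.jtl
```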
Don't forget that the testing application measures itself, since the arrival of each response is timed on the testing machine. So from this perspective the answer could simply be that JMeter is faster.
The second thing to mention is the wait times mentioned by BlackGaff.
Always check results with the View Results Tree listener in JMeter.
And always put the testing application on separate hardware to see real results, since the testing application itself loads its host machine.