I am trying to test a webpage using multiple remote slaves. The performance results of the web server vary depending on which JMeter client (JMeter master) I use.
I am testing in non-GUI mode with just one remote slave, but I found that I get different results with the same remote slave when using a different master.
The slave node is a dedicated server with an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz and 32GB RAM (10GB dedicated to JMeter).
When I use the JMeter master on a virtual machine with 2 CPUs (Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz) and 3.7GB RAM, from the same provider as the slave node, the test result for my webpage is only 50 transactions/s.
When I switched the JMeter master to Google Cloud (an n1-standard-1 machine with 1 Intel(R) Xeon(R) CPU @ 2.50GHz and 3.75GB RAM) and used the same slave node, the result is 130 transactions/s.
The JMeter master setup is the same in both cases. I really have no clue why these results differ. From my understanding, the JMeter master (client) only collects the results from the remote slaves and the traffic is generated by the remote slave, so the results should be the same.
You are definitely hitting the limits of your local slave. I would suggest measuring the OS-level metrics, i.e. usage of CPU, RAM, swap, network and disk, Java heap, Java garbage collections, etc.
You can do this either with built-in tools or with the JMeter PerfMon plugin, which allows monitoring of more than 70 metrics; it should help you identify the bottleneck, which in this case could be connected with JMeter itself.
See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for plugin setup, configuration and usage instructions.
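For a quick first look on the slave itself, a minimal sketch using standard Linux tools and the JDK utilities (jps/jstat require a JDK; <jmeter_pid> is a placeholder for the PID that jps reports):
top                              # CPU, load average, per-process memory
free -m                          # RAM and swap usage
jps -l | grep ApacheJMeter       # find the PID of the remote JMeter engine
jstat -gcutil <jmeter_pid> 5000  # heap occupancy and GC activity every 5 s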
Could someone help me understand the difference between running a load test from a local system and setting up a master-slave system? How do they differ? What is the best practice for load testing a server?
If we are setting up master-slave, should both be in the same sub-network?
And can we generate the HTML report on the master system after running the script?
The number of virtual users you can simulate on one machine varies from several hundred to several thousand (check out the What’s the Max Number of Users You Can Test on JMeter? article for more details), but in any case it is limited.
Each thread (virtual user) has some "footprint" in terms of CPU, RAM, network and disk usage, so you need to ensure that the machine you're running JMeter on has enough capacity and is not overloaded. If JMeter is not capable of sending requests fast enough, you will get false-negative results: throughput will be low not because of a problem with the application under test, but because of a problem with JMeter.
So make sure to monitor essential OS health metrics (CPU, RAM, network, disk and swap usage) while you're running a load test. You can do this, for example, using the JMeter PerfMon Plugin.
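If monitoring shows that the JMeter JVM itself is short of memory, one common remedy is to give it a larger heap for the run. A hedged sketch, assuming the standard jmeter startup script (which honors the JVM_ARGS environment variable) and a machine with RAM to spare:
JVM_ARGS="-Xms4g -Xmx4g" ./jmeter -n -t your_test_plan.jmx -l result.jtl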
If you're able to generate the required load using only a single JMeter machine, running the test in distributed mode doesn't make much sense, as you will not get any new results.
However, if one machine cannot produce the required load, you will have to run a distributed test. The main idea is to have multiple JMeter instances executing the same Test Plan.
For example, if you have identified that you can run only 1000 virtual users on one machine and you need to simulate 3000, you will need 4 machines for this:
1 master machine to orchestrate the slaves and collect the results
3 slave machines each running 1000 virtual users.
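Each slave needs the JMeter server component running first. A minimal sketch, assuming a standard JMeter installation and running from its bin directory (replace IP.of.this.slave with the slave's own address):
./jmeter-server -Djava.rmi.server.hostname=IP.of.this.slave
(On JMeter 4.0 and newer you will also need an RMI keystore for SSL, or set server.rmi.ssl.disable=true.)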
Once you have started JMeter Server on each slave machine, you will be able to run your test in command-line non-GUI mode as follows:
jmeter -n -t your_test_plan.jmx -R IP.of.1st.slave,IP.of.2nd.slave,IP.of.3rd.slave -l result.jtl
If you want to generate the HTML Reporting Dashboard after the test run, you can do it as follows:
jmeter -n -t your_test_plan.jmx -R IP.of.1st.slave,IP.of.2nd.slave,IP.of.3rd.slave -l result.jtl -e -o /path/to/output/folder
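If you already have a result.jtl from an earlier run, the dashboard can also be generated from it separately; in both cases the output folder must be empty or not yet exist:
jmeter -g result.jtl -o /path/to/output/folder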
I want to simulate up to 100,000 requests per second, and I know that tools like JMeter and Locust can run in distributed mode to generate load.
But since there are cloud VMs with up to 64 vCPUs and 240GB of RAM on a single VM, is it necessary to run in a cluster of smaller machines, or can I just use 1 large VM?
Will I be able to achieve more "concurrency" with more machines, since a single large machine might run into a network bottleneck?
If I just use one big machine, would I be limited by the number of ports there are?
In the load generator, does every simulated "user" that sends a request also require a port on the machine to receive a 200 response? (Sorry, my understanding of how TCP ports work is a bit weak.)
Also, we use Kubernetes pretty heavily, but with JMeter or Locust I feel like it would be easier to run on a bare VM, without containerizing (even in distributed mode), while still maintaining reproducibility. Should I be trying to containerize JMeter or Locust and run them in Kubernetes instead?
According to the KISS principle, it is better to go with a single machine, assuming it is capable of generating the required load.
Make sure you're following JMeter Best Practices
Make sure you have monitoring of baseline OS health metrics (CPU, RAM, swap, network and disk IO, JVM statistics, etc.)
Start with a low number of users and gradually increase the load until you reach either the desired throughput or the limit of one of the monitored metrics, whichever comes first. If you run out of CPU, RAM or anything else, see what can be done to overcome the limitation (see the sketch at the end of this answer).
More information: What’s the Max Number of Users You Can Test on JMeter?
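One way to do the gradual increase without editing the test plan before every run is to parameterize the Thread Group via JMeter properties. A sketch where the property names threads and rampup are arbitrary examples: set the Thread Group's number of threads to ${__P(threads,10)} and its ramp-up period to ${__P(rampup,60)}, then pass the values on the command line:
jmeter -n -t your_test_plan.jmx -Jthreads=100 -Jrampup=300 -l run1.jtl
jmeter -n -t your_test_plan.jmx -Jthreads=500 -Jrampup=600 -l run2.jtl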
I am trying to use the JMeter PerfMon plugin to monitor the CPU and memory utilisation of a server.
The server is a Linux machine running Apache and PostgreSQL.
I am running the ServerAgent on the Linux server and have added CPU and memory metrics in the JMeter PerfMon Metrics Collector.
Now, when I run my JMeter tests, both Apache and PostgreSQL are used, and I can see some data coming into the metrics collector.
1) How can I find the CPU utilization of Apache and PostgreSQL while the tests are running?
2) I can see memory coming through as a straight line. I read in some other threads that this is because of the JVM's constant memory usage, but I am not able to understand why this is happening. The ServerAgent should report the memory utilization of all processes, not just the JVM. How can I get the actual memory usage in this case?
Neither Apache nor PostgreSQL uses a JVM. Are you sure you are running the ServerAgent on the correct host?
With regards to your question itself: it is possible to track per-process metrics.
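For example, you could apply a configuration along these lines in the PerfMon Metrics Collector (a sketch: 4444 is the ServerAgent's default TCP port, and the pid= values are placeholders):
Host       Port  Metric to collect  Metric parameter
localhost  4444  CPU                pid=4949
localhost  4444  Memory             pid=4949
localhost  4444  CPU                pid=3521
localhost  4444  Memory             pid=3521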
You will need to replace:
localhost with the hostname or IP address of the machine where Apache, PostgreSQL and the JMeter ServerAgent are running
4949 with the real PID of your Apache instance
3521 with the real PID of your PostgreSQL instance
Once done, you should see 4 charts showing Apache CPU usage, Apache memory usage, PostgreSQL CPU usage and PostgreSQL memory usage respectively.
See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for more information.
I just need to know whether the PerfMon plugin used in JMeter analyses the CPU/memory/disk utilization of the local machine or of the server where the application is hosted.
I ask because, as a user, when we give an IP and port, we give the details of the remote machines when we perform a load test.
Please let me know.
As per the JMeter Scientist, the PerfMon listener was implemented in the following way: the host collects PerfMon, remote nodes don't collect PerfMon.
So, Master will collect metrics from the Slaves.
This might help
The PerfMon Metrics Collector fetches performance metrics from ServerAgent(s) via TCP or UDP, and it's up to you where you install the ServerAgent(s): on the JMeter side, on the Application Under Test side, or both.
Normally the ServerAgent(s) is/are installed on the Application Under Test side, i.e. web servers, database servers, load balancers, etc., to measure the load on that end. However, if you want to collect performance stats from the load generators, feel free to install ServerAgent(s) on the JMeter machine(s) as well.
See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for comprehensive information on PerfMon installation and usage.
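For reference, a minimal sketch of starting the ServerAgent on a monitored host (assuming the ServerAgent package from JMeter Plugins has been unpacked there; the version number and path are examples, and 4444 is the default port):
unzip ServerAgent-2.2.3.zip -d /opt/serveragent
cd /opt/serveragent
./startAgent.sh --tcp-port 4444 --udp-port 0   # listen on TCP 4444, disable UDP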
My HTTP server can't handle load tests: it gives really high latency when multiple connections are made.
Server Configuration:
5 instances (0.5 vCore CPU, 512MB memory, 20GB disk each)
A load balancer
10G shared bandwidth
When I transfer a 3.5MB zip, it takes about 1 second when there is only one connection. However, when over 30 connections are made, it goes up to 20~50 seconds.
I am testing with JMeter on my laptop. Is there a possibility that my testing environment interferes with the load test?
If so, what would be a solution to improve my testing environment?
First of all, you need to monitor and pin down the problem(s).
Start off by collecting information on these four layers:
CPU Usage
Memory Usage
Network Usage
I/O Usage
All of them at the OS level (monitoring tools will vary depending on your OS).
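For example, on a Linux host a first pass could look like this (vmstat and free are usually preinstalled; iostat and sar come from the sysstat package):
vmstat 5       # CPU and memory pressure, run queue, swapping
free -m        # RAM and swap usage
iostat -x 5    # per-device disk utilisation and latency
sar -n DEV 5   # network throughput per interface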
Once you have this data and can narrow down the problem (CPU bound, network latency, I/O latency or whatever), an answer will emerge. Doing this (if it is the first time you are testing your app) will also give you scaling information about your environment and your application in general.