JMeter: Network Latency, CPU Usage and Memory

How do I set up the PerfMon Metrics Collector properly? I have just installed JMeter Plugins and added PerfMon to my Test Plan.

Network Latency is something JMeter measures itself; you don't need to collect it additionally:
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
In order to measure CPU and RAM usage:
Download PerfMon Server Agent and install it on the server which you would like to monitor
Launch the ServerAgent (make sure that TCP and UDP inbound/outbound traffic is allowed on port 4444 in the firewall); see the example commands after this list
Add PerfMon Metrics Collector listener to your Test Plan
Configure it with the host/IP of the machine where the ServerAgent is running, port 4444 and the metrics you want to collect (e.g. CPU and Memory).
Run your test. Make sure it lasts longer than several seconds - you should see the CPU and Memory usage charts plotted.
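For reference, here is a minimal sketch of starting the ServerAgent from the unpacked plugin distribution. The startAgent scripts and the default port 4444 come from the jmeter-plugins documentation; the --tcp-port/--udp-port switches are only needed if you want to override the defaults:

# on the monitored Linux/Unix server
./startAgent.sh

# on a monitored Windows server
startAgent.bat

# example with explicit ports (only if the default 4444 is not suitable)
./startAgent.sh --tcp-port 4444 --udp-port 4444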
Check out How to Monitor Your Server Health & Performance During a JMeter Load Test article for comprehensive instructions on PerfMon installation, configuration and usage.

Related

Can I use only Dynatrace for performance testing in place of Apache JMeter (or other testing tools)?

Can I use only Dynatrace for load testing/soak testing/capacity testing etc. in place of Apache JMeter (or other testing tools)?
I can see load testing reports in Dynatrace, but is Dynatrace an alternative to Apache JMeter (or other JMeter-like testing tools)?
Dynatrace is an APM tool: it will not create any load, but it can be used for collecting various metrics from the system under test like CPU, RAM, network, disk, swap usage, HTTP calls, database calls, application-specific metrics, etc.
JMeter is the tool which generates the load by simulating the behaviour of real users, but it doesn't collect any metrics from the system under test (unless you use a special plugin like the JMeter PerfMon Plugin). It just sends requests, waits for the responses and measures the time in between, along with other metrics like connect time and latency; after that it calculates average response times and percentiles so you can correlate the increasing load (number of active threads, i.e. virtual users) with the changing response time, errors per second or transactions per second.
So:
JMeter (or another load testing tool) is used for generating the load
Dynatrace (or another APM tool) is used for monitoring the application while it is under load, in order to figure out the root cause of any performance problems

Performance testing bottleneck: microservice

I am using JMeter to load test my API server (running on Tomcat), which in turn calls a microservice using Thrift (20k requests/min).
I am using New Relic for monitoring. I have observed that an abnormally high amount of time is spent when the API calls the microservice (ranging from 10-15 seconds). So I observed the microservice over the same duration: its response time was almost negligible (10-12 milliseconds).
So I suspect the API is queueing up the responses because it is unable to accept the rate at which it is receiving responses from the microservice. To address this I doubled the Xmx and Xms values of my API Java application.
I am still observing the same behaviour. What could be the bottleneck that I am missing?
Make sure that your API running on Tomcat has enough headroom in terms of CPU, RAM, network, disk, etc., as a lack of any of these might be slowing things down. You can use the JMeter PerfMon Plugin for this
Make sure that Tomcat itself is configured for high loads, as the threads might be queuing up on the Tomcat HTTP Connector, i.e. if the executor has fewer threads than the number of connections you establish, the requests will queue up even before reaching your API; see the example connector settings after this list
Re-run your test with profiler tool telemetry, i.e. set up JProfiler or YourKit monitoring; this way you will learn where your API spends most of its time and what the underlying reason is
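For illustration, a sketch of the kind of server.xml Connector settings the second point refers to. maxThreads, maxConnections and acceptCount are standard Tomcat HTTP Connector attributes; the values below are arbitrary examples, not recommendations:

<!-- conf/server.xml: maxThreads = request-processing threads,
     maxConnections = connections Tomcat will accept and keep,
     acceptCount = queue length once all threads are busy (values are illustrative) -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="400"
           maxConnections="10000"
           acceptCount="200" />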

JBoss WildFly 10 Performance Tuning using JMeter

This is regarding the use of the JMeter tool to test a REST API and check the throughput.
I am pretty new to using the JMeter tool.
Coming to my application, it is a simple REST API which converts an XLS file to JSON-formatted data based on a few conditions.
This runs on a server (WildFly v10).
Configuration in my JMeter:
Number of Threads: 1000
Ramp-up time: 10
Loop Count: 1
The throughput remains constant at 10-12 hits per second.
I also made a few configuration changes for the JBoss WildFly 10 server in the standalone.xml file for different subsystems, as shown below:
1) Configuring the undertow subsystem:
modified the default max HTTP connections from 10 through 100 up to 1000
<http-listener name="default" max-connections="1000" socket-binding="http" redirect-socket="https" enable-http2="true" buffer-pipelined-data="true" />
2) Configuring the io subsystem:
configured io-threads and task-max-threads from 10 through 100 up to 1000
<worker name="default" io-threads="100" task-max-threads="100" />
3) Configured the standalone.conf file for Java VM options:
OLD: JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"
NEW: JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:NewRatio=2 -XX:PermSize=64m -Djava.net.preferIPv4Stack=true"
4) Configuring the infinispan subsystem:
which has a <cache-container> to configure the thread pool executor in the threads subsystem. It governs the allocation and execution of runnable tasks in the replication queue.
5) Tried running my application on a remote system having 64 GB RAM, with the 3rd configuration mentioned above.
6) Configuring a high value for core threads in the JCA subsystem:
<core-threads count="50"/> in subsystem urn:jboss:domain:jca:4.0
None of these configurations helped me increase the throughput.
Can anybody please help me understand what actually has to be modified or configured to increase the throughput of my server when it is tested through JMeter?
There are too many possible reasons; I'll list only a few of the most common recommendations:
Your machine running JBoss may simply be overloaded and unable to respond faster due to a banal lack of CPU or free RAM, intensive swapping, or similar. Make sure you monitor the resources of the application under test while your test is running: it will not only allow you to correlate increasing load with increasing utilisation of system resources, but you will also be able to tell whether slow response times are connected with a lack of hardware capacity. You can use the JMeter PerfMon Plugin to integrate monitoring with the JMeter test; check out How to Monitor Your Server Health & Performance During a JMeter Load Test for more details on the plugin installation and usage
The JMeter load generator can suffer from the same problem, with the same impact on the throughput metric: if JMeter is not able to send requests fast enough, the application under test won't be able to reply faster, so in some situations JMeter itself is the bottleneck. Make sure you are following the JMeter Best Practices (see the example command line after this list) and that JMeter has enough headroom to operate in terms of hardware resources. Apply the same monitoring to the JMeter load generator(s) and keep an eye on CPU, RAM, network and disk usage; when any of the metrics exceeds, say, a 90% threshold, that is the maximum load you can achieve with a single JMeter instance.
Re-run your load test, but this time with profiler tool telemetry (for example JProfiler or YourKit); this will allow you to see the most resource- and time-consuming methods so you can identify which part(s) of the code need optimisation.
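As a concrete illustration of the Best Practices point above, load tests should be run in non-GUI mode; a typical command line (file and folder names here are placeholders) looks like:

# run the test plan in non-GUI mode, write raw results to a .jtl file
# and generate the HTML dashboard report into an empty folder
jmeter -n -t test-plan.jmx -l results.jtl -e -o report-folder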

Factors to consider when using the JMeter tool

I would like to know which factors I need to consider when using JMeter. Most of the time the internet speed will vary, due to which I don't get accurate response times, and there are also operations on the server side (CPU utilization, etc.).
Do I need to consider all these points when calculating the performance of the application?
In regards to "internet speed vary", JMeter is smart enough to detect it and report as Latency metric. As per JMeter glossary:
Latency.
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client
So you should be able to subtract the latency from the overall response time and calculate the time required to process your request on the server side. However, it is much better if the JMeter load generators live on the same intranet as the system under test. If you need to test your application's behaviour when virtual users are sitting on different network types, this can also be simulated.
In regards to other factors that matter:
Application under test health. You should be monitoring baseline server-side health metrics to identify whether the application server(s) get overloaded during the load test, e.g. if you see high response times the reason could be as simple as a lack of free RAM, a slow hard drive, or similar.
JMeter load generator(s) health. The same approach is applicable to the JMeter engine(s): if the JMeter hosts are overloaded they cannot generate and send requests fast enough, which will be reported as reduced throughput.
You can use the PerfMon JMeter Plugin for both. See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for a detailed description of the plugin installation and usage.
Your tests need to be realistic and represent a virtual user as closely to a real one as possible. So make sure you:
Add Timers to your test plan to represent "think time"
If you are testing a web-based application, consider adding and properly configuring the HTTP Cookie Manager, HTTP Header Manager and HTTP Cache Manager. Also don't forget to configure the HTTP Request Defaults to "Retrieve all embedded resources" and to use a "Parallel pool" for this (see the sketch after this list).
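For reference, a sketch of how these embedded-resource options typically appear inside a saved .jmx file under the HTTP Request Defaults config element (the property names assume a reasonably recent JMeter version; the pool size of 6 is just an example):

<!-- inside the HTTP Request Defaults ConfigTestElement -->
<boolProp name="HTTPSampler.image_parser">true</boolProp>   <!-- retrieve all embedded resources -->
<boolProp name="HTTPSampler.concurrentDwn">true</boolProp>  <!-- download them in parallel -->
<stringProp name="HTTPSampler.concurrentPool">6</stringProp> <!-- parallel pool size -->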

Is PerfMon in JMeter analysing the CPU utilisation of the local machine or of the server where the application is hosted?

I just need to know: does the PerfMon plugin used in the JMeter tool analyse the CPU, memory and disk utilisation of the local machine or of the server where the application is hosted?
Because, as a user, when we give an IP and port, we give the details of the remote machines when we perform a load test.
Please let me know.
As per the JMeter Scientist,
The PerfMon listener was implemented in the following way: the Host collects PerfMon, Remote nodes don't collect PerfMon.
So, the Master will collect metrics from the Slaves.
This might help.
The PerfMon Metrics Collector fetches performance metrics from the ServerAgent(s) via TCP or UDP, and it's up to you where you install the ServerAgent(s): on the JMeter side, on the Application Under Test side, or both.
Normally the ServerAgent(s) is/are installed on the Application Under Test side, i.e. web servers, database servers, load balancers, etc., to measure the load on that end; however, if you want to collect performance stats from the load generators as well, feel free to install the ServerAgent(s) on the JMeter machine(s) too.
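For example (the host name below is a placeholder and the choice of metrics is arbitrary), the rows in the PerfMon Metrics Collector could look like:

Host/IP              Port   Metric to collect
app-server.example   4444   CPU
app-server.example   4444   Memory
127.0.0.1            4444   CPU      (the JMeter load generator itself, optional)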
See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for comprehensive information on PerfMon installation and usage.

Resources