This is regarding the use of the JMeter tool to test a REST API and check its throughput.
I am pretty new to using JMeter.
As for my application, it is a simple REST API which converts an XLS file to JSON-formatted data based on a few conditions.
It runs on a WildFly 10 server.
Configuration in my JMETER:
Number of Threads: 1000
Ramp-up time: 10
Loop Count: 1
The throughput remains constant at 10-12 hits per second.
I also made a few configuration changes for the JBoss WildFly 10 server in the standalone.xml file for different subsystems, as shown below:
1) Configuring undertow subsystem:
increased the default max HTTP connections from 10 to 100 and then to 1000
<http-listener name="default" **max-connections="1000"** socket-binding="http" redirect-socket="https" enable-http2="true" buffer-pipelined-data="true" />
2) Setting io subsystem:
increased io-threads and task-max-threads from 10 to 100 and then to 1000
<worker name="default" **io-threads="100" task-max-threads="100"** />
3) Configured the standalone.conf file for Java VM options
OLD: JAVA_OPTS="-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"
NEW: **JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m** -XX:NewRatio=2 -XX:PermSize=64m -Djava.net.preferIPv4Stack=true"
4) Configuring infinispan subsystem:
which has a <cache-container> to configure the thread pool executor in the threads subsystem. It governs the allocation and execution of runnable tasks in the replication queue.
5) Tried running my application on a remote system with **64 GB RAM** and the 3rd configuration mentioned above.
6) Configuring a high value for core threads in the JCA subsystem
**<core-threads count="50"/>** in subsystem urn:jboss:domain:jca:4.0
None of these configurations helped me increase the throughput.
Can anybody please help me understand what actually has to be modified or configured to increase the throughput of my server when tested through JMeter?
There are too many possible reasons to cover them all; I'll list only a few of the most common recommendations:
The machine running JBoss may simply be overloaded and unable to respond faster due to a banal lack of CPU or free RAM, intensive swapping, or whatever. Make sure you monitor the application under test's resources while your test is running. This will not only allow you to correlate increasing load with increasing utilisation of system resources, but will also tell you whether slow response times are connected with a lack of hardware capacity. You can use the JMeter PerfMon Plugin to integrate monitoring with the JMeter test; check out How to Monitor Your Server Health & Performance During a JMeter Load Test for more details on the plugin installation and usage.
The JMeter load generator can suffer from the same problem with the same impact on the throughput metric: if JMeter is not able to send requests fast enough, the application under test won't be able to reply faster, so in some situations JMeter itself is the bottleneck. Make sure you follow JMeter Best Practices and that JMeter has enough headroom to operate from a hardware resources perspective (see the sketch after this list). Apply the same monitoring to the JMeter load generator(s) and keep an eye on CPU, RAM, Network and Disk usage; when any of the metrics exceeds, say, a 90% threshold, this is the maximum load you can achieve with a single JMeter instance.
Re-run your load test, but this time with profiler tool telemetry (for example JProfiler or YourKit). This will allow you to see the most resource- and time-consuming methods so you can identify which part(s) of the code need optimisation.
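As a rough sketch of how the two sides can be wired together (the test plan name is an assumption, adjust it to your setup):
# on the machine running WildFly: start the PerfMon Server Agent, it listens on port 4444 by default
./startAgent.sh --udp-port 4444 --tcp-port 4444
# on the load generator: run JMeter in non-GUI mode and write the results to a .jtl file
jmeter -n -t throughput-test.jmx -l results.jtl
With a PerfMon Metrics Collector listener in the test plan pointed at port 4444 you can then correlate the server's resource usage with the throughput you observe.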
Related
I have come across a situation where I can't decide which scenario would be best. I have written my test in JMeter as follows:
I have one test plan that runs the tests consecutively.
I have 4 thread groups and each thread group has the following properties:
No of threads: 8000
Ramp-up period: 60 sec
Loop count : 10
Same user on each iteration: true
I was getting connection errors and connection timeout errors.
So, to make it work, when I test from localhost (the same machine) I have to set a response timeout of 1800000 ms, whereas when I run the same test against a remote server I have to set a response timeout of 3600000 ms.
Can someone please advise:
Is it a good idea to set such a response timeout? Is there another issue I should be looking for instead of raising the response timeout? Is it a warning sign of a different problem?
Can I improve the test without relying on the response timeout?
First of all, a couple of questions to you:
Do you really think a real user will wait for an hour to get a response from the application?
How did you arrive at this figure of 8000 users?
Now recommendations:
Never run JMeter and the system under test on the same machine; they are both resource-intensive and will start competing for CPU cycles, memory pages, etc., and you won't be able to tell whether JMeter is not capable of sending requests fast enough or your application cannot respond properly.
Follow JMeter Best Practices
Although the number of virtual users you can simulate with JMeter is very high, it is limited by the machine/operating system resources, so make sure JMeter has enough headroom to operate in terms of CPU, RAM, Network and Disk IO. These metrics can be checked using the JMeter PerfMon Plugin. Once you notice that any of the monitored metrics starts exceeding, say, 80% of the total available capacity, note how many users are online; this is how many you can simulate from this machine for this test. If you need more, go for Distributed Testing (see the command sketch after this list).
The same applies to the system under test: if you hit its resource consumption limits you need either to upgrade the hardware or to deploy your application in clustered mode (if it supports such a mode).
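For reference, a minimal sketch of a distributed run, assuming two slave machines at the hypothetical addresses 192.168.0.11 and 192.168.0.12 running the same JMeter version, and the test plan saved as test-plan.jmx:
# on each load generator (slave): start the JMeter server process
jmeter-server
# on the controller (master): drive all the slaves and collect the results locally
jmeter -n -t test-plan.jmx -R 192.168.0.11,192.168.0.12 -l results.jtl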
How do I set up the PerfMon Metrics Collector properly? I have just installed JMeter Plugins and added PerfMon to my Test Plan.
Network latency is something JMeter measures itself; you don't need to collect it additionally.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
In order to measure CPU and RAM usage:
Download PerfMon Server Agent and install it on the server which you would like to monitor
Launch ServerAgent (make sure that TCP and UDP inbound/outbound traffic is allowed on port 4444 in the firewall)
Add PerfMon Metrics Collector listener to your Test Plan
Configure it to poll the Server Agent for the metrics you are interested in (an example configuration is shown after these steps)
Run your test. Make sure it lasts longer than several seconds - you should see the CPU and Memory usage charts plotted.
Check out How to Monitor Your Server Health & Performance During a JMeter Load Test article for comprehensive instructions on PerfMon installation, configuration and usage.
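For example, assuming the Server Agent runs on a hypothetical host 192.168.0.10 and listens on the default port 4444, the Metrics Collector rows could look like this (one row per metric):
Host/IP: 192.168.0.10 | Port: 4444 | Metric to collect: CPU
Host/IP: 192.168.0.10 | Port: 4444 | Metric to collect: Memory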
I am using JMeter to load test my API server (running on Tomcat), which in turn calls a microservice using Thrift (20k requests/min).
I am using New Relic for monitoring. I have observed that an abnormally high amount of time is spent when the API calls the microservice (ranging from 10-15 seconds). So I observed the microservice over the same duration; its response time was almost negligible (10-12 milliseconds).
So I suspected the API is probably queueing up the responses because it is unable to keep up with the rate at which it is receiving responses from the microservice. To address this I doubled the Xmx and Xms values of my API Java application.
I am still observing the same behaviour; what could be the bottleneck that I am missing?
Make sure that your API running on Tomcat has enough headroom in terms of CPU, RAM, Network, Disk, etc., as a lack of it might be slowing things down. You can use the JMeter PerfMon Plugin for this.
Make sure that Tomcat itself is configured for high loads, as the threads might be queuing up on the Tomcat HTTP Connector, i.e. if the executor has fewer threads than the number of connections you establish, the requests will be queuing up even before reaching your API (see the illustrative snippet after this list).
Re-run your test with profiler tool telemetry, i.e. set up JProfiler or YourKit monitoring; this way you will learn where your API spends most of its time and what the underlying reason is.
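For illustration only (the pool sizes below are assumptions, not recommendations), the relevant part of Tomcat's conf/server.xml could look like this:
<!-- shared thread pool sized for the expected concurrency -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="400" minSpareThreads="25"/>
<!-- HTTP connector using that pool; acceptCount is the queue for requests waiting for a free thread -->
<Connector port="8080" protocol="HTTP/1.1" executor="tomcatThreadPool" acceptCount="200" connectionTimeout="20000" redirectPort="8443"/>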
I am running a test to check whether my application is able to handle 250 concurrent users. The first time I ran the test the results were fine and the number of samples generated in the aggregate report was also fine, but when I run the same test again I get drastic changes in the aggregate report: this time the number of samples is reduced and the response time is higher, whereas CPU usage, memory usage and database server performance are all fine. For this I am using a Stepping Thread Group.
Please help me get rid of this issue.
What about CPU and RAM usage on the host you're running JMeter on? Make sure that:
You are running JMeter in non-GUI mode
You have all the listeners disabled
You have only the absolute minimum of pre/post-processors and assertions added/enabled
JMeter has enough JVM heap space (70-80% of your total physical RAM); see the example after this list
See JMeter Performance and Tuning Tips for detailed explanations and more JMeter configuration tricks
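For example (the heap size below is only an assumption, keep it within your available RAM), recent JMeter versions pick up the HEAP environment variable; on older ones edit the HEAP line in the jmeter/jmeter.bat startup script instead:
# give JMeter a bigger heap and run the test in non-GUI mode
export HEAP="-Xms1g -Xmx4g"
jmeter -n -t test-plan.jmx -l results.jtl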
Depending on the logic your application has, you might not be able to handle 250 threads on a single machine (not enough computing resources: RAM, NIC bandwidth, etc.). You haven't provided details about your machine's utilization during the test run or the JMeter logs for any warnings or errors. Check that.
We had the same kind of issues when we were testing a heavy application (with sessions and long user flows). A master-slave config can fully resolve the issue.
I have created a test plan for creating a user profile.
I want to run my test plan for 100 users. When I run it for 10 users it runs successfully with a ramp-up time of 2 sec, but when I try it for 100 users or more it fails; I am giving a ramp-up time of 40 sec for 100 users.
I am not able to understand what the problem might be.
In my test plan the thread users are differentiated by id.
Thanks in Advance.
It's a broad question; this behavior can be caused by:
Your application under test can't handle the load of 100 threads. Check the logs for errors and make sure that the application/web server and/or database configuration allows 100+ concurrent connections. You can also check the "Latency" metric to see whether the problem is with the infrastructure or the application itself.
Your load generator machine can't create 100 concurrent threads. If so - you'll need to consider JMeter Distributed Testing
Your script isn't optimized, i.e. it uses memory-consuming listeners like "View Results Tree", graph listeners, or regular expression extractors. Try following the JMeter Performance and Tuning Tips guide and see whether it resolves your issue.
Agree with Dmitri, the reason could be one of the above three.
One more thing you can try:
You can run JMeter in GUI mode to validate your script, and after validation run it in non-GUI mode, which will save a lot of memory and CPU processing (the GUI is basically the heaviest part of JMeter).
You can run your JMeter script in non-GUI mode like this:
jmeter -n -t test_plan.jmx -l results.jtl
(add -H proxy_host -P proxy_port only if you have to go through a proxy; test_plan.jmx and results.jtl are placeholders for your own file names)
Generally, on a single dual-core machine with 2 GB RAM (the load generator in your case) a 100-user test can be carried out successfully.
Some more things you can look at to find the actual bottleneck:
1. Check the application server logs (the server on which your application is hosted).
If there are any failures there, look at the performance counters on the server (CPU, memory, network, etc.) to see whether anything is overloaded
(if the server is Windows check with perfmon, if Linux try sar; see the example after this list).
If something is overloaded then the reason is that your app server can't take the load of 100 users;
try tuning it further.
2. Check the load generator's system performance counters (JVM heap usage, CPU, memory, etc.).
If the JVM heap size is too small, try increasing it, but if other counters are overloaded then try distributed load testing.
3. Remove unwanted/heavy listeners and assertions from the script.
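If the box runs Linux with the sysstat package installed, sar gives a quick view of those counters during the test, for example sampling every 5 seconds, 12 times:
sar -u 5 12       # CPU utilisation
sar -r 5 12       # memory utilisation
sar -n DEV 5 12   # network interfaces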
maybe this will help :)