I am testing REST APIs, each of which returns a 10 MB response body.
During a load test, JMeter throws an Out of Memory exception at just 20 threads.
I concluded that this is because these APIs have huge response bodies: when other APIs with fairly small response bodies are tested, I am able to scale up to 500 threads.
I have tried all the options commonly recommended for avoiding the Out of Memory exception:
Running in Non GUI mode
All Listeners Off - generating JTL from command line
Disabled all assertions, relying on application logs
Groovy used as the scripting language in JSR223 elements, to emulate pacing between requests
Heap Size increased to 12 GB
JMeter 5.4.1 and the JDK are in use
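For reference, the non-GUI run and heap increase above can be combined like this (the file names are placeholders for your own test plan and results file):

```shell
# Raise the heap before launching JMeter in non-GUI mode;
# the HEAP variable is read by the jmeter startup script.
export HEAP="-Xms1g -Xmx12g"
jmeter -n -t test-plan.jmx -l results.jtl
```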
Distributed load testing from multiple JMeter host machines does not seem to help either: when I reduced the number of threads to 10 for the same APIs, the Out of Memory exception still came up.
How can I effectively handle a huge response body to a request made from JMeter?
If you don't need to read or assert the response, you can reduce memory and disk space usage: check "Save response as MD5 hash" in your HTTP Request sampler's Advanced tab.
Save response as MD5 hash? If this is selected, the response body is not stored in the sample result; instead, the 32-character MD5 hash of the data is calculated and stored. This is intended for testing large amounts of data.
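As an illustration of what gets stored, here is a minimal standalone sketch (the class name and sample input are made up) of computing the same 32-character hex MD5 digest that JMeter would keep in place of the full response body:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Md5Demo {
    // Compute the 32-character hex MD5 digest of the data, which is
    // what JMeter stores when "Save response as MD5 hash" is enabled.
    static String md5Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // A 10 MB body would shrink to this fixed-size 32-character hash.
        String hash = md5Hex("large response body".getBytes(StandardCharsets.UTF_8));
        System.out.println(hash + " (" + hash.length() + " chars)");
    }
}
```

Whatever the response size, only the fixed-size hash is retained, which is why this helps with huge bodies.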
Related
Just starting with JMeter and making some experiments, I found something that looks odd to me. I connected JMeter to InfluxDB and measured the average response time of a single request in an infinite loop. When I stopped the test, I realized that the last time in the results CSV created by JMeter is not the same as the one taken by InfluxDB; specifically, JMeter's last measurement is 13 s higher than the one registered by InfluxDB. Any ideas on what could be happening?
I've tried to google it but haven't found any documentation or related problem.
JMeter sends aggregated metrics; that is, it doesn't send each and every SampleResult, but collects the results within a "window" whose default value is 5 seconds, controllable via the backend_influxdb.send_interval JMeter property.
And metrics which are being sent are described here
You can try decreasing the 5-second window by amending the aforementioned backend_influxdb.send_interval JMeter property, setting it to e.g. 1000 ms so that JMeter sends the data more often. However, this creates extra overhead, so make sure that JMeter has enough headroom to operate and that increasing the metrics sending rate doesn't affect the overall throughput.
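For example, the property can be overridden on the command line (the test plan name is a placeholder; the value and its unit follow the suggestion above):

```shell
# Override the metrics-sending window for this run only;
# the same line can go into user.properties without the -J prefix.
jmeter -n -t plan.jmx -Jbackend_influxdb.send_interval=1000
```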
I am running a JMeter script at 100 RPS/TPS using the Throughput Shaping Timer on a Linux VM in non-GUI mode, but I am not able to reach the desired TPS/RPS even though enough RAM and CPU resources are available.
I took a thread dump and saw that 195 out of 200 threads are in the BLOCKED state. The thread dump analysis is available at:
Thread Dump Analysis
This is an API script which needs dynamic header generation before each request is executed. The dynamic headers are as follows:
contentMD5 - MD5 hash of request body
client - clientTypeAPP
nonce - unique current timestamp
apikey - sha512Hmac of the string generated by concatenating the method, body, path, MD5, etc.
The above headers are generated in a JSR223 PreProcessor and removed again in a JSR223 PostProcessor after the sampler executes.
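A standalone sketch of such header-generation logic might look like the following; the key, body, and concatenation order are assumptions, since the real signing scheme is API-specific:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HeaderDemo {
    // contentMD5: Base64-encoded MD5 digest of the request body.
    static String md5Base64(String body) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(body.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    // apikey: hex-encoded HMAC-SHA512 of the concatenated string.
    static String hmacSha512Hex(String key, String msg) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA512");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA512"));
        StringBuilder sb = new StringBuilder();
        for (byte b : mac.doFinal(msg.getBytes(StandardCharsets.UTF_8))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String body = "{\"demo\":true}";
        String contentMD5 = md5Base64(body);
        // nonce: unique current timestamp.
        String nonce = String.valueOf(System.currentTimeMillis());
        // Assumed concatenation order and secret; adjust to your API's scheme.
        String apikey = hmacSha512Hex("secret", "POST" + body + "/path" + contentMD5 + nonce);
        System.out.println(contentMD5 + " " + nonce + " " + apikey);
    }
}
```

In a JSR223 PreProcessor the results would typically be added to the sampler's HeaderManager; keeping this logic in an external script file (as suggested below) lets Groovy compile and cache it.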
The thread dump shows that the problem is with your JSR223 test elements: most probably there is an issue with your Groovy code, and most probably you're inlining JMeter functions or variables there.
Make sure to remove all occurrences of calls to JMeter functions or variables from the script body
Try using script files rather than putting your code into JSR223 test elements
Use a profiler tool to get to the bottom of your code issues
If you hit the limits of the Groovy scripting engine's performance, consider either moving your custom code logic into a JMeter plugin or switching to distributed testing
I have an HTTP Request sampler in a test plan. I am sending a POST request with a very large body (10 MB).
I am reading the body data from a file using ${__FileToString(data.json)} and testing it with 300 threads.
Running it gives an out-of-memory error.
Each thread stores its own copy of the (10 MB) file in RAM, which is causing the error.
Is it possible to make all the threads access the same copy instead of creating duplicates?
As of JMeter 5.5, the default maximum Java heap size is 1 GB.
You would need at least 10 MB * 300 == 3 GB, so you will need to increase the Java heap space accordingly. See the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article for more details.
As of now there is no way to share a reference to the large object between threads; you can consider using the HTTP Raw Request sampler instead, which has a nice feature of streaming the request body directly to the backend without loading it into memory first.
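The underlying idea, streaming the body chunk by chunk instead of materializing it in memory, can be sketched in plain Java; the URL and file name below are placeholders, and this is an illustration of the technique rather than the sampler's actual implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamingUpload {
    // Copy in fixed-size chunks so only one small buffer is ever in memory,
    // no matter how large the payload is.
    static long streamCopy(InputStream in, OutputStream out) throws Exception {
        byte[] buf = new byte[8192];
        long total = 0;
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    // Sketch of a streaming POST: chunked streaming mode tells
    // HttpURLConnection not to buffer the full body before sending.
    static int postFileStreaming(String url, Path file) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setChunkedStreamingMode(8192);
        try (InputStream in = Files.newInputStream(file);
             OutputStream out = conn.getOutputStream()) {
            streamCopy(in, out);
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        // Demonstrate the chunked copy on an in-memory 10 MB payload.
        byte[] payload = new byte[10 * 1024 * 1024];
        long copied = streamCopy(new ByteArrayInputStream(payload), new ByteArrayOutputStream());
        System.out.println(copied + " bytes copied");
    }
}
```

With ${__FileToString()}, by contrast, every thread builds the full 10 MB string before the sampler even runs, which is exactly what exhausts the heap.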
I have a Spring Boot application with a POST endpoint that takes the request, sends it to another service, gets the response back, saves it to a MongoDB database, and returns the response to the user. The application is deployed on Spring Boot's embedded Tomcat. I am using JMeter to measure the maximum response time, throughput, etc.
When I ran a test from JMeter with 500 threads for 10 minutes, I got a maximum time of around 3500 ms.
When I repeat the test from JMeter, the maximum time drops to around 900 ms.
Again, if I run the test after a long time, the maximum goes back up to 3500 ms.
I am not able to find any information regarding this behavior of Tomcat.
Could you please help me understand this behavior of Tomcat?
What do you mean by "unexpected"? The lower response time when you repeat the test can be explained by your application implementation: when you start a load test against a freshly deployed application, its performance might not be optimal yet, and when you repeat the test the cache is "warmed up", so you're getting better performance.
Another explanation could be JIT optimization: the JVM analyzes your application's usage pattern and performs internal bytecode improvements to better serve the given load pattern.
A third possible explanation is MongoDB caching: if 500 users are sending the same requests, it might be the case that the database keeps the result sets in memory, so when you repeat the test it doesn't actually access the storage but returns the results directly from memory, which is fast and cheap. Consider properly parameterizing your JMeter test so that each thread (virtual user) uses its own credentials and performs a different query than the other threads. Keep in mind, though, that the test needs to be repeatable, so don't use unique data each time; it's better to have a sufficient set of pre-defined test data.
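A repeatable parameterization scheme can be as simple as a fixed pool of pre-defined users indexed by thread number; the names below are made up for illustration:

```java
import java.util.List;

public class TestDataPool {
    // A fixed, pre-defined pool: each virtual user gets its own row,
    // so threads differ from each other but every test run is identical.
    static final List<String> USERS = List.of("user01", "user02", "user03", "user04");

    static String userForThread(int threadNum) {
        // Wrap around so any thread count works with a finite pool.
        return USERS.get(threadNum % USERS.size());
    }

    public static void main(String[] args) {
        for (int t = 0; t < 6; t++) {
            System.out.println("thread " + t + " -> " + userForThread(t));
        }
    }
}
```

In JMeter the same effect is usually achieved with a CSV Data Set Config over a prepared data file, which is both repeatable and different per thread.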
I am running JMeter with only one thread, and it still eventually consumes all available memory. The test runs GETs, HEADs, and PUTs driven by a large CSV file (>3 million rows). Some of the files are large (>1 GB), but most are of average size; regardless, JMeter seems to die well before it gets to those huge files anyway.
I have all listeners disabled and am running from the console, which should also reduce the overhead. I have also given the heap 10 GB of memory, which I would think is plenty.
I have attached the jmx file if that helps:
jmx runfile
Is there some process I can run to clean up/purge as it's processing, or am I doing something wrong in the plan itself?
Sample CSV:
PUT,500,path_name_here,filename_1,replication=false
GET,1500,path_name_here,filename_2,allowredirect=true
GET,500,path_name_here,filename_3,allowredirect=true
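For reference, each of these CSV lines splits into five fields; a minimal parser sketch (the field meanings are inferred from the sample, not documented) could look like:

```java
public class CsvLineParser {
    // Split one line of the sample CSV into its five fields, assumed to be:
    // method, a numeric value (e.g. a delay or size), path, file name,
    // and a key=value option.
    static String[] parse(String line) {
        String[] fields = line.split(",", -1);
        if (fields.length != 5) {
            throw new IllegalArgumentException("expected 5 fields, got: " + line);
        }
        return fields;
    }

    public static void main(String[] args) {
        String[] f = parse("PUT,500,path_name_here,filename_1,replication=false");
        System.out.println(f[0] + " -> " + f[3] + " (" + f[4] + ")");
    }
}
```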
You seem to be suffering from the problem described here; try switching to the HTTP Raw Request sampler, which can upload large files without loading them fully into memory.
You can install HTTP Raw Request sampler using JMeter Plugins Manager: