Enormous JMeter Results file

I'm running a very simple JMeter test (screenshot attached) that hits 2 web pages. It's set to run for 6 hours but for some reason the results file is enormous. The last run came out at 41 GB and, as a result, I cannot generate the HTML report.
Any ideas as to why it's so big and what I can do about it?

Well, you're running a test with 100 users for 6 hours, and each user writes a separate line to the .jtl results file for each page it hits, like this:
1657532286921,298,HTTP Request,200,OK,Thread Group 1-1,text,true,,1591,116,1,1,http://example.com/,295,0,163
so it's roughly 100 bytes per user per page.
One iteration (100 users hitting 2 pages) would be about 20 kilobytes.
If the response time of each page is 1 second, each user can execute 3600 requests per hour, i.e. 21,600 requests over 6 hours. 100 users will execute 2,160,000 requests, producing a results file of roughly 200 megabytes. The file grows in direct proportion as response times shrink: pages that answer in a few milliseconds produce hundreds of millions of lines, which is how you end up at 41 GB.
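To make the back-of-the-envelope maths concrete, here is a small illustrative calculation (my own sketch with assumed numbers, not part of the original answer):

// Rough .jtl size estimate: one result line per request, ~100 bytes each.
public class JtlSizeEstimate {
    public static void main(String[] args) {
        int users = 100;
        double responseTimeSec = 1.0;   // assumed per-page response time
        long durationSec = 6 * 3600;    // 6-hour test
        long bytesPerLine = 100;        // approximate length of one result line

        long linesPerUser = (long) (durationSec / responseTimeSec);
        long totalLines = linesPerUser * users;
        System.out.printf("%,d lines, ~%,d MB%n",
                totalLines, totalLines * bytesPerLine / 1_000_000);
        // responseTimeSec = 1.0   -> 2,160,000 lines, ~216 MB
        // responseTimeSec = 0.005 -> 432,000,000 lines, ~43,200 MB (close to the 41 GB observed)
    }
}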
There is nothing you can do to shrink an existing results file; what you can do is increase JMeter's heap space so the file fits into memory for report generation (if you don't have enough RAM you can use swap for this, at the cost of speed).
An alternative option is using the Backend Listener to send the results to InfluxDB; in that case JMeter sends only aggregate data, so the size will be much lower, and the results can be visualized in Grafana. More information: JMeter + Grafana: How to Use Grafana to Monitor JMeter

Related

JMeter: Distributed testing

A very common scenario which we all face.
I have a master and 2 slaves.
A CSV data set with 20 unique users.
I want to run 10 users on each slave simultaneously.
Should I split the 20 users into 2 files of 10 each and upload a CSV to each of the slaves? Or put all 20 in each slave and set the thread count to 20?
I want to run all the users, but not twice.
As always, looking forward to your thoughts.
Best.
The scenario is not common and it doesn't make a lot of sense either. Whatever.
All JMeter slaves execute the same test plan and know nothing about each other, so:
split the 20 users into 2 files of 10 each and upload one CSV to each of the slaves (a splitting sketch follows below);
set the number of threads in the Thread Group to 10; in the case of 2 slaves you will have 20 virtual users in total.
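For completeness, here is a minimal illustrative helper (my own sketch, not from the answer) that splits the 20-line users.csv into two 10-line files, one per slave; the file names are hypothetical:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class SplitCsv {
    public static void main(String[] args) throws IOException {
        // read the full 20-user file and give each slave its own half
        List<String> lines = Files.readAllLines(Paths.get("users.csv"));
        Files.write(Paths.get("users_slave1.csv"), lines.subList(0, 10));
        Files.write(Paths.get("users_slave2.csv"), lines.subList(10, 20));
    }
}

Each slave then points its CSV Data Set Config at its own file, so no user is exercised twice.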
More information:
Apache JMeter Distributed Testing Step-by-step
How to Perform Distributed Testing in JMeter

JMeter - Sending request data from csv files with a time interval

I have 1000 records in a csv file. I want to send all the records in packets of 50 records (all different) per minute in JMeter. Please guide me on the JMeter configuration for this.
Flow: records 1-50 in the 1st minute, then 51-100 in the 2nd minute, then 101-150 in the 3rd minute, ..., 951-1000 in the 20th minute.
If you want to send 50 records per minute evenly distributed, i.e. about 0.83 requests per second, go for the Constant Throughput Timer (see the arithmetic sketch after this list).
If you want to send 50 records in one shot, then wait for 1 minute, then send the next 50, go for a Synchronizing Timer plus a Flow Control Action sampler.
Records from the CSV can be retrieved using the CSV Data Set Config.
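One detail worth spelling out (my own note, not from the answer): the Constant Throughput Timer's target field is expressed in samples per minute, so the requirement maps directly to a target value of 50. Illustrative arithmetic:

public class PacingMath {
    public static void main(String[] args) {
        int recordsPerMinute = 50;                           // the requirement, and the timer's target value
        double requestsPerSecond = recordsPerMinute / 60.0;  // ~0.83 req/s, as quoted in the answer
        int totalRecords = 1000;
        int minutesNeeded = totalRecords / recordsPerMinute; // 20 minutes for the whole file
        System.out.printf("%d samples/minute = %.2f req/s; %d records take %d minutes%n",
                recordsPerMinute, requestsPerSecond, totalRecords, minutesNeeded);
    }
}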

In JMeter I want to access 1000 different URLs with 1000 users concurrently (one URL per user at the same time)

Using ${path} in the 'Paths:' field and providing the CSV file location in 'Filename' under the CSV Data Set Config, I am able to get a single user accessing the URLs one after the other from the CSV file.
But to complete my test, I want 1000 users to access 1000 URLs concurrently to demonstrate the maximum load on a database server. Please advise.
I am on JMeter 5.0
Define a CSV Data Set Config with the (default) Sharing mode "All threads", meaning the file is shared between all the threads, and place it at the same hierarchy level as the sampler.
In the Thread Group, set Number of Threads to 1000.
When you execute the test, each thread will get a different line/values from the CSV.
If you want to have "bursty load":
Set "Number of Threads" under your Thread Group to 1000
Add a Synchronizing Timer as a child of your HTTP Request sampler and set "Number of Simultaneous Users to Group by" to 1000
Your test plan is then just the Thread Group, the HTTP Request sampler, and the Synchronizing Timer beneath it (the original answer showed this as a screenshot).
It will execute 1000 requests at exactly the same moment and stop.
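Conceptually, the Synchronizing Timer acts as a barrier: every thread blocks until the configured number have arrived, and then all of them are released at once. A plain-Java analogy of that behaviour (my own illustration, scaled down, not JMeter internals):

import java.util.concurrent.CyclicBarrier;

public class BurstDemo {
    public static void main(String[] args) {
        int users = 10; // stand-in for the 1000 threads in the real plan
        CyclicBarrier barrier = new CyclicBarrier(users,
                () -> System.out.println("releasing the burst"));
        for (int i = 0; i < users; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    barrier.await(); // block until the whole group has arrived
                    System.out.println("user " + id + " fires its request");
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}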
If you want "prolonged load", just let your Thread Group iterate "Forever". You can limit the test duration using the "Scheduler" inputs.
It will then execute requests with 1000 virtual users as fast as it can for the configured duration (10 minutes in the original example).

How to use data from a .csv file with the Ultimate Thread Group

I have a use case where 10,000 users hit the API sequentially:
first 1000 users/sec hit the API, then they hold for 10-15 seconds, and then another 2000 users access the API.
The issue is that my API is <path>/user_id/${userId} and I have 10,000 user ids stored in a .csv file.
How do I fetch the first 1000 user ids from the file for the first set and the next 2000 for the second?
I have added a CSV Data Set Config with the .csv file path.
(Screenshots of the CSV Data Set Config, the Beanshell error, and the GetUserID API request are not reproduced here.)
Ashu
To pick the first 1000 userIds for the first 1000 threads, the next 2000 userIds for the next 2000 threads, and so on, follow these steps:
Create a csv file containing only userIds (do not put a column-name header in the csv).
To the JMeter test plan add a simple Thread Group, and a Beanshell Sampler inside it.
Add the following code to the Beanshell Sampler:
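The original post does not reproduce the code; a minimal Beanshell sketch that matches the property-naming scheme used below might look like this (the csv path is hypothetical):

import java.io.BufferedReader;
import java.io.FileReader;

// read userIds.csv line by line and expose each id as a JMeter property
// named user_id_0, user_id_1, ... (0-based, matching the lookup below)
BufferedReader reader = new BufferedReader(new FileReader("/path/to/userIds.csv"));
String line;
int index = 0;
while ((line = reader.readLine()) != null) {
    props.put("user_id_" + index, line.trim()); // 'props' is the JMeter properties object Beanshell exposes
    index++;
}
reader.close();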
The above code will add the userIds to JMeter properties.
Now, to pick a userId in your main Thread Group, use:
${__P(user_id_${__longSum(${__threadNum},-1,)})}
(__threadNum returns the current thread's number, which is 1-based; __longSum subtracts 1 to make it 0-based; __P then reads the matching user_id_<n> property.)
I have created a sample test plan that picks only the first 10 values from the csv for the first minute and the next 10 for the next minute (the screenshot is not reproduced here).
I would recommend running the tests in the cloud.

Your application is submitting requests to Amazon Elastic Transcoder faster than the maximum request rate

There is a Windows service which ingests video files delivered by some content providers. The Windows service then tries to create renditions of each given video file using Amazon Elastic Transcoder.
For each video file, around 15 renditions are created by creating one Job and then adding 15 outputs to it.
This works perfectly until I run my test project a few times in a row; then I get this error message: "Your application is submitting requests to Amazon Elastic Transcoder faster than the maximum request rate".
I get this error when merely testing the logic of my Windows service, while at production capacity this Windows service will ingest around 50,000 video files every day. That means I will be creating 50,000 jobs every day as well. For such a high volume of requests, Elastic Transcoder seems to be too weak.
Is there a configuration to increase the throttling limit on Elastic Transcoder? If there is not, what is the actual limit for creating jobs per minute?
Here's some documentation I found.
In short:
For each region, 4 pipelines per AWS account
Maximum number of queued jobs: 100,000 per pipeline
You can submit two Create Job requests per second per AWS account at a sustained rate; brief bursts of 100 requests per second are allowed.
In other words, you shouldn't have a problem with 50k jobs a day (that averages out to roughly 0.6 jobs per second), so long as you don't submit them at a rate higher than 2 jobs/second for any sustained period of time.
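If the test project is bursting above that limit, simple client-side pacing keeps the submissions within it. A minimal sketch (my own illustration; the submit callback stands in for whatever Elastic Transcoder Create Job call the service makes):

import java.util.List;
import java.util.function.Consumer;

public class ThrottledSubmitter {
    private static final long MIN_INTERVAL_MS = 500; // 2 requests/second sustained

    // submit every job, sleeping as needed so no two submissions
    // are closer together than MIN_INTERVAL_MS
    public static <T> void submitAll(List<T> jobs, Consumer<T> submit) throws InterruptedException {
        long last = 0;
        for (T job : jobs) {
            long wait = last + MIN_INTERVAL_MS - System.currentTimeMillis();
            if (wait > 0) Thread.sleep(wait);
            last = System.currentTimeMillis();
            submit.accept(job); // e.g. the Create Job request goes here
        }
    }
}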
