Is there any time shift between JMeter and InfluxDB?

Just starting with JMeter and making some experiments, I found something that looks kind of odd to me. I connected JMeter with InfluxDB and measured the average response time of a single request in an infinite loop. When I stopped the test I realized that the last time in the results CSV created by JMeter is not the same as the one taken by InfluxDB. Specifically, JMeter's last measurement is 13 s higher than the one registered by InfluxDB. Any ideas on what could be happening?
I've tried to google it but haven't found any related documentation or similar problem.

JMeter sends aggregated metrics: it doesn't send each and every SampleResult, but collects the results within a "window" whose default value is 5 seconds, controllable via the backend_influxdb.send_interval JMeter property.
The metrics which are being sent are described here.
You can try decreasing the 5-second window by amending the aforementioned backend_influxdb.send_interval JMeter property and setting it, for example, to 1 second so JMeter would send the data more often, but this creates extra overhead, so make sure that JMeter has enough headroom to operate and that the increased metrics sending rate doesn't affect the overall throughput.
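As a minimal sketch (assuming the property value is interpreted in seconds, as the defaults in jmeter.properties suggest), the override can go into user.properties or be passed on the command line; file names here are examples:
# user.properties - send aggregated metrics every second instead of every 5 seconds
backend_influxdb.send_interval=1
# or as a one-off command-line override
jmeter -n -t test.jmx -l result.jtl -Jbackend_influxdb.send_interval=1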

Related

How to set 4000 users and 1 hour duration using JMeter

In JMeter I have a scenario like:
Load tested with 4000 users and 1 hour duration.
759,965 requests were made, out of which one request failed; on average 18,894.13 requests were made per second.
This was the earlier scenario and I want to reproduce the same scenario with the above information. Can someone guide me on how to set up the environment and also the results? I have designed my script using correlation with the help of the Regular Expression Extractor.
For the normal Thread Group the configuration would be something like:
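A rough sketch of the values (the ramp-up figure is an assumption, adjust it to how quickly the load should build up):
Number of Threads (users): 4000
Ramp-up period (seconds): 300
Loop Count: Infinite
Scheduler enabled, Duration (seconds): 3600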
It would also be a good idea to use some ramp-up period so the load would increase gradually and you could correlate increasing load with other metrics like response time or transactions per second.
You might also want to use one of the Custom Thread Groups which can be installed as JMeter Plugins; they provide an easy visual way to define the number of threads, test duration, ramp-up, ramp-down, time to hold the load, eventual spikes, etc.
Once you define your desired workload you should run your test in command-line non-GUI mode; with regard to the test results, the easiest option is to generate the HTML Reporting Dashboard.
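A minimal non-GUI invocation that also builds the dashboard could look like this (file and folder names are examples):
jmeter -n -t test_plan.jmx -l result.jtl -e -o ./dashboard
Here -n is non-GUI mode, -t points to the test plan, -l is the results file, -e generates the HTML report at the end of the run, and -o is the output folder for it.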

Is there a specific way to achieve the below scenarios in JMeter? If yes, how?

Can someone guide me on how I can achieve the below scenarios via JMeter?
1. Check if the system is able to process 100,000 random searches per hour
2. Check if the system can accept 100,000 transactions per minute - this is more like form submissions
First of all you need to implement your test scenarios (search and submitting forms) using HTTP Request samplers
The HTTP Request samplers can be:
Recorded using JMeter's HTTP(S) Test Script Recorder
Recorded using JMeter Chrome Extension
Created manually based on your application/endpoint specifications
Once you have the test project skeleton and have performed the necessary correlation of dynamic values and parameterization of dynamic parameters like usernames, you can start defining the workload; see the Building a Web Test Plan user manual chapter.
Add as many virtual users as needed, run your test and see whether your application can handle the anticipated load.
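As a rough sizing sketch (the 1-second response time below is an assumption, substitute your own measurements):
100,000 searches/hour ≈ 1,667/minute ≈ 27.8/second
100,000 transactions/minute ≈ 1,667/second
required threads ≈ target throughput × (response time + think time), per Little's Law
e.g. 27.8 requests/second × 1 s response time ≈ 28 concurrent users for the search scenario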
Suggested scenario:
Increase the load gradually, i.e. start with 1 user and increment the number of users until you reach the projected amount
Look at the Transactions per Second chart and the Active Threads Over Time chart. On a well-behaved system the throughput (number of requests per second) should increase as the number of users increases.
If you reach the point where you increase the load but the throughput doesn't increase, it means that the system has reached its maximum performance. If that is sufficient, you can report the test as passed; otherwise you will need to investigate the root cause and either report it or fix it if you're capable of doing so.

Validate that a newly created server supports the same load

We are creating a new hosted server for one of our APIs on managed containers (Kubernetes) and we're trying to validate that it can handle at least the same amount of traffic.
We've started with one of the APIs, where we would need to handle at least 140k requests per minute, all endpoints combined.
To verify this, I created a simple JMeter test as follows:
-Test Plan
---Thread Group Endpoint1
-----HTTP Request -> a GET request with query params for /path1
---Thread Group Endpoint2
-----HTTP Request -> a GET request with query params for /path2
For a local test, I used the following setup:
Thread Groups Endpoint1 and Endpoint2 are set to 200 threads (users), ramp-up period of 1s, loop count = forever and duration 60s.
Using a Summary Report listener when running the test gets me a total of ~9300 # Samples.
Using this approach, is it safe to just increase the number of threads (users) for the Thread Groups until I reach the desired 140k requests per minute?
Note: I have only used JMeter a little before, so I'm aware that the entire approach may be wrong; any suggestions and steering towards the right path are more than welcome.
Your approach is viable as long as it represents real-life application usage. If the application has 2 endpoints with evenly distributed load, your setup is just fine. If there are more endpoints and some of them are used more than the others, consider defining the workload correspondingly, either using different Thread Groups or another distribution mechanism such as the Throughput Controller, for example:
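If real traffic were split roughly 70/30 between the two endpoints (the percentages are an assumption), the plan could look like:
-Test Plan
---Thread Group
-----Throughput Controller (Percent Executions = 70.0)
-------HTTP Request -> a GET request for /path1
-----Throughput Controller (Percent Executions = 30.0)
-------HTTP Request -> a GET request for /path2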
Increasing the number of threads is also fine, however consider increasing the load gradually, that is, increase the ramp-up time so your test has:
Arrivals phase
Time to hold the load
Ramp-down phase
This way you will be able to correlate various metrics like increasing response time, throughput, number of errors, etc. with the increasing load. You will also be able to state what the number of threads/requests per second was when the system reached its saturation/breaking point, and whether it recovers when the load goes back down.
Also make sure you're following JMeter Best Practices, as 2,300-2,500 requests per second is not something JMeter can support out of the box; you will need to do some tuning, at least increasing the JVM heap size allocated to JMeter.
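For example, on recent JMeter versions the startup scripts honour a HEAP environment variable (on older versions edit bin/jmeter or bin/jmeter.bat directly); 4g here is just an illustration, size it to the machine running JMeter:
# Linux/macOS
export HEAP="-Xms4g -Xmx4g -XX:MaxMetaspaceSize=256m"
./jmeter -n -t test.jmx -l result.jtl
:: Windows (jmeter.bat)
set HEAP=-Xms4g -Xmx4g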
You may not be able to achieve the desired 140k requests per minute using a single JMeter machine; in that case you'll need the distributed load testing approach.
Refer to: http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.html
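In a distributed run you start jmeter-server on every load generator and point the controller at them; the host addresses below are placeholders:
jmeter -n -t test.jmx -R 192.168.0.11,192.168.0.12 -l result.jtl
(-R lists the remote hosts to use for this run; alternatively -r uses all hosts configured in the remote_hosts property.)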
Also, keeping the ramp-up period at 1 second will lead to a spike and unrealistic load on the system, which will not give proper results unless you've pre-warmed your server; you should gradually increase the load as per the real/estimated traffic pattern.

Difference between JMeter - HTML Dashboard Report (Response Times Over Time) and Response Times Over Time Listener

I have observed a difference between the JMeter HTML Dashboard Report (Response Times Over Time) and the Response Times Over Time listener.
I am using JMeter version 3.3
The HTML Dashboard Report shows a peak response time of 28455 ms at 11:33:00, whereas the Response Times Over Time listener shows a peak response time of 45803.6 ms at 00:00:52.
Both the HTML report and the listener are generated from the same result.csv file.
Can anyone please help me understand this? Please correct me if I am understanding this incorrectly.
It is impossible to say what the cause is without seeing the configuration of the Response Times Over Time listener and the HTML Reporting Dashboard.
Most likely your dashboard is configured in a way that "masks" this spike due to high granularity compared with the relatively short test duration (1.5 minutes).
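For instance, the dashboard granularity is controlled by the jmeter.reportgenerator.overall_granularity property (60000 ms by default); lowering it and regenerating the report from the existing results file should reveal shorter spikes. File and folder names below are examples:
# user.properties
jmeter.reportgenerator.overall_granularity=1000
# regenerate the dashboard from the existing results
jmeter -g result.csv -o ./dashboard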
One more place where the difference could come from is the "Type of Graph" setting of the Response Times Over Time listener.
Similarly configured graph generation should produce similar graphs from the same data (unless there is a bug).
And finally make sure that you use:
Latest version of the Response Times over Time plugin (you can check for/install updates using JMeter Plugins Manager)
Latest version of JMeter (JMeter 4.0 as of now)

Web service testing using JMeter

I am new to JMeter and I have been tasked to do a POC where I need to load test a web service. I learnt the basics like adding the test plan, adding threads, adding the SOAP/RPC Request sampler, and I got the response as well. But I am not sure how to achieve the below scenario using JMeter.
I need 600 users to hit the service at one request/second (this should run for 10 minutes), and the second scenario is about 2000 users hitting the service at 5 requests/second (again this should run for 10 minutes).
Also, would it be possible for JMeter to handle this many threads/users?
Any inputs would be deeply appreciated.
Given that you properly configure JMeter, it shouldn't be a problem to simulate 2k users; actually you may need more, because if your web service response time exceeds 1 second you won't be able to achieve 2k requests per second with 2k users.
Configuring JMeter:
Run test in non-GUI mode
Disable all Listeners (if any)
Increase JVM Heap size
See 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure for detailed explanation and instructions
Simulating 600 / 2000 requests per second:
Set "Loop Count" to Forever or -1 in Thread Group
Tick "Scheduler" box and set desired duration (600 seconds)
Add Constant Throughput Timer and specify the desired throughput in requests per minute
It is recommended to use the HTTP Request sampler for web services testing; you can set the Content-Type and SOAPAction headers using an HTTP Header Manager.
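For reference, the Constant Throughput Timer works in samples per minute, so the targets convert as follows (interpreting the two scenarios as 600 and 2000 requests per second, as above):
600 requests/second × 60 = 36,000 samples/minute
2,000 requests/second × 60 = 120,000 samples/minute
Setting "Calculate Throughput based on" to "all active threads" spreads the target across all running virtual users.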

Resources