JMeter maintain constant RPS rate

I've started using JMeter for performance load testing of different scenarios with my microservices. I have been able to use the Constant Throughput Timer to send requests and measure the throughput and RPS. However, with the Constant Throughput Timer, JMeter adjusts the number of requests depending on how fast the web service responds.
Is there a way to achieve a constant RPS throughout the duration of the test? Basically, sending 40 requests per second at a constant rate for 10 minutes. I'm aware that this may increase the error rate, but it would help us test how well our microservices perform under different scenarios.
My current JMX plan is as follows:
<hashTree>
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="something" enabled="true">
<stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
<elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
<boolProp name="LoopController.continue_forever">false</boolProp>
<intProp name="LoopController.loops">-1</intProp>
</elementProp>
<stringProp name="ThreadGroup.num_threads">${threadnum}</stringProp>
<stringProp name="ThreadGroup.ramp_time">1</stringProp>
<longProp name="ThreadGroup.start_time">1381113647000</longProp>
<longProp name="ThreadGroup.end_time">1381113647000</longProp>
<boolProp name="ThreadGroup.scheduler">true</boolProp>
<stringProp name="ThreadGroup.duration">${duration}</stringProp>
<stringProp name="ThreadGroup.delay">0</stringProp>
<boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
</ThreadGroup>
<hashTree>
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="Constant Throughput Timer" enabled="true">
<intProp name="calcMode">2</intProp>
<stringProp name="throughput">${throughput}</stringProp>
</ConstantThroughputTimer>
<hashTree/>
</hashTree>
</hashTree>

JMeter's threading model assumes waiting for the response before sending the next request, so if your application cannot handle more than 30 requests per second, there is no way to get 40 requests per second using HTTP Request samplers.
Also, the Constant Throughput Timer is only capable of slowing JMeter down to limit its request rate to the desired throughput, so make sure to supply a sufficient number of threads via the ${threadnum} variable. Otherwise, consider switching to the Throughput Shaping Timer and Concurrency Thread Group combination: if you connect them via the Feedback Function, JMeter will be able to start more threads when the current amount is not enough to reach or maintain the target throughput, as in the sketch below.
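A minimal sketch of that feedback wiring, assuming the Throughput Shaping Timer element is named tst-name (the name and the thread bounds here are illustrative placeholders, not values from your plan): set the Concurrency Thread Group's Target Concurrency field to the Feedback Function:
${__tstFeedback(tst-name,1,100,10)}
The arguments are the Throughput Shaping Timer's name, the minimum and maximum concurrency JMeter may use, and the number of spare threads to keep running. Both elements come from the JMeter Plugins project, so they need to be installed (e.g. via the Plugins Manager) first.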

Related

NoHttpResponseException 443 failed to respond error on different computer/network

I've got a strange problem. I run my JMeter test on one computer/network and my test runs perfectly all the time. Single thread, multithread, it runs with no errors. I then run the same test on a different computer/network and I'm getting the below error. The error happens at random steps and not the same step each time.
org.apache.http.NoHttpResponseException: home-env-c.t1cloud.com:443 failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:141)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.executeRequest(HTTPHC4Impl.java:930)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl.sample(HTTPHC4Impl.java:641)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:66)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1281)
at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1270)
at org.apache.jmeter.threads.JMeterThread.doSampling(JMeterThread.java:630)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:558)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
at java.lang.Thread.run(Unknown Source)
I've searched for the error and added these two lines to user.properties but it didn't work:
httpsampler.ignore_failed_embedded_resources=true
http.connection.stalecheck$Boolean=true
If the "other" computer has different (better) hardware specifications it might be the case you're delivering more load than your server can handle so it looks like to be a bottleneck
Inspect how many requests per second you produce in both cases, it can be done by looking at Server Hits per Seconds or Transactions per Second charts or by simply looking into how many requests in the given period JMeter was able to make.
If it appears that the "other" computer produces more load than you expect/need you can slow it down using JMeter Timers in general and Constant Throughput Timer in particular.
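As a minimal sketch (the 600 samples per minute, i.e. 10 requests per second, is an illustrative figure, not one taken from your test), a Constant Throughput Timer that throttles all active threads in the thread group could look like:
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="Constant Throughput Timer" enabled="true">
<intProp name="calcMode">2</intProp>
<stringProp name="throughput">600</stringProp>
</ConstantThroughputTimer>
calcMode 2 corresponds to "all active threads in current thread group", so the 600 samples/minute budget is shared across the group rather than applied per thread.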

WRK benchmark: Please explain results

I'm trying to benchmark blocking vs. non-blocking IO.
As the blocking stack, I use Spring Boot.
As the non-blocking one, the Play Framework.
I call an endpoint which makes 4 remote calls (sequentially).
Here are the results:
Spring boot
Running 5m test @ http://localhost:8080/remote-multiple
4 threads and 20000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 713.90ms 429.81ms 2.00s 82.16%
Req/Sec 33.04 22.55 340.00 68.84%
9602 requests in 5.00m, 201.85MB read
Socket errors: connect 15145, read 21942, write 0, timeout 2401
Requests/sec: 32.00
Transfer/sec: 688.83KB
Play framework
Running 5m test @ http://localhost:9000/remote-multiple
4 threads and 20000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.40s 395.00ms 2.00s 54.73%
Req/Sec 37.97 21.21 230.00 70.71%
39792 requests in 5.00m, 846.41MB read
Socket errors: connect 15145, read 36185, write 60, timeout 35944
Requests/sec: 132.61
Transfer/sec: 2.82MB
Though Play shows a higher Requests/sec, it also has more errors, more timeouts, and higher latency.
Can anybody please explain what all those parameters in the results mean?
For example, is Requests/sec the number of successful requests per second? etc.
P.S.:
I ran this benchmark on a 2013 MBP, Intel Core i7, 2.3 GHz, 16 GB RAM.
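For reference, wrk output of this shape comes from an invocation along the following lines; this command is reconstructed from the 4 threads, 20000 connections, and 5-minute duration shown above, as the actual command line was not posted:
wrk -t4 -c20000 -d5m http://localhost:8080/remote-multiple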
If you post benchmarks, start with a link to the actual benchmark code; the numbers have no value without it. Second, in general, running the load generator and the system under test on the same machine is considered bad practice.

Aggregating response from asynchronous publish subscribe channel

I need to call 4 web services asynchronously and aggregate the results into a single message. If one of the services takes longer to respond than the specified timeout (3 s), then the responses which have already arrived should be aggregated and the late-arriving messages should be discarded. For this I used the below snippet in the Spring configuration file:
<int:aggregator input-channel="aggregatorInputChannel" group-timeout="3000" send-partial-result-on-expiry="true" expire-groups-upon-completion="true" output-channel="aggregatorOutputChannel" ref="responseAggregator" method="populateResponseHeader" >
</int:aggregator>
When one of the web service calls (let's say service4) takes longer than the timeout value, the thread for service4 keeps running in the background and the server sends a 202 response. Any suggestions on how I should modify my aggregator to ignore the messages which arrive later than the timeout and still get the response?
First of all, you should take a look at the Scatter-Gather pattern.
It looks fully sufficient for your use case.
You should also use expire-groups-upon-timeout="false":
<xsd:attribute name="expire-groups-upon-timeout">
<xsd:annotation>
<xsd:documentation>
Boolean flag specifying, if a group is completed due to timeout (reaper or
'group-timeout(-expression)'), whether the group should be removed.
When true, late arriving messages will form a new group. When false, they
will be discarded. Default is 'true' for an aggregator and 'false' for a
resequencer.
</xsd:documentation>
</xsd:annotation>
<xsd:simpleType>
<xsd:union memberTypes="xsd:boolean xsd:string" />
</xsd:simpleType>
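Applied to the configuration from the question, that gives the following sketch; only the expire-groups-upon-timeout attribute is new, everything else is taken verbatim from the snippet above:
<int:aggregator input-channel="aggregatorInputChannel" group-timeout="3000" send-partial-result-on-expiry="true" expire-groups-upon-timeout="false" expire-groups-upon-completion="true" output-channel="aggregatorOutputChannel" ref="responseAggregator" method="populateResponseHeader"/>
With expire-groups-upon-timeout="false", a group that was completed by the group-timeout is kept, so messages arriving after the timeout are discarded instead of forming a new group.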

JMeter - Graphite Backend Listener rootMetricsPrefix taking previously generated value

Background:
I am using Graphite to store the data generated during performance tests, and ideally we would like to look at historical graphs as well. Hence, I am creating a rootMetricsPrefix folder name dynamically in a setUp Thread Group and assigning it to a property. The Backend Listener is in a different thread group, and its configuration uses this folder name as the rootMetricsPrefix:
<elementProp name="rootMetricsPrefix" elementType="Argument">
<stringProp name="Argument.name">rootMetricsPrefix</stringProp>
<stringProp name="Argument.value">${__property(graphiteFolderName)}</stringProp>
<stringProp name="Argument.metadata">=</stringProp>
</elementProp>
Symptoms:
The first time the JMeter script is run (after opening JMeter), no folder is generated in the Graphite DB. From the second run onwards, results are written to the folder which was defined in the previous run. For example:
Run 1: DynamicResultsFolder_1 (No Results written)
Run 2: DynamicResultsFolder_2 (Results written to DynamicResultsFolder_1)
Run 3: DynamicResultsFolder_3 (Results written to DynamicResultsFolder_2)
When I print the folder name to the log in the main thread group, the expected folder name is printed out. The issue seems to be in the way the ${__property(graphiteFolderName)} is evaluated in the BackendListener configuration.
I have also tried assigning the property to a local variable and using the local variable in the BackendListener config, but that does not write any results to the DB.
Any ideas as to what is going on here or if I am missing something obvious?
This cannot work because Backend Listener parameters are passed to components before the setUp Thread Group is executed.
So what is happening is: the first time, the property is not configured and the listener fails; then the setUp Thread Group runs and fills in the property for the next run.
That is your issue.
You could instead try to generate the name using the __BeanShell function directly in the rootMetricsPrefix parameter, as sketched below.
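A minimal sketch, assuming the folder name is simply a timestamped prefix (the DynamicResultsFolder_ literal and the date pattern are illustrative, not taken from your plan): put the expression directly into the Backend Listener's rootMetricsPrefix parameter, so it is evaluated when the listener starts rather than read from a property set by the setUp Thread Group:
DynamicResultsFolder_${__BeanShell(new java.text.SimpleDateFormat("yyyyMMddHHmmss").format(new java.util.Date()))}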

Testing with JMeter: how to run N requests per second

I need to test if our system can perform N requests per second.
Technically, it's 2 requests to one API, 2 requests to another, and 6 requests to a third one.
But the important thing is that they should happen simultaneously - so 10 requests per second.
So, in JMeter I've created three Thread Groups: the first defines number of threads = 1 and ramp-up time = 0.
The second thread group is the same, and the third defines number of threads = 6 and ramp-up time = 0.
But that doesn't really guarantee it's going to run them every second.
How do I emulate that? And how do I see the results - whether it was able to keep up or not?
Thanks!
You could use ConstantThroughputTimer.
Quote from JMeter help files below:
18.6.4 Constant Throughput Timer
This timer introduces variable pauses, calculated to keep the total throughput (in terms of samples per minute) as close as possible to a given figure. Of course the throughput will be lower if the server is not capable of handling it, or if other timers or time-consuming test elements prevent it.
N.B. although the Timer is called the Constant Throughput timer, the throughput value does not need to be constant. It can be defined in terms of a variable or function call, and the value can be changed during a test.
For example, I've used it to generate 40 requests per second (the timer's throughput is expressed in samples per minute, so 40 × 60 = 2400):
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="Constant Throughput Timer" enabled="true">
<stringProp name="calcMode">all active threads in current thread group</stringProp>
<doubleProp>
<name>throughput</name>
<value>2400.0</value>
<savedValue>0.0</savedValue>
</doubleProp>
</ConstantThroughputTimer>
And that's the summary:
Created the tree successfully using performance/search-performance.jmx
Starting the test @ Tue Mar 15 16:28:39 CET 2011 (1300202919244)
Waiting for possible shutdown message on port 4445
Generate Summary Results + 3247 in 80,3s = 40,4/s Avg: 18 Min: 0 Max: 1328 Err: 108 (3,33%)
Generate Summary Results + 7199 in 180,0s = 40,0/s Avg: 15 Min: 1 Max: 2071 Err: 378 (5,25%)
Generate Summary Results = 10446 in 260,3s = 40,1/s Avg: 16 Min: 0 Max: 2071 Err: 486 (4,65%)
Generate Summary Results + 7200 in 180,0s = 40,0/s Avg: 14 Min: 0 Max: 152 Err: 399 (5,54%)
Generate Summary Results = 17646 in 440,4s = 40,1/s Avg: 15 Min: 0 Max: 2071 Err: 885 (5,02%)
Generate Summary Results + 7199 in 180,0s = 40,0/s Avg: 14 Min: 0 Max: 1797 Err: 436 (6,06%)
Generate Summary Results = 24845 in 620,4s = 40,0/s Avg: 15 Min: 0 Max: 2071 Err: 1321 (5,32%)
But I ran this test inside my own network.
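One caveat worth spelling out: as noted in the first answer above, the Constant Throughput Timer can only slow threads down, so the Thread Group must supply enough of them. A rough lower bound is the target rate multiplied by the worst-case response time; with 40 requests per second and responses of up to roughly 2 s (the Max of 2071 ms above), that is 40 × 2 = 80 threads.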
As with any network test, there are always going to be problems, especially with latency: even if you could send exactly 6 requests per second, they're going to be sent sequentially (that's just how packets get sent) and may not all hit the server within that second, plus there is processing time.
Generally, when performance metrics specify x per second, that is measured over a period of time. Your API may even have a buffer, so you could technically send 6 per second but process only 5 per second with a buffer of 20: the backlog grows by 1 per second, so the buffer holds for 20 seconds of traffic, by which point you'd have sent 120 requests, which take 120/5 = 24 seconds to process. Any more than that would overflow the buffer. So just sending exactly 6 in a single second is an insufficient test.
In the thread group, you're right to set the number of threads (users) to 6. Then run it looping forever (tick the box or put the samplers in a While Controller) and add listeners such as Aggregate Report and View Results Tree. The results tree lets you check the right requests are being sent and responded to (assuming you validate the responses), and in the Aggregate Report you can see how much of each activity is happening per hour (divide a per-hour figure by 3600 for the per-second rate; because of this inaccuracy it's best to run the test for a good length of time).
The initial load test can now be run, and as a more accurate test you can leave it running for longer (a soak test) to see if any other problems surface: buffer overflows, memory leaks, or other unexpected events.
Use the Throughput Shaping Timer
I had a similar problem, and here are two solutions I found:
Solution 1:
You can use the Stepping Thread Group (it allows you to increase the thread number in stages over set periods of time) with a Constant Throughput Timer in it.
The Throughput Timer lets you set the number of samples a thread can send per minute (e.g. if you set it to 1, the thread will only send one request per minute); for the original goal of 10 requests per second, that is 10 × 60 = 600 samples per minute spread over all active threads. You can apply the Throughput Timer to all threads in your Thread Group, or give each thread its own timer with its own settings.
Read more about the Throughput Timer here: https://www.blazemeter.com/blog/how-use-jmeters-throughput-constant-timer
Solution 2:
Use a setUp Thread Group. You can calculate the thread number and ramp-up time to get the threads-per-second rate desired (for example, 100 threads with a 10-second ramp-up start 10 threads per second).
You can also use the Throughput Shaping Timer's Schedule Feedback Function, which additionally requires the Concurrency Thread Group.
The same can also be done from the UI by configuring the "ConstantThroughputTimer" as suggested above: right-click the Thread Group, choose Add > Timer, and then select "Constant Throughput Timer".
