Samples failing with increasing number of users and decreasing Ramp-up Period - JMeter

For my JMeter script, when I use
Number of Threads = 10
Ramp-up Period = 40
Loop Count = 1
then 6 out of 40 samples fail.
When I increase the Ramp-up Period to 60, all the samples pass.
For the failed requests, the response code returned is 522:
Sampler result
Thread Name: Liberty Insight 1-4
Sample Start: 2018-02-23 20:43:12 IST
Load time: 1
Connect Time: 0
Latency: 1
Size in bytes: 112
Sent bytes:584
Headers size in bytes: 112
Body size in bytes: 0
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""):
Response code: 522
Response message:
Response headers:
HTTP/1.1 522
Server: nginx
Date: Fri, 23 Feb 2018 15:13:12 GMT
Content-Length: 0
Connection: keep-alive
HTTPSampleResult fields:
ContentType:
DataEncoding: null
I am unable to figure out the reason for this behaviour. Any pointers on what could be causing it?

If you choose Ramp-up Period = 40 with 10 threads, JMeter starts a new thread roughly every 4 seconds, and the resulting request rate is evidently more than your setup can absorb.
When you use Cloudflare, one of its features is to protect the origin server from overload.
Error 522:
There are a few main causes of this:
The origin server was too overloaded to respond.
The origin web server has a firewall that is blocking our requests, or packets are being dropped within the host's network.
(source: cloudflare.com)
If you need to load test your server, use a route that bypasses Cloudflare; consult your IT team about such an option.
If that is not possible, reduce the transactions-per-second rate.
Ensure that the origin server isn't overloaded; if it is, it could be dropping requests.
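As a rough sketch of the arithmetic behind the ramp-up advice (assuming 4 samplers per thread, inferred from the 40 samples produced by 10 threads at Loop Count = 1; the real rate depends on response times too):

```python
def thread_start_interval(threads, ramp_up_seconds):
    """Seconds between consecutive thread starts during ramp-up."""
    return ramp_up_seconds / threads

def avg_requests_per_second(threads, samplers_per_thread, ramp_up_seconds):
    """Average request rate if all samples spread over the ramp-up window."""
    return threads * samplers_per_thread / ramp_up_seconds

print(thread_start_interval(10, 40))        # one new thread every 4 s
print(avg_requests_per_second(10, 4, 40))   # rate with Ramp-up = 40
print(avg_requests_per_second(10, 4, 60))   # lower rate with Ramp-up = 60
```

Raising the ramp-up from 40 to 60 lowers the average arrival rate by a third, which is consistent with the server (or Cloudflare in front of it) no longer rejecting requests.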

Requests and Threads understanding in JMeter logs

I am still confused by some of the JMeter logs displayed here. Can someone please shed some light on this?
Below is a log generated by JMeter for my tests.
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary + 1 in 00:00:02 = 0.5/s Avg: 1631 Min: 1631 Max: 1631 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0
summary + 218 in 00:00:25 = 8.6/s Avg: 816 Min: 141 Max: 1882 Err: 1 (0.46%) Active: 10 Started: 27 Finished: 17
summary = 219 in 00:00:27 = 8.1/s Avg: 820 Min: 141 Max: 1882 Err: 1 (0.46%)
summary + 81 in 00:00:15 = 5.4/s Avg: 998 Min: 201 Max: 2096 Err: 1 (1.23%) Active: 0 Started: 30 Finished: 30
summary = 300 in 00:00:42 = 7.1/s Avg: 868 Min: 141 Max: 2096 Err: 2 (0.67%)
Tidying up ... # Fri Jun 09 04:19:15 IDT 2017 (1496971155116)
Does this log mean that [last step] 300 requests were fired in total, the whole test took 00:00:42, and 7.1 threads/sec or 7.1 requests/sec were fired?
How can I increase the TPS? The same tests were run from a different site against the same server, and they get 132 TPS. Can someone shed some light on this?
Here, the total number of requests is 300 and the throughput is about 7 requests per second. These 300 requests were generated by the number of threads given in your Thread Group configuration. You can also see the number of active threads in the log results; threads become active depending on your ramp-up time.
Ramp-up time controls the speed at which users (threads) arrive at your application.
Check this for an example: How should I calculate Ramp-up time in JMeter
You can give the script enough duration and set the loop count to forever, so that all of the threads keep hitting those requests on your application server until the test finishes.
Once all the threads are active, they will all be hitting the server concurrently.
To increase the TPS, you have to increase the number of threads, because those threads are what drive requests to the server.
It also depends on the response time of your requests.
Suppose,
If you have 500 virtual users and application response time is 1 second - you will have 500 RPS
If you have 500 virtual users and application response time is 2 seconds - you will have 250 RPS
If you have 500 virtual users and application response time is 500 ms - you will have 1000 RPS.
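The pattern above is Little's Law (throughput = concurrency / response time, assuming zero think time and no other bottleneck); a minimal sketch of the arithmetic:

```python
def requests_per_second(virtual_users, avg_response_time_s):
    """Little's Law: throughput = concurrency / response time.
    Assumes zero think time and that the server is the only bottleneck."""
    return virtual_users / avg_response_time_s

print(requests_per_second(500, 1.0))   # 500 users, 1 s response time
print(requests_per_second(500, 2.0))   # 500 users, 2 s response time
print(requests_per_second(500, 0.5))   # 500 users, 500 ms response time
```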
First of all, a little of theory:
You have Sampler(s) which should mimic real user actions
You have Threads (virtual users) defined under Thread Group which mimic real users
JMeter starts threads which execute samplers as fast as they can and generate certain amount of requests per second. This "request per second" value depends on 2 factors:
number of virtual users
your application response time
The JMeter Summarizer doesn't tell the full story. I would recommend generating the HTML Reporting Dashboard from the .jtl results file; it provides much more comprehensive load test result data that is easier to analyze via tables and charts, and it can be done as simply as:
jmeter -g /path/to/testresult.jtl -o /path/to/dashboard/output/folder
Looking at the current results, you achieved a maximum throughput of 7.1 requests per second with an average response time of 868 milliseconds.
So in order to get more requests per second you need to increase the number of virtual users. If you increase the number of virtual users and the requests-per-second figure does not increase, it means you have identified the so-called saturation point and your application is not capable of handling more load.
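A toy sketch of spotting that saturation point from measured (users, throughput) pairs; the numbers below are invented purely for illustration:

```python
def saturation_point(measurements, tolerance=0.05):
    """Return the (users, rps) pair after which adding users no longer
    increases throughput by more than `tolerance` (fractional gain).
    `measurements` is a list of (virtual_users, requests_per_second)
    pairs sorted by ascending user count."""
    for (u1, r1), (u2, r2) in zip(measurements, measurements[1:]):
        if r2 <= r1 * (1 + tolerance):
            return (u1, r1)
    return measurements[-1]  # no flattening observed yet

# Hypothetical ramp results: throughput flattens around 30 users.
data = [(10, 7.1), (20, 13.8), (30, 19.5), (40, 19.9), (50, 19.7)]
print(saturation_point(data))
```

In practice you would read these pairs off successive dashboard runs with increasing thread counts rather than hard-coding them.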

Response code: Non HTTP response code: java.net.ConnectException Response message: Non HTTP response message: Connection timed out: connect

I am executing a script in JMeter for load testing. I am getting errors partway through the run. For example, with a load of 500 users, threads run successfully up to around 250 users, then a connection timed out error occurs; after that, some threads succeed again, then more errors.
The sampler result is as follows:
Thread Name: Thread Group 1-1274
Sample Start: 2016-09-15 15:02:13 IST
Load time: 21004
Connect Time: 21004
Latency: 0
Size in bytes: 2206
Headers size in bytes: 0
Body size in bytes: 2206
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""): text
Response code: Non HTTP response code: java.net.ConnectException
Response message: Non HTTP response message: Connection timed out: connect
Response headers:
HTTPSampleResult fields:
ContentType:
DataEncoding: null
I need to push the server to its breaking point.
Can anyone help me with this?
The issue might be due to the server hanging, so check the health of the server components.
Otherwise you could be exhausting all ephemeral ports, which you can extend by tuning the TCP stack.
As per Kiril S.'s answer, on Windows:
Follow this guide to check if ports might be an issue:
https://msdn.microsoft.com/en-us/library/aa560610(v=bts.20).aspx
Check 2 parameters:
MaxUserPort: increase it to the maximum of 65534.
TcpTimedWaitDelay, which defines how long ports stay in the TIME_WAIT state after use: change that value to 30 (seconds).
On Linux:
Set in sysctl.conf:
net.ipv4.ip_local_port_range=1025 65000
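A quick sanity check of the ephemeral-port budget: the helper below just does the arithmetic on a range string in the same format as the sysctl value (the TIME_WAIT division in the comment is a rough upper bound, not an exact model):

```python
def ephemeral_port_budget(range_line):
    """Parse a net.ipv4.ip_local_port_range value ("low high")
    and return the number of usable ephemeral ports."""
    low, high = map(int, range_line.split())
    return high - low + 1

# Default on many distros:
print(ephemeral_port_budget("32768 60999"))
# After widening as suggested above:
print(ephemeral_port_budget("1025 65000"))
# With each closed connection holding its port in TIME_WAIT for ~30 s,
# the sustainable new-connection rate is roughly budget / 30 per second.
```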

Internal Server Error in Jmeter Script

I am trying to create JMeter scripts for our PEGA application, but every time the replayed scripts fail due to the following issue.
Can anyone help?
Thread Name: Thread Group 1-1
Sample Start: 2016-08-12 08:38:58 IST
Load time: 604
Connect Time: 0
Latency: 603
Size in bytes: 4267
Headers size in bytes: 1774
Body size in bytes: 2493
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""): text
Response code: 500
Response message: Internal Server Error
Response headers:
HTTP/1.1 500 Internal Server Error
Server: Apache-Coyote/1.1
Set-Cookie: Pega-RULES=H79D7E64B2C531F36AE825FC87DA355F5; Version=1; Comment="PegaRULES session tracking"; Path=/prweb
You are most probably missing either a header or a cookie.
I suggest you try to record the traffic using the JMeter Test Script Recorder.
See:
http://jmeter.apache.org/usermanual/component_reference.html#HTTP(S)_Test_Script_Recorder
http://jmeter.apache.org/usermanual/jmeter_proxy_step_by_step.pdf
This way you won't miss any header or cookie.
Then see what needs to be variabilized using Post-Processors and variables (${varName}).

Jmeter Cache Manager - null exception

I have created a web test simulating browser behavior, which I will use for a load test.
Version: JMeter 2.12
My Test plan
HTTP Request Defaults
Ultimate Thread Group
-HTTP Cache Manager
-HTTP Cookie Manager
-Once Only Controller
Login function
-Random Controller
Random Http requests
Response Assertions
If I leave "Use cache control/Expires header when processing GET requests" unchecked, there are no problems.
When I tick "Use cache control...", I get a lot of errors.
Sampler request:
Thread Name: jp#gc - Ultimate Thread Group 1-5
Sample Start: 1970-01-01 01:00:00 CET
Load time: 0
Latency: 0
Size in bytes: 418
Headers size in bytes: 0
Body size in bytes: 418
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: java.lang.NullPointerException
Response message: Non HTTP response message: null
Response headers:
HTTPSampleResult fields:
ContentType:
DataEncoding: null
Request
Null
Is this the normal behavior? Are the pages not requested since they are in the cache? Should I then remove my assertions? (I use a Response Assertion with "contains text".) What assertion could I use?
This is a known issue in 2.12, that was fixed in 2.13. See: https://bz.apache.org/bugzilla/show_bug.cgi?id=57579
If you still want to use 2.12, as a workaround you can add this line to your jmeter.properties file:
cache_manager.cached_resource_mode=RETURN_200_CACHE

What's a great way to benchmark Apache locally on Linux?

I've been developing a web site that uses Django and MySQL; what I want to know is how many HTTP requests my server can handle when serving certain pages.
I have been using siege but I'm not sure if that's a good benchmarking tool.
Use ab, the Apache HTTP server benchmarking tool. It has many options. An example of a run with ten concurrent requests:
% ab -n 20 -c 10 http://www.bortzmeyer.org/
...
Benchmarking www.bortzmeyer.org (be patient).....done
Server Software: Apache/2.2.9
Server Hostname: www.bortzmeyer.org
Server Port: 80
Document Path: /
Document Length: 208025 bytes
Concurrency Level: 10
Time taken for tests: 9.535 seconds
Complete requests: 20
Failed requests: 0
Write errors: 0
Total transferred: 4557691 bytes
HTML transferred: 4551113 bytes
Requests per second: 2.10 [#/sec] (mean)
Time per request: 4767.540 [ms] (mean)
Time per request: 476.754 [ms] (mean, across all concurrent requests)
Transfer rate: 466.79 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 22 107 254.6 24 854
Processing: 996 3301 1837.9 3236 8139
Waiting: 23 25 1.3 25 27
Total: 1018 3408 1795.9 3269 8164
Percentage of the requests served within a certain time (ms)
50% 3269
66% 4219
...
(In that case, network latency was the main slowness factor.)
ab reports itself in the User-Agent field so, in the log of the HTTP server, you'll see something like:
2001:660:3003:8::4:69 - - [28/Jul/2009:12:22:45 +0200] "GET / HTTP/1.0" 200 208025 "-" "ApacheBench/2.3" www.bortzmeyer.org
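If you want to track ab results across runs, the report's plain-text format is easy to scrape; this sketch assumes the field names shown in the output above (ab 2.3):

```python
import re

def parse_ab_summary(text):
    """Pull a few headline numbers out of ab's plain-text report."""
    patterns = {
        "complete_requests": r"Complete requests:\s+(\d+)",
        "failed_requests": r"Failed requests:\s+(\d+)",
        "requests_per_second": r"Requests per second:\s+([\d.]+)",
    }
    return {k: float(re.search(p, text).group(1)) for k, p in patterns.items()}

report = """Complete requests:      20
Failed requests:        0
Requests per second:    2.10 [#/sec] (mean)
"""
print(parse_ab_summary(report))
```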
ab is a widely used benchmarking tool that comes with Apache httpd.
Grinder is pretty good. It lets you simulate coordinated load from several client machines, which is more meaningful than from a single machine.
There's also JMeter.
I've used httperf and it's quite easy to use. There's a Peepcode screencast on how to use it as well.