We had a problem with our WCF endpoint performance, so we ran a stress test to gather more information.
Our environment:
LOAD BALANCER;
4 HOSTS (4 vCPUs and 4 GB RAM each);
Application Configuration
CPU Limit Application Pool: 0 (default – NO LIMIT)
Threads Per Processor Limit: 25 (default)
ASP Queue Length property: 3000
MaxPoolThreads registry entry: Not Set
Connection Timeout: 120 seconds (2 minutes)
STRESS TEST DATA
- 5000 Requests in 10 minutes
STRESS TEST RESULT
The test results were observed using Dynatrace.
- Initially we had a response time of 24.6 (which is far too long)
- As more requests were received, the response time kept increasing, reaching up to 55.3 minutes.
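For context, a rough plain-Java sketch of the arrival-rate arithmetic (the completion rate used here is an assumption for illustration, not a measured value): 5000 requests over 600 seconds is roughly 8.3 requests per second arriving, and if the hosts complete fewer than that per second, the backlog - and therefore the response time - keeps growing for the whole run.

// Rough sketch of why the response time keeps growing during the test.
// assumedCompletionRate is a hypothetical figure, not a measurement.
public class QueueGrowthSketch {
    public static void main(String[] args) {
        double totalRequests = 5000;
        double testDurationSec = 10 * 60;                      // 10 minutes
        double arrivalRate = totalRequests / testDurationSec;  // ~8.33 req/s
        double assumedCompletionRate = 4.0;                    // hypothetical completions/s across all hosts
        double backlogGrowthPerSec = arrivalRate - assumedCompletionRate;
        System.out.printf("arrival=%.2f req/s, backlog grows by %.2f req/s%n",
                arrivalRate, backlogGrowthPerSec);
    }
}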
WCF CONFIGURATIONS
Result Object:
WebConfig Segments configurations:
Related
I am doing a load test on my system using JMeter. The requirement is that I need to generate 150 requests per minute, constantly, for a duration of 20 minutes.
I tried the approaches below.
First, I tried this configuration:
No of threads - 3000 [150 req/min * 20 mins]
rampup period - 1200sec [20mins * 60]
But here the test stopped after creating 2004 threads, giving this error:
Failed to start the native thread for java.lang.Thread “Thread Group 1-2004”
Uncaught Exception java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached in thread Thread[#51,StandardJMeterEngine,6,main]. See log file for details
Then I used the Concurrency Thread Group with the details below:
Target concurrency - 150
ramp up time - 1 min
hold target rate time - 20 mins
but here the number of samples collected is more than 3000 [150 req/min * 20 min], which I feel is not correct
Is it possible to create the exact load according to my requirement in JMeter (150 req/min for a duration of 20 mins), or should I explore other tools like Locust?
I also tried with precise throughput timers (screenshots attached).
Your understanding of the relationship between users and hits per second is not correct.
When a JMeter thread (virtual user) is started, it begins executing Samplers as fast as it can. The throughput (number of requests per second) mainly depends on the response time.
For example:
you have 1 user and 1 second response time - the load will be 1 request per second
you have 1 user and 2 seconds response time - the load will be 0.5 requests per second
you have 2 users and 2 seconds response time - the load will be 1 request per second
you have 4 users and 2 seconds response time - the load will be 2 requests per second
etc.
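A minimal plain-Java sketch of that relationship (throughput = users / response time); the numbers simply mirror the examples above:

// throughput = users / responseTime, mirroring the examples above
public class ThroughputModel {
    static double requestsPerSecond(int users, double responseTimeSec) {
        return users / responseTimeSec;
    }
    public static void main(String[] args) {
        System.out.println(requestsPerSecond(1, 1.0)); // 1.0 req/s
        System.out.println(requestsPerSecond(1, 2.0)); // 0.5 req/s
        System.out.println(requestsPerSecond(2, 2.0)); // 1.0 req/s
        System.out.println(requestsPerSecond(4, 2.0)); // 2.0 req/s
    }
}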
If you want to slow JMeter down to the desired number of requests per minute, it can be done using Timers.
For example:
- Constant Throughput Timer
- Precise Throughput Timer
- Throughput Shaping Timer
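For the 150 requests/minute requirement above, the timer target would be 150 samples per minute, and the thread count only has to be large enough to sustain that rate given the response time (Little's Law). A small plain-Java sketch of that arithmetic - the 2-second response time is an assumption for illustration, not a measured value:

// Arithmetic for pacing 150 req/min with a throughput timer.
// avgResponseTimeSec is an assumed value; replace it with a measured one.
public class PacingMath {
    public static void main(String[] args) {
        double targetPerMinute = 150.0;
        double avgResponseTimeSec = 2.0;                                        // assumption
        double targetPerSecond = targetPerMinute / 60.0;                        // 2.5 req/s
        double threadsNeeded = Math.ceil(targetPerSecond * avgResponseTimeSec); // Little's Law: ~5 threads
        double expectedSamples = targetPerMinute * 20;                          // 3000 samples in 20 minutes
        System.out.printf("target=%.2f req/s, threads needed=%.0f, expected samples=%.0f%n",
                targetPerSecond, threadsNeeded, expectedSamples);
    }
}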
I am getting a Socket Exception while running a load test on a self-provisioned test rig.
I trigger those load tests on the agent machine (self-provisioned test rig) from my local machine.
Note: for the first 2 to 3 minutes test iterations pass; after that we get the Socket Exception.
Below is the error message :
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Below are the stack trace details :
at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
Run time - 20 min
Sample rate - 10 sec
Warm up duration - 10 sec
Number of agents used - 2
Load pattern:
Initial load - 10 users
Max user count - 300
Step duration - 10 sec
Step user count - 10
However, even after changing the above values I am still getting the exception in the same way.
I am using Visual Studio 2015 Enterprise.
The question states: start with 10 users, every 10 seconds add 10 users to a maximum of 300. Thus after 29 increments there will be 300 users and that will take 29*10 seconds which is 4m50s. The test will thus (attempt to) run with the maximum load of 300 users for the remaining 15m10s.
Given that all tests pass for the first 2 or 3 minutes, plus the error message, this suggests that you are overloading some part of the network. It might be the agents, it might be the servers, or it might be the connections between them. Some network components have a maximum number of active connections, and 300 users might be too many.
Increasing the load so rapidly means you do not clearly know what the limiting value is. The sampling rate (every 10 seconds) seems high. At each sampling interval a lot of data is transferred (i.e. the sample data) and that can swamp parts of the network. You should look at the network counters for the agents and controller, and also for the servers if available.
I recommend changing the load test steps to add 10 users every 30 seconds, so it takes about 15 minutes to reach 300 users. It may also be worth reducing the sample rate to every 20 seconds.
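A small plain-Java sketch of the step-load arithmetic above (the values are taken from the question and the suggested change):

// Time to reach the maximum user count in a step load pattern.
public class StepLoadRamp {
    static int secondsToReachMax(int initialUsers, int maxUsers, int stepUsers, int stepSeconds) {
        int increments = (maxUsers - initialUsers) / stepUsers; // e.g. (300 - 10) / 10 = 29
        return increments * stepSeconds;
    }
    public static void main(String[] args) {
        System.out.println(secondsToReachMax(10, 300, 10, 10)); // 290 s, i.e. 4m50s (original pattern)
        System.out.println(secondsToReachMax(10, 300, 10, 30)); // 870 s, i.e. ~14.5 min (suggested pattern)
    }
}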
Can somebody please advise how to set the JMeter thread properties correctly with the following requirements:
55 users a minute ramp-up over an hour, with the test running for 4 hours.
If you need 55 users added each minute during 1 hour, your setup should look like:
Number of Threads: 3300 (55 users x 60 minutes)
Ramp-up: 3600 (1 hour == 3600 seconds)
Loop Count: Forever
Scheduler -> Duration: 14400 (3600 seconds in hour x 4)
Be aware that 3300 concurrent threads is quite a high load; make sure that you're following the recommendations from the JMeter Performance and Tuning Tips guide.
If you aren't able to create such a load from a single machine, consider Distributed Testing, where one JMeter master machine orchestrates several slaves, for instance 3 slaves with 1100 virtual users each.
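The arithmetic behind those settings, as a plain-Java sketch (the 3-slave split is just the example from the previous sentence, not a requirement):

// Thread Group arithmetic for "55 users added per minute over 1 hour, run for 4 hours".
public class RampMath {
    public static void main(String[] args) {
        int usersPerMinute = 55;
        int rampMinutes = 60;
        int threads = usersPerMinute * rampMinutes;   // 3300
        int rampUpSeconds = rampMinutes * 60;         // 3600
        int durationSeconds = 4 * 60 * 60;            // 14400
        int slaves = 3;                               // example distributed split
        System.out.printf("threads=%d, ramp-up=%ds (~%.2f threads/s), duration=%ds, per slave=%d%n",
                threads, rampUpSeconds, (double) threads / rampUpSeconds, durationSeconds, threads / slaves);
    }
}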
So you are thinking about something like the setup I show on the screenshot below, aren't you? It is set up with the jp@gc - Stepping Thread Group that comes with the standard set of JMeter Plugins. You have there 55 users that will ramp up to that value over 3600 seconds and will hold that load for the next 3 hours (10800 sec).
If I run distributed testing in GUI mode with 4 servers (1 Master + 3 Slaves) and set the following values in my Test Plan -
Number of Threads = 12000
Ramp Up time = 1000
Loop count = 1
After completion of the test I get 36000 samples (which is OK, as 12000 * 3 = 36000), but my question is about the ramp-up time - will it be 3000 for the 36000 users, or will it remain 1000?
Thanks in Advance
It will be 1000, the same for all clients.
Note that such a load test profile seems strange, as running only 1 request is not what happens with usual usage; what are you trying to simulate?
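To illustrate the answer, a plain-Java sketch of how the numbers combine in distributed mode: each slave runs the full Thread Group independently, so the ramp-up stays 1000 seconds per slave while the thread (and, with Loop Count = 1, sample) totals are multiplied by the number of slaves:

// Distributed-mode arithmetic: ramp-up is per slave, totals are multiplied.
public class DistributedRamp {
    public static void main(String[] args) {
        int threadsPerSlave = 12000;
        int rampUpSeconds = 1000;
        int slaves = 3;
        double startRatePerSlave = (double) threadsPerSlave / rampUpSeconds;  // 12 threads/s per slave
        System.out.printf("total threads=%d, ramp-up per slave=%ds, aggregate start rate=%.0f threads/s%n",
                threadsPerSlave * slaves, rampUpSeconds, startRatePerSlave * slaves);
    }
}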
I'm somewhat of a newbie to web development on the JVM stack, but a future project will specifically require some JVM-based web engine, so I started looking for some ground to get things done quickly and turned to try Grails. Things looked good from the book, but having been impressed by the really long startup time (grails run-app), I decided to test how this works under load. Here it is:
test app: follow the few instructions here to make it from the ground up (takes 2 mins, assuming you already have Grails and Tomcat installed):
_http://grails.org/Quick+Start
test case (with Apache benchmark - comes with Apache httpd - _http://httpd.apache.org):
ab.exe -n 500 -c 10 _http://localhost:8080/my-project/book/create
(Note: this just displays 2 input fields within a styled container)
hardware: Intel i5 650 (4Core*3.2GHz) 8GB Ram & Win Server 2003 x64
The result is ..
Grails: 32 Req/Sec
Total transferred: 1380500 bytes
HTML transferred: 1297500 bytes
Requests per second: 32.45 [#/sec] (mean)
Time per request: 308.129 [ms] (mean)
Time per request: 30.813 [ms] (mean, across all concurrent requests)
Transfer rate: 87.51 [Kbytes/sec] received
(Only 32 req/sec with 100% CPU saturation - this is way below my expectations for such hardware)
... Next, I tried to compare it with, for example, a similar dummy JSF application (I took one here: _http://www.ibm.com/developerworks/library/j-jsf2/ - look for "Source code with JAR files"; there is \jsf-example2\target\jsf-example2-1.0.war inside),
test case: ab.exe -n 500 -c 10 _http://localhost:8080/jsf/backend/listing.jsp
The result is ..
JSF: 400 Req/Sec
Total transferred: 5178234 bytes
HTML transferred: 5065734 bytes
Requests per second: 405.06 [#/sec] (mean)
Time per request: 24.688 [ms] (mean)
Time per request: 2.469 [ms] (mean, across all concurrent requests)
Transfer rate: 4096.65 [Kbytes/sec] received
... And finally a raw dummy JSP (just for reference)
Jsp: 8000 req/sec:
<html>
<body>
<% for (int i = 0; i < 100; i++) { %>
Dummy Jsp <%= i %> <br/>
<% } %>
</body>
</html>
Result:
Total transferred: 12365000 bytes
HTML transferred: 11120000 bytes
Requests per second: 7999.90 [#/sec] (mean)
Time per request: 1.250 [ms] (mean)
Time per request: 0.125 [ms] (mean, across all concurrent requests)
Transfer rate: 19320.07 [Kbytes/sec] received
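As a side note on reading ab output: the two "Time per request" lines are tied to the throughput by the concurrency level (mean per request = concurrency * 1000 / req/s in ms, and the "across all concurrent requests" figure is 1000 / req/s). A plain-Java sketch with the JSP numbers above, assuming the run used -c 10 (which the 10:1 ratio of the two times suggests):

// How ab's "Time per request" lines relate to "Requests per second".
public class AbMath {
    public static void main(String[] args) {
        int concurrency = 10;                 // assumed from the 10:1 ratio in the output
        double requestsPerSecond = 7999.90;
        double meanAcrossConcurrentMs = 1000.0 / requestsPerSecond;          // ~0.125 ms
        double meanPerRequestMs = concurrency * 1000.0 / requestsPerSecond;  // ~1.250 ms
        System.out.printf("%.3f ms per request, %.3f ms across all concurrent requests%n",
                meanPerRequestMs, meanAcrossConcurrentMs);
    }
}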
...
Am I missing something? ... or can the Grails app run much better?
PS: I tried profiling my running Grails app with VisualVM, but got an endless loop of messages like ...
Profiler Agent: Redefining 100 classes at idx 0, out of total 413
...
Profiler Agent: Redefining 100 classes at idx 0, out of total 769
...
And finally the app just stopped working after a few minutes - so it looks like profiling is not a good way to diagnose Grails.
Update:
First of all I have to admit: yes, I need to RTFM - i.e. 'grails run-app' is not the correct way to run Grails for performance measurement. After compiling a WAR and deploying it to Tomcat, performance is not awfully low - it is just low. The metrics below are for a concurrency of 1 user (I just wanted to check what the MAX performance of the framework is in one thread and with no heavy load), and while reading other related posts here I came to "http://stackoverflow.com/questions/819684/jsf-and-spring-performance-vs-poor-jsp-performance" and decided to check Apache Wicket mentioned there - its performance is also included.
The use case is:
- ab.exe -n 500 -c 1 _http://localhost:8080/...
- the server is Tomcat 7 in vFabric tcServer Dev edition with 'insight' running in the background
                          tcServer      Plain Tomcat 7   -c 10
/Grails/book/create       77 req/sec    130 req/sec      410 req/sec
/jsf/backend/listing.jsp  133 req/sec   194 req/sec      395 req/sec
/wicket/library/          870 req/sec   1400 req/sec     5300 req/sec
So ... anyway there is something wrong with Grails. I have done some profiling using tcServer (thanks Karthick) - it looks like it is only able to trace 'Spring-based' actions, and the internal stack trace for Grails looks like the following (for 2 requests - note: the metrics are not stable - I bet the accuracy of tcServer is far from perfect, but it can be used just for information)
Total (81ms)
Filter: urlMapping (32ms)
-> SimpleGrailsController#handleRequest (26ms)
-> Render view "/book/create" (4ms)
Render view "/layouts/main.gsp" (47ms)
Total (79ms)
Filter: urlMapping (56ms)
-> SimpleGrailsController#handleRequest (4ms)
-> Render view "/book/create" (38ms)
Render view "/layouts/main.gsp" (22ms)
PS: it may be that the root cause of the bad performance in Grails is the underlying 'Spring' libs; I will check this in more detail.
Are you running it with run-app?
http://grails.org/Deployment states:
"Grails should never be deployed using the grails run-app command as this sets Grails up in "development" mode which has additional overheads. "
Try deploying your sample app to Tomcat; grails run-app is for development only.
In which environment did you start the app? prod? dev?
Do you use scaffolding?
I've tried it on my machine (Core i7-2600K): a login page with 4 input fields, dynamic layouts and some other things. I got 525 requests per second in the slower dev environment.
Yeah, this is a benchmark by someone who doesn't know a lot about Grails or his environment; first, he is running on Windows, known for being bad at resource management, which is why most web servers/app servers run in Linux environments.
Second, if he is using 'ab' to benchmark, then he doesn't have his proxy cache set up, because after the first hit the remainder of the hits would be cached and he would then be benchmarking his cache, from my understanding of his setup.
So this all just looks like the benchmarking of a bad setup and a poor understanding of Grails. No offense intended.