I am load-testing a WCF service with NetTcpBinding. To do so, I have a unit test which makes a call to the service and asks for X number of data results, and I use this unit test in a load test (Visual Studio 2010).
The problem is that I do not know how to find the maximum throughput of the service. I keep changing the number of users/clients in the load test to see whether the results come back any faster. Is there a better way to do this?
If you want to measure the performance of your WCF service, you may want to use a built-in mechanism called performance counters. It allows you to add diagnostic instructions to your code, such as incrementing a counter, and later view the results in the perfmon.exe tool. More information at http://msdn.microsoft.com/en-us/library/ms735098.aspx.
I suppose you can use it to measure how fast the service responds to a given number of clients and so find the maximum throughput.
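The WCF/NetTcpBinding client itself has to be written in .NET, but the stepped-load idea is language-agnostic. Here is a minimal sketch in Python of how the "keep changing the number of clients" loop could be automated so the saturation point is found rather than guessed; call_service is a hypothetical stub standing in for the real service call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    # Hypothetical stand-in for a single service call; in the question this
    # would be the WCF client invocation from the unit test.
    time.sleep(0.05)

def measure_throughput(clients, duration_s=30):
    """Run `clients` concurrent call loops for duration_s seconds; return calls/sec."""
    deadline = time.monotonic() + duration_s

    def worker(_):
        count = 0
        while time.monotonic() < deadline:
            call_service()
            count += 1
        return count

    with ThreadPoolExecutor(max_workers=clients) as pool:
        total = sum(pool.map(worker, range(clients)))
    return total / duration_s

# Step up the client count until throughput stops improving noticeably;
# the knee of that curve is the maximum sustainable throughput.
previous = 0.0
for clients in (1, 2, 5, 10, 20, 50):
    tps = measure_throughput(clients)
    print(f"{clients:>3} clients -> {tps:8.1f} calls/sec")
    if previous and tps < previous * 1.05:  # less than 5% gain: near saturation
        break
    previous = tps
```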
How do I create a proper workload model in JMeter if I only have the number of concurrent users and the response time as input/requirements? Do I need any additional information in order to load test the app?
If this is the only information I can get, how should I approach load testing with JMeter? Any ideas, suggestions or advice?
A "proper workload" would be simulating real life usage of the system under test by "the number of concurrent users".
Just replicate real user behaviour at HTTP protocol level paying attention to pretty much everything i.e.
1. JMeter's network footprint must be an exact replica of the browser's network footprint when it comes to HTTP requests, so cross-check the number and nature of the requests between the browser developer tools and JMeter's View Results Tree listener
2. Make sure to send the same/proper HTTP Headers
3. As a subset of point 2, pay attention to Cookies
4. Make sure to download embedded resources like the browser does
5. In addition to point 4, make sure to properly handle the cache
6. Use Timers wisely to simulate think times
7. If your users perform different actions, configure your JMeter test to distribute them the way real users are distributed
Once done, just run your test with the anticipated number of users for the desired duration and compare the actual response times against the anticipated ones.
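By way of illustration, here is a minimal Python sketch (using the requests library, with hypothetical URLs, resources and header values) of what "behaving like a browser" means at the protocol level; in JMeter the equivalents are the HTTP Header Manager, HTTP Cookie Manager, embedded-resource download settings and Timers:

```python
import random
import time
import requests

session = requests.Session()  # keeps cookies across requests, like a browser

# Send the same headers a real browser would (values here are examples)
session.headers.update({
    "User-Agent": "Mozilla/5.0 ...",
    "Accept-Language": "en-US,en;q=0.9",
})

# Load the page itself...
page = session.get("https://example.com/home")

# ...then its embedded resources, as a browser would (hypothetical paths)
for resource in ("/styles/main.css", "/scripts/app.js", "/images/logo.png"):
    session.get("https://example.com" + resource)

# Think time between user actions
time.sleep(random.uniform(2, 5))

# Next user action; cookies are sent automatically by the session
session.post("https://example.com/search", data={"q": "load testing"})
```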
The Avg RT and other KPIs (such as Throughput) are the result of executing the workload with a given number of VUsers. IMO, the best approach is to generate the load by varying the number of VUsers, as shown in the graph:
This graph also shows a key concept: performance (as measured by the KPIs) is not linear (although it might appear linear at a small number of VUsers).
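One back-of-the-envelope way to reason about why the curve flattens is Little's Law for a closed workload: throughput = VUsers / (response time + think time). A rough sketch of the arithmetic, with invented numbers purely for illustration:

```python
# Little's Law for a closed workload: throughput = VUsers / (RT + think time)
# All numbers below are invented for illustration.
think_time_s = 3.0

def expected_throughput(vusers, response_time_s):
    return vusers / (response_time_s + think_time_s)

# While the server keeps up, RT stays flat and throughput grows linearly:
print(expected_throughput(10, 0.2))   # ~3.1 req/s
print(expected_throughput(100, 0.2))  # ~31.3 req/s

# Past saturation, RT grows with the load, so throughput stops scaling:
print(expected_throughput(200, 3.4))  # ~31.3 req/s: more users, same throughput
```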
I'm struggling to find a framework to help me test the performance of a service I am writing, which fronts a long-running process. A simplified description of the service is:
1. POST data to the /start endpoint; it returns a token.
2. GET the status of the action at /status/{token}; poll this until it returns a status of completed.
3. GET the results from /result/{token}.
I've dabbled with Locust.io, which is fine for measuring the responsiveness of the API, but does little for measuring the overall end-to-end performance. What I would really like to do is measure how long all three steps take to complete, particularly when I run many in parallel, etc. I should imagine my service back end falls over far sooner than the REST API does.
Can anyone recommend any tools / libraries / frameworks I can use to measure this please? I would like to integrate it with my build pipeline so I can measure performance as code is changed.
Many thanks
The easiest option I can think of is Apache JMeter: it provides the Transaction Controller, which generates an additional "transaction" sample holding its children's cumulative response time (along with other metrics).
"Polling" can be implemented using the While Controller.
An example test plan would nest the /start request, the While Controller (polling /status/{token} until it returns completed) and the /result/{token} request under a single Transaction Controller.
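If you would rather stay with Locust (which the question already uses), the same cumulative-timing idea can be approximated by wrapping the three steps and firing a custom request event. A rough sketch, assuming a recent Locust 2.x and hypothetical JSON fields named token and status:

```python
import time
from locust import HttpUser, task, between

class LongRunningFlowUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def start_poll_fetch(self):
        started = time.monotonic()
        # 1. Kick off the long-running job (payload and fields are hypothetical)
        token = self.client.post("/start", json={"data": "..."}).json()["token"]
        # 2. Poll until the job reports completion
        while True:
            status = self.client.get(f"/status/{token}",
                                     name="/status/[token]").json()["status"]
            if status == "completed":
                break
            time.sleep(1)
        # 3. Fetch the result
        self.client.get(f"/result/{token}", name="/result/[token]")
        # Report the whole flow as one synthetic "request" so the
        # end-to-end time shows up in Locust's statistics
        self.environment.events.request.fire(
            request_type="FLOW",
            name="start->result",
            response_time=(time.monotonic() - started) * 1000,
            response_length=0,
            exception=None,
            context={},
        )
```

The name="..." arguments group all polling requests under one label so that per-token URLs don't flood the statistics table.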
I have created a test plan in JMeter and ran it for 10 users; it ran successfully without any errors, as shown in the screenshot of the listeners I added to my test plan.
Looking at these listeners, how can I tell whether the values of the Standard Deviation, Throughput, Median and Error% fields are as expected? Are there any ideal/expected/benchmark values for these fields against which I can compare mine to confirm that my test plan works as it should? Moreover, how can I tell whether the performance shown by my test is fine/good/better/best?
Please suggest. Thanks.
It sounds like you don't really understand what you're doing, so I would recommend starting with e.g. the Performance Testing Guidance for Web Applications e-book.
With regards to the "values": we have no idea whether they match your expectations. There are no reference "values"; normally your project should have non-functional requirements or SLAs defining the maximum response time or the minimum number of hits per unit of time.
Check out the JMeter Glossary to learn what the "values" mean.
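For reference, those listener fields are just standard statistics over the recorded samples. A small sketch of how they could be computed from raw response times (the sample data is invented, and JMeter's exact formulas may differ in detail):

```python
import statistics

# Invented raw sample data: (response_time_ms, success_flag)
samples = [(210, True), (190, True), (250, True), (230, False), (205, True)]
test_duration_s = 10.0

times = [t for t, _ in samples]
median = statistics.median(times)             # "Median"
std_dev = statistics.pstdev(times)            # "Std. Dev."
throughput = len(samples) / test_duration_s   # "Throughput" (samples/sec)
error_pct = 100 * sum(1 for _, ok in samples if not ok) / len(samples)  # "Error %"

print(median, round(std_dev, 1), throughput, error_pct)
```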
If you don't have NFRs or SLAs defined, you can still perform a stress test along these lines:
1. Make sure that your JMeter test behaves like a real browser; at the very least I fail to see:
   - HTTP Cookie Manager
   - HTTP Cache Manager
   - HTTP Header Manager
2. Run your test in command-line non-GUI mode (e.g. jmeter -n -t test.jmx -l result.jtl)
3. Start with 1 virtual user and gradually increase the load until:
   - you reach the saturation point (throughput stops growing)
   - you start seeing performance degradation
This way you will be able to state the maximum number of users your system can support without issues.
I have recorded my web application using a template, and I just want to confirm that the load test results I am getting are correct. Does simply increasing the number of users give proper results? Is that enough for load testing a web application?
First of all, you need to ensure that your test does what it is supposed to do. Recorded tests can rarely be replayed successfully, so normally you should proceed as follows:
1. Add a View Results Tree listener and run your test with 1 user. Inspect the request and response details to verify your test steps.
2. Perform correlation and parametrization if required (a plain-Python illustration of correlation follows this list):
   - Correlation: the process of identifying and handling dynamic parameters. Most often people use the Regular Expression Extractor for this.
   - Parametrization: the process of making your test data-driven. For example, if your application assumes multiple authenticated users, you need to store the credentials somewhere. The most commonly used test element for this is the CSV Data Set Config.
3. Make your test realistic. Virtual users simulated by JMeter need to represent real users using real browsers as closely as possible, with all the related stuff: cookies, headers, cache, etc. See How To Make JMeter Behave More Like A Real Browser to learn how to configure JMeter to act more like a real user. Also, real users need some time to "think" between operations, so make sure you are using Timers to simulate this behaviour as well.
4. Only after you apply the above points should you add more virtual users. Again, run your test with 2-3 users and iterations to ensure it functions as designed. Once you are happy with it you can increase the load, but don't overwhelm your server: increase the load gradually and check the impact of the increasing load on your application, i.e. how response time, throughput and the number of errors change as the load grows. The same applies to decreasing the load: don't switch it off at once, decrease the number of virtual users gradually.
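To make the correlation idea concrete, here is roughly what JMeter's Regular Expression Extractor does, sketched in plain Python with the requests library (the URL, form fields and token name are hypothetical):

```python
import re
import requests

session = requests.Session()

# The first request returns a page containing a dynamic value (e.g. a CSRF token)
login_page = session.get("https://example.com/login").text

# "Correlation": extract the dynamic value instead of replaying a recorded one
match = re.search(r'name="csrf_token" value="([^"]+)"', login_page)
csrf_token = match.group(1) if match else ""

# "Parametrization": credentials come from test data, not hard-coded recordings
# (in JMeter this would be a CSV Data Set Config row)
user, password = "testuser1", "secret"

# Replay the next step using the freshly extracted value
session.post("https://example.com/login",
             data={"username": user, "password": password, "csrf_token": csrf_token})
```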
See also: Building a Web Test Plan and Building an Advanced Web Test Plan in the JMeter user manual.
Suppose I run a load test with 5000 threads, or maybe more. Will the main server under test crash at a certain level?
As EJP said, it is JMeter's purpose to find the limits of the tested application and how it reacts under load.
So yes, it is perfectly possible.
You should read the JMeter Manual.
If you are doing any kind of performance testing (load, stress, soak, etc.), you will want to know at what point your application server falls over, i.e. its breaking point.
Once you've found your upper limit, start dialling back the number of threads until you find your application's "sweet spot", for example CPU usage between 70% and 80%.
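As an illustration of watching for that sweet spot, a minimal sketch assuming the third-party psutil package, run on the application server while the test executes:

```python
import psutil  # third-party: pip install psutil

# Sample CPU usage every 5 seconds during the load test and flag
# when it sits in the 70-80% "sweet spot" band.
while True:
    cpu = psutil.cpu_percent(interval=5)  # blocks for 5 s, returns a percentage
    if cpu >= 80:
        band = "overloaded"
    elif cpu > 70:
        band = "sweet spot"
    else:
        band = "headroom"
    print(f"CPU {cpu:5.1f}% - {band}")
```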