What are the criteria to determine whether the web server can handle the load using JMeter? [closed]

I have created a test plan using 100 threads. How can we conclude that the web server can handle the load? Which factors should be taken into account for the load test?

I personally think you need to define your own metrics for your test plan in order to call a load test a pass.
Typical metrics I would use:
Each response should come back in less than 250 ms. (Adjust to what your customer would expect.)
All responses should come back with a non-error response.
The server should be in a 'good state' after the load test. (Check memory, threads, database connection leaks, etc.)
Too many resources being consumed is also a bad sign: database connections, memory, hard disk for log files. Define your own metrics here.
Successive 'soak tests' to complement your load tests would also be a good idea.
Basically, run a smaller set of JMeter tests every two hours (so the DBAs etc. don't complain) over the weekend and check on Monday.
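As a rough illustration of checking such pass criteria after a run, here is a minimal sketch. It assumes JMeter's default CSV results output (a .jtl file with a header row and the standard 'elapsed' and 'success' columns); the 250 ms threshold and the results.jtl path are just the example figures from above.

```python
import csv

# Hypothetical pass/fail check over a JMeter CSV results file (.jtl).
# Assumes the default CSV output with a header row and the standard
# 'elapsed' (ms) and 'success' columns.
MAX_RESPONSE_MS = 250  # adjust to what your customer would expect

def check_results(jtl_path):
    slow = errors = total = 0
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if int(row["elapsed"]) > MAX_RESPONSE_MS:
                slow += 1
            if row["success"].lower() != "true":
                errors += 1
    print(f"{total} samples: {slow} over {MAX_RESPONSE_MS} ms, {errors} errors")
    return slow == 0 and errors == 0

print("PASS" if check_results("results.jtl") else "FAIL")
```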

I would recommend that you first clarify your concepts about performance testing and its types (like load test, stress test, soak test, etc.). You can refer to the following blog to get a basic understanding of performance testing and its types:
Load vs. Stress testing
http://www.testerlogic.com/performance-testing-types-concepts-issues/
Once you have a better understanding of the concepts, you will be in a better position to ask the right question. For now, you can focus on the following points:
What is the expected load on your web server (in normal and extreme scenarios)?
What are your acceptance criteria for response time, load time, etc.?
Once you know these numbers, you can create a JMeter test which runs for a specific time span (say 1 hour) and in which the number of threads increases step by step (100 users in the first 10 minutes, 200 users from 10-20 minutes, 300 users from 20-30 minutes, and so on). (Hint: you can use the ramp-up period to achieve this scenario.)
You can perform these tests, check the reports, and compare the response time and other performance factors during the first 10 minutes (when the load was 100 users) against the last 10 minutes (when the load was at its maximum).
This is just to give you a high-level idea. As I said before, it will be better if you first clarify the basic performance testing concepts and then design/perform the actual testing.
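To make the stepped scenario concrete, here is a small illustrative sketch (plain Python, not JMeter itself) that prints the intended thread count for each 10-minute window; in JMeter the same shape is usually approximated with ramp-up settings or a stepping thread group plugin.

```python
# Illustrative sketch of the stepped profile described above:
# 100 users added every 10 minutes over a 1-hour test.
STEP_USERS = 100      # users added per step
STEP_MINUTES = 10     # length of each step
TOTAL_MINUTES = 60    # total test duration

for step in range(TOTAL_MINUTES // STEP_MINUTES):
    start, end = step * STEP_MINUTES, (step + 1) * STEP_MINUTES
    print(f"{start:2d}-{end:2d} min: {(step + 1) * STEP_USERS} users")
```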

Like rjdkolb said, you have to define your metrics; check what you require from your service/app.
It all depends on what service you are working with: do you have a stable load on the server or peaks? Do you expect around 100 users online or 10,000 at once? Do you need fast answers, or just correct answers in a reasonable time? Maybe the business foresees that the load will build gradually over the next year, starting with just 100 requests per minute but finishing with 1,000 per second?
If you decide that, as mentioned in the other answer, you need an answer in less than 250 ms, then gradually increase the load to check how many users/requests you can handle while still responding on time. Or maybe you need answers for 1,000 users working simultaneously; then try a load like that and check whether they get their answers and how fast the answers come back. A lot to think about, isn't it?
Try to read a bit about the types of performance testing: maybe here on SoapUI, or this explanation of some metrics. A lot of texts on the internet can guide you on your way.
Have fun.

Related

How to ideally load test using the JMeter tool?

I am completely new to Performance testing and JMeter and hence my question may sound silly to some people.
We have identified some flows of an application: Login, SignUp, Perform Transaction. Basically, we are trying to test our API's performance, so we have used the HTTP Request Sampler heavily. If I have scripted all these flows in JMeter, how can I get answers to the following?
How can we decide the benchmark for this system? There is no one in the organisation who can help with numbers right now, and we have to identify the number of users beyond which our system would crash.
For example, if we say that 100,000 users are expected to visit our website within one hour, then how can we execute this in JMeter? Should a Forever loop be used with 3600 seconds (60 minutes) of ramp-up, or should I go ahead with Number of Threads as 100,000, Ramp-Up as 3600 and Loop Count as 1? What is the ideal way to test this?
What has been done till now?
1. We used to run the above-mentioned flows with Loop Count set to 1. However, as far as I know, it is then completely based on how much ramp-up time I give, and JMeter decides accordingly how many threads it needs in parallel to complete the task. The results were not helpful in our case, as there was not much load on the system.
2. Then we changed the approach and tried Loop Count set to Forever for some 100 users and ran the test for a duration of 10 minutes. After continuing with such tests for some time, we got a higher standard deviation in JMeter's Summary Report, which was fixed by tuning our DB and applying some indexes. We continued this way, but I am still confused whether this really simulates a realistic scenario.
Thanks in advance!
Please refer to my answer and comments on the similar question below:
performance-testing-in-production-environment-using-jmeter
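As a back-of-the-envelope aid for the "100,000 users in one hour" question above: the business figure is an arrival rate, not a thread count, so the concurrency you need depends on how long each simulated session stays active. The 5-minute session length below is purely an assumed number for illustration.

```python
# Back-of-the-envelope numbers for "100,000 users in one hour".
users_per_hour = 100_000
arrival_rate = users_per_hour / 3600          # ~27.8 new sessions per second

# Assumed figure for illustration: each simulated session lasts ~5 minutes.
session_seconds = 5 * 60
concurrent_users = arrival_rate * session_seconds  # ~8,333 active threads

print(f"Arrival rate: {arrival_rate:.1f} users/sec")
print(f"Approx. concurrent users: {concurrent_users:.0f}")
```

Under those assumptions you would size the thread group for the concurrency (here roughly 8,300 threads), not for 100,000 threads all alive at once.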

What are the parameters to collect after performance testing? [closed]

I know this may be a repeat question, but I have started using WebPerformanceTest and LoadTest in my web projects, and I can run the WebPerformanceTest and LoadTest.
Now, what are the parameters/statistics that I need to share with the dev team or business team? I can think of these, but it would be great if someone could share the other parameters I might have to consider:
1. No. of users the application can support
2. Response time the application can give under a sustainable load
The following things you can consider sharing:
If SLAs are specified by the dev team or stakeholders, and your performance test shows that the web application is not meeting those SLAs, then you can share that.
The next question that comes to your mind and theirs is: why? (Try finding out which part/tier is taking the most time or is a bottleneck.) This can be done by analyzing logs, or by using a profiler, which will point you at costly and slow components.
The next question is the job of the performance engineer: how to resolve these issues and improve the performance of the application. If you know the application very well, then try tuning it, and share the improvement results with the dev team or stakeholders.
The maximum number of users may be confusing if you do not limit response time. For 100 ms requests, 10 simultaneous users mean 100 rps (requests per second), while for 10 s requests, 100 simultaneous users mean only 10 rps.
If you use simple hit-based testing (e.g. testing a single page or the performance of a specific request), it could be better to use the rps metric instead.
For response time, the mean could be confusing as well, especially in the case of high variance; it's better to provide response times at several percentiles.
E.g. 50% in 50 ms, 75% in 55 ms, 90% in 60 ms, 95% in 70 ms, 99% in 90 ms and 100% in 10 s, with an average time of 150 ms. For some services 150 ms is very good, but that roughly 1% of really slow answers is unacceptable, and you can hardly find that problem using just the mean and median response time.
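As a sketch of reporting percentiles instead of just the mean (the sample values below are made up, with one slow outlier to mirror the point above):

```python
import math

# Made-up elapsed times (ms); one slow outlier dominates the mean.
samples = [48, 50, 52, 55, 60, 70, 90, 10_000]

def percentile(data, p):
    """Nearest-rank percentile: smallest value covering p% of the data."""
    s = sorted(data)
    return s[max(0, math.ceil(p / 100 * len(s)) - 1)]

print(f"mean: {sum(samples) / len(samples):.0f} ms")  # ~1303 ms, misleading
for p in (50, 75, 90, 95, 99):
    print(f"p{p}: {percentile(samples, p)} ms")       # p50 is only 55 ms
```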
Also, in my experience, collecting resource usage stats (CPU, memory, I/O intensity and network usage) is very helpful for determining bottlenecks (e.g. a service slowdown due to high I/O caused by an insufficient amount of memory for caches).
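For collecting those resource stats, a simple server-side sampler could look like the following sketch (it assumes the third-party psutil package is installed; the interval and duration are arbitrary, and error handling is omitted):

```python
import psutil  # third-party: pip install psutil

# Sample basic resource usage every few seconds during a load test.
def sample(duration_s=60, interval_s=5):
    for _ in range(duration_s // interval_s):
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        mem = psutil.virtual_memory().percent
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        print(f"cpu={cpu:.0f}% mem={mem:.0f}% "
              f"disk_read_bytes={disk.read_bytes} net_sent_bytes={net.bytes_sent}")

sample()
```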
Are you asking the right question?
For me a big part of load and performance testing is deciding what my customer wants to learn about the system being tested. There is an element of "what data can I show the customer?", but that is based on interpreting what they ask for. The customer may not know what to ask; your job as a tester is to understand what the customer wants and provide them with the answers they need.
The two topics you list show how the system appears to its users: when it will break and how fast it responds. There are several variations on those factors based on rate-of-change of user load and on duration of the test.
Other factors include the performance of the various parts of the server computers that are being tested. Visual Studio load tests can collect performance data from other computers while the test runs, so they can monitor the web server(s), database server(s), application server(s) and so on. On each of these servers, data about CPU and memory usage, SQL and IIS performance, and much more can be collected. All this data can be compared (most easily via graphs) against user load, error rates and transaction times to determine which parts of the system have plenty of headroom, which are busy, and where the bottlenecks occur. Monitoring all this data may also reveal threshold warnings from the various servers; these should be checked against the Microsoft documentation and, perhaps, other sources to determine whether they are adversely affecting system performance and whether they should be investigated in more detail.
These and many other ideas are possible but it all goes back to working out what your customer wants to learn.
The same question was asked on another forum and the above words are almost identical to the answer I posted there.
You can furnish the following details to your clients:
Response Time
Hits per Second
Throughput
Connections Per Second
Time to First Buffer
Number of Errors
Transactions Graph
CPU, Memory, and Disk Utilization
Network Utilization (if applicable)
Number of database insert/update/delete operations
It sounds like you simply have no (or exceedingly poor) requirements, and you don't have great depth in the field of performance testing and engineering. As far as what to collect:
Before the test:
Full load profile of business functions that make up the load.
Documentation of each business function. Items to time within each business function.
Expected response times for each of the timed business functions
Pay special attention to think times and iteration pacing
Web logs from the current system so you can objectively measure how many people are on the system at any given time, not how many sessions are alive and have not yet timed out.
Test Environment with some defined match level to the production environment to scale your load appropriately.
In the test:
Response times matched to the timing of the business functions on the requirements / user stories
Other enumerated datapoints for requirements (hits, volume returned, etc...)
A measurement of any finite resource in the system under test for bottleneck identification for slow response times. You can start at the top level (CPU, DISK, MEMORY, NETWORK) and work your way down through those stats as you find a resource constriction at the top level.
Post Test:
Executive overview: did you hit the requirements? (YES|NO)
Detailed data: response times, monitor peaks
Analysis: where is the likely bottleneck holding you back?
If you are attempting to represent human behavior, then under no circumstances should you eliminate think time. Think time, or the time between requests on an individual session, is baked into the definition of the client-server model, and as you reduce it to zero your test becomes less and less a predictor of what will happen in production.
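To see why numerically: in a closed loop each virtual user can only issue a new request after the previous response plus the think time, so the load offered by a fixed user count follows a simple formula. A rough model:

```python
# Rough closed-loop model: requests/second generated by N looping users,
# each of whom sends a request, waits for the response, thinks, repeats.
def offered_rps(users, response_time_s, think_time_s):
    return users / (response_time_s + think_time_s)

print(offered_rps(100, 0.25, 10.0))  # ~9.8 rps with a 10 s think time
print(offered_rps(100, 0.25, 0.0))   # 400 rps with think time removed
```

The same 100 users generate roughly forty times the load when think time is removed, which is why a zero-think-time test says little about production behavior.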
Typically, it is based on the benchmark that you want to achieve with the given hardware and environment.
The following are the key parameters:
No.of concurrent users (manual and system threads)
Load of transactional and existing data
Response time (typically page)
Throughput
Utilization of CPU, memory, disk I/O and network bandwidth (applicable where there is an integration with peripheral systems)
Success percentage

Performance Testing fundamentals

I have some basic questions about understanding the fundamentals of performance testing. I know that under various circumstances we might want to do:
- Stress testing
- Endurance testing, etc.
But my main objective here is to ensure that the response time from the application is decent under a load which is towards the higher end, or at least above the average load.
My questions are as follows:
When you start to plan your expected response time for the application, what do you consider, if that is even the first step? I mean, I have a web application now. Do I just pull a figure out of the air and say "I would expect the application to take 3 seconds to respond to each request", and then go about figuring out what my application is lacking to achieve that response time?
OR is it the other way around: you start the performance test with a given set of hardware, see what response time you get now, look at the results and say "well, it's 8 seconds right now, I'd like it to be 3 seconds at most, so let's see how we can optimize it down to 3 seconds"? But again, is that 3 seconds out of the air? I am sure that scaling up machines alone will not improve response time; it helps only when a single machine/server is under load and you start clustering?
Now, for one single user I have a response time of 3 seconds, but as the load increases it degrades exponentially; so where do I draw the line between "I need to optimize the code further" (which has its upper limit) and "I need to scale up my servers" (which has a limit too)?
What are the best free tools for performance and load testing? I have used JMeter a bit. But is there anything else that is good and open source?
If I have to optimize code, do I start by profiling the specific flows which take a lot of time responding to requests?
Basically I'd like to see how one goes about from end to end doing performance testing for their application. Any links or articles would be very helpful.
Thanks.
The Performance Testing Council is your gateway to freely exchange experiences, knowledge, and practice of performance testing.
Also read the Microsoft Patterns & Practices guidance on performance testing. This guide shows you an end-to-end approach for implementing performance testing.
phoenix mentioned the Open Source tools.
First of all, you can read:
Best Practices for Speeding Up Your Web Site
For tools:
Open source performance testing tools
performance: tools
This link and this one show an example and method of performance tuning an application when the application does not have any obvious 'bottlenecks'. It works most intuitively on individual threads. I have no experience using it on web applications, although other people do. I agree that profiling is not easy, but I've always relied on this technique, and I think it is pretty easy and effective.
First of all, design your application properly.
Use a profiler, see where the bottlenecks in your application are, and take them away if possible. MEASURE performance before improving it.
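As a minimal example of "measure before improving", here is Python's built-in cProfile on a stand-in function (slow_flow is hypothetical; substitute whatever code path you suspect is slow):

```python
import cProfile

# Stand-in for whatever request-handling code you suspect is slow.
def slow_flow():
    return sum(i * i for i in range(1_000_000))

# Prints per-function call counts and cumulative timings,
# so you optimize what the numbers point at, not what you guess.
cProfile.run("slow_flow()", sort="cumulative")
```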
I will try to provide a basic step-by-step guide which can be used for implementing performance testing in your project.
1 - Before you start testing, you should know the amount of physical memory and the amount of memory allocated to the JVM (or whatever runtime you use), the DB size, and so on. Collect as many metrics as possible for your current environment. Know your environment.
2 - The next step would be to identify the typical production DB size and the expected yearly growth. You will want to test how your application will behave after a year, two, five, etc.
3 - Automate the environment setup; this will help you a lot in the future for regression testing and defect-fix validation. You need DB dumps for your tests, with current (baseline), one-year and five-year volumes.
4 - Once you're done gathering basic information, think about monitoring your servers under load. Maybe you already have a monitoring solution like http://newrelic.com/ ; this will help you identify the cause of performance degradation (CPU/memory/number of threads, etc.). Some performance testing tools have built-in monitoring.
At this point you are ready to move on to tool and load selection; materials on how to do that have already been provided, so I will skip the part about workload selection.
5 - Select a tool. I think JMeter + http://blazemeter.com/ is what you need at this point; both have a lot of nice articles and educational materials. For script recording I would recommend BlazeMeter's Chrome extension instead of JMeter's built-in solution. If you still think you lack knowledge of how things are done in JMeter, I recommend the book Performance Testing with JMeter 2.9 by Bayo Erinle.
6 - Analyze the results, review the test plan and take the corresponding actions.

Correlation between requests per second and response time?

Can someone please explain the correlation between requests per second and response time? Which are you trying to improve first? If your competitor offers fewer 'requests per second' on their most-used functionality than you do, is your application performing better in terms of end-user performance?
Can someone please explain the correlation between requests per second and response time?
Think of this situation as if it were a gas station. Cars arrive at various intervals and occupy a pump; they spend some time filling up, and then they leave.
Each car that arrives and occupies a pump is a request.
The time it takes to fill up is your response time.
You can improve things in two ways:
If you add more pumps, you can service additional cars at once because there will be more capacity.
If you make all your pumps faster, you can service more cars over time with the same number of pumps, because each car will finish sooner.
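The gas station picture is essentially Little's Law from queueing theory, which ties the two metrics together (here L is the average number of cars at the station, lambda the arrival rate, and W the average time each car spends there):

```latex
L = \lambda W
\qquad\text{i.e.}\qquad
\text{concurrent requests} = \text{requests per second} \times \text{average response time (s)}
```

So with capacity (the number of pumps) fixed, the only way to push throughput up is to push the time per car down, which is why the two metrics trade off against each other.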
Which are you trying to improve at first?
That depends. Do you want to serve people faster (improving their experience while making some others wait), and thus serve more people overall, or do you want to serve more people at once (at the possible expense of per-request time)? Ideally, get both metrics as good as possible.
It all depends on what sort of load your system will be under.
If you have millions of users, then you need to handle more requests per second, possibly at the expense of response time; otherwise users may not be able to connect when they want to.
However, if you are only going to have 30 users, then it's more important to them that your system responds quickly than that it can handle a thousand requests a second.
Requests per second may be high while offering an awful user experience. You might have a lot of users buying thousands of concert tickets per second but the response time for each user is over 30 seconds.
For a high-performing, enjoyable web site, you need both a high number of requests per second and a cap on response time. As a user, I like 5 seconds or less.
If your competitor offers fewer 'requests per second' on their most-used functionality than you do, is your application performing better in terms of end-user performance?
I wouldn't agree with that. Look at Google. They serve thousands of requests a second; I think it's something like 100 million per day and 3 billion per month.
To answer your question, I think response time is more important than requests per second. Sure, you can optimize/minimize the number of requests made, but if your product scales to handle unlimited requests (just by throwing more hardware at the problem), then I think that is more valuable.

Recommendations for Web application performance benchmarks

I'm about to start testing an intranet web application. Specifically, I have to determine the application's performance.
Could someone please suggest formal/informal standards for how I can judge the application's performance?
Use a tool for stress and load testing. If you're using Java, take a look at JMeter. It provides different methods to test your application's performance. You should focus on:
Response time: how fast your application responds to normal requests. Test some read/write use cases.
Load test: how your application behaves at high-traffic times. The tool will submit several requests (you can configure this properly) over a period of time.
Stress test: can your application operate over a long period of time? This test will push your application to its limits.
Start with these; if you're interested, there are other kinds of tests.
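As a toy illustration of the load-test idea (not a replacement for JMeter, which does all of this properly), the sketch below fires 200 requests through 20 worker threads at an assumed local URL and reports timings; error handling is deliberately omitted.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"  # example target, adjust to your app

def one_request(_):
    start = time.time()
    with urlopen(URL, timeout=10) as resp:  # raises on HTTP errors
        resp.read()
        return resp.status, time.time() - start

# 20 concurrent workers issuing 200 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(one_request, range(200)))

times = sorted(t for _, t in results)
print(f"median: {times[len(times) // 2] * 1000:.0f} ms, "
      f"max: {times[-1] * 1000:.0f} ms")
```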
"Specifically, I have to determine the application's performance...."
This comes full circle to the issue of requirements: the captured expectations of your user community for what is considered reasonable and effective. Requirements have a number of components:
General Response time, " Under a load of .... The Site shall have a general response time of less than x, y% of the time..."
Specific Response times, " Under a load of .... Credit Card processing shall take less than z seconds, a% of the time..."
System Capacity items, " Under a load of .... CPU|Network|RAM|DISK shall not exceed n% of capacity.... "
The load profile, which is the mix of the number of users and transactions which will take place under which the specific, objective, measures are collected to determine system performance.
You will notice that the response times and other measures are not absolutes. Taking a page from six sigma manufacturing principles, the cost to move from one exception in a million to one exception in a billion is extraordinary, and the cost to move to zero exceptions is usually not bearable by the average organization. What is considered an acceptable response time for an application unique to your organization will likely be entirely different from that of a highly commoditized, public internet-facing offering. For highly competitive solutions, response time expectations on the internet are trending towards the 2-3 second range, where user abandonment picks up severely. This has dropped over the past decade from 8 seconds, to 4 seconds, and now into the 2-3 second range. Some applications, like Facebook, shoot for almost imperceptible response times in the sub-one-second range for competitive reasons. If you are looking for a hard standard, it just doesn't exist.
Something that will help your understanding is to read through a couple of industry benchmarks for style, form, function.
TPC-C Database Benchmark Document
SpecWeb2009 Benchmark Design Document
Setting up a solid set of performance tests which represents your needs is a non-trivial matter. You may want to bring in a specialist to handle this phase of your QA efforts.
On your tool selection, make sure you get one that:
Can exercise your interface
Can report against your requirements
You or your team have the skills to use
You can get training on, and will attend the training with management's blessing
Misfire on any of the four elements above and you may as well have purchased the most expensive tool on the market and hired the most expensive firm to deploy it.
Good luck!
To test the front end, YSlow is great for getting statistics on how long your pages take to load from a user's perspective. It breaks the load down into stats for each specific HTTP request, the time it took, etc. Get it at http://developer.yahoo.com/yslow/
Firebug, of course, is also essential. You can profile your JS explicitly, or in real time by hitting the profile button, making optimisations where necessary and seeing how long all your functions take to run. This changed the way I measure the performance of my JS code. http://getfirebug.com/js.html
Really, the big thing I would look at is response time, but other indicators I would check are processor and memory usage versus the number of concurrent users/processes. I would also check that everything performs as expected under normal and then peak load. You might encounter scenarios where higher load causes application errors due to various requests stepping on each other.
If you really want detailed information, you'll want to run different types of load/stress tests. You'll probably want to look at a step load test (a gradual increase of users on the system over time) and a spike test (a significant number of users all accessing at the same time where almost no one was accessing it before). I would also run tests against the server right after it has been rebooted to see how that affects the system.
You'll also probably want to look at a concept called HEAT (Hostile Environment Application Testing). This shows what happens when some part of the system goes offline. Does the system degrade gracefully? This should be a key standard.
My one really big suggestion is to establish what the system is supposed to do before doing the testing. The main reason is accountability. Get people to agree on what the system is supposed to do, and then test to see whether that holds true. This is key because people will immediately see the results, and that will be the base benchmark for what is acceptable.