Can someone please suggest how we can create a 'Test Run' in Micro Focus Performance Center 2020 for the following requirement:
At first, 100% of the Vusers will be loaded into the system. After 30 minutes of settling time, the overall user load will be increased by 10%, and the test will continue for another 30 minutes at 110% load. The load will then be increased by another 10% and run for another 30 minutes at 120%. This cycle will repeat until the user load reaches 200%, at which point ramp-down starts.
See scheduling for groups of users. This is covered in the training material and the product documentation.
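One way to sanity-check such a stepped schedule before building it in the Performance Center scheduler is to lay the steps out programmatically. A minimal sketch, assuming a hypothetical base of 500 Vusers (the actual base is whatever 100% means for your test):

```python
# Stepped schedule: start at 100% of the base Vuser count, then add 10%
# of the base every 30 minutes until reaching 200%, after which ramp-down
# would begin. BASE_VUSERS is a hypothetical example value.
BASE_VUSERS = 500

schedule = []
elapsed_min = 0
for pct in range(100, 201, 10):          # 100%, 110%, ..., 200%
    vusers = BASE_VUSERS * pct // 100
    schedule.append((elapsed_min, pct, vusers))
    elapsed_min += 30                    # hold each step for 30 minutes

for start, pct, vusers in schedule:
    print(f"t={start:4d} min: {pct}% load -> {vusers} Vusers")
```

This gives 11 steps of 30 minutes each, i.e. 5.5 hours of ramped steady-state before ramp-down, which is worth knowing when you reserve the timeslot.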
I have to do performance testing of an e-commerce application, and I have the details needed: average TPH and peak TPH, plus average and peak user counts.
For example: an average of 1000 orders/hour, a peak of 3000 orders/hour during the holiday season, expected to grow to 6000 orders/hour next holiday season.
I am unsure which values to use for users and TPH when running a one-hour load test.
Also, what load is preferable for stress testing and scalability testing?
Any guidance would be a great help, not only from a testing point of view but also for understanding the concepts, which would help me a great deal down the line.
This is a high-business-risk endeavor. Get it wrong and your ledger doesn't go from red to black on the day after Thanksgiving, plus you have a high probability of winding up with a bad public-relations event on Twitter. Add to that the fact that more than 40% of people who hit a website failure will not return.
That being said, do your skills match the risk to the business? If not, the best thing to do is to advise your management to acquire a higher-skilled team. Then you should shadow them in all of their actions.
I think it helps to have some numbers here. There are roughly 35 days in this year's holiday shopping season, which translates to 840 hours. At an average of 1000 sales per hour over those 840 hours:
$25 avg sale: you are looking at revenue of $21 million
$50 avg sale: ... $42 million
$100 avg sale: ... $84 million
Every hour of downtime at peak (3000 orders per hour) costs you:
$25 avg sale: ... $75K
$50 avg sale: ... $150K
$100 avg sale: ... $300K
If you have downtime, then more than 40% of people will not return, based upon the latest studies. And you have the Twitter effect, where people complain loudly and drive off potential site visitors.
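The arithmetic behind these figures can be checked directly (the $25/$50/$100 average sale values are the illustrative assumptions from above):

```python
SEASON_DAYS = 35
SEASON_HOURS = SEASON_DAYS * 24          # 840 hours
AVG_ORDERS_PER_HOUR = 1000               # season-long average
PEAK_ORDERS_PER_HOUR = 3000              # holiday peak

for avg_sale in (25, 50, 100):
    season_revenue = SEASON_HOURS * AVG_ORDERS_PER_HOUR * avg_sale
    peak_hour_loss = PEAK_ORDERS_PER_HOUR * avg_sale
    print(f"${avg_sale} avg sale: season revenue ${season_revenue / 1e6:.0f}M, "
          f"peak downtime cost ${peak_hour_loss / 1000:.0f}K/hour")
```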
I would advise you to bring in a team. Act fast; the really good engineers are quickly being snapped up for holiday work. These are not numbers to take lightly, nor is it work to press someone into who hasn't done it before.
If you are seriously in need, and your marketing department knows exactly how much increased conversion they get from a faster website, then I can find someone for you. They will do the work upfront at no charge, but they will charge a 12-month residual based upon the decrease in response time and the increased conversion that results.
Normally a performance testing effort is not limited to only one scenario; you need to run different performance test types to assess various aspects of your application.
Load Testing - the goal is to check how your application behaves under the anticipated load; in your case that would be simulating 1000 orders per hour.
Stress Testing - putting the application under test under the maximum anticipated load (in your case 3000 TPH). Another approach is gradually increasing the load until response times start exceeding acceptable thresholds or errors start occurring (whichever comes first), or up to 6000 TPH if you don't plan to scale up. This way you will be able to identify the bottleneck and determine which component fails first, which could be:
lack of hardware power
problems with database
inefficient algorithms used in your application
You can also consider executing a Soak Test - putting your application under prolonged load; this way you will be able to catch the majority of memory leaks.
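The "gradually increase the load until thresholds are exceeded" approach can be sketched as a loop. Everything here is a placeholder: measure_response_time stands in for a real load-test step, and the 4000 ms threshold and 500 TPH step size are arbitrary example values:

```python
ACCEPTABLE_MS = 4000   # hypothetical response-time threshold
STEP_TPH = 500         # load increment per step
MAX_TPH = 6000         # next-season target; stop here if nothing breaks first

def measure_response_time(tph):
    # Purely illustrative model: latency grows steeply with load.
    # In a real test this would be the measured result of a load step.
    return 1000 + (tph / 6000) ** 3 * 5000

tph = STEP_TPH
while tph <= MAX_TPH:
    rt = measure_response_time(tph)
    if rt > ACCEPTABLE_MS:
        print(f"Saturation near {tph} TPH (response time {rt:.0f} ms)")
        break
    tph += STEP_TPH
else:
    print(f"Handled {MAX_TPH} TPH within the {ACCEPTABLE_MS} ms threshold")
```

The point of stepping rather than jumping straight to the peak is that the last step that passed and the first step that failed bracket the saturation point.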
While working on a SharePoint app, I noticed that it takes more than 10 seconds to load the app the first time. So I was wondering: what ramp-up period would be idle for running 1000 users in JMeter?
Not sure about "idle"; do you mean the "ideal" ramp-up?
I believe the slow initial load can be explained by how IIS internally works:
https://social.technet.microsoft.com/Forums/ie/en-US/297fb51b-b7b4-4b7b-a898-f6c91efd994e/sharepoint-2013-first-load-takes-long-time?forum=sharepointadmin
Taking all of that information into account, we can predict that high response times due to the long initial load will occur for at most several minutes at the beginning of your test and will affect only the users started during this period.
Finding out the "ideal" ramp-up time can be a little bit tricky.
I would address the problem like this:
Add the Response Times Over Time (https://jmeter-plugins.org/wiki/ResponseTimesOverTime/) graph.
Set the ramp-up time to 10 seconds per user and, using the graph added above, find out when the response times will settle down. This will mean that the initial load of the application is completed. Don't forget to exclude this period from the report.
Now you should observe that the response times are slowly growing with the addition of new users.
After all of the users have been added, if everything went well, you can try lowering your ramp-up time to, say, 3600 seconds for all of the users.
If you see that the response times skyrocket and/or exceed the SLA, then either you are adding the users too fast, or the application is already saturated.
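The ramp-up arithmetic from the steps above, for 1000 users, works out as follows (these are the numbers used in this answer, not universal recommendations):

```python
USERS = 1000

# Diagnostic phase: a deliberately slow ramp of 10 s per user, so the
# application's initial-load period is easy to spot on the graph.
diagnostic_ramp_s = USERS * 10           # 10000 s total

# Later phase: a tighter ramp once the initial load is understood.
target_ramp_s = 3600                     # all 1000 users within one hour
start_interval = target_ramp_s / USERS   # seconds between user starts

print(f"diagnostic ramp: {diagnostic_ramp_s} s total")
print(f"target ramp: one new user every {start_interval} s")
```

In JMeter terms, both values go straight into the Thread Group's "Ramp-up period" field.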
Do we need to adjust the throughput reported by JMeter to find out the actual TPS of the system?
For example: I am getting 100 TPS for 250 concurrent users, and the test ran for 10 hours. Can I conclude that my software can handle 100 transactions per second, or do I need to make some adjustment to get the real value? I am asking because when the load starts, the system takes some time to perform at an adequate level (warm-up time). If so, how do I account for this? Please help me understand.
By default JMeter sends requests as fast as it can; the main factors affecting the TPS rate are:
number of threads (virtual users) - this you can define in Thread Group
your application response time - this is not something you can control
Ideally, when you increase the number of threads, the TPS should increase by the same factor; i.e. if you have 250 users and are getting 100 TPS, you should get 200 TPS for 500 users. If this is not the case, those 500 users are beyond the saturation point, and your application's bottleneck is somewhere between 250 and 500 users (if not earlier).
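That linearity check can be expressed as a small helper. The 250-user/100-TPS pair comes from the question; the two 500-user results are hypothetical illustrations:

```python
def scaling_efficiency(users1, tps1, users2, tps2):
    """Ratio of observed to ideal throughput gain when going from
    users1 to users2 (1.0 means perfectly linear scaling)."""
    ideal_tps2 = tps1 * users2 / users1
    return tps2 / ideal_tps2

# Perfectly linear: 500 users deliver 200 TPS -> not yet saturated.
print(scaling_efficiency(250, 100, 500, 200))

# Sub-linear: 500 users deliver only 120 TPS -> the bottleneck sits
# somewhere between 250 and 500 users.
print(scaling_efficiency(250, 100, 500, 120))
```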
With regards to the "warm-up" time: the recommended approach is to apply the load gradually. This way you allow your application to prepare for the increasing load, warm up its caches, let the JIT compiler/optimizer do their work, etc. Moreover, this way you can correlate the increasing load with increasing/decreasing throughput, response time, number of errors, etc., while releasing 250 users at once doesn't tell the full story.
The system warm-up period varies from one system to another. The warm-up period is when configurations are cached, libraries are initialized (e.g. Builder.init()), and other initial work happens that usually doesn't recur on subsequent calls. If you study the results of the load test, there is a slow period at the very beginning. For most systems it could be as small as 5 to 10 minutes, and such values may even be negligible if the test is as long as 10 hours. But then again, the average calculation can be affected if the results show extreme values at the start (it always depends on the jump from the initial warm-up period to normal operation).
As for the JMeter configuration, this thread may explain it: How to exclude warmup time from JMeter summary?
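The effect of excluding the warm-up period from the average can be illustrated with made-up per-minute response times (the numbers below are purely hypothetical):

```python
# Hypothetical average response time (ms) for each minute of a short test;
# the first few minutes show the warm-up effect.
samples = [900, 700, 400, 210, 200, 195, 205, 198, 202, 200]
WARMUP_MINUTES = 3

overall_avg = sum(samples) / len(samples)
steady = samples[WARMUP_MINUTES:]
steady_avg = sum(steady) / len(steady)

print(f"with warm-up: {overall_avg:.0f} ms, without: {steady_avg:.0f} ms")
```

Here the warm-up minutes alone pull the average up by well over 100 ms, which is why a short test should report steady-state figures separately. Over a 10-hour run the same warm-up would barely move the average.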
I am analyzing a web application and want to predict the maximum number of users the application can support. I have the following numbers from my load test execution:
1. Response Time
2. Throughput
3. CPU
I have the application use case SLA
Response Time - 4 Secs
CPU - 65%
When I execute a load test with 10 concurrent users (without think time) for a particular use case, the average response time reaches 3.5 seconds and CPU touches 50%. Next, I execute a load test with 20 concurrent users, and the response time reaches 6 seconds and CPU 70%, thus surpassing the SLA.
The application server configuration is 4 core 7 GB RAM.
Going by this data, does it suggest that the web application can support only 10 users at a time? Is there any formula or procedure that can tell me the maximum number of users the application can support?
TIA
"Concurrent users" is not a meaningful measurement, unless you also model "think time" and a couple of other things.
Think about the case of people reading books on a Kindle. An average reader will turn the page every 60 seconds, sending a little ping to a central server. If the system can support 10,000 of those pings per second, how many "concurrent users" is that? About 10,000 * 60, or 600,000. Now imagine that people read faster, turning pages every 30 seconds. The same system will only be able to support half as many "concurrent users". Now imagine a game like Halo online. Each user will be emitting multiple transactions / requests per second. In other words, user behavior matters a lot, and you can't control it. You can only model it.
So, for your application, you have to make a reasonable guess at the "think time" between requests, and add that to your benchmark. Only then will you start to approach a reasonable simulation. Other things to think about are session time, variability, time of day, etc.
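This is essentially Little's Law: concurrent users ≈ throughput × time between requests (with think time dominating in the Kindle example). Applying it to the numbers above:

```python
# Little's Law applied to the Kindle example: a system that can absorb
# 10,000 page-turn pings per second supports a number of concurrent
# readers proportional to how long each reader waits between pings.
pings_per_second = 10_000

concurrent = {}
for think_time_s in (60, 30):
    concurrent[think_time_s] = pings_per_second * think_time_s
    print(f"{think_time_s}s page turns -> "
          f"~{concurrent[think_time_s]:,} concurrent readers")
```

Halving the think time halves the supportable "concurrent users" with no change to the system at all, which is exactly why the number is meaningless without a behavior model.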
Chapter 4 of the "Mature Optimization Handbook" discusses a lot of these issues: http://carlos.bueno.org/optimization/mature-optimization.pdf
I am going to set up some load tests with JMeter.
However, I can't find a good benchmark on the net.
How good is good?
My site gets about 80,000 hits/day on average.
After mining one month of data from the access log, I found:
Average low traffic of about 1 request / sec
Average Medium traffic of 30 requests / sec
Average High traffic of 60 requests / sec
I came up with the following plan to test 8 kinds of pages, with their ideal respective average response times for a single page load:
Page 1 - 409ms
Page 2 - 730ms
Page 3 - 1412ms
Page 4 - 1811ms
Page 5 - 853ms
Page 6 - 657ms
Page 7 - 10ms
Page 8 - 180ms
Simulate average traffic scenario - 10 requests / second test
Simulated Users: 10 threads
Ramp Period: 1 sec
Duration: 30 min
Simulate medium traffic scenario - 30 requests / second test
Simulated Users: 30 threads
Ramp Period: 1 sec
Duration: 30 min
Simulate super high traffic scenario - 100 requests / second test
Simulated Users: 100 threads
Ramp Period: 1 sec
Duration: 30 min
Simulate an attack scenario - 350 requests / sec (based on MySQL max connections of 500)
Simulated Users: 100 threads
Ramp Period: 1 sec
Duration: 10 min
While running these test, I plan to monitor the following:
CPU Load average: (Webserver)
CPU Load average: (DB Server)
RAM Load average: (Webserver)
RAM Load average: (DB Server)
Average Response time:
To see what the memory and CPU utilisation is, and whether there's a need to increase RAM, CPU, etc. Also, I capped MySQL max connections at 500.
In all tests, I expect the response time to stay below a capped benchmark of 10 seconds.
How does this sound? I've no SLA to follow; this is just based on research into current web traffic. The thing is, I don't know what the threshold of the server is. I believe that with the hardware below, it should already exceed what's needed for hosting our pages.
Webserver: 8GB RAM, 4 cores (1 server, plus a cold backup server; no load balancer)
MySQL server: 4GB RAM, 2 cores.
We are planning to move to the cloud, so ideally I also need to find out through this load testing what kind of instance best fits our scenario.
Would really appreciate any good advice here.
Read about SLAs.
I recommend that you determine your website performance requirements first. By default you can use a "standard" response time limit of about 10 sec.
Next, find the user activity profile during the day, week, and month. Then you can choose base load values for each profile.
Find the most frequent and the most "heavy" user actions. Create a test scenario with these actions (remember the percentage ratio of the actions; don't give all actions the same frequency).
Limit the load with a throughput timer (set the needed value of hits per minute). Select the number of threads in the thread group experimentally (for HTTP load you don't need many of them).
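For reference, JMeter's Constant Throughput Timer takes its target in samples per minute, so the per-second rates from the question need converting. A quick sketch using the low/medium/high rates mined from the access log above:

```python
# Convert requests-per-second targets into the samples-per-minute unit
# that JMeter's Constant Throughput Timer expects.
traffic_levels = (("low", 1), ("medium", 30), ("high", 60))

targets = {}
for label, rps in traffic_levels:
    targets[label] = rps * 60
    print(f"{label}: {rps} req/s -> {targets[label]} samples/min")
```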
Add the JMeter plugins from google-code. Use the "number of threads per second" and "response time" graphs, plus the standard "summary report" listener. Don't use the standard graphs under high load; they are too slow. You can also save the data from each listener to a file (I strongly recommend this; select CSV format). The graphs show averaged points, so if you want a precise picture, build the graphs manually in Excel or Calc. Grab performance metrics from the server (CPU, memory, I/O, DB, network interface).
Set the ramp-up period to about 20-60 minutes and run the test for about 1-2 hours. If all goes well, then test with a 200% load profile or more to find the peak performance. Also run a test of about 1-2 days, so you'll see the server's stability.