What difference does it make if I add think time to my virtual users as opposed to letting them execute requests in a loop as fast as they can?

I have a requirement to test that a Public Website can serve a defined peak number of 400 page loads per second.
From what I read online, when testing web page performance, virtual users (threads) should be configured to pause and "think" on each page they visit, in order to simulate the behavior of a real live user before sending a new page load request.
I must use some remote load generator machines to generate this load, and I have a limit on how many virtual users I can run per load generator. This means that if I make each virtual user pause and "think" for x seconds on each page, that user will generate far less load than it would if it executed as fast as it could with no configured think time. I would therefore need more users, and implicitly more load generator machines, to achieve my desired "page loads per second", which would be more costly in the end.
If my only requirement is to prove that the server can serve 400 page loads per second, I would like to know what difference it really makes whether I add think times (and therefore use more virtual users) or not.
Why is "think time" generally considered something that should be added when testing web page performance?

A virtual user which is "idle" (doing nothing) has a minimal resource footprint (mainly the thread stack size), so I don't think you will need more machines.
A well-behaved load test must represent real-life usage of the application with 100% accuracy. If you're testing a website, each JMeter thread (virtual user) must mimic a real user using a real browser, with all related features like:
handling embedded resources (images, scripts, styles, fonts, sounds, etc.)
using caching properly
getting and sending back cookies
sending appropriate headers
processing AJAX requests like a browser does
The most straightforward example of the difference between 400 users without think times and 4,000 users with think times is the connection count: the 4,000 users will open 4,000 connections and keep them open, while the 400 users will open only 400 connections.
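One way to put numbers on the difference is Little's Law: concurrent users = throughput × (response time + think time). A minimal sketch in Java, assuming the 400 page-loads-per-second target from the question plus an illustrative 0.5 s response time and 5 s think time (both made-up figures):

```java
// Little's Law ties users, throughput and time-per-iteration together:
// concurrent users = throughput * (response time + think time)
public class LittlesLaw {
    public static void main(String[] args) {
        double targetThroughput = 400.0; // target page loads per second (from the question)
        double avgResponseTime = 0.5;    // seconds per page load - an assumed figure
        double thinkTime = 5.0;          // seconds of think time per page - an assumed figure

        double usersWithThink = targetThroughput * (avgResponseTime + thinkTime);
        double usersWithoutThink = targetThroughput * avgResponseTime;

        System.out.printf("Users needed with 5 s think time: %.0f%n", usersWithThink);    // 2200
        System.out.printf("Users needed with no think time:  %.0f%n", usersWithoutThink); // 200
    }
}
```

With think time, the same 400/s target needs roughly ten times as many (mostly idle) virtual users, which is exactly why the extra users are cheap in terms of load generator resources but very different in terms of open connections held against the server.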

Related

Understanding JMeter Thread Group / Throughput and INCLUDE

I have a few questions to clarify my understanding of how JMeter works.
a. The Thread Group determines the number of users, but it does not determine how many HTTP requests are generated per second? By default, I notice that every user sends HTTP requests at a rate of about 2 RPS.
b. If I want to change the RPS per user, then I need to use the Throughput Timer. But the Timer can only lower the request rate from 2 RPS to a lower number; it does not increase the RPS.
c. In order to increase the RPS, I need to add more threads.
d. Does this mean we are limited to 2 RPS per user? I see some websites have links to many other websites, so a page refresh would make many requests.
Is this the way JMeter works?
I have a load test which has 8 transactions (e.g. CRUD, ...). I intend to create an overall Test Plan and I want to use INCLUDE to add all 8 transactions. Do I just record the website and INCLUDE? What should I include, only the HTTP requests?
I'm also thinking of adding think time and variables to the 8 scripts before I INCLUDE them.
Do I add the Config Elements (e.g. CSV Data Set Config) in the 8 scripts or in the overall Test Plan?
Thanks.
By default each JMeter thread (virtual user) executes requests as fast as it can. If you want to slow JMeter down to mimic a real user which doesn't hammer the server non-stop and needs some time to "think" between operations - use Timers. More information: How do I Correlate the Number of (Concurrent) Users with Hits Per Second
If you want more RPS - add more threads (assuming that the system under test can give you more RPS)
You should INCLUDE everything which is related to your website (images, scripts, styles, fonts, sounds, etc.), but in the same manner as your browser does: don't record these requests, and instead configure JMeter to download embedded resources and use the HTTP Cache Manager so JMeter requests these resources just like a browser does. Any requests to "external" websites should be excluded (unless they're also developed and supported and in scope for testing).
That's a good approach; if you use a value more than once it makes sense to declare it via User Defined Variables so you can amend the value in a single place.
Add it according to your scenario; be aware of JMeter Scoping Rules.
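To make the threads-versus-RPS relationship concrete, here is a minimal sketch (plain Java arithmetic, not JMeter code) that converts a target RPS into the samples-per-minute value a Constant Throughput Timer expects and checks whether a given thread count can even reach that target; the 10 RPS, 0.4 s response time and 5 threads are illustrative assumptions:

```java
// Converts a requests-per-second target into the "samples per minute" value
// a Constant Throughput Timer expects, and checks whether the thread count
// is enough to reach it - the timer can only pace threads down, never up.
public class ThroughputPlanning {
    public static void main(String[] args) {
        double targetRps = 10.0;          // desired requests per second (assumed)
        double avgResponseSec = 0.4;      // measured average response time (assumed)
        int threads = 5;                  // threads in the Thread Group (assumed)

        double timerValue = targetRps * 60.0;           // Constant Throughput Timer unit is samples/minute
        double maxRpsPerThread = 1.0 / avgResponseSec;  // a single thread can't go faster than this
        double maxRps = threads * maxRpsPerThread;

        System.out.printf("Constant Throughput Timer value: %.0f samples/min%n", timerValue);
        System.out.printf("Upper bound with %d threads: %.1f RPS%n", threads, maxRps);
        System.out.println(maxRps >= targetRps
                ? "Thread count is sufficient; the timer will pace the threads down to the target."
                : "Add more threads: the timer can only slow threads, not speed them up.");
    }
}
```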

Why is the JMeter result different from the user experience result?

We are currently conducting performance tests on both of the web apps that we have; one runs within a private network and the other is accessible to everyone. For both apps, a single page load of the landing or initial page only takes between 2-3 seconds from a user's point of view, but when we use BlazeMeter and JMeter, the results are between 15-20 seconds. Am I missing something? The 15-20 second result comes from the Load time/Sample time in JMeter, and from the Elapsed column when extracted to .csv. Please help, as I'm stuck.
We have tried conducting the tests on multiple PCs within the office premises, along with a PC accessed remotely on another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user only.
Where a delta exists, it is certain to mean that two different things are being timed. It would help to understand, on your front end, whether you are timing to a standard metric such as W3C domComplete, Time to Interactive, First Contentful Paint, or some other point, and then compare where this comes into play in the drill-down on Chrome's Performance tab. Odds are that a lot is occurring that is not visible in the browser but is being captured by JMeter.
You might also look for other threads on here about how JMeter operates compared to a "real browser". There are differences which could come into play and affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components where you do not have permission to test their servers.
I can think of 2 possible causes:
Clear your browser history, especially the browser cache. It might be the case that you're getting HTTP status 304 for all requests in the browser because responses are being served from the browser cache and no actual requests are being made, while JMeter always uses a "clean" session.
Pay attention to the Connect Time and Latency metrics, as it might be the case that the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
In general given:
the same machine
clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page
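If you want to see where the 15-20 seconds actually go, the per-sample breakdown is already in the .csv results. Below is a minimal sketch that reads a results file and prints connect time, latency and elapsed time per sample; it assumes the default CSV header containing "label", "elapsed", "Latency" and "Connect" columns (the exact columns depend on your jmeter.save.saveservice settings) and a hypothetical results.csv file name:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.List;

// Prints the timing breakdown per sample from a JMeter CSV results file.
public class ResultBreakdown {
    public static void main(String[] args) throws Exception {
        try (BufferedReader reader = new BufferedReader(new FileReader("results.csv"))) {
            // locate the columns by name from the header row
            List<String> header = Arrays.asList(reader.readLine().split(","));
            int label = header.indexOf("label");
            int elapsed = header.indexOf("elapsed");
            int latency = header.indexOf("Latency");
            int connect = header.indexOf("Connect");

            String line;
            while ((line = reader.readLine()) != null) {
                // naive comma split: fine as long as no field contains an embedded comma
                String[] f = line.split(",");
                System.out.printf("%-30s elapsed=%s ms  latency=%s ms  connect=%s ms%n",
                        f[label], f[elapsed], f[latency], f[connect]);
            }
        }
    }
}
```

Comparing these columns against the browser's timings for the same page usually shows immediately whether the gap comes from connection setup, network latency, or additional requests the browser never makes.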

How many requests are made for 100 concurrent users on a 5-page application in a JMeter load test?

I am running JMeter tests with 100 to 1k users successfully,
but I suspect the response time is higher than expected.
My test plan includes: GET login and POST login, GET test page, POST test page, GET search page, POST search page...
in total 5 GET and 5 POST forms.
For the same number of users, if I reduce the number of pages, the overall response time decreases,
so should I run the tests for the pages separately?
Should I reduce the number of users to get realistic numbers (5 pages * 20 users = 100 concurrent users) for the performance test?
Or should I be using a distributed setup?
What is the best practice?
Current setup: i5, 8 GB RAM, one Windows machine with JMeter 3.1
Your load testing should be realistic, otherwise it does not make sense. You need to simulate the anticipated user behaviour as closely as possible; only this way will you get the real-life picture.
So you need to implement your test scenario to match the way your site will be used in the real world. If it is live already, you can use the Access Log Sampler to replay the traffic. If it is not, you can think about "user groups", like:
50% of not authenticated users will be browsing the site
20% of authenticated users will be browsing the site
5% of users will be in login process
3% of users will search for something
etc.
You can use different Thread Groups to represent different groups of users and Throughput Controller to control the frequency of samplers execution inside the Thread Group. See Running JMeter Samplers with Defined Percentage Probability article for more information on how to distribute the load in a realistic way.
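As a rough illustration of what such a percentage split produces over many iterations, here is a minimal sketch (plain Java, not JMeter configuration) of the weighted selection that Throughput Controllers in "Percent Executions" mode effectively give you; the percentages mirror the example groups above, with the remainder bucketed as "other":

```java
import java.util.Random;

// Simulates a percentage-based traffic model over many iterations and
// reports how closely the observed mix matches the intended split.
public class TrafficModelSketch {
    public static void main(String[] args) {
        String[] actions = {"anonymous browsing", "authenticated browsing", "login", "search", "other"};
        double[] percent = {50, 20, 5, 3, 22};   // must sum to 100; last bucket covers "etc."
        int iterations = 10_000;
        int[] counts = new int[actions.length];
        Random rnd = new Random();

        for (int i = 0; i < iterations; i++) {
            double roll = rnd.nextDouble() * 100;
            double cumulative = 0;
            for (int a = 0; a < actions.length; a++) {
                cumulative += percent[a];
                if (roll < cumulative) { counts[a]++; break; }
            }
        }
        for (int a = 0; a < actions.length; a++) {
            System.out.printf("%-24s target %4.0f%%  observed %5.1f%%%n",
                    actions[a], percent[a], 100.0 * counts[a] / iterations);
        }
    }
}
```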
There are a number of factors that will affect the overall response times; the number of users, pages, etc. are just a few of them. Do you have any specific requirements for the performance test? If you are not sure of the non-functional requirements, then I think all your questions will be clarified in these posts:
http://www.testautomationguru.com/jmeter-performance-testing-application-of-littles-law-to-workload-models/
http://www.testautomationguru.com/jmeter-tips-tricks-for-beginners/
I think you could be running into JMeter issues. What JVM heap size and GC algorithm are you using? How are you measuring the performance of each page? If you are using a Transaction Controller to measure the performance, any slowness in JMeter will affect your Transaction Controller values.
Have you correlated with the application logs to really understand whether the application itself is slower when more pages are in the test? If the application response time is constant, I think JMeter is the culprit here. Move all your measurements to the sampler level instead of Transaction Controllers and compare the results.
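If you are not sure which heap size and GC algorithm your JMeter JVM is actually running with, one simple check is to ask the JVM itself. The sketch below uses the standard Java management beans; run it with the same JVM options you give JMeter (for example via the HEAP variable in the startup script) to confirm the settings took effect:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Prints the maximum heap and the active garbage collectors of the current JVM.
public class JvmInfo {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("Heap max: %d MB%n", heap.getMax() / (1024 * 1024));
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC: " + gc.getName());
        }
    }
}
```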

Recording an application using a template

I have recorded my web application using a template and just want to confirm that the load test result I am getting is correct. Does simply increasing the number of users give proper results? Is it enough for load testing a web application?
First of all you need to ensure that your test does what it is supposed to be doing. Recorded tests can rarely be successfully replayed, so normally you should be acting as follows:
Add View Results Tree listener and run your test with 1 user. Inspect request and response details to verify your test steps.
Perform correlation and parametrization if required.
Correlation: the process of identifying and handling any dynamic parameters. Most often people use Regular Expression Extractor for it.
Parametrization: the process of making your test data driven. For example, if your application assumes multiple authenticated users you need to store the credentials somewhere. Most commonly used test element for this is CSV Data Set Config
Make your test realistic. Virtual users simulated by JMeter need to represent real users using real browsers as closely as possible, with all the related stuff: cookies, headers, cache, etc. See How To Make JMeter Behave More Like A Real Browser to learn how to configure JMeter to act closer to real users. Real users also need some time to "think" between operations, so make sure you are using Timers to simulate this behaviour as well.
Only after you apply the above points should you add more virtual users. Again, run your test with 2-3 users and iterations to ensure your test functions as designed. Once you are happy with it you can increase the load, but don't overwhelm your server: increase the load gradually and check the impact of the increasing load on your application, i.e. how response time, throughput and the number of errors change as you increase the load. The same applies to decreasing the load: don't switch it off at once, decrease the number of virtual users gradually.
Building a Web Test Plan
Building an Advanced Web Test Plan
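As a concrete illustration of the correlation step described above, here is a minimal sketch (plain Java, not a JMeter test element) of what a Regular Expression Extractor does: pull a dynamic value out of a previous response so it can be reused in the next request. The csrf_token field name and the regular expression are hypothetical examples:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts a hypothetical dynamic token from a previous response body,
// the way a Regular Expression Extractor would during correlation.
public class CorrelationSketch {
    public static void main(String[] args) {
        String responseBody =
            "<input type=\"hidden\" name=\"csrf_token\" value=\"a1b2c3d4\"/>";
        Pattern p = Pattern.compile("name=\"csrf_token\" value=\"([^\"]+)\"");
        Matcher m = p.matcher(responseBody);
        if (m.find()) {
            String token = m.group(1);   // in JMeter this would become ${csrf_token}
            System.out.println("Extracted token: " + token);
        } else {
            System.out.println("Token not found - correlation would fail at this step");
        }
    }
}
```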

Does testing a website through JMeter actually put load on the main server?

I am using JMeter to test my web server https://buyandbrag.in .
I have tested it with 100 users, but the main server does not show whether it is under load or not.
I want to know whether the test is really putting pressure on the main server (a cloud server I am using), or just using the resources of the client machine where the tool is installed.
Yes, as mentioned, you should be monitoring both servers to see how they handle the load. The simplest way to do this is with top (if your server OS is *NIX); you should also be watching the network activity, i.e. bandwidth and connection status (TIME_WAIT, CLOSE_WAIT and so on).
Also, if you're using Apache, keep an eye on the logs; you should see the requests being logged there.
Good luck with the tests
I want to know "how many users my website can handele ?",when I tested with 50 threads ,the cpu usage of my server increased but not the connections log(It showed just 2 connections).also the bandwidth usage is not that much
Firstly, what connections are you referring to? Apache, DB, etc.?
Secondly, if you want to see how many users your current setup can handle, you need to create a profile or traffic model of what an average user will do on your site.
For example:
Say 90% of the time they will search for something
5% of the time they will purchase x
5% of the time they login.
Once you have your "traffic model" defined, implement it in JMeter, then start increasing your load in increments: run your load test for 10 minutes with x users, after 10 minutes increment that number, and so on until you find your breaking point.
If you graph your responses you should see two main things:
1) The optimum response time / number of users before the service degrades
2) The tipping point, i.e. at what point you start returning 503s, etc.
Now you'll have enough data to scale your site or to start making performance improvements from a code point of view.
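A minimal sketch of the incremental ramp-up described above, printing a stepped schedule you could mirror in your test plan; the start, increment, ceiling and hold duration are all illustrative assumptions:

```java
// Prints a stepped load schedule: hold each step for a fixed duration,
// then add more users until a chosen ceiling is reached.
public class SteppingLoadPlan {
    public static void main(String[] args) {
        int startUsers = 10;    // assumed starting point
        int increment = 10;     // users added per step
        int maxUsers = 100;     // assumed ceiling for this run
        int holdMinutes = 10;   // duration of each step, as in the answer above

        int elapsed = 0;
        for (int users = startUsers; users <= maxUsers; users += increment) {
            System.out.printf("t=%3d min: run with %3d users for %d minutes%n",
                    elapsed, users, holdMinutes);
            elapsed += holdMinutes;
        }
        System.out.println("Watch response times and error rates at each step to find the tipping point.");
    }
}
```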

Resources