JMeter concurrent users

I've realised there seems to be a lot of confusion around Number of Threads (users). My question is: what exactly does the word 'users' refer to in JMeter?
According to the documentation:
http://jmeter.apache.org/usermanual/test_plan.html#thread_group
Multiple threads are used to simulate concurrent connections to your server application.
To me this seems to refer to the number of requests made to a server, not to an individual user who could make any number of requests.
Can someone clarify whether 'users' refers to individual users or to the number of requests made? For example, how would this be simulated in JMeter:
1 user requests a webpage which consists of:
index.html
styles.css
bg.jpg
Now is this considered 1 user or 3 users (1 per resource requested)?

The mapping should be very simple:
1 thread == 1 virtual user == 1 real user
The only thing you need is to design your test keeping in mind that JMeter's virtual user should act like a real user: fetching embedded resources such as styles.css and bg.jpg, and handling cookies, headers, cache, think times, etc.
In regards to web testing and correctly handling CSS and images, real browsers act as follows:
Request to main page, i.e. index.html
Parallel requests to all entities included in the main page (images, scripts, styles)
So this is still one user. See Web Testing with JMeter: How To Properly Handle Embedded Resources in HTML Responses article for more detailed explanation and recommendations.
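As a toy illustration of the point above, here is a minimal Python sketch of one virtual user fetching a page plus its embedded resources in parallel (the `fetch` function and URLs are placeholders, not a real HTTP client):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for an HTTP GET; a real test would hit the server here.
    return f"content of {url}"

def load_page_as_one_user(main_page, embedded_resources):
    """Simulate ONE user/browser: one request for the main page,
    then parallel requests for every embedded resource."""
    responses = [fetch(main_page)]  # first request: index.html
    # A pool of 3-5 threads mimics a browser's parallel resource fetching
    with ThreadPoolExecutor(max_workers=5) as pool:
        responses += list(pool.map(fetch, embedded_resources))
    return responses

# 3 HTTP requests in total, but still just 1 user
responses = load_page_as_one_user("index.html", ["styles.css", "bg.jpg"])
```

This is exactly the behaviour JMeter reproduces when "Retrieve All Embedded Resources" and a concurrent pool are enabled on the sampler: three requests, one user.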

Related

How to determine the number of users to use in JMeter performance testing?

This can actually be a complex question for me if the number isn't given by my product owner as a direct requirement.
JMeter is basically an API performance testing tool. I've seen so many JMeter scripts that include only the important APIs needed for the flow under test; they don't consider any purely front-end (UI) user actions.
A common reply to my question in articles/tutorials is: estimate how many concurrent users you normally have on your website.
The problem with this approach is that a user who is purely browsing your website does not cause any of the 'load' that JMeter tries to simulate.
Take a form-submission page, for example: every second a user spends browsing the page content or filling in the form is pure front-end (UI) activity and does not generate any load. There may be 10 concurrent users visiting my webpage, but only 2 are 'submitting'. Should I use 10 or 2 in this scenario? The JMeter script is intended to measure the performance of the 'submit form' API only.
A more sophisticated reply to my question is the 'load testing calculator' described at https://www.webperformance.com/library/tutorials/CalculateNumberOfLoadtestUsers.
It calculates the concurrent number of users from 'visit rate (visits/hour)' and 'average visit length (minutes/visit)'. This is more precise than the first reply of just 'estimate how many concurrent users are using your system'.
However, it has the same issue as the first reply: it does not define 'average visit length (minutes/visit)' from an API perspective. The same argument I presented for the form-submission website applies here too: the 'visit' time a user spends browsing the page and filling in the form does not count; only the time spent in the 'submit form' API does.
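For what it's worth, the calculator's formula is just Little's Law (concurrency = arrival rate × time in system), so both perspectives can be computed directly. A sketch with made-up numbers:

```python
def concurrent_users(visits_per_hour, avg_visit_minutes):
    """Little's Law: concurrency = arrival rate * time in system.
    (visits/hour) * (minutes/visit) / (60 minutes/hour)."""
    return visits_per_hour * avg_visit_minutes / 60.0

# Whole-visit perspective: 120 visits/hour, 5 minutes per visit
whole_visit = concurrent_users(120, 5)      # 10.0 concurrent users

# API-only perspective: of those 5 minutes, suppose only 6 seconds
# are actually spent inside the 'submit form' API call
api_only = concurrent_users(120, 6 / 60)    # 0.2 concurrent API calls
```

The gap between the two numbers is exactly the gap described above: which "visit length" you plug in determines whether you are sizing for browsing users or for API load.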
So what's your way of determining the number of users to use in Jmeter test?
'JMeter is basically an API performance testing tool' - this is wrong.
Think of each JMeter thread (virtual user) as a real user with all its attributes, like:
using a real browser
needing some time to "think" between operations
Once you implement your JMeter test so that each virtual user represents a real user with 100% accuracy, you will be able to tell how many users your website can handle without issues by looking at, for example, the Active Threads Over Time chart.
If you need to know how many requests per second X virtual users are making, check out the Server Hits Per Second chart.

JMeter and page views

I'm trying to use data from Google Analytics for an existing website to load test a new website. In our busiest month, over one hour we had 8361 page requests. Should I get a list of all the URLs for these page requests and feed them to JMeter? Would that be a sensible approach? I'm hoping to compare the page response times against the existing website.
If you need to do this very quickly, say you have less than an hour for scripting, then this is a reasonable way to check that there are no major differences between the 2 instances.
If you would like to go deeper:
8361 requests per hour == ~2.3 requests per second, so it doesn't make much sense to replicate this load pattern as I'm more than sure your application will survive such a modest load.
Performance testing is not only about hitting URLs from a list and measuring response times; normally the main questions that need to be answered are:
how many concurrent users my application can support while providing acceptable response times (at this point you may also be interested in requests/second)
what happens when the load exceeds the threshold: what types of errors start occurring and what their impact is
does the application recover when the load gets back to normal
what is the bottleneck (e.g. lack of RAM, slow DB queries, low network bandwidth on the server/router, whatever)
So the options are:
If you need a "quick and dirty" solution, you can use the list of URLs from Google Analytics with, for example, the CSV Data Set Config or Access Log Sampler, or parse your application logs to replay production traffic with JMeter.
A better approach would be checking Google Analytics to identify which groups of users you have and their behavioural patterns, e.g. X % of unauthenticated users are browsing the site, Y % of authenticated users are searching, Z % of users are doing checkout, etc. After that you need to properly simulate each of these groups using separate JMeter Thread Groups, keeping in mind cookies, headers, cache, think times, etc. Once you have this form of test, gradually and proportionally increase the number of virtual users and monitor how response times correlate with the number of virtual users until you hit some form of bottleneck.
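Splitting a target virtual-user count across Thread Groups according to those behavioural percentages is simple arithmetic. A sketch (group names and percentages are hypothetical):

```python
def thread_group_sizes(total_users, percentages):
    """Split a total virtual-user count across JMeter Thread Groups
    according to behavioural percentages (must sum to 100).
    Rounds down, then assigns leftover users to the largest group."""
    sizes = {name: total_users * pct // 100 for name, pct in percentages.items()}
    largest = max(percentages, key=percentages.get)
    sizes[largest] += total_users - sum(sizes.values())
    return sizes

groups = thread_group_sizes(250, {"browsing": 70, "searching": 20, "checkout": 10})
# {'browsing': 175, 'searching': 50, 'checkout': 25}
```

Each resulting number becomes the "Number of Threads (users)" value of one Thread Group; scaling the total up proportionally preserves the traffic mix.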
The "sensible approach" would be to know the profile, the pattern, of your load.
For that, it's excellent that you already have this data.
Yes, you can feed it in as is, but that would be the quick & dirty approach - getting the data analysed, distilling patterns out of it, and applying them to your test plan seems smarter.

Is there a way to keep ajax calls from firing off seemingly sequentially in web2py?

I'm developing an SPA and find myself needing to fire off several (5-10+) ajax calls when loading some sections. With web2py, it seems that many of them are waiting until others are done or near done to get any data returned.
Here's an example of some of Chrome's timeline output, where green signifies time spent waiting, gray time stalled, transparent time queued, and blue time actually receiving the content.
These are all requests that go through web2py controllers, and most just do a simple operation (usually a database query). Anything that accesses a static resource seems to have no trouble being processed quickly.
For the record, I'm using sessions in cookies, since I did read about how file-based sessions force web2py into similar behavior. I'm also calling session.forget() at the top of any controller that doesn't modify the session.
I know that I can and I intend to optimize this by reducing the number of ajax calls, but I find this behavior strange and undesirable regardless. Is there anything else that can be done to improve the situation?
If you are using cookie-based sessions, then requests are not serialized. However, note that browsers limit the number of concurrent connections to the same host. Looking at the timeline output, it does look like groups of requests are indeed made concurrently, but Chrome will not make all 21 requests concurrently (it typically allows about 6 connections per host over HTTP/1.1).
If you can't reduce the number of requests but must make them all concurrently, you could look into domain sharding or configuring your web server to use HTTP/2.
As an aside, in web2py, if you are using file based sessions and want to unlock the session file within a given request in order to prevent serialization of requests, you must use session.forget(response) rather than just session.forget() (the latter prevents the session from being saved even if it has been changed, but it does not immediately unlock the file). In any case, there is no session file to unlock if you are using cookie based sessions.
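The per-host connection cap can be illustrated with a toy asyncio model (this is not web2py code; the cap of 6 mirrors Chrome's usual HTTP/1.1 per-host limit):

```python
import asyncio

PER_HOST_LIMIT = 6  # typical browser HTTP/1.1 connection cap per host

async def ajax_call(sem, in_flight, peaks):
    async with sem:                 # wait for a free "connection" slot
        in_flight[0] += 1
        peaks.append(in_flight[0])
        await asyncio.sleep(0.01)   # stand-in for the server round trip
        in_flight[0] -= 1

async def fire_all(n):
    sem = asyncio.Semaphore(PER_HOST_LIMIT)
    in_flight, peaks = [0], []
    await asyncio.gather(*(ajax_call(sem, in_flight, peaks) for _ in range(n)))
    return max(peaks)

# 21 "AJAX calls" fired at once, but never more than 6 in flight
peak_concurrency = asyncio.run(fire_all(21))
```

That queuing is what the timeline waterfall shows: calls beyond the cap sit in the browser's queue (transparent/gray) until a connection frees up, regardless of how fast the server is.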

Should requests contain 'unnecessary' parameters that are sent when manually browsing the application?

I'm currently testing an ASP.NET application. I have recorded all the steps I need, and I have noticed that if I remove some of the parameters I'm sending with the request, the scripts still work and the desired outcome still happens. I couldn't find any difference in response time with or without them, so I was wondering: can I remove the parameters that aren't needed, and would this impact performance in any way? I understand that the most realistic way of executing the scripts is to do it like a normal user does (send everything that is sent during normal usage), but removing them would really improve the readability of my scripts. Any ideas?
Thanks in advance. Here is a picture showing some parameters I can remove while the scripts still work: this is from a document management system, and I'm performing a step that doesn't route the document as the parameters say, yet normal usage records them.
Although it may be something very trivial, like pre-populating the date and time in a calendar in the user's time zone, I believe you shouldn't omit any request parameters.
I strongly believe that load testing should mimic a real user as closely as possible, so if it is not a big deal to send these extra parameters and perform their correlation, I would leave them in.
A few other tips:
Embedded resources (scripts, styles, images). Real browsers download these entities, so:
Make sure you have the "Retrieve All Embedded Resources" box checked
Make sure "Use concurrent pool" is enabled with a size of 3-5 threads
Filter out any "external" content via the "URLs must match" input
Well-behaved browsers download embedded resources but do it only once. On subsequent requests they're being returned from browser's cache. Add HTTP Cache Manager to your Test Plan to simulate browser cache.
Add HTTP Cookie Manager to represent browser cookies and deal with cookie-based authentication.
See the How To Make JMeter Behave More Like A Real Browser article for the above tips explained in detail, in case you want to dive deeper.
Less data to send normally means faster response times.
Like you said, it's more realistic to test with all the data from the recorded case, but if these parameters really don't impact your results and measured times, you can remove them for better readability.
Sometimes JMeter records unnecessary parameters because they are only needed for browser compatibility.

Requests/min and Android Development

Currently my app easily reaches 2-5 requests/min with 5 users. From my understanding of Parse this is rather high.
To give an example of what we do is:
When a user logs on and refreshes, the list is filled with all relevant information from the database (query that specific user and gather all of their event/group data from the server).
When a user creates an event, create all of the data and pass it to the server.
From my understanding, a request is only counted when you make a network call. In this case, I see that when a user refreshes, that is 1 network call, and when a user creates an event, that is 1 network call. Is there something I am misreading about request counts?
Thanks!
30 requests per second is the (current) baseline limit (1800 per minute).
1 network request could translate into multiple counted requests for a number of reasons (batch processing, calling a cloud function, anything which has a save hook, ...). Depending on the API you use, you could be requesting a batched save of multiple objects, for instance.
If all of your users are continually averaging 1 request per second then you have a problem. If your users each have a short burst of 5 per second followed by a long period of no requests then you're fine.
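The difference between sustained and bursty load is just averaging over the window. A back-of-the-envelope sketch (user counts and burst sizes are illustrative):

```python
REQUEST_LIMIT_PER_SEC = 30  # the baseline limit quoted above

def sustained_rate(users, requests_per_user, window_seconds):
    """Average requests/second when each user makes `requests_per_user`
    requests over a window of `window_seconds`."""
    return users * requests_per_user / window_seconds

# Continuous load: 100 users each averaging 1 request/second -> over the limit
continuous = sustained_rate(100, 60, 60)   # 100.0 req/s

# Bursty load: 100 users each firing a burst of 5 requests, then a
# quiet minute -> well under the limit on average
bursty = sustained_rate(100, 5, 60)        # ~8.3 req/s
```

This is why a refresh (1 call) plus an event creation (1 call) per user rarely matters: what counts is the aggregate rate across all users over time, not the occasional spike.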