Performance of webpage: what does "gap" measure?

I am trying to increase performance of my website.
Looking at the IE Network tab, I see:
wait: < 1 ms
start: 31 ms
request: 390 ms
response 31 ms
gap 472 ms
I'm especially confused about the gap. What's going on here? Is this the actual time to render the page once everything has been received? It's hard to improve performance when I don't know what each time represents.
MSDN says:
Gap: The offset value that is taken when the response has been received. The duration is the time between that start time and when the end of the last request is associated with the original HTTP request.
That does not help me at all.

It's about as clear as mud, but what it means is that the end of that particular request occurred 472 ms before the page was considered loaded. This is usually because there are resources loaded after that one which take up the remaining time.
A simplification to illustrate it: say I have a page that loads in 5 ms and then loads four resources sequentially, each taking 5 ms. The gap for the initial page request will be 4 x 5 ms = 20 ms, the next request will have a gap of 15 ms, the next 10 ms, and so on. I'm not sure how useful a metric it is, though...
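If it helps to see the arithmetic, here is a minimal sketch of that simplified model (not IE's actual algorithm, just the sequential-loading assumption above): each request's gap is taken as the time between the end of that request and the moment the last resource finishes.

```java
// Minimal sketch of the simplified model described above:
// gap = (time the last resource finishes) - (time this request finishes),
// with the page and its resources loading strictly one after another.
public class GapSketch {
    public static void main(String[] args) {
        // page request takes 5 ms, then four resources of 5 ms each, loaded back to back
        int[] durations = {5, 5, 5, 5, 5};
        int[] end = new int[durations.length];
        int t = 0;
        for (int i = 0; i < durations.length; i++) {
            t += durations[i];
            end[i] = t;                          // request i finishes at time t
        }
        int pageLoaded = end[end.length - 1];    // 25 ms in this example
        for (int i = 0; i < end.length; i++) {
            System.out.printf("request %d: gap = %d ms%n", i, pageLoaded - end[i]);
        }
        // prints gaps of 20, 15, 10, 5, 0 ms - matching the 5 x 4 = 20 ms figure above
    }
}
```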

Related

Does the Constant Timer added in my HTTP Request affect the results in the Summary Report?

I have an HTTP Request in my Thread Group that takes around 20 to 30 seconds to complete with a single user, so when I added 50 users I sometimes get a 500/Internal Server Error or a 503/Server Has Been Shut Down.
I want to add a Constant Timer of 40 seconds (in milliseconds) under the HTTP Request so that the application has some time to process the requests. Am I going about this the right way?
If I add the Constant Timer, will it also be counted in the Summary Report?
I need JMeter to give the API (my application) enough time to complete the process (it needs at least 30 seconds), and I want to know whether or not this affects my Summary Report.
Pre-Processors, Post-Processors and Timers are not counted in the Elapsed time, so response time will not be impacted.
However, Throughput (the number of requests over the test duration) will be lower.
See JMeter Glossary for more information on the above metrics.
With regards to "right way" - real users don't "hammer" application non-stop, they need some time to "think" between operations so if you're simulating a real user you should have non-zero think time, however 40 seconds it kind of too much for me. Take a look at How to make JMeter behave more like a real browser article for more tips on properly configuring your JMeter test.

Need help on response time

Need help with a JMeter response result (from the attached image).
My scenario: I am calculating the Min/Max/Average response time of an API that creates a user account.
1. Log in to the site
2. Create a user account using an API request (creating 100 user accounts via the API)
3. Log out
Observations:
Total elapsed time is 32 minutes (shown in the image).
Response time for 100 users is 90852.
I need to understand how the response time units are measured here.
Does 90852 milliseconds mean approximately 90 seconds?
So is a single user account created in 90 seconds by the API?
Please explain how the response time here relates to the total elapsed time.
Thanks :)
On average, creating a user took your API 908 ms (the entry with 100 samples whose label ends with /api/users).
Since the other line (whose transaction name is not in the screenshot) has a sample count of 1 and a response time resembling 100 × 908 ms, I would guess that you have a Transaction Controller holding the Loop Controller.
The same hierarchy that you use to organize your test plan also applies to transaction controllers and samplers. So if you group several samplers - and/or transaction controllers - under a parent transaction controller, that parent transaction controller will have the combined response time of all its children.
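As a small sketch of that roll-up (using the 908 ms average from this answer; the note about per-iteration overhead is a guess), the parent Transaction Controller's time is roughly the sum of its sequential children:

```java
// Sketch of the roll-up described above: a parent Transaction Controller's
// elapsed time is roughly the sum of its children's elapsed times when the
// children run sequentially inside it.
public class TransactionRollupSketch {
    public static void main(String[] args) {
        int samples = 100;
        double avgChildMs = 908;                 // average from the /api/users entry
        double parentMs = samples * avgChildMs;  // 90800 ms
        System.out.printf("Expected parent transaction time: ~%.0f ms%n", parentMs);
        // close to the observed 90852 ms; the small difference would be
        // per-iteration overhead (e.g. the Loop Controller and bookkeeping)
    }
}
```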
Response time for 100 users is 90852. - No, only for 1 user. Looking at your image, it appears that only 1 sample was collected during the 32 minutes, so this response time is for that 1 sample, not for all 100 users. JMeter only shows you completed responses. Assuming you have a thread group of 100 users, the rest didn't complete / were still waiting for the API to respond.
Does 90852 milliseconds mean approximately 90 seconds? - Yes. In your test you seem to be using a Once Only Controller for login and authentication, and everything else seems to run sequentially. So if you are load testing and have a slow API response, you won't be able to measure the throughput of the rest of the APIs correctly, because the slowest API will hold up the thread for a long time.
Hope this helps.
It is hard to provide comprehensive analysis without seeing your Test Plan.
When it comes to your questions:
Total elapsed time is 32 minutes (shown in the image).
This looks a little high to me: given that you create 100 user accounts and the average response time is 908 milliseconds, I would expect your test to finish in about 90.8 seconds, i.e. roughly 1.5 minutes.
Does 90852 milliseconds mean approximately 90 seconds?
It rather looks like the sum of all 100 response times; most probably you got it from a Transaction Controller.
Average response time is basically the arithmetic mean, i.e. the sum of all response times divided by their count.
First of all, you need to understand why your test takes that long.
You seem to be creating 100 user accounts using 1 thread (virtual user) in a loop; you might want to consider doing it with multiple threads instead.
You should use the JMeter GUI only for test development and/or debugging; when it comes to test execution, run your JMeter tests in command-line non-GUI mode, like:
jmeter -n -t test.jmx -l result.jtl
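If you want to cross-check the average yourself, here is a minimal sketch that computes the arithmetic mean from the result.jtl written by the command above, assuming the default CSV .jtl format with a header row that contains an "elapsed" column:

```java
// Minimal sketch: average (arithmetic mean) response time from a CSV results
// file, assuming the default .jtl layout with a header row and an "elapsed"
// column. The naive comma split is fine here because "elapsed" is one of the
// leading columns, before any free-text fields.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class AverageFromJtl {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("result.jtl"));
        String[] header = lines.get(0).split(",");
        int elapsedIdx = Arrays.asList(header).indexOf("elapsed");

        long sum = 0;
        int count = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] fields = line.split(",");
            sum += Long.parseLong(fields[elapsedIdx]);   // per-sample response time in ms
            count++;
        }
        System.out.printf("Average response time: %.1f ms over %d samples%n",
                (double) sum / count, count);
    }
}
```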

JMeter Test Plan Validation

I am creating a JMeter test plan and need some validation to verify I'm going about it the right way.
I have the following GA data for our busiest hour.
Hour: 10
Average session duration: 00:02:56
Avg. Page Load Time (sec): 1.57
Sessions: 2441
Page Views: 8361
Number of threads (users):
I've calculated this using the following formula:
2441 (Hourly Sessions) x 176 (Average Session Duration (in seconds)) / 3600
Which gives me 119.
1) Is this the correct approach?
Getting average page load time
I'm attempting to benchmark against the average page load time as reported by GA, so I have currently created the following test plan:
Thread Group:
- HTTP Request (Main Request)
- Aggregate graph
1) This will make the main request 119 times; should I add more pages so that the requests total 8361, as per the page views from GA?
2) I'm unclear about how I should get the test plan to run over an hour, since the GA data covers an hour; currently the 119 requests are executed within a few minutes. Or is it even necessary to run for an hour to get a rough idea of capacity?
3) Is it correct to use the average response time from the aggregate graph and compare that against the Avg. Page Load Time from GA?
1.1) Seems like it - but only if you stick to mimicking the way the actual "average user" interacts with your service: perform some chain of requests (let's call it a session) over 176 seconds.
Then, yes: if, inside one thread, you stretch your chain of requests across those 176 seconds, 1 thread can serve ~20.5 sessions per hour.
That turns into ~119 threads to meet the desired ~2440 sessions per hour.
The other approach would be to stick to Page views (8361).
That's if maintaining the "session" and particular request sequence doesn't matter, while load does.
Then it comes to ~2.3 rps flat.
Since the response time is expected to be around 1.5 seconds, you would need at least 3 threads to keep that pace; more would be better, to have some room to stretch.
But you won't need a lot of them, because they would be blocked on I/O most of the time.
By checking the actual throughput JMeter yields during initial runs, you can adjust the number of threads to the optimum.
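Here is a small sketch of both sizing approaches using the GA figures from the question (framing the second calculation as Little's law is my wording, not something stated above):

```java
// Sketch of both sizing approaches from the GA numbers in the question.
public class ThreadSizingSketch {
    public static void main(String[] args) {
        // Approach 1: preserve sessions (2441 sessions/hour, 176 s each)
        double sessions = 2441, sessionSec = 176;
        double threadsForSessions = sessions * sessionSec / 3600;   // ~119.3
        System.out.printf("Threads to sustain sessions: ~%.1f%n", threadsForSessions);

        // Approach 2: preserve page views (8361 pages/hour, ~1.57 s per page)
        double pageViews = 8361, pageLoadSec = 1.57;
        double rps = pageViews / 3600;                              // ~2.3 requests/s
        double inFlight = rps * pageLoadSec;                        // Little's law: ~3.6
        System.out.printf("Required rate: ~%.1f req/s, ~%.1f requests in flight%n",
                rps, inFlight);
        // so a handful of threads is enough to keep ~2.3 req/s at ~1.5 s per page
    }
}
```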

Incorrect graphs generated by the JMeter listeners Hits per Second and Composite Graph

I am learning JMeter and am having a problem reading the graph listener output.
I created a Thread Group with 8 threads, a ramp-up of 1, and Loop Forever.
I added the listeners Active Threads Over Time, Hits per Second, and Response Times Over Time.
Results:
a. Active Threads Over Time gives the correct result, with a maximum of 8 threads
b. Hits per Second gives a really weird graph, showing 148 hits/sec
Trying to debug, I changed the thread count to 1; Hits per Second still generates a weird graph, with 20 hits/sec.
Any idea why this is happening?
I am using the latest release, JMeter 3.0.
As I clarified here, the jp@gc - Hits per Second listener shows the total number of requests sent to the server per second. Per second is the default; it can be changed in the Settings tab.
When you have 1 user, JMeter sends 18-20 requests per second (Loop Forever keeps sending requests for a user as soon as that user gets a response). So the user was able to make about 19 requests in a second. When you have 8 users, the test plan sends around 133 requests per second. It seems to work fine; nothing weird here.
When you have 8 users, JMeter has no issue sending the first 8 requests (the first request of each thread), but each thread sends its subsequent requests only after the response to the previous request has been received. (If you have any timers to simulate user think time, the user will additionally wait for that duration after the response is received before sending the next request.)
If 1 user can make 19 requests per second (i.e. the server processed 19 requests per second), then 8 users should be able to send about 152 requests per second. But as you increase the user load and the number of requests sent to the server, the server's throughput (the number of requests it can process per unit of time) increases only gradually, as shown in the picture. If you keep increasing the users, at some point the server's throughput (hits per second) gets saturated and will not increase beyond that point. So maybe the server got saturated here at around 133 requests per second; that is why we do not see 152 requests per second for 8 users. To understand the behavior, you need to ramp up the users slowly.
Check here for a few tips on JMeter.
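To make the expectation-vs-observation comparison above concrete, here is a tiny sketch using the numbers from this answer (about 19 hits/sec per thread, 133 hits/sec observed with 8 threads):

```java
// Small sketch of the expectation vs. observation described above.
public class HitsPerSecondSketch {
    public static void main(String[] args) {
        double perThread = 19;                   // ~19 hits/sec observed with 1 thread
        int threads = 8;
        double expected = perThread * threads;   // linear scaling would give ~152 hits/sec
        double observed = 133;                   // value reported by the listener
        System.out.printf("Expected if scaling were linear: ~%.0f hits/sec%n", expected);
        System.out.printf("Observed: ~%.0f hits/sec (%.0f%% of linear)%n",
                observed, 100 * observed / expected);
        // the shortfall suggests the server is approaching saturation
    }
}
```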

Do ongoing parse.com requests continue to count against the API limit?

My understanding of the parse.com API rate limit is that it’s not a concurrent-job limit, it’s just the number of requests started in a given second. So if a user is, say, uploading a file from a slow network and it takes 30 seconds, that’s not 1 of my 30 req/s taken up that whole time. It’s just one request, the first second.
On my team, though, is a wonderful security guy whose job it is to worry. He thinks that if 30 users upload a file each, for 30 seconds, at a 30 r/s limit, no one else will be able to use our app until they are done.
Which one is correct?
Your understanding is correct. It's the number of requests started per second. The duration of the request does not come into play.
Source: I work at Parse.
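To illustrate the difference (this is a toy counter, not Parse's actual implementation), a limit on requests started per second only looks at each request's start timestamp; how long the request runs afterwards never enters the count:

```java
// Toy illustration (not Parse's implementation): a limit on *requests started
// per second* only inspects the start timestamp of each request; the request's
// duration never enters the count.
import java.util.HashMap;
import java.util.Map;

public class StartsPerSecondLimiter {
    private final int limit;
    private final Map<Long, Integer> startsPerSecond = new HashMap<>();

    StartsPerSecondLimiter(int limit) { this.limit = limit; }

    synchronized boolean tryStart(long nowMillis) {
        long second = nowMillis / 1000;
        int started = startsPerSecond.getOrDefault(second, 0);
        if (started >= limit) return false;      // over this second's budget
        startsPerSecond.put(second, started + 1);
        return true;                             // allowed; duration is irrelevant
    }

    public static void main(String[] args) {
        StartsPerSecondLimiter limiter = new StartsPerSecondLimiter(30);
        long t0 = System.currentTimeMillis();
        // 30 "slow uploads" all start in the same second and are all admitted...
        for (int i = 0; i < 30; i++) System.out.println(limiter.tryStart(t0));
        // ...a 31st start in that same second is rejected,
        System.out.println(limiter.tryStart(t0));
        // but one second later the budget is fresh, even if the uploads are still running.
        System.out.println(limiter.tryStart(t0 + 1000));
    }
}
```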
I think you are right. I've run some experiments with Parse: for example, I reloaded a UITableView 10 or 20 times in one second (I can't remember exactly) for 3-4 minutes and checked the requests in the admin panel. The maximum value was always less than 30, but that doesn't matter; the point is that you can test it this way and get more information.
Just create a test project and reload SampleViewController.m (which contains a Parse query) 30 times in one second; after this you can check the data browser, which will display the traffic in req/sec.
As a second option, you can upload a bunch of images as the current user every second; since the upload time is longer than 1 second, you can check what happens when you start uploading a bunch of images (or other data) every second.
