How can I decrease "Connecting" and "Waiting" times from AJAX requests to the Server? - ajax

My actual script execution time is less than a microsecond, and yet the total time the response takes is about 250 ms, roughly 1000 times more, on a typical AJAX call. Even in environments where I have a reliable T1 connection, the responses still take 50-100 ms.
Background info:
Calls are being made via POST/GET through AJAX (jQuery).
The backend is PHP/MySQL on Joyent servers.
The information shown below comes from Firebug's Net tab.
DNS Lookup = 0
Connecting = 46ms
Sending = 0ms
Waiting = 172ms
Receiving = 0ms

You need to move closer to the servers. :) It sounds like the speed of light is your bottleneck.
Have a look at a traceroute of your network packets to the server.
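One way to see which phase dominates, independent of Firebug, is the browser's Resource Timing API. The snippet below is only a rough sketch; '/api/endpoint' is a hypothetical placeholder for the real AJAX URL. If the waiting figure roughly matches your round-trip time to the server, the network distance is the bottleneck; if it is much larger, the server side is worth profiling.
// Rough diagnostic sketch using the W3C Resource Timing API.
// '/api/endpoint' is a hypothetical URL standing in for the real AJAX call.
$.get('/api/endpoint').always(function () {
  // Give the browser a moment to record the timing entry, then read it back.
  setTimeout(function () {
    var e = performance.getEntriesByName(location.origin + '/api/endpoint').pop();
    if (!e) { return console.log('no timing entry found'); }
    console.log('DNS:       ', e.domainLookupEnd - e.domainLookupStart, 'ms');
    console.log('Connecting:', e.connectEnd - e.connectStart, 'ms');
    console.log('Waiting:   ', e.responseStart - e.requestStart, 'ms'); // ~ time to first byte: network RTT + server work
    console.log('Receiving: ', e.responseEnd - e.responseStart, 'ms');
  }, 0);
});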

Related

JMeter Sampler Result: Understanding Load time, Connect time and Latency

First off, I'm new to JMeter and wanted to clear some doubts regarding the relationship between Load time, Connect time, and Latency.
I found some resources that explain the relationship between these metrics:
Latency time – Connect time = Server Processing Time
Elapsed time – Latency time = Download Time
resource
And then another resource says this:
Response Time = Latency + Processing Time
Given below is one of the sampler results I got. If you take this into consideration, can we really comment on how long it took for the server to process the request?
NOTE: In this scenario, my plan is to analyze how much of a load the server had to withstand. I don't really care about the delay of connection establishing and passing around of data packets.
Basically, I want to know the connection between the 3 aforementioned metrics: Load time, Connect time, and Latency. Any help is greatly appreciated. Thanks in advance :)
You cannot say "how long it took for the server to process the request" by looking at JMeter results, because:
Latency is time to first byte
Elapsed time is time to last byte
The request lifecycle looks like:
JMeter establishes the connection (connect time)
JMeter sends request body to the server (unknown)
Server processes the request (unknown)
Server sends the response to JMeter (unknown)
JMeter receives the first byte of the response (Latency)
JMeter receives the last byte of the response (Elapsed time)
So you cannot tell what the server processing time is, even with millisecond precision, because JMeter only captures high-level network metrics. If you want to enrich your report with server processing time, you need to use an APM or a profiler tool, or at least something like the JMeter PerfMon Plugin, to get this information directly from the application under test.
This documentation explains the metrics:
https://jmeter.apache.org/usermanual/glossary.html
Latency:
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time:
JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Load time or Elapsed time:
JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
In layman's terms I would describe them as follows:
Load time: the total time taken by the request, from the first request byte to the final response packet.
Connect time: the time taken to establish the connection to the server.
Latency: the time until the first byte of the response is received (if the response is small, this can be the same as the load time).
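To make the derivable quantities concrete, here is a rough sketch with purely hypothetical sampler values (the asker's actual numbers are not shown). It computes the two figures that can be read directly off a JMeter result and shows where the unknown server time hides.
// Hypothetical sampler values, all in milliseconds.
const elapsed = 1200;  // "Load time" / Elapsed: request sent -> last byte received
const latency = 950;   // Latency: request sent -> first byte received
const connect = 120;   // Connect Time: TCP + SSL handshake (included in latency by default)

const downloadTime = elapsed - latency;          // time spent receiving the response body
const firstByteAfterConnect = latency - connect; // send + server processing + first-byte travel

console.log('Download time:', downloadTime, 'ms');
console.log('Latency minus connect:', firstByteAfterConnect, 'ms');
// Server processing time sits somewhere inside firstByteAfterConnect, but JMeter
// alone cannot isolate it from the network travel time.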

Why is JMeter Result is different to User Experience Result?

We are currently conducting performance tests on both web apps that we have; one is running within a private network and the other is accessible to all. For both apps, a single page load of the landing page or initial page takes only 2-3 seconds from a user's point of view, but when we use Blaze and JMeter, the results are between 15-20 seconds. Am I missing something? The 15-20 second result comes from the Load time / Sample time in JMeter, and from the Elapsed column if exported to .csv. Please help, as I'm stuck.
We have tried conducting tests on multiple PCs within the office premises, along with a PC remotely accessed on another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user only.
Where a delta exists, it almost certainly means that two different things are being timed. It would help to understand what you are timing against on your front end: a standard metric such as W3C domComplete, Time to Interactive, First Contentful Paint, or some other point, and then compare where this comes into play in the drill-down on the Performance tab of Chrome. Odds are that a lot is occurring that is not visible to you but is being captured by JMeter.
You might also look for other threads here on how JMeter operates compared to a "real browser". There are differences which could come into play and affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components whose servers you do not have permission to test.
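To compare like-for-like, it can help to read those standard front-end marks straight from the browser. The snippet below is only a sketch using the Navigation Timing Level 2 and Paint Timing APIs (available in current Chrome); run it in the DevTools console on the landing page.
// Read the standard page-load marks so they can be compared with JMeter's numbers.
const nav = performance.getEntriesByType('navigation')[0];
console.log('Time to first byte:', nav.responseStart, 'ms');
console.log('domComplete:       ', nav.domComplete, 'ms');
console.log('loadEventEnd:      ', nav.loadEventEnd, 'ms');
const fcp = performance.getEntriesByType('paint')
  .find(p => p.name === 'first-contentful-paint');
if (fcp) console.log('First Contentful Paint:', fcp.startTime, 'ms');
// A plain JMeter HTTP sampler's "elapsed" covers only the HTML document itself
// (request to last byte), unless it is configured to retrieve embedded resources,
// so it is not directly comparable to domComplete or any paint metric.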
I can think of 2 possible causes:
Clear your browser history, especially the browser cache. It might be the case that you're getting HTTP status 304 for all requests in the browser because responses are being returned from the browser cache and no actual requests are being made, while JMeter always uses a "clean" session.
Pay attention to the Connect Time and Latency metrics, as it might be the case that the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
In general, given:
the same machine
clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page

jmeter how to conclude that web server successfully handles 1000 concurrent users

We have a web CRUD app.
I need to load test the web app by simulating 1000 concurrent users.
I am using JMeter to do the load test.
Scenario 1:
- user login
- request a welcome page
Parameters:
Thread Group: no. of users = 100
Ramp-up period = 1
Loop = 1
Questions:
- How do I conclude that the web server is capable of accepting 1000 concurrent users?
- If all the requests in the View Results Tree get status 200 OK, does that mean the web server is capable of accepting 1000 concurrent users?
- If I increase the concurrent users to, say, 1200 and the web server crashes, can I conclude that the web server we are using can accept at most fewer than 1200 concurrent users?
- What other parameters do I need to look at for the load test?
You can conclude it like this:
If you see 200 status results for all requests, and not just for 1 loop: you need to run your test for some amount of time, say 30 minutes or so (the duration can be decided based on the stability requirements of the server).
Some time after the test starts, if the results converge (you get stable response times) without any errors (a 1% error rate is generally acceptable), then you can conclude that your web server is capable of supporting 1000 users for login requests.
You can increase the users to 1200 and try again to see the scalability of the server (how far your server can scale, using the same technique);
that will give you the maximum load your server can take (a stress test).
Another very important thing you should monitor while doing these tests is server utilization. If your server is choking on resources (100% CPU, 100% memory, saturated network, etc.), then you should probably lower the number of concurrent users and try again. Generally, system utilization should not exceed 80% on any counter.
Since you are running this test for one request only (login), the results will generally be inaccurate. You should test the most commonly used workflows, which will give you a more precise idea.
I hope this clarifies the doubts.
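As a rough way to check the criteria above (error rate and response-time stability) against a JMeter results file, here is a sketch in Node.js. It assumes the default JMeter CSV result format with a header row containing "elapsed" and "success" columns, and a hypothetical file name results.jtl; adjust both to your own setup.
// Node.js sketch: aggregate a JMeter CSV (.jtl) results file.
// Naive comma split; fine as long as no field contains embedded commas.
const fs = require('fs');

const lines = fs.readFileSync('results.jtl', 'utf8').trim().split('\n');
const header = lines[0].split(',');
const elapsedIdx = header.indexOf('elapsed');
const successIdx = header.indexOf('success');

const samples = lines.slice(1).map(l => l.split(','));
const errors = samples.filter(s => s[successIdx] !== 'true').length;
const times = samples.map(s => Number(s[elapsedIdx])).sort((a, b) => a - b);
const p90 = times[Math.floor(times.length * 0.9)];

console.log('Samples:', samples.length);
console.log('Error rate:', ((errors / samples.length) * 100).toFixed(2), '%'); // ~1% is often used as the threshold
console.log('90th percentile response time:', p90, 'ms');
// Pass/fail is still a judgment call: stable percentiles over the test duration and
// an error rate below your threshold support the "server handles the load" conclusion.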

Apache Makes some AJAX Request Behave Synchronously

I have this strange issue where sometimes, if I make two AJAX requests to my Apache 2.2 server in rapid succession, the second request will wait for the first to finish before completing.
For example, I have two requests: one that sleeps for 10 seconds and one that returns immediately. If I run the request that returns immediately by itself, it will always return within 300 ms. However, if I call the request that takes 10 seconds and then call the request that returns right away, about 50% of the time the second request will wait until the first finishes, and Chrome will report that the request took about 10 seconds before receiving a response. The other half of the time the quick request will return right away.
I can't find any pattern that makes it behave one way or the other; it will just randomly block the quick AJAX requests sometimes, and other times it will behave as expected. I'm working on a dev server that only I am accessing, and I've set several variables such as MaxRequestsPerChild to a high value.
Does anyone have any idea why Apache, seemingly at random, is turning my AJAX requests into synchronous requests?
Here is the code I'm running:
$.ajax({async:true,dataType:'json',url:'/progressTest',success:function(d){console.log('FINAL',d)}}); // Sleeps for 10 seconds
$.ajax({async:true,dataType:'json',url:'/progressTestStatus',success:function(d){console.log('STATUS',d)}}); // Takes ~300ms
And here are two screenshots: the first where it behaved as expected, and the second where it waited for the slow process to finish first (in this example the timeout was set to 3 seconds).
UPDATE: Per the comments below - this appears to be related to Chrome only performing one request at a time. Any ideas why Chrome would set such a low limit on async requests?
The problem is not with Apache but with Google Chrome limiting the number of concurrent requests to your development server. I can only make guesses as to why it's limited to one request. Here are a couple:
1) Do you have many tabs open? There is a limit to the total number of concurrent connections, and if you have many tabs making requests with KeepAlive you may be at that limit and can only establish one connection to your server. If that's the case, you might be able to fix it by adding KeepAlive to your own output headers.
2) Do you have some extensions enabled? Some extensions do weird things to the browser. Try disabling all your extensions and making the same requests. If it works, then enable them one at a time to find the culprit extension.
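A quick way to see whether the browser really is limiting you to one connection is to fire several copies of the fast request and watch when each one finishes. This is only a sketch using the question's /progressTestStatus endpoint:
// Fire several copies of the fast request and log when each one completes.
var t0 = performance.now();
for (var i = 0; i < 6; i++) {
  (function (n) {
    $.get('/progressTestStatus').always(function () {
      console.log('request', n, 'finished after', Math.round(performance.now() - t0), 'ms');
    });
  })(i);
}
// If the completion times step up in roughly 300 ms increments, the requests are being
// serialized over a single connection; if they all finish together, they ran in parallel.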

Large Waiting time for an HTTP request

I'm developing a website using CakePHP. I'm analyzing the website using Firebug + YSlow and the Google Chrome developer tools. In an AJAX request I get a large waiting time of about 6 s, while the receiving time is very small (66 ms), which causes great latency in the request. Does anybody know why the waiting time is so large?
Waiting time: from the time of the request to the time the first byte is received, which involves a round-trip time. There can be latency if your server is far away from your machine. Usually it requires 3 round trips: 1 for the DNS lookup, 1 for establishing the TCP connection, and 1 for the request/response pair.
Receiving time: this will be small if only a small amount of data is being downloaded from the server to the client.
For further reference : http://www.webperformancematters.com/journal/2007/7/24/latency-bandwidth-and-response-times.html
My guess is that you might be performing a SQL query as part of the resource that you are calling via AJAX. If this is the case, you may need to tune your query or indexes to improve its speed. Can you post some code so we can review it?
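One rough way to tell whether those 6 seconds are network round trips or server-side work (such as the query suggested above) is to time a tiny static file against the slow AJAX action from the browser console. This is only a sketch; '/img/pixel.gif' and '/posts/latest' are hypothetical paths standing in for a real static asset and the real CakePHP action.
// Compare a static asset (no PHP, no SQL) against the slow AJAX action.
function timeRequest(url) {
  var start = performance.now();
  return $.ajax({ url: url, cache: false }).always(function () {
    console.log(url, Math.round(performance.now() - start), 'ms');
  });
}
timeRequest('/img/pixel.gif');  // ~ pure network round trip
timeRequest('/posts/latest');   // network round trip + CakePHP controller + SQL query
// If the static file comes back in tens of milliseconds but the AJAX action takes
// seconds, the "waiting" time is server-side work (likely the query), not the network.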
