Long Chrome HTTP request/response time but in Wireshark everything looks good - performance

We have two workstations.
The first workstation is in one city (where the IIS server is located), the second in another. Both use NTLM/Negotiate authentication.
When the page is refreshed from the second city, it takes a long time to load.
From the first city the same requests load faster.
But in Wireshark we see that the time from request to response is only 0.16 seconds (on the second workstation).
What can be wrong?

Related

Understanding the Network tab in the Chrome console

I'm in the process of writing an app that builds a table of Trello card data based on multiple API calls, and while the app works, I'm finding that performance degrades considerably the longer it runs. The initial calls take a couple of seconds, while later calls (after 100 runs or so) take upwards of a minute.
Looking at the XHR Network tab in my Chrome console, I can see the bulk of each call is taken by the 'Content Download' phase of the Ajax call. I'm curious whether this means the issue is with my application or whether the problem resides with the API I'm calling. I'm a bit of a novice, so my terminology is probably not appropriate here.
The Content Download time is the time during which your content is downloaded from the server.
A very long time can be due to a slow connection on the client side or on the server side.
As you can see, the TTFB (time to first byte) is about 200 ms, so your server starts sending data after 200 ms. Your server-side processing seems to be OK.
You can click on the Explanation link for further information.
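If you want to confirm where the time goes outside of DevTools, here is a minimal sketch using the standard Resource Timing API to split each request into waiting (TTFB) and content-download time; note that cross-origin entries report zeros unless the server sends a Timing-Allow-Origin header:

    // Split each network request into TTFB and content-download time.
    // Cross-origin resources need a Timing-Allow-Origin header, or these are 0.
    for (const entry of performance.getEntriesByType('resource')) {
      const ttfb = entry.responseStart - entry.requestStart;     // waiting for first byte
      const download = entry.responseEnd - entry.responseStart;  // content download
      console.log(entry.name,
                  'TTFB:', ttfb.toFixed(1), 'ms,',
                  'download:', download.toFixed(1), 'ms');
    }

If the download numbers grow with each run while TTFB stays flat, the slowdown is in transferring (or accumulating) response data rather than in the API's server-side processing.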

Web app initial load time

I am using a shared hosting plan at Bluehost to host a golf tournament live-scoring mobile web app. I am caching everything I can on Cloudflare, and I have spent quite some time on overall optimization of the initial download and rendering times. There might be more I could do, but without question my single biggest issue is the initial call to my website: www.spanishpointscup.org. Sometimes this seems to be related to DNS lookup and other times to Waiting (TTFB).
Below are two screenshots of the network calls that show variations in accessing my index.html. Sometimes this initial file load can be even longer. Very rarely do any of the other files create a long delay, so my only focus now is the initial file load. I think that even if I had server-side rendering, I would still have this issue.
Does anyone have specific recommendations that they are confident will help me? Switch to a VPS or another host? Thank you.
This is typical when you use a shared server.
DNS has nothing to do with the issue. DNS relates to the request, not the response: it is the browser that must resolve the domain name to an IP address.
The delay you are seeing is due to the server being busy, and your page is sitting in a queue waiting behind other processes. Possibly you have a CPU-hogging neighbor on your shared server, or Bluehost has some performance issues.
You will likely notice some image files take an excessively long time to transmit. Which image is slow will appear to be random with each fresh (not in cache) page load.
UPDATE
After further review I noticed the wait times are excessive. Wait time is shown in green on your waterfall; notice how the transmit time (blue) is short. The wait time is the time it takes the server to retrieve the page from disk and put it into the transmit buffer. 300-400 milliseconds is excessive.
Find a new service provider.
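To tell on any given load whether DNS or server wait was the culprit, a quick sketch with the standard Navigation Timing API (run in the page's console) breaks down the initial document request:

    // Break the initial page load into DNS, connect, wait, and download phases
    // using the Navigation Timing Level 2 API.
    const [nav] = performance.getEntriesByType('navigation');
    console.log('DNS:',      (nav.domainLookupEnd - nav.domainLookupStart).toFixed(1), 'ms');
    console.log('Connect:',  (nav.connectEnd - nav.connectStart).toFixed(1), 'ms');
    console.log('TTFB:',     (nav.responseStart - nav.requestStart).toFixed(1), 'ms');
    console.log('Download:', (nav.responseEnd - nav.responseStart).toFixed(1), 'ms');

A consistently high TTFB with near-zero DNS time points at the shared server's queueing, not name resolution.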

Time reported in Wily is much lower than the LoadRunner time. Why?

I am trying to monitor the time spent on the server using Wily Introscope, but I observe that the time reported in Wily for each of the servers is in the range of 100 to 1000 ms, while the time taken for a page to load in the browser is almost 5 seconds.
Why is the tool reporting an incorrect value? How do I get the complete time in Wily?
"time mentioned in WILY for each of the servers is in the range of 100 to 1000 ms. But the time taken for a page to load in browser is almost 5 seconds."
The reason is that in the browser you see all the outgoing traffic from the browser. Typically, a web page involves one POST request followed by multiple GET requests: the POST carries your text/html data, while the GETs fetch images, CSS, JavaScript, etc.
Most of these GET requests are answered by the web server, while the POST request is served with the app server involved.
The time reported in Wily is only the time spent on the server serving the POST request. Your GET requests will not be captured by Wily.
"Why is the tool reporting incorrect value ? how to get the complete time in WILY ?"
The tool is not reporting an incorrect value. The tool typically sits on a JVM, so it monitors the activity of that JVM and provides metrics for it. That is the expected behavior.
A page is a complex item, requiring parsing of the page contents and then requests to multiple servers/sources. So your page load time is made up of the request time for each individual component, processing time for page parsing and JavaScript (depending upon virtual user type), requests for the page components, where they are served from, and so on. Compare this to your Wily monitoring, which may only cover one of the tiers involved.
For instance, you may have static components being served from a CDN, which has zero visibility in your Wily model. You might also be looking at your app server when the majority of the time is spent serving static components off a web server, which is often ignored from a monitoring perspective. Your page could have third-party components which get counted in the LoadRunner time but not in the Wily time.
It all comes down to a question of sampling. It is very common for what you see in your deep-diagnostics tool to be one piece of the total page load, or an individual request that makes up a page where many more components have to be loaded. If you want an even more interesting look, enable the W3C time-taken field in your web server's HTTP request logs and look at the cost of every individual request. You can do this in the web layer of your app servers as well. Wily will then provide the internal breakdown for the items which are "slow."
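To see from the browser how much of the 5 seconds comes from components a single-tier monitor never sees, a small sketch grouping Resource Timing entries by origin (the grouping logic is illustrative, not part of Wily or LoadRunner):

    // Group per-request durations by origin to see which servers/CDNs
    // contribute to the total page load time (Resource Timing API).
    const totals = {};
    for (const entry of performance.getEntriesByType('resource')) {
      const origin = new URL(entry.name).origin;
      totals[origin] = (totals[origin] || 0) + entry.duration;
    }
    console.table(totals);

If most of the time lands on a CDN or static-content origin, the app-server tier that Wily instruments is simply not where the 5 seconds are being spent.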

Ajax-calls: big differences between server runtime and client waiting time

I have two REST endpoints driving some navigation in a web site. Both create nearly the same response, but one gets its data straight from the DB, whereas the other has to ask a search engine (Solr) first to get some data and then do the DB calls.
If I profile both endpoints via JProfiler, I get a higher runtime (approx. 60%) for the second one (about 31 ms vs. 53 ms). That's as expected.
Profile result:
If I view the same Ajax calls from the client side, I get a very different picture.
The faster of the two calls takes about 146 ms of waiting and network time.
The slower of the two calls takes about 1.4 seconds of waiting and network time.
Frontend timing is measured via the Chrome developer tools. The server is a Tomcat 7.0.30 running in STS 3.2. Client and server live on the same system; the DB and Solr are external, so there should be no network latency between Tomcat and the browser. As a side note: the faster response has the bigger payload (2.6 vs. 4.5 kB).
I have no idea why the slower of the two calls takes about 60% more server time but in sum nearly 1000% more "frontend time".
The question is: is there any way I can figure out where these timing differences originate?
By default, the CPU views in JProfiler show times in the "Runnable" thread state. If a thread reads data from a socket connection or waits for some condition, that time is not included in the "Runnable" thread state.
In the upper right corner of the CPU views there is a thread state selector. If you change that to "All states", you will get times that you can compare with the wall clock times from the browser.
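To get client-side wall-clock numbers you can line up against the "All states" profiler times, here is a hedged sketch timing one of the calls with fetch (run in the browser console; the endpoint path is a placeholder, not the real one):

    // Measure wall-clock time of an Ajax call to compare against
    // profiler-reported server time. '/api/navigation' is a placeholder URL.
    const t0 = performance.now();
    const resp = await fetch('/api/navigation');
    await resp.text();  // include the content download in the measurement
    console.log('wall clock:', (performance.now() - t0).toFixed(1), 'ms');

If the gap between this number and the JProfiler "All states" time remains large, the remaining difference is being spent outside the instrumented JVM code.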

What do the times mean in Google Chrome's Network panel timeline?

Often when troubleshooting performance using Google Chrome's Network panel, I see different times and wonder what they mean.
Can someone validate that I understand these properly?
Blocking: Time blocked by the browser's limit on concurrent requests to the same domain (???)
Waiting: Waiting for a connection from the server (???)
Sending: Time spent transferring the file from the server to the browser (???)
Receiving: Time spent by the browser analyzing and decoding the file (???)
DNS Lookup: Time spent resolving the hostname.
Connecting: Time spent establishing a socket connection.
Now how would someone fix long blocking times?
Now how would someone fix long waiting times?
Sending is the time spent uploading the data/request to the server. It occurs between blocking and waiting. For example, if I post back an ASPX page, this would indicate the amount of time it took to upload the request (including the form values and the session state) back to the ASP server.
Waiting is the time after the request has been sent, but before a response from the server has been received. Basically this is the time spent waiting for a response from the server.
Receiving is the time spent downloading the response from the server.
Blocking is the amount of time between the UI thread starting the request and the HTTP GET request getting onto the wire.
The order these occur in is:
Blocking*
DNS Lookup
Connecting
Sending
Waiting
Receiving
*Blocking and DNS Lookup might be swapped.
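For reference, here is a rough sketch of how these phases map onto the standard Resource Timing fields (the URL is hypothetical, the blocking figure is an approximation since DevTools computes it internally, and "sending" is not exposed as a separate field):

    // Approximate the DevTools phases for one resource via Resource Timing.
    const [e] = performance.getEntriesByName('https://example.com/app.js');
    console.log({
      blocking:   e.domainLookupStart - e.startTime,  // queueing/stalled (approximation)
      dns:        e.domainLookupEnd - e.domainLookupStart,
      connecting: e.connectEnd - e.connectStart,
      waiting:    e.responseStart - e.requestStart,   // sending + waiting (TTFB)
      receiving:  e.responseEnd - e.responseStart,
    });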
The network tab does not indicate time spent processing.
If you have long blocking times then the machine running the browser is running slowly. You can fix this by upgrading the machine (more RAM, faster processor, etc.) or by reducing its workload (turn off services you do not need, closing programs, etc.).
Long wait times indicate that your server is taking a long time to respond to requests. This either means:
The request takes a long time to process (like if you are pulling a large amount of data from the database, large amounts of data need to be sorted, or a file has to be found on an HDD which needs to spin up).
Your server is receiving too many requests to handle all requests in a reasonable amount of time (it might take .02 seconds to process a request, but when you have 1000 requests there will be a noticeable delay).
The two issues (long waiting + long blocking) are related. If you can reduce the workload on the server by caching, adding new servers, and reducing the work required for active pages, then you should see improvements in both areas.
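As one concrete way to cut waiting time through caching, here is a hedged sketch of an in-memory cache in a Node/Express handler (Express, the route, and the slow report builder are assumptions for illustration, not from the question):

    // Cut server waiting time by caching an expensive result in memory
    // and letting clients cache it too. Express is assumed.
    const express = require('express');
    const app = express();

    // Stand-in for a slow DB/report query (hypothetical).
    const buildExpensiveReport = () =>
      new Promise(resolve => setTimeout(() => resolve({ rows: 42 }), 500));

    let cached = null;
    let cachedAt = 0;
    app.get('/report', async (req, res) => {
      if (!cached || Date.now() - cachedAt > 60000) {  // refresh at most once a minute
        cached = await buildExpensiveReport();
        cachedAt = Date.now();
      }
      res.set('Cache-Control', 'public, max-age=60');  // allow client/proxy caching
      res.json(cached);
    });

    app.listen(3000);

With the cache in place, only one request per minute pays the 500 ms cost; the rest are answered from memory, which shortens the Waiting phase for every client.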
You can read a detailed official explanation from the Google team here. It is a really helpful resource, and the information you want is under the Timeline view section.
Resource network timing shows the same information as the resource bar in the timeline view. Answering your question:
DNS Lookup: Time spent performing the DNS lookup (you need to find out the IP address of site.com, and this takes time).
Blocking: Time the request spent waiting for an already established connection to become available for re-use. As was said in another answer, it does not depend on your server - this is the client's problem.
Connecting: Time it took to establish a connection, including TCP handshakes/retries, DNS lookup, and time connecting to a proxy or negotiating a secure connection (SSL). Depends on network congestion.
Sending: Time spent sending the request. Depends on the size of the sent data (mostly small, because your request is almost always a few bytes unless you submit a big image or a huge amount of text), network congestion, and the proximity of the client to the server.
Waiting: Time spent waiting for the initial response. This is mostly the time your server needs to process and respond to your request: how fast your server is at calculating things, fetching records from the database, and so on.
Receiving: Time spent receiving the response data. Similar to sending, but now you are getting your data from the server (the response size is mostly bigger than the request), so it also depends on the size, the connection quality, and so on.
"Blocking: Time the request spent waiting for an already established connection to become available for re-use. As was said in another answer it does not depend on your server - this is client's problem."
I do not agree with the statement above. All else being equal [my machine's workload], my browser shows very little "Blocking" time for one website and a long blocking time for some other website.
So if the wait for one of the six per-host connections + proxy negotiation** is high, it is mostly because of the cascading effect of the server's slowness OR the bad design of the page [too much being sent across the wire, too many times].
** whatever "Proxy Negotiation" means! Nobody explains this very well, particularly where no local/CDN proxy is actually involved.
