What do the colored bars in the Firefox net panel represent?

In the Firefox developer tools, under the "Net" panel, resources that are loaded have their load time split into different colors/categories. These are:
DNS Lookup
Connecting
Blocking
Sending
Waiting
Receiving
What does each of these represent, and, more specifically, does any of them accurately represent the amount of time that the server is thinking (accessing the database, running algorithms, etc.)?
Thanks.

You couldn't accurately determine what the server is doing as such, I'm afraid.
You can discount most of them except Waiting, however, as the rest occur before and after the server handles your request. What it actually does while you wait will be a 'black box'.
There may be some asynchronous operations taking place during Sending and Receiving, so again it's hard to be accurate but you can get a ballpark figure of the time the server is working and the time the request spends travelling back and forth.
EDIT
Rough Definitions:
DNS Lookup: Translating the web address into a destination IP address by using a DNS server
Connecting: Establishing a connection with the web server
Blocking: Previously known as 'queueing', this is explained in more detail here
Sending: Sending your HTTP Request to the server
Waiting: Waiting for a response from the server - this is where it's probably doing all the work
Receiving: Getting the HTTP response back from the server
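These phases map fairly directly onto the W3C Resource Timing API, so you can also read them from the page itself. A minimal sketch (the mapping to the Net panel's labels is my own approximation, not an official Firefox correspondence):

// Approximate the Net panel phases for each loaded resource.
// (Cross-origin entries report 0 for these unless the server sends
// a Timing-Allow-Origin header.)
performance.getEntriesByType('resource').forEach(function (entry) {
  console.log(entry.name, {
    blocking: entry.domainLookupStart - entry.fetchStart, // ≈ Blocking
    dns: entry.domainLookupEnd - entry.domainLookupStart, // ≈ DNS Lookup
    connecting: entry.connectEnd - entry.connectStart,    // ≈ Connecting
    // requestStart..responseStart spans both Sending and Waiting; this is
    // the only window that contains the server's "thinking" time.
    sendingAndWaiting: entry.responseStart - entry.requestStart,
    receiving: entry.responseEnd - entry.responseStart    // ≈ Receiving
  });
});

Note that even here Sending and Waiting aren't separable, which reinforces the point above: the Waiting portion is the closest proxy you have for server-side work, but it still includes one network hop in each direction.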

The firebug wiki also explains these (see the Timeline section).
Blocking: Time spent in a browser queue waiting for a network connection (formerly called Queueing). For SSL connections this includes the SSL handshake and the OCSP validation step.
DNS Lookup: DNS resolution time.
Connection: Elapsed time required to create a TCP connection.
Waiting: Waiting for a response from the server.
Receiving: Time required to read the entire response from the server (and/or time required to read from cache).
'DOMContentLoaded' (event): Point in time when the DOMContentLoaded event was fired (measured from the beginning of the request; can be negative if the request was started after the event).
'load' (event): Point in time when the page load event was fired (measured from the beginning of the request; can be negative if the request was started after the event).

There's a pretty good article with time charts and a protocol-level explanation of what's happening at each stage here. I found it pretty helpful, as it also visually demonstrates the impact of using persistent and parallel connections versus serial connections.

Related

How to improve/minimize varying response time of api

I created a REST API and I am not very happy with its performance. I spent some time investigating and stumbled across a tool to easily track the performance of my API (www.apiscience.com).
They split the overall response time into 4 categories: connect, resolve, processing and transfer. The resolve part often takes about 150ms while the processing of the call itself only takes about 18ms, which results in an average response time of 160ms (the call I tried here is really simple, so the average would normally be higher).
My question is how can I improve/minimize the resolve time for my calls?
(side info: my servers are placed in Ireland and I chose Ireland as the location for the tests too)
Thanks in advance!
Edit - What do they mean by Resolve Time?
(https://www.apiscience.com/blog/what-do-api-sciences-curl-based-timings-mean/)
API Science’s “Resolve Time” is the equivalent of Ken’s “DNS Lookup.”
DNS stands for Domain Name System. A URL consists of text (and sometimes numbers); however, the communication addresses that compose the Internet are formulated as IP (Internet Protocol) addresses, for example, 208.80.152.2. Before a request can be routed between the requesting client and the server that will process the request, the IP address that the URL refers to must be looked up. A request is sent to a DNS resolver by curl, and the resolver returns the correlated IP address. API Science's "Resolve Time" is the time in milliseconds that it took this operation to complete.
As the documentation mentions, the DNS resolution time is the amount of time an API consuming client waits before finding out where to route the actual calls to your API server - the mapping between your server's name and IP address.
Where you host your DNS can be completely independent from both where you host your API service and where your domain name is registered, and there are multiple choices in the market for DNS hosting services. DNSPerf (with which I have no affiliation) does a comparison of services and is probably a good starting point for further research if you'd like to select a new DNS provider.
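If you want to measure the bare resolve time yourself from a Node.js process, here's a quick sketch (the host name is a placeholder for your API's domain):

// Time a single DNS lookup using Node's dns module.
const dns = require('dns');

const start = process.hrtime.bigint();
dns.resolve4('api.example.com', (err, addresses) => { // placeholder host
  if (err) throw err;
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log('resolved to ' + addresses[0] + ' in ' + ms.toFixed(1) + ' ms');
});

Running it repeatedly also shows the effect of caching: with a reasonably long TTL on your DNS record, resolvers cache the answer, so most real clients won't pay the full ~150ms on every call.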

nodeJS being bombarded with reconnections after restart

We have a node instance that has about 2500 client socket connections. Everything runs fine, except occasionally something happens to the service (a restart or failover event in Azure); when the node instance comes back up and all the socket connections try to reconnect, the service comes to a halt and the log just shows repeated socket connects/disconnects. Even if we stop the service and start it again, the same thing happens. We currently send out a package to our on-premise servers to kill the users' Chrome sessions, and then everything works fine as users begin logging in again. We have the clients currently connecting with 'forceNew' and forcing web sockets only, not the default long polling then upgrade. Has anyone ever seen this or have any ideas?
In your socket.io client code, you can force the reconnects to be spread out in time more. The two configuration variables that appear to be most relevant here are:
reconnectionDelay
Determines how long socket.io will initially wait before attempting a reconnect (it should back off from there if the server is down a while). You can increase this to make it less likely that they all try to reconnect at the same time.
randomizationFactor
This is a number between 0 and 1.0 and defaults to 0.5. It determines how much the above delay is randomly modified to try to make client reconnects be more random and not all at the same time. You can increase this value to increase the randomness of the reconnect timing.
See client doc here for more details.
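For illustration, a client configuration along those lines might look like this (the URL and the specific values are placeholders, not recommendations):

// socket.io client: spread reconnect attempts out so a restarted server
// isn't hit by all ~2500 clients at the same moment.
var socket = io('https://example.com', {
  transports: ['websocket'],    // websockets only, as you're already doing
  reconnectionDelay: 5000,      // wait 5s before the first reconnect attempt
  reconnectionDelayMax: 60000,  // exponential back-off caps at 60s
  randomizationFactor: 1.0      // maximum jitter so clients de-synchronize
});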
You may also want to explore your server configuration to see if it is as scalable as possible with moderate numbers of incoming socket requests. While nobody expects a server to handle 2500 simultaneous connection attempts all at once, it should be able to queue up these connection requests and serve them as it gets time, without immediately failing any incoming connection that can't be handled right away. There is a desirable middle ground: some number of connections held in a queue (usually controllable by server-side TCP configuration parameters), and when the queue gets too large, connections are failed immediately, at which point socket.io should back off and try again a little later. Adjusting the above variables will tell it to wait longer before retrying.
Also, I'm curious why you are using forceNew. That does not seem like it would help you. Forcing webSockets only (no initial polling) is a good thing.

Long-polling vs websocket when expecting one-time response from server-side

I have read many articles on real-time push notifications. And the summary is that websockets are generally the preferred technique as long as you are not concerned about 100% browser compatibility. And yet, one article states that
Long polling - potentially when you are exchanging a single call with the server, and the server is doing some work in the background.
This is exactly my case. The user presses a button which initiates some complex calculations on the server side, and as soon as the answer is ready, the server sends a push notification to the client. The question is, can we say that for the case of one-time responses, long-polling is a better choice than websockets?
Or, unless we are concerned about supporting obsolete browsers, and given that I am starting the project from scratch, should websockets ALWAYS be preferred to long-polling when it comes to server push?
The question is, can we say that for the case of one-time responses, long-polling is a better choice than websockets?
Not really. Long polling is inefficient (multiple incoming requests, multiple times your server has to check on the state of the long running job), particularly if the usual time period is long enough that you're going to have to poll many times.
If a given client page is only likely to do this operation once, then you can really go either way. There are some advantages and disadvantages to each mechanism.
At a response time of 5-10 minutes you cannot assume that a single http request will stay alive that long awaiting a response, even if you make sure the server side will stay open that long. Clients or intermediate network equipment (proxies, etc...) may just not keep the initial http connection open that long. That would have been the most efficient mechanism if you could have done it. But I don't think you can count on that for a random network configuration and client configuration that you do not control.
So, that leaves you with several options which I think you already know, but I will describe here for completeness for others.
Option 1:
Establish websocket connection to the server by which you can receive push response.
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated.
Receive websocket push response some time later.
Close webSocket (assuming this page won't be doing this again).
Option 2:
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated and probably some sort of taskID that can be used for future querying.
Using http "long polling" to "wait" for the answer. Since these requests will likely "time out" before the response is received, you will have to regularly long poll until the response is received.
Option 3:
Establish webSocket connection.
Send message over webSocket connection to initiate the operation.
Receive response some time later that the operation is complete.
Close webSocket connection (assuming this page won't be using it any more).
Option 4:
Same as option 3, but using socket.io instead of plain webSocket to give you heartbeat and auto-reconnect logic to make sure the webSocket connection stays alive.
If you're looking at things purely from the networking and server efficiency point of view, then options 3 or 4 are likely to be the most efficient. You only have the overhead of one TCP connection between client and server and that one connection is used for all traffic and the traffic on that one connection is pretty efficient and supports actual push so the client gets notified as soon as possible.
From an architecture point of view, I'm not a fan of option 1 because it just seems a bit convoluted when you initiate the request using one technology and then send the response via another and it requires you to create a correlation between the client that initiated an incoming http request and a connected webSocket. That can be done, but it's extra bookkeeping on the server. Option 2 is simple architecturally, but inefficient (regularly polling the server) so it's not my favorite either.
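For concreteness, here is a rough sketch of option 4 with socket.io; the event names ('startJob', 'jobDone') and the runComplexCalculation helper are made up for illustration:

// client: open the connection, kick off the job, wait for the push.
var socket = io('https://example.com');
socket.emit('startJob', { /* job parameters */ });
socket.on('jobDone', function (result) {
  console.log('server finished:', result);
  socket.close(); // this page won't use the connection again
});

// server: start the long-running work, push the result when ready.
io.on('connection', function (socket) {
  socket.on('startJob', function (params) {
    runComplexCalculation(params, function (err, result) { // hypothetical
      socket.emit('jobDone', err ? { error: err.message } : result);
    });
  });
});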
There is an alternative that doesn't require polling or holding an open socket connection all the time.
It's called web push.
The Push API gives web applications the ability to receive messages pushed to them from a server, whether or not the web app is in the foreground, or even currently loaded, on a user agent. This lets developers deliver asynchronous notifications and updates to users that opt in, resulting in better engagement with timely new content.
Some caveats are:
You need to ask for notification permission
Your site needs to have a service worker registered
Having a service worker also means you need to have SSL / HTTPS
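A bare-bones subscription sketch; the service worker path, the endpoint and the VAPID key are placeholders you'd supply yourself:

// Register a service worker, subscribe to push, and hand the subscription
// to the server so it can send the one-off notification later.
navigator.serviceWorker.register('/sw.js').then(function (registration) {
  return registration.pushManager.subscribe({
    userVisibleOnly: true,                          // required by Chrome
    applicationServerKey: '<your VAPID public key>' // placeholder
  });
}).then(function (subscription) {
  return fetch('/api/save-subscription', {          // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription)
  });
});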

Socket.io data loss when Internet speed drop

I am using socket.io 1.4 and I want to know what happens in this scenario:
The client emits like this:
Socket.emit('test',data);
The client does 3 emits to the server, but suddenly the Internet speed drops and those emits may not reach the server.
After a while the Internet speed rises again, but what happens to the previously failed emits?
Will they be emitted again automatically?
How should I handle that?
Websockets use TCP, which is in general a reliable protocol. There is not exactly such a thing as "The internet speed dropped and I lost some messages." If some messages are lost they will be automatically retransmitted at the TCP level. If retransmission fails completely, the connection will be reset.
So what you really are asking is how socket.io handles this. And the answer is that it has some amount of reconnecting logic, and you may also want to monitor the connection in case it resets (hook up a listener for the disconnect event on the socket), if you want to take some extra action (like notify the user).
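If you need delivery guarantees at the application level, socket.io won't re-emit for you, but you can build this on top of emit acknowledgements. A sketch (the server must invoke the ack callback for this to work):

// Track unacknowledged emits and replay them after a reconnect.
var pending = [];

function sendReliably(socket, event, data) {
  var msg = { event: event, data: data };
  pending.push(msg);
  socket.emit(event, data, function () {
    // Server invoked the ack callback, so this message definitely arrived.
    pending.splice(pending.indexOf(msg), 1);
  });
}

socket.on('disconnect', function () {
  console.warn('connection lost; unacknowledged messages:', pending.length);
});

socket.on('reconnect', function () {
  var unacked = pending;
  pending = [];
  // Replays can duplicate messages, so the server should handle them
  // idempotently (e.g. by de-duplicating on a message id).
  unacked.forEach(function (m) { sendReliably(socket, m.event, m.data); });
});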

Website Speed 'Connect Time'

After noticing a drastically slow load time on one of my website I started running some tests on Pingdom - http://tools.pingdom.com/
I've been comparing 2 sites, and the drastic difference is the 'Connect' time. On the slower site it's around 2.5 seconds, whereas on my other sites it's down around 650ms. I suppose it's worth mentioning the slower site is hosted by a different company.
The only definition Pingdom offers is "The web browser is connecting to the server". I was hoping someone could:
Elaborate on this a little for me, and
Point me in a direction of resolving it.
Thanks in advance
Every new TCP connection goes through a three-way handshake before the client can issue a request, e.g. a GET, to the web server.
Client sends SYN to server, server responds with SYN-ACK, client responds with ACK and then sends the request.
How long this process takes is latency-bound, i.e. if the round trip to the server is 100ms then the full handshake will take 150ms, but as the client sends the request just after it sends the ACK, work on the basis that the handshake costs one round trip.
Congestion and other factors can also affect the TCP connect time.
Connect times should be in the milliseconds range, not in the seconds range - my round-trip time from the UK to a server in NY is 100ms, so that's roughly what I'd expect the TCP connect time to be if I was requesting something from a server there.
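You can measure the raw connect time yourself to take the browser out of the equation; a minimal Node.js sketch (host name is a placeholder):

// Time just the TCP three-way handshake against a web server.
const net = require('net');

const start = process.hrtime.bigint();
const socket = net.connect({ host: 'www.example.com', port: 80 }, function () {
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log('TCP connect took ' + ms.toFixed(1) + ' ms');
  socket.end();
});

If this stays in the seconds range while a plain ping round trip is tens of milliseconds, the bottleneck is usually the server (e.g. an exhausted accept queue on an overloaded host) rather than network distance, which would point at the hosting company.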
See Ilya Grigorik's High Performance Browser Networking for a really in-depth discussion / explanation - http://chimera.labs.oreilly.com/books/1230000000545/ch02.html#TCP_HANDSHAKE
