What is the keep-alive feature? How can I enable it?
Following is the output from Chrome's Page Speed plugin.
Enable Keep-Alive
The host {MYWEBSITE.COM} should enable Keep-Alive. It serves the following resources.
http://MYWEBSITE.com/
http://MYWEBSITE.com/fonts/AGENCYR.TTF
http://MYWEBSITE.com/images/big_mini/0002_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0003_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0004_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0005_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0006_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0007_mini.jpeg
http://MYWEBSITE.com/images/.jpeg
http://MYWEBSITE.com/images/small/0002S.jpeg
http://MYWEBSITE.com/images/small/0003S.jpeg
http://MYWEBSITE.com/images/small/0004S.jpeg
http://MYWEBSITE.com/images/small/0005S.jpeg
http://MYWEBSITE.com/images/small/0006S.jpeg
http://MYWEBSITE.com/images/small/0007S.jpeg
http://MYWEBSITE.com/images/small/0008S.jpeg
http://MYWEBSITE.com/images/small/0009S.jpeg
http://MYWEBSITE.com/images/small/0010S.jpeg
http://MYWEBSITE.com/images/small/0011S.jpeg
http://MYWEBSITE.com/images/small/0012S.jpg
http://MYWEBSITE.com/images/small/0013S.jpeg
http://MYWEBSITE.com/images/small/0014S.jpeg
http://MYWEBSITE.com/images/small/0015S.jpeg
http://MYWEBSITE.com/images/small/0016S.jpeg
http://MYWEBSITE.com/images/small/0017S.jpeg
http://MYWEBSITE.com/images/small/0018S.jpeg
http://MYWEBSITE.com/images/small/0019S.jpeg
http://MYWEBSITE.com/yoxview/yoxview.css
http://MYWEBSITE.com/yoxview/images/empty.gif
http://MYWEBSITE.com/yoxview/images/left.png
http://MYWEBSITE.com/yoxview/images/popup_ajax_loader.gif
http://MYWEBSITE.com/yoxview/images/right.png
http://MYWEBSITE.com/yoxview/images/sprites.png
http://MYWEBSITE.com/yoxview/img3_mini.jpeg
http://MYWEBSITE.com/yoxview/jquery.yoxview-2.21.min.js
http://MYWEBSITE.com/yoxview/lang/en.js
http://MYWEBSITE.com/yoxview/yoxview-init.js
HTTP Keep-Alive (also known as HTTP persistent connections) configures the HTTP server to hold a connection open so that the client can reuse it to send multiple requests, reducing the overhead of loading a page. Every server and environment is different, so how you set it up depends on your environment.
In short: if you're using HTTP/1.0, add a Connection: Keep-Alive header when making the original request. If the server supports it, it will return the same header back to you. If you're using HTTP/1.1 and the server is configured properly, it will use persistent connections automatically.
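For example, an HTTP/1.0 client and server negotiating a persistent connection exchange headers roughly like this (the host name is a placeholder, and the exact Keep-Alive parameters are up to the server):

GET / HTTP/1.0
Host: mywebsite.com
Connection: Keep-Alive

HTTP/1.0 200 OK
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
Content-Type: text/html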
Be aware that while Keep-Alive provides some benefit at low volumes, it performs poorly at high volumes for small and medium-size sites (for example, if your blog gets Slashdotted). This Hacker News thread has some good background information.
In other words, while many of the PageSpeed recommendations are great across the board, this one should be taken with a grain of salt.
I'm using Squid 3.5 on Windows Server 2012 and I want to know how many DNS requests my server makes.
Some more details:
I suspect it makes a DNS query on every request, which adds a small amount of latency that could be avoided.
Is there any way to find this out? I have tried squidclient mgr:5min, which shows how long DNS requests take on average, but not how many there are.
My dns.median_svc_time reads 0.025624 seconds, which is fine as long as those responses are cached, but if it's 25 ms added to every request, that is totally unacceptable.
Yes, Squid should be able to give you the information you want via the cache manager. It provides FQDN stats and a full IP cache summary (which I suspect is closer to what you're looking for).
Have a look at the docs here for the FQDN info and here for the full ipcache details; they explain what each report means and provides.
You access these via:
http://localhost/cgi-bin/cachemgr.cgi?host=localhost&port=3128&user_name=&operation=fqdncache&auth=
http://localhost/cgi-bin/cachemgr.cgi?host=localhost&port=3128&user_name=&operation=ipcache&auth=
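If you prefer the command line, the same reports should also be reachable with squidclient (assuming it is installed and the cache manager is listening on the default port 3128):

squidclient mgr:ipcache
squidclient mgr:fqdncache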
When I open the HAProxy statistics report page of my HTTP proxy server, I see something like this:
Cum. connections: 280073
Cum. sessions: 3802
Cum. HTTP requests: 24245
I'm not using 'appsession' or any other cookie-related directive in the configuration, so what does 'session' mean here?
My guess is that HAProxy identifies an HTTP session in this order:
Use a cookie or query string if one is configured.
Use the SSL/TLS session.
Use the IP address and TCP connection state.
Am I right?
I was asking myself the very same question this morning.
Searching through http://www.haproxy.org/download/1.5/doc/configuration.txt I came across this very short definition (hidden in a parameter description):
A session is a connection that was accepted by the layer 4 rules.
In your case, you're obviously using HAProxy as a layer 7/HTTP load balancer. If a session is a TCP connection, then due to client-side/frontend Keep-Alive it is normal to have more HTTP requests than sessions.
My guess is then that the high connection count shows that a lot of incoming connections were rejected before even being considered by the HTTP layer, for instance via IP-based ACLs.
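For illustration, a frontend along these lines (the frontend/backend names and the blacklist file are hypothetical) produces exactly that pattern: keep-alive multiplies HTTP requests per session, and the layer-4 reject rule drops connections before they ever become sessions:

frontend www
    bind :80
    mode http
    option http-keep-alive
    # connections matching this ACL are refused at layer 4 and never become sessions
    tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
    default_backend app_servers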
As far as I understand, the word 'session' was introduced to make sure two different concepts were not mixed:
a (TCP) connection: a discrete event
a (TCP) session: a state which tracks various metadata and has some duration; most importantly, HAProxy's workload (CPU and memory) is mostly related to the number of sessions (both the arrival rate and the concurrent count)
In fact, sessions were not introduced after connections but before them. An end-to-end connection used to be called a "session". With the introduction of SSL, the PROXY protocol and layer 4 ACLs, it became necessary to split end-to-end sessions into smaller parts, hence the introduction of "connections". Zerodeux has perfectly explained what you're observing.
I have set up a Squid proxy on EC2, and I'm trying to use it from behind a corporate firewall. After configuring Firefox to use my proxy, I tried to surf to yahoo.com. The browser seems to hang as if handling an extremely long-running request. Checking the Squid logs, I see:
1431354246.891 11645 xxx.0.xx.xxx TCP_MISS/200 7150 CONNECT www.yahoo.com:443 username HIER_DIRECT/xx.xxx.XX.xx-
So far I don't have a good explanation for most of these entries, but from http://wiki.squid-cache.org/SquidFaq/SquidLogs#access.log I've found that:
MISS = The response object delivered was the network response object.
What does this mean? Is there anything I can do to connect to the outside internet?
This has been asked a long time ago, but maybe someone can still use this...
This means you connected to Squid and the request was tunnelled through to yahoo.com (CONNECT is the method used to pass HTTPS traffic through the proxy). Furthermore, the MISS means it was a cache miss: Squid doesn't have this page stored.
The hang might be caused by the response being blocked somewhere along the line (the corporate firewall, maybe? a local firewall?) or even by a misconfiguration of the proxy.
For more, perhaps you should search on https://serverfault.com; this is a good starting point for narrowing down the problem: https://serverfault.com/questions/514716/whats-the-minimum-required-squid-config-to-make-a-public-proxy-server
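To rule out the proxy configuration, it's worth checking that the access rules in squid.conf at least allow CONNECT to port 443. A minimal sketch based on the stock defaults (the localnet range is an assumption about where your clients connect from):

acl localnet src 10.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 443
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
http_port 3128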
Gradle 2.2 takes hours to build a project on a PC that takes 8 minutes on Linux. When run with --debug on the slow machine, Gradle reports no errors, but it stops and waits for approximately 2 minutes at every resource, after every User-Agent line:
18:39:15.819 [DEBUG] [org.apache.http.headers] >> User-Agent: Gradle/2.0 (Windows 7;6.1;amd64) (Oracle Corporation;1.7.0_67;24.65-b04)
<2 min. delay>
18:41:15.527 [DEBUG] [org.apache.http.impl.conn.DefaultClientConnection] Receiving response: HTTP/1.1 200 OK
18:41:15.527 [DEBUG] [org.apache.http.headers] << HTTP/1.1 200 OK
Linux workstations on the same subnet (behind the same firewall and using the same squid proxy) do not have this delay.
An extended snip from the Windows build is here.
A snip from the Linux build around the same point is here.
This seems to have been a VERY STRANGE issue with a transparent HTTP proxy and the DansGuardian web filter. For still-unknown reasons, this one PC's HTTP traffic got mangled.
This is odd, because our entire LAN's HTTP traffic to the internet is content filtered. There was a filtering exception that allowed any traffic from this slow PC to pass unfiltered, but that had the opposite effect to what we expected: Gradle traffic became crazy slow on the 'unfiltered' PC, while content-filtered workstations had no problems. Even stranger, Gradle also ran at normal speed on unfiltered Linux workstations.
The workaround was to configure iptables and the transparent proxy to completely ignore the slow PC's HTTP traffic. So now it is unfiltered and unproxied. It has been nicknamed the pornstation.
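A rule along these lines on the gateway doing the interception (the PC's address is a placeholder, and the exact form depends on how the transparent-proxy REDIRECT rule is written) exempts one host from the redirect before it is applied:

# 192.168.1.50 stands in for the slow PC; inserted ahead of the transparent-proxy REDIRECT rule
iptables -t nat -I PREROUTING -s 192.168.1.50 -p tcp --dport 80 -j RETURN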
It happened to us as well, though in our case it was caused by the antivirus on the PC (NOD32, not to name it).
We had to completely disable the HTTP/web filters on it.
May not be your case, but may help others coming here for advice.
I ran Google PageSpeed and it says I scored 57/100 because I need to "Enable Keep-Alive" and "Enable Compression". I did some Google searches but I can't find anything. I even contacted my domain provider and asked them to turn it on, but they said it was already on.
Long story short:
1.) What is Keep-Alive?
2.) How do I enable it?
Configure Apache KeepAlive settings
Open up Apache's configuration file and look for the following settings. On CentOS this file is called httpd.conf and is located in /etc/httpd/conf. The following settings are noteworthy:
KeepAlive: Switches KeepAlive on or off. Put in "KeepAlive On" to turn it on and "KeepAlive Off" to turn it off.
MaxKeepAliveRequests: The maximum number of requests a single persistent connection will service. A number between 50 and 75 would be plenty.
KeepAliveTimeout: How long the server should wait for new requests from connected clients. The default is 15 seconds, which is way too high. Set it to between 1 and 5 seconds to avoid having processes waste RAM while waiting for requests.
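Putting that together, the relevant lines in httpd.conf might look like this (the exact numbers are just reasonable picks within the ranges suggested above):

KeepAlive On
MaxKeepAliveRequests 75
KeepAliveTimeout 3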
Read more about the benefits of keep-alive connections here: http://abdussamad.com/archives/169-Apache-optimization:-KeepAlive-On-or-Off.html
Keep-alive means using the same TCP connection for the whole HTTP conversation instead of opening a new one for each request. You basically need to set this HTTP header in your HTTP response:
Connection: Keep-Alive
Read more here
I had the same problem and after a bit of research I found that the two most popular ways to do it are:
If you do not have access to your web server's config file, you can add HTTP headers yourself using a .htaccess file by adding this block:
<IfModule mod_headers.c>
Header set Connection keep-alive
</IfModule>
If you are able to access your Apache config file, you can turn on keep-alive there by changing these 3 lines in the httpd.conf file, found in /etc/httpd/conf/:
KeepAlive On
MaxKeepAliveRequests 0
KeepAliveTimeout 100
You can read more from this source, which explains it better than me: https://varvy.com/pagespeed/keep-alive.html
To enable keep-alive through .htaccess you need to add the following code to your .htaccess file:
<ifModule mod_headers.c>
Header set Connection keep-alive
</ifModule>
When you have "keep-alive" enabled, you tell your user's browser to use one TCP/IP connection for all the files (images, scripts, etc.) your website loads, instead of opening a separate TCP/IP connection for every single file. So it keeps a single connection "alive" and retrieves all the website's files over it. This is much faster than using a multitude of connections.
There are various ways to enable keep-alive. You can enable it by:
Using/editing the .htaccess file
Enabling it through access to your web server (Apache, Windows Server, etc.)
Go here for more detailed information about this.
With the "Enable Compression" part they mean you should enable GZIP compression (if your web host hasn't already enabled it, as it's pretty much the default nowadays). The GZIP compression technique makes it possible for your web files to be compressed before they're being sent to your users browser. This means your user has to download much smaller files to fully load your web pages.
To enable KeepAlive, go to conf/httpd.conf in your Apache configuration and set the property below:
KeepAlive On