I ran a Google PageSpeed test and it says I scored 57/100 because I need to "Enable Keep-Alive" and "Enable Compression". I did some Google searches but I can't find anything. I even contacted my domain provider and asked them to turn it on, but they said it was already on.
Long story short:
1.) What is Keep-Alive?
2.) How do I enable it?
Configure Apache KeepAlive settings
Open up Apache's configuration file and look for the following settings. On CentOS this file is called httpd.conf and is located in /etc/httpd/conf. These settings are noteworthy:
KeepAlive: Switches KeepAlive on or off. Put in "KeepAlive On" to turn it on and "KeepAlive Off" to turn it off.
MaxKeepAliveRequests: The maximum number of requests a single persistent connection will service. A number between 50 and 75 would be plenty.
KeepAliveTimeout: How long the server should wait for new requests from connected clients. The default is 15 seconds, which is way too high. Set it to between 1 and 5 seconds to avoid having processes waste RAM while waiting for requests.
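Putting those directives together, here is a minimal sketch of what the relevant block in httpd.conf could look like (the values are just examples within the ranges suggested above):
# Reuse a single TCP connection for multiple requests
KeepAlive On
# Maximum number of requests served per persistent connection before it is closed
MaxKeepAliveRequests 75
# Close idle connections quickly so processes aren't tied up waiting for requests
KeepAliveTimeout 3
Remember to restart Apache afterwards (for example, service httpd restart on CentOS) so the new settings take effect.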
Read more about the benefits of keep-alive connections here: http://abdussamad.com/archives/169-Apache-optimization:-KeepAlive-On-or-Off.html
Keep-alive means using the same TCP connection for the whole HTTP conversation instead of opening a new one for each request. You basically need to set this HTTP header in your HTTP response:
Connection: Keep-Alive
Read more here
I had the same problem and after a bit of research I found that the two most popular ways to do it are:
If you do not have access to your web server's config file, you can add the HTTP header yourself using a .htaccess file by adding this snippet:
<IfModule mod_headers.c>
Header set Connection keep-alive
</IfModule>
If you are able to access your Apache config file, you can turn on keep-alive there by changing these 3 lines in the httpd.conf file found in /etc/httpd/conf/:
KeepAlive On
MaxKeepAliveRequests 0
KeepAliveTimeout 100
You can read more from this source which explains it better than me https://varvy.com/pagespeed/keep-alive.html
To enable keep-alive through .htaccess you need to add the following code to your .htaccess file:
<ifModule mod_headers.c>
Header set Connection keep-alive
</ifModule>
When you have "keep-alive" enabled you tell the browser of your user to use one TCP/IP connection for all the files(images, scripts,etc.) your website loads instead of using a TCP/IP connection for every single file. So it keeps a single connection "alive" to retrieve all the website files at once. This is much faster as using a multitude of connections.
There are various ways to enable keep-alive. You can enable it by
Using/Editing the .htaccess file
Enabling it through access to your web server (Apache, Windows Server, etc.)
Go here for more detailed information about this.
With the "Enable Compression" part they mean you should enable GZIP compression (if your web host hasn't already enabled it, as it's pretty much the default nowadays). The GZIP compression technique makes it possible for your web files to be compressed before they're being sent to your users browser. This means your user has to download much smaller files to fully load your web pages.
To enable KeepAlive, go to conf/httpd.conf in the Apache configuration directory and set the property below:
KeepAlive On
All the required changes have been made to the respective files:
stalecheck=true,
keepalive is checked from HTTP request defaults,
retrycount=1,
hc.parameters file changes,
Socket timeout is 240000
We still see "java.net.SocketException: Connection reset" in the response data; however, I can see valid requests being passed to the server.
The issue didn't appear until we reached 3,000 users; everything worked smoothly up to that point.
Connection reset can mean many things; possible reasons are:
One of the server components is not able to handle the load, so it closes connections on its side.
On the JMeter side, check that you are running in non-GUI mode and that neither the JMeter JVM nor the injector machine is overloaded, which could explain this. See:
https://jmeter.apache.org/usermanual/get-started.html#non_gui
I've been trying unsuccessfully for a few days to set up a reverse proxy to a localhost WebSocket URL.
ProxyPass /chat/stream/ wss://localhost:8000/chat/stream/
ProxyPassReverse /chat/stream/ wss://localhost:8000/chat/stream/
I get an error in the apache error_log that reads:
No protocol handler was valid for the URL /chat/stream/. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
I have read countless pages via Google of people using this method, so I wonder if there is some issue in our setup/install of the Apache that ships with Server.app 5.2?
I have all the standard modules loaded in httpd_server_app.conf
mod_proxy
mod_proxy_wstunnel
mod_proxy_http
...
Can anyone shed some light on this?
Thanks
Adam
In case someone finds themselves in a similar situation here is how I got WebSocket connections working via Apache in MacOS Server 5.2.
And the solution is simple.
The short version:
I use MacOS Server 5.2 (ships with Apache 2.4.23) to run a python Django application via the mod_wsgi module.
I had been trying to set up ProxyPass and wstunnel in MacOS 10.12 & Server 5.2 to handle WebSocket connections via an ASGI interface server called Daphne running on localhost on port 8001.
I wanted to reverse proxy any WebSocket connection to wss://myapp.local/chat/stream/ to ws://localhost:8001/chat/stream/
From what I had read on all the forums and mailing lists I had scoured, the solution was simply to make some ProxyPass definitions in the appropriate virtual host, make sure the mod_proxy and mod_proxy_wstunnel modules were loaded, and it would work.
Long story short - from what I understand all of this trouble came down to MacOS Server 5 and one major change:
"A single instance of httpd runs as a reverse proxy, called the Service Proxy, and several additional instances of httpd run behind that proxy to support specific HTTP-based services, including an instance for the Websites service."
All I needed to do to proxy the websocket connection was the following:
in: /Library/Server/Web/Config/Proxy/apache_serviceproxy.conf
Add the following (around line 297, in the section about user websites and WebDAV):
ProxyPass / http://localhost:8001/
ProxyPassReverse / http://localhost:8001/
RewriteEngine on
RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
RewriteRule .* ws://localhost:8001%{REQUEST_URI} [P]
I then kicked over the service proxy:
sudo launchctl unload -w /Applications/Server.app/Contents/ServerRoot/System/Library/LaunchDaemons/com.apple.serviceproxy.plist
sudo launchctl load -w /Applications/Server.app/Contents/ServerRoot/System/Library/LaunchDaemons/com.apple.serviceproxy.plist
And the web socket connections were instantly working!
The long version:
For many weeks I have been trying to get WebSocket connections functioning with Apache and Andrew Godwin's Django Channels project in an app I am developing.
Django Channels is
"a project to make Django able to handle more than just plain HTTP requests, including WebSockets and HTTP2, as well as the ability to run code after a response has been sent for things like thumbnailing or background calculation."
My interest in Django Channels came from my requirement for a chat system in my web app. After watching several of Andrew's demos on YouTube, reading through the docs, and eventually installing Andrew's demo Django Channels project, I figured I would be able to get this working in production on our MacOS Server.
The current release of MacOS 10.12 and Server 5.2 ships with Apache 2.4.23. This comes with the necessary mod_proxy_wstunnel module to be able to proxy WebSocket connections (ws:// and secure wss://) in Apache and is already loaded in the server config file:
/Library/Server/Web/Config/apache2/httpd_server_app.conf
Daphne is Andrew's ASGI interface server that supports WebSockets & long-poll HTTP requests. WSGI does not.
With Daphne running on localhost on a port that MacOS isn't occupying (I went with 8001), the idea was to get Apache to reverse proxy certain requests to Daphne.
Daphne can be run on a specified port (8001 in this example) like so (-v2 for more verbosity):
daphne -p 8001 yourapp.asgi:channel_layer -v2
I wanted Daphne to handle only the WebSocket connections (as I currently depend on some Apache modules, such as mod_xsendfile, for serving media). In my case the WebSocket connection was via /chat/stream/, based on Andrew's demo project.
From what I had read, in MacOS Server's implementation of Apache the idea is to declare these ProxyPass directives inside the virtual host files of your "sites" in: /Library/Server/Web/Config/apache2/sites/
In config files such as: 0000_127.0.0.1_34543_.conf
I did also read that any customisation for web apps running on MacOS Server should be made to the plist file for the required web app in: /Library/Server/Web/Config/apache2/webapps/
In a plist file such as: com.apple.webapp.wsgi.plist
Anyway...
I edited the 0000_127.0.0.1_34543_.conf file adding:
ProxyPass /chat/stream/ ws://localhost:8001/
ProxyPassReverse /chat/stream/ ws://localhost:8001/
Eager to test out my first WebSocket chat connection, I refreshed the page only to see an error printed in the Apache log:
No protocol handler was valid for the URL /chat/stream/. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
I had read of many people finding a solution at least with Apache on Ubuntu or a custom install on MacOS.
I even tried installing Apache using Brew and when that didn't work I almost proceeded to install nginx.
After countless hours/days of googling, I reached out to the Apache mailing list for some help with this error. Yann Ylavic was very generous with his time and offered me various ideas on how to get it going. After trying the following:
SetEnvIf Request_URI ^/chat/stream/ is_websocket
RequestHeader set Upgrade WebSocket env=is_websocket
ProxyPass /chat/stream/ ws://myserver.local:8001/chat/stream/
I noticed that Daphne, the interface server on port 8001, was starting to receive ws connections!
However in the client browser it was logging:
"Error during WebSocket handshake: 'Upgrade' header is missing"
From what I could see mod_dumpio was logging that the "Connection: Upgrade" and "Upgrade: WebSocket" headers were being sent as part of the web socket handshake:
mod_dumpio: dumpio_in (data-HEAP): HTTP/1.1 101 Switching Protocols\r\nServer: AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection: Upgrade\r\nSec-WebSocket-Accept: 17WYrMeMS8a4ImHpU0gS3/k0+Cg=\r\n\r\n
mod_dumpio.c(164): [client 127.0.0.1:63944] mod_dumpio: dumpio_out
mod_dumpio.c(58): [client 127.0.0.1:63944] mod_dumpio: dumpio_out (data-TRANSIENT): 160 bytes
mod_dumpio.c(100): [client 127.0.0.1:63944] mod_dumpio: dumpio_out (data-TRANSIENT): HTTP/1.1 101 Switching Protocols\r\nServer: AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection: Upgrade\r\nSec-WebSocket-Accept: 17WYrMeMS8a4ImHpU0gS3/k0+Cg=\r\n\r\n
However the client browser showed nothing in the response headers.
I was more stumped than ever.
I explored the client-side jQuery framework as well as the Django Channels & Autobahn modules to see if perhaps something was amiss, and then revised my own app and tried various combinations of suggestions about Apache and its modules. But nothing stood out to me.
Then I reread the ReadMe.txt inside the apache2 dir: /Library/Server/Web/Config/apache2/ReadMe.txt
"Special notes about the web proxy architecture in Server application
5.0:
This version of Server application contains a revised architecture for all HTTP-based services. In previous versions there was a single instance of httpd acting as a reverse proxy for Wiki, Profile, and Calendar/Address services, and also acting as the Websites service. With this version, there is a major change: A single instance of httpd runs as a reverse proxy, called the Service Proxy, and several additional instances of httpd run behind that proxy to support specific HTTP-based services, including an instance for the Websites service.
Since the httpd instance for the Websites service is now behind a reverse proxy, or Service Proxy, note the following: ... It is only the external Service Proxy httpd instance that listens on TCP ports 80 and 443; it proxies HTTP requests and responses to Websites and other HTTP-based services. ... "
I wondered if this Service Proxy had something to do with it. I had a look over: /Library/Server/Web/Config/Proxy/apache_serviceproxy.conf
and noticed a comment - "# The user websites, and webdav".
I figured it wouldn't hurt to try adding the proxypass definitions & rewrite rules that people had suggested on the forums as their solution.
ProxyPass / http://localhost:8001/
ProxyPassReverse / http://localhost:8001/
RewriteEngine on
RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
RewriteRule .* ws://localhost:8001%{REQUEST_URI} [P]
Sure enough after restarting the ServiceProxy it all started to work!
Adam
Take a look at the ReadMe.txt file in /Library/Server/Web/Config/apache2/
It describes the new proxy service, where requests first go to an httpd instance listening on ports 80 and 443 and are then forwarded to internal ports 34580 and 34543.
For the wstunnel module there is a conflict with mod_proxy, which strips out the tunnel headers (https://lists.gt.net/apache/users/393509). I confirmed this by adding the following to the LogFormat in apache_serviceproxy.conf:
c:\"%{Connection}i\" u:\"%{Upgrade}i\"
I did the same in httpd_server_app.conf and could see the Connection and Upgrade websocket headers were being removed before getting to my webapp.
The fix was merely to add a file in /Library/Server/Web/Config/Proxy for my application. Look at the last line in apache_serviceproxy.conf to see the expected naming format. In my case the file is called apache_serviceproxy_customsites_ws.conf
The contents:
ProxyPass /ws/ ws://localhost:34543/ws/
ProxyPassReverse /ws/ ws://localhost:34543/ws/
This will forward the ws request to the expected internal HTTPS port 34543 and retain the headers. You must forward it to 34543 or 34580. Also note that the path is included, so that it can be picked up in the next step.
Then, in my webapp_script (the include file for my webapp) I have:
ProxyPass "/ws/" "ws://localhost:61614/"
ProxyPassReverse "/ws/" "ws://localhost:61614/"
This forwards the request to my websocket server running on port 61614.
With that, it's now working as expected.
In addition to what @adamteale mentioned in his answer (the TL;DR version), I also had to add
ProxyPreserveHost On
in my Apache virtual host config. Without this, Daphne wasn't returning any response. Perhaps it was, but it wasn't being sent back to the client.
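For reference, a hypothetical virtual host fragment with that directive in place might look like the following (the port and path mirror Adam's example and will likely differ in your setup):
# Pass the original Host header through to the backend instead of the proxy's hostname
ProxyPreserveHost On
ProxyPass /chat/stream/ ws://localhost:8001/chat/stream/
ProxyPassReverse /chat/stream/ ws://localhost:8001/chat/stream/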
The only other unresolved issue with my setup is that Daphne isn't handling wss connections. Either Apache isn't terminating SSL for wss or something else is happening. It does terminate SSL for all other HTTP requests. However, that question is for another thread. I will raise it as soon as I am done with more research on it.
Note: Couldn't add comments to Adam's answer due to MyCurrentRep < 50
When I open the haproxy statistics report page of my http proxy server, I saw something like this:
Cum. connections: 280073
Cum. sessions : 3802
Cum. HTTP requests: 24245
I'm not using 'appsession' or any other cookie-related directive in the configuration. So what does 'session' mean here?
I guess HAProxy identifies an HTTP session in this order:
Use a cookie or query string if one exists in the configuration.
Use the SSL/TLS session.
Use the IP address and TCP connection status.
Am I right?
I was asking myself the very same question this morning.
Searching through http://www.haproxy.org/download/1.5/doc/configuration.txt I came across this very short definition (hidden in a parameter description):
A session is a connection that was accepted by the layer 4 rules.
In your case, you're obviously using HAProxy as a layer 7/HTTP load balancer. If a session is a TCP connection, then because of client-side/frontend keep-alive it's normal to have more HTTP requests than sessions.
Then I guess the high connection number shows that a lot of incoming connections were rejected even before being considered by the HTTP layer, for instance via IP-based ACLs.
As far as I understand, the word 'session' was introduced to make sure two different concepts were not mixed up:
a (TCP) connection: it's a discrete event
a (TCP) session: it's a state which tracks various metadata and has some duration; most importantly, HAProxy's workload (CPU and memory) should be mostly related to the number of sessions (both arrival rate and concurrent count)
In fact, sessions were introduced not after but before connections. An end-to-end connection used to be called a "session". With the introduction of SSL, the PROXY protocol, and layer 4 ACLs, it became necessary to cut end-to-end sessions into smaller parts, hence the introduction of "connections". Zerodeux has perfectly explained what you're observing.
I'm trying to get some protocols to work through my company's firewall. Until now I have been successful in masking either HTTP or HTTPS data by setting up an HTTP proxy on localhost and one on a remote server I own. The communication is done via $_POSTed and received modified .bmp files that contain a header and the encrypted serialised request array.
This works fine, but there are a few drawbacks that make me think I might have taken a wrong approach.
Firstly, I do not use Apache's mod_proxy. Instead I just created a local subdomain (proxy.localhost) and use that in the browser's proxy settings. The subdomain's index.php does all the work. This creates some problems: I cannot use HTTP and HTTPS simultaneously, or the server will complain of either "http on an https enabled port" or "incorrect ssl response length".
The second problem is, well, other protocols. I could make use of FTP, SFTP, remote desktop, SSH, just name another... I need them.
There are two solutions I can think of: the first is to run a PHP script from the CLI so that it listens on a predefined port and handles the requests differently; the other is some sort of SSH tunnel. The problem is I haven't had any success with freeSSHd and PuTTY because of my ignorance.
Thanks in advance for any advice.
I used the free version of the Bitvise SSH client and server, and it seems to work just fine.
What is the keep-alive feature? How can I enable it?
Following is the output from Chrome's Page Speed plugin.
Enable Keep-Alive
The host {MYWEBSITE.COM} should enable Keep-Alive. It serves the following resources.
http://MYWEBSITE.com/
http://MYWEBSITE.com/fonts/AGENCYR.TTF
http://MYWEBSITE.com/images/big_mini/0002_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0003_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0004_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0005_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0006_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0007_mini.jpeg
http://MYWEBSITE.com/images/.jpeg
http://MYWEBSITE.com/images/small/0002S.jpeg
http://MYWEBSITE.com/images/small/0003S.jpeg
http://MYWEBSITE.com/images/small/0004S.jpeg
http://MYWEBSITE.com/images/small/0005S.jpeg
http://MYWEBSITE.com/images/small/0006S.jpeg
http://MYWEBSITE.com/images/small/0007S.jpeg
http://MYWEBSITE.com/images/small/0008S.jpeg
http://MYWEBSITE.com/images/small/0009S.jpeg
http://MYWEBSITE.com/images/small/0010S.jpeg
http://MYWEBSITE.com/images/small/0011S.jpeg
http://MYWEBSITE.com/images/small/0012S.jpg
http://MYWEBSITE.com/images/small/0013S.jpeg
http://MYWEBSITE.com/images/small/0014S.jpeg
http://MYWEBSITE.com/images/small/0015S.jpeg
http://MYWEBSITE.com/images/small/0016S.jpeg
http://MYWEBSITE.com/images/small/0017S.jpeg
http://MYWEBSITE.com/images/small/0018S.jpeg
http://MYWEBSITE.com/images/small/0019S.jpeg
http://MYWEBSITE.com/yoxview/yoxview.css
http://MYWEBSITE.com/yoxview/images/empty.gif
http://MYWEBSITE.com/yoxview/images/left.png
http://MYWEBSITE.com/yoxview/images/popup_ajax_loader.gif
http://MYWEBSITE.com/yoxview/images/right.png
http://MYWEBSITE.com/yoxview/images/sprites.png
http://MYWEBSITE.com/yoxview/img3_mini.jpeg
http://MYWEBSITE.com/yoxview/jquery.yoxview-2.21.min.js
http://MYWEBSITE.com/yoxview/lang/en.js
http://MYWEBSITE.com/yoxview/yoxview-init.js
HTTP Keep-Alive (otherwise known as HTTP persistent connections) configures the HTTP server to hold a connection open so that it can be reused by the client to send multiple requests, thus reducing the overhead of loading a page. Each server and environment is different, so setting it up depends on your environment.
In short: if you're using HTTP/1.0, when making the original request (assuming your server supports it) add a Connection: Keep-Alive header. If the server supports it, it will return the same header back to you. If you're using HTTP/1.1 and the server is configured properly, it will automatically use persistent connections.
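If the server in question happens to be Apache, the behaviour described above is controlled by the KeepAlive directives; a minimal sketch, with purely illustrative values:
# With KeepAlive On, HTTP/1.1 clients get persistent connections automatically;
# HTTP/1.0 clients must send a "Connection: Keep-Alive" request header to opt in.
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100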
Be aware that while Keep-Alive provides some benefit at low volumes, it performs poorly at high volumes for small and medium-size sites (for example, if your blog gets Slashdotted). This Hacker News thread has some good background information.
In other words, while many of the PageSpeed recommendations are great across the board, this one should be taken with a grain of salt.