Cloudflare uses nginx.
They have this feature called Always Online: http://www.cloudflare.com/always-online
As their website states:
Always Online is a feature that caches a static version of your pages in case your server goes offline.
I would like to set up a caching nginx server on the other side of the globe, have it cache my website's static files, and point my secondary DNS to it. If my website's server goes down, the cached version will be shown.
Is this possible to do using nginx reverse-proxy feature?
Or, I could also save a copy of all my static files including .html files in the nginx server, and have it load these files when the main server is offline.
Can nginx do this?
In nginx it is possible to set up caching. The documentation even provides an example of an nginx reverse proxy with caching:
http {
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m
                     inactive=24h max_size=1g;

    server {
        location / {
            proxy_pass http://1.2.3.4;
            proxy_set_header Host $host;
            proxy_cache STATIC;
            proxy_cache_valid 200 1d;
            proxy_cache_use_stale error timeout invalid_header updating
                                  http_500 http_502 http_503 http_504;
        }
    }
}
Note the proxy_cache_use_stale directive: in the example, in the event of a range of errors (including timeouts and HTTP 500, 502, 503 and 504 responses), the stale cached item is served instead.
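As for the second idea of keeping a local copy of the static files on the nginx box, here is a minimal sketch of that approach. The fallback path and the upstream address are placeholders, and the mirror itself would have to be produced separately (for example with wget --mirror):

```nginx
# Sketch: serve a locally stored copy of the site when the upstream
# is unreachable. Paths and the upstream address are placeholders.
server {
    listen 80;

    location / {
        proxy_pass http://1.2.3.4;
        proxy_set_header Host $host;
        # On connection errors or 5xx responses, fall back to local files
        proxy_intercept_errors on;
        error_page 502 503 504 = @fallback;
    }

    location @fallback {
        # Static mirror of the site, kept up to date out of band
        root /var/www/fallback;
        try_files $uri $uri/index.html =503;
    }
}
```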
Related
So after three weeks of 12hr shifts I am almost done with building a Knowledge Base system using Apostrophe. Now comes the task of speeding things on the front end. My questions are:
How do I add Expires headers: Express has built-in middleware called static; can I just set it up normally in the index.js file?
Minify JavaScript & CSS: http://apostrophecms.org/docs/modules/apostrophe-assets/ I saw that Apostrophe has something built in, but it's not clear how to enable it. Also, do I need to put the assets in a specific folder for it to work? Right now I have all the JS files under lib/modules/apostrophe-pages/public/js and CSS under public/css.
Enable GZIP on the server: there is nothing mentioned, but again Express does have gzip modules; can I just enable them under lib/modules/apostrophe-express/index.js?
Any help is greatly appreciated.
I'm the lead developer of Apostrophe at P'unk Avenue.
Sounds like you found our deployment HOWTO with the recently added material about minification, and so you have sorted out that part yourself. That's good.
As for expires headers and gzip on the server, while you could do that directly in node, we wouldn't! In general, we never have node talk directly to the end user. Instead, we use nginx as a reverse proxy, which gives us load balancing and allows us to deliver static files directly. nginx, being written in C/C++, is faster at that. Also the implementations of gzip and TLS are very much battle-tested. No need to make javascript do what it's not best at.
We usually configure nginx using mechanic, which we created to manage nginx with a few commands rather than writing configuration files by hand. Our standard recipe for that includes both gzip and expires headers.
However, here's an annotated version of the nginx configuration file it creates. You'll see that it covers load balancing, gzip and longer expiration times for static files.
# load balance across 4 instances of apostrophe listening on different ports
upstream upstream-example {
    server localhost:3000;
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}

server {
    # gzip transfer encoding
    gzip on;
    gzip_types text/css text/javascript image/svg+xml
               application/vnd.ms-fontobject application/x-font-ttf
               application/x-javascript application/javascript;

    listen *:80;
    server_name www.example.com example.com;
    client_max_body_size 32M;

    access_log /var/log/nginx/example.access.log;
    error_log /var/log/nginx/example.error.log;

    # reverse proxy: pass requests to nodejs backends
    location @proxy-example-80 {
        proxy_pass http://upstream-example;
        proxy_next_upstream error timeout invalid_header http_500 http_502
                            http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Deliver static files directly if they exist matching the URL,
    # if not proxy to node
    location / {
        root /opt/stagecoach/apps/example/current/public/;
        try_files $uri @proxy-example-80;
        # Expires header: 7-day lifetime for static files
        expires 7d;
    }
}
I want to use nginx as a proxy between clients and a set of my apps. The problem I encountered is:
I have an app (on a different machine than nginx) that serves static content (images, CSS, etc.)
That static content is gzipped
I configured nginx as follows:
location ^~ /static/ {
    etag on;
    expires 1d;
    more_set_headers "Vary: Accept-Encoding";

    location ~* \.(css|js)$ {
        gzip_static on;
        expires -1;
        proxy_pass http://my_upstream;
    }

    proxy_pass http://my_upstream;
}
and I was expecting ETags to work for things like JS and CSS, but they do not. I suppose it's because those JS and CSS files are not on the same machine as nginx, and that is the problem gzip_static on is supposed to fix.
So basically my question is: is it possible to have it working that way? And if it is, how do I do it? :)
According to this forum post, etag on is not compatible with a proxy_pass that may modify the response (as gzip_static does):
The ETag header used in its strong form means that it must be
changed whenever bits of an entity are changed. This basically
means that ETag headers have to be removed by filters which change
a response's content.
As suggested in that same post, passing through Last-Modified will usually be enough for clients to make conditional requests.
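As a sketch of that suggestion (the location and upstream name are taken from the question, the rest is an assumption): drop the strong ETag, let nginx compress on the fly, and rely on the upstream's Last-Modified header for conditional requests:

```nginx
# Sketch: rely on Last-Modified / If-Modified-Since for revalidation
# instead of strong ETags, and compress in nginx rather than using
# gzip_static over proxy_pass.
location ^~ /static/ {
    expires 1d;

    gzip on;
    gzip_types text/css application/javascript;
    gzip_vary on;    # adds "Vary: Accept-Encoding" automatically

    proxy_pass http://my_upstream;
}
```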
We are trying to disable the nginx cache for a specific header. In the "Modify Header" Chrome extension (you may use another), I added a header like "X-Dev: 1" and want to catch this header in nginx.conf to disable the nginx cache and proxy the request to a developer server. Is it possible to do this?
Looks like I found a solution. As Alexey recommended, I added the $http_x_dev variable to the proxy_cache_bypass directive and pointed requests to the other server conditionally:
proxy_cache_bypass $http_x_dev;

location / {
    if ( $http_x_dev = 1 ) {
        proxy_pass http://DEV_SERVER_IP:80;
        break;
    }
    ...
}
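One caveat worth noting: proxy_cache_bypass only skips the cache *lookup*; if you also want the bypassed response not to be *stored* in the cache, it is usually paired with proxy_no_cache. A minimal sketch, where the zone and upstream names are placeholders:

```nginx
# Sketch: skip both cache lookup and cache storage when X-Dev is set.
location / {
    proxy_cache        my_zone;        # hypothetical cache zone
    proxy_cache_bypass $http_x_dev;    # don't serve this request from cache
    proxy_no_cache     $http_x_dev;    # don't store the response either
    proxy_pass http://UPSTREAM;        # placeholder upstream
}
```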
I am new to nginx, so I'm not sure if this is possible.
However, I am attempting to create short, long, and never caches for sites to use.
I naively attempted to set these up in my http block:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=short:10m;
proxy_cache short;
proxy_cache_key "short:$scheme$proxy_host$uri$is_args$args";
proxy_cache_valid 2m;
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=long:10m;
proxy_cache long;
proxy_cache_key "long:$scheme$proxy_host$uri$is_args$args";
proxy_cache_valid 1h;
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=day:10m;
proxy_cache never;
proxy_cache_key "long:$scheme$proxy_host$uri$is_args$args";
proxy_cache_valid 1d;
proxy_cache off;
Upon reload this throws the following error:
[emerg]: "proxy_cache" directive is duplicate in
How can I setup different cache paths to share among my virtual hosts?
The proxy_cache directive means "use this cache right now in this block", so since you are using this directive multiple times, nginx can't decide which cache to use and shows an error. What you must do is remove the proxy_cache and proxy_cache_valid directives from the http block and use one of each at a time in location and/or server blocks.
You must know that the use of the proxy_cache and proxy_cache_valid directives is forbidden in if blocks, so you may not get what you want this way (I'm assuming you will select a particular cache based on some test).
Therefore, another approach is to use specific headers in the upstream's reply. You can use one of the following headers to set the caching time:
X-Accel-Expires
Cache-Control
Expires
Nginx honors these headers by default. You can tell it to ignore some of them when deciding the caching duration with proxy_ignore_headers.
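As a sketch of the first approach (zone names, paths and the upstream are assumptions): declare each proxy_cache_path once in the http block, then select a zone per location:

```nginx
http {
    # One proxy_cache_path per zone; each zone needs its own directory
    proxy_cache_path /var/cache/nginx/short levels=1:2 keys_zone=short:10m;
    proxy_cache_path /var/cache/nginx/long  levels=1:2 keys_zone=long:10m;

    server {
        location /api/ {
            proxy_cache short;
            proxy_cache_valid 200 2m;
            proxy_pass http://backend;   # placeholder upstream
        }

        location /assets/ {
            proxy_cache long;
            proxy_cache_valid 200 1h;
            proxy_pass http://backend;
        }

        location /private/ {
            proxy_cache off;             # never cache this one
            proxy_pass http://backend;
        }
    }
}
```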
I'm curious whether nginx can be configured so the cache is saved out in some manner that would make the data user-friendly. While all my options might fall short of anything anyone would consider "human friendly", I'm generally interested in how people configure it to meet their specific needs. The documentation may be complete, but I am very much a learn-by-example type of guy.
My current configuration is from an example I ran across, and if it were used it would be little more than proof to me that nginx correctly proxy-caches the data:
http {
    # unrelated stuff...
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        server_name g.sente.cc;

        location /stu/ {
            proxy_pass http://sente.cc;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
Nginx has two methods to cache content:
proxy_store is when nginx builds a mirror: it stores the file, preserving the same path, while proxying it from the upstream. After that, nginx serves the mirrored file for all subsequent requests to the same URI. The downside is that nginx does not control expiration; however, you are able to remove (and add) files at will.
proxy_cache is when Nginx manages a cache, checking expiration, cache size, etc.
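A minimal proxy_store sketch, where the paths and the upstream address are placeholders:

```nginx
# Sketch: mirror files from the upstream on first request,
# then serve the stored copy directly on later requests.
location / {
    root /var/www/mirror;          # where mirrored files accumulate
    try_files $uri @fetch;         # serve the local copy if present
}

location @fetch {
    proxy_pass http://1.2.3.4;     # placeholder upstream
    proxy_store on;                # save the response under root + URI
    proxy_store_access user:rw group:rw all:r;
    root /var/www/mirror;
}
```

Because the stored files keep their URI paths under the root directory, this layout is about as "human friendly" as an nginx-managed store gets: you can browse, delete or replace individual files by hand.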