nginx - create multiple cache paths - caching

I am new to nginx, so I'm not sure if this is possible.
However, I am attempting to create short, long, and never caches for sites to use.
I naively attempted to set these up in my http block:
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=short:10m;
proxy_cache short;
proxy_cache_key "short:$scheme$proxy_host$uri$is_args$args";
proxy_cache_valid 2m;
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=long:10m;
proxy_cache long;
proxy_cache_key "long:$scheme$proxy_host$uri$is_args$args";
proxy_cache_valid 1h;
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=day:10m;
proxy_cache never;
proxy_cache_key "long:$scheme$proxy_host$uri$is_args$args";
proxy_cache_valid 1d;
proxy_cache off;
Upon reload this throws the following error:
[emerg]: "proxy_cache" directive is duplicate in
How can I setup different cache paths to share among my virtual hosts?

The proxy_cache directive means "use this cache right now in this block". Because you use this directive multiple times in the same http block, nginx can't decide which cache to use and reports an error. What you must do is remove the proxy_cache and proxy_cache_valid directives from the http block and use one of each at a time in your location and/or server blocks.
Note that the proxy_cache and proxy_cache_valid directives are not allowed in if blocks, so you may not get what you want this way (I'm assuming you want to select a particular cache based on some test).
Therefore, another approach is to use specific headers in the upstream's reply. You can set the caching time with one of the following headers:
X-Accel-Expires
Cache-Control
Expires
Nginx honors these headers by default. You can tell it to ignore some of them when deciding the caching duration with proxy_ignore_headers.
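For reference, here is a minimal sketch of the first approach, using the zone names from the question (the upstream name and the location paths are assumptions for illustration):

```nginx
http {
    # Declare every cache zone once, at http level
    proxy_cache_path /var/cache/nginx  levels=1:2 keys_zone=short:10m;
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=long:10m;

    server {
        # Short-lived cache for frequently changing pages
        location /news/ {
            proxy_pass        http://backend;
            proxy_cache       short;
            proxy_cache_valid 200 2m;
        }

        # Long-lived cache for mostly static pages
        location /docs/ {
            proxy_pass        http://backend;
            proxy_cache       long;
            proxy_cache_valid 200 1h;
        }

        # No caching at all
        location /admin/ {
            proxy_pass  http://backend;
            proxy_cache off;
        }
    }
}
```

Each location picks exactly one zone (or disables caching), so nginx never sees two competing proxy_cache directives in the same block.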

Related

Performance Engineering on Apostrophe CMS

So after three weeks of 12hr shifts I am almost done with building a Knowledge Base system using Apostrophe. Now comes the task of speeding things up on the front end. My questions are:
How do I add expires headers: Express has built-in middleware called static; can I just implement it normally under the index.js file?
Minify JavaScript & CSS: http://apostrophecms.org/docs/modules/apostrophe-assets/ I saw that Apostrophe has something built in, but it's not clear how to enable it. Also, do I need to put the assets in a specific folder for it to work? Right now I have all the JS files under lib/modules/apostrophe-pages/public/js and CSS under public/css.
Enable GZIP on the server: there is nothing mentioned, but again Express does have gzip modules; can I just implement them under lib/modules/apostrophe-express/index.js?
Any help is greatly appreciated.
I'm the lead developer of Apostrophe at P'unk Avenue.
Sounds like you found our deployment HOWTO with the recently added material about minification, and so you have sorted out that part yourself. That's good.
As for expires headers and gzip on the server, while you could do that directly in node, we wouldn't! In general, we never have node talk directly to the end user. Instead, we use nginx as a reverse proxy, which gives us load balancing and allows us to deliver static files directly. nginx, being written in C, is faster at that. Also, its implementations of gzip and TLS are very much battle-tested. No need to make JavaScript do what it's not best at.
We usually configure nginx using mechanic, which we created to manage nginx with a few commands rather than writing configuration files by hand. Our standard recipe for that includes both gzip and expires headers.
However, here's an annotated version of the nginx configuration file it creates. You'll see that it covers load balancing, gzip and longer expiration times for static files.
# load balance across 4 instances of apostrophe listening on different ports
upstream upstream-example {
    server localhost:3000;
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}

server {
    # gzip transfer encoding
    gzip on;
    gzip_types text/css text/javascript image/svg+xml
        application/vnd.ms-fontobject application/x-font-ttf
        application/x-javascript application/javascript;

    listen *:80;
    server_name www.example.com example.com;
    client_max_body_size 32M;

    access_log /var/log/nginx/example.access.log;
    error_log /var/log/nginx/example.error.log;

    # reverse proxy: pass requests to nodejs backends
    location @proxy-example-80 {
        proxy_pass http://upstream-example;
        proxy_next_upstream error timeout invalid_header http_500 http_502
            http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Deliver static files directly if they exist matching the URL,
    # if not proxy to node
    location / {
        root /opt/stagecoach/apps/example/current/public/;
        try_files $uri @proxy-example-80;
        # Expires header: 7-day lifetime for static files
        expires 7d;
    }
}

Nginx using etags with gzipped content when proxying

I want to use nginx to be a proxy between clients and a set of my apps, the problem I encountered is:
I have an app (on a different machine than nginx) that serves static content (images, css etc.)
That static content is gzipped
I configured nginx as follows:
location ^~ /static/ {
    etag on;
    expires 1d;
    more_set_headers "Vary: Accept-Encoding";

    location ~* \.(css|js)$ {
        gzip_static on;
        expires -1;
        proxy_pass http://my_upstream;
    }

    proxy_pass http://my_upstream;
}
and I was expecting to have etags working for things like js and css, but they are not. I suppose it's because the js and css files are not on the same machine as nginx, and that's the problem gzip_static on is supposed to fix.
So basically my question is: is it possible to have it working that way? And if it is, how do I do it? :)
According to this forum post, etag on is not compatible with a proxy_pass that may modify the response (as gzip_static does):
The ETag header used in its strong form means that it must be
changed whenever bits of an entity are changed. This basically
means that ETag headers have to be removed by filters which change
a response content.
As suggested in that same post, passing through Last-Modified will usually be enough for clients to make conditional requests.
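A minimal sketch of that fallback, using the upstream name from the question (swapping gzip_static for on-the-fly gzip is an assumption, since the files live on the upstream, not on the nginx machine):

```nginx
location ^~ /static/ {
    proxy_pass http://my_upstream;
    expires 1d;

    # Compress on the fly. The gzip filter removes or weakens the
    # ETag (depending on nginx version), but Last-Modified from the
    # upstream passes through untouched, so clients can still
    # revalidate with If-Modified-Since conditional requests.
    gzip on;
    gzip_types text/css application/javascript;
    gzip_vary on;
}
```

The trade-off is that validation is time-based (Last-Modified) rather than content-based (ETag), which is usually sufficient for static assets.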

Debugging in nginx yields 1 line and infinite frustration - How to Debug the Debugger

Yay for experimental projects! I decided to try setting up my blog with Facebook's new hhvm-fastcgi and WordPress. I followed the instructions and am using the following nginx configuration:
server {
    listen *:80 default;
    server_name _;

    access_log /home/blogs/logs/nginx/access.log;
    error_log /home/blogs/logs/nginx/error.log debug;

    location / {
        deny all;
    }
}

server {
    listen *:80;
    server_name www.site.com;
    root /home/blogs/wordpress/;
    index index.html index.php index.htm;

    access_log /home/blogs/logs/nginx/site/access.log main;
    error_log /home/blogs/logs/nginx/site/error.log debug;

    # proxy_redirect off;
    set $myuri $request_uri;
    rewrite ^/wp-admin\/a$ /wp-admin/index.php;

    if (!-e $request_filename) {
        rewrite /wp-admin$ $scheme://$host$uri/ permanent;
        rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
        rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
    }

    # Try MemCached First
    location / {
        set $memcached_key "$uri?$args";
        memcached_pass 127.0.0.1:11211;
        error_page 404 405 502 504 = @fallback;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/blogs/wordpress$fastcgi_script_name;
        include fastcgi_params;
    }

    location @fallback {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
It would be all too simple if it worked. Hitting the site just causes my browser to hang and ultimately give up, but the debug log (from /home/blogs/logs/nginx/error.log, as /home/blogs/logs/nginx/site/error.log is just blank) yields only one line:
2014/01/03 19:20:35 [debug] 8536#0: epoll add event: fd:11 op:1 ev:00000001
I'm guessing the weak link is nGinx.
Trying to hit the site from a restricted domain produces the 403 as expected, and makes the debug log actually work.
My question is less how to make my setup work than why the setup isn't debugging. A simple fuser tells me HipHop is running on 9000. I feel like I could make some headway if I knew what was wrong.
I'm super self-conscious of my questions on Stack Overflow; I've seen people get ripped apart and it's frankly quite scary. I realize there's another similar, very recent question: HHVM with Nginx fastcgi not working properly. But given that our configurations are not quite the same, and my question is more about the debug log (albeit a very short one), I thought my situation warranted another question.
NOTE:
Tag by rights should be hhvm-fastcgi, but I don't have the rep to create it as a tag.
Wow. Just Wow.
Turns out, after struggling with this for far too long, that my firewall was blocking port 80. Why was I able to invoke a 403? I occasionally run a proxy through the server, so the other domain I tested with was seen as an internal request.
I'm guessing this proxy mix-up is what led to anything being in the error logs at all.
As deeply embarrassing as this mix-up has been, I'm going to leave this question up, because I've taken something away from the experience.
First of all, don't point fingers:
I immediately assumed that, because nothing else could have been invoked, it was nginx's fault. The strange debug log egged on my doubts.
Secondly, look higher:
No point looking at the middle of the stack. I should have looked for requests, and made sure that calls were even being made.
Thirdly, keep track of what you're doing:
My obvious confusion stemmed from the fact I had some weird proxy mojo going on. Had I taken the time to remember everything out of the ordinary I was doing with my server, I might have thought to check my firewall settings earlier.
There go my chances at a tumbleweed badge

How would one recreate Cloudflare's always online feature using nginx?

Cloudflare uses nginx.
They have this feature called Always Online: http://www.cloudflare.com/always-online
As their website states:
Always Online is a feature that caches a static version of your pages in case your server goes offline.
I would like to set up a caching nginx server on the other side of the globe, have it cache my website's static files, and point my secondary DNS to it. If my website's server goes down, a cached version will be shown.
Is this possible to do using nginx reverse-proxy feature?
Or, I could also save a copy of all my static files including .html files in the nginx server, and have it load these files when the main server is offline.
Can nginx do this?
In Nginx it is possible to set up caching. The documentation even provides an example of an Nginx reverse proxy with caching:
http {
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m
        inactive=24h max_size=1g;

    server {
        location / {
            proxy_pass http://1.2.3.4;
            proxy_set_header Host $host;
            proxy_cache STATIC;
            proxy_cache_valid 200 1d;
            proxy_cache_use_stale error timeout invalid_header updating
                http_500 http_502 http_503 http_504;
        }
    }
}
Note the proxy_cache_use_stale directive in the example: in the event of a range of errors (including timeouts, 500, 502, 503 and 504), the stale cached item is served instead.

nginx - can proxy caching be configured so files are saved without HTTP headers or otherwise in a more "human friendly" format?

I'm curious if nginx can be configured so the cache is saved out in some manner that would make the data user-friendly. While all my options might fall short of anything anyone would consider "human friendly", I'm generally interested in how people configure it to meet their specific needs. The documentation may be complete, but I am very much a learn-by-example type of guy.
My current configuration is from an example I ran across and, were it to be used, would be little more than proof to me that nginx correctly proxy-caches the data:
http {
    # unrelated stuff...
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        server_name g.sente.cc;

        location /stu/ {
            proxy_pass http://sente.cc;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
Nginx has two methods to cache content:
proxy_store is when Nginx builds a mirror. That is, it stores the file preserving the same path while proxying from the upstream. After that, Nginx serves the mirrored file for all subsequent requests to the same URI. The downside is that Nginx does not control expiration; however, you are able to remove (and add) files at will.
proxy_cache is when Nginx manages a cache, checking expiration, cache size, etc.
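Since proxy_store saves plain response bodies with no HTTP headers prepended, it is the closer fit for a "human friendly" on-disk layout. A minimal sketch, reusing the upstream from the question (the mirror path /var/www/mirror is an assumption):

```nginx
location /stu/ {
    root /var/www/mirror;

    # Serve the mirrored copy if we already have it,
    # otherwise fetch it from the upstream and store it
    try_files $uri @fetch;
}

location @fetch {
    proxy_pass http://sente.cc;

    # Save the response body as a plain file under the same path
    proxy_store /var/www/mirror$uri;
    proxy_store_access user:rw group:rw all:r;

    # Buffer to a temp location on the same filesystem first
    proxy_temp_path /var/www/mirror/tmp;
}
```

The files under /var/www/mirror mirror the URL structure and can be inspected, edited, or deleted by hand, at the cost of nginx doing no expiration for you.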