nginx cache slice-by-slice and browser cache - caching

I'm using "Filling the Cache Slice-by-Slice" from this article https://www.nginx.com/blog/nginx-caching-guide/
proxy_cache_path /tmp/mycache keys_zone=mycache:10m;

server {
    listen 80;

    proxy_cache mycache;
    slice 1m;
    proxy_cache_key $host$uri$is_args$args$slice_range;
    proxy_set_header Range $slice_range;
    proxy_http_version 1.1;
    proxy_cache_valid 200 206 1h;

    location / {
        proxy_pass http://origin:80;
    }
}
nginx is caching the video from the server, but the browser is not caching the video. Help me please.

I solved the problem. I added ETag and Last-Modified headers on the backend.
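In case it helps anyone else: the slice configuration above only controls nginx's own cache; the browser decides whether to cache based on what reaches it in the response. A minimal sketch of the client-facing side, assuming the backend now sends ETag/Last-Modified as described (the Cache-Control value is just an example, not from the original setup):

location / {
    proxy_pass http://origin:80;

    # The backend's ETag / Last-Modified validators are passed through to the
    # browser by default, so it can revalidate instead of re-downloading.
    # Optionally give the browser an explicit freshness lifetime as well
    # (example value only; tune max-age for your content).
    add_header Cache-Control "public, max-age=3600";
}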

Related

Redirect all HTTP traffic to HTTPS seems to be impossible

I am posting this question because my previous attempts have proved futile.
I have a Rails server using nginx, and I am trying to redirect all HTTP traffic to HTTPS.
Here is my nginx.conf file:
upstream backend {
    server unix:PROJECT_PATH/tmp/thin1.sock;
    server unix:PROJECT_PATH/tmp/thin2.sock;
    server unix:PROJECT_PATH/tmp/thin3.sock;
    server unix:PROJECT_PATH/tmp/thin4.sock;
    server unix:PROJECT_PATH/tmp/thin5.sock;
    server unix:PROJECT_PATH/tmp/thin6.sock;
    server unix:PROJECT_PATH/tmp/thin7.sock;
    server unix:PROJECT_PATH/tmp/thin8.sock;
}

server {
    listen 80 default_server;
    listen 443 default_server ssl;
    server_name app_name;

    ssl_certificate path_to_certificate_file.crt;
    ssl_certificate_key path_to_certificatefile.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    root PATH_TO_PUBLIC_FOLDER;

    access_log path_to_project/log/access.log;
    error_log path_to_project/log/error.log;

    client_max_body_size 10m;
    large_client_header_buffers 4 16k;

    location /ping {
        echo "pong";
        return 200;
    }

    # Cache static content
    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|swf|wav)$ {
        expires max;
        log_not_found off;
    }

    # Status, local only (accessed via ssh+wget)
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    # double slash removal
    set $test_uri $host$request_uri;
    if ($test_uri != $host$uri$is_args$args) {
        rewrite ^/(.*)$ /$1 break;
    }

    location / {
        if ($http_x_forwarded_proto = 'http') {
            return 301 https://$server_name$request_uri;
        }
        try_files $uri @proxy;
    }

    location @proxy {
        proxy_redirect off;
        # Inform we are on SSL
        proxy_set_header X-Forwarded-Proto https;
        # force timeouts if one of the backends has died
        proxy_next_upstream error timeout invalid_header http_502 http_503;
        # Set headers
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
    }

    error_page 500 502 503 504 /500.html;
}
The current configuration causes:
400 Bad Request The plain HTTP request was sent to HTTPS port
You may notice the /ping location. That's because I have the servers behind a GCE balancer that performs a health check, and this is THE ONLY one I do not want to redirect. Everything else should be redirected to HTTPS.
Previous attempts:
server {
    listen 80;
    server_name app_name;

    location /ping {
        echo "pong";
        return 200;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
The HTTPS server block was the same as in the current config (with listen 80 default_server commented out). This caused a "too many redirects" error.
I tried to simply redirect ALL traffic to https, including the health check. GCE expects a 200 response and instead it gets a 301, thus marking the machine as unhealthy and rendering the application useless.
I also tried ssl on; in the HTTPS server config, with the same result (400).
I also tried toggling config.force_ssl = true in the Rails project, to no avail. Every other solution I try fails too.
Did anyone stumble on this also?
It turned out the problem was not the nginx config but the certificates.
Installing a valid certificate let me create an HTTPS backend and health check. Everything is working fine now.
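For reference, a minimal sketch of the health-check exemption the question was aiming for, assuming the GCE load balancer probes /ping over plain HTTP and sets X-Forwarded-Proto for client traffic (this is only the pattern; the actual fix here was the certificate):

server {
    listen 80 default_server;

    # Load balancer health check: always answer 200, never redirect.
    # "return" with a body avoids depending on the echo module.
    location = /ping {
        return 200 "pong";
    }

    # Anything that originally arrived as plain HTTP gets redirected.
    location / {
        if ($http_x_forwarded_proto = 'http') {
            return 301 https://$server_name$request_uri;
        }
        try_files $uri @proxy;
    }
}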

Nginx proxy_cache

I am trying to get nginx running as a caching proxy. The server is up and running properly, except it seems it's not caching properly.
According to a scan with Pingdom:
The following cacheable resources have a short freshness lifetime. Specify
an expiration at least one week in the future for the following resources:
/layout/theme1/css/bootstrap-responsive.css
/layout/theme1/css/bootstrap.min.css
/layout/theme1/css/font-awesome.css
/layout/theme1/css/pages/dashboard.css
/layout/theme1/css/style.css
/layout/theme1/css/theme.css
/layout/theme1/img/logo.png
/layout/theme1/js/bootstrap.js
/layout/theme1/js/chart.min.js
/layout/theme1/js/excanvas.min.js
/layout/theme1/js/jquery.js
My nginx proxy_cache settings are as follows:
Proxy Cache Path Settings:
proxy_cache_path /var/cache/nginx/marketers.coop levels=1:2 keys_zone=marketers.coop:10m inactive=525600m max_size=5120m;
location settings:
location / {
    proxy_pass http://192.227.210.138:8080;
    proxy_cache cache;
    proxy_cache_valid 30d;
    proxy_cache_valid 404 30d;
    proxy_no_cache $no_cache;
    proxy_cache_bypass $no_cache;
    proxy_cache_bypass $cookie_session $http_x_update;

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|svg|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|odt|ods|odp|odf|tar|wav|bmp|rtf|js|mp3|avi|mpeg|flv|html|htm)$ {
        proxy_pass http://192.227.210.138:8080;
        proxy_cache cache;
        proxy_cache_valid 30d;
        proxy_cache_valid 404 30d;
        root /home/admin/web/marketers.coop/public_html;
        access_log /var/log/apache2/domains/marketers.coop.log combined;
        access_log /var/log/apache2/domains/marketers.coop.bytes bytes;
        expires 365d;
        try_files $uri @fallback;
    }
}
As can be seen, both CSS and JS are set to be cached, yet neither the 365-day expires nor the 30-day proxy_cache_valid seems to be having any effect. What am I doing wrong?

Preemptive caching with Nginx

How to make a preemptive cache with nginx?
Currently, the cache becomes stale and unloads lots of images at once.
In my http section I have
proxy_cache_path /var/cache/nginx levels=1:1 keys_zone=zone:10m;
In my server configuration I have something like
server {
    listen 80 default deferred;
    server_name myservername;
    root /myapp/public;

    client_max_body_size 2G;

    proxy_cache_bypass $http_pragma;
    proxy_cache_valid 200 301 302 304 1M;
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
    proxy_cache zone;

    gzip_static on;

    try_files $uri @app;

    location @app {
        if ($request_uri ~* "\.(ico|css|js|gif|jpe?g|png)\?[0-9]+$") {
            expires max;
            break;
        }
        client_body_buffer_size 32k;
        proxy_buffers 8 64k;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://myupstream;
    }
}

nginx is not caching proxied responses on disk, even though I ask it to

I'm using nginx as a load-balancing proxy, and I would also like it to cache its responses on disk so it doesn't have to hit the upstream servers as often.
I tried following the instructions at http://wiki.nginx.org/ReverseProxyCachingExample. I'm using nginx 1.7 as provided by Docker.
Here's my nginx.conf (which gets installed into nginx/conf.d/):
upstream balancer53 {
    server conceptnet-api-1:10053;
    server conceptnet-api-2:10053;
}

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:1g max_size=1g;

server {
    listen 80;

    gzip on;
    gzip_proxied any;
    gzip_types application/json;
    charset utf-8;
    charset_types application/json;

    location /web {
        proxy_pass http://balancer53;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
    }

    location /data/5.3 {
        proxy_pass http://balancer53;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
    }

    location /data/5.2 {
        # serve the old version
        proxy_pass http://conceptnet52:10052/;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
    }

    location / {
        root /var/www;
        index index.html;
        autoindex on;
        rewrite ^/static/(.*)$ /$1;
    }
}
Despite this configuration, nothing ever shows up in /data/nginx/cache.
Here's an example of the response headers from the upstream server:
$ curl -vs http://localhost:10053/data/5.3/assoc/c/en/test > /dev/null
* Hostname was NOT found in DNS cache
* Trying ::1...
* Connected to localhost (::1) port 10053 (#0)
> GET /data/5.3/assoc/c/en/test HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:10053
> Accept: */*
>
< HTTP/1.1 200 OK
* Server gunicorn/19.1.1 is not blacklisted
< Server: gunicorn/19.1.1
< Date: Thu, 06 Nov 2014 20:54:52 GMT
< Connection: close
< Content-Type: application/json
< Content-Length: 1329
< Access-Control-Allow-Origin: *
< X-RateLimit-Limit: 60
< X-RateLimit-Remaining: 59
< X-RateLimit-Reset: 1415307351
<
{ [data not shown]
* Closing connection 0
Each upstream server is enforcing a rate limit, but I am okay with disregarding the rate limit on cached responses. I was unsure whether these headers were preventing caching, which is why I told nginx to ignore them.
What do I need to do to get nginx to start using the cache?
The official documentation says that if the response includes the "Set-Cookie" header field, such a response will not be cached (see the ngx_http_proxy_module documentation).
To make caching work, use the hide-and-ignore technique:
location /web {
    ...
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers Set-Cookie;
}
I tried running nginx alone with that nginx.conf and found that it complained about some of the options being invalid. I don't think I was ever successfully building a new nginx container at all.
In particular, it turns out you can't put just any header in the proxy_ignore_headers option. It only takes particular headers as arguments, the ones the proxy module itself cares about.
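For reference, as far as I can tell from the nginx documentation, the fields proxy_ignore_headers will accept are exactly the upstream response fields the proxy module acts on, which is why the X-RateLimit-* names were rejected:

# Fields proxy_ignore_headers can disable processing of (per the nginx docs):
#   X-Accel-Expires, X-Accel-Redirect, X-Accel-Limit-Rate, X-Accel-Buffering,
#   X-Accel-Charset, Expires, Cache-Control, Set-Cookie, Vary (newer versions)
# Anything else, e.g. X-RateLimit-Limit, is reported as invalid at startup.
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;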
Here is my revised nginx.conf, which worked:
upstream balancer53 {
    server conceptnet-api-1:10053;
    server conceptnet-api-2:10053;
}

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:100m max_size=100m;

server {
    listen 80;

    gzip on;
    gzip_proxied any;
    gzip_types application/json;
    charset utf-8;
    charset_types application/json;

    location /web {
        proxy_pass http://balancer53;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    }

    location /data/5.3 {
        proxy_pass http://balancer53;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    }

    location / {
        root /var/www;
        index index.html;
        autoindex on;
        rewrite ^/static/(.*)$ /$1;
    }
}
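One way to confirm the cache is actually being used (not part of the config above, just a common check): expose $upstream_cache_status as a response header and watch it go from MISS to HIT on a repeated request. The header name is only a convention:

location /data/5.3 {
    proxy_pass http://balancer53;
    proxy_cache STATIC;
    proxy_cache_valid 200 1d;

    # MISS on the first request, HIT once the response has been stored on disk.
    add_header X-Cache-Status $upstream_cache_status;
}

Running curl -s -D - -o /dev/null http://localhost/data/5.3/assoc/c/en/test twice should then show X-Cache-Status: MISS followed by X-Cache-Status: HIT.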

How to cache certain pages in nginx as a reverse proxy

Well, I have my site.conf file like this:
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_path /etc/nginx/cache/pag levels=1:2 keys_zone=APP:100m inactive=1m;
proxy_temp_path /etc/nginx/cache/tmp;

add_header X-Cache $upstream_cache_status;

server {
    listen 80;
    root /etc/nginx/html;
    index index.html index.htm;
    server_name www.example.com;

    error_page 404 /404.html;
    location /404.html {
        internal;
    }

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_cache APP;
        proxy_cache_valid 200 1m;
        proxy_cache_methods POST;
        expires 1m;
    }
}
With this configuration, everything (including POST requests) is cached for 1 minute, OK.
What do I need? I need only these pages to be cacheable:
1) www.example.com
2) www.example.com/index.html
3) www.example.com/test/page.html
4) www.example.com/test/text.txt (this is a file requested by POST through page.html, and I need it cached as well)
5) www.example.com/test/page2.php?var1=val1&var2=val2 (val1 and val2 are dynamics)
My question is: what do I have to put in location / to match items 1-5? Something like this:
location (1-5 items match) {
    proxy_pass http://127.0.0.1:8080/;
    proxy_cache APP;
    proxy_cache_valid 200 1m;
    proxy_cache_methods POST;
    expires 1m;
}
Other pages (not cached) will be automatically redirected to 127.0.0.1:8080. I know this can be done like this:
location / {
    proxy_pass http://127.0.0.1:8080/;
}
NOTE 1: Other PHP pages receive POST/GET requests, but I don't need them cached, only the ones above.
NOTE 2: 127.0.0.1:8080 is an Apache server that runs PHP, so I can request PHP pages.
Since Apache runs on the same host, simply serve the HTML files you do not want cached through nginx. As for the PHP pages, send the correct expiration headers from your application and everything will work correctly.
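If you do want nginx itself to cache only those exact URIs, a hedged sketch (paths taken from the question; everything else is illustrative) is to route just them through the cached proxy and leave the catch-all location alone:

# Only the listed pages go through the cached proxy. The query string of
# page2.php is not part of the URI, so the regex still matches it; the query
# string is already part of the cache key via $request_uri.
location ~ ^(/|/index\.html|/test/page\.html|/test/text\.txt|/test/page2\.php)$ {
    proxy_pass http://127.0.0.1:8080;   # no URI part allowed in regex locations
    proxy_cache APP;
    proxy_cache_valid 200 1m;
    proxy_cache_methods POST;
    expires 1m;
}

# Everything else is proxied without caching.
location / {
    proxy_pass http://127.0.0.1:8080/;
}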
