I configured nginx to cache all files for 3 minutes. This works only for files I upload to the webserver manually; all files generated by the CMS get cached forever (or at least far longer than I was willing to wait)...
The CMS delivers all pages as "index.html" within its own folder structure (www.x.de/category1/category2/articlename/index.html).
How can I debug this? Is there a way to check the lifetime of a specific file?
Can something in the .html files override the proxy_cache_valid value?
Many thanks!
Config:
server {
listen 1.2.3.4:80 default_server;
server_name x.de;
server_name www.x.de;
server_name ipv4.x.de;
client_max_body_size 128m;
location / { # IPv6 isn't supported in proxy_pass yet.
proxy_pass http://apache.ip:7080;
proxy_cache my-cache;
proxy_cache_valid 200 3m;
proxy_cache_valid 404 1m;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Accel-Internal /internal-nginx-static-location;
access_log off;
}
location /internal-nginx-static-location/ {
alias /var/www/vhosts/x.de/httpdocs/cms/;
access_log /var/www/vhosts/x.de/statistics/logs/proxy_access_log;
add_header X-Powered-By PleskLin;
internal;
}}
Using curl -I, you can retrieve the response headers, which will tell you what the cache settings are.
E.g.
>>> curl -I http://www.google.com
HTTP/1.1 200 OK
Date: Sun, 09 Feb 2014 06:28:36 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked
Cache behaviour is controlled by the response headers, so it's not possible for the HTML content itself to modify it. Note, though, that nginx honours the upstream's X-Accel-Expires, Cache-Control and Expires headers in preference to proxy_cache_valid, so a CMS that sends those headers can extend the cache lifetime well beyond your 3 minutes.
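If you want proxy_cache_valid to win regardless of what the CMS sends, here is a minimal sketch (untested against this exact setup; X-Cache-Status is just a debugging header name I picked):
location / {
    proxy_pass http://apache.ip:7080;
    proxy_cache my-cache;
    proxy_cache_valid 200 3m;
    proxy_cache_valid 404 1m;
    # Upstream X-Accel-Expires/Cache-Control/Expires headers normally take
    # priority over proxy_cache_valid; ignore them so the 3m always applies.
    proxy_ignore_headers X-Accel-Expires Cache-Control Expires;
    # Shows MISS/HIT/EXPIRED per request, so curl -I reveals the effective
    # lifetime of any specific file.
    add_header X-Cache-Status $upstream_cache_status;
}
Two curl -I requests to the same URL more than three minutes apart should then show the status flip from HIT back to EXPIRED or MISS.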
Related
I've defined the nginx cache config below.
location ~* \.(ico|css|js|gif|jpe?g|png)$ {
proxy_cache folder;
proxy_cache_min_uses 1;
proxy_cache_valid 200 60m;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_cache_valid any 0;
proxy_cache_key $scheme$proxy_host$host$request_uri;
client_max_body_size 50m;
proxy_pass http://upstream_folder;
proxy_http_version 1.1;
proxy_buffers 4 256k;
proxy_buffer_size 128k;
proxy_busy_buffers_size 256k;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
keepalive_timeout 5;
}
I don't want to cache non-200 responses for static content like img/css/js/gif, etc.
But for requests like the one below, which gets a 302 redirect response, nginx caches it anyway. Please let me know what is wrong with my nginx config, and how to avoid caching the 302 response for images or any other static content.
curl 'https://localhost:9000/hello.jpg' --verbose
* Trying localhost....
* Connected to localhost (ip) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
> GET /support.jpg HTTP/1.1
> Host: localhost
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 302 Moved Temporarily
< Server: nginx/1.8.0
< Date: Tue, 19 Dec 2017 00:47:51 GMT
< Content-Type: image/jpeg
< Content-Length: 0
< Connection: keep-alive
< Cache-Control: public, max-age=315358505
< Expires: Fri, 17 Dec 2027 00:22:56 GMT
< Location: https://localhost:9000/login.jspa?referer=%252Fhello.jpg&hint=
< P3P: CP="CAO PSA OUR"
< X-Frame-Options: SAMEORIGIN
< X-JVL: D=16190 t=1513642992041808
< X-Robots-Tag: none
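Going by those headers, the likely culprit is that the upstream sends Cache-Control: public with a huge max-age, and nginx gives upstream Cache-Control/Expires headers priority over proxy_cache_valid, so the proxy_cache_valid any 0; line never gets a say. A sketch of one way to stop that (untested against this exact setup):
location ~* \.(ico|css|js|gif|jpe?g|png)$ {
    # ... existing directives as above ...
    # Stop the upstream's Cache-Control/Expires from overriding proxy_cache_valid.
    proxy_ignore_headers Cache-Control Expires;
    # Belt and braces: never store responses that carry a Location header,
    # i.e. redirects ($upstream_http_location is empty for non-redirects).
    proxy_no_cache $upstream_http_location;
}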
We have an NGINX reverse proxy in front of our web server, and we need it to cache responses based on the request body (for a detailed explanation of why, see this other question).
The problem I have is that even though the POST data is the same, the multipart boundary is different every time (the boundaries are generated by the web browser).
Here's a simplified example of what the request looks like (notice the browser-generated WebKitFormBoundaryV2BlneaIH1rGgo0w):
POST http://my-server/some-path HTTP/1.1
Content-Length: 383
Accept: application/json
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryV2BlneaIH1rGgo0w
------WebKitFormBoundaryV2BlneaIH1rGgo0w
Content-Disposition: form-data; name="some-key"; filename="some-pseudo-file.json"
Content-Type: application/json
{ "values": ["value1", "value2", "value3"] }
------WebKitFormBoundaryV2BlneaIH1rGgo0w--
If it's useful to someone, here's a simplified example of the NGINX config we use
proxy_cache_path /path/to/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;
location /path/to/post/request {
proxy_pass http://remote-server;
proxy_cache my_cache;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_lock on;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_methods POST;
proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$request_body";
proxy_cache_valid 5m;
client_max_body_size 1500k;
proxy_hide_header Cache-Control;
}
I have seen the Lua NGINX module, which looks like it could work, but I don't see how I could use it to parse the request data to build the cache key while still passing the original request upstream, or whether there's something I can use with "stock" NGINX.
Thanks!
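For what it's worth, here's a rough sketch of the Lua route, assuming OpenResty (or nginx built with lua-nginx-module): read the body, strip out the boundary taken from the Content-Type header, and feed the normalized body into the cache key, while the original request is passed upstream untouched. The $cache_body variable is a name I made up:
location /path/to/post/request {
    set $cache_body "";
    rewrite_by_lua_block {
        ngx.req.read_body()
        -- get_body_data() returns nil if the body was buffered to disk, so
        -- client_body_buffer_size must cover these requests (max 1500k here)
        local body = ngx.req.get_body_data() or ""
        local boundary = (ngx.var.content_type or ""):match("boundary=([^;]+)")
        if boundary then
            -- escape Lua pattern magic characters, then delete every
            -- occurrence of the boundary so identical payloads yield
            -- identical cache keys
            local esc = boundary:gsub("(%W)", "%%%1")
            body = body:gsub("%-%-" .. esc, "")
        end
        ngx.var.cache_body = body
    }
    proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$cache_body";
    # ... remaining proxy_* directives as above ...
}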
Our stack is nginx - gunicorn - mezzanine (django cms) running on an EC2 instance. Everything works, but I can't seem to enable nginx proxy_cache. Here is my minimal config:
upstream %(proj_name)s {
server 127.0.0.1:%(gunicorn_port)s;
}
proxy_cache_path /cache keys_zone=bravo:10m;
server {
listen 80;
listen 443;
server_name %(live_host)s;
client_max_body_size 100M;
keepalive_timeout 15;
location /static/ {
expires 1M;
access_log off;
add_header Cache-Control "public";
root %(proj_path)s;
}
location / {
expires 1M;
add_header X-Proxy-Cache $upstream_cache_status;
proxy_cache bravo;
proxy_ignore_headers Cache-Control Expires Set-Cookie;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://%(proj_name)s;
}
}
Sample response:
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 07 Jan 2015 03:43:47 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Vary: Cookie, Accept-Language
Content-Language: en
Expires: Fri, 06 Feb 2015 03:43:47 GMT
Cache-Control: max-age=2592000
X-Proxy-Cache: MISS
Content-Encoding: gzip
I have the mezzanine cache middleware enabled and it is returning responses with Set-Cookie headers, but proxy_ignore_headers should take care of that.
I did chmod 777 on the proxy_cache_path directory (/cache), so it shouldn't be a permissions issue.
Error logging is enabled but has produced nothing.
The proxy_cache_path directory remains completely empty...
Why is nginx not caching anything with this config?
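One thing stands out in the config, and it may be the whole story: proxy_ignore_headers discards Cache-Control and Expires, but no proxy_cache_valid is set, so nginx is left with no freshness information at all and stores nothing. A minimal sketch of the fix (the times are arbitrary):
location / {
    # ... as above ...
    proxy_cache bravo;
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    # With the upstream's caching headers ignored, nginx needs an explicit
    # validity period, otherwise it has nothing to cache responses for.
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
    # ... rest as above ...
}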
I'm using nginx as a load-balancing proxy, and I would also like it to cache its responses on disk so it doesn't have to hit the upstream servers as often.
I tried following the instructions at http://wiki.nginx.org/ReverseProxyCachingExample. I'm using nginx 1.7 as provided by Docker.
Here's my nginx.conf (which gets installed into nginx/conf.d/):
upstream balancer53 {
server conceptnet-api-1:10053;
server conceptnet-api-2:10053;
}
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:1g max_size=1g;
server {
listen 80;
gzip on;
gzip_proxied any;
gzip_types application/json;
charset utf-8;
charset_types application/json;
location /web {
proxy_pass http://balancer53;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
}
location /data/5.3 {
proxy_pass http://balancer53;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
}
location /data/5.2 {
# serve the old version
proxy_pass http://conceptnet52:10052/;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
}
location / {
root /var/www;
index index.html;
autoindex on;
rewrite ^/static/(.*)$ /$1;
}
}
Despite this configuration, nothing ever shows up in /data/nginx/cache.
Here's an example of the response headers from the upstream server:
$ curl -vs http://localhost:10053/data/5.3/assoc/c/en/test > /dev/null
* Hostname was NOT found in DNS cache
* Trying ::1...
* Connected to localhost (::1) port 10053 (#0)
> GET /data/5.3/assoc/c/en/test HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:10053
> Accept: */*
>
< HTTP/1.1 200 OK
* Server gunicorn/19.1.1 is not blacklisted
< Server: gunicorn/19.1.1
< Date: Thu, 06 Nov 2014 20:54:52 GMT
< Connection: close
< Content-Type: application/json
< Content-Length: 1329
< Access-Control-Allow-Origin: *
< X-RateLimit-Limit: 60
< X-RateLimit-Remaining: 59
< X-RateLimit-Reset: 1415307351
<
{ [data not shown]
* Closing connection 0
Each upstream server is enforcing a rate limit, but I am okay with disregarding the rate limit on cached responses. I was unsure whether these headers were preventing caching, which is why I told nginx to ignore them.
What do I need to do to get nginx to start using the cache?
The official documentation says: "If the header includes the 'Set-Cookie' field, such a response will not be cached." Please check it out here.
To make the cache work, use the hide-and-ignore technique:
location /web {
...
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
}
I tried running nginx alone with that nginx.conf, and found that it complained about some of the options being invalid. I don't think I was ever successfully building a new nginx container at all.
In particular, it turns out you can't put just any old headers in the proxy_ignore_headers option. It only accepts the particular headers the proxy system cares about (the X-Accel-* family, Expires, Cache-Control, Set-Cookie and Vary); my X-RateLimit-* entries were what made the config invalid.
Here is my revised nginx.conf, which worked:
upstream balancer53 {
server conceptnet-api-1:10053;
server conceptnet-api-2:10053;
}
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:100m max_size=100m;
server {
listen 80;
gzip on;
gzip_proxied any;
gzip_types application/json;
charset utf-8;
charset_types application/json;
location /web {
proxy_pass http://balancer53;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
}
location /data/5.3 {
proxy_pass http://balancer53;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
}
location / {
root /var/www;
index index.html;
autoindex on;
rewrite ^/static/(.*)$ /$1;
}
}
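To confirm the cache is actually in use, one option is to expose $upstream_cache_status and watch it flip from MISS to HIT (the header name and URL below are just examples):
# inside each cached location block:
add_header X-Cache-Status $upstream_cache_status;
# then, from a shell:
$ curl -sI http://localhost/data/5.3/assoc/c/en/test | grep X-Cache-Status
X-Cache-Status: MISS    # first request went to the upstream
$ curl -sI http://localhost/data/5.3/assoc/c/en/test | grep X-Cache-Status
X-Cache-Status: HIT     # second request came from /data/nginx/cache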
I'm working on a project with grails 2.2.2 on a local machine (Mac OS X Lion 10.7.5). I have installed NGINX with brew and modified the nginx.conf as follows:
worker_processes 1;
error_log logs/error.log info;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 8081;
server_name localhost;
root /;
access_log /Users/lorenzo/grails/projects/logs/myproject_access.log;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://localhost:8081;
}
#images folders
location /posters {
root /Users/lorenzo/grails/projects/posters/;
}
#images folders
location /avatars {
root /Users/lorenzo/grails/projects/avatars/;
}
#images folders
location /waveforms {
root /Users/lorenzo/grails/projects/waveforms/;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
When I access http://localhost:8081 my site is running, but I want to be sure the images are served by nginx and not by tomcat, so I look at myproject_access.log, and nothing is happening.
nginx is writing to the log ONLY when tomcat is NOT running.
Is there a way to "monitor" the static files served by nginx ?
Thank you
EDIT
Executing curl -I http://localhost:8081
when tomcat is running the output is:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1 //TOMCAT
...
when tomcat is NOT running the output is:
HTTP/1.1 500 Internal Server Error
Server: nginx/1.4.1 //NGINX
Date: Tue, 08 Apr 2014 09:30:00 GMT
Content-Type: text/html
Content-Length: 192
Connection: keep-alive
Your problem is that you are making both servers listen on the same port. You need to move tomcat to another port, like 8082, and let nginx listen on the main port (8081 in your case), then tell nginx to proxy to 8082 when the request isn't an image (or any other asset).
Also, here's a refinement to your server block:
server {
server_name localhost;
listen 8081;
root /Users/lorenzo/grails/projects;
location @tomcat {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://localhost:8082;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
location / {
try_files $uri $uri/ @tomcat;
}
}
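Once tomcat is moved to 8082, the curl -I trick from the question shows who answered each request (the image path is just an example):
$ curl -sI http://localhost:8081/posters/example.jpg | grep Server
Server: nginx/1.4.1        # static file served directly by nginx
$ curl -sI http://localhost:8081/some/page | grep Server
Server: Apache-Coyote/1.1  # request proxied through to tomcat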