403 on POST preflight (sometimes) - spring

I have a Spring Boot server behind an Nginx reverse proxy that I access using fetch from a React app. The frontend is served from a different port, so I have to enable CORS on the server. In most cases this works great, but about 1% of my users get a 403 in response to the OPTIONS preflight request, and I need help figuring out why. The biggest problem is that I'm unable to replicate the issue on any of my machines.
Spring Boot CORS config:
@Bean
public FilterRegistrationBean corsFilter() {
    CorsConfiguration config = new CorsConfiguration();
    config.addAllowedOrigin("https://example.com");
    config.addAllowedHeader("*");
    config.addAllowedMethod("GET");
    config.addAllowedMethod("PUT");
    config.addAllowedMethod("POST");
    config.addAllowedMethod("DELETE");
    config.addAllowedMethod("PATCH");

    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", config);

    FilterRegistrationBean bean = new FilterRegistrationBean(new CorsFilter(source));
    bean.setOrder(Ordered.HIGHEST_PRECEDENCE);
    return bean;
}
Nginx config (3000 is NodeJS serving frontend and 3001 is Spring Boot):
server {
    ...
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /api/v1/ {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    ...
}
Nginx log format (I removed some parts for clarity):
"$request" $status "$http_referer" "$http_user_agent"
By looking at the Nginx access.log I've nailed down two types of log rows where the 403s show up:
"OPTIONS /api/v1/oauth/token?grant_type=password&username=user%40example.com&password=****" 403 "https://www.example.com/login" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36"
meaning Windows 7 running Chrome 61, and
"OPTIONS /api/v1/oauth/token?grant_type=password&username=user%40example.com&password=****" 403 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko"
meaning Windows 7 running IE11.
Other users using the same setup of OS and browser experience no problems.
Data manually sent with fetch:
URL: https://example.com:3001/api/v1/oauth/token?grant_type=password&username=user%40example.com&password=****
Method: POST
Headers:
Authorization: Basic XXXXXXXXXXX=
Content-Type: application/x-www-form-urlencoded
Body: undefined
Actual parameters in the preflight request for a working user (from Chrome console):
Request headers:
OPTIONS /api/v1/oauth/token?grant_type=password&username=user%40example.com&password=**** HTTP/1.1
Host: https://example.com:3001
Connection: keep-alive
Access-Control-Request-Method: POST
Origin: https://example.com:3000
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36
Access-Control-Request-Headers: authorization
Accept: */*
Referer: https://example.com:3000/login
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,sv;q=0.6
Response headers:
HTTP/1.1 200
Access-Control-Allow-Origin: https://example.com:3000
Vary: Origin
Access-Control-Allow-Methods: GET,PUT,POST,DELETE,PATCH
Access-Control-Allow-Headers: authorization
Content-Length: 0
Date: Tue, 03 Oct 2017 16:01:37 GMT
My guess would be that there is something wrong with the way I send the fetch request, or that I've configured the headers incorrectly. But so far I've not been able to solve it.
Any help would be mighty appreciated.

I've finally managed to find the error, and it was completely my own fault.
As can be seen in the Nginx access.log, one of the failed preflights has an $http_referer of https://www.example.com/login, which means the request's Origin header was https://www.example.com. That will of course fail the preflight, since my CORS config only allows https://example.com, without the www subdomain.
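The failure can be reproduced by hand with a preflight from the www origin (a hedged example against the same endpoint; the query string is omitted here):
curl -i -X OPTIONS \
  -H "Origin: https://www.example.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: authorization" \
  "https://example.com/api/v1/oauth/token"
# expect: HTTP/1.1 403, since the origin is not in the allowed list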
I've fixed this by adding server blocks to the Nginx config so that all requests to the www subdomain are redirected to the non-www domain with a 301 Moved Permanently:
server {
    ...
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /api/v1/ {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    ...
}

server {
    server_name www.example.com example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}
Note that I also redirect all HTTP traffic to HTTPS in order to keep everything encrypted.
This way I can be sure all requests enter my two servers with https://example.com as the origin, and there is no need to modify the CORS configuration.
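A quick sanity check of the redirect (hedged; hostnames taken from the config above):
curl -I https://www.example.com/login
# expect: HTTP/1.1 301 Moved Permanently
# expect: Location: https://example.com/login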

Related

How to configure Nginx to redirect https traffic to my Springboot application

I was following the question How can I set up a letsencrypt SSL certificate and use it in a Spring Boot application?
to configure my Spring Boot app to use HTTPS (certbot), but my Nginx is not redirecting properly to my application.
More context:
I am using Cloudflare to point www.example.com (my domain) at the machine where Nginx and my Spring Boot app are running. I want Nginx to redirect these HTTP requests on port 80 to my application, which is running on port 8443 (HTTPS). I have installed certbot (Let's Encrypt) certificates and set up my Nginx config with them.
My configuration after generating my certificate is below:
Spring Boot application.properties
server.port=8443
security.require-ssl=true
server.ssl.key-store=/etc/letsencrypt/live/mydomain/keystore.p12
server.ssl.key-store-password=mydomain
server.ssl.keyStoreType=PKCS12
server.ssl.keyAlias=myAlias
Update 1 >> Nginx /etc/nginx/nginx.conf
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    log_format formatWithUpstreamLogging '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request';

    access_log /var/log/nginx/access.log formatWithUpstreamLogging;
    error_log /var/log/nginx/error.log;

    server {
        listen 80;
        server_name www.example.com example.com;
        return 301 https://$server_name$request_uri;
    }

    # SSL configuration
    server {
        listen 443 ssl;
        server_name www.example.com example.com;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass https://localhost:8443/;
        }
    }
}
Output of the command nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    log_format formatWithUpstreamLogging '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request';

    access_log /var/log/nginx/access.log formatWithUpstreamLogging;
    error_log /var/log/nginx/error.log;

    server {
        listen 80;
        server_name www.example.com example.com;
        return 301 https://$server_name$request_uri;
    }

    # SSL configuration
    server {
        listen 443 ssl;
        server_name www.example.com example.com;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass https://localhost:8443/;
        }
    }
}
Update 2 >>> Nginx /etc/nginx/nginx.conf
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    log_format formatWithUpstreamLogging '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request';

    access_log /var/log/nginx/access.log formatWithUpstreamLogging;
    error_log /var/log/nginx/error.log;

    server {
        listen 80;
        server_name www.example.com example.com;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass https://localhost:8443/;
        }
    }
}
When I start Nginx and my Spring Boot application and try to access www.example.com, both Chrome and Firefox show an error page (screenshots omitted).
Since /var/log/nginx/error.log had no entries, I checked the access log, and there are dozens of requests like this one (though I only made a single request):
Nginx access log (/var/log/nginx/access.log)
[06/Mar/2019:12:59:52 +0000] <same-IP-address> - - - www.example.com to: -: GET / HTTP/1.1
Here are the curl results:
curl -I https://www.example.com/
HTTP/1.1 301 Moved Permanently
Date: Wed, 06 Mar 2019 14:32:26 GMT
Content-Type: text/html
Connection: keep-alive
Set-Cookie: __cfduid=d330e880850b37d5a9870c1edb71ab8c01551882746; expires=Thu, 05-Mar-20 14:32:26 GMT; path=/; domain=.example.com; HttpOnly
Location: https://www.example.com/
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Server: cloudflare
CF-RAY: 4b3509fdb8e0c879-MIA
curl -I http://www.example.com/
HTTP/1.1 301 Moved Permanently
Date: Wed, 06 Mar 2019 14:32:56 GMT
Content-Type: text/html
Connection: keep-alive
Set-Cookie: __cfduid=d9e1f2908ee4037d46bffa6866549c3151551882776; expires=Thu, 05-Mar-20 14:32:56 GMT; path=/; domain=.example.com; HttpOnly
Location: https://www.example.com/
Server: cloudflare
CF-RAY: 4b350ab9d83fc895-MIA
Could anyone help me on this issue? What did I miss?
Update 3 >>> More changes in nginx.conf and clearing the Cloudflare cache
Nginx /etc/nginx/nginx.conf
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    log_format formatWithUpstreamLogging '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request';

    # main log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log;

    server {
        listen 80;
        server_name www.example.com example.com;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass https://localhost:8443/;
        }
    }
}
After this update part of the problem was solved:
Curl test
curl -I http://www.example.com
HTTP/1.1 200
Date: Wed, 06 Mar 2019 22:19:11 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive
Set-Cookie: __cfduid=d3f91ee93c3657a851354dbb4f03741a31551910750; expires=Thu, 05-Mar-20 22:19:10 GMT; path=/; domain=.example.com; HttpOnly
Last-Modified: Wed, 06 Mar 2019 22:03:01 GMT
Accept-Ranges: bytes
Content-Language: en-US
Server: cloudflare
CF-RAY: 4b37b5b08cf05eb2-TPA
In both Firefox and Chrome the front-end part is now visible (screenshots omitted).
New issues:
The certificate being used is from Cloudflare, not from certbot (Let's Encrypt). Chrome does not consider it good enough and keeps showing the site as "Not secure".
The endpoint of my application is not being called, and I don't know why yet. Maybe I am calling the wrong address. How should I call my endpoint?
The access log below covers the following:
curl -I www.example.com
Accessing http://www.example.com in Firefox
Accessing http://www.example.com in Chrome
Trying to access my endpoint in Postman
Nginx access log (/var/log/nginx/access.log)
172.68.78.24 - - [06/Mar/2019:22:19:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.47.0" "<ip-address-of-my-machine>"
172.68.78.24 - - [06/Mar/2019:22:19:46 +0000] "GET / HTTP/1.1" 200 578 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0" "<ip-address-of-my-machine>"
172.68.78.36 - - [06/Mar/2019:22:19:46 +0000] "GET /runtime.js HTTP/1.1" 200 6224 "https://www.example.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0" "<ip-address-of-my-machine>"
172.68.78.42 - - [06/Mar/2019:22:19:46 +0000] "GET /main.js HTTP/1.1" 200 19198 "https://www.example.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0" "<ip-address-of-my-machine>"
172.68.78.54 - - [06/Mar/2019:22:19:46 +0000] "GET /styles.js HTTP/1.1" 200 185363 "https://www.example.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0" "<ip-address-of-my-machine>"
172.68.78.96 - - [06/Mar/2019:22:19:46 +0000] "GET /polyfills.js HTTP/1.1" 200 228524 "https://www.example.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0" "<ip-address-of-my-machine>"
172.68.78.42 - - [06/Mar/2019:22:19:46 +0000] "GET /vendor.js HTTP/1.1" 200 6821593 "https://www.example.com/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0" "<ip-address-of-my-machine>"
172.68.78.18 - - [06/Mar/2019:22:19:48 +0000] "GET /favicon.ico HTTP/1.1" 200 5430 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0" "<ip-address-of-my-machine>"
172.68.78.60 - - [06/Mar/2019:22:20:36 +0000] "GET / HTTP/1.1" 200 578 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36" "<ip-address-of-my-machine>"
172.68.78.60 - - [06/Mar/2019:22:21:33 +0000] "GET / HTTP/1.1" 200 578 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36" "<ip-address-of-my-machine>"
172.68.78.60 - - [06/Mar/2019:22:22:06 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36" "<ip-address-of-my-machine>"
221.229.166.47 - - [06/Mar/2019:22:32:53 +0000] "GET / HTTP/1.1" 200 578 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1" "-"
172.68.78.60 - - [06/Mar/2019:22:44:06 +0000] "GET / HTTP/1.1" 200 578 "-" "PostmanRuntime/7.1.1" "<ip-address-of-my-machine>"
In Postman, when I try to access the endpoint like this:
https://www.example.com:8443/api/cars
I get a big output where I think this part is more important:
(...)
<title>www.example.com | 522: Connection timed out</title>
(...)
<h2 class="cf-subheadline">Connection timed out</h2>
(...)
<span class="cf-status-desc">www.example.com</span>
(...)
<h2>What happened?</h2>
<p>The initial connection between Cloudflare's network and the origin web server timed out. As a result, the web page can not be displayed.</p>
<h5>If you're the owner of this website:</h5>
<span>Contact your hosting provider letting them know your web server is not completing requests. An Error 522 means that the request was able to connect to your web server, but that the request didn't finish. The most likely cause is that
something on your server is hogging resources.
</span>
(...)
I presume the timeout is due to the wrong way I am accessing my service / endpoint. So, how should I access it now? And how can I set up Nginx to use the certbot certificate instead of Cloudflare's?
My problem is partially solved. Here is my scenario and the configuration I have used:
Application: Spring Boot + Angular 6 (the Spring Boot app uses SSL on port 8443 and is configured to use the certbot certificates)
Domain resolution: Cloudflare (configured to resolve DNS from my domain to the IP of my cloud server)
Cloud server: Amazon Lightsail (Linux machine in the cloud where Nginx and my application are running)
Web server: Nginx (used on the Amazon machine to redirect HTTP traffic on port 80 to HTTPS on port 8443, which is used by my Spring Boot application)
Spring Boot application.properties
server.port=8443
security.require-ssl=true
server.ssl.key-store=/etc/letsencrypt/live/www.example.com/keystore.p12
server.ssl.key-store-password=www.example.com
server.ssl.keyStoreType=PKCS12
server.ssl.keyAlias=myAlias
How Angular 6 services should use the API
getAll(): Observable<any> {
  return this.http.get('/api/cars'); // production
}
Nginx /etc/nginx/nginx.conf
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    log_format formatWithUpstreamLogging '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request';

    # main log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log;

    server {
        listen 80;
        server_name www.example.com example.com;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass https://localhost:8443/;
            proxy_redirect http://localhost:8443/ https://localhost:8443/;
        }
    }
}
Other settings:
Certbot was installed on the cloud machine (more info in this question: How can I set up a letsencrypt SSL certificate and use it in a Spring Boot application?)
In the Cloudflare configuration for my domain, two A records were created: one for example.com and one for www, both pointing to the IP of my cloud machine. The DNS servers of my domain were replaced by Cloudflare's DNS servers.
HTTP traffic is still not being redirected to HTTPS by Nginx, though.
If anyone knows how to make this redirect work, please let me know :)
P.S.: I would like to thank Richard Smith for all the help and time spent on this question!
You need to add this to your server block:
# Redirect non-https traffic to https
if ($scheme != "https") {
    return 301 https://$host$request_uri;
}
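For context, here is a minimal sketch (assuming the certbot paths from the question) of how this sits in a server block that handles both ports; the if only fires for plain-HTTP requests:
server {
    listen 80;
    listen 443 ssl;
    server_name www.example.com example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Redirect non-https traffic to https
    if ($scheme != "https") {
        return 301 https://$host$request_uri;
    }

    location / {
        proxy_pass https://localhost:8443/;
    }
}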
I know this is old, but I'd like to share my Nginx configuration for other people who may end up on this post.
My entire configuration is larger, but the redirection part looks as follows:
server {
    listen 80;

    location / {
        return 301 https://$host:8443$request_uri;
    }
}
Basically, if you're serving the application over HTTPS using what Spring provides, you don't need proxy_pass, ssl_certificate or anything else in Nginx. The configuration above simply returns a 301 redirect to the browser when an HTTP request arrives on port 80.
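A quick way to verify the redirect (hedged; the hostname is assumed):
curl -I http://www.example.com/
# expect: HTTP/1.1 301 Moved Permanently
# expect: Location: https://www.example.com:8443/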

How to cache a multipart POST request with NGINX reverse proxy

We have an NGINX reverse proxy in front of our web server, and we need it to cache responses based on the request body (for a detailed explanation of why, see this other question).
The problem I have is that even though the POST data is the same, the multipart boundary is actually different every time (the boundaries are created by the web browser).
Here's a simplified example of what the request looks like (notice the browser-generated WebKitFormBoundaryV2BlneaIH1rGgo0w):
POST http://my-server/some-path HTTP/1.1
Content-Length: 383
Accept: application/json
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryV2BlneaIH1rGgo0w

------WebKitFormBoundaryV2BlneaIH1rGgo0w
Content-Disposition: form-data; name="some-key"; filename="some-pseudo-file.json"
Content-Type: application/json

{ "values": ["value1", "value2", "value3"] }
------WebKitFormBoundaryV2BlneaIH1rGgo0w--
If it's useful to someone, here's a simplified example of the NGINX config we use:
proxy_cache_path /path/to/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

location /path/to/post/request {
    proxy_pass http://remote-server;
    proxy_cache my_cache;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_cache_lock on;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_methods POST;
    proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$request_body";
    proxy_cache_valid 5m;
    client_max_body_size 1500k;
    proxy_hide_header Cache-Control;
}
I have seen the Lua NGINX module, which looks like it could work, but I don't see how I could use it to parse the request data to build the cache key while still passing the original request upstream, or whether there's something I can use with "stock" NGINX.
Thanks!
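One possible direction, sketched under assumptions rather than tested: with the lua-nginx-module (OpenResty), the browser-generated boundary can be read from the Content-Type header and stripped from the body before the body is used in the cache key, while proxy_pass still forwards the original request unchanged. This assumes the body fits in client_body_buffer_size (otherwise ngx.req.get_body_data() returns nil):
location /path/to/post/request {
    set $body_no_boundary "";

    rewrite_by_lua_block {
        ngx.req.read_body()
        local body = ngx.req.get_body_data()  -- nil if spooled to a temp file
        local ct = ngx.req.get_headers()["content-type"] or ""
        local boundary = ct:match("boundary=([^;]+)")
        if body and boundary then
            -- escape Lua pattern magic characters (the boundary is full of '-')
            local escaped = boundary:gsub("(%W)", "%%%1")
            -- identical payloads now produce identical cache keys
            ngx.var.body_no_boundary = body:gsub(escaped, "")
        end
    }

    proxy_cache my_cache;
    proxy_cache_methods POST;
    proxy_cache_valid 5m;
    proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$body_no_boundary";
    proxy_pass http://remote-server;
}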

How to enable nginx proxy caching for gunicorn mezzanine

Our stack is nginx - gunicorn - mezzanine (Django CMS) running on an EC2 instance. Everything works, except that I can't seem to enable the nginx proxy_cache. Here is my minimal config:
upstream %(proj_name)s {
    server 127.0.0.1:%(gunicorn_port)s;
}

proxy_cache_path /cache keys_zone=bravo:10m;

server {
    listen 80;
    listen 443;
    server_name %(live_host)s;
    client_max_body_size 100M;
    keepalive_timeout 15;

    location /static/ {
        expires 1M;
        access_log off;
        add_header Cache-Control "public";
        root %(proj_path)s;
    }

    location / {
        expires 1M;
        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache bravo;
        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://%(proj_name)s;
    }
}
Sample response:
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 07 Jan 2015 03:43:47 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Vary: Cookie, Accept-Language
Content-Language: en
Expires: Fri, 06 Feb 2015 03:43:47 GMT
Cache-Control: max-age=2592000
X-Proxy-Cache: MISS
Content-Encoding: gzip
I have the Mezzanine cache middleware enabled, and it is returning responses with Set-Cookie headers, but proxy_ignore_headers should take care of that.
I did chmod 777 on the proxy_cache_path dir (/cache), so it shouldn't be a permissions issue.
Error logging is enabled but has produced nothing.
The proxy_cache_path dir remains completely empty...
Why is nginx not caching anything with this config?
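One thing worth checking (a hedged observation, not a verified fix): nginx only stores a response when it has a validity period, taken from the X-Accel-Expires, Cache-Control or Expires headers, or set explicitly with proxy_cache_valid. Since this config ignores Cache-Control, Expires and Set-Cookie and sets no proxy_cache_valid, nginx may have nothing to go on. A minimal sketch of the missing piece (the 10m value is an assumption):
location / {
    expires 1M;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_cache bravo;
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    # with the upstream cache headers ignored, nginx needs an explicit
    # validity period, otherwise nothing is stored
    proxy_cache_valid 200 10m;   # assumed value, tune as needed
    proxy_pass http://%(proj_name)s;
}
The X-Proxy-Cache header already present in the config then makes it easy to confirm: repeated requests should go from MISS to HIT.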

nginx is not caching proxied responses on disk, even though I ask it to

I'm using nginx as a load-balancing proxy, and I would also like it to cache its responses on disk so it doesn't have to hit the upstream servers as often.
I tried following the instructions at http://wiki.nginx.org/ReverseProxyCachingExample. I'm using nginx 1.7 as provided by Docker.
Here's my nginx.conf (which gets installed into nginx/conf.d/):
upstream balancer53 {
    server conceptnet-api-1:10053;
    server conceptnet-api-2:10053;
}

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:1g max_size=1g;

server {
    listen 80;
    gzip on;
    gzip_proxied any;
    gzip_types application/json;
    charset utf-8;
    charset_types application/json;

    location /web {
        proxy_pass http://balancer53;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
    }

    location /data/5.3 {
        proxy_pass http://balancer53;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
    }

    location /data/5.2 {
        # serve the old version
        proxy_pass http://conceptnet52:10052/;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
    }

    location / {
        root /var/www;
        index index.html;
        autoindex on;
        rewrite ^/static/(.*)$ /$1;
    }
}
Despite this configuration, nothing ever shows up in /data/nginx/cache.
Here's an example of the response headers from the upstream server:
$ curl -vs http://localhost:10053/data/5.3/assoc/c/en/test > /dev/null
* Hostname was NOT found in DNS cache
* Trying ::1...
* Connected to localhost (::1) port 10053 (#0)
> GET /data/5.3/assoc/c/en/test HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:10053
> Accept: */*
>
< HTTP/1.1 200 OK
* Server gunicorn/19.1.1 is not blacklisted
< Server: gunicorn/19.1.1
< Date: Thu, 06 Nov 2014 20:54:52 GMT
< Connection: close
< Content-Type: application/json
< Content-Length: 1329
< Access-Control-Allow-Origin: *
< X-RateLimit-Limit: 60
< X-RateLimit-Remaining: 59
< X-RateLimit-Reset: 1415307351
<
{ [data not shown]
* Closing connection 0
Each upstream server is enforcing a rate limit, but I am okay with disregarding the rate limit on cached responses. I was unsure whether these headers were preventing caching, which is why I told nginx to ignore them.
What do I need to do to get nginx to start using the cache?
The official documentation says that if the header includes the "Set-Cookie" field, such a response will not be cached. Please check it out here.
To make the cache work, use the hide-and-ignore technique:
location /web {
    ...
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers Set-Cookie;
}
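The two directives do different jobs: proxy_hide_header keeps Set-Cookie out of the response passed on to the client, while proxy_ignore_headers stops the header from disabling caching; you typically need both.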
I tried running nginx alone with that nginx.conf and found that it complained about some of the options being invalid. I think I was never successfully building a new nginx container at all.
In particular, it turns out you can't just put any old headers in the proxy_ignore_headers option. It only takes particular headers as arguments, the ones that the proxy system cares about.
Here is my revised nginx.conf, which worked:
upstream balancer53 {
    server conceptnet-api-1:10053;
    server conceptnet-api-2:10053;
}

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:100m max_size=100m;

server {
    listen 80;
    gzip on;
    gzip_proxied any;
    gzip_types application/json;
    charset utf-8;
    charset_types application/json;

    location /web {
        proxy_pass http://balancer53;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    }

    location /data/5.3 {
        proxy_pass http://balancer53;
        proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    }

    location / {
        root /var/www;
        index index.html;
        autoindex on;
        rewrite ^/static/(.*)$ /$1;
    }
}
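To confirm the cache is actually being written (a hedged check; the path comes from the examples above and the hostname is assumed), request the same URL through the proxy twice and then look for hashed entries under the cache directory:
curl -s http://localhost/data/5.3/assoc/c/en/test > /dev/null
curl -s http://localhost/data/5.3/assoc/c/en/test > /dev/null
ls -R /data/nginx/cache   # hashed cache files should appear here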

Nginx Cache doesn't refresh

I configured nginx to cache all files for 3 minutes. This works only for files I upload to the webserver manually; all files generated by the CMS get cached forever (or for so long that I gave up waiting)...
The CMS delivers all pages as "index.html" within its own folder structure (www.x.de/category1/category2/articlename/index.html).
How can I debug this? Is there a way to check the lifetime of a specific file?
Can something in the .html files override the proxy_cache_valid value?
Many thanks!
Config:
server {
    listen 1.2.3.4:80 default_server;
    server_name x.de;
    server_name www.x.de;
    server_name ipv4.x.de;
    client_max_body_size 128m;

    location / { # IPv6 isn't supported in proxy_pass yet.
        proxy_pass http://apache.ip:7080;
        proxy_cache my-cache;
        proxy_cache_valid 200 3m;
        proxy_cache_valid 404 1m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Accel-Internal /internal-nginx-static-location;
        access_log off;
    }

    location /internal-nginx-static-location/ {
        alias /var/www/vhosts/x.de/httpdocs/cms/;
        access_log /var/www/vhosts/x.de/statistics/logs/proxy_access_log;
        add_header X-Powered-By PleskLin;
        internal;
    }
}
Using curl -I, you can retrieve the response headers, which will tell you what the cache settings are.
E.g.
>>> curl -I http://www.google.com
HTTP/1.1 200 OK
Date: Sun, 09 Feb 2014 06:28:36 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked
Cache settings are carried in the response headers, so it's not possible for the HTML itself to modify them.
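To see what the proxy cache itself is doing for a specific file, a common technique (an addition, not part of the original config) is to expose $upstream_cache_status as a response header; repeated requests then show MISS, HIT, or EXPIRED as the 3-minute validity elapses:
location / {
    proxy_pass http://apache.ip:7080;
    proxy_cache my-cache;
    proxy_cache_valid 200 3m;
    # expose the cache state for debugging (MISS / HIT / EXPIRED)
    add_header X-Cache-Status $upstream_cache_status;
}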
