Nginx not caching when Vary headers are not ignored

First off: I don't have much experience with Nginx, so I'll proceed directly to the problem.
Nginx config:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 2048;
multi_accept on;
}
http {
proxy_cache_path /var/nginx_cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=10g;
upstream server {
server -removed-;
}
server {
listen 80;
server_name -removed-;
location / {
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_http_version 1.1;
gzip_min_length 500;
gzip_vary on;
gzip_proxied any;
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy
text/js
text/xml
text/javascript;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache STATIC;
proxy_set_header Host $host;
proxy_ignore_headers Vary;   # <-- the line in question
proxy_cache_key $host$uri$is_args$args;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_pass -removed-;
}
}
}
When the line 'proxy_ignore_headers Vary;' is present, everything is cached, including the HTML pages. When I remove this line, everything gets cached EXCEPT the HTML pages. Why is this?
I would like Nginx to cache the HTML pages even when Vary headers are sent by the origin server.
I hope someone can help me :).
Response Headers are:
Vary:Host, Content-Language, Content-Type, Content-Encoding

Fixed:
Nginx's source code sets a maximum of 42 characters for the Vary header value (the NGX_HTTP_CACHE_VARY_LEN constant in src/http/ngx_http_cache.h); anything longer is handled as Vary: * (i.e., not cacheable). In my case the value was 51 characters, so my responses were treated as uncacheable. Raising the maximum to 84 and recompiling fixed it for me.
This article explains it in more depth:
https://thedotproduct.org/nginx-vary-header-handling/
Credits to the author of that short article.
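If recompiling nginx is not an option, a possible workaround (my sketch, not part of the original fix) is to let the cache ignore the oversized Vary while still sending a short one downstream:
location / {
    proxy_cache STATIC;
    proxy_ignore_headers Vary;        # cache regardless of the upstream Vary header
    proxy_hide_header Vary;           # don't pass the 51-character upstream Vary to clients
    add_header Vary Accept-Encoding;  # re-add a short Vary for downstream caches
    proxy_pass http://backend;        # hypothetical upstream name
}
The trade-off: with proxy_ignore_headers Vary, nginx keys the cache only on proxy_cache_key, so responses that genuinely differ per Content-Language or Content-Encoding would need those values folded into the key.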

Related

Nginx throwing 502 timeout error even though Spring application is processing the request with 200 OK

I am using nginx and a Spring Boot application with a Netty server, but for some requests nginx throws a 502 error even though the Netty access logs show 200 OK for the same request. So basically the response packet is being dropped between the Netty server and nginx.
This is my nginx.conf
daemon off;
worker_processes 4;
worker_rlimit_nofile 100000;
pid /var/run/nginx.pid;
error_log /opt/logs/myservice/nginx-error.log warn;
events {
worker_connections 512;
use epoll;
multi_accept off;
}
http {
#server_tokens off;
#include mime.types;
default_type application/octet-stream;
################# Gzip Settings ################
gzip on;
gzip_comp_level 4;
gzip_min_length 1024;
gzip_proxied any;
gzip_static on;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js application/soap+xml;
gzip_disable "MSIE [1-6]\.";
####################################################
log_format upstreamlog '$time_local $status $remote_addr to:- $upstream_addr $request -- upstream_response_time:$upstream_response_time request_time:$request_time tid_header:$http_tid status:$upstream_cache_status slot:$http_slot slotTime:$http_slotstarttime ttlReq:$http_ttl ttlResp:$upstream_http_x_accel_expires jobFlag:$http_jobflag cookies:"$http_cookie" bytes_sent:$bytes_sent gzip_ratio:$gzip_ratio "$http_referer" "$http_user_agent" $http_x_forwarded_for';
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 30;
keepalive_requests 10000;
reset_timedout_connection on;
client_body_timeout 30;
send_timeout 300;
set_real_ip_from 10.117.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_scheme;
proxy_set_header X-Real-IP $remote_addr;
server {
listen 80;
server_name 127.0.0.1;
client_header_buffer_size 64k;
large_client_header_buffers 4 64k;
client_max_body_size 2M;
if ($host ~* ^(example)) {
rewrite ^/(.*)$ https://www.example.com/$1 permanent;
}
access_log /opt/logs/myservice/nginx-frontend.log upstreamlog;
location / {
# Proxy Settings
proxy_pass http://127.0.0.1:8000$request_uri;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $http_scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
proxy_buffering off;
proxy_buffers 8 16k;
proxy_buffer_size 16k;
proxy_set_header Cookie "";
fastcgi_read_timeout 120;
proxy_read_timeout 120;
client_max_body_size 500M;
add_header Cache-Control "no-cache, max-age=0, must-revalidate, no-store";
################# Gzip Settings ################
gzip on;
gzip_comp_level 4;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js application/soap+xml;
###################################################
gzip_disable "MSIE [1-6]\.";
set_real_ip_from 10.117.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
}
location /nginx_status {
stub_status on;
access_log off;
}
}
}
I am using Spring Boot version 2.5.0.
Also, during the issue, CPU and memory usage are below 10%.
I have already tried changing the number of worker processes and the reverse-proxy timeouts, and I tried increasing the keep-alive timeouts and keep-alive connection count.
Does the error.log of Nginx explain the reason for the 502? See if this helps you:
NGINX returning HTTP 502, but HTTP 200 in the logs
I do not know the exact reason why this is happening, but upgrading my Netty server version did solve the problem.
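For reference (my assumption, not the poster's confirmed root cause): a classic source of 502s where the upstream logs 200 is nginx reusing an idle keep-alive connection at the very moment the upstream closes it. A minimal sketch of the usual nginx-side mitigation, with a hypothetical upstream name:
upstream netty_backend {                   # hypothetical name
    server 127.0.0.1:8000;
    keepalive 32;                          # idle keep-alive connections kept per worker
}
server {
    location / {
        proxy_pass http://netty_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";    # clear Connection so upstream keep-alive works
        proxy_next_upstream error timeout; # retry on connection errors and timeouts
    }
}
The complementary fix is to make the backend's idle timeout longer than nginx's keep-alive reuse window, so the backend never closes a connection nginx is about to reuse.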

How to solve the ERROR 405 METHOD NOT ALLOWED?

I have a frontend hosted under nginx and backend hosted under tomcat.
I get error 405: Method Not Allowed when I try to log in to the application.
Here is the nginx.conf config file:
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html index.htm;
include /etc/nginx/mime.types;
gzip on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
client_max_body_size 300M;
location / {
proxy_http_version 1.1;
proxy_request_buffering off;
expires off;
try_files $uri $uri/ /index.html;
}
location /api/app/ {
proxy_http_version 1.1;
proxy_request_buffering off;
expires off;
proxy_method GET;
proxy_pass http://192.168.57.129:8080/backend-1.0/api/app/;
}
}
}
I also added the following lines to web.xml so that the server accepts GET, POST, etc. requests:
<param-name>readonly</param-name>
<param-value>false</param-value>
The connection to the database works properly when I start Tomcat, but the error still exists.
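Not from the original thread, but one thing stands out: proxy_method GET in the /api/app/ location rewrites the method of every proxied request, so a login POST reaches Tomcat as a GET, and a POST-only endpoint would reject that with 405. A sketch of the location with the directive removed:
location /api/app/ {
    proxy_http_version 1.1;
    proxy_request_buffering off;
    expires off;
    # proxy_method GET;  <- removed so the client's method (POST for login) passes through
    proxy_pass http://192.168.57.129:8080/backend-1.0/api/app/;
}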

Nginx(as reverse proxy) does not notify gin-gonic(as web server) when connection canceled by client

On a website that uses gin-gonic as the web server and nginx as a proxy server, clients send their data to the server via APIs exposed by gin-gonic. In order to send server commands to clients, each client keeps a long-lived connection to the web server (it is not expired for several hours), and the server fills the response body with the desired command when one arrives (for a bidirectional connection, we could use WebSockets instead).
The problem is that we need to know when a client drops/cancels its connection to the server. To detect that, we have the line case <-c.Request.Context().Done(): in the following code.
func MakeConnection(c *gin.Context) {
// ... code
select {
case notification = <-clientModel.Channels[token]: // we have a command to send to the user
log.Info("Notification")
err = nil
case <-c.Request.Context().Done(): // connection dropped by client
log.Info("Cancel")
err = c.Err()
case <-ctx.Done(): // time elapsed and there was no command to send to client
log.Info("timeout")
err = errors.New("timeout exceeded")
}
// ... code
}
Everything works well (if we make such a connection to the server and cancel it after an arbitrary time, Cancel is immediately printed in the terminal) until the number of clients increases.
With about 5 clients this works as expected, but with more than 10 clients (more load), nginx logs the cancellation in its access log (as status code 499) yet the web server (gin-gonic) does not get notified.
nginx configuration
server {
# SSL configuration
listen 443 ssl http2 so_keepalive=10:150:1;
listen [::]:443 ssl http2 so_keepalive=10:150:1;
include /etc/nginx/snippets/self-signed.conf;
include /etc/nginx/snippets/ssl-params.conf;
root /var/www/project;
autoindex off;
location / {
try_files $uri $uri/ @proxy;
}
location ~ /v1/v/[0-9a-zA-Z]+/notification {
proxy_connect_timeout 86460;
proxy_send_timeout 86460;
proxy_read_timeout 86460;
send_timeout 86460;
proxy_pass http://example_host:8002;
}
}
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
limit_req_zone $binary_remote_addr zone=req_zone:200m rate=10r/s;
limit_req_zone $request_uri zone=search_limit:100m rate=5r/s;
limit_req_zone $binary_remote_addr zone=login_limit:100m rate=20r/m;
vhost_traffic_status_zone;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
gzip on;
gzip_types text/html text/css text/javascript text/xml text/plain
image/x-icon image/svg+xml
application/rss+xml application/javascript application/x-javascript
application/xml application/xhtml+xml
application/x-font application/x-font-truetype application/x-font-ttf application/x-font-otf application/x-font-opentype application/vnd.ms-fontobject font/ttf font/otf font/opentype
font/woff font/ttf
application/octet-stream;
gzip_vary on;
include /etc/nginx/conf.d/*.conf;
}
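Two nginx directives are directly relevant to this symptom; a hedged sketch, reusing the notification location from the config above (not a confirmed fix): nginx closes the upstream connection on a client abort only while proxy_ignore_client_abort is off, and response buffering can delay that teardown for long-poll responses.
location ~ /v1/v/[0-9a-zA-Z]+/notification {
    proxy_pass http://example_host:8002;
    proxy_buffering off;             # stream the long-poll response instead of buffering it
    proxy_ignore_client_abort off;   # default: close the upstream connection when the client aborts
}
When nginx closes its side of the upstream connection, Go's HTTP server cancels the request context, which is what fires the <-c.Request.Context().Done() case shown earlier.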

How to correctly setup Nginx to get minimal TTFB delay?

I have a rails application that is running on Nginx and Puma in production environment.
There is a problem with web page loading (TTFB delay), and I am trying to figure out the reason.
On the backend side, production.log shows that my web page is rendered fast enough, in 134ms:
Completed 200 OK in 134ms (Views: 49.9ms | ActiveRecord: 29.3ms)
But in the browser I see that TTFB is 311.49ms.
I understand that there may be a problem in the settings, or the process count may not be optimal, but I cannot find the reason for the ~177ms delay. I would be grateful for any advice.
My VPS properties and configurations are listed below.
Environment
Nginx 1.10.3
Puma 3.12.0 (rails 5.2)
PostgreSQL
Sidekiq
ElasticSearch
VPS properties
Ubuntu 16.04 (64-bit)
8 cores (2.4 GHz)
16gb of RAM.
Network Bandwidth: 1000 Mbps
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 8096;
multi_accept on;
use epoll;
}
http {
# Basic Settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging Settings
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Gzip Settings
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
web_app.conf
upstream puma {
server unix:///home/deploy/apps/web_app/shared/tmp/sockets/web_app-puma.sock fail_timeout=0;
}
log_format timings '$remote_addr - $time_local '
'"$request" $status '
'$request_time $upstream_response_time';
server {
server_name web_app.com;
# SSL configuration
ssl on;
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_protocols TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
ssl_buffer_size 4k;
ssl_certificate /etc/ssl/certs/cert.pem;
ssl_certificate_key /etc/ssl/private/key.pem;
root /home/deploy/apps/web_app/shared/public;
access_log /home/deploy/apps/web_app/current/log/nginx.access.log;
error_log /home/deploy/apps/web_app/current/log/nginx.error.log info;
access_log /home/deploy/apps/web_app/current/log/timings.log timings;
location ^~ /assets/ {
#gzip_static on;
expires max;
add_header Cache-Control public;
add_header Vary Accept-Encoding;
access_log off;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_request_buffering off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_body_buffer_size 8K;
client_max_body_size 10M;
client_header_buffer_size 1k;
large_client_header_buffers 2 16k;
client_body_timeout 10s;
keepalive_timeout 10;
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
}
puma.rb
threads 1, 6
port 3000
environment 'production'
workers 8
preload_app!
before_fork { ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord) }
on_worker_boot { ActiveRecord::Base.establish_connection if defined?(ActiveRecord) }
plugin :tmp_restart
Check the true response time of the backend
The backend might claim it's answering/rendering in ~134ms; that doesn't mean it's actually doing that. You can define a log format like this:
log_format timings '$remote_addr - $time_local '
'"$request" $status '
'$request_time $upstream_response_time';
and apply it with:
access_log /var/log/nginx/timings.log timings;
This will tell how long the backend actually takes to respond.
Other possible ways to debug
Check the raw latency between you and the server (i.e. with ping or by querying from the server itself)
Check how fast static content is served to get a baseline
Use caching
Add something like this to your configuration (proxy_cache_path belongs in the http block; proxy_cache goes in the location block):
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
inactive=60m use_temp_path=off;
proxy_cache my_cache;
If your backend supports If-Modified-Since revalidation:
proxy_cache_revalidate on;
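A sketch of how these pieces fit together in this setup, reusing the @puma location from the question (the 1-second validity is an illustrative assumption, sometimes called microcaching):
http {
    proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                     inactive=60m use_temp_path=off;  # http context only
    server {
        location @puma {
            proxy_cache my_cache;                     # enable the zone for this location
            proxy_cache_valid 200 1s;                 # cache OK responses briefly
            proxy_cache_revalidate on;                # revalidate via If-Modified-Since
            proxy_pass http://puma;
        }
    }
}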
Disable buffering
You can instruct nginx to forward the responses from the backend without buffering them. This might reduce response time:
proxy_buffering off;
Since version 1.7.11 there is also a directive that lets nginx forward the request body to the backend without buffering it first:
proxy_request_buffering off;

nginx is not caching proxied responses on disk, even though I ask it to

I'm using nginx as a load-balancing proxy, and I would also like it to cache its responses on disk so it doesn't have to hit the upstream servers as often.
I tried following the instructions at http://wiki.nginx.org/ReverseProxyCachingExample. I'm using nginx 1.7 as provided by Docker.
Here's my nginx.conf (which gets installed into nginx/conf.d/):
upstream balancer53 {
server conceptnet-api-1:10053;
server conceptnet-api-2:10053;
}
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:1g max_size=1g;
server {
listen 80;
gzip on;
gzip_proxied any;
gzip_types application/json;
charset utf-8;
charset_types application/json;
location /web {
proxy_pass http://balancer53;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
}
location /data/5.3 {
proxy_pass http://balancer53;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
}
location /data/5.2 {
# serve the old version
proxy_pass http://conceptnet52:10052/;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control X-RateLimit-Limit X-RateLimit-Remaining X-RateLimit-Reset;
}
location / {
root /var/www;
index index.html;
autoindex on;
rewrite ^/static/(.*)$ /$1;
}
}
Despite this configuration, nothing ever shows up in /data/nginx/cache.
Here's an example of the response headers from the upstream server:
$ curl -vs http://localhost:10053/data/5.3/assoc/c/en/test > /dev/null
* Hostname was NOT found in DNS cache
* Trying ::1...
* Connected to localhost (::1) port 10053 (#0)
> GET /data/5.3/assoc/c/en/test HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:10053
> Accept: */*
>
< HTTP/1.1 200 OK
* Server gunicorn/19.1.1 is not blacklisted
< Server: gunicorn/19.1.1
< Date: Thu, 06 Nov 2014 20:54:52 GMT
< Connection: close
< Content-Type: application/json
< Content-Length: 1329
< Access-Control-Allow-Origin: *
< X-RateLimit-Limit: 60
< X-RateLimit-Remaining: 59
< X-RateLimit-Reset: 1415307351
<
{ [data not shown]
* Closing connection 0
Each upstream server is enforcing a rate limit, but I am okay with disregarding the rate limit on cached responses. I was unsure whether these headers were preventing caching, which is why I told nginx to ignore them.
What do I need to do to get nginx to start using the cache?
The official documentation says: "If the header includes the 'Set-Cookie' field, such a response will not be cached." See the ngx_http_proxy_module documentation for details.
To make the cache work, use the hide-and-ignore technique:
location /web {
...
proxy_hide_header Set-Cookie;
proxy_ignore_headers Set-Cookie;
}
I tried running nginx alone with that nginx.conf, and found that it complained about some of the options being invalid. I think I was never successfully building a new nginx container at all.
In particular, it turns out you can't just put any old header in the proxy_ignore_headers option. It only accepts the specific headers the proxy system cares about (per the nginx docs: X-Accel-Expires, X-Accel-Redirect, X-Accel-Limit-Rate, X-Accel-Buffering, X-Accel-Charset, Expires, Cache-Control, Set-Cookie, and Vary).
Here is my revised nginx.conf, which worked:
upstream balancer53 {
server conceptnet-api-1:10053;
server conceptnet-api-2:10053;
}
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:100m max_size=100m;
server {
listen 80;
gzip on;
gzip_proxied any;
gzip_types application/json;
charset utf-8;
charset_types application/json;
location /web {
proxy_pass http://balancer53;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
}
location /data/5.3 {
proxy_pass http://balancer53;
proxy_set_header X-Remote-Addr $proxy_add_x_forwarded_for;
proxy_cache STATIC;
proxy_cache_valid 200 1d;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
}
location / {
root /var/www;
index index.html;
autoindex on;
rewrite ^/static/(.*)$ /$1;
}
}
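One way to verify the cache is actually being used (borrowing the X-Cache-Status technique from the first question above, applied to the /web location):
location /web {
    proxy_pass http://balancer53;
    proxy_cache STATIC;
    proxy_cache_valid 200 1d;
    add_header X-Cache-Status $upstream_cache_status;  # reports MISS, HIT, EXPIRED, etc.
}
Repeated requests to the same URL should flip the header from MISS to HIT once files start appearing under /data/nginx/cache.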
