nginx not using proxy cache and CloudFront

I'm trying to cache static files on a server rather than going to the upstream server each time. This upstream server happens to be CloudFront.
Here is my nginx configuration:
nginx.conf http context:
proxy_cache_key "$scheme$host$request_uri";
proxy_cache_path /var/spool/nginx levels=1:1 keys_zone=oly_zone:1000m;
proxy_cache_use_stale updating;
proxy_cache_valid 200 301 302 10m;
proxy_cache_valid any 10s;
proxy_cache oly_zone;
website.conf:
location /gameimages/stock/ {
    proxy_ignore_headers "Set-Cookie";
    proxy_hide_header "Set-Cookie";
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_cache_valid 404 1s;
    proxy_cache_valid any 15d;
    proxy_cache oly_zone;
    proxy_pass http://d34sdfsfsadfasdfmhbsdafirsdfsdffelaut.cloudfront.net/;
}
I thought this worked, but an example response shows these headers:
Accept-Ranges:bytes
Age:11515
Connection:keep-alive
Content-Length:11577
Content-Type:image/jpeg
Date:Mon, 08 Aug 2016 19:25:16 GMT
ETag:"57a47349-2d39"
Last-Modified:Fri, 05 Aug 2016 11:06:49 GMT
Server:nginx/1.4.1
Via:1.1 3ba457b8dbcd4sadfsdfe93515e26caad.cloudfront.net (CloudFront)
X-Amz-Cf-Id:N0Dlk5c28sdfsf5Cvfskb3-T6PRBfSXfEPsdfasfuOLW7SHa1hjQ==
X-Cache:Hit from cloudfront
X-Proxy-Cache:HIT
It seems to be hitting both CloudFront and the cache on the server. Am I doing something wrong?
Thanks,
Michael

If the issue is still relevant, I have found a solution.
Generally, CloudFront should be excluded from the asset loading path and the S3 bucket should be used directly as the data source.
Solution: https://dpb587.me/blog/2015/06/20/using-nginx-to-reverse-proxy-and-cache-s3-objects.html
In my case, only one line had to be added to get caching working:
# use Google's public DNS to resolve the upstream hostname
resolver 8.8.8.8;
Also, SELinux sometimes requires tuning to prevent nginx "(13: Permission denied)" errors:
sudo setsebool httpd_can_network_connect on -P
sudo semanage permissive -a httpd_t
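For reference, the linked post's approach boils down to a location block along these lines. This is a minimal sketch, assuming a hypothetical, publicly readable bucket (my-bucket) and reusing the oly_zone cache from the question:
location /gameimages/stock/ {
    # with a variable in proxy_pass, nginx resolves the hostname at request
    # time, which is why the resolver directive becomes necessary
    resolver 8.8.8.8;
    set $s3_host "my-bucket.s3.amazonaws.com";

    # hide S3-specific headers and ignore cookies so responses are cacheable
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_ignore_headers Set-Cookie;
    proxy_hide_header Set-Cookie;

    proxy_cache oly_zone;
    proxy_cache_valid 200 15d;
    add_header X-Proxy-Cache $upstream_cache_status;

    # fetch straight from the bucket instead of CloudFront
    proxy_pass http://$s3_host;
}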

Related

Kibana is ending up with error code 503

When I use curl to check the Kibana status, it says:
curl -I http://localhost:5601/status
HTTP/1.1 503 Service Unavailable
retry-after: 30
content-type: text/html; charset=utf-8
cache-control: no-cache
content-length: 30
Date: Sat, 04 May 2019 12:50:18 GMT
Connection: keep-alive
kibana.yml file:
cat /etc/kibana/kibana.yml
elasticsearch.url: "http://10.0.1.41:9200"
server.port: 5601
server.host: "localhost"
server.ssl.enabled: false
logging.dest: /var/log/kibana/kibana.log
Can someone help me solve the 503 error? I tried changing kibana.yml, but had no luck.
The issue was with the Amazon Linux AMI, which has now been resolved.
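For anyone else who lands here: a 503 with retry-after: 30 from /status generally means Kibana cannot reach Elasticsearch yet. As a quick sanity check (a sketch using the elasticsearch.url value from the kibana.yml above), verify the Elasticsearch endpoint from the Kibana host before digging further:
# can the Kibana host reach Elasticsearch at all?
curl -I http://10.0.1.41:9200

# does the cluster report a usable health status?
curl http://10.0.1.41:9200/_cluster/health?pretty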

Nginx - why isn't uwsgi_cache working with these headers?

I'm using Nginx + uWSGI + the web2py framework, and I want Nginx to cache the HTML responses generated by web2py.
The HTML headers generated by web2py are these:
Cache-Control:max-age=300, s-maxage=300, public
Connection:keep-alive
Content-Length:147
Content-Type:text/html; charset=utf-8
Date:Mon, 27 Mar 2017 16:27:54 GMT
Expires:lun, 27 mar 2017 16:32:54 GMT
Server:Rocket 1.2.6 Python/2.7.6
X-Powered-By:web2py
Those are the ones served directly with the web2py embedded server.
The same request served with nginx and uwsgi (without any cache configuration) produces these headers:
Cache-Control:max-age=300, s-maxage=300, public
Connection:keep-alive
Content-Encoding:gzip
Content-Type:text/html; charset=utf-8
Date:Mon, 27 Mar 2017 16:31:09 GMT
Expires:lun, 27 mar 2017 16:36:09 GMT
Server:nginx
Transfer-Encoding:chunked
Vary:Accept-Encoding
X-Powered-By:web2py
Now, I want to implement uwsgi_cache for nginx configuration, and I'm trying like this:
uwsgi_cache_path /tmp/nginx_cache/ levels=1:2 keys_zone=mycache:10m max_size=10g inactive=10m use_temp_path=off;

server {
    listen 80;
    server_name myapp.com;
    root /home/user/myapp;

    location / {
        uwsgi_cache mycache;
        uwsgi_cache_valid 200 15m;
        uwsgi_cache_key $request_uri;
        add_header X-uWSGI-Cache $upstream_cache_status;
        expires 1h;
        uwsgi_pass unix:///tmp/myapp.socket;
        include uwsgi_params;
        uwsgi_param UWSGI_SCHEME $scheme;
        uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
    }
}
However, every time I hit a URL, I get a MISS in the response headers, indicating that nginx didn't serve the request from cache:
Cache-Control:max-age=3600
Connection:keep-alive
Content-Encoding:gzip
Content-Type:text/html; charset=utf-8
Date:Mon, 27 Mar 2017 16:37:29 GMT
Expires:Mon, 27 Mar 2017 22:37:29 GMT
Server:nginx
Transfer-Encoding:chunked
Vary:Accept-Encoding
X-Powered-By:web2py
X-uWSGI-Cache:MISS
The nginx process is running as "www-data" user/group. I've checked the permissions of the folder /tmp/nginx_cache/ and they are ok: the user has permissions to read and write the folder. Also, inside the /tmp/nginx_cache/ a "temp" folder is created by nginx, but no cache files are written there.
I've also tried adding proxy_ignore_headers to the location block to instruct nginx to ignore headers like Set-Cookie and Cache-Control, like this:
location / {
    uwsgi_cache mycache;
    uwsgi_cache_valid 200 15m;
    uwsgi_cache_key $scheme$proxy_host$uri$is_args$args;
    add_header X-uWSGI-Cache $upstream_cache_status;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie Vary;
    uwsgi_pass unix:///tmp/myapp.socket;
    include uwsgi_params;
    uwsgi_param UWSGI_SCHEME $scheme;
    uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
}
However, this makes no difference: the first request isn't cached, and all subsequent requests are a MISS; that is, they aren't served from cache.
I've found this similar post, where the answer points out that the problem could be the response headers generated (in this case) by web2py:
https://serverfault.com/questions/690164/why-is-this-nginx-setup-not-caching-responses
Why isn't nginx caching the responses?
I've found the cause of the issue: I was mixing uwsgi_cache_* directives with proxy_cache_* directives, which belong to different Nginx modules. I just needed to replace proxy_ignore_headers with uwsgi_ignore_headers.
Note that the proxy cache module is different from the uwsgi one: they have very similar directives, but they are two different modules, and directives from one do not affect the other.
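To make the fix concrete, here is what the corrected location block would look like; a sketch based on the second configuration above, with the proxy_* directive swapped for its uwsgi_* counterpart, and the cache key reverted to the first attempt's ($proxy_host is only populated by proxy_pass, so it stays empty here):
location / {
    uwsgi_cache mycache;
    uwsgi_cache_valid 200 15m;
    uwsgi_cache_key $request_uri;
    add_header X-uWSGI-Cache $upstream_cache_status;

    # uwsgi_* counterpart of proxy_ignore_headers: this one actually
    # applies to responses fetched via uwsgi_pass
    uwsgi_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie Vary;

    uwsgi_pass unix:///tmp/myapp.socket;
    include uwsgi_params;
    uwsgi_param UWSGI_SCHEME $scheme;
    uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
}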

Poor performance while using nginx as proxy

The current setup of our backend uses Route53 to route requests to Tomcat servers which run on EC2 instances.
I am trying to set up nginx as a load balancer (proxy) to route requests to our Tomcat servers.
Here are the instance types:
Tomcat server instance type: m3.2xlarge
nginx server instance type: c3.large
When I run ab (Apache Benchmark) with 100 concurrent connections without keep-alive, I see that the performance of a single Tomcat instance is better than that of 2 Tomcat servers behind an nginx server. I am now wondering if there is something wrong with my nginx config. I checked the error.log file on the nginx instance and there are no errors. Also, the CPU on the nginx instance does not cross 30% while running the benchmark tool. Here is my nginx config:
user nginx;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 32768;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    upstream backend {
        server x.x.x.x:443;
        server x.x.x.x:443;
        keepalive 1024;
    }

    server {
        listen 443;
        server_name localhost;

        ssl on;
        ssl_certificate /etc/nginx/certs/ssl-bundle_2015_2018.crt;
        ssl_certificate_key /etc/nginx/certs/chewie.key;
        ssl_dhparam /etc/nginx/certs/dhparam.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_cache shared:SSL:10m;
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 10m;
        ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4";

        location / {
            proxy_pass https://backend;
            proxy_cache_bypass true;
            proxy_no_cache true;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Here are the Apache Benchmark results without nginx:
Concurrency Level: 100
Time taken for tests: 8.393 seconds
Complete requests: 800
Failed requests: 0
Total transferred: 368000 bytes
HTML transferred: 16800 bytes
Requests per second: 95.32 [#/sec] (mean)
Time per request: 1049.083 [ms] (mean)
Time per request: 10.491 [ms] (mean, across all concurrent requests)
Transfer rate: 42.82 [Kbytes/sec] received
These are the results with nginx in front of 2 Tomcat servers:
Concurrency Level: 100
Time taken for tests: 23.494 seconds
Complete requests: 800
Failed requests: 0
Total transferred: 381600 bytes
HTML transferred: 16800 bytes
Requests per second: 34.05 [#/sec] (mean)
Time per request: 2936.768 [ms] (mean)
Time per request: 29.368 [ms] (mean, across all concurrent requests)
Transfer rate: 15.86 [Kbytes/sec] received
Any thoughts on where I should be looking to optimize are appreciated!
Here are some things done to improve performance (a sketch of the first item follows this list):
- Convert the traffic between nginx and the upstream servers from https to plain http.
- Use the right SSL ciphers for nginx. Make sure to run the SSL test (www.ssllabs.com) to confirm the ciphers used are secure.
- Increase the file descriptor limits for the nginx server as well as the Tomcat instances to a high number.
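As a rough sketch of the first item: terminate TLS at nginx and speak plain HTTP to the backends. The backend port is an assumption (8080 is Tomcat's default HTTP connector), and note that upstream keepalive only works when the proxy speaks HTTP/1.1 and clears the Connection header:
upstream backend {
    # plain HTTP to the backends; TLS is terminated at nginx
    server x.x.x.x:8080;  # assumed Tomcat HTTP connector port
    server x.x.x.x:8080;
    keepalive 64;
}

location / {
    proxy_pass http://backend;

    # both lines are required for keepalive connections to the upstream
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}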
Will keep updating as I find more things.

Nginx not caching requests coming through ELB

I've got nginx sitting between ELBs. I've got a couple of application pools behind an ELB that nginx passes traffic back to, and I want to cache static content. My problem is that nginx doesn't appear to be caching any responses. This is the cache configuration:
proxy_cache_path /usr/share/nginx/cache/app levels=1:2 keys_zone=cms-cache:8m max_size=1000m inactive=600m;
proxy_temp_path /usr/share/nginx/cache/;

location / {
    proxy_pass http://cms-pool;
    proxy_cache cms-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
}
After some reading I found there might be some headers causing the issue, but after hiding the obvious ones I had no luck, and I ended up breaking the application since I was hiding all the backend cookies. These are the headers I tried removing:
proxy_ignore_headers Cache-Control Expires Set-Cookie;
proxy_hide_header Cache-Control;
proxy_hide_header Set-Cookie;
I'm at a loss right now as to why requests are not being cached. Here's the output from curl of the headers that came through with the above header configuration (the cookies and such were set by nginx/ELB in front of nginx):
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: keep-alive
Content-length: 6821
Content-Type: text/html
Date: Sun, 16 Nov 2014 19:25:41 GMT
ETag: W/"6821-1415964130000"
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Last-Modified: Fri, 14 Nov 2014 11:22:10 GMT
Pragma: no-cache
Server: nginx/1.7.6
Set-Cookie: AWSELB=4BB7AB49169E74EC05060FB9839BD30C2CB1D0E43D90837DC593EB2BA783FB372E90B6F6F575D13C6567102032557C76E00B1F5DB0B520CF929C3B81327C1D259A9EA5C73771C4EA3DB6390EB40484EDF56491135B;PATH=/
Set-Cookie: frontend=CgAAi1Ro+jUDNkZYAwMFAg==; path=/
Update: I found that the above wasn't entirely accurate, as there's a 302 that directs the user to a login page which hits another backend without static resources; as such, the headers above are coming from the login backend. I adjusted the URI to point at just the images, but no caching is occurring. I'm using the following location block:
location /app/images {
    proxy_pass http://cms-pool/app/images;
    proxy_cache cms-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_hide_header Cache-Control;
    proxy_hide_header Set-Cookie;
}
These are the headers which are coming through now:
Accept-Ranges: bytes
Connection: keep-alive
Content-length: 12700
Content-Type: image/png
Date: Mon, 17 Nov 2014 09:25:38 GMT
ETag: "0cd80ce9afecf1:0"
HTTP/1.1 200 OK
Last-Modified: Wed, 12 Nov 2014 17:05:06 GMT
Server: nginx/1.7.6
Set-Cookie: AWSELB=4BB7AB49169E74EC05060FB9839BD30C2CB1D0E43D638163025E92245C6C6E40197CA48C5B22F3E8FDA53365109BC1C16C808322881855C100D4AC54E5C0EC6CDE91B96151F66369C7B697B04D2C08439274033D81;PATH=/
Set-Cookie: tscl-frontend=CgAK1FRpvxI4b0bQAwMEAg==; path=/
X-Powered-By: ASP.NET
This is caused by a wobbly implementation of HTTP caching headers in whatever sits behind the ELB.
Per RFC 2616 (HTTP 1.1):
Pragma directives MUST be passed through by a proxy or gateway application, regardless of their significance to that application, since the directives might be applicable to all recipients along the request/response chain. HTTP/1.1 caches SHOULD treat "Pragma: no-cache" as if the client had sent "Cache-Control: no-cache". No new Pragma directives will be defined in HTTP.
Note: because the meaning of "Pragma: no-cache" as a response header field is not actually specified, it does not provide a reliable replacement for "Cache-Control: no-cache" in a response.
The Pragma: no-cache header doesn't mean anything in an HTTP reply because its behaviour is unspecified by the RFCs.
But as nginx acts as a (reverse) proxy in your case, it will honor the header as if it were a Cache-Control: no-cache header, to maintain compatibility with the HTTP 1.0 protocol's Pragma header defined in RFC 1945.
It will also pass it through in the client's response headers, as it doesn't have to assume anything about the actual meaning of it.
So either correct this bad implementation, or strip the Pragma header at nginx with the proxy_hide_header directive, alongside the proxy_ignore_headers you already have.
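Applied to the image location block above, that looks like the sketch below. One caveat: stock nginx's proxy_ignore_headers only accepts a fixed set of fields (the X-Accel-* family, Expires, Cache-Control, Set-Cookie, Vary), so Pragma is stripped with proxy_hide_header rather than added to the ignore list:
location /app/images {
    proxy_pass http://cms-pool/app/images;
    proxy_cache cms-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;

    # stop nginx from acting on the backend's caching headers
    proxy_ignore_headers Cache-Control Expires Set-Cookie;

    # keep the bogus headers from reaching downstream caches and clients
    proxy_hide_header Cache-Control;
    proxy_hide_header Set-Cookie;
    proxy_hide_header Pragma;
}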

How to clear nginx cache on Plesk 11 without shell access

I'm running a website on a VPS with Plesk 11, which uses nginx as a reverse proxy. In my web server settings I enabled gzip compression and browser caching with the following code:
gzip on;
gzip_min_length 1100;
gzip_buffers 4 32k;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";

location ~* ^/(.*\.(js|css|png|jpg|jpeg|gif|ico))$ {
    expires 2w;
    log_not_found off;
}
After editing my CSS, I want the cache to be cleared so changes take effect immediately. I'm not familiar with SSH and was wondering how to clear the nginx cache from the Plesk Admin Panel.
