I've got nginx sitting between ELBs. There are a couple of application pools behind an ELB that nginx passes traffic back to, and I want to cache static content. My problem is that nginx doesn't appear to be caching any responses. This is the cache configuration:
proxy_cache_path /usr/share/nginx/cache/app levels=1:2 keys_zone=cms-cache:8m max_size=1000m inactive=600m;
proxy_temp_path /usr/share/nginx/cache/;
location / {
    proxy_pass http://cms-pool;
    proxy_cache cms-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
}
After some reading I found that response headers from the backend might be preventing caching, but after hiding the obvious ones I had no luck, and I ended up breaking the application since I was hiding all of the backend cookies. These are the directives I tried:
proxy_ignore_headers Cache-Control Expires Set-Cookie;
proxy_hide_header Cache-Control;
proxy_hide_header Set-Cookie;
I'm at a loss right now as to why requests are not being cached. Here is the output from curl showing the headers that came through with the above configuration (the cookies and such were set by the nginx/ELB layer in front of this nginx):
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: keep-alive
Content-Length: 6821
Content-Type: text/html
Date: Sun, 16 Nov 2014 19:25:41 GMT
ETag: W/"6821-1415964130000"
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Last-Modified: Fri, 14 Nov 2014 11:22:10 GMT
Pragma: no-cache
Server: nginx/1.7.6
Set-Cookie: AWSELB=4BB7AB49169E74EC05060FB9839BD30C2CB1D0E43D90837DC593EB2BA783FB372E90B6F6F575D13C6567102032557C76E00B1F5DB0B520CF929C3B81327C1D259A9EA5C73771C4EA3DB6390EB40484EDF56491135B;PATH=/
Set-Cookie: frontend=CgAAi1Ro+jUDNkZYAwMFAg==; path=/
Update: I found that the above wasn't entirely accurate, as there's a 302 that redirects the user to a login page served by another backend that has no static resources, so the headers above are actually coming from the login backend. I adjusted the URI to point directly at the images, but still nothing is cached. I'm using the following location block:
location /app/images {
    proxy_pass http://cms-pool/app/images;
    proxy_cache cms-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_hide_header Cache-Control;
    proxy_hide_header Set-Cookie;
}
These are the headers which are coming through now:
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: keep-alive
Content-Length: 12700
Content-Type: image/png
Date: Mon, 17 Nov 2014 09:25:38 GMT
ETag: "0cd80ce9afecf1:0"
Last-Modified: Wed, 12 Nov 2014 17:05:06 GMT
Server: nginx/1.7.6
Set-Cookie: AWSELB=4BB7AB49169E74EC05060FB9839BD30C2CB1D0E43D638163025E92245C6C6E40197CA48C5B22F3E8FDA53365109BC1C16C808322881855C100D4AC54E5C0EC6CDE91B96151F66369C7B697B04D2C08439274033D81;PATH=/
Set-Cookie: tscl-frontend=CgAK1FRpvxI4b0bQAwMEAg==; path=/
X-Powered-By: ASP.NET
This is caused by a wobbly implementation of HTTP caching headers in whatever is behind the ELB.
Per RFC 2616 (HTTP/1.1):
Pragma directives MUST be passed through by a proxy or gateway
application, regardless of their significance to that application,
since the directives might be applicable to all recipients along the
request/response chain.
HTTP/1.1 caches SHOULD treat "Pragma: no-cache" as if the client had
sent "Cache-Control: no-cache". No new Pragma directives will be
defined in HTTP.
Note: because the meaning of "Pragma: no-cache" as a response
header field is not actually specified, it does not provide a
reliable replacement for "Cache-Control: no-cache" in a response.
The Pragma: no-cache header doesn't mean anything in an HTTP response, because its behaviour there is left unspecified by the RFCs.
But since nginx acts as a (reverse) proxy in your case, it honours the header as if it were Cache-Control: no-cache, to stay compatible with the HTTP/1.0 Pragma header defined in RFC 1945.
It also passes the header through to the client's response, since it cannot assume anything about its actual meaning.
So either correct this broken implementation on the backend, or add the Pragma header to your proxy_ignore_headers and proxy_hide_header directives.
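As an illustration, here is a sketch of the image location block from the question with that suggestion applied (same cms-pool upstream and cms-cache zone as above). Note that proxy_ignore_headers only accepts the fields documented for it (the X-Accel-* fields, Expires, Cache-Control, Set-Cookie, Vary), so Pragma is kept out of the client response with proxy_hide_header:

location /app/images {
    proxy_pass http://cms-pool/app/images;
    proxy_cache cms-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;

    # stop nginx from acting on the backend's caching headers
    proxy_ignore_headers Cache-Control Expires Set-Cookie;

    # keep the problematic headers out of the client response
    proxy_hide_header Cache-Control;
    proxy_hide_header Pragma;
    proxy_hide_header Set-Cookie;
}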
First-time poster with a bizarre issue. I usually install software through conda, but from one moment to the next I stopped being able to use conda install because of a 403 that conda gets when trying to access some configuration files. When trying to download those files with wget --spider --debug https://conda.anaconda.org/anaconda/noarch/current_repodata.json, I get the same 403 error.
DEBUG output created by Wget 1.19.4 on linux-gnu.
Reading HSTS entries from /home/jsequeira/.wget-hsts
URI encoding = ‘UTF-8’
Converted file name 'current_repodata.json' (UTF-8) -> 'current_repodata.json' (UTF-8)
Spider mode enabled. Check if remote file exists.
--2020-07-30 11:25:59-- https://conda.anaconda.org/anaconda/noarch/current_repodata.json
Resolving conda.anaconda.org (conda.anaconda.org)... 104.17.92.24, 104.17.93.24, 2606:4700::6811:5d18, ...
Caching conda.anaconda.org => 104.17.92.24 104.17.93.24 2606:4700::6811:5d18 2606:4700::6811:5c18
Connecting to conda.anaconda.org (conda.anaconda.org)|104.17.92.24|:443... connected.
Created socket 5.
Releasing 0x000056545deb1850 (new refcount 1).
Initiating SSL handshake.
Handshake successful; connected socket 5 to SSL handle 0x000056545deb2700
certificate:
subject: CN=anaconda.org,O=Cloudflare\\, Inc.,L=San Francisco,ST=CA,C=US
issuer: CN=Cloudflare Inc ECC CA-3,O=Cloudflare\\, Inc.,C=US
X509 certificate successfully verified and matches host conda.anaconda.org
---request begin---
HEAD /anaconda/noarch/current_repodata.json HTTP/1.1
User-Agent: Wget/1.19.4 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: conda.anaconda.org
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 403 Forbidden
Date: Thu, 30 Jul 2020 11:25:59 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
CF-Chl-Bypass: 1
Set-Cookie: __cfduid=d3cd3a67d3926551371d8ffe5a840b04f1596108359; expires=Sat, 29-Aug-20 11:25:59 GMT; path=/; domain=.anaconda.org; HttpOnly; SameSite=Lax
Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Thu, 01 Jan 1970 00:00:01 GMT
X-Frame-Options: SAMEORIGIN
cf-request-id: 044111dd9600005d4732b73200000001
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Vary: Accept-Encoding
Server: cloudflare
CF-RAY: 5baeb8dc2ba65d47-LIS
---response end---
403 Forbidden
cdm: 1
Stored cookie anaconda.org -1 (ANY) / <permanent> <insecure> [expiry 2020-08-29 11:25:59] __cfduid d3cd3a67d3926551371d8ffe5a840b04f1596108359
URI content encoding = ‘UTF-8’
Closed 5/SSL 0x000056545deb2700
Remote file does not exist -- broken link!!!
These files are accessible through the browser, and were always accessible with wget and conda until yesterday, when I was installing some tools unrelated to these network accesses. Why would wget fail to download them?
So this was fixed by reinstalling apt-get. Some configuration file there must have been messed up.
When I activate Cloudflare, we have an encoding or caching issue whereby special characters appear all over the page.
Response headers with Cloudflare deactivated:
Alt-Svc: quic=":443"; ma=2592000; v="35,39,43,44"
Cache-Control: no-cache, must-revalidate
Connection: close
Content-Encoding: gzip
Content-Length: 8156
Content-Type: text/html; charset=UTF-8
Date: Wed, 14 Aug 2019 14:19:31 GMT
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Last-Modified: Wed, 14 Aug 2019 14:19:31 GMT
Pragma: no-cache
Server: LiteSpeed
Set-Cookie: pmd_template=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; Max-Age=0; path=/New/; domain=www.eastlondonbusinessdirectory.co.za
Set-Cookie: pmd_template=listimia; expires=Fri, 13-Sep-2019 14:19:31 GMT; Max-Age=2592000; path=/New/; domain=www.eastlondonbusinessdirectory.co.za
Vary: Accept-Encoding,User-Agent
X-Powered-By: PHP/7.0.33
Response headers with Cloudflare activated:
Cache-Control: no-cache, must-revalidate
CF-RAY: 50639e64dd188074-CPT
Connection: keep-alive
Content-Encoding: zlib,gzip,deflate
Content-Type: text/html; charset=UTF-8
Date: Wed, 14 Aug 2019 14:29:03 GMT
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Last-Modified: Wed, 14 Aug 2019 14:29:02 GMT
Pragma: no-cache
Server: cloudflare
Set-Cookie: pmd_template=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; Max-Age=0; path=/New/; domain=www.eastlondonbusinessdirectory.co.za
Set-Cookie: pmd_template=listimia; expires=Fri, 13-Sep-2019 14:29:02 GMT; Max-Age=2592000; path=/New/; domain=www.eastlondonbusinessdirectory.co.za
Transfer-Encoding: chunked
Vary: User-Agent
X-Powered-By: PHP/7.0.33
X-Turbo-Charged-By: LiteSpeed
I believe I might need to make my origin server send a header that tells the cache to vary responses on content encoding, but I'm not sure this logic is correct, because with Cloudflare activated I only see Vary: User-Agent, which is in fact ignored by Cloudflare. If the logic is correct, I'm not sure how to go about fixing this. I have tried adding a Cloudflare page rule to cache everything, and also added the following to .htaccess:
AddDefaultCharset UTF-8
<IfModule mod_headers.c>
    <FilesMatch "\.(js|css|xml|gz|html)$">
        Header append Vary "Accept-Encoding"
    </FilesMatch>
</IfModule>
but neither works.
Any help getting this resolved would be much appreciated, and I will accept the answer. Thank you.
Cloudflare currently only respects Accept-Encoding in the Vary header.
If you want to vary on other factors, you can consider one of the following:
A custom caching setup on the Enterprise plan
Bypassing the cache entirely using Page Rules
Serving different content types from distinct URLs so you can keep leveraging caching
A workaround using Cloudflare Workers
I'm using Nginx + uWSGI + the web2py framework, and I want Nginx to cache the HTML responses generated by web2py.
The response headers generated by web2py are these:
Cache-Control:max-age=300, s-maxage=300, public
Connection:keep-alive
Content-Length:147
Content-Type:text/html; charset=utf-8
Date:Mon, 27 Mar 2017 16:27:54 GMT
Expires:lun, 27 mar 2017 16:32:54 GMT
Server:Rocket 1.2.6 Python/2.7.6
X-Powered-By:web2py
Those are the ones served directly with the web2py embedded server.
The same request served with nginx and uwsgi (without any cache configuration) produces these headers:
Cache-Control:max-age=300, s-maxage=300, public
Connection:keep-alive
Content-Encoding:gzip
Content-Type:text/html; charset=utf-8
Date:Mon, 27 Mar 2017 16:31:09 GMT
Expires:lun, 27 mar 2017 16:36:09 GMT
Server:nginx
Transfer-Encoding:chunked
Vary:Accept-Encoding
X-Powered-By:web2py
Now I want to implement uwsgi_cache in my nginx configuration, and I'm trying it like this:
uwsgi_cache_path /tmp/nginx_cache/ levels=1:2 keys_zone=mycache:10m max_size=10g inactive=10m use_temp_path=off;
server {
    listen 80;
    server_name myapp.com;
    root /home/user/myapp;

    location / {
        uwsgi_cache mycache;
        uwsgi_cache_valid 200 15m;
        uwsgi_cache_key $request_uri;
        add_header X-uWSGI-Cache $upstream_cache_status;
        expires 1h;
        uwsgi_pass unix:///tmp/myapp.socket;
        include uwsgi_params;
        uwsgi_param UWSGI_SCHEME $scheme;
        uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
    }
}
However, every time I hit a URL, I get a MISS in the response headers, indicating that nginx didn't serve the request from the cache:
Cache-Control:max-age=3600
Connection:keep-alive
Content-Encoding:gzip
Content-Type:text/html; charset=utf-8
Date:Mon, 27 Mar 2017 16:37:29 GMT
Expires:Mon, 27 Mar 2017 22:37:29 GMT
Server:nginx
Transfer-Encoding:chunked
Vary:Accept-Encoding
X-Powered-By:web2py
X-uWSGI-Cache:MISS
The nginx process is running as the "www-data" user/group. I've checked the permissions of /tmp/nginx_cache/ and they are OK: the user has permission to read and write the folder. Also, nginx creates a "temp" folder inside /tmp/nginx_cache/, but no cache files are written there.
I've also tried adding proxy_ignore_headers to the location block, to instruct nginx to ignore headers like Set-Cookie and Cache-Control, like this:
location / {
    uwsgi_cache mycache;
    uwsgi_cache_valid 200 15m;
    uwsgi_cache_key $scheme$proxy_host$uri$is_args$args;
    add_header X-uWSGI-Cache $upstream_cache_status;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie Vary;
    uwsgi_pass unix:///tmp/myapp.socket;
    include uwsgi_params;
    uwsgi_param UWSGI_SCHEME $scheme;
    uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
}
However, this makes no difference: the first request isn't cached, and all the subsequent requests are a MISS, that is, they aren't served from cache.
I've found this similar post, where the answer points out that the problem could be the response headers generated by (in this case) web2py:
https://serverfault.com/questions/690164/why-is-this-nginx-setup-not-caching-responses
Why isn't nginx caching the responses?
I've found the cause of the issue: I was mixing uwsgi_cache_* directives with proxy_cache_* directives, and they belong to different Nginx modules. I just needed to replace proxy_ignore_headers with uwsgi_ignore_headers.
Note that the proxy_cache module is different from uwsgi_cache: they have very similar directives, but they are two different modules.
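For reference, a minimal sketch of the corrected location block (using the same socket path, cache zone, and cache key as the configuration above), with the proxy_* directive swapped for its uwsgi_* counterpart:

location / {
    uwsgi_cache mycache;
    uwsgi_cache_valid 200 15m;
    uwsgi_cache_key $request_uri;
    add_header X-uWSGI-Cache $upstream_cache_status;

    # uwsgi_cache_* directives are paired with uwsgi_ignore_headers;
    # proxy_ignore_headers belongs to the proxy module and has no effect here
    uwsgi_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie Vary;

    uwsgi_pass unix:///tmp/myapp.socket;
    include uwsgi_params;
    uwsgi_param UWSGI_SCHEME $scheme;
    uwsgi_param SERVER_SOFTWARE nginx/$nginx_version;
}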
My goal is to cache RESTful API calls via Varnish. As I read on Stack Overflow and other resources, Varnish cannot cache POST requests, and this is exactly what I am experiencing. I therefore switched to GET with ?id=30, but then I realized those also do not get cached because of the question mark.
So the question is: how do I cache API calls with Varnish?
Here are two example calls to my API, secured by OAuth2, with two parameters passed via POST:
curl --insecure -s -k https://test-api/v1/test -d 'access_token=72f50e68a0aed7921c6cb058de8e7e6ed4ebd692&clid=585970' -D- -o/dev/null
HTTP/1.1 200 OK
Date: Sat, 29 Aug 2015 13:02:36 GMT
Server: Apache/2.2.31 (Unix) PHP/5.6.11
X-Powered-By: PHP/5.6.11
Vary: Accept-Encoding
Content-Length: 2121
Content-Type: application/json
X-Varnish: 6558010
Age: 0
Via: 1.1 varnish-v4
Accept-Ranges: bytes
Set-Cookie: SERVERID=S3; path=/
curl --insecure -s -k https://test-api/v1/test -d 'access_token=72f50e68a0aed7921c6cb058de8e7e6ed4ebd692&clid=585970' -D- -o/dev/null
HTTP/1.1 200 OK
Date: Sat, 29 Aug 2015 13:02:56 GMT
Server: Apache/2.2.31 (Unix) PHP/5.6.11
X-Powered-By: PHP/5.6.11
Vary: Accept-Encoding
Content-Length: 2121
Content-Type: application/json
X-Varnish: 12814168
Age: 0
Via: 1.1 varnish-v4
Accept-Ranges: bytes
Set-Cookie: SERVERID=S2; path=/
Is it possible to configure Varnish to cache these API calls? POST or GET, either way I don't mind.
I am trying to set a cookie from my J2EE application for my Jasper server on localhost, and I'm getting the response below. I have passed the domain name as .report.com, but the cookie is not created in the browser. Is there something wrong in the response below that prevents the cookie from being created successfully in the browser?
Info: since localhost is not a valid domain name, I have changed localhost to jasper.report.com in the system32 hosts file.
Accept-Ranges bytes
Cache-Control no-cache,no-store,must-revalidate
Connection Keep-Alive
Content-Encoding gzip
Content-Length 148
Content-Type text/html;charset=utf-8
Date Wed, 08 Oct 2014 14:36:43 GMT
Expires Thu, 01 Dec 1994 16:00:00 GMT
Keep-Alive timeout=5, max=100
Server Apache/2
Set-Cookie JSESSIONID=89E39BF51000E23270CE40EAE425EBF4; Domain=.report.com; Expires=Wed, 08-Oct-2014 19:36:44 GMT; Path=/jasperserver/; HttpOnly
Vary Accept-Encoding
x-frame-options SAMEORIGIN