nginx proxy_cache key hash changes on each request from a different browser - caching

I created an nginx config like this:
proxy_cache_path /tmp/nginx/static levels=1:2 keys_zone=static_zone:10m inactive=10d use_temp_path$
proxy_cache_key "$request_uri$args";
location ~* \.(css|gif|ico|jpe?g|js(on)?|png|svg|webp|ttf|woff|woff2|txt|map)$ {
    proxy_hide_header Date;
    proxy_cache_revalidate on;
    proxy_pass http://static:8080;
    proxy_cache_bypass $cookie_nocache $arg_nocache;
    proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
    proxy_hide_header "Set-Cookie";
    proxy_buffering on;
    proxy_cache static_zone;
    proxy_cache_valid 200 301 302 30m;
    proxy_cache_valid 404 10m;
    #expires max;
    add_header X-Proxy-Cache $upstream_cache_status;
    access_log off;
    add_header Cache-Control "public";
    add_header Pragma "public";
    expires 30d;
    log_not_found off;
    tcp_nodelay off;
}
On the first request from Chrome everything works as expected: x-proxy-cache: MISS, and subsequent requests are served from cache with the header x-proxy-cache: HIT. After a refresh it is still HIT. But when I open the page from other browsers (Opera, Edge) on the same machine, the request is a MISS. On the file system nginx creates two files, with different md5 names, for the same content. For example, the file 438476ac40665c852d3acde1acf769f1 begins with this head:
^C^#^#^#^#^#^#^#/^V
W^#^#^#^#��^CW^#^#^#^#'^O
W^#^#^#^#m�,�^#^#�^#�^A^N"5703e3a7-67e"^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#$
KEY: /js/catalog.js
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 12 Apr 2016 15:07:19 GMT
Content-Type: application/javascript
Content-Length: 1662
Last-Modified: Tue, 05 Apr 2016 16:11:19 GMT
Connection: close
Vary: Accept-Encoding
ETag: "5703e3a7-67e"
Accept-Ranges: bytes
The second file, a6f57423c2220fba3ada5f516f6dd91c, has the same content but this head:
^C^#^#^#^#^#^#^# ^V
W^#^#^#^#��^CW^#^#^#^#^A^O
W^#^#^#^#m�,�^#^#�^#�^A^N"5703e3a7-67e"^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#$
KEY: /js/catalog.js
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 12 Apr 2016 15:06:41 GMT
Content-Type: application/javascript
Content-Length: 1662
Last-Modified: Tue, 05 Apr 2016 16:11:19 GMT
Connection: close
Vary: Accept-Encoding
ETag: "5703e3a7-67e"
Accept-Ranges: bytes
According to the documentation the file name should be the md5 of the key, and indeed echo -n '/js/catalog.js' | md5sum gives a6f57423c2220fba3ada5f516f6dd91c, the name of one of the files (the one created by the first request). I don't want the server to cache js/css separately per user or browser; I want each file cached once and served from that cache to all users. P.S. My site uses https and http2; the nginx version is 1.9.14.

Based on the Vary: Accept-Encoding header that's there, I would guess that Edge and Opera send different "Accept-Encoding" headers for the request. For example, one may simply send "gzip" while the other sends "gzip, deflate". Those are technically different Accept-Encoding request headers.
If you know the origin won't send encodings that some browsers can't handle, you can add:
proxy_ignore_headers Vary;
You already have a proxy_ignore_headers directive, so you can probably just add Vary to it.
Since all major browsers support gzip, the risk is likely very low. However, webp negotiation is typically done via the Accept header, which may also appear in Vary, so ignoring Vary could create surprising results for some images if the origin can serve webp.
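For example, the existing directive in the location block could be extended like this (a minimal sketch of the merged line):
# Ignoring Vary means one cached copy is shared regardless of Accept-Encoding
proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie" "Vary";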

TLDR: Request Header Accept-Encoding matters.
Consider its normal value: Accept-Encoding: gzip, deflate, br.
If you change it to Accept-Encoding: gzip, deflate, lolkek, nginx will store the cached response in a different file. Those two files (under /var/cache/nginx/) will have the same content but different names.
The same issue: https://trac.nginx.org/nginx/ticket/1840
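To see the effect in practice, you can send the same request with two different Accept-Encoding values and watch the cache status header (a rough sketch; substitute your own host and path for example.com):
# Each distinct Accept-Encoding value warms its own cached copy, so both of these start as MISS
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip, deflate' https://example.com/js/catalog.js | grep -i x-proxy-cache
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip, deflate, br' https://example.com/js/catalog.js | grep -i x-proxy-cache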

Related

Mod_security - help needed

I need help configuring mod_security. I installed the component in the Joomla CMS and one of its functions does not work; I think the mod_security configuration is at fault, but I can't figure out the configuration myself.
Can anyone suggest how to configure mod_security so that the following error does not appear?
Thank you for any suggestions.
Greetings,
Mariusz
--70c75e19-H--
Apache-Handler: application/x-httpd-php
Stopwatch: 1576492965429719 55389 (- - -)
Stopwatch2: 1576492965429719 55389; combined=3, p1=0, p2=0, p3=0, p4=0, p5=3, sr=0, sw=0, l=0, gc=0
Producer: ModSecurity for Apache/2.9.2 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips
--70c75e19-Z--
--f4ebab2d-A--
[16/Dec/2019:11:43:18 +0100] Xfdfxt4HssQCZkLHjRnp17QAAAAA 192.168.11.19 54334 222.222.222.222 443
--f4ebab2d-B--
POST /administrator/index.php?option=com_arismartbook&task=ajaxOrderUp&categoryId=16 HTTP/1.1
Host: test2.tld.pl
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:71.0) Gecko/20100101 Firefox/71.0
Accept: */*
Accept-Language: pl,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate, br
X-Requested-With: XMLHttpRequest
Content-Type: application/x-www-form-urlencoded
Content-Length: 119
Origin: https://test2.tld.pl
Connection: keep-alive
Referer: https://test2.tld.pl/administrator/index.php?option=com_arismartbook&view=categories
Cookie: wf_browser_dir=eczasopisma/Akcent/Rok_2014/nr1; cookieconsent_status=dismiss; e8c44a98a762cac37a9dc36fe9daa126=pl0h5nbstk465jhgoohn1kc5m9; joomla_user_state=logged_in; b00894f58bebfc7f2swE6f509b1869a6=dvkignad5f8r2fn9mkt1v5kca6
--f4ebab2d-F--
HTTP/1.1 500 Internal Server Error
X-Content-Type-Options: nosniff
X-Powered-By: PHP/7.2.25
Cache-Control: no-cache
Pragma: no-cache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
--f4ebab2d-H--
Apache-Handler: application/x-httpd-php
Stopwatch: 1576492998778551 50173 (- - -)
Stopwatch2: 1576492998778551 50173; combined=3, p1=0, p2=0, p3=0, p4=0, p5=3, sr=0, sw=0, l=0, gc=0
Producer: ModSecurity for Apache/2.9.2 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips
--f4ebab2d-Z--

Firefox web push "Invalid URL endpoint"

I'm trying to send a web push notification to Firefox:
curl -i -X PUT https://updates.push.services.mozilla.com/push/gAAAAABW5EzHyop8VZSH2jm9LJ7W8ybH3ISlbZHDGnd4RwW7h2Jb0IGTuSsP2BCoBxl0kJp-kXXL164xNzhxkTEztP1-IqVf9040VOEuy_htb1nnp-24W-RGgWgjtGK1kZYAb1k3xmAS
HTTP/1.1 400 Bad Request
Access-Control-Allow-Headers: content-encoding,encryption,crypto-key,ttl,encryption-key,content-type,authorization
Access-Control-Allow-Methods: POST,PUT
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: location,www-authenticate
Content-Type: application/json
Date: Tue, 15 Mar 2016 13:04:44 GMT
Server: cyclone/1.1
Content-Length: 51
Connection: keep-alive
{"errno": 102, "code": 400, "error": "Bad Request"}
Does it mean that I have an invalid registration id stored in my database and should remove it?
The endpoint URL doesn't seem valid; it's usually something like https://updates.push.services.mozilla.com/push/v1/SOME_LONG_ID (note the v1 segment, which your URL doesn't contain).
Indeed, this works:
curl -i -X PUT https://updates.push.services.mozilla.com/push/v1/gAAAAABW5EzHyop8VZSH2jm9LJ7W8ybH3ISlbZHDGnd4RwW7h2Jb0IGTuSsP2BCoBxl0kJp-kXXL164xNzhxkTEztP1-IqVf9040VOEuy_htb1nnp-24W-RGgWgjtGK1kZYAb1k3xmAS
Note that you might want to add the TTL header, otherwise your request might fail (you just need -H "TTL: 60"): https://blog.mozilla.org/services/2016/02/20/webpushs-new-requirement-ttl-header/.
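Putting both points together, the request would look roughly like this (the TTL value of 60 is just an example, and SOME_LONG_ID stands for your real subscription id):
curl -i -X PUT -H "TTL: 60" https://updates.push.services.mozilla.com/push/v1/SOME_LONG_ID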

Ruby http, net/http, httpclient: can't parse www.victoriassecret.com

I am using the httpclient gem. It works fine on Windows, but after moving to AWS EC2 and trying it on https://victoriassecret.com I get this response:
= Response
HTTP/1.1 920 Unknown
Content-Type: text/html
Date: Wed, 21 Oct 2015 21:42:51 GMT
Connection: Keep-Alive
Content-Length: 23
<h1>File not found</h1>#<HTTP::Message:0x000000023f5168
#http_body=
#<HTTP::Message::Body:0x000000023f50a0
#body="<h1>File not found</h1>",
#chunk_size=nil,
#positions=nil,
#size=0>,
#http_header=
#<HTTP::Message::Headers:0x000000023f5140
#body_charset=nil,
#body_date=nil,
#body_encoding=#<Encoding:ASCII-8BIT>,
#body_size=0,
#body_type=nil,
#chunked=false,
#dumped=false,
#header_item=
[["Content-Type", "text/html"],
["Date", "Wed, 21 Oct 2015 21:42:51 GMT"],
["Connection", "Keep-Alive"],
["Content-Length", "23"]],
#http_version="1.1",
#is_request=false,
#reason_phrase="Unknown",
#request_absolute_uri=nil,
#request_method="GET",
#request_query=nil,
#request_uri=
#<URI::HTTPS:0x000000023f58c0 URL:https://www.victoriassecret.com/pink/new-and-now>,
#status_code=920>,
#peer_cert=
#<OpenSSL::X509::Certificate: subject=#<OpenSSL::X509::Name:0x000000024ebe00>, issuer=#<OpenSSL::X509::Name:0x000000024ebec8>, serial=#<OpenSSL::BN:0x000000024de110>, not_before=2015-05-27 00:00:00 UTC, not_after=2017-05-26 23:59:59 UTC>,
#previous=nil>
It fails only with this website; httpclient get https://google.com, for example, works fine. On Windows I get a normal response from httpclient get https://www.victoriassecret.com, but when using the standard Net::HTTP library I get the same 920 response on Windows as well.
This isn't EC2-related. It's most likely related to the User-Agent header sent by the various HTTP library implementations.
For example, they clearly don't like 'wget':
curl -A "Wget/1.13.4 (linux-gnu)" -v https://www.victoriassecret.com
* Rebuilt URL to: https://www.victoriassecret.com/
* Trying 98.158.54.100...
* Connected to www.victoriassecret.com (98.158.54.100) port 443 (#0)
* TLS 1.2 # truncated
> GET / HTTP/1.1
> Host: www.victoriassecret.com
> User-Agent: Wget/1.13.4 (linux-gnu)
> Accept: */*
>
< HTTP/1.1 910 Unknown
< Content-Type: text/html
< Date: Thu, 22 Oct 2015 01:16:31 GMT
< Connection: Keep-Alive
< Content-Length: 23
<
* Connection #0 to host www.victoriassecret.com left intact
<h1>File not found</h1>%

Nginx mod_rewrite $request_uri manipulation

I would like to do some redirects that also involve the $args.
I am trying the following:
rewrite /aaa?a=1&aa=2 /bbb?b=1&bb=2 permanent;
But it does not work. The line below works fine, though
rewrite /aaa /bbb permanent;
I added those lines to my config file:
proxy_set_header x-request_uri "$request_uri";
proxy_set_header x-args "$args";
And I can see those headers:
GET /aaa?a=1&aa=2 HTTP/1.0
Host: www.example.com
x-request_uri: /aaa?a=1&aa=2
x-args: a=1&aa=2
Connection: close
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Accept: */*
What am I doing wrong? Is there a way to accomplish a redirect that takes the full $request_uri into consideration?
I got the answer on irc.freenode.net #nginx:
rewrite does not match against the URL with args, only without; use if or map instead.
I managed to get it working with if (a sketch of the map alternative follows at the end of this answer):
if ( $request_uri = '/aaa?a=1&aa=2' ) {
    return 301 $scheme://$host/bbb?b=1&bb=2;
}
Response header:
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.0.15
< Date: Wed, 02 Jul 2014 20:05:34 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: http://www.example.com/bbb?b=1&bb=2
< x-uri: /aaa?a=1&aa=2
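For reference, the map approach mentioned on IRC would look roughly like this (just a sketch; $redirect_target is a hypothetical variable name, and the map block belongs in the http context):
map $request_uri $redirect_target {
    default           "";
    "/aaa?a=1&aa=2"   "/bbb?b=1&bb=2";
}
# inside the server block:
if ($redirect_target) {
    return 301 $scheme://$host$redirect_target;
}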

Apache2 is changing my content type for a Ruby cgi script

I have a Ruby CGI script which writes its output like this:
cgi.out("Cache-Control" => "no-cache, must-revalidate",
"type" => "text/html",
"charset" => "UTF-8") {
template.result(binding)
}
Unfortunately, when I view the headers from cURL, I see the following:
< HTTP/1.1 200 OK
< Date: Sun, 23 Aug 2009 09:48:03 GMT
< Server: Apache/2.2.11 (Ubuntu) DAV/2 SVN/1.5.4 PHP/5.2.6-3ubuntu4.1 with Suhosin-Patch mod_ssl/2.2.11 OpenSSL/0.9.8g
< 5541-Content-Type: text/html; charset=UTF-8
< Cache-Control: no-cache, must-revalidate
< Content-Length: 2495
< Cache-Control: max-age=86400
< Expires: Mon, 24 Aug 2009 09:48:03 GMT
< Content-Type: application/x-ruby
It's renaming my Content-Type, and adding a second cache control header. Clearly I have something misconfigured.
Turns out I had a debugging 'print' statement executing before the cgi.out() line, which caused a bit of text to prefix the headers.
