Nginx mod_rewrite $request_uri manipulation

I would like to do some redirects involving the $args.
I am trying to do the following:
rewrite /aaa?a=1&aa=2 /bbb?b=1&bb=2 permanent;
But it does not work. The line below works fine, though
rewrite /aaa /bbb permanent;
I added those lines to my config file:
proxy_set_header x-request_uri "$request_uri";
proxy_set_header x-args "$args";
And I can see those headers:
GET /aaa?a=1&aa=2 HTTP/1.0
Host: www.example.com
x-request_uri: /aaa?a=1&aa=2
x-args: a=1&aa=2
Connection: close
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Accept: */*
What am I doing wrong? Is there a way to accomplish the redirect taking the full $request_uri into consideration?

I got the answer on irc.freenode.net #nginx:
rewrite does not match against the URL with args, only without; use if or map instead.
I managed to get it working with if:
if ($request_uri = '/aaa?a=1&aa=2') {
    return 301 $scheme://$host/bbb?b=1&bb=2;
}
Response header:
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.0.15
< Date: Wed, 02 Jul 2014 20:05:34 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: http://www.example.com/bbb?b=1&bb=2
< x-uri: /aaa?a=1&aa=2
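For completeness, the map approach mentioned on IRC would look roughly like this (a sketch only; the $new_uri variable name is mine, the map block goes at http level and the if inside the server block):
map $request_uri $new_uri {
    default          "";
    "/aaa?a=1&aa=2"  "/bbb?b=1&bb=2";
}
# inside the server block:
if ($new_uri) {
    return 301 $scheme://$host$new_uri;
}
This scales better than a chain of if blocks once there are more than a couple of redirects to maintain.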

Related

Mod_security - help needed

I need help configuring mod_security. I installed a component in the Joomla CMS, and one of its functions does not work. I think it's due to the mod_security configuration, but I can't get the configuration right.
Can anyone suggest how to configure mod_security so that the following error does not appear?
Thank you for any suggestions
greetings
Mariusz
--70c75e19-H--
Apache-Handler: application/x-httpd-php
Stopwatch: 1576492965429719 55389 (- - -)
Stopwatch2: 1576492965429719 55389; combined=3, p1=0, p2=0, p3=0, p4=0, p5=3, sr=0, sw=0, l=0, gc=0
Producer: ModSecurity for Apache/2.9.2 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips
--70c75e19-Z--
--f4ebab2d-A--
[16/Dec/2019:11:43:18 +0100] Xfdfxt4HssQCZkLHjRnp17QAAAAA 192.168.11.19 54334 222.222.222.222 443
--f4ebab2d-B--
POST /administrator/index.php?option=com_arismartbook&task=ajaxOrderUp&categoryId=16 HTTP/1.1
Host: test2.tld.pl
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:71.0) Gecko/20100101 Firefox/71.0
Accept: */*
Accept-Language: pl,en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate, br
X-Requested-With: XMLHttpRequest
Content-Type: application/x-www-form-urlencoded
Content-Length: 119
Origin: https://test2.tld.pl
Connection: keep-alive
Referer: https://test2.tld.pl/administrator/index.php?option=com_arismartbook&view=categories
Cookie: wf_browser_dir=eczasopisma/Akcent/Rok_2014/nr1; cookieconsent_status=dismiss; e8c44a98a762cac37a9dc36fe9daa126=pl0h5nbstk465jhgoohn1kc5m9; joomla_user_state=logged_in; b00894f58bebfc7f2swE6f509b1869a6=dvkignad5f8r2fn9mkt1v5kca6
--f4ebab2d-F--
HTTP/1.1 500 Internal Server Error
X-Content-Type-Options: nosniff
X-Powered-By: PHP/7.2.25
Cache-Control: no-cache
Pragma: no-cache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
--f4ebab2d-H--
Apache-Handler: application/x-httpd-php
Stopwatch: 1576492998778551 50173 (- - -)
Stopwatch2: 1576492998778551 50173; combined=3, p1=0, p2=0, p3=0, p4=0, p5=3, sr=0, sw=0, l=0, gc=0
Producer: ModSecurity for Apache/2.9.2 (http://www.modsecurity.org/); OWASP_CRS/2.2.9.
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips
--f4ebab2d-Z--

curl syntax in GET based HTTP logins

For practice purposes I decided to create a simple bruteforcing bash script, which I successfully used to solve DVWA. I then moved on to IoT, namely my old IP camera. This is my code as of now:
#!/bin/bash
if [ "${##}" != "2" ]; then
echo "<command><host><path>"
exit
fi
ip=$1
path=$2
for name in $(cat user.txt); do
for pass in $(cat passwords.txt); do
echo ${name}:${pass}
res="$(curl -si ${name}:${pass}#${ip}${path})"
check=$(echo "$res" | grep "HTTP/1.1 401 Unauthorised")
if [ "$check" != '' ]; then
tput setaf 1
echo "[FAILURE]"
tput sgr0
else
tput setaf 2
echo "[SUCCESS]"
tput sgr0
exit
fi
sleep .1
done;
done;
Despite obvious flaws - like reporting success in case of network failure - it's as good as my 20-minute coding jobs get. However, I can't seem to get the curl command syntax quite right. The camera in question is a simple Axis, running cramFS and a small scripting OS. Its login form is similar to a lot of publicly available cameras' login forms, like the ones found here, here or here. A simple GET, yet I feel like I'm bashing my head against a wall. Any bit of a hint will be madly appreciated at this point.
I've taken the liberty of pasting the contents of the first GET request:
GET /operator/basic.shtml?id=478 HTTP/1.1
Host: <target_host_ip>
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US;q=0.7,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://<target_host_ip>/view/view.shtml?id=282&imagepath=%2Fmjpg%2Fvideo.mjpg&size=1
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Authorization: Digest username="root", realm="AXIS_ACCC8E4A2177", nonce="w3PH7XVmBQA=32dd7cd6ab72e0142e2266eb2a68f59e92995033", uri="/operator/basic.shtml?id=478", algorithm=MD5, response="025664e1ba362ebbf9c108b1acbcae97", qop=auth, nc=00000001, cnonce="a7e04861c3634d3b"
The packet sent in return is a simple, dry 401.
PS: Any powers that be - feel free to remove the IPs if they violate anything. Also feel free to point out grammar/spelling etc. mistakes, since my C2 exam is coming up.
It looks like those cameras don't simply use "Basic" HTTP auth with a base64 encoded username:password combo, but use digest authentication which involves a bit more.
Luckily, with cURL this just means you need to specify --digest on the command line to handle it properly.
Test the sequence of events yourself using:
curl --digest http://user:password@example.com/digest-url/
You should see something similar to:
* Trying example.com...
* Connected to example.com (x.x.x.x) port 80 (#0)
* Server auth using Digest with user 'admin'
> GET /view/viewer_index.shtml?id=1323 HTTP/1.1
> Host: example.com
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Wed, 08 Nov 1972 17:30:37 GMT
< Accept-Ranges: bytes
< Connection: close
< WWW-Authenticate: Digest realm="AXIS_MACADDR", nonce="00b035e7Y417961b2083fae7e4b2c4053e39ef8ba0b65b", stale=FALSE, qop="auth"
< WWW-Authenticate: Basic realm="AXIS_MACADDR"
< Content-Length: 189
< Content-Type: text/html; charset=ISO-8859-1
<
* Closing connection 0
* Issue another request to this URL: 'http://admin:admin2@example.com/view/viewer_index.shtml?id=1323'
* Server auth using Digest with user 'admin'
> GET /view/viewer_index.shtml?id=1323 HTTP/1.1
> Host: example.com
> Authorization: Digest username="admin", realm="AXIS_MACADDR", nonce="00b035e7Y417961b2083fae7e4b2c4053e39ef8ba0b65b", uri="/view/viewer_index.shtml?id=1323", cnonce="NWIxZmY1YzA3NmY3ODczMDA0MDg4MTUwZDdjZmE0NGI=", nc=00000001, qop=auth, response="3b03254ef43bc4590cb00ba32defeaff"
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Wed, 08 Nov 1972 17:30:37 GMT
< Accept-Ranges: bytes
< Connection: close
* Authentication problem. Ignoring this.
< WWW-Authenticate: Digest realm="AXIS_MACADDR", nonce="00b035e8Y8232884a74ee247fc1cc42cab0cdf59839b6f", stale=FALSE, qop="auth"
< WWW-Authenticate: Basic realm="AXIS_MACADDR"
< Content-Length: 189
< Content-Type: text/html; charset=ISO-8859-1
<
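Once the handshake works from the command line, the inner loop of the script in the question can be adapted along the same lines. A minimal sketch (the host/path/credential variables and the assumption that anything other than a final 401 means a correct login are mine, not from the camera's documentation):
#!/bin/bash
ip="$1"
path="$2"
name="$3"
pass="$4"
# --digest answers the WWW-Authenticate: Digest challenge;
# -s silences progress, -o /dev/null discards the body,
# -w '%{http_code}' prints only the final status code.
code=$(curl -s --digest -o /dev/null -w '%{http_code}' \
    -u "${name}:${pass}" "http://${ip}${path}")
if [ "$code" != "401" ]; then
    echo "[SUCCESS] ${name}:${pass} (HTTP ${code})"
else
    echo "[FAILURE] ${name}:${pass}"
fi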

nginx proxy_cache key hash change on each another browser request

I created an nginx config like:
proxy_cache_path /tmp/nginx/static levels=1:2 keys_zone=static_zone:10m inactive=10d use_temp_path$
proxy_cache_key "$request_uri$args";
location ~* \.(css|gif|ico|jpe?g|js(on)?|png|svg|webp|ttf|woff|woff2|txt|map)$ {
    proxy_hide_header Date;
    proxy_cache_revalidate on;
    proxy_pass http://static:8080;
    proxy_cache_bypass $cookie_nocache $arg_nocache;
    proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
    proxy_hide_header "Set-Cookie";
    proxy_buffering on;
    proxy_cache static_zone;
    proxy_cache_valid 200 301 302 30m;
    proxy_cache_valid 404 10m;
    #expires max;
    add_header X-Proxy-Cache $upstream_cache_status;
    access_log off;
    add_header Cache-Control "public";
    add_header Pragma "public";
    expires 30d;
    log_not_found off;
    tcp_nodelay off;
}
On the first request from Chrome everything works as expected: x-proxy-cache: MISS, then subsequent requests are served from the disk cache with the header x-proxy-cache: HIT. After a refresh it's still HIT. But when I open the page from other browsers (Opera, Edge) on this machine, the request is a MISS. On the file system nginx creates two files with different md5 hashes for the same content. For example, the file 438476ac40665c852d3acde1acf769f1 starts with:
^C^#^#^#^#^#^#^#/^V
W^#^#^#^#��^CW^#^#^#^#'^O
W^#^#^#^#m�,�^#^#�^#�^A^N"5703e3a7-67e"^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#$
KEY: /js/catalog.js
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 12 Apr 2016 15:07:19 GMT
Content-Type: application/javascript
Content-Length: 1662
Last-Modified: Tue, 05 Apr 2016 16:11:19 GMT
Connection: close
Vary: Accept-Encoding
ETag: "5703e3a7-67e"
Accept-Ranges: bytes
The second file, a6f57423c2220fba3ada5f516f6dd91c, has the same content and this head:
^C^#^#^#^#^#^#^# ^V
W^#^#^#^#��^CW^#^#^#^#^A^O
W^#^#^#^#m�,�^#^#�^#�^A^N"5703e3a7-67e"^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#$
KEY: /js/catalog.js
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 12 Apr 2016 15:06:41 GMT
Content-Type: application/javascript
Content-Length: 1662
Last-Modified: Tue, 05 Apr 2016 16:11:19 GMT
Connection: close
Vary: Accept-Encoding
ETag: "5703e3a7-67e"
Accept-Ranges: bytes
According to the documentation the file name must be the md5 of the key, and indeed echo -n '/js/catalog.js' | md5sum gives a6f57423c2220fba3ada5f516f6dd91c, the name of one of the files (from the first request). I don't want the server to cache js/css per user/browser; I just want it cached once and served from the cache for all users' requests. P.S. My site uses https and http2; the nginx version is 1.9.14.
Based on the Vary: Accept-Encoding header that's there, I would guess that Edge and Opera send different "Accept-Encoding" headers for the request. For example, one may simply send "gzip" while the other sends "gzip, deflate". Those are technically different Accept-Encoding request headers.
If you know that the origin won't send meaningfully different encodings between browsers, you can add:
proxy_ignore_headers Vary;
You already have the proxy_ignore_headers, so you can probably just add to that.
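For example, the existing line from the config in the question would become something like:
proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie" "Vary";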
Since all major browsers support gzip, the risk is likely very low. However, "webp" is also done via the Accept-Encoding, so that could create surprising results for some images if the origin can handle webp.
TL;DR: the Accept-Encoding request header matters.
Consider its normal value: Accept-Encoding: gzip, deflate, br.
When you change it to Accept-Encoding: gzip, deflate, lolkek, nginx will store the cached response in a different file. Those two files (under /var/cache/nginx/) will have the same content but different names.
The same issue: https://trac.nginx.org/nginx/ticket/1840
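You can reproduce this yourself against the config from the question (the cache path is the one from proxy_cache_path; the host name here is a placeholder):
# same URL, two different Accept-Encoding values
curl -s -o /dev/null -H 'Accept-Encoding: gzip, deflate, br'     https://example.com/js/catalog.js
curl -s -o /dev/null -H 'Accept-Encoding: gzip, deflate, lolkek' https://example.com/js/catalog.js
# two cache entries with the same KEY but different file names show up
grep -ra "KEY:" /tmp/nginx/static/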

GZIP encoding in Jersey 2 / Grizzly

I can't activate gzip-encoding in my Jersey service. This is what I've tried:
Started out with the jersey-quickstart-grizzly2 archetype from the Getting Started Guide.
Added rc.register(org.glassfish.grizzly.http.GZipContentEncoding.class);
(have also tried rc.register(org.glassfish.jersey.message.GZipEncoder.class);)
Started with mvn exec:java
Tested with curl --compressed -v -o - http://localhost:8080/myapp/myresource
The result is the following:
> GET /myapp/myresource HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 zlib/1.2.3.4 ...
> Host: localhost:8080
> Accept: */*
> Accept-Encoding: deflate, gzip
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Date: Sun, 03 Nov 2013 08:07:10 GMT
< Content-Length: 7
<
* Connection #0 to host localhost left intact
* Closing connection #0
Got it!
That is, despite Accept-Encoding: deflate, gzip in the request, there is no Content-Encoding: gzip in the response.
What am I missing here??
You have to register the org.glassfish.jersey.server.filter.EncodingFilter as well. This example enables deflate and gzip compression:
import org.glassfish.jersey.message.DeflateEncoder;
import org.glassfish.jersey.message.GZipEncoder;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.filter.EncodingFilter;
...
private void enableCompression(ResourceConfig rc) {
    rc.registerClasses(
            EncodingFilter.class,
            GZipEncoder.class,
            DeflateEncoder.class);
}
This solution is Jersey-specific and works not only with Grizzly but with the JDK HTTP server as well.
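For reference, in the quickstart archetype this registration would typically happen where the ResourceConfig is built, before the server is created. A minimal sketch (the package name is a placeholder; the base URI matches the one used in the curl test above):
import java.net.URI;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.message.DeflateEncoder;
import org.glassfish.jersey.message.GZipEncoder;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.filter.EncodingFilter;

public class Main {
    public static final String BASE_URI = "http://localhost:8080/myapp/";

    public static HttpServer startServer() {
        // scan the package that holds the resource classes
        final ResourceConfig rc = new ResourceConfig().packages("com.example.myapp");
        // register the filter and the encoders before the server is created
        rc.registerClasses(EncodingFilter.class, GZipEncoder.class, DeflateEncoder.class);
        return GrizzlyHttpServerFactory.createHttpServer(URI.create(BASE_URI), rc);
    }

    public static void main(String[] args) {
        startServer();
    }
}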
Try code like this, which configures compression at the Grizzly level instead:
HttpServer httpServer = GrizzlyHttpServerFactory.createHttpServer(
        BASE_URI, rc, false);
CompressionConfig compressionConfig =
        httpServer.getListener("grizzly").getCompressionConfig();
compressionConfig.setCompressionMode(CompressionConfig.CompressionMode.ON); // the mode
compressionConfig.setCompressionMinSize(1); // the min amount of bytes to compress
compressionConfig.setCompressableMimeTypes("text/plain", "text/html"); // the mime types to compress
httpServer.start();

Changing HTTP status message using Sinatra

I'm writing a simple Sinatra app, and when a user posts a request with specific data, I want to return error '453' (a custom error code) with the message CLIENT_ERROR, or something similar.
The problem is: looking into the Sinatra documentation and doing some testing, I couldn't find a way to set the response status message, only the response status code.
So, if I set the Sinatra response
get '/' do
  response.status = 453
end
I get the error code right:
curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:4567
> Accept: */*
>
< HTTP/1.1 453
< X-Frame-Options: sameorigin
< X-XSS-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Server: thin 1.3.1 codename Triple Espresso
<
* Connection #0 to host localhost left intact
* Closing connection #0
But what I want to have is:
< HTTP/1.1 453 CLIENT_ERROR
the same way I have
< HTTP/1.1 200 OK
when everything goes according to plan.
Is there any way to do this using Sinatra/Rack?
The status message is generated by the server you are using, e.g. in Thin the messages are in Thin::HTTP_STATUS_CODES and the response line is generated in Thin::Response, while in WEBrick they are in WEBrick::HTTPStatus::StatusMessage and the response is generated in WEBrick::HTTPResponse.
If you know what server you are using, you could add your error to the appropriate hash.
With Thin:
require 'thin'
Thin::HTTP_STATUS_CODES[453] = "Client Error"
and the output:
$ curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 4567 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:4567
> Accept: */*
>
< HTTP/1.1 453 Client Error
< X-Frame-Options: sameorigin
< X-XSS-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Server: thin 1.4.1 codename Chromeo
<
* Connection #0 to host localhost left intact
* Closing connection #0
and with WEBrick:
require 'webrick'
WEBrick::HTTPStatus::StatusMessage[453] = "Client Error"
which gives the output:
$ curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 4567 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:4567
> Accept: */*
>
localhost - - [13/Aug/2012:01:41:48 BST] "GET / HTTP/1.1" 453 0
- -> /
< HTTP/1.1 453 Client Error
< X-Frame-Options: sameorigin
< X-Xss-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Server: WEBrick/1.3.1 (Ruby/1.9.3/2012-04-20)
< Date: Mon, 13 Aug 2012 00:41:48 GMT
< Connection: Keep-Alive
<
* Connection #0 to host localhost left intact
* Closing connection #0
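Putting it together with the route from the question, a minimal Thin-based sketch would be (assuming Thin is the server Sinatra picks up):
require 'sinatra'
require 'thin'

# teach Thin the custom status message before any request is served
Thin::HTTP_STATUS_CODES[453] = "Client Error"

get '/' do
  status 453
  ''
end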
I would recommend not using custom HTTP status codes. If you think you have something of general use, consider writing an Internet Draft and going through the IETF specification process.
