ncat: piping multiline request breaks HTTP - bash

I just started using ncat, and while playing around with simple HTTP requests I came across the following:
Starting ncat and typing a two-line get request works fine:
$ ncat 192.168.56.20 80
GET / HTTP/1.1
Host: 192.168.56.20
HTTP/1.1 200 OK
If, however, the request is echoed to ncat, it apparently breaks somewhere:
$ echo 'GET / HTTP/1.1\nHost: 192.168.56.20' | ncat 192.168.56.20 80
HTTP/1.1 400 Bad Request
I don't get it.

The \n in the string is sent literally: bash's echo does not interpret backslash escapes by default. Use echo -e to enable them. Also, the end-of-line sequence for HTTP/1.1 is \r\n (CRLF), and the header section must be terminated by an additional empty line.
Try:
echo -e 'GET / HTTP/1.1\r\nHost: 192.168.56.20\r\n\r\n' | ncat 192.168.56.20 80
Alternatively, ncat has an option to convert newlines to CRLF:
-C, --crlf Use CRLF for EOL sequence
Hence, you can write:
echo -e 'GET / HTTP/1.1\nHost: 192.168.56.20\n\n' | ncat -C 192.168.56.20 80
and you should get the same result.
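A more portable alternative to echo -e is printf, which interprets backslash escapes by default. As a quick local sanity check (od is only used here to reveal the bytes; the host address is the one from the question):

```shell
# printf expands \r and \n itself, no -e flag needed; od -c shows the
# exact bytes that will go down the pipe, CRs and LFs included
printf 'GET / HTTP/1.1\r\nHost: 192.168.56.20\r\n\r\n' | od -c | head -2
```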

Related

How to pipe header information of a httpie request in bash?

I am making a HEAD request against this file location using httpie:
$ http HEAD https://dbeaver.io/files/dbeaver-ce_latest_amd64.deb
HTTP/1.1 302 Moved Temporarily
Connection: keep-alive
Content-Length: 169
Content-Type: text/html
Date: Mon, 09 Sep 2019 14:55:56 GMT
Location: https://dbeaver.io/files/6.2.0/dbeaver-ce_6.2.0_amd64.deb
Server: nginx/1.4.6 (Ubuntu)
I am only interested in the Location header, as I want to store its value in a file to see if the target was updated.
I tried:
http HEAD https://dbeaver.io/files/dbeaver-ce_latest_amd64.deb \
| grep Location \
| sed "s/Location: //"
yet this yields an empty response.
I assume the output goes to stderr instead of stdout, though I don't really want to combine stdout and stderr for this.
I am rather looking for a solution directly with the http command.
You are missing the --headers option:
http HEAD https://dbeaver.io/files/dbeaver-ce_latest_amd64.deb \
--headers \
| grep Location \
| sed "s/Location: //"
will as of this writing print:
https://dbeaver.io/files/6.2.0/dbeaver-ce_6.2.0_amd64.deb
Furthermore, your assumption that httpie redirects to stderr is wrong. What you are seeing is the default behaviour of the --print option, which changes automatically depending on whether httpie's output is piped:
--print WHAT, -p WHAT
String specifying what the output should contain:
'H' request headers
'B' request body
'h' response headers
'b' response body
The default behaviour is 'hb' (i.e., the response headers and body
is printed), if standard output is not redirected. If the output is piped
to another program or to a file, then only the response body is printed
by default.
The --headers/-h option is merely a shortcut for --print=h.
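Since HTTP header lines are CRLF-terminated, it can be safer to strip the trailing \r before matching. A minimal local sketch, simulating the response headers from the question with printf (in real use, the printf would be replaced by the http --headers HEAD … call):

```shell
# simulated response headers (Location value from the question);
# tr removes the CR at the end of each header line, sed prints the bare value
printf 'HTTP/1.1 302 Moved Temporarily\r\nLocation: https://dbeaver.io/files/6.2.0/dbeaver-ce_6.2.0_amd64.deb\r\nServer: nginx\r\n' \
  | tr -d '\r' \
  | sed -n 's/^Location: //p'
# → https://dbeaver.io/files/6.2.0/dbeaver-ce_6.2.0_amd64.deb
```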

Extract substring in bash from a file with pattern location from a given position to a special character

I need to get a nonce from an HTTP service.
I am using curl and later openssl to calculate the SHA-1 of that nonce,
but for that I need to get the nonce into a variable.
1 step (done)
curl --user username:password -v -i -X POST http://192.168.0.202:8080/RPC3 -o output.txt -d @initial.txt
and now the output file output.txt holds the HTTP response:
HTTP/1.1 401 Unauthorized
Server: WinREST HTTP Server/1.0
Connection: Keep-Alive
Content-Length: 89
WWW-Authenticate: ServiceAuth realm="WinREST", nonce="/wcUEQOqUEoS64zKDHEUgg=="
<html><head><title>Unauthorized</title></head><body>Error 401: Unauthorized</body></html>
I have to find the position of nonce=" and extract everything up to the closing " character.
How can I get the value of the nonce into a bash variable?
Pretty simple with grep using the -o/--only-matching and -P/--perl-regexp options (available in GNU grep):
$ grep -oP 'nonce="\K[^"]+' output.txt
/wcUEQOqUEoS64zKDHEUgg==
The -o option prints only the matched part, which would normally include nonce=" had we not used \K, the reset-match-start escape sequence available in PCRE.
Additionally, if your output.txt (i.e. server response) can contain more than one nonce, and you are interested in only reading the first one, you can use the -m1 option (as Glenn suggests):
$ grep -oPm1 'nonce="\K[^"]+' output.txt
To store that nonce in a variable, simply use command substitution; or just pass it through openssl sha1 to get that digest you need:
$ nonce=$(grep -oPm1 'nonce="\K[^"]+' output.txt)
$ echo "$nonce"
/wcUEQOqUEoS64zKDHEUgg==
$ read hash _ <<<"$(grep -oPm1 'nonce="\K[^"]+' output.txt | openssl sha1 -r)"
$ echo "$hash"
2277ef32822c37b5c2b1018954f750163148edea
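Where GNU grep's -P is unavailable (e.g. macOS or BusyBox), plain bash parameter expansion can extract the nonce without any external tools; a sketch using the header line from the question:

```shell
# header line taken from the question's server response
line='WWW-Authenticate: ServiceAuth realm="WinREST", nonce="/wcUEQOqUEoS64zKDHEUgg=="'

tmp=${line#*nonce=\"}   # drop everything up to and including nonce="
nonce=${tmp%%\"*}       # drop everything from the next double quote on
echo "$nonce"           # → /wcUEQOqUEoS64zKDHEUgg==
```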
You can use GNU sed for this as below :
ubuntu$ cat output.txt
HTTP/1.1 401 Unauthorized
Server: WinREST HTTP Server/1.0
Connection: Keep-Alive
Content-Length: 89
WWW-Authenticate: ServiceAuth realm="WinREST", nonce="/wcUEQOqUEoS64zKDHEUgg=="
<html><head><title>Unauthorized</title></head><body>Error 401: Unauthorized</body></html>
ubuntu$ sed -E -n 's/.*nonce="([^"]+)".*/\1/p' output.txt
/wcUEQOqUEoS64zKDHEUgg==

How to combine two lines from the bash prompt?

I am trying to craft a script that performs curl requests against web servers and parses out the "Server" and "Location" headers, so I can easily import the result into my Excel tables without reformatting.
My current script:
curl -sD - -o /dev/null -A "Mozilla/4.0" http://site/ | sed -e '/Server/p' -e '/Location/!d' | paste - -
Expected/Desired output:
Server: Apache Location: http://www.site
Current output:
From curl:
HTTP/1.1 301 Moved permanently
Date: Sun, 16 Nov 2014 20:14:01 GMT
Server: Apache
Set-Cookie: USERNAME=;path=/
Set-Cookie: CFID=16581239;path=/
Set-Cookie: CFTOKEN=32126621;path=/
Location: http://www.site
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
Piped into 'sed':
Server: Apache
Location: http://www.site
Piped into 'paste':
Server: Location: http://www.site
Why does paste immediately 'paste' after the first space? How do I get it to format correctly? I'm open to other methods, but keep in mind, the responses from the 'curl' request will be different lengths.
Thanks
The output of curl contains carriage return (\r) characters, which cause that behaviour.
curl -sD - -o /dev/null -A "Mozilla/4.0" http://site/ | tr -d '\r'| sed -e '/Server/p' -e '/Location/!d' | paste - -
tr -d '\r' filters out all carriage return characters.
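The effect is easy to reproduce locally: each CR makes the terminal jump back to column one, so the text after it appears to overwrite what came before. A sketch using the two header lines from the question:

```shell
# with the CRs intact, paste joins the lines, but the terminal renders them overlapped
printf 'Server: Apache\r\nLocation: http://www.site\r\n' | paste - -

# stripping the CRs first gives the expected tab-separated output
printf 'Server: Apache\r\nLocation: http://www.site\r\n' | tr -d '\r' | paste - -
```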
About line ends
While Linux/Unix uses LF (Line Feed, \n) line endings, many other systems use CR LF (Carriage Return + Line Feed, \r\n). That can cause weird-looking results unless you are prepared for it. Let's look at some examples without \r, and then the same with \r.
Concatenation of strings:
a=$(echo -e "Please notice don't delete your files in /<config_dir> ")
b=$(echo -e "without hesitation ")
echo "$a""$b"
Result:
Please notice don't delete your files in /<config_dir> without hesitation
We get somewhat different result if lines end with CR LF:
a=$(echo -e "Please notice don't delete your files in /<config_dir> \r")
b=$(echo -e "without hesitation \r")
echo "$a""$b"
Result:
without hesitation delete your files in /<config_dir>
What might happen with programs that modify text only when the matching string is at the end of a line?
Let's remove "ny" when it appears at the end of a line:
echo "Stackoverflow is funny" | sed 's/ny$//g'
Result:
Stackoverflow is fun
The same with a CRLF-terminated line:
echo -e "Stackoverflow is funny\r" | sed 's/ny$//g'
Result:
Stackoverflow is funny
sed works as designed, because the line does not end with "ny" but with "ny CR".
The lesson in all this is to be prepared for unexpected input data. In most cases it is a good idea to filter \r out of the data completely, since it is seldom needed for anything useful in a bash script. Filtering out unwanted characters is simple with tr:
tr -d '\r'

Send text file, line by line, with netcat

I'm trying to send a file, line by line, with the following commands:
nc host port < textfile
cat textfile | nc host port
I've tried with tail and head, but with the same result: the entire file is sent as a unique line.
The server is listening with a specific daemon to receive data log information.
I'd like to send and receive the lines one by one, not the whole file in a single shot.
How can I do that?
Do you HAVE TO use netcat?
cat textfile > /dev/tcp/HOST/PORT
can also serve your purpose, at least with bash.
I'd like to send and receive the lines one by one, not the whole file in a single shot.
Try
while IFS= read -r x; do echo "$x" | nc host port; done < textfile
OP was unclear on whether they needed a new connection for each line. But based on the OP's comment here, I think their need is different than mine. However, Google sends people with my need here so here is where I will place this alternative.
I have a need to send a file line by line over a single connection. Basically, it's a "slow" cat. (This will be a common need for many "conversational" protocols.)
If I try to cat an email message to nc I get an error because the server can't have a "conversation" with me.
$ cat email_msg.txt | nc localhost 25
554 SMTP synchronization error
Now if I insert a slowcat into the pipe, I get the email.
$ function slowcat(){ while read; do sleep .05; echo "$REPLY"; done; }
$ cat email_msg.txt | slowcat | nc localhost 25
220 et3 ESMTP Exim 4.89 Fri, 27 Oct 2017 06:18:14 +0000
250 et3 Hello localhost [::1]
250 OK
250 Accepted
354 Enter message, ending with "." on a line by itself
250 OK id=1e7xyA-0000m6-VR
221 et3 closing connection
The email_msg.txt looks like this:
$ cat email_msg.txt
HELO localhost
MAIL FROM:<system@example.com>
RCPT TO:<bbronosky@example.com>
DATA
From: [IES] <system@example.com>
To: <bbronosky@example.com>
Date: Fri, 27 Oct 2017 06:14:11 +0000
Subject: Test Message
Hi there! This is supposed to be a real email...
Have a good day!
-- System
.
QUIT
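Note that slowcat is just a paced cat, so apart from timing its output is byte-identical to its input. A variant using IFS= read -r (so leading whitespace and backslashes in the message survive unmangled), offered as a sketch:

```shell
# a paced cat: same bytes out as in, just slower
slowcat() { while IFS= read -r line; do sleep 0.05; printf '%s\n' "$line"; done; }

# demo: the output matches the input, only delayed
printf 'HELO localhost\nQUIT\n' | slowcat
```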
Use stdbuf -oL to adjust standard output stream buffering. With MODE 'L', the corresponding stream is line-buffered:
stdbuf -oL cat textfile | nc host port
Just guessing here, but you probably need CRLF line endings:
sed $'s/$/\r/' textfile | nc host port
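You can verify locally that this sed really appends a CR to every line by counting bytes ($'…' is bash quoting that turns \r into a literal carriage return before sed runs, so it also works with BSD sed):

```shell
# two one-character lines gain one CR each: 4 bytes in, 6 bytes out
printf 'a\nb\n' | sed $'s/$/\r/' | wc -c
```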

Scripting an HTTP header request with netcat

I'm trying to play around with netcat to learn more about how HTTP works. I'd like to script some of it in bash or Perl, but I've hit upon a stumbling block early on in my testing.
If I run netcat straight from the prompt and type in a HEAD request, it works and I receive the headers for the web server I'm probing.
This works:
[romandas@localhost ~]$ nc 10.1.1.2 80
HEAD / HTTP/1.0
HTTP/1.1 200 OK
MIME-Version: 1.0
Server: Edited out
Content-length: 0
Cache-Control: public
Expires: Sat, 01 Jan 2050 18:00:00 GMT
[romandas@localhost ~]$
But when I put the same information into a text file and feed it to netcat through a pipe or via redirection, in preparation for scripting, it doesn't return the headers.
The text file consists of the HEAD request and two newlines:
HEAD / HTTP/1.0
Sending the same information via echo or printf doesn't work either.
$ printf "HEAD / HTTP/1.0\r\n" | nc -n 10.1.1.2 80
$ /bin/echo -ne 'HEAD / HTTP/1.0\n\n' |nc 10.1.1.2 80
Any ideas what I'm doing wrong? Not sure if it's a bash problem, an echo problem, or a netcat problem.
I checked the traffic via Wireshark, and the successful request (manually typed) sends the trailing newline in a second packet, whereas the echo, printf, and text file methods keep the newline in the same packet, but I'm not sure what causes this behavior.
You need two lots of "\r\n", and also to tell netcat to wait for a response. printf "HEAD / HTTP/1.0\r\n\r\n" |nc -n -i 1 10.1.1.2 80 or similar should work.
Another way is to use what is called the 'heredoc' convention.
$ nc -n -i 1 10.1.1.2 80 <<EOF
> HEAD / HTTP/1.0
>
> EOF
Another way to get nc to wait for the response is to add a sleep to the input. e.g.
(printf 'GET / HTTP/1.0\r\n\r\n'; sleep 1) | nc HOST 80
You can use the following netcat command to turn your machine into a simple web server:
MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
Alternatively, this single line also works for the HEAD request:
echo -e "HEAD / HTTP/1.1\nHost: 10.1.1.2\nConnection: close\n\n\n\n" | netcat 10.1.1.2 80
