Monitoring URL Requests from Shell Script - bash

I need to create a shell script on macOS that monitors network traffic, and whenever a specified URL (for example, *.google.com) is hit from any browser or program, the script should prompt the user or perform some operation. Could anyone guide me on how to do this?

These environment variables set the proxy for my programs, such as curl, wget and the browser:
$ env | grep -i proxy
NO_PROXY=localhost,127.0.0.0/8,::1
http_proxy=http://138.106.75.10:3128/
https_proxy=https://138.106.75.10:3128/
no_proxy=localhost,127.0.0.0/8,::1
Here you can see that curl respects them and always connects through my proxy; in your case the proxy setting would be something like http://localhost:3128.
$ curl -vvv www.google.com
* Rebuilt URL to: www.google.com/
* Trying 138.106.75.10...
* Connected to 138.106.75.10 (138.106.75.10) port 3128 (#0)
> GET http://www.google.com/ HTTP/1.1
> Host: www.google.com
> User-Agent: curl/7.47.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 302 Found
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Referrer-Policy: no-referrer
< Location: http://www.google.se/?gfe_rd=cr&ei=3ExvWajSGa2EyAXS376oCw
< Content-Length: 258
< Date: Wed, 19 Jul 2017 12:13:16 GMT
< Proxy-Connection: Keep-Alive
< Connection: Keep-Alive
< Age: 0
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.se/?gfe_rd=cr&ei=3ExvWajSGa2EyAXS376oCw">here</A>.
</BODY></HTML>
* Connection #0 to host 138.106.75.10 left intact
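If you run the proxy locally as suggested (on localhost:3128), pointing your shell and programs at it is just a matter of exporting the variables. A minimal sketch, assuming that address:
export http_proxy=http://localhost:3128
export https_proxy=http://localhost:3128
Programs that honor these variables (curl, wget, most browsers started from that shell) will then send every request through your proxy, where your script can react to it.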
Install Apache on your machine and configure it as a forward proxy, as in the example below. The trick is to combine mod_actions and mod_proxy:
Listen 127.0.0.1:3128
<VirtualHost 127.0.0.1:3128>
    Script GET "/cgi-bin/your-script.sh"
    ProxyRequests On
    ProxyVia On
    <Proxy http://www.google.com:80>
        ProxySet keepalive=On
        Require all granted
    </Proxy>
</VirtualHost>
I have never tried it, but in theory it should work.
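As for what /cgi-bin/your-script.sh might contain: with mod_actions the script runs as a CGI handler, so the requested URL should be visible in the CGI environment. A hypothetical, untested sketch:
#!/bin/sh
# Hypothetical CGI handler invoked by the Script directive above.
# REQUEST_URI is the standard Apache CGI variable holding the requested URL.
echo "Content-Type: text/plain"
echo
case "$REQUEST_URI" in
    *google.com*)
        # Example action: write to the system log; replace with your own operation.
        logger "proxy monitor: matched request to $REQUEST_URI"
        ;;
esac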

If you want to monitor or capture network traffic, tcpdump is your friend: it requires no proxy servers or additional installs, and should work on stock macOS as well as other *nix variants.
Here's a simple script:
sudo tcpdump -ql dst host google.com | while read line; do echo "Match found"; done
The while read loop will keep running until manually terminated; replace echo "Match found" with your preferred command. Note that this will trigger multiple times per page load; you can use tcpdump -c 1 if you only want it to run until it sees relevant traffic.
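For example, a run-once variant that exits after the first matching packet could look like:
sudo tcpdump -c 1 -ql dst host google.com > /dev/null && echo "Match found"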
As Azize mentions, you could also have tcpdump write to a file in one process and monitor that file in another. incrontab is not available on macOS; you could wrap tail -f in a while read loop:
sudo tcpdump -l dst host google.com > /tmp/output &
tail -fn 1 /tmp/output | while read line; do echo "Match found"; done
There's a good similar script available on github. You can also read up on tcpdump filters if you want to make the filter more sophisticated.
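For instance, to restrict matches to web traffic only, you might extend the filter to the standard HTTP/HTTPS ports:
sudo tcpdump -ql 'dst host google.com and (dst port 80 or dst port 443)' | while read -r line; do echo "Match found"; done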

First, put the URLs you want to monitor into the sample.txt file:
FILEPATH=/tmp/url
INFILE=$FILEPATH/sample.txt
OUTFILE=$FILEPATH/url_status.txt
> "$OUTFILE"
# Note: timeout is part of GNU coreutils; on macOS install coreutils and use gtimeout.
while IFS= read -r url
do
    status=$(timeout 20s curl -Is "$url" | head -1)
    echo "$url | $status" >> "$OUTFILE"
done < "$INFILE"
grep '200' "$OUTFILE" | awk '{print $1" Url is working fine"}' > /tmp/url/working.txt
grep -v '200' "$OUTFILE" | awk '{print $1" Url is not working"}' > /tmp/url/notworking.txt
COUNT=$(wc -l < /tmp/url/notworking.txt)
if [ "$COUNT" -eq 0 ]
then
    echo "All url working fine"
else
    echo "Issue in following url"
    cat /tmp/url/notworking.txt
fi
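As a hypothetical run: if you save the script as url_check.sh (the name is arbitrary) and /tmp/url/sample.txt contains one reachable and one unreachable URL, the output would look something like:
$ cat /tmp/url/sample.txt
https://www.google.com
https://bad.example.invalid
$ ./url_check.sh
Issue in following url
https://bad.example.invalid Url is not working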

Related

Error "write(stdout): Broken pipe" when using GNU netcat with named pipe

I have a simple shell script sv that responds to HTTP requests with a short message:
#!/bin/sh
echo_crlf() {
    printf '%s\r\n' "$@"
}
respond() {
    body='Hello world!'
    echo_crlf 'HTTP/1.1 200 OK'
    echo_crlf 'Connection: close'
    echo_crlf "Content-Length: $(echo "$body" | wc -c)"
    echo_crlf 'Content-Type: text/plain'
    echo_crlf
    echo "$body"
}
mkfifo tube
respond < tube | netcat -l -p 8888 > tube
rm tube
When I start the script and send a request,
everything looks right on the client side:
$ ./sv
$ curl localhost:8888
Hello world!
$
but the script prints the following error:
$ ./sv
write(stdout): Broken pipe
$
I am running this script on Linux,
using GNU's implementation of netcat and coreutils.
I've tried running this script with both dash and bash; the same error occurs.
What is the cause of this error and how can I avoid it?
Edit: It seems that the error was caused by leaving out the read command in respond when I simplified my code for this question. That, or the lack of a Connection: close header when testing with a web browser, causes this error message.
You're writing to the FIFO "tube", but nothing is reading from it. You can try it like this:
{ respond; cat tube > /dev/null; } | netcat -l -p 8888 > tube
I don't get the point of using the FIFO here. The following would just work:
respond | netcat -l -p 8888 > /dev/null
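Following up on the edit above, a version of respond that first reads the request (so the FIFO has a reader when netcat writes to it) might look like this untested POSIX sh sketch:
respond() {
    # Drain the request line and headers from stdin (the FIFO).
    # Header lines end with CRLF; read strips the newline, so a
    # blank line shows up as a lone carriage return.
    cr=$(printf '\r')
    while read -r header && [ "$header" != "$cr" ] && [ -n "$header" ]; do
        :
    done
    body='Hello world!'
    echo_crlf 'HTTP/1.1 200 OK'
    echo_crlf 'Connection: close'
    echo_crlf "Content-Length: $(echo "$body" | wc -c)"
    echo_crlf 'Content-Type: text/plain'
    echo_crlf
    echo "$body"
}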

Bash script written in vim works differently than in the terminal! WHY?

The idea is a bash script that enumerates internal ports of a website (SSRF) using the ports in the "common_ports.txt" file and outputs the port and "Content-Length" for each one.
This is the curl request:
$ curl -Is "http://10.10.182.210:8000/attack?port=5000"
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 1035
Server: Werkzeug/0.14.1 Python/3.6.9
Date: Sat, 16 Oct 2021 13:02:27 GMT
To get the content length I have used grep:
$ curl -Is "http://10.10.182.210:8000/attack?port=5000" | grep "Content-Length"
Content-Length: 1035
Up to this point everything was OK, but when I wrote it as a script in vim to automate the process, I got weird output.
This is my full bash script:
#!/bin/bash
file="./common_ports.txt"
while IFS= read -r line
do
    response=$(curl -Is "http://10.10.182.210:8000/attack?port=$line")
    len=$(echo $response | grep "Content-Length:")
    echo "$len"
done < "$file"
And this is the output:
$ ./script.sh
Date: Sat, 16 Oct 2021 13:10:35 GMT9-8
Date: Sat, 16 Oct 2021 13:10:36 GMT9-8
^C
It outputs the last line of the response variable. Could anyone explain why?
Thanks in advance!
You need to wrap $response in double quotes:
len=$(echo "$response" | grep "Content-Length:")
Without the quotes, word splitting turns the newlines inside $response into spaces, so grep receives the entire response as one long line and prints all of it; the carriage returns embedded in the HTTP headers then make the terminal overwrite the beginning of that line, which is why only the last headers seem to appear.
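A quick demonstration of the quoting difference in bash:
$ response=$'a\nb\nc'
$ echo $response
a b c
$ echo "$response"
a
b
c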

How do I read the date created info of a file from URL

Is it possible for me to get the creation date of a file given a URL, without downloading the file?
Let's say it's http://server1.example.com/foo.bar. How do I get the date that foo.bar was created from a bash terminal?
There is no need to use curl; bash can handle TCP connections itself:
$ exec 5<>/dev/tcp/google.com/80
$ echo -e "GET / HTTP/1.0\n" >&5
$ head -5 <&5
HTTP/1.0 200 OK
Date: Tue, 04 Dec 2018 13:29:30 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
$ exec 5>&-
In line with @MichaWiedenmann, I see the only solution is to use a third-party binary like curl to help bash:
curl -Is http://server1.example.com/foo.bar
This prints the response headers, which can be manipulated with grep and sed to fit my purposes.
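Note that HTTP does not expose a file's creation time at all; the closest thing most servers send is the Last-Modified header. Assuming the server provides it, you could extract it like this:
curl -sI http://server1.example.com/foo.bar | grep -i '^last-modified:'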

Connecting to website without curl or bash

I'm trying to connect to a website like this "examplesite.com:9000/link" using a method like this:
echo -e "GET http://google.com HTTP/1.0\n\n" | nc google.com 80 > /dev/null 2>&1
I've seen people ping google with the above code.
I can use curl or wget to reach that site, but I don't want to use those methods because I'm using a microcontroller that doesn't support curl or wget.
Could someone explain how the above code works?
nc opens a connection to port 80 on google.com.
The echo statement sends a valid GET request using the HTTP/1.0 protocol.
> /dev/null 2>&1 redirects both stdout and stderr, so the command produces no output.
You can tell success by the exit code in $? (a value of 0 means success).
You could write this shorter:
echo -e "GET /\n\n" | nc google.com 80
And more portable (without echo -e):
printf "GET /\n\n" | nc google.com 80
Or more portable but still with echo:
{ echo GET /; echo; } | nc google.com 80
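Building on the exit-code point above, a simple reachability check might look like:
if printf 'GET / HTTP/1.0\r\n\r\n' | nc google.com 80 > /dev/null 2>&1; then
    echo "site reachable"
else
    echo "connection failed"
fi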

redirect wget header output

I'm getting a page with wget in a shell script, but the header information is going to stdout. How can I redirect it to a file?
#!/bin/sh
wget -q --server-response http://192.168.1.130/a.php > /tmp/ilan_detay.out
root#abc ./get.sh
HTTP/1.0 200 OK
X-Proxy: 130
Set-Cookie: language=en; path=/; domain=.abc.com
X-Generate-Time: 0,040604114532471
Content-type: text/html
Connection: close
Date: Mon, 17 Jan 2011 02:55:11 GMT
root#abc
The header info must be going to stderr, so you'll need to redirect that to a file. To do that, change the > to 2>.
To get only the server response in a file you can do:
wget -q --server-response http://www.stackoverflow.com >& response.txt
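If you want the headers in one file and the page itself in another, you can combine the stderr redirect with -O (using the URL from the question):
wget -q --server-response -O /tmp/ilan_detay.out http://192.168.1.130/a.php 2> /tmp/headers.txt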
