I'm getting a page with wget in a shell script, but the header information still ends up on the terminal instead of in the file. How can I redirect it to a file?
#!/bin/sh
wget -q --server-response http://192.168.1.130/a.php > /tmp/ilan_detay.out
root#abc ./get.sh
HTTP/1.0 200 OK
X-Proxy: 130
Set-Cookie: language=en; path=/; domain=.abc.com
X-Generate-Time: 0,040604114532471
Content-type: text/html
Connection: close
Date: Mon, 17 Jan 2011 02:55:11 GMT
root#abc
The header info must be going to stderr, so you'll need to redirect that to the file. To do that, change the > to 2>.
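Applied to the script in the question, a minimal sketch of that change (the page itself is still saved by wget as usual):
#!/bin/sh
# --server-response prints the header block on stderr, so 2> captures it in the file
wget -q --server-response http://192.168.1.130/a.php 2> /tmp/ilan_detay.out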
To get only the server response in a file you can do:
wget -q --server-response http://www.stackoverflow.com >& response.txt
You can read more about UNIX output redirection in your shell's manual.
The idea is a bash script that enumerates internal ports (SSRF) of a website using the ports in the "common_ports.txt" file and outputs each port and the "Content-Length" of its response.
This is the curl request:
$ curl -Is "http://10.10.182.210:8000/attack?port=5000"
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 1035
Server: Werkzeug/0.14.1 Python/3.6.9
Date: Sat, 16 Oct 2021 13:02:27 GMT
To get the content length I have used grep:
$ curl -Is "http://10.10.182.210:8000/attack?port=5000" | grep "Content-Length"
Content-Length: 1035
Up to this point everything was OK. But when I wrote it up in vim to automate the process, I got weird output.
This is my full bash script:
#!/bin/bash
file="./common_ports.txt"
while IFS= read -r line
do
response=$(curl -Is "http://10.10.182.210:8000/attack?port=$line")
len=$(echo $response | grep "Content-Length:")
echo "$len"
done < "$file"
And THIS IS THE OUTPUT:
$ ./script.sh
Date: Sat, 16 Oct 2021 13:10:35 GMT9-8
Date: Sat, 16 Oct 2021 13:10:36 GMT9-8
^C
It outputs only the last line of the response variable. Could anyone explain why?
Thanks in advance!
You need to wrap $response in double quotes:
len=$(echo "$response" | grep "Content-Length:")
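Without the quotes, word splitting flattens the multi-line response onto a single line, and the carriage returns that end each HTTP header line then make the terminal overprint it, which is why only the tail of the last header shows up. A sketch of the corrected loop (same lab target as in the question; it also prints the port, per the stated goal):
#!/bin/bash
file="./common_ports.txt"
while IFS= read -r port
do
    response=$(curl -Is "http://10.10.182.210:8000/attack?port=$port")
    # Quoting $response preserves its newlines, so grep sees one header per line
    len=$(echo "$response" | grep "Content-Length:")
    echo "$port $len"
done < "$file"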
I used curl -v xxx in a shell to get data, like:
HTTP/1.1 200 OK
Date: Thu, 21 Jan 2021 03:20:45 GMT
Content-Type: application/x-gzip
Content-Length: 0
Connection: keep-alive
cache-control: no-store
upload-time: 1610523530924
...
But when I used the command res=$(curl -v xxx) and then echoed $res, it was empty, and that information was printed to the terminal instead.
So how can I get the field upload-time: 1610523530924?
Oh, I figured it out. The header information goes to stderr, so if you want to capture it, you have to redirect it to stdout. Use curl -v xxx > tmp.txt 2>&1.
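To grab just that field without a temporary file, something like the following sketch should work (xxx again stands in for the real URL):
# -v writes the headers to stderr; 2>&1 >/dev/null keeps them and discards the body
res=$(curl -sv "xxx" 2>&1 >/dev/null)
# Pull out the upload-time value; tr strips the trailing carriage return
upload_time=$(printf '%s\n' "$res" | grep -i 'upload-time:' | awk '{print $NF}' | tr -d '\r')
echo "$upload_time"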
Is it possible for me to get the creation date of a file, given a URL, without downloading the file?
Let's say it's http://server1.example.com/foo.bar. How do I get the date that foo.bar was created from a bash terminal?
There is no need to use curl; Bash can handle TCP connections itself:
$ exec 5<>/dev/tcp/google.com/80
$ echo -e "GET / HTTP/1.0\n" >&5
$ head -5 <&5
HTTP/1.0 200 OK
Date: Tue, 04 Dec 2018 13:29:30 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
$ exec 5>&-
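The same trick works with a HEAD request if you only want the headers for a specific file; a sketch using the host from the question:
exec 5<>/dev/tcp/server1.example.com/80
# An HTTP/1.0 HEAD request returns only the header block, with no body
printf 'HEAD /foo.bar HTTP/1.0\r\nHost: server1.example.com\r\n\r\n' >&5
cat <&5
exec 5>&-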
In line with @MichaWiedenmann, I see the only solution is to use a third-party binary like curl to help bash here:
curl -Is http://server1.example.com/foo.bar
This prints header information which can then be manipulated with grep and sed to fit my purposes.
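For example, a sketch that pulls out the Last-Modified header, which is the closest thing HTTP offers to a creation date (assuming the server sends it):
# -sI makes a HEAD request; grep/sed/tr reduce the output to the bare date value
curl -sI "http://server1.example.com/foo.bar" | grep -i '^Last-Modified:' | sed 's/^[^:]*: *//' | tr -d '\r'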
I am required to create a shell script on a Mac which will monitor, and if a specified URL (for example, *.google.com) is hit from any browser or program, the shell script will prompt or perform an operation. Could anyone guide me on how to do this?
These environment variables set the proxy for my programs, like curl, wget, and the browser:
$ env | grep -i proxy
NO_PROXY=localhost,127.0.0.0/8,::1
http_proxy=http://138.106.75.10:3128/
https_proxy=https://138.106.75.10:3128/
no_proxy=localhost,127.0.0.0/8,::1
Below you can see that curl respects it and always connects through my proxy; in your case the proxy setting would be something like http://localhost:3128.
$ curl -vvv www.google.com
* Rebuilt URL to: www.google.com/
* Trying 138.106.75.10...
* Connected to 138.106.75.10 (138.106.75.10) port 3128 (#0)
> GET http://www.google.com/ HTTP/1.1
> Host: www.google.com
> User-Agent: curl/7.47.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 302 Found
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Referrer-Policy: no-referrer
< Location: http://www.google.se/?gfe_rd=cr&ei=3ExvWajSGa2EyAXS376oCw
< Content-Length: 258
< Date: Wed, 19 Jul 2017 12:13:16 GMT
< Proxy-Connection: Keep-Alive
< Connection: Keep-Alive
< Age: 0
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
here.
</BODY></HTML>
* Connection #0 to host 138.106.75.10 left intact
Install Apache on your machine and configure it as a forward proxy, like the example below; the trick is to combine mod_actions and mod_proxy:
Listen 127.0.0.1:3128
<VirtualHost 127.0.0.1:3128>
Script GET "/cgi-bin/your-script.sh"
ProxyRequests On
ProxyVia On
<Proxy http://www.google.com:80>
ProxySet keepalive=On
Require all granted
</Proxy>
</VirtualHost>
I never tried it, but theoretically it should work.
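For reference, a hypothetical /cgi-bin/your-script.sh might look roughly like this; it just logs the requested URL so something else can react to it (the log path and the reliance on REQUEST_URI are illustrative assumptions):
#!/bin/bash
# Hypothetical CGI handler triggered by the Script directive above.
# A CGI response must start with a header block followed by a blank line.
echo "Content-Type: text/plain"
echo ""
# Apache sets REQUEST_URI; for a proxied request it holds the requested URL
echo "$(date '+%F %T') $REQUEST_URI" >> /tmp/visited_urls.log
echo "logged"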
If you want to monitor or capture network traffic, tcpdump is your friend: it requires no proxy servers or additional installs, and should work on stock Mac OS as well as other *nix variants.
Here's a simple script:
sudo tcpdump -ql dst host google.com | while read line; do echo "Match found"; done
The while read loop will keep running until manually terminated; replace echo "Match found" with your preferred command. Note that this will trigger multiple times per page load; you can use tcpdump -c 1 if you only want it to run until it sees relevant traffic.
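For instance, a one-shot sketch that waits for the first matching packet and then runs the action exactly once:
# -c 1 makes tcpdump exit after one matching packet; the action runs afterwards
sudo tcpdump -ql -c 1 dst host google.com > /dev/null && echo "Match found"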
As Azize mentions, you could also have tcpdump write to a file in one process and monitor that file in another. incrontab is not available on Mac OS X, but you can wrap tail -f in a while read loop:
sudo tcpdump -l dst host google.com > /tmp/output &
tail -fn 1 /tmp/output | while read line; do echo "Match found"; done
There's a good similar script available on github. You can also read up on tcpdump filters if you want to make the filter more sophisticated.
First we have to put the URLs that we want to monitor into the sample.txt file.
FILEPATH=/tmp/url
INFILE=$FILEPATH/sample.txt
OUTFILE=$FILEPATH/url_status.txt

# Truncate the status file before each run
> "$OUTFILE"

# Record "url | first line of the response" for every URL, with a 20-second timeout
for url in $(cat "$INFILE")
do
    echo -n "$url |" >> "$OUTFILE"
    timeout 20s curl -Is "$url" | head -1 >> "$OUTFILE"
done

# Split the results into working / not-working lists based on the HTTP status line
grep '200' "$OUTFILE" | awk '{print $1" Url is working fine"}' > /tmp/url/working.txt
grep -v '200' "$OUTFILE" | awk '{print $1" Url is not working"}' > /tmp/url/notworking.txt

COUNT=$(wc -l < /tmp/url/notworking.txt)
if [ "$COUNT" -eq 0 ]
then
    echo "All URLs are working fine"
else
    echo "Issue in the following URLs"
    cat /tmp/url/notworking.txt
fi
When I send a curl command in my bash shell script, I get output as follows:
< Date: Fri, 14 Jul 2017 10:21:25 GMT
< Set-Cookie: vmware-api-session-id=7ed7b5e95530fd95c1a6d71cf91f7140;Path=/rest;Secure;HttpOnly
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Content-Type: application/json
How can I access vmware-api-session-id going forward? Should I store it in a variable while executing curl?
You can execute the following command:
sessionid=`[YOUR CURL COMMAND] 2>&1 | grep -oE 'vmware-api-session-id=[0-9a-z]+' | grep -oE '[0-9a-z]+$'`
2>&1 sends stderr to stdout. You need this because curl -v writes the session information to stderr.
Using your example values, this would translate into
sessionid=7ed7b5e95530fd95c1a6d71cf91f7140
Now you can access the cookie value by addressing the variable ${sessionid}.
If you want to export the variable, you can use:
sessionid=`[YOUR CURL COMMAND] 2>&1 | grep -oE 'vmware-api-session-id=[0-9a-z]+' | grep -oE '[0-9a-z]+$'`
export sessionid
or, shorter:
export sessionid=`[YOUR CURL COMMAND] 2>&1 | grep -oE 'vmware-api-session-id=[0-9a-z]+' | grep -oE '[0-9a-z]+$'`
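A hedged usage sketch: the captured value then goes back to the API in the vmware-api-session-id header of later requests (the host and endpoint below are illustrative):
# Reuse the session id for an authenticated call; adjust host and endpoint to your vCenter
curl -k -H "vmware-api-session-id: ${sessionid}" "https://vcenter.example.com/rest/vcenter/vm"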