When I send a curl command from my bash shell script, I get output like the following:
< Date: Fri, 14 Jul 2017 10:21:25 GMT
< Set-Cookie: vmware-api-session-id=7ed7b5e95530fd95c1a6d71cf91f7140;Path=/rest;Secure;HttpOnly
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Content-Type: application/json
How can I access vmware-api-session-id later on? Should I store it in a variable while executing curl?
You can execute the following command:
sessionid=$([YOUR CURL COMMAND] 2>&1 | grep -oE 'vmware-api-session-id=[0-9a-f]+' | cut -d= -f2)
2>&1 sends stderr to stdout. You need this because curl writes the session information to stderr. grep -oE prints only the part of each line matching the extended regex (plain grep would treat the unquoted + literally), and cut keeps just the value after the = sign.
Using your example values, this would translate into
sessionid=7ed7b5e95530fd95c1a6d71cf91f7140
Now you can access the cookie value by referencing the variable ${sessionid}.
If you want to export the variable, you can use:
sessionid=$([YOUR CURL COMMAND] 2>&1 | grep -oE 'vmware-api-session-id=[0-9a-f]+' | cut -d= -f2)
export sessionid
or shorter
export sessionid=$([YOUR CURL COMMAND] 2>&1 | grep -oE 'vmware-api-session-id=[0-9a-f]+' | cut -d= -f2)
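For completeness, a minimal sketch of just the extraction step, using the Set-Cookie line from the question as sample input (in a real script the header variable would come from [YOUR CURL COMMAND] 2>&1):

```shell
# Sample verbose-output line from the question; the session id here is the
# example value, not a real session.
header='< Set-Cookie: vmware-api-session-id=7ed7b5e95530fd95c1a6d71cf91f7140;Path=/rest;Secure;HttpOnly'

# Match the whole name=value pair, then keep only the value after "=".
sessionid=$(printf '%s\n' "$header" | grep -oE 'vmware-api-session-id=[0-9a-f]+' | cut -d= -f2)
echo "$sessionid"   # 7ed7b5e95530fd95c1a6d71cf91f7140
```

The [0-9a-f] class assumes the id is a hex string, as in the example; widen it if your deployment produces other characters.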
Related
The idea is a bash script that enumerates internal ports (SSRF) of a website using the ports in the "common_ports.txt" file and outputs the port and Content-Length of each response.
This is the curl request:
$ curl -Is "http://10.10.182.210:8000/attack?port=5000"
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 1035
Server: Werkzeug/0.14.1 Python/3.6.9
Date: Sat, 16 Oct 2021 13:02:27 GMT
To get the content length I have used grep:
$ curl -Is "http://10.10.182.210:8000/attack?port=5000" | grep "Content-Length"
Content-Length: 1035
Up to this point everything was OK. But when I wrote it into a script in vim to automate the process, I got weird output.
This is my full bash script:
#!/bin/bash
file="./common_ports.txt"
while IFS= read -r line
do
response=$(curl -Is "http://10.10.182.210:8000/attack?port=$line")
len=$(echo $response | grep "Content-Length:")
echo "$len"
done < "$file"
And this is the output:
$ ./script.sh
Date: Sat, 16 Oct 2021 13:10:35 GMT9-8
Date: Sat, 16 Oct 2021 13:10:36 GMT9-8
^C
It outputs the last line of the response variable. Could anyone explain why?
Thanks in advance!
You need to wrap $response inside double quotes:
len=$(echo "$response" | grep "Content-Length:")
Without the quotes, word splitting collapses the whole response into a single line, so grep matches everything at once; the carriage returns that end each HTTP header line then make the terminal overwrite what it has already printed, which is why only the tail of the output is visible.
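A small sketch of the difference, with a made-up three-line response standing in for the curl -Is output:

```shell
# Fake response mimicking the headers in the question.
response='HTTP/1.0 200 OK
Content-Length: 1035
Date: Sat, 16 Oct 2021 13:02:27 GMT'

# Unquoted: word splitting joins all lines, grep matches the whole blob.
echo $response | grep "Content-Length:"

# Quoted: the newlines survive, so grep can isolate one header line.
len=$(echo "$response" | grep "Content-Length:")
echo "$len"   # Content-Length: 1035
```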
I used curl -v xxx in a shell to get data, like:
HTTP/1.1 200 OK
Date: Thu, 21 Jan 2021 03:20:45 GMT
Content-Type: application/x-gzip
Content-Length: 0
Connection: keep-alive
cache-control: no-store
upload-time: 1610523530924
...
But when I used the command res=$(curl -v xxx) and then echo "$res", it was empty, and the information was printed to the terminal instead.
So how can I get the field upload-time: 1610523530924?
Oh, I see: the header information is on stderr, so to capture it you have to redirect stderr to stdout. Use curl -v xxx > tmp.txt 2>&1.
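A sketch of pulling the field out without a temp file, capturing both streams into the variable first. The sample text here simulates the curl -v output from the question (xxx stands for the real URL; real curl -v output may also prefix response headers with "< ", which the sed pattern tolerates):

```shell
# In a real script: res=$(curl -sv xxx 2>&1)   # 2>&1 folds stderr into the capture
res='HTTP/1.1 200 OK
Content-Length: 0
upload-time: 1610523530924'

# Strip CRs, then keep only what follows "upload-time: " on its line.
uploadtime=$(printf '%s\n' "$res" | tr -d '\r' | sed -n 's/.*upload-time: //p')
echo "$uploadtime"   # 1610523530924
```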
Is it possible for me to get the creation date of a file given a URL, without downloading the file?
Let's say it's http://server1.example.com/foo.bar. How do I get the date that foo.bar was created, from a bash terminal?
There is no need to use curl; bash can handle TCP connections itself:
$ exec 5<>/dev/tcp/google.com/80
$ echo -e "GET / HTTP/1.0\n" >&5
$ head -5 <&5
HTTP/1.0 200 OK
Date: Tue, 04 Dec 2018 13:29:30 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
$ exec 5>&-
In line with @MichaWiedenmann's answer, I see the only solution is to use a third-party binary like curl:
curl -Is http://server1.example.com/foo.bar
This prints header information that can be manipulated with grep and sed to fit my purposes.
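For example, a sketch of that post-processing. The headers variable simulates curl -Is output for the question's URL; note that HTTP exposes Last-Modified, which is usually the closest thing to a creation date you can get without downloading the file:

```shell
# Simulated output of: curl -Is http://server1.example.com/foo.bar
headers='HTTP/1.1 200 OK
Last-Modified: Sat, 11 Feb 2012 15:35:46 GMT
Content-Type: application/octet-stream'

# Keep everything after "Last-Modified: ", minus any trailing CR.
modified=$(printf '%s\n' "$headers" | tr -d '\r' | sed -n 's/^Last-Modified: //p')
echo "$modified"   # Sat, 11 Feb 2012 15:35:46 GMT
```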
I want to run tcpdump with some parameters (still don't know what to use), then load the stackoverflow.com page.
Output should be the HTTP communication. Later, I want to use it as a shell script, so whenever I want to check the HTTP communication of a site site.com, I can just run script.sh site.com.
The HTTP communication should be simple enough. Like this:
GET /questions/9241391/how-to-capture-all-the-http-communication-data-using-tcp-dump
Host: stackoverflow.com
...
...
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Length: 35061
Content-Type: text/html; charset=utf-8
Expires: Sat, 11 Feb 2012 15:36:46 GMT
Last-Modified: Sat, 11 Feb 2012 15:35:46 GMT
Vary: *
Date: Sat, 11 Feb 2012 15:35:45 GMT
....
decoded deflated data
....
Now, which options should I use with tcpdump to capture it?
It can be done with ngrep:
ngrep -q -d eth1 -W byline host stackoverflow.com and port 80
^ ^ ^ ^
| | | |
| | | |
| | | v
| | | filter expression
| | |
| | +--> -W is set the dump format ("normal", "byline", "single", "none")
| |
| +----------> -d is use specified device instead of the pcap default
|
+-------------> -q is be quiet ("don't print packet reception hash marks")
Based on what you have mentioned, ngrep (on Unix) and Fiddler (Windows) might be better/easier solutions.
If you absolutely want to use tcpdump, try out the following options
tcpdump -A -vvv host destination_hostname
-A (ascii)
-vvv (verbose output)
tcpdump -i eth0 -w dump3.pcap -v 'tcp and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'
see http://www.tcpdump.org/manpages/tcpdump.1.html
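A hypothetical wrapper along these lines for the script.sh site.com workflow. The interface name eth0 and plain HTTP on port 80 are assumptions, and the capture itself needs root, so this sketch only prints the command it would run:

```shell
#!/bin/bash
# Usage: script.sh site.com  (defaults to site.com so the sketch runs as-is)
site="${1:-site.com}"

# Build the tcpdump invocation: -A prints payloads as ASCII, -s 0 captures
# full packets instead of truncating them.
cmd=(tcpdump -i eth0 -A -s 0 "tcp port 80 and host ${site}")
echo "Would run: ${cmd[*]}"

# Uncomment to actually capture (requires root):
# sudo "${cmd[@]}"
```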
I'm getting a page with wget in a shell script, but the header information is going to stdout; how can I redirect it to a file?
#!/bin/sh
wget -q --server-response http://192.168.1.130/a.php > /tmp/ilan_detay.out
root#abc ./get.sh
HTTP/1.0 200 OK
X-Proxy: 130
Set-Cookie: language=en; path=/; domain=.abc.com
X-Generate-Time: 0,040604114532471
Content-type: text/html
Connection: close
Date: Mon, 17 Jan 2011 02:55:11 GMT
root#abc
The header info must be going to stderr, so you'll need to redirect that to a file. To do that, change the > to 2>.
To get only the server response in a file you can do:
wget -q --server-response http://www.stackoverflow.com >& response.txt
You can read more about UNIX output redirection in the bash manual.
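To see what the two redirections do side by side, here is a small bash sketch where a fake function stands in for wget; it writes a body to stdout and a header line to stderr, just as wget --server-response does:

```shell
# fake_wget mimics wget: page body on stdout, headers on stderr.
fake_wget() { echo 'page body'; echo 'HTTP/1.0 200 OK' >&2; }

tmp=$(mktemp -d)
fake_wget > "$tmp/body.txt" 2> "$tmp/headers.txt"   # split the two streams
fake_wget >& "$tmp/both.txt"                        # bash: both into one file

cat "$tmp/headers.txt"   # HTTP/1.0 200 OK
```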