Send a burst of multi-threaded requests using bash and curl - bash

I have a use case where I want to send a burst of requests. My current script sends the requests sequentially, but I want to send a burst of requests (all of them at once) and wait for the responses to come back (e.g. if I enter 10 as the request count, all 10 should be sent together). I am not sure how I can achieve that using bash and curl. Any ideas would be appreciated.
#!/bin/bash
CURL="/usr/bin/curl"
echo -n "how many times you want to run the request: "
read erc
ERC="$erc"
#count=1;
total_connect=0
total_start=0
total_time=0
echo " Time_Connect Time_startTransfer Time_total HTTP_Code ";
#while [ $count -le $ERC ]
for ((i=1;i<=$ERC;i+=1)); do
result=`$CURL -k -o /dev/null -s -w %{time_connect}:%{time_starttransfer}:%{time_total}:%{http_code} -H "Accept: application/xml" -H "Accept: text/xml" -H "Accept: application/json" -H "Accept: application/cbor" "http://google.com"`
echo $result;
var=$(echo $result | awk -F":" '{print $1, $2, $3, $4}')
set -- $var
total_connect=`echo "scale=6; $total_connect + $1" | bc`;
total_start=`echo "scale=6; $total_start + $2" | bc`;
total_time=`echo "scale=6; $total_time + $3" | bc`;
#count=$((count+1))
done
echo "URL executed is http://google.com"
echo "HTTP CODE of all request is $4"
echo "average time connect: `echo "scale=6; $total_connect/$ERC" | bc`";
echo "average time start: `echo "scale=6; $total_start/$ERC" | bc`";
echo "average Totaltime taken: `echo "scale=6; $total_time/$ERC" | bc`";

Here's the code with the & introduced to issue all of the curls "at the same time".
#!/bin/bash
CURL="/usr/bin/curl"
echo -n "how many times you want to run the request: "
read erc
ERC="$erc"
#count=1;
total_connect=0
total_start=0
total_time=0
echo " Time_Connect Time_startTransfer Time_total HTTP_Code ";
#while [ $count -le $ERC ]
for ((i=1;i<=$ERC;i+=1)); do
{
result=`$CURL -k -o /dev/null -s -w %{time_connect}:%{time_starttransfer}:%{time_total}:%{http_code} -H "Accept: application/xml" -H "Accept: text/xml" -H "Accept: application/json" -H "Accept: application/cbor" "http://google.com"`
echo $result;
var=$(echo $result | awk -F":" '{print $1, $2, $3, $4}')
set -- $var
total_connect=`echo "scale=6; $total_connect + $1" | bc`;
total_start=`echo "scale=6; $total_start + $2" | bc`;
total_time=`echo "scale=6; $total_time + $3" | bc`;
#count=$((count+1))
} > $i.result &
done
wait
# TODO: You'll need to write code here to process the intermediate results in *.result
# It would have been helpful to have created a temporary directory for your individual results as well
echo "URL executed is http://google.com"
echo "HTTP CODE of all request is $4"
echo "average time connect: `echo "scale=6; $total_connect/$ERC" | bc`";
echo "average time start: `echo "scale=6; $total_start/$ERC" | bc`";
echo "average Totaltime taken: `echo "scale=6; $total_time/$ERC" | bc`";
The wait I introduced tells bash to wait for all of the child processes (curls and such) to complete before continuing.
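One possible way to fill in that TODO: right after the wait (and before the summary echoes), read the per-request files back in the parent shell and redo the accumulation there. This is only a sketch, assuming each N.result holds the single colon-separated line echoed inside the block:
total_connect=0; total_start=0; total_time=0; last_code=""
for f in *.result; do
    # each file holds one line: time_connect:time_starttransfer:time_total:http_code
    # (assumes no stale *.result files are left over from earlier runs)
    IFS=: read -r c s t code < "$f"
    total_connect=$(echo "scale=6; $total_connect + $c" | bc)
    total_start=$(echo "scale=6; $total_start + $s" | bc)
    total_time=$(echo "scale=6; $total_time + $t" | bc)
    last_code=$code
done
The $4 in the "HTTP CODE" line would then need to come from this loop as well (e.g. $last_code), since the set -- inside the background blocks never reaches the parent shell.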

Related

Timer in a loop - Bash

I have a token that expires in 30 minutes, and I want to renew it before the 30-minute window expires. I have the following script calling an API:
function token {
gen_bearer_token=$(curl -X POST "https://api.foo.bar/oauth2/token" -H "accept: application/json" -H "Content-Type: application/x-www-form-urlencoded" -d "client_id=foo&client_secret=bar" | cut -d '{' -f2 | cut -d '}' -f1)
bearer_token=$(echo $gen_bearer_token | awk '{print $2}' | cut -d '"' -f2 | cut -d '"' -f1)
token_type=$(echo $gen_bearer_token | awk '{print $6}' | cut -d '"' -f2 | cut -d '"' -f1)
}
echo -e "HOSTNAME;LAST_SEEN" > ${file}
ids=$(curl -s -X GET 'https://api.foo.bar/devices/queries/devices-scroll/v1?limit=5000' -H 'accept: application/json' -H 'authorization: '${token_type}' '${bearer_token}'' | jq .resources[])
for id in ${ids}
do
result=$(curl -s -X GET 'https://api.foo.bar/devices/entities/devices/v1?ids='${id}'' -H 'accept: application/json' -H 'authorization: '${token_type}' '${bearer_token}'' | jq '.resources[] | "\(.hostname) \(.last_seen)"')
if [[ ! -z ${result} ]]
then
hostname=$(echo ${result} | awk '{print $1}' | cut -d '"' -f2)
last_seen=$(echo ${result} | awk '{print $2}' | cut -d '"' -f1)
echo -e "${hostname};${last_seen}" >> ${file}
fi
done
This loop takes more than 2 hours (the duration varies), so I want to add a timer that renews the token once 30 minutes have passed; after 30 minutes the API requests fail.
bash has a suitable timer built in: the SECONDS variable.
Its value is the value most recently assigned to it plus the number of seconds since that assignment. At shell start-up, it is initialized to 0.
All you need to do is check the value of $SECONDS before using the token, and if its value is greater than 1800 (the number of seconds in 30 minutes), get a new one.
SECONDS=1801 # Make sure a token will be generated before the first call
for id in $ids; do
if [ "$SECONDS" -gt 1800 ]; then
token
SECONDS=0
fi
...
done
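Put together with the token function and the loop from the question, the whole thing might look roughly like this (a sketch; api.foo.bar, ${file} and the jq fields are the hypothetical names from the question):
SECONDS=1801   # force a token refresh before the first request
for id in ${ids}; do
    if [ "$SECONDS" -gt 1800 ]; then
        token      # refreshes bearer_token and token_type
        SECONDS=0  # restart the 30-minute clock
    fi
    result=$(curl -s -X GET 'https://api.foo.bar/devices/entities/devices/v1?ids='${id}'' -H 'accept: application/json' -H 'authorization: '${token_type}' '${bearer_token}'' | jq '.resources[] | "\(.hostname) \(.last_seen)"')
    if [[ ! -z ${result} ]]; then
        hostname=$(echo ${result} | awk '{print $1}' | cut -d '"' -f2)
        last_seen=$(echo ${result} | awk '{print $2}' | cut -d '"' -f1)
        echo -e "${hostname};${last_seen}" >> ${file}
    fi
done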
I solved this problem by checking the response status code.
response=$(curl -H 'authorization: '${token_type}' '${bearer_token}'' --write-out '%{http_code}' --silent --output /dev/null https://api.foo.bar/devices/entities/devices/v1?ids='0000000000000')
if [ ${response} -eq 200 ]; then
do something
else
token
fi

How to retrieve the real redirect Location header with curl, without using %{redirect_url}?

I realized that curl's %{redirect_url} does not always show the same redirect URL. For example, if the response header is Location: https:/\example.com, this will redirect to https:/\example.com, but curl's %{redirect_url} shows redirect_url: https://host-domain.com/https:/\example.com and won't display the response's real Location header. (I'd like to see the real Location: value.)
This is the BASH I'm working with:
#!/bin/bash
# Usage: urls-checker.sh domains.txt
FILE="$1"
while read -r LINE; do
# read the response to a variable
response=$(curl -H 'Cache-Control: no-cache' -s -k --max-time 2 --write-out '%{http_code} %{size_header} %{redirect_url} ' "$LINE")
# get the title
title=$(sed -n 's/.*<title>\(.*\)<\/title>.*/\1/ip;T;q'<<<"$response")
# read the write-out from the last line
read -r http_code size_header redirect_url < <(tail -n 1 <<<"$response")
printf "***Url: %s\n\n" "$LINE"
printf "Status: %s\n\n" "$http_code"
printf "Size: %s\n\n" "$size_header"
printf "Redirect-url: %s\n\n" "$redirect_url"
printf "Title: %s\n\n" "$title"
# -c 20 only shows the 20 first chars from response
printf "Body: %s\n\n" "$(head -c 100 <<<"$response")"
done < "${FILE}"
How can I printf "Redirect-url:" followed by the originally returned Location: header, without having to use %{redirect_url}?
To read the exact Location header field value, as returned by the server, you can use the -i/--include option, in combination with grep.
For example:
$ curl 'http://httpbin.org/redirect-to?url=http:/\example.com' -si | grep -oP 'Location: \K.*'
http:/\example.com
Or, if you want to read all headers, content and the --write-out variables line (according to your script):
response=$(curl -H 'Cache-Control: no-cache' -s -i -k --max-time 2 --write-out '%{http_code} %{size_header} %{redirect_url} ' "$url")
# break the response in parts
headers=$(sed -n '1,/^\r$/p' <<<"$response")
content=$(sed -e '1,/^\r$/d' -e '$d' <<<"$response")
read -r http_code size_header redirect_url < <(tail -n1 <<<"$response")
# get the real Location
location=$(grep -oP 'Location: \K.*' <<<"$headers")
Fully integrated in your script, this looks like:
#!/bin/bash
# Usage: urls-checker.sh domains.txt
file="$1"
while read -r url; do
# read the response to a variable
response=$(curl -H 'Cache-Control: no-cache' -s -i -k --max-time 2 --write-out '%{http_code} %{size_header} %{redirect_url} ' "$url")
# break the response in parts
headers=$(sed -n '1,/^\r$/p' <<<"$response")
content=$(sed -e '1,/^\r$/d' -e '$d' <<<"$response")
read -r http_code size_header redirect_url < <(tail -n1 <<<"$response")
# get the real Location
location=$(grep -oP 'Location: \K.*' <<<"$headers")
# get the title
title=$(sed -n 's/.*<title>\(.*\)<\/title>.*/\1/ip;T;q'<<<"$content")
printf "***Url: %s\n\n" "$url"
printf "Status: %s\n\n" "$http_code"
printf "Size: %s\n\n" "$size_header"
printf "Redirect-url: %s\n\n" "$location"
printf "Title: %s\n\n" "$title"
printf "Body: %s\n\n" "$(head -c 100 <<<"$content")"
done < "$file"
Following #randomir's answer, and since I only needed the raw redirect URL, I use this command in my batch:
curl -w "%{redirect_url}" -o /dev/null -s "https://stackoverflow.com/q/46507336/3019002"
https:/\example.com is not a legal URL(*). The fact that this works in browsers is an abomination (one that I've fought against), and curl doesn't accept it. %{redirect_url} shows exactly the URL curl would redirect to...
A URL should use two forward slashes, so the above should look like http://example.com.
(*) = I refuse to accept the WHATWG "definition".

Check return code in bash while capturing text

When running an ldapsearch we get a return code indicating success or failure. This way we can use an if statement to check success.
On failure, when using debug, it prints whether the cert validation failed. How can I capture the output of the command while checking the success or failure of ldapsearch?
ldapIP=`nslookup corpadssl.glb.intel.com | awk '/^Address: / { print $2 }' | cut -d' ' -f2`
server=`nslookup $ldapIP | awk -F"= " '/name/{print $2}'`
ldap='ldapsearch -x -d8 -H "ldaps://$ldapIP" -b "dc=corp,dc=xxxxx,dc=com" -D "name#am.corp.com" -w "366676" (mailNickname=sdent)"'
while true; do
if [[ $ldap ]] <-- capture text output here ??
then
:
else
echo $server $ldapIP `date` >> fail.txt
fi
sleep 5
done
As #codeforester suggested, you can use $? to check the return code of the last command.
ldapIP=`nslookup corpadssl.glb.intel.com | awk '/^Address: / { print $2 }' | cut -d' ' -f2`
server=`nslookup $ldapIP | awk -F"= " '/name/{print $2}'`
while true; do
captured=$(ldapsearch -x -d8 -H "ldaps://$ldapIP" -b "dc=corp,dc=xxxxx,dc=com" -D "name#am.corp.com" -w "366676" "(mailNickname=sdent)")
if [ $? -eq 0 ]
then
echo "${captured}"
else
echo "$server $ldapIP `date`" >> fail.txt
fi
sleep 5
done
EDIT: at #rici's suggestion (and because I forgot to do it)... the ldapsearch needs to be run before the if.
EDIT2: at #Charles Duffy's suggestion (we will get there), we don't need to store the command in a variable.
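A common refinement along those lines, sketched here with the same hypothetical ldapsearch arguments as above, is to run the command directly in the if, so its exit status is tested without referencing $? at all:
while true; do
    if captured=$(ldapsearch -x -d8 -H "ldaps://$ldapIP" -b "dc=corp,dc=xxxxx,dc=com" -D "name#am.corp.com" -w "366676" "(mailNickname=sdent)")
    then
        echo "${captured}"
    else
        echo "$server $ldapIP $(date)" >> fail.txt
    fi
    sleep 5
done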

if statement function: too many arguments

I've looked over my program for any mistakes and can't find any. Usually when I run into a mistake with BASH, the interpreter is off on where the mistake actually is. I'm trying to customize this script from SANS InfoSec's Using Linux Scripts to Monitor Security. Everything is fine until the part where the check function looks at the different protocols. When I uncomment them I get the error: ./report: line 41: [: too many arguments. Here is the program...
#!/bin/bash
if [ "$(id -u)" != "0" ]; then
echo "Must be root to run this script!"
exit 1
fi
##### CONSTANTS -
report=/home/chron/Desktop/report.log
#router=/home/chron/Desktop/router.log
red=`tput bold;tput setaf 1`
yellow=`tput bold;tput setaf 3`
green=`tput bold;tput setaf 2`
blue=`tput bold;tput setaf 4`
magenta=`tput bold;tput setaf 5`
cyan=`tput bold;tput setaf 6`
white=`tput sgr0`
##### FUNCTIONS -
pingtest() {
ping=`ping -c 3 localhost | tail -2`
loss=`echo $ping | cut -d"," -f3 | cut -d" " -f2`
delay=`echo $ping | cut -d"=" -f2 | cut -d"." -f1`
if [ "$loss" = "100%" ]; then
echo -n $red$1$white is not responding at all | mail -s'REPORT' localhost
echo 'You have mail in /var/mail!'
echo `date` $1 is not responding at all >> $report
elif [ "$loss" != "0%" ]; then
echo $yellow$1$white is responding with some packet loss
else
if [ "$delay" -lt 100 ]; then
echo $green$1$white is responding normally
else
echo $yellow$1$white is responding slow
fi
fi
}
check() {
if [ "$2" != "" -a "$2" $3 ] ; then
echo -n $green$1$white' '
else
echo -n $red$1$white' '
echo `date` $1 was not $3 >> $report
fi
}
##### __MAIN__ -
pingtest localhost # hostname or ip
echo "Server Configuration:"
check hostname `hostname -s` '= localhost'
check domain `hostname -d` '= domain.com'
check ipaddress `hostname -I | cut -d" " -f1` '= 10.10.0.6'
check gateway `netstat -nr | grep ^0.0.0.0 | cut -c17-27` '= 10.10.0.1'
echo
echo "Integrity of Files:"
check hostsfile `md5sum /etc/hosts | grep 7c5c6678160fc706533dc46b95f06675 | wc -l` '= 1'
check passwd `md5sum /etc/passwd | grep adf5a9f5a9a70759aef4332cf2382944 | wc -l` '= 1'
#/etc/inetd.conf is missing...
echo
#echo "Integrity of Website:"
#check www/index.html `lynx -reload -dump http://<LOCALIP> 2>&1 | md5sum | cut -d" " -f1 '=<MD5SUM>'
#echo
echo "Incoming attempts:"
#lynx -auth user:password -dump http://10.10.0.1 >> $router 2>&1
check telnet `grep \ 23$ $PWD/router.log | wc -l` '= 0'
check ftp `grep \ 21$ $PWD/router.log | wc -l` '= 0'
check ssh `grep \ 22$ $PWD/router.log | wc -l` '=0'
check smtp `grep \ 25$ $PWD/router.log | wc -l` '=0'
check dns `grep \ 53$ $PWD/router.log | wc -l` '=0'
echo
Some of the lines are commented out for later tweaking. Right now my problem is with the protocols. Not sure what's wrong, because it looks to me like there are 3 arguments for the function.
In your last three calls to check, you are missing the required space between the operator and the operand.
check ssh `grep \ 22$ $PWD/router.log | wc -l` '=0'
check smtp `grep \ 25$ $PWD/router.log | wc -l` '=0'
check dns `grep \ 53$ $PWD/router.log | wc -l` '=0'
The final argument to all of these should be '= 0'.
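To see why the one-word form breaks, it helps to count the words test receives inside the original check function. A sketch of the expansion, assuming the grep | wc -l count ($2) is 0:
# '=0' stays one word, so test gets six arguments it cannot parse:
#   [ "0" != "" -a "0" =0 ]    ->  [: too many arguments
# '= 0' splits into two words, giving test a valid compound expression:
#   [ "0" != "" -a "0" = 0 ]   ->  true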
However, this is not a good way to structure your code. If you really need to parameterize the comparison fully (all your calls use = as the operation), pass the operator as a separate argument. Further, written correctly, there is no need to pre-check that $2 is a non-empty string.
check() {
if [ "$2" "$3" "$4" ] ; then
printf '%s%s%s ' "$green" "$1" "$white"
else
printf '%s%s%s ' "$red" "$1" "$white"
printf '%s %s was not %s\n' "$(date)" "$1" "$3" >> "$report"
fi
}
Then your calls to check should look like
check hostname "$(hostname -s)" = localhost
check domain "$(hostname -d)" = domain.com
check ipaddress "$(hostname -I | cut -d" " -f1)" = 10.10.0.6
check gateway "$(netstat -nr | grep ^0.0.0.0 | cut -c17-27)" = 10.10.0.1
etc
Run your code through http://shellcheck.net; there are a lot of things you can correct.
Here is my other problem. I changed it up a bit just to see what's going on.
router=/home/chron/Desktop/router.log
check() {
if [ "$2" "$3" "$4" ]; then
printf "%s%s%s" "$green" "$1" "$white"
else
printf "%s%s%s" "$red" "$1" "$white"
printf "%s %s was not %s\n" "$(date)" "$1" $3" >> report.log
fi
check gateway "$(route | grep 10.10.0.1 | cut -c17-27)" = 10.10.0.1
check telnet "$(grep -c \ 23$ $router)" = 0
check ftp "$(grep -c \ 21$ $router)" = 0
check ssh "$(grep -c \ 22$ $router)" = 0
check smtp "$(grep -c \ 25$ $router)" = 0
check dns "$(grep -c \ 53$ $router)" = 0

BASH shell script echo to output on same line

I have a simple BASH shell script which checks the HTTP response code of a curl command.
The logic is fine, but I am stuck on "simply" printing out the "output".
I am using GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
I would like to output the URL with a tab - then the 404|200|501|502 response. For example:
http://www.google.co.uk<tab>200
I am also getting a strange error where the "http" part of a URL is being overwritten with the 200|404|501|502. Is there a basic BASH shell scripting (feature) which I am not using?
thanks
Miles.
#!/bin/bash
NAMES=`cat $1`
for i in $NAMES
do
URL=$i
statuscode=`curl -s -I -L $i |grep 'HTTP' | awk '{print $2}'`
case $statuscode in
200)
echo -ne $URL\t$statuscode;;
301)
echo -ne "\t $statuscode";;
302)
echo -ne "\t $statuscode";;
404)
echo -ne "\t $statuscode";;
esac
done
From this answer you can use the code
response=$(curl --write-out %{http_code} --silent --output /dev/null servername)
Substituted into your loop this would be
#!/bin/bash
NAMES=`cat $1`
for i in $NAMES
do
URL=$i
statuscode=$(curl --write-out %{http_code} --silent --output /dev/null $i)
case $statuscode in
200)
echo -e "$URL\t$statuscode" ;;
301)
echo -e "$URL\t$statuscode" ;;
302)
echo -e "$URL\t$statuscode" ;;
404)
echo -e "$URL\t$statuscode" ;;
* )
;;
esac
done
I've cleaned up the echo statements too, so each URL gets its own line.
try
200)
echo -ne "$URL\t$statuscode" ;;
I'm taking a stab here, but I think what's confusing you is the fact that curl sometimes returns more than one set of headers (hence more than one status code) when the initial request gets redirected.
For example:
[me#home]$ curl -sIL www.google.com | awk '/HTTP/{print $2}'
302
200
When you're printing that in a loop, it would appear that the second status code has become part of the next URL.
If this is indeed your problem, then there are several ways to solve this depending on what you're trying to achieve.
If you don't want to follow redirections, simply leave out the -L option in curl:
statuscode=$(curl -sI $i | awk '/HTTP/{print $2}')
To take only the last status code, simply pipe the whole command to tail -n1.
statuscode=$(curl -sI $i | awk '/HTTP/{print $2}' | tail -n1)
To show all codes in order, replace the line breaks with spaces:
statuscode=$(curl -sI $i | awk '/HTTP/{print $2}' | tr "\n" " ")
For example, using the 3rd scenario:
[me#home]$ cat script.sh
#!/bin/bash
for URL in www.stackoverflow.com stackoverflow.com stackoverflow.com/xxx
do
statuscode=$(curl -siL "$URL" | awk '/^HTTP/{print $2}' | tr '\n' ' ')
echo -e "${URL}\t${statuscode}"
done
[me#home]$ ./script.sh
www.stackoverflow.com 301 200
stackoverflow.com 200
stackoverflow.com/xxx 404
