Assign variable and redirect in bash

I'm doing ad-hoc profiling on a web service that seems to maintain some state and get slower and slower until eventually things start timing out. I have a simple script that will expose this behavior:
while true
do
    RESPONSE_CODE=$( curl --config curl.config )
    if [ "$RESPONSE_CODE" -eq "200" ]; then
        echo SUCCESS
    else
        echo FAILURE
    fi
done
Along with some headers, cookies, post data, the URL, etc., curl.config in particular has the lines:
silent
output = /dev/null
write-out = "%{http_code}"
So the only output from curl should be the HTTP status code.
This works fine. What I'd like to do is something like this:
{ time -p RESPONSE_CODE=$(curl --config curl.config) ; } 2>&1 | awk '/real/{print $2;}'
to get a running printout of how long these queries actually take, while still saving curl's output for use in my test. But that doesn't work.
How can I capture the http status from curl AND grab the output of time so I can process both?

As written:
RESPONSE_CODE = $( curl --config curl.config )
you have spaces around the assignment, which simply does not work in shell (it tries to execute a command RESPONSE_CODE with = as the first argument, etc.). You need:
RESPONSE_CODE=$( curl --config curl.config )
The time built-in is hard to redirect: it is a shell keyword rather than an external command, so its report goes to the shell's stderr and cannot be captured like ordinary command output. Since you need both the HTTP status and the real time, you will have to do something to capture both values. One possibility is:
set -- $( (time -p -- curl --config curl.config ) 2>&1 |
          awk '/real/{print $2} /^[0-9]+$/{print}')
which will set $1 and $2. Another is array assignment:
data=( $( (time -p -- curl --config curl.config ) 2>&1 |
          awk '/real/{print $2} /^[0-9]+$/{print}') )
The HTTP response code should appear before the time.
(Tested using sh -c 'echo 200; sleep 1' in lieu of curl --config curl.config.)
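Putting the array variant back into the question's polling loop might look like this (a sketch, assuming curl.config makes curl print only the status code, as described above):
while true
do
    # Merge time's stderr report with curl's stdout, then pick out
    # the bare status code and the 'real' seconds.
    data=( $( (time -p curl --config curl.config) 2>&1 |
              awk '/real/{print $2} /^[0-9]+$/{print}') )
    if [ "${data[0]}" -eq 200 ]; then
        echo "SUCCESS in ${data[1]}s"
    else
        echo "FAILURE in ${data[1]}s"
    fi
done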

This should work if curl's output is only a single line:
#!/bin/bash
RESPONSE_CODE=''
TIME=''
while read -r TYPE DATA; do
    case "$TYPE" in
        curl)
            RESPONSE_CODE=$DATA
            ;;
        real)
            TIME=$DATA
            ;;
    esac
done < <(exec 2>&1; time -p R=$(curl --config curl.config); echo "curl $R")
Or use an associative array:
#!/bin/bash
declare -A RESPONSE
while read -r TYPE DATA; do
    RESPONSE[$TYPE]=$DATA
done < <(exec 2>&1; time -p R=$(curl ...); echo "code $R")
echo "${RESPONSE[code] ${RESPONSE[real]}"

Related

ActiveMQ command line: publish messages to a queue from a file?

I have an app that uses ActiveMQ, and typically, I test it by using AMQ's web UI to send messages to queues that my software is consuming from.
I'd like to semi-automate this and was hoping AMQ's command line has the capability to send a message to a specific queue by either providing that message as text in the command invocation, or ideally, reading it out of a file.
Examples:
./activemq-send queue="my-queue" messageFile="~/someMessage.xml"
or:
./activemq-send queue="my-queue" message="<someXml>...</someXml>"
Is there any way to do this?
You could use the "A" utility to do this.
a -b tcp://somebroker:61616 -p @someMessage.xml my-queue
Disclaimer: I'm the author of A, wrote it once to do just this thing. There are other ways as well, such as the REST interface, a Groovy script and whatnot.
ActiveMQ has a REST interface that you can send messages to from the command line, using, for example, the curl utility.
Here is a script I wrote and use for this very purpose:
#!/bin/bash
#
#
# Sends a message to the message broker on localhost.
# Uses ActiveMQ's REST API and the curl utility.
#
if [ $# -lt 2 -o $# -gt 3 ] ; then
    echo "Usage: msgSender (topic|queue) DESTINATION [ FILE ]"
    echo "  Ex: msgSender topic myTopic msg.json"
    echo "  Ex: msgSender topic myTopic <<< 'this is my message'"
    exit 2
fi
UNAME=admin
PSWD=admin
TYPE=$1
DESTINATION=$2
FILE=$3
BHOST=${BROKER_HOST:-'localhost'}
BPORT=${BROKER_REST_PORT:-'8161'}
if [ -z "$FILE" -o "$FILE" = "-" ] ; then
    # Get msg from stdin if no filename given
    ( echo -n "body=" ; cat ) \
        | curl -u $UNAME:$PSWD --data-binary '@-' --proxy "" \
          "http://$BHOST:$BPORT/api/message/$DESTINATION?type=$TYPE"
else
    # Get msg from a file
    if [ ! -r "$FILE" ] ; then
        echo "File not found or not readable"
        exit 2
    fi
    ( echo -n "body=" ; cat $FILE ) \
        | curl -u $UNAME:$PSWD --data-binary '@-' --proxy "" \
          "http://$BHOST:$BPORT/api/message/$DESTINATION?type=$TYPE"
fi
Based on Rob Newton's answer, this is what I'm using to post a file to a queue. I also post a custom property (which is not possible through the ActiveMQ web console):
( echo -n "body=" ; cat file.xml ) | curl --data-binary '#-' -d "customProperty=value" "http://admin:admin#localhost:8161/api/message/$QueueName?type=$QueueType"

Bash: compare curl output with a variable?

I have this code fetching file size from curl:
file_size=$(curl -sI -b cookies.txt -H "Authorization: Bearer $access_token" -X GET "https://url.com/api/files$file" | grep Content-Length | awk '{print $2}')
I have another set of variables:
output_filesize_lowerBound=25000000
output_filesize_upperBound=55000000
wrong_filesize=317
I want to write an if statement that compares them and allows me to process it. Sample:
if [[ ( "$file_size" > "$output_filesize_lowerBound" ) && ( "$file_size" < "$output_filesize_upperBound" ) ]]
then
echo "Writing"
curl -b cookies.txt -H "Authorization: Bearer $access_token" -X GET "https://url.com/api/files$file" -o "$output_file"
elif [[ ( "$file_size" == "$wrong_filesize" ) ]]
then
echo "Wrong File"
else
echo "There is some problem with the file's size, please check it online"
fi
Somehow it's not working for wrong files, i.e. it never reaches the second branch and executes the first one every time.
I wasted almost an entire day trying out every alternative I could find.
First off, I'd suggest using a different strategy for fetching object size. I've used both of these at various times:
file_size="$(curl -s -o/dev/null -w '%{size_download}' "$url")"
or
file_size="$(curl -sI "$url" | awk '{a[$1]=$2} END {print a["Content-Length:"]}')"
The first one downloads the whole object and returns the number of bytes that curl actually saw. It'll work for URLs that don't return the length header. The second one uses curl -I to download only the headers.
Note that you could also parse curl output in pure bash, without using awk. But whatever, it all works. :)
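For instance, a minimal sketch of the pure-bash variant (assuming bash 4+ for the ${var,,} lowercasing, and a server that actually sends Content-Length):
file_size=0
while IFS=': ' read -r header value; do
    # Header names are case-insensitive; also strip the trailing CR
    # that HTTP header lines carry.
    if [[ ${header,,} == content-length ]]; then
        file_size=${value%$'\r'}
    fi
done < <(curl -sI "$url")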
Second, the issue is that your if notation does not do what you expect: inside [[ ]], the < and > operators compare strings lexicographically, not numerically. "317" sorts after "25000000" (because "3" comes after "2"), so the first branch matches every time. I recommend testing each potential failure separately, using arithmetic evaluation, and reporting on the specific problems:
if (( file_size < output_filesize_lowerBound )); then
    echo "ERROR: $file_size is too small." >&2
elif (( file_size > output_filesize_upperBound )); then
    echo "ERROR: $file_size is too big." >&2
elif (( file_size == wrong_filesize )); then
    # This will never be hit if wrong_filesize is < lowerBound.
    echo "ERROR: $file_size is just plain wrong." >&2
else
    echo "Writing"
    ...
fi
By testing your limits individually and including the tested data in your error output, it'll be more obvious exactly what is causing your script to behave unexpectedly.
For example, if you want the wrong_filesize test to be done BEFORE file_size is tested against lowerBound, you can simply reorder the tests.
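For example, with the exact-match test moved to the front (same tests as above):
if (( file_size == wrong_filesize )); then
    echo "ERROR: $file_size is just plain wrong." >&2
elif (( file_size < output_filesize_lowerBound )); then
    echo "ERROR: $file_size is too small." >&2
elif (( file_size > output_filesize_upperBound )); then
    echo "ERROR: $file_size is too big." >&2
else
    echo "Writing"
fi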

How to get success count, failure count and failure reason when testing REST web services from a file using a shell script

Hi, I am testing web services using a shell script with multiple if conditions. With the following shell script I get the success count, failure count, and failure reason:
success=0
failure=0
if curl -s --head --request DELETE http://localhost/bimws/delete/deleteUser?email=pradeepkumarhe1989@gmail.com | grep "200 OK" > /dev/null; then
    success=$((success+1))
else
    echo "DeleteUser is not working"$'\r' >> serverLog.txt
    failure=$((failure+1))
fi
if curl -s --head --request GET http://localhost/bimws/get/getUserDetails?email=anusha4saju@gmail.com | grep "200 OK" > /dev/null; then
    success=$((success+1))
else
    curl -s --head --request GET http://localhost/bimws/get/getUserDetails?email=anusha4saju@gmail.com > f1.txt
    echo "getUserDetails is not working"$'\r' >> serverLog.txt
    failure=$((failure+1))
fi
if curl -s -i -X POST -H "Content-Type:application/json" http://localhost/bimws/post/addProjectLocationAddress -d '{"companyid":"10","projectid":"200","addresstypeid":"5","address":"1234 main st","city":"san jose","state":"CA","zip":"989898","country":"United States"}' | grep "200 OK" > /dev/null; then
    success=$((success+1))
else
    echo "addProjectLocationAddress is not working"$'\r' >> serverLog.txt
    failure=$((failure+1))
fi
echo $success Success
echo $failure failure
But I would like to test the web services from a file instead. I have a file called web_services.txt which contains all my web service calls. Using a shell script, how do I execute them and get the success count, failure count, and failure reason?
web_services.txt
All are different calls (DELETE, GET and POST):
http://localhost/bimws/delete/deleteUser?email=pradeepkumarhe1989@gmail.com
http://localhost/bimws/get/getUserDetails?email=anusha4saju@gmail.com
http://localhost/bimws/post/addProjectLocationAddress -d '{"companyid":"10","projectid":"200","addresstypeid":"5","address":"1234 main st"
,"city":"san jose","state":"CA","zip":"989898","country":"United States"}'
First of all, your current code does not correctly deal with empty lines. You need to skip those.
Your lines already contain shell commands. Running curl on them makes no sense. Instead, you should evaluate these commands.
Then, you need to modify curl so that it reports whether the request was successful by adding -f:
FILE=D:/WS.txt
success=0
failure=0
while read LINE; do
    if test -z "$LINE"; then
        continue
    fi
    if eval $(echo "$LINE" | sed 's/^curl/curl -f -s/') > /dev/null; then
        success=$((success+1))
    else
        echo $LINE >> aNewFile.txt
        failure=$((failure+1))
    fi
done < $FILE
echo $success Success
echo $failure failure
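If you also want the failure reason in serverLog.txt, one possible variation (a sketch, still assuming each line of the file is a complete curl command) is to ask curl for the status code with --write-out instead of discarding all of its output:
success=0
failure=0
while read LINE; do
    test -z "$LINE" && continue
    # Print only the HTTP status code; send the response body to /dev/null.
    code=$(eval "$(echo "$LINE" | sed 's|^curl|curl -s -o /dev/null -w "%{http_code}"|')")
    if [ "$code" = 200 ]; then
        success=$((success+1))
    else
        echo "$LINE failed with HTTP status $code" >> serverLog.txt
        failure=$((failure+1))
    fi
done < $FILE
echo $success Success
echo $failure failure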

Write to file if header is a 404 using Bash and Curl on Linux

I have a simple script that accepts 2 arguments, a URL and a log file location. Theoretically, it should capture the header status code from the curl command and if it is a 404, then append the URL to the log file. Any idea where it is failing?
#!/bin/bash
CMP='HTTP/1.1 404 Not Found' # This is the 404 Pattern
OPT=`curl --config /var/www/html/curl.cnf -s -D - "$1" -o /dev/null | grep 404` # Status Response
if [ $OPT = $CMP ]
then
    echo "$1" >> "$2" # Append URL to File
fi
Your test breaks because $OPT and $CMP are unquoted: both expand to multiple words, so [ sees too many arguments ("[: too many arguments") and the comparison never succeeds. Try the following simpler method, which checks the return code of the grep command rather than comparing its output:
#!/bin/bash
CMP='HTTP/1.1 404 Not Found'
if curl -s -I "$1" | grep -q "$CMP"; then
    echo "$1" >> "$2"
fi
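An alternative that avoids grep entirely is to ask curl for the status code directly with its --write-out option (%{http_code} is a standard curl write-out variable):
#!/bin/bash
# Print nothing but the numeric status code; the body goes to /dev/null.
status=$(curl -s -o /dev/null -w '%{http_code}' "$1")
if [ "$status" = "404" ]; then
    echo "$1" >> "$2" # Append URL to file
fi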

for loop: commands start from the beginning every time

I have written the following bash script to check a list of domains from domain.list against multiple directories from dir.list.
Say example.com is the first domain. The script first tries to find the file at
http://example.com
If that succeeds, the script finishes and exits, no problem.
If it fails, it goes on to check
https://example.com
If that is OK, the script finishes and exits.
If not, it checks
http://example.com/$dir for the list of different directories.
If the file is found, the script finishes and exits; if it fails to find it,
it then goes on to check
https://example.com/$dir for the list of different directories.
But the problem is that when the first and second checks fail, it goes on to the third check, but then it keeps looping over the third and fourth commands until it finds the file or the list of directories is exhausted.
I want the script, when it reaches the 3rd command, to check the whole list of directories until the list is finished, and only then to go on to the 4th command.
As it is, my script keeps checking a single domain against multiple directories, and every time it checks a new directory it starts the whole script from the beginning and runs the 1st and 2nd commands again. I do not need that; it is a big waste of time.
Thanks
#!/bin/bash
dirs=( `cat dir.list` )
doms=( `cat domain.list` )
for dom in "${doms[@]}"
do
    for dir in "${dirs[@]}"
    do
        target1="http://${dom}"
        target2="https://${dom}"
        target3="http://${dom}/${dir}"
        target4="https://${dom}/${dir}"
        if curl -s --insecure -m2 ${target1}/test.txt | grep "success" > /dev/null; then
            echo ${target1} >> dir.result
            break
        elif curl -s --insecure -m2 ${target2}/test.txt | grep "success" > /dev/null; then
            echo ${target2} >> dir.result
            break
        elif curl -s --insecure -m2 ${target3}/test.txt | grep "success" > /dev/null; then
            echo ${target3} >> dir.result
            break
        elif curl -s --insecure -m2 ${target4}/test.txt | grep "success" > /dev/null; then
            echo ${target4} >> dir.result
            break
        fi
    done
done
Your code is sub-optimal; if you have a list of 5 'dir' values, you check 5 times whether http://${domain}/test.txt exists — but the chances are that if it didn't exist the first time, it doesn't exist on the other times either.
You use dir to indicate a sub-directory name, but your code uses http://${dom}:${dir} rather than the more normal http://${dom}/${dir}. Technically, what follows the colon up to the first slash is a port number, not a directory. I'm going to assume this is a typo and the colon should be replaced by a slash.
Generally, do not use the back-tick notation; use $(…) instead. Avoid swathes of repeated code, too.
I think you can compress your script down to something like this:
#!/bin/bash
dirs=( $(cat dir.list) )
file=test.txt

fetch_file()
{
    if curl -s --insecure -m2 "${1:?}/${file}" | grep "success" > /dev/null
    then
        echo "${1}"
        return 0
    else
        return 1
    fi
}

for dom in $(cat domain.list)
do
    for proto in http https
    do
        fetch_file "${proto}://${dom}" && break
        for dir in "${dirs[@]}"
        do
            fetch_file "${proto}://${dom}/${dir}" && break 2
        done
    done
done > dir.result
If the domain list is massive, you could consider using while read dom; do …; done < domain.list instead of using the $(cat domain.list). It would be feasible, and possibly even sensible, to define variable site="${proto}://${dom}" and then use that in the invocations of fetch_file.
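A sketch of that variant, with the suggested site variable and the same fetch_file function as above:
while read -r dom; do
    for proto in http https; do
        site="${proto}://${dom}"
        fetch_file "${site}" && break          # found at the bare domain
        for dir in "${dirs[@]}"; do
            fetch_file "${site}/${dir}" && break 2
        done
    done
done < domain.list > dir.result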
You can use this script:
while read dom; do
    while read dir; do
        target1="http://${dom}"
        target2="https://${dom}"
        target3="http://${dom}/${dir}"
        target4="https://${dom}/${dir}"
        if curl -s --insecure -m2 ${target1}/test.txt | grep -q "success"; then
            echo ${target1} >> dir.result
            break 2
        elif curl -s --insecure -m2 ${target2}/test.txt | grep -q "success"; then
            echo ${target2} >> dir.result
            break 2
        elif curl -s --insecure -m2 ${target3}/test.txt | grep -q "success"; then
            echo ${target3} >> dir.result
            break 2
        elif curl -s --insecure -m2 ${target4}/test.txt | grep -q "success"; then
            echo ${target4} >> dir.result
            break 2
        fi
    done < dir.list
done < domain.list
