Why does curl not return a value in a bash script?

My task is very simple, yet I have been sitting on it for hours and have no idea why it doesn't work.
In a Linux bash script I want to get the result of a web service call with curl. I'm not interested in the content, only in the status code:
#!/bin/bash
set -euo pipefail # put bash in "strict mode"
echo "Before"
response2=$(curl -o /dev/null -s -w '%{http_code}' -u username:password -X POST https://xxxxx.yyyyy.at:8081/MyPath/service/rest/crypto/encrypt -H 'Content-Type: application/json' -d '{"deviceId": "ABCD","payload": "ABCD"}')
echo "after"
It works when the request is valid:
Before...
200
Also, when the path of the service is wrong, it gives an HTTP error code:
Before...
500
But when the host is wrong (non-existent hostname) I get
Before...
and the script terminates (although the call is made from a looping menu).
Why is this the case?
A manual call of curl with the same parameters gives
000
as output, so why is this output not displayed in my script?
A reproducible example (with a non-existent server name):
#!/bin/bash
set -euo pipefail
#- Check kms
f_check_kms(){
    echo "Before..."
    response2=$(curl -o /dev/null -s -w '%{http_code}' -u user:xxxx -X POST https://xxxx.xxx.intra.graz.at:8081/ATM-KeyManagement-REST-Service/service/rest/crypto/encryptUCast -H 'Content-Type: application/json' -d '{"deviceId": "SAG0530000016261", "encryptionSuite": "DLMS_SUITE_0", "securityMode": "AUTHENT_AND_ENCRYPT", "roleId": "001","initialVector": "4D4D4D0000BC614E01234567","payload": "ABCD","usedGuek":"NO","usedGak":"NO"}')
    echo "$response2"
}
f_check_kms

You're running your script with set -e to make the shell interpreter exit when any¹ unchecked² command exits with a nonzero status, and when you provide an invalid hostname, curl exits with a nonzero exit status.
Because you're passing -s for silent mode, it doesn't print any error messages about this (you asked it not to!). It does still print the http_code you asked for, but because the script exits, the echo "after" is never reached, and whatever other code you're relying on to print the contents of the response2 variable is likewise never reached.
Suppressing this is as simple as adding a conditional to the end, like the || : sequence below:
response2=$(curl -o /dev/null -s -w '%{http_code}' -u username:password \
-X POST https://xxxxx.yyyyy.at:8081/MyPath/service/rest/crypto/encrypt \
-H 'Content-Type: application/json' \
-d '{"deviceId": "ABCD","payload": "ABCD"}' \
) || : "Ignoring exit status of $?"
You'll be able to see that message when running your script in trace mode (set -x / bash -x yourscript), but it'll be otherwise invisible, and because || is branching on curl's exit status, this marks curl as "checked" so set -e won't decide to exit based on its exit status.
¹ Not really true: set -e has a bunch of exceptions it doesn't exit over, and those exceptions change between individual shell releases.
² This is a very unintuitively-defined word: For example, when you check the exit status of a function, everything that function calls can become "checked", so set -e's behavior is extremely context-sensitive and hard to predict; what's checked when a function is called one time might not be checked when it's called again later.
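If you'd rather keep curl's exit status around instead of discarding it, here is a minimal sketch under set -euo pipefail, reusing the placeholder URL and credentials from the question; the curl_status variable name is just an illustration:
#!/bin/bash
set -euo pipefail

curl_status=0
response2=$(curl -o /dev/null -s -w '%{http_code}' -u username:password \
    -X POST https://xxxxx.yyyyy.at:8081/MyPath/service/rest/crypto/encrypt \
    -H 'Content-Type: application/json' \
    -d '{"deviceId": "ABCD","payload": "ABCD"}') || curl_status=$?

# The assignment is "checked" because of the || branch, so set -e does not
# abort the script. curl_status holds curl's exit code (typically 6 for a
# DNS resolution failure) and response2 holds the http_code write-out
# (000 when no response was received).
echo "HTTP code: $response2 (curl exit status: $curl_status)"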

Related

Format URL with the system date in bash

I would run a .sh script to execute an operation from a server only if the given URL is up.
The URL where I get data updates every day (but I don't know exactly what time it updates).
A cron job would run this script every five minutes and as soon as the updated URL exists, it runs an Rscript.
I don't know curl or bash well enough to update the date according to the system date.
I thought of writing code in bash that would look like this:
if curl -s --head --request GET https://example.com-2021-05-10 | grep "200 OK" > /dev/null; then
Rscript ~/mybot.R
else
echo "the page is not up yet"
fi
Just use the date command.
if curl -s --head --request GET https://example.com-$(date '+%Y-%m-%d') | grep "200 OK" > /dev/null; then
Rscript ~/mybot.R
else
echo "the page is not up yet"
fi
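Note that grepping for "200 OK" can be brittle: over HTTP/2 the status line is just "HTTP/2 200", with no "OK". A sketch that checks the numeric status code instead, keeping the same hypothetical URL pattern:
#!/bin/bash
url="https://example.com-$(date '+%Y-%m-%d')"
# --head sends a HEAD request; --write-out prints only the status code.
status=$(curl -s -o /dev/null --head --write-out '%{http_code}' "$url")
if [ "$status" = 200 ]; then
    Rscript ~/mybot.R
else
    echo "the page is not up yet (got status $status)"
fi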

Capturing error message of curl in BASH script and checking status

I need 4 things from curl in a BASH script:
I need to capture a brief, human-readable error message from curl into a bash variable.
I need to be able to check whether the command completed successfully or not.
I need the command to run.
I don't want anything printed to the console unless I echo it.
m=$(curl -u "$user":AP"$pass" -T "$pathA" "$url")
if [ $? -ne 0 ] ; then
echo "Error: ""$m"
fi
The problem is curl puts gibberish into $m. It just dumps the error to console instead of m. I don't want anything printed to console unless I echo it. And I only want to capture error descriptions. I tried many variations, nothing seemed to work for this use-case... at least nothing suggested here on Stack.
curl sends its error messages and progress output to STDERR, so they are not captured in $m; that is the "gibberish" you see dumped to the console.
One solution is to redirect STDERR to STDOUT by adding 2>&1 to your curl invocation in your script:
m=$(curl -u "$user":AP"$pass" -T "$pathA" "$url" 2>&1)
if [ $? -ne 0 ] ; then
echo "Error: ""$m"
fi
You could also use the --fail, --silent and --show-error flags of curl like in this example if all you care about are the errors:
Making curl send errors to stderr and everything else to stdout
and some information about capturing STDERR to $variable in bash scripts:
Bash how do you capture stderr to a variable?
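Combining both ideas, here is a minimal sketch assuming the same $user, $pass, $pathA and $url variables as in the question:
# -sS: silence the progress meter but keep error messages;
# --fail: make HTTP errors (4xx/5xx) produce a nonzero exit status;
# -o /dev/null: discard the response body;
# 2>&1: fold STDERR (the error text) into the captured output.
m=$(curl -sS --fail -u "$user":AP"$pass" -T "$pathA" "$url" -o /dev/null 2>&1)
if [ $? -ne 0 ]; then
    echo "Error: $m"
fi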
Similar to the one suggested; I'm just using the output on stderr, as $? is always 0 for me.
OUTPUT=$(curl -X POST -H 'Content-type: application/json' --data "{\"text\":\"$MSG \"}" "$SLACK_URL" 2>&1)
echo $OUTPUT
if [[ $OUTPUT == "ok" ]]; then
echo "$FILE_NAME Msg successfully sent"
elif [[ $OUTPUT == "invalid_token" ]]; then
echo "$FILE_NAME Slack url incorrect"
else
echo "$FILE_NAME Some issue Sending msg"
fi
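If you also want to distinguish transport failures (DNS, TLS, timeouts) from API-level errors, a hedged variant is to capture curl's exit status alongside its output; this is only a sketch reusing the $MSG and $SLACK_URL variables from the snippet above:
status=0
OUTPUT=$(curl -sS -X POST -H 'Content-type: application/json' \
    --data "{\"text\":\"$MSG\"}" "$SLACK_URL" 2>&1) || status=$?
if [ "$status" -ne 0 ]; then
    echo "curl failed with exit code $status: $OUTPUT"   # network/TLS problem
elif [ "$OUTPUT" = "ok" ]; then
    echo "Msg successfully sent"
else
    echo "Slack rejected the request: $OUTPUT"            # e.g. invalid_token
fi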

Bash script to check HTTP status before executing tests

Hi, I am trying to execute specific tests only if the application is up and running (I am using Docker), and I am trying to achieve this with a bash script. What I expect: the script should loop until it receives a 200 status from the application; once it receives 200, it should move ahead and execute the tests. My bash script so far is as follows:
#!/bin/bash
urlstatus=0
until [ $urlstatus -ne 200 ]; do
urlstatus=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "http://localhost:8000/animals")
echo $urlstatus
done
Execute Test if application is up & running
Please let me know what is missing in the script.
Thanks
-ne is the exact opposite of the test you actually want; to loop until the exit status is 200 you should have -eq, or even better (to avoid error messages from the comparison if a non-numeric value is present), =.
#!/bin/sh
fetchstatus() {
  curl \
    -o /dev/null \
    --silent \
    --head \
    --write-out '%{http_code}' \
    "http://localhost:8000/animals"
}

urlstatus=$(fetchstatus)          # initialize to actual value before we sleep even once
until [ "$urlstatus" = 200 ]; do  # until our result is success...
  sleep 1                         # wait a second...
  urlstatus=$(fetchstatus)        # then poll again.
done
But since curl can adjust its exit status to indicate whether a request was successful, you don't even need that. Use --fail, and you can branch directly:
#!/bin/sh
while :; do
  curl -sS --fail -o /dev/null "http://localhost:8000/animals" && break
  sleep 1   # actually give your server a little rest
done
The && break means that we break out of the loop only if the request was successful; the --fail argument to curl means that it only returns success if the server responded with a non-error HTTP status (such as 200).
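If you don't want the loop to poll forever when the service never comes up, a hedged variant could bail out after a maximum number of tries (the 60-attempt cap below is an arbitrary choice, not part of the original answer):
#!/bin/sh
attempts=0
until curl -sS --fail -o /dev/null "http://localhost:8000/animals"; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 60 ]; then      # give up after roughly a minute
    echo "service never became healthy" >&2
    exit 1
  fi
  sleep 1
done
echo "service is up, running tests..."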

curl works, but won't execute in BASH script

The following curl command works from the command line. I get a valid response from the server.
curl -X POST https://somebaseurl/api/v1/auth/login -H "Content-Type:application/json" -d '{"email": "foo#bar,com", "password": "foo"}'
However I am trying to write a BASH script like this
baseUrl=https://somebaseurl
contentTypeJson="\"Content-Type:application/json\""
credentials="'{\"email\": \"foo#bar.com",\"password\": \"foo\"}'"
login="curl -X POST $baseUrl/api/v1/auth/login -H $contentTypeJson -d $credentials"
echo ${login}
response=`${login}`
echo ${response}
I get a bad request response from the server. However if I copy the echoed curl command directly into my terminal it works. What am I doing wrong?
edit:
As requested I get
Bad Request For request 'POST api/v1/auth/login' [Expected application/json]
Bash and cURL can be quite particular about how quotes are used within a script. If the escaping gets thrown off, everything else can easily fail. Running the script through shellcheck.net is often very helpful in identifying such issues. Below is a revised version of the script after fixing it based upon those suggestions:
#!/bin/bash
baseUrl="https://somebaseurl/api/v1/auth/login"
contentTypeJson="Content-Type:application/json"
credentials="{\"email\": \"foo#bar.com\", \"password\": \"foo\"}"
login="$(curl -X POST "$baseUrl" -H "$contentTypeJson" -d "$credentials")"
echo "${login}"
response="${login}"
echo "${response}"
Executing with backticks interprets the command only as a sequence of words, and doesn't treat quotes specially. To have the shell interpret quotes as if they were interactively typed, use eval ${login} instead.
As an aside, bash has a -x option which will show you commands as they are being executed (run your script with bash -x script.sh instead of bash script.sh or ./script.sh). This will show you the commands correctly quoted, and is more helpful than printing them out using echo.
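An alternative sketch, not from the original answer: store the command in a bash array instead of a string, which preserves each argument's boundaries without needing eval (reusing the same hypothetical baseUrl and credentials):
#!/bin/bash
baseUrl="https://somebaseurl"
credentials='{"email": "foo#bar.com", "password": "foo"}'

# Each array element stays one argument, so no quoting is lost on expansion.
login_cmd=(curl -X POST "$baseUrl/api/v1/auth/login"
           -H "Content-Type:application/json"
           -d "$credentials")

echo "Running: ${login_cmd[*]}"   # readable trace of the command
response=$("${login_cmd[@]}")     # execute it with arguments intact
echo "$response"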

While loop to execute nagios commands not working properly

I wrote a small bash script in this post: How to search for a string in a text file and perform a specific action based on the result
I noticed that when I ran the script and checked the logs, everything appeared to be working, but when I look at the Nagios UI, almost half of the servers listed in my text file did not get their notifications disabled. A revised version of the script is below:
host=/Users/bob/wsus.txt
password="P#assw0rd123"
while read -r host; do
region=$(echo "$host" | cut -f1 -d-)
if [[ $region == *sea1* ]]
then
echo "Disabling host notifications for: $host"
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
else
echo "Disabling host notifications for: $host"
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah02.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
fi
done < wsus.txt >> /Users/bob/disable.log 2>&1
If I run the command manually against the servers having the issue, they do get disabled in the Nagios UI, so I'm a bit confused. FYI, I'm not well versed in Bash either, so this was my attempt at automating this process a bit.
1 - There is a missing double-quote before the first https occurrence:
You have:
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
Should be:
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios.$region.blah.com/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k 2>&1
2 - Your first variable host is never used (overwritten inside the while loop).
I'm guessing what you were trying to do was something like:
hosts_file="/Users/bob/wsus.txt"
log_file="/Users/bob/disable.log"
# ...
while read -r host; do
# Do stuff with $host
done < "$hosts_file" >> "$log_file" 2>&1
3 - This looks suspicious to me:
if [[ $region == *sea1* ]]
Note: I haven't tested it yet, so this is my general feeling about this, might be wrong.
The $region isn't double-quoted, so make sure there could be no spaces / funny stuff happening there (but this should not be a problem inside a double-bracket test [[).
Inside [[ ]], the *sea1* on the right-hand side is treated as a glob pattern matched against the string, not expanded against filenames, so the test itself should work; still, if you want to test this as a regular expression, you can use the =~ operator or (my favorite for some reason) the grep command:
if grep -q ".*sea.*" <<< "$region"; then
# Your code if match
else
# Your code if no match
fi
The -q keeps grep quiet
There is no need for a test like [ or [[ because the return code of grep is already 0 if there is any match
The <<< simply redirects the string on the right as the standard input of the left command (avoiding useless piping like echo "$region" | grep -q ".*sea.*").
If this doesn't solve your problem, please provide a sample of your input file hosts_file as well as some output logs.
You could also try to see what's really going on under the hood by enclosing your script with set -x and set +x to activate debug/trace mode.
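Putting those three points together, a consolidated sketch of the loop might look like this (the host file path, region logic and Nagios URLs come from the question; the blah/blah02 domains remain placeholders):
#!/bin/bash
hosts_file="/Users/bob/wsus.txt"
log_file="/Users/bob/disable.log"
password="P#assw0rd123"

while read -r host; do
  region=$(echo "$host" | cut -f1 -d-)
  if [[ "$region" == *sea1* ]]; then
    domain="blah.com"
  else
    domain="blah02.com"
  fi
  echo "Disabling host notifications for: $host"
  curl -vs -o /dev/null \
       -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" \
       "https://nagios.$region.$domain/nagios/cgi-bin/cmd.cgi" \
       -u "bob:$password" -k 2>&1
done < "$hosts_file" >> "$log_file" 2>&1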
