Bash script to check HTTP status before executing tests

Hi, I am trying to execute specific tests only if the application is up and running (I am using Docker), and I am trying to achieve this with a bash script. What I expect is to run a loop until I receive a 200 status from the application; once I receive 200, the script should move ahead and execute the tests. The bash script I am trying is as follows:
#!/bin/bash
urlstatus=0
until [ $urlstatus -ne 200 ]; do
  urlstatus=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "http://localhost:8000/animals")
  echo $urlstatus
done
# Execute tests if application is up & running
Please let me know what is missing in the script.
Thanks

-ne is the exact opposite of the test you actually want: to loop until the HTTP status is 200, you should use -eq, or even better (to avoid error messages from the comparison if a non-numeric value is present), =.
#!/bin/sh
fetchstatus() {
  curl \
    -o /dev/null \
    --silent \
    --head \
    --write-out '%{http_code}' \
    "http://localhost:8000/animals"
}

urlstatus=$(fetchstatus)          # initialize to actual value before we sleep even once
until [ "$urlstatus" = 200 ]; do  # until our result is success...
  sleep 1                         # ...wait a second...
  urlstatus=$(fetchstatus)        # ...then poll again.
done
But since curl can adjust its exit status to indicate whether a request was successful, you don't even need that. Use --fail, and you can branch directly:
#!/bin/sh
while :; do
  curl -sS --fail -o /dev/null "http://localhost:8000/animals" && break
  sleep 1  # actually give your server a little rest
done
The && break means that we break out of the loop only if the request was successful; the --fail argument to curl makes it return success only if the server answered with a non-error HTTP status (such as 200).
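Tying that back to the original question, a minimal sketch of the whole flow might look like this (./run_tests.sh is a placeholder for whatever actually runs your tests):
#!/bin/sh
# Poll until the app answers with a success status, then run the tests.
until curl -sS --fail -o /dev/null "http://localhost:8000/animals"; do
  sleep 1
done
./run_tests.sh  # placeholder for your actual test command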

Related

Why does curl not return a value in bash script?

My task is very simple, yet I have been sitting on it for hours and have no idea why it doesn't work.
In a Linux bash script I want to get the result of a webservice call with curl. I'm not interested in the content, only the status code:
#!/bin/bash
set -euo pipefail # put bash in "strict mode"
echo "Before"
response2=$(curl -o /dev/null -s -w '%{http_code}' -u username:password -X POST https://xxxxx.yyyyy.at:8081/MyPath/service/rest/crypto/encrypt -H 'Content-Type: application/json' -d '{"deviceId": "ABCD","payload": "ABCD"}')
echo "after"
It works when there is a valid request
Before...
200
Also, when the path of the service is wrong, it gives an HTTP error code:
Before...
500
But when the host is wrong (a nonexistent hostname), I get
Before...
and the script terminates (although the call comes from a looping menu).
Why is this the case?
A manual call of curl with the same parameters gives
000
as output, so why is this output not displayed in my script?
A reproducible example (the server name does not exist):
#!/bin/bash
set -euo pipefail

#- Check kms
f_check_kms(){
  echo "Before..."
  response2=$(curl -o /dev/null -s -w '%{http_code}' -u user:xxxx -X POST https://xxxx.xxx.intra.graz.at:8081/ATM-KeyManagement-REST-Service/service/rest/crypto/encryptUCast -H 'Content-Type: application/json' -d '{"deviceId": "SAG0530000016261", "encryptionSuite": "DLMS_SUITE_0", "securityMode": "AUTHENT_AND_ENCRYPT", "roleId": "001","initialVector": "4D4D4D0000BC614E01234567","payload": "ABCD","usedGuek":"NO","usedGak":"NO"}')
  echo "$response2"
}

f_check_kms
You're running your script with set -e to make the shell interpreter exit when any¹ unchecked² command exits with a nonzero status, and when you provide an invalid hostname, curl exits with a nonzero exit status.
Because you're passing -s for silent mode, it doesn't print any error messages about this (you asked it not to!). It does still print the http_code you asked for, but because the script exits, the echo "after" is never reached, and whatever other code you're relying on to print the contents of the response2 variable is likewise never reached.
Suppressing this is as simple as adding a conditional to the end, like the || : sequence below:
response2=$(curl -o /dev/null -s -w '%{http_code}' -u username:password \
-X POST https://xxxxx.yyyyy.at:8081/MyPath/service/rest/crypto/encrypt \
-H 'Content-Type: application/json' \
-d '{"deviceId": "ABCD","payload": "ABCD"}' \
) || : "Ignoring exit status of $?"
You'll be able to see that message when running your script in trace mode (set -x / bash -x yourscript), but it'll be otherwise invisible, and because || is branching on curl's exit status, this marks curl as "checked" so set -e won't decide to exit based on its exit status.
¹ Not really true: set -e has a bunch of exceptions it doesn't exit over, and those exceptions change between individual shell releases.
² This is a very unintuitively-defined word: For example, when you check the exit status of a function, everything that function calls can become "checked", so set -e's behavior is extremely context-sensitive and hard to predict; what's checked when a function is called one time might not be checked when it's called again later.
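To see the fix concretely, here is a minimal reproduction (assuming a hostname like no-such-host.invalid that will never resolve; the .invalid TLD is reserved for exactly that):
#!/bin/bash
set -euo pipefail
echo "Before"
# curl exits nonzero on a DNS failure; '|| :' marks the command "checked",
# so set -e does not kill the script, and -w still prints 000 to stdout.
response2=$(curl -o /dev/null -s -w '%{http_code}' https://no-such-host.invalid/) || :
echo "after: $response2"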

Bash - check to make sure website is available before continuing, otherwise sleep and try again

I have a script that I want to execute at startup of a Linux host, but it depends on influxdb running on another host. Since both hosts come up around the same time, I need influxdb to be up before my script runs, or the script will fail.
I was thinking it should be a bash script that first checks whether a port is available using curl: if it is, continue; if not, sleep for 30 seconds and try again, and so on.
So far I have the right logic to check whether influxdb is up, but I can't figure out how to incorporate it into the bash script.
if curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null; then
  echo "1"
else
  echo "0"
fi
If the result is 1, continue with the script. If the result is 0, sleep for 30 seconds, then try the if statement again. What is the best way to accomplish this?
Try with:
until curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null 2>&1; do
  sleep 1
done
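With the 30-second interval from the question, the same loop (keeping the tick.home URL from above) becomes:
#!/bin/bash
# Block until influxdb answers its /ping endpoint, checking every 30 seconds.
until curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null 2>&1; do
  sleep 30
done
# influxdb is up; continue with the rest of the startup script here.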

curl returns "empty reply from server" in bash due to curl failure

I am writing a simple bash script to "curl get" some values. Sometimes the code works and sometimes it fails, saying "empty reply from server".
How do I set up a check for this in bash so that if curl fails once, it tries again until it gets the values?
while ! curl ...  # add your specific curl statement here
do
  { echo "Exit status of curl: $?"
    echo "Retrying ..."
  } 1>&2
  # you may add a "sleep 10" or similar here to retry only after ten seconds
done
In case you want the output of that curl in a variable, feel free to capture it:
output=$(
  while ! curl ...  # add your specific curl statement here
  do
    { echo "Exit status of curl: $?"
      echo "Retrying ..."
    } 1>&2
    # you may add a "sleep 10" or similar here to retry only after ten seconds
  done
)
The messages about the retry are printed to stderr, so they won't mess up the curl output.
People are overcomplicating this:
until contents=$(curl "$url")
do
  sleep 10
done
For me this sometimes happens when curl times out and gives no information about it. Try curl with --connect-timeout (in seconds), like:
curl --connect-timeout 600 "https://api.morph.io/some_stuff/data.json"
Maybe this helps you.
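A sketch combining the retry loop above with both of curl's timeout flags, so a single hung connection cannot stall an attempt indefinitely (the URL is the example one from this answer; the timeout values are illustrative):
# Retry until curl succeeds; each attempt gives up after 10s connecting
# and 30s total, then the loop sleeps before the next try.
until output=$(curl --connect-timeout 10 --max-time 30 --silent --fail \
    "https://api.morph.io/some_stuff/data.json"); do
  echo "Retrying ..." 1>&2
  sleep 10
done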
If you wanted to retry the command until it succeeds, you could say:
command_to_execute; until (( $? == 0 )); do command_to_execute; done

How to check if a URL exists with the shell and probably curl?

I am looking for a simple shell (+curl) check that would evaluate as true or false whether a URL exists (returns 200) or not.
Using --fail will make the exit status nonzero on a failed request. Using --head will avoid downloading the file contents, since we don't need it for this check. Using --silent will avoid status or errors from being emitted by the check itself.
if curl --output /dev/null --silent --head --fail "$url"; then
echo "URL exists: $url"
else
echo "URL does not exist: $url"
fi
If your server refuses HEAD requests, an alternative is to request only the first byte of the file:
if curl --output /dev/null --silent --fail -r 0-0 "$url"; then
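Spelled out with the same branches as the first example:
if curl --output /dev/null --silent --fail -r 0-0 "$url"; then
  echo "URL exists: $url"
else
  echo "URL does not exist: $url"
fi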
I find wget to be a better tool for this than curl; there are fewer options to remember, and by default you can check its truth value in bash to see whether it succeeded.
if wget --spider http://google.com 2>/dev/null; then
echo "File exists"
else
echo "File does not exist"
fi
The --spider option makes wget just check for the file instead of downloading it, and 2> /dev/null silences wget's stderr output.
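The same check drops straight into a retry loop if that is what you need (a small sketch; adjust the URL and interval to taste):
# Poll with wget until the URL answers, pausing five seconds between tries.
until wget --spider --quiet http://google.com; do
  sleep 5
done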

How to create a loop in bash that is waiting for a webserver to respond?

How to create a loop in bash that is waiting for a webserver to respond?
It should print a "." every 10 seconds or so, and wait until the server starts to respond.
Update, this code tests if I get a good response from the server.
if curl --output /dev/null --silent --head --fail "$url"; then
echo "URL exists: $url"
else
echo "URL does not exist: $url"
fi
Combining the question with chepner's answer, this worked for me:
until curl --output /dev/null --silent --head --fail http://myhost:myport; do
  printf '.'
  sleep 5
done
I wanted to limit the maximum number of attempts. Based on Thomas's accepted answer I made this:
attempt_counter=0
max_attempts=5

until curl --output /dev/null --silent --head --fail http://myhost:myport; do
  if [ ${attempt_counter} -eq ${max_attempts} ]; then
    echo "Max attempts reached"
    exit 1
  fi
  printf '.'
  attempt_counter=$(($attempt_counter+1))
  sleep 5
done
httping is nice for this: simple, clean, quiet.
while ! httping -qc1 http://myhost:myport ; do sleep 1 ; done
while vs. until etc. is a personal preference.
The poster asks a specific question about printing ., but I think most people coming here are looking for the solution below, as it is a single command that supports finite retries.
curl --head -X GET --retry 5 --retry-connrefused --retry-delay 1 http://myhost:myport
The use of backticks ` ` is outdated.
Use $( ) instead:
until $(curl --output /dev/null --silent --head --fail http://myhost:myport); do
  printf '.'
  sleep 5
done
You can also combine the timeout command with bash's /dev/tcp pseudo-device like this. It will give up after 60s instead of waiting indefinitely:
timeout 60 bash -c 'until echo > /dev/tcp/myhost/myport; do sleep 5; done'
The following snippet:
- waits until all URLs from the arguments return 200,
- expires after 30 seconds if one URL is not available,
- times each curl request out after 3 seconds.
Just put it into a file and use it like a generic script to wait until the required services are available.
#!/bin/bash
##############################################################################################
# Wait for URLs until they return HTTP 200
#
# - Just pass as many urls as required to the script - the script will wait for each, one by one
#
# Example: ./wait_for_urls.sh "${MY_VARIABLE}" "http://192.168.56.101:8080"
##############################################################################################

wait-for-url() {
  echo "Testing $1"
  timeout --foreground -s TERM 30s bash -c \
    'while [[ "$(curl -s -o /dev/null -m 3 -L -w ''%{http_code}'' ${0})" != "200" ]]; do
      echo "Waiting for ${0}" && sleep 2
    done' "${1}"
  echo "${1} - OK!"
}

echo "Wait for URLs: $#"

for var in "$@"; do
  wait-for-url "$var"
done
Gist: https://gist.github.com/eisenreich/195ab1f05715ec86e300f75d007d711c
printf "Waiting for $HOST:$PORT"
until nc -z $HOST $PORT 2>/dev/null; do
printf '.'
sleep 10
done
echo "up!"
I took the idea from here: https://stackoverflow.com/a/34358304/1121497
Interesting puzzle. If you have no access to curl or a similar client, you can try grepping your TCP sockets like this:
until grep '***IPV4 ADDRESS OF SERVER IN REVERSE HEX***' /proc/net/tcp
do
  printf '.'
  sleep 1
done
But that's a busy wait with 1 sec intervals. You probably want more resolution than that. Also this is global. If another connection is made to that server, your results are invalid.
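For reference, the "reverse hex" form comes from how the kernel prints entries in /proc/net/tcp: on little-endian machines the IPv4 address appears byte-reversed in hex, followed by the port in hex. A sketch of building the pattern (ip_to_proc_hex is a made-up helper name, and this assumes a little-endian host and IPv4):
ip_to_proc_hex() {
  # 192.168.1.10 8086 -> 0A01A8C0:1F96 (byte-reversed IP, hex port)
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  printf '%02X%02X%02X%02X:%04X\n' "$d" "$c" "$b" "$a" "$2"
}

until grep -q "$(ip_to_proc_hex 192.168.1.10 8086)" /proc/net/tcp; do
  printf '.'
  sleep 1
done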
