BASH - how to force a sleep of 300s before continuing execution if, upon sending a curl POST request, the server returns an unexpected result

@Socowi guided me to the perfect solution; you can see it at the bottom of the question:
(1)
Here's a practical example of a script consisting of 10 curl POST requests, each of which posts a different comment on my website.
#!/bin/bash
curl "https://mywebsite.com/comment... ...&text=Test-1"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-2"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-3"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-4"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-5"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-6"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-7"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-8"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-9"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-10"; sleep 60;
(2)
When that goes smoothly, here's what the terminal looks like:
(3)
The problem: At random intervals something goes wrong, and instead of what's shown on the screenshot above, I start getting large amounts of text containing words like "Something went wrong". For example, it can execute the first 6 curl commands just fine, get a bad response on the 7th, and the script simply continues: it runs the 8th curl command, gets the same error in the terminal, and keeps going until the end, leaving me with partially finished work.
(4)
The solution desired: I just want the script to pause for 300 seconds whenever such an error appears in the terminal, before proceeding with the next curl command in the script. Waiting does help, but at the moment I have to do it manually. Kindly help me modify my script to achieve this.
Thank you!
EDIT: The solution for my problem as described, thanks to @Socowi:
#!/bin/bash
for i in {1..10}; do
    if curl "https://mywebsite.com/comment... ...&text=Test-$i" | grep -qF '"status":"ERROR"'; then
        sleep 300 # there was an error
    else
        sleep 60 # no error given
    fi
done
exec $SHELL

Usually you could use if curl ... to check the exit status and adapt the sleep time accordingly. However, in your case curl succeeds in getting a response back; curl doesn't care about the content of that response, but you can check the content yourself. A JSON tool would be the proper way to parse the response, but a hacky grep does the job as well.
Since you want to print the response to the terminal, we store it in a variable, so that we can both print it and run grep on it.
#!/bin/bash
for i in {1..10}; do
    response=$(curl "https://...&text=Test-$i")
    echo "$response"
    if grep -qF '"status":"ERROR"' <<< "$response"; then
        sleep 300 # there was an error
    else
        sleep 60 # everything ok
    fi
done
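If a JSON tool such as jq is available, the same check can be done on the parsed response instead of a raw grep. A minimal sketch, assuming the reply is JSON with a top-level "status" field (which is what the grep above implies):
#!/bin/bash
# Sketch: same loop as above, but letting jq extract the "status" field.
# Assumes jq is installed and the server replies with JSON.
for i in {1..10}; do
    response=$(curl "https://...&text=Test-$i")
    echo "$response"
    if [ "$(jq -r '.status' <<< "$response")" = "ERROR" ]; then
        sleep 300 # there was an error
    else
        sleep 60 # everything ok
    fi
done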

Related

Format URL with the system date in bash

I would run a .sh script to execute an operation from a server only if the given URL is up.
The URL where I get data updates every day (but I don't know exactly what time it updates).
A cron job would run this script every five minutes, and as soon as the updated URL exists, it runs an Rscript.
I don't know curl or bash well enough to update the date in the URL according to the system's date.
I thought of writing code in bash that would look like this:
if curl -s --head --request GET https://example.com-2021-05-10 | grep "200 OK" > /dev/null; then
    Rscript ~/mybot.R
else
    echo "the page is not up yet"
fi
Just use the date command.
if curl -s --head --request GET https://example.com-$(date '+%Y-%m-%d') | grep "200 OK" > /dev/null; then
    Rscript ~/mybot.R
else
    echo "the page is not up yet"
fi
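Note that grepping for "200 OK" can be fragile (an HTTP/2 status line carries no "OK" reason phrase). A sketch that relies on curl's --fail exit status instead, with the same assumed URL pattern:
if curl -s --head --fail "https://example.com-$(date '+%Y-%m-%d')" > /dev/null; then
    Rscript ~/mybot.R
else
    echo "the page is not up yet"
fi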

Bash - check to make sure website is available before continuing, otherwise sleep and try again

I have a script that I want to execute at startup of a Linux host, but it depends on influxdb running on another host. Since both hosts come up around the same time, I need influxdb to be up before I can run my script, else the script will fail.
I was thinking that it should be a bash script, that first checks if a port is available using curl. If it is, continue. If it is not, then sleep for 30 seconds and try again, and so on.
So far, I have the right logic to check if influxdb is up, but I can't figure out how to incorporate this into the bash script.
if curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null; then
    echo "1"
else
    echo "0"
fi
If the result is 1, continue with the script. If the result is 0, sleep for 30 seconds, then try the if statement again. What is the best way to accomplish this?
try with
until curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null 2>&1; do
    sleep 1
done
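As a usage sketch with the 30-second interval mentioned in the question, this could sit at the top of the startup script:
#!/bin/bash
# Block until influxdb answers its /ping endpoint, polling every 30 seconds.
until curl --head --silent --fail http://tick.home:8086/ping > /dev/null 2>&1; do
    echo "influxdb not reachable yet, sleeping 30s..." >&2
    sleep 30
done
# ...the rest of the startup script runs only once influxdb is up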

Bash script to check HTTP status before executing tests

Hi, I am trying to execute specific tests only if the application is up and running (I am using Docker), and I am trying to achieve this with a bash script. What I expect: run a loop until I receive a 200 status from the application; once I receive 200, the script should move ahead and execute the tests. My bash script so far is as follows:
#!/bin/bash
urlstatus=0
until [ $urlstatus -ne 200 ]; do
    urlstatus=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "http://localhost:8000/animals")
    echo $urlstatus
done
Execute Test if application is up & running
Please let me know what is missing in the script.
Thanks
-ne is the exact opposite of the test you actually want; to loop until the status is 200 you should have -eq, or even better (to avoid error messages from the comparison if a non-numeric value is present), =.
#!/bin/sh
fetchstatus() {
    curl \
        -o /dev/null \
        --silent \
        --head \
        --write-out '%{http_code}' \
        "http://localhost:8000/animals"
}

urlstatus=$(fetchstatus)         # initialize to actual value before we sleep even once
until [ "$urlstatus" = 200 ]; do # until our result is success...
    sleep 1                      # wait a second...
    urlstatus=$(fetchstatus)     # then poll again.
done
But since curl can adjust its exit status to indicate whether a request was successful, you don't even need that. Use --fail, and you can branch directly:
#!/bin/sh
while :; do
    curl -sS --fail -o /dev/null "http://localhost:8000/animals" && break
    sleep 1 # actually give your server a little rest
done
The && break means that we break out of the loop only if the request was successful; the --fail argument to curl means that curl only returns a successful exit status if the server returned a non-error HTTP status (such as 200).

curl returns empty reply from server bash due to curl failure

I am writing a simple bash script to "curl get" some values. Sometimes the code works and sometimes it fails, saying "empty reply from server".
How do I set up a check for this in bash so that if the curl fails once, it tries again until it gets the values?
while ! curl ... # add your specific curl statement here
do
    { echo "Exit status of curl: $?"
      echo "Retrying ..."
    } 1>&2
    # you may add a "sleep 10" or similar here to retry only after ten seconds
done
In case you want the output of that curl in a variable, feel free to capture it:
output=$(
    while ! curl ... # add your specific curl statement here
    do
        { echo "Exit status of curl: $?"
          echo "Retrying ..."
        } 1>&2
        # you may add a "sleep 10" or similar here to retry only after ten seconds
    done
)
The messages about the retry are printed to stderr, so they won't mess up the curl output.
People are overcomplicating this:
until contents=$(curl "$url")
do
    sleep 10
done
For me it sometimes happens when curl times out and there is no information about it. Try curl with --connect-timeout 600 (in seconds), like:
curl --connect-timeout 600 "https://api.morph.io/some_stuff/data.json"
Maybe this helps you.
If you want to retry the command until it succeeds, you could say:
command_to_execute; until (( $? == 0 )); do command_to_execute; done
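Depending on the curl version available, the retrying can also be pushed into curl itself instead of a shell loop; a sketch (--retry-all-errors needs curl 7.71 or newer, older versions only retry a few transient errors):
# Sketch: let curl retry on its own instead of wrapping it in a shell loop.
curl --retry 5 --retry-delay 10 --retry-all-errors "https://api.morph.io/some_stuff/data.json"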

How to create a loop in bash that is waiting for a webserver to respond?

How to create a loop in bash that is waiting for a webserver to respond?
It should print a "." every 10 seconds or so, and wait until the server starts to respond.
Update, this code tests if I get a good response from the server.
if curl --output /dev/null --silent --head --fail "$url"; then
    echo "URL exists: $url"
else
    echo "URL does not exist: $url"
fi
Combining the question with chepner's answer, this worked for me:
until $(curl --output /dev/null --silent --head --fail http://myhost:myport); do
    printf '.'
    sleep 5
done
I wanted to limit the maximum number of attempts. Based on Thomas's accepted answer I made this:
attempt_counter=0
max_attempts=5

until $(curl --output /dev/null --silent --head --fail http://myhost:myport); do
    if [ ${attempt_counter} -eq ${max_attempts} ]; then
        echo "Max attempts reached"
        exit 1
    fi
    printf '.'
    attempt_counter=$(($attempt_counter+1))
    sleep 5
done
httping is nice for this. Simple, clean, quiet.
while ! httping -qc1 http://myhost:myport ; do sleep 1 ; done
while/until etc. is a personal preference.
The poster asks a specific question about printing ., but I think most people coming here are looking for the solution below, as it is a single command that supports finite retries.
curl --head -X GET --retry 5 --retry-connrefused --retry-delay 1 http://myhost:myport
The use of backticks ` ` is outdated.
Use $( ) instead:
until $(curl --output /dev/null --silent --head --fail http://myhost:myport); do
    printf '.'
    sleep 5
done
You can also combine timeout with bash's /dev/tcp like this. It will time out after 60s instead of waiting indefinitely:
timeout 60 bash -c 'until echo > /dev/tcp/myhost/myport; do sleep 5; done'
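If you need to branch on whether the wait succeeded, the exit status of timeout can be checked; a sketch using the same placeholder myhost/myport:
# timeout exits with the loop's status (0 on success) or 124 if the 60s limit was hit.
if timeout 60 bash -c 'until echo > /dev/tcp/myhost/myport; do sleep 5; done' 2>/dev/null; then
    echo "server is up"
else
    echo "gave up after 60 seconds"
fi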
The following snippet:
- waits until all URLs from the arguments return HTTP 200
- gives up after 30 seconds if a URL is not available
- times each curl request out after 3 seconds
Just put it into a file and use it as a generic script to wait until the required services are available.
#!/bin/bash
##############################################################################################
# Wait for URLs until they return HTTP 200
#
# - Just pass as many urls as required to the script - the script will wait for each, one by one
#
# Example: ./wait_for_urls.sh "${MY_VARIABLE}" "http://192.168.56.101:8080"
##############################################################################################
wait-for-url() {
    echo "Testing $1"
    timeout --foreground -s TERM 30s bash -c \
        'while [[ "$(curl -s -o /dev/null -m 3 -L -w ''%{http_code}'' ${0})" != "200" ]];\
        do echo "Waiting for ${0}" && sleep 2;\
        done' ${1}
    echo "${1} - OK!"
}

echo "Wait for URLs: $#"
for var in "$@"; do
    wait-for-url "$var"
done
Gist: https://gist.github.com/eisenreich/195ab1f05715ec86e300f75d007d711c
printf "Waiting for $HOST:$PORT"
until nc -z $HOST $PORT 2>/dev/null; do
printf '.'
sleep 10
done
echo "up!"
I took the idea from here: https://stackoverflow.com/a/34358304/1121497
Interesting puzzle. If you have no access to nc or an async API in your client, you can try grepping your TCP sockets like this:
until grep '***IPV4 ADDRESS OF SERVER IN REVERSE HEX***' /proc/net/tcp
do
    printf '.'
    sleep 1
done
But that's a busy wait with 1 sec intervals. You probably want more resolution than that. Also this is global. If another connection is made to that server, your results are invalid.
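For reference, the address field in /proc/net/tcp is the IPv4 address in hex, byte-reversed on little-endian machines, followed by the port in hex. A hypothetical helper to build that pattern (the function name and the example address are made up for illustration):
# Convert "a.b.c.d" and a port into the form used by /proc/net/tcp,
# e.g. 192.168.1.10 and 8086 -> 0A01A8C0:1F96 (assumes a little-endian host such as x86).
to_proc_net_tcp() {
    local ip=$1 port=$2
    IFS=. read -r o1 o2 o3 o4 <<< "$ip"
    printf '%02X%02X%02X%02X:%04X\n' "$o4" "$o3" "$o2" "$o1" "$port"
}
to_proc_net_tcp 192.168.1.10 8086   # -> 0A01A8C0:1F96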
