Bash - check to make sure website is available before continuing, otherwise sleep and try again

I have a script that I want to execute at startup of a Linux host, but it depends on influxdb running on another host. Since both hosts come up around the same time, I need influxdb up before I can run my script, or the script will fail.
I was thinking that it should be a bash script, that first checks if a port is available using curl. If it is, continue. If it is not, then sleep for 30 seconds and try again, and so on.
So far, I have the right logic to check if influxdb is up, but I can't figure out how to incorporate this into the bash script.
if curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null
then
    echo "1"
else
    echo "0"
fi
If the result is 1, continue with the script. If the result is 0, sleep for 30 seconds, then try the if statement again. What is the best way to accomplish?

Try with:
until curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null 2>&1; do
    sleep 1
done
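To match the 30-second interval from the question and log each failed attempt, a minimal sketch (same URL as above) could look like this:
#!/bin/bash
# Wait until influxdb answers its /ping endpoint, checking every 30 seconds.
until curl --head --silent --fail http://tick.home:8086/ping > /dev/null 2>&1; do
    echo "influxdb not reachable yet, retrying in 30 seconds..." >&2
    sleep 30
done

# influxdb is up; the rest of the startup script goes here.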


Format URL with the system date in bash

I want to run a .sh script on a server to execute an operation, but only if a given URL is up.
The URL where I get data updates every day (but I don't know exactly what time it updates).
A cron job would run this script every five minutes, and as soon as the updated URL exists, it runs an Rscript.
I don't know curl or bash well enough to update the date in the URL to match the system date.
I thought of writing code in BASH that would look like this:
if curl -s --head --request GET https://example.com-2021-05-10 | grep "200 OK" > /dev/null; then
    Rscript ~/mybot.R
else
    echo "the page is not up yet"
fi
Just use the date command; %m gives the zero-padded month (%M would be the minute):
if curl -s --head --request GET https://example.com-$(date '+%Y-%m-%d') | grep "200 OK" > /dev/null; then
    Rscript ~/mybot.R
else
    echo "the page is not up yet"
fi
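One caveat: if the server responds over HTTP/2, the status line is just "HTTP/2 200" with no "OK", so the grep "200 OK" test can fail even when the page is up. Letting curl judge the status itself with --fail avoids parsing headers altogether; a rough sketch:
#!/bin/bash
# Build today's URL and let curl's exit status decide (HTTP errors make curl return non-zero).
url="https://example.com-$(date '+%Y-%m-%d')"

if curl -sS --head --fail "$url" > /dev/null; then
    Rscript ~/mybot.R
else
    echo "the page is not up yet"
fi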

BASH - how to force a sleep of 300s before continuing execution if, upon sending a curl POST request, the server returns an unexpected result

@Socowi guided me to the perfect solution; you can see it at the bottom of the question:
(1)
Here's a practical example: a script consisting of 10 curl POST requests, each of which posts a different comment on my website.
#!/bin/bash
curl "https://mywebsite.com/comment... ...&text=Test-1"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-2"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-3"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-4"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-5"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-6"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-7"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-8"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-9"; sleep 60;
curl "https://mywebsite.com/comment... ...&text=Test-10"; sleep 60;
(2)
When that goes smoothly, the terminal simply shows each request completing in turn (screenshot omitted here).
(3)
The problem: At random intervals something goes wrong, and instead of that normal output I start getting large amounts of text containing words like "Something went wrong". For example, it can execute the first 6 curl commands just fine, and on the 7th there will be a bad response... upon which the script continues, runs the 8th curl command, gets the same error in the terminal, and goes on like that until the end, leaving me with partially finished work.
(4)
The solution desired: I just want the script to pause for 300 seconds whenever such an error is thrown in the terminal, before proceeding with the next curl command in the script. The waiting does help, but at the moment I have to do it manually. Kindly help me with a solution for how to properly modify my script to achieve this.
Thank you!
EDIT: The solution for my problem as described, thanks to @Socowi:
#!/bin/bash
for i in {1..10}; do
    if curl "https://mywebsite.com/comment... ...&text=Test-$i" | grep -qF '"status":"ERROR"'; then
        sleep 300 # there was an error
    else
        sleep 60 # no error given
    fi
done
exec $SHELL
Usually you could use if curl ... to check the exit status and adapt the sleeping time accordingly. However, in your case curl succeeds in getting a response back; curl doesn't care about the content of that response, but you can check the content yourself. A JSON tool would be the proper way to parse the response, but a hacky grep does the job as well.
Since you want to print the response to the terminal, we use a variable, so that we can print the response and use grep on it.
#!/bin/bash
for i in {1..10}; do
    response=$(curl "https://...&text=Test-$i")
    echo "$response"
    if grep -qF '"status":"ERROR"' <<< "$response"; then
        sleep 300 # there was an error
    else
        sleep 60 # everything ok
    fi
done
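If jq happens to be available, a JSON-aware check is the more robust route the answer alludes to; a minimal sketch, assuming the response really carries a top-level "status" field:
#!/bin/bash
for i in {1..10}; do
    response=$(curl -sS "https://...&text=Test-$i")
    echo "$response"
    # Read the "status" field from the JSON response; jq prints nothing if the field
    # is missing or the response is not valid JSON, so the test is simply false then.
    if [ "$(jq -r '.status // empty' <<< "$response" 2>/dev/null)" = "ERROR" ]; then
        sleep 300 # the server reported an error
    else
        sleep 60 # everything ok
    fi
done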

shell script was not terminating

I am executing one shell script from another shell script. The called shell script does not terminate after execution, but when I run it separately it works fine and terminates normally.
Script 1
#!/bin/bash
WebApp="R"
#----------Check for Web Application Status
localWebAppURL="http://localhost:8082/"
if curl --max-time 5 --output /dev/null --silent --head --fail "$localWebAppURL"; then
    WebApp="G"
else
    exec ./DownTimeCalc.sh &
fi
echo "WebApp Status|\"WebApp\":\"$WebApp\""
In the above script I am calling another script called DownTimeCalc.sh.
DownTimeCalc.sh
#!/bin/bash
WebApp="R"
max=15
for (( i=1; i <= $max; ++i ))
do
    if curl --max-time 5 --output /dev/null --silent --head --fail "http://localhost:8082/"; then
        WebApp="G"
        echo "Status|\"WebApp\":\"$WebApp\""
        break
    else
        WebApp="R"
        sleep 10
    fi
    echo "Status|\"WebApp\":\"$WebApp\""
done
echo "finished"
exit
exec ./DownTimeCalc.sh &
You don't need exec. If you want to run the script and wait for it to complete then just write:
./DownTimeCalc.sh
Or if you want to run it in the background and have the first script continue, write:
./DownTimeCalc.sh &
When you use &, the process is launched in the background and keeps running there while the rest of the foreground script runs or you interact with the shell. It's doing what you told it: if you press Enter you will see any queued-up stderr output, and if you type fg it will bring the process back to the foreground if it is still running.
You probably don't want to use & in this case, though.
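If you fold that advice back into Script 1, the else branch simply calls the script synchronously; a minimal sketch of the corrected script, assuming synchronous behaviour is what's wanted:
#!/bin/bash
WebApp="R"
localWebAppURL="http://localhost:8082/"
if curl --max-time 5 --output /dev/null --silent --head --fail "$localWebAppURL"; then
    WebApp="G"
else
    # Run DownTimeCalc.sh and wait for it to finish (no exec, no &).
    ./DownTimeCalc.sh
fi
echo "WebApp Status|\"WebApp\":\"$WebApp\""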

Bash script from URL: return code when curl fails?

One of the solutions to execute a bash script directly from a URL is:
bash <(curl -sS http://1.1.1.1/install)
The problem is that when curl fails to retrieve the URL, bash gets nothing as input and ends normally (return code 0).
I would like the whole command to abort with the return code from curl.
UPDATE:
Two possible solutions:
curl -sS http://1.1.1.1/install | bash; exit ${PIPESTATUS[0]}
or:
bash <(curl -sS http://1.1.1.1/install || echo "exit $?")
The last one is kind of hackish (but shorter)
Try this to get curl's return code:
curl -sS http://1.1.1.1/install | bash
echo ${PIPESTATUS[0]}
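If the goal is to make the whole command abort with curl's return code, as the question asks, the same idea can be wrapped into a small sketch built on ${PIPESTATUS[0]}:
#!/bin/bash
curl -sS http://1.1.1.1/install | bash
rc=${PIPESTATUS[0]}   # exit status of curl, not of bash
if [ "$rc" -ne 0 ]; then
    echo "curl failed with status $rc" >&2
    exit "$rc"
fi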
Use a temporary file.
installer=$(mktemp)
trap 'rm "$installer"' EXIT
curl ... > "$installer" || exit
# Ideally, verify the installer *before* running it
bash "$installer"
Here's why. If you simply pipe whatever curl returns to bash, you are essentially allowing unknown code to execute on your machine. Better to make sure that what you are executing isn't harmful first.
You might ask, how is this different from using a pre-packaged installer, or an RPM, or some other package system? With a package, you can verify via a checksum provided by the packager that the package you are about to install is the same one they are providing. You still have to trust the packager, but you don't have to worry about an attacker modifying the package en route (man-in-the-middle attack).
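For instance, if the site publishes a SHA-256 checksum alongside the installer, the verification step could look like this sketch (the checksum value and URL here are placeholders, not real values):
#!/bin/bash
installer=$(mktemp)
trap 'rm "$installer"' EXIT

curl -sS http://1.1.1.1/install > "$installer" || exit

# Compare against the checksum published by the site (placeholder value below).
expected_sha256="0000000000000000000000000000000000000000000000000000000000000000"
actual_sha256=$(sha256sum "$installer" | awk '{print $1}')

if [ "$actual_sha256" != "$expected_sha256" ]; then
    echo "checksum mismatch, refusing to run the installer" >&2
    exit 1
fi

bash "$installer"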
You could save the output of the curl command in a variable and execute it only if the return status was zero. It might not be as elegant, but it's more compatible with other shells (it doesn't rely on $PIPESTATUS, which isn't available in many shells):
install=`curl -sS http://1.1.1.1/install`
if [ $? -ne 0 ]; then
    echo "error"
else
    echo "$install" | bash
fi

Terminal Application to Keep Web Server Process Alive

Is there an app that, given a command and its options, will run for the lifetime of that process and ping a given URL indefinitely on a specified interval?
If not, could this be done in the terminal as a bash script? I'm almost positive it's doable through the terminal, but am not fluent enough to whip it up within a few minutes.
I found a post that has a portion of the solution, minus the ping bits. On Linux, ping runs indefinitely until it's actively killed. How would I kill it from bash after, say, two pings?
General Script
As others have suggested, here it is in pseudocode:
execute command and save PID
while PID is active, ping and sleep
exit
This results in the following script:
#!/bin/bash
# execute command, use '&' at the end to run in background
<command here> &
# store pid
pid=$!
while ps | awk '{ print $1 }' | grep -w "$pid"; do
    ping <address here>
    sleep <timeout here in seconds>
done
Note that the placeholders inside <> should be replaced with actual values, be it a command or an IP address.
Break from Loop
To answer your second question: that depends on the loop. In the loop above, simply track the loop count using a variable. To do that, add ((count++)) inside the loop, followed by [[ $count -eq 2 ]] && break. Now the loop will break once we've pinged a second time.
Something like this:
...
while ...; do
    ...
    ((count++))
    [[ $count -eq 2 ]] && break
done
ping twice
To ping only a few times, use the -c option:
ping -c <count here> <address here>
Example:
ping -c 2 www.google.com
Use man ping for more information.
Better practice
As hek2mgl noted in a comment below, the solution above may answer the question as asked, but the core problem will still persist. To address that, a cron job is suggested in which a simple wget or curl HTTP request is sent periodically. This results in a fairly easy script containing but one line:
#!/bin/bash
curl <address here> > /dev/null 2>&1
This script can be added as a cron job. Leave a comment if you would like more information on how to set up such a scheduled job. Special thanks to hek2mgl for analyzing the problem and suggesting a sound solution.
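For completeness, a sketch of what the crontab entry might look like; the script path /usr/local/bin/keepalive.sh and the five-minute interval are assumptions, not part of the original answer:
# Edit the current user's crontab with: crontab -e
# Run the keep-alive request every five minutes.
*/5 * * * * /usr/local/bin/keepalive.sh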
Say you want to start a download with wget and, while it is running, ping the URL:
wget http://example.com/large_file.tgz & # put in background
pid=$!
while kill -s 0 "$pid" # test if process is running
do
    ping -c 1 127.0.0.1 # ping your address once
    sleep 5 # and sleep for 5 seconds
done
A nice little generic utility for this is Daemonize. Its relevant options:
Usage: daemonize [OPTIONS] path [arg] ...
-c <dir> # Set daemon's working directory to <dir>.
-E var=value # Pass environment setting to daemon. May appear multiple times.
-p <pidfile> # Save PID to <pidfile>.
-u <user> # Run daemon as user <user>. Requires invocation as root.
-l <lockfile> # Single-instance checking using lockfile <lockfile>.
Here's an example of starting/killing in use: flickd
To get more sophisticated, you could turn your ping script into a systemd service, now standard on many recent Linuxes.
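For reference, here is a rough sketch of such a systemd service; the unit name, script path, and settings are assumptions rather than anything from the original answer:
# /etc/systemd/system/url-ping.service (hypothetical unit name and path)
[Unit]
Description=Ping a URL while the web server is running

[Service]
ExecStart=/usr/local/bin/keepalive.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
# Enable and start it with: systemctl enable --now url-ping.service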
