Format a URL with the system date in bash

I want to run a .sh script that executes an operation on a server only if a given URL is up.
The URL I get data from updates every day (but I don't know exactly what time it updates).
A cron job would run this script every five minutes, and as soon as the updated URL exists, it would run an Rscript.
I don't know curl or bash well enough to build the URL from the system date.
I thought of writing a bash script that would look like this:
if curl -s --head --request GET https://example.com-2021-05-10 | grep "200 OK" > /dev/null; then
Rscript ~/mybot.R
else
echo "the page is not up yet"
fi

Just use the date command.
if curl -s --head --request GET https://example.com-$(date '+%Y-%m-%d') | grep "200 OK" > /dev/null; then
Rscript ~/mybot.R
else
echo "the page is not up yet"
fi
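For the cron part, a minimal crontab entry could look like the following; the script path and log file are illustrative assumptions, and */5 runs it every five minutes as described in the question.
# hypothetical path for a script containing the if/else above
*/5 * * * * /home/user/check_update.sh >> /home/user/check_update.log 2>&1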

Related

Checking the response code of given URLs - script doesn't stop after checking all URLs

I have made the script below. It checks the response code of every URL listed in, for example, a .csv file in column A. Everything works as I planned, but after checking all the URLs the script freezes. I have to press Ctrl+C to stop it. How can I make the script end automatically after all URLs are checked?
#!/bin/bash
for link in `cat $1` $2;
do
response=`curl --output /dev/null --silent --write-out %{http_code} $link`;
if [ "$response" == "$2" ]; then
echo "$link";
fi
done
Your script hangs due to $2 in the for link line (when it hangs, check ps aux | grep curl and you'll find a curl process with the response code as the last argument). Also, for link in `cat $1` $2 is not how you should read and process lines from a file.
Assuming your example.csv file only contains one URL per row and nothing else (which basically makes it a plain text file), this code should do what you want:
#!/usr/bin/env bash
while read -r link; do
response=$(curl --output /dev/null --silent --write-out %{http_code} "$link")
if [[ "$response" == "$2" ]]; then
echo "$link"
fi
done < "$1"
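Assuming that script is saved as, say, check_urls.sh (an illustrative name), it would be called with the URL file and the status code to match:
# $1 = file with one URL per line, $2 = HTTP status code to match
./check_urls.sh example.csv 200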
Copying your code verbatim and fudging a test file with a few URLs separated by whitespace, it does indeed hang. However, removing the $2 from the end of the for line allows the script to finish:
for link in `cat $1`;

Bash - check to make sure website is available before continuing, otherwise sleep and try again

I have a script that I want to execute at startup of a Linux host, but it depends on influxdb running on another host. Since both hosts come up around the same time, I need influxdb to be up before I can run my script; otherwise the script will fail.
I was thinking it should be a bash script that first checks whether a port is available using curl. If it is, continue. If it is not, sleep for 30 seconds and try again, and so on.
So far, I have the right logic to check if influxdb is up, but I can't figure out how to incorporate this into the bash script.
if curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null; then
    echo "1"
else
    echo "0"
fi
If the result is 1, continue with the script. If the result is 0, sleep for 30 seconds, then try the if statement again. What is the best way to accomplish?
Try this:
until curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null 2>&1; do
sleep 1
done
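If you want the 30-second interval from the question plus a way to give up eventually, a minimal sketch might look like this; the URL is the one from the question, and the 20-attempt limit is an illustrative assumption.
#!/usr/bin/env bash
# wait until influxdb answers its /ping endpoint, checking every 30 seconds
attempts=0
until curl --head --silent --fail http://tick.home:8086/ping 1> /dev/null 2>&1; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 20 ]; then
        echo "influxdb still not reachable, giving up" >&2
        exit 1
    fi
    sleep 30
done
# ... rest of the startup script goes here ...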

Output not reflected when the script runs from cron

I have a script which sends the output of a command by mail. The command takes a few seconds to execute. But when I run the script from cron, the output shows up neither in the mail received nor in the file the script writes the output to.
echo "$(date)" > /home/checks.txt
status=`sysstatus`
echo "$(sysstatus)">> /home/checks.txt
for MAIL in abc#xyz.com def#xyz.com
do
mailx -s "$Date Daily check on system" "$MAIL" < /home/checks.txt
done
exit 0
Giving the full path to the sysstatus command in the script solved the issue. Cron runs with a minimal PATH, so a command that works in an interactive shell may not be found when the script runs from cron.
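For illustration, either of the following approaches works; /usr/local/bin/sysstatus is an assumed location (check the real one with command -v sysstatus), and the crontab line is a hypothetical example.
# option 1: use the absolute path inside the script
echo "$(/usr/local/bin/sysstatus)" >> /home/checks.txt

# option 2: set PATH at the top of the crontab so bare command names resolve
PATH=/usr/local/bin:/usr/bin:/bin
0 8 * * * /bin/bash /home/scripts/daily_check.sh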

curl returns "empty reply from server" in a bash script due to curl failure

I am writing a simple bash script that uses curl GET requests to fetch some values. Sometimes the code works and sometimes it fails with "empty reply from server".
How can I set up a check for this in bash so that if curl fails it tries again until it gets the values?
while ! curl ... # add your specific curl statement here
do
{ echo "Exit status of curl: $?"
echo "Retrying ..."
} 1>&2
# you may add a "sleep 10" or similar here to retry only after ten seconds
done
In case you want the output of that curl in a variable, feel free to capture it:
output=$(
while ! curl ... # add your specific curl statement here
do
{ echo "Exit status of curl: $?"
echo "Retrying ..."
} 1>&2
# you may add a "sleep 10" or similar here to retry only after ten seconds
done
)
The messages about the retry are printed to stderr, so they won't mess up the curl output.
People are overcomplicating this:
until contents=$(curl "$url")
do
sleep 10
done
For me this sometimes happens when curl times out without reporting it. Try curl with --connect-timeout (in seconds), like:
curl --connect-timeout 600 "https://api.morph.io/some_stuff/data.json"
Maybe this helps you.
If you want to retry the command until it succeeds, you could say:
command_to_execute; until (( $? == 0 )); do command_to_execute; done
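curl also has a built-in retry option that covers many transient failures, which can be simpler than a shell loop; the URL and the counts here are illustrative.
# retry up to 5 times, waiting 10 seconds between attempts,
# for errors curl considers transient (e.g. timeouts, 5xx responses)
curl --retry 5 --retry-delay 10 --silent --fail "https://api.example.com/data.json"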

How to check if a URL exists with the shell and probably curl?

I am looking for a simple shell (+curl) check that evaluates to true or false depending on whether a URL exists (returns 200) or not.
Using --fail will make the exit status nonzero on a failed request. Using --head will avoid downloading the file contents, since we don't need it for this check. Using --silent will avoid status or errors from being emitted by the check itself.
if curl --output /dev/null --silent --head --fail "$url"; then
echo "URL exists: $url"
else
echo "URL does not exist: $url"
fi
If your server refuses HEAD requests, an alternative is to request only the first byte of the file:
if curl --output /dev/null --silent --fail -r 0-0 "$url"; then
I find wget to be a better tool for this than curl; there are fewer options to remember, and you can check its exit status in bash directly to see whether it succeeded.
if wget --spider http://google.com 2>/dev/null; then
echo "File exists"
else
echo "File does not exist"
fi
The --spider option makes wget just check for the file instead of downloading it, and 2> /dev/null silences wget's stderr output.
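If you specifically want to treat only a 200 as success (rather than any non-error status that --fail accepts), you can compare the status code curl reports; a minimal sketch, assuming $url holds the URL to test:
# --write-out prints just the HTTP status code; headers and body are discarded
code=$(curl --silent --head --output /dev/null --write-out '%{http_code}' "$url")
if [ "$code" = "200" ]; then
    echo "URL exists: $url"
else
    echo "URL returned $code: $url"
fi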
