How do I do a website health check using CURL command - bash

I'm trying to monitor a website using curl, but the output isn't coming out the way I expect. Please see the commands below:
#!/bin/bash
varDate=$(date '+%Y-%m-%d %H:%M:%S')
varCurlError=$(curl -sSf https://website.com > /dev/null)
varHttpCode=$(curl -Is https://website.com | head -n 1)
varResponseTime=$(curl -s -w '%{time_total}' -o /dev/null website.com)
varOutput="$varDate | $varCurlError | $varHttpCode | $varResponseTime"
echo $varOutput
The output looks like this:
| 0.07323 18:51:40 | | HTTP/1.1 200 OK
What can I change or add to fix the output?
Much appreciated.

#!/bin/bash
varDate=$(date '+%Y-%m-%d %H:%M:%S')
varCurlError=$(curl -sSf https://website.com 2>&1 >/dev/null)
varHttpCode=$(curl -Is https://website.com | head -n 1)
varResponseTime=$(curl -s -w '%{time_total}' -o /dev/null website.com | tr -d \\r )
varOutput="$varDate | $varCurlError | $varHttpCode | $varResponseTime"
echo $varOutput
There are two corrections:
tr -d \\r was added, as per glenn jackman's comment. The trailing CR is what causes your varResponseTime to be printed at the beginning of the line; the tr command deletes it.
In the varCurlError statement you need to redirect stderr to stdout before redirecting stdout to /dev/null. That way, errors curl reports on stderr are sent to stdout (and captured by your $() substitution), while the output curl sends to stdout goes to the bit bucket. Order matters: >/dev/null 2>&1 does not work, because it sends both stdout and stderr to /dev/null.

glenn jackman is correct about the need to pipe the curl output through tr -d '\r'.
That is, change your code to
#!/bin/bash
varDate=$(date '+%Y-%m-%d %H:%M:%S' | tr -d '\r')
varCurlError=$(curl -sSf https://website.com 2>&1 >/dev/null | tr -d '\r')
varHttpCode=$(curl -Is https://website.com | tr -d '\r' | head -n 1)
varResponseTime=$(curl -s -w '%{time_total}' -o /dev/null website.com | tr -d '\r')
varOutput="$varDate | $varCurlError | $varHttpCode | $varResponseTime"
echo "$varOutput"

It can also be done with wget, so you can see whether you get any data back at all, and it can be as simple as this:
#!/bin/bash
dt=$(date '+%d/%m/%Y %H:%M:%S');
wget domain/yourindex
if [ -f /home/$USER/yourindex ] ; then
    #echo $dt GOOD >> /var/log/fix.log
    echo GOOD >/dev/null 2>&1
else
    # counter measures like sudo systemctl restart php7.2-fpm.service && sudo systemctl restart nginx
    echo $dt BROKEN >> /var/log/fix.log
fi
rm login*
exit
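
If you only need to know whether the URL answers and do not want to keep (and later delete) the downloaded file, wget's --spider mode skips saving anything. A sketch along the same lines, keeping the question's domain/yourindex placeholder:
#!/bin/bash
# Sketch: --spider performs the request but writes nothing to disk, so no rm is needed
dt=$(date '+%d/%m/%Y %H:%M:%S')
if wget -q --spider "domain/yourindex"; then
    : # GOOD - nothing to log
else
    # counter measures (e.g. restarting php-fpm / nginx) could go here
    echo "$dt BROKEN" >> /var/log/fix.log
fi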


Get first match from a CURL grep call

Objective:
I'm trying to write a script that will fetch two URLs from a GitHub release page and do something different with each one.
So far:
Here's what I've got so far.
λ curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d \"
This will return the following:
"https://github.com/mozilla-iot/gateway/releases/download/0.8.1/gateway-8c29257704ddb021344bdaaa790909a0eacf3293bab94e02859828a6fd9b900a.tar.gz"
"https://github.com/mozilla-iot/gateway/releases/download/0.8.1/node_modules-921bd0d58022aac43f442647324b8b58ec5fdb4df57a760e1fc81a71627f526e.tar.gz"
I want to be able to create some directories, pull in the first one, navigate in the directories from the newly pulled zip after extracting it, and then pull in the second.
Fetching the first line is easy: just pipe the output to head -n1. But to solve your problem you need more than the first URL from the cURL output. Give this a try:
#!/bin/bash
# fetch your URLs
answer=$(curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest | grep "browser_download_url.*tar.gz" | cut -d : -f 2,3 | tr -d \")
# get URLs and file names
first_file=$(echo "$answer" | grep -Eo '.+\.tar\.gz' | head -n1 | tr -d " ")
second_file=$(echo "$answer" | grep -Eo '.+\.tar\.gz' | head -n2 | tail -1 | tr -d " ")
first_file_name=$(echo "$answer" | grep -Eo '[^/]+\.tar\.gz' | head -n1)
second_file_name=$(echo "$answer" | grep -Eo '[^/]+\.tar\.gz' | head -n2 | tail -1)
#echo $first_file
#echo $first_file_name
#echo $second_file_name
#echo $second_file
# download the first file
wget "$first_file"
# extract the first archive; it must be in the current directory,
# otherwise change directory first or put the path before $first_file_name
tar -xzf "$first_file_name"
# do your stuff with the second file
You can simply pipe the URLs to xargs curl:
curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest |
grep "browser_download_url.*tar.gz" |
cut -d : -f 2,3 | tr -d \" |
xargs curl -O
Or if you want to do some more manipulation on each URL, perhaps loop over the results:
curl ... | grep ... | cut ... | tr ... |
while IFS= read -r url; do
    curl -O "$url"
    : maybe do things with "$url" here
done
The latter could easily be extended to something like
... | while IFS= read -r url; do
    d=${url##*/}
    mkdir -p "$d"
    ( cd "$d"
      curl -O "$url"
      tar zxf *.tar.gz
      # end of subshell means effects of "cd" end
    )
done
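
As a side note, not part of the answers above: if jq happens to be installed, extracting the download URLs from the release JSON is less fragile than the grep/cut/tr chain. A sketch under that assumption:
# sketch assuming jq is available; selects only the .tar.gz assets
curl -s https://api.github.com/repos/mozilla-iot/gateway/releases/latest |
    jq -r '.assets[].browser_download_url | select(endswith(".tar.gz"))' |
    while IFS= read -r url; do
        curl -LO "$url"
    done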

Exiting while loop bash script from tail

I have a script that tails a log file, and then uploads the line. I would like to have it exit as soon as the first line is read:
#!/bin/bash
tail -n0 -F "$1" | while read LINE; do
(echo "$LINE" | grep -e "$3") && curl -X POST --silent --data-urlencode \
"payload={\"text\": \"$(echo $LINE | sed "s/\"/'/g")\"}" "$2";
done
If you want to exit as soon as the first line is uploaded you can just add a break:
#!/bin/bash
tail -n0 -F "$1" | while read LINE; do
(echo "$LINE" | grep -e "$3") && curl -X POST --silent --data-urlencode \
"payload={\"text\": \"$(echo $LINE | sed "s/\"/'/g")\"}" "$2" && break;
done
The issue was that the tail command wasn't getting killed. Here is a slightly modified version of my script (I didn't end up needing the echo to stdout):
#!/bin/bash
tail -n0 -F "$1" | while read LINE; do
    curl -X POST --data-urlencode "payload={\"text\": \"$(echo $LINE | sed "s/\"/'/g")\"}" "$2" && pkill -P $$ tail
done
This answer helped as well: https://superuser.com/questions/270529/monitoring-a-file-until-a-string-is-found
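
Another way to avoid killing tail by hand is to let grep close the pipe: grep -m1 exits after the first match, and tail then dies on SIGPIPE the next time it writes. This is only a sketch of that alternative; note that the pipeline only finishes once the log receives another line after the match, so it suits logs that keep flowing.
#!/bin/bash
# Sketch: capture the first matching line, then upload it once
line=$(tail -n0 -F "$1" | grep -m1 -e "$3")
curl -X POST --silent --data-urlencode \
    "payload={\"text\": \"$(echo "$line" | sed "s/\"/'/g")\"}" "$2"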

curl in bash script vs curl one liner

This code outputs an HTTP status of 000, which seems to indicate something didn't connect properly. When I run the same curl outside of the bash script it works fine and produces a 200, so something with this code is off. Any guidance?
#!/bin/bash
URLs=$(< test.txt | grep Url | awk -F\ ' { print $2 } ')
# printf "Preparing to check $URLs \n"
for line in $URLs
do curl -L -s -w "%{http_code} %{url_effective}\\n" $line
done
http://beerpla.net/2010/06/10/how-to-display-just-the-http-response-code-in-cli-curl/
Your script works on my VT.
I added a couple of debugging lines; this may help you to see where any metacharacters are getting in, as I would have to agree with the posted comments.
I've written each line in the for loop to a file, which is then dumped with od.
I have also amended the curl line to grab only the last line, just to get the response code.
#!/bin/bash
echo -n > $HOME/Desktop/urltstfile # truncate urltstfile
URLs=$(cat testurl.txt | grep Url | awk -F\ ' { print $2 } ')
# printf "Preparing to check $URLs \n"
for line in $URLs
do
    echo $line >> $HOME/Desktop/urltstfile
    echo line:$line:
    curl -IL -s -w "%{http_code}\n" $line | tail -1
done
od -c $HOME/Desktop/urltstfile
#do curl -L -s -w "%{http_code} %{url_effective}\\n" "$line\n"
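
For what it's worth, the usual culprit when a loop like this returns 000 even though the same curl works by hand is a trailing carriage return on each URL (for example when test.txt was saved with Windows line endings); the od dump above would show it as \r. A sketch of stripping it before the loop, reusing the question's test.txt:
URLs=$(grep Url test.txt | awk '{ print $2 }' | tr -d '\r')
for line in $URLs
do
    curl -L -s -w "%{http_code} %{url_effective}\n" "$line"
done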

wget bash function without messy output

I am learning to customize wget in a bash function and am having trouble. I would like to display "Downloading (file): %" instead of wget's messy output. The function below seems close, but I am having trouble calling it for my specific needs.
For example, my standard wget is:
cd 'C:\Users\cmccabe\Desktop\wget'
wget -O getCSV.txt http://xxx.xx.xxx.xxx/data/getCSV.csv
and that downloads the .csv as a .txt in the directory specified with all the messy wget output.
This function seems like it will do more or less what I need, but I cannot seem to get it to work correctly with my data. Below is what I have tried. Thank you :).
#!/bin/bash
download() {
    local url=$1 wget -O getCSV.txt http://xxx.xx.xxx.xxx/data/getCSV.csv
    local destin=$2 'C:\Users\cmccabe\Desktop\wget'
    echo -n " "
    if [ "$destin" ]; then
        wget --progress=dot "$url" -O "$destin" 2>&1 | grep --line-buffered "%" | \
            sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
    else
        wget --progress=dot "$url" 2>&1 | grep --line-buffered "%" | \
            sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
    fi
    echo -ne "\b\b\b\b"
    echo " DONE"
}
EDITED CODE
#!/bin/bash
download () {
    url=http://xxx.xx.xxx.xxx/data/getCSV.csv
    destin='C:\Users\cmccabe\Desktop\wget'
    echo -n " "
    if [ "$destin" ]; then
        wget -O getCSV.txt --progress=dot "$url" -O "$destin" 2>&1 | grep --line-buffered "%" | \
            sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
    else
        wget -O getCSV.txt --progress=dot $url 2>&1 | grep --line-buffered "%" | \
            sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
    fi
    echo -ne "\b\b\b\b"
    echo " DONE"
    menu
}

menu() {
    while true
    do
        printf "\n Welcome to NGS menu (v1), please make a selection from the MENU \n
==================================\n\n
\t 1 Patient QC\n
==================================\n\n"
        printf "\t Your choice: "; read menu_choice
        case "$menu_choice" in
            1) patient ;;
            *) printf "\n Invalid choice."; sleep 2 ;;
        esac
    done
}
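
For what it's worth, the original function is probably closest to working if the URL and the output file are passed as plain arguments instead of being baked into the local lines. A minimal sketch of that call, reusing the question's own placeholders (getCSV.txt and the http://xxx.xx.xxx.xxx URL):
#!/bin/bash
download() {
    local url=$1
    local destin=$2
    echo -n " "
    if [ -n "$destin" ]; then
        wget --progress=dot "$url" -O "$destin" 2>&1 | grep --line-buffered "%" | \
            sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
    else
        wget --progress=dot "$url" 2>&1 | grep --line-buffered "%" | \
            sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
    fi
    echo -ne "\b\b\b\b"
    echo " DONE"
}

# call it with the URL and the destination file as separate arguments
download "http://xxx.xx.xxx.xxx/data/getCSV.csv" "getCSV.txt"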

Bash stdout redirect on Solaris 10

OK, this is working:
trace -t lstat64 -v lstat64 ls "myfilename" 2>pipefile
cat pipefile | grep ct | cut -d '[' -f 2 | cut -d ' ' -f 2
But I don't want to have to use the file "pipefile"; how can I redirect the output straight to my grep and cut?
So, you want to ignore stdout and only consider stderr?
trace -t lstat64 -v lstat64 ls "myfilename" 2>&1 1>/dev/null |
    grep ct | cut -d '[' -f 2 | cut -d ' ' -f 2
First, the stderr file handle is redirected to whatever the stdout file handle refers to, then the stdout file handle is redirected to /dev/null. Then grep can read from stdin whatever is emitted from trace's stderr.
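A quick way to convince yourself of the ordering, as a generic illustration (not specific to trace): with 2>&1 placed first, only the stderr text survives the pipe.
# prints only "to stderr": stdout ends up in /dev/null, stderr follows the pipe
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 1>/dev/null | cat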
I got it. I just realized I was getting stderr confused with stdout; this was my solution:
trace -t lstat64 -v lstat64 ls "myfilename" 2>&1 | grep ct | cut -d '[' -f 2 | cut -d ' ' -f 2
