Script that monitors a log file quits unexpectedly - bash

I've written a simple script that monitors elasticsearch.log, looking for a specific pattern and finally sending a curl POST request.
#!/bin/bash
tail -F -n 0 elasticsearch.log | \
while read -r line
do
    echo "$line" | grep '<PATTERN>'
    if [[ "$?" -eq 0 ]]
    then
        curl -X POST <URL>
    fi
done
The problem is that the script quits unexpectedly with a 0 exit status. Do you have any idea what might be the reason?
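(As an aside, the echo-then-test-$? sequence can be folded into the if itself; this is only a tidier form of the same loop, not an explanation of the early exit:)

tail -F -n 0 elasticsearch.log | while read -r line
do
    # -q silences grep; its exit status alone drives the branch
    if echo "$line" | grep -q '<PATTERN>'
    then
        curl -X POST <URL>
    fi
done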

Related

Checking the response code of given URLs - script doesn't stop after checking all URLs

I have made the script below. It checks the response code of every URL listed in column A of, for example, a .csv file. Everything works as I planned, but after checking all the URLs the script freezes: I have to press Ctrl+C to stop it. How can I make the script end automatically after all the URLs are checked?
#!/bin/bash
for link in `cat $1` $2;
do
    response=`curl --output /dev/null --silent --write-out %{http_code} $link`;
    if [ "$response" == "$2" ]; then
        echo "$link";
    fi
done
Your script hangs due to $2 in the for link line (when it hangs, check ps aux | grep curl and you'll find a curl process with the response code as the last argument). Also, for link in `cat $1` $2 is not how you should read and process lines from a file.
Assuming your example.csv file only contains one URL per row and nothing else (which basically makes it a plain text file), this code should do what you want:
#!/usr/bin/env bash
while read -r link; do
    response=$(curl --output /dev/null --silent --write-out '%{http_code}' "$link")
    if [[ "$response" == "$2" ]]; then
        echo "$link"
    fi
done < "$1"
Copying your code verbatim and fudging a test file with a few URLs separated by whitespace, it does indeed hang. However, removing the $2 from the end of the for allows the script to finish.
for link in `cat $1`;

AND exit code in BASH

I am monitoring websites for availability, using the curl command to test for a 200 status code.
All URLs are contained in a file. So far so good.
What I want to achieve: if all URLs are online, exit 0; if any URL is offline, exit 1.
How can I achieve this in bash?
Try this (where 200 is the status code):
ANSWER=`curl -s -o /dev/null -w "%{http_code}" google.com | grep -c 200`
if [[ $ANSWER == "1" ]]; then
    exit 0
else
    exit 1
fi
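That snippet only tests a single URL, though. A minimal sketch extending the same idea to a whole file of URLs (assuming one URL per line; urls.txt is a hypothetical filename):

#!/usr/bin/env bash
# Exit 1 as soon as any URL fails to return 200; exit 0 if all pass.
while read -r url; do
    status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [[ "$status" != "200" ]]; then
        echo "offline: $url" >&2
        exit 1
    fi
done < urls.txt
exit 0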

How to exit a bash script after a while loop becomes true?

I'm new to bash scripting and I am trying to write a script that checks whether an Ethernet device is up and, if so, exits.
It doesn't work as intended; maybe someone can give me a hint.
I start the script, then plug in my device, and the script just seems to hang in the terminal. It never gets back to the command line.
When the device is already plugged in and the Ethernet device is up, the script runs perfectly: it echoes 'Connected' and drops me back to the command line.
#! /bin/sh
t1=$(ifconfig | grep -o enxca1aea4347b1)
t2='enxca1aea4347b1'
while [ "$t1" != "$t2" ]
do
    sleep 1
done
echo "Connected"
exit 0
You don't even need to make the comparison; just check the exit status of grep.
t2='enxca1aea4347b1'
until ifconfig | grep -q "$t2"; do
    sleep 1
done
echo "Connected"
exit 0
In fact, you don't even need grep:
until [[ "$(ifconfig)" =~ $t2 ]]; do
    sleep 1
done
You've made an infinite loop, since you're not updating the value of $t1 inside the while statement.
Instead, try:
#! /bin/sh
t1=$(ifconfig | grep -o enxca1aea4347b1)
t2='enxca1aea4347b1'
while [ "$t1" != "$t2" ]
do
    sleep 1
    t1=$(ifconfig | grep -o enxca1aea4347b1)
done
echo "Connected"
exit 0

grep command exit code for unmatched patterns

I have written a shell script which runs the crontab -l command.
To make it easier to use, I have also given the user the ability to pass a command-line argument to the script, which acts as a pattern for the grep command, so the user can filter out everything they don't need to see.
Here's the script:
#!/bin/bash
if [[ $1 == "" ]]; then
    echo -e "No Argument passed:- Showing default crontab\n"
    command=$(crontab -l 2>&1)
    echo "$command"
else
    rc=$?
    command=$(crontab -l | grep -- "$1" 2>&1)
    echo "$command"
    if [[ $rc != 0 ]] ; then
        echo -e "grep command on crontab -l was not successful"
    fi
fi
This is how I run it:
$ ./DisplayCrontab.sh
If I don't pass any command-line argument, it shows me the complete crontab.
If I pass a garbage pattern which doesn't exist in the crontab, it shows me the following message:
grep command on crontab -l was not successful
But even if I pass a pattern which does exist in a couple of lines in the crontab, I get this kind of output:
#matching lines
#matching lines
#matching lines
grep command on crontab -l was not successful
Why am I getting "grep command on crontab -l was not successful" at the bottom, and how can I get rid of it?
Is there anything wrong with the script?
You're capturing the exit code before the command has run; it should be:
command=$(crontab -l | grep -- "$1" 2>&1)
rc=$?
To test the exit code, use numeric operators:
[[ $rc -ne 0 ]]
From the grep man page:
Normally, the exit status is 0 if selected lines are found and 1 otherwise. But the exit status is 2 if an error occurred.
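Putting both fixes together, the corrected script would look something like this (a sketch of DisplayCrontab.sh with the fix applied):

#!/bin/bash
if [[ -z "$1" ]]; then
    echo -e "No Argument passed:- Showing default crontab\n"
    crontab -l 2>&1
else
    command=$(crontab -l | grep -- "$1" 2>&1)
    rc=$?    # capture grep's exit code immediately after the pipeline
    echo "$command"
    if [[ $rc -ne 0 ]]; then
        echo "grep command on crontab -l was not successful"
    fi
fi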

Lynx is stopping the loop?

I'll just apologize beforehand: this is my first ever post, so I'm sorry if I'm not specific enough, if the question has already been answered and I just didn't look hard enough, or if I use incorrect formatting of some kind.
That said, here is my issue: in bash, I am trying to create a script that will read a file listing several dozen URLs. As it reads each line, it needs to run a set of actions on it, the first being to use lynx to navigate to the website. In practice, though, it runs perfectly on the first line only. Lynx goes, the download works, and the subsequent renaming and organizing of that file go through as well. But then it skips all the other lines and acts as if it has finished the whole file.
I have tested whether lynx was causing the issue by eliminating all the other parts of the code, and then by eliminating just lynx. It works without lynx, but, of course, I need lynx for the rest of the output to be of any use to me. Let me just post the code:
#!/bin/bash
while read line; do
    echo $line
    lynx -accept_all_cookies $line
    echo "lynx done"
    od -N 2 -h *.zip | grep "4b50"
    echo "od done, if 1 starting..."
    if [[ $? -eq 0 ]]
    then
        ls *.* >> logs/zips.log
    else
        od -N 2 -h *.exe | grep "5a4d"
        echo "if 2 starting..."
        if [[ $? -eq 0 ]]
        then
            ls *.* >> logs/exes.log
        else
            od -N 2 -h *.exe | grep "5a4d, 4b50"
            echo "if 3 starting..."
            if [[ $? -eq 1 ]]
            then
                ls *.* >> logs/failed.log
            fi
            echo "if 3 done"
        fi
        echo "if 2 done"
    fi
    echo "if 1 done..."
    FILE=`(ls -tr *.* | head -1)`
    NOW=$(date +"%m_%d_%Y")
    echo "vars set"
    mv $FILE "criticalfreepri/${FILE%%.*}(ZCH,$NOW).${FILE#*.}" -u
    echo "file moved"
    rm *.zip *.exe
    echo "file removed"
done < "lynx"
$SHELL
Just to be sure: I do have a file called "lynx" that contains the URLs, one per line. Also, I used all those echoes to do my own sort of debugging, and I have tried it with and without them. When I execute the script, the echoes all show up...
Any help is appreciated, and thank you all so much! I hope I didn't break any rules with this post!
PS: I'm on Linux Mint, running things through the "terminal" program. I'm scripting with bash in gedit, if any of that is relevant. Thanks!
EDIT: Actually, the echo tests repeat for all three lines. So it would appear that lynx simply can't start again in the same loop?
Here is a simplified version of the script, as requested:
#!/bin/bash
while read -r line; do
    echo $line
    lynx $line
    echo "lynx done"
done < "ref/url"
read "lynx"
$SHELL
Note that I have changed the sites the "url" file goes to:
www.google.com
www.majorgeeks.com
http://www.sophos.com/en-us/products/free-tools/virus-removal-tool.aspx
Lynx is not designed for use in scripts; it is an interactive console browser and it takes over the terminal.
If you want to access URLs in a script, use wget, for example:
wget http://www.google.com/
For exit codes see: http://www.gnu.org/software/wget/manual/html_node/Exit-Status.html
To capture the HTML content, use:
VAR=$(wget -qO- http://www.google.com/)
echo "$VAR"
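Applied to the loop in question, that might look like this (a sketch; it assumes the ref/url file from the simplified script, one URL per line):

#!/bin/bash
# Fetch each URL non-interactively; wget exits 0 on success, non-zero otherwise.
while read -r line; do
    echo "$line"
    if wget -q "$line"; then
        echo "wget done"
    else
        echo "wget failed for $line" >&2
    fi
done < "ref/url"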
I found a way which may fulfil your requirement to run the lynx command in a loop, substituting a different URL each time.
Use
echo `lynx $line`
(that is, lynx $line wrapped in backquotes (`) and echoed)
instead of lynx $line. Compare the two versions below:
Your code:
#!/bin/bash
while read -r line; do
    echo $line
    lynx $line
    echo "lynx done"
done < "ref/url"
read "lynx"
$SHELL
Try this instead:
#!/bin/bash
while read -r line; do
    echo $line
    echo `lynx $line`
    echo "lynx done"
done < "ref/url"
I should have answered this question a long time ago. I got the program working; it's now on GitHub!
Anyway, I simply had to wrap the loop inside a function. Something like this:
progdownload () {
    printlog "attempting download from ${URL}"
    if echo "${URL}" | grep -q "http://www.majorgeeks.com/" ; then
        lynx -cmd_script="${WORKINGDIR}/support/mgcmd.txt" --accept-all-cookies "${URL}"
    else
        wget "${URL}"
    fi
}
URL="something.com"
progdownload
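For a whole file of URLs, the call site could be sketched like this (ref/url is the file name from the earlier snippets; the </dev/null redirection is an extra, commonly recommended guard not present in the original answer, so that an interactive program like lynx cannot swallow the rest of the loop's input):

while read -r URL; do
    progdownload < /dev/null    # protect the loop's stdin from lynx
done < "ref/url"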
