shell script with tail/grep not working when run via cron - Mint 18.1

Running my script in a terminal works fine. It also works fine on Mint 18.2 when run at boot via /etc/rc.local, but on Mint 18.1 it doesn't work. Also, on 18.1 it won't run via sudo crontab -e. I'm assuming it has something to do with the tail/grep part.
Here is the relevant part of my script - up to this point the script works:
# Takes a screen capture every time I type a string that matches one from a list
sudo -i tail -fn0 "$path"k.log | \
while read line ; do
    echo "$line" | egrep --line-buffered -i -e "$pattern"
    if [ $? = 0 ]
    then
        matches=$(echo "$line" | egrep --line-buffered -i -o "$pattern")
        cap_split=$(echo "$matches" | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/ /g')
        cap_string=$(echo "$cap_split" | sed -e 's/[^A-Za-z0-9\\n._-]/_/g')
        sleep 1
        DISPLAY=:0.0 scrot "$path"cap/"$stamp"_"$cap_string".png
        echo -e "### Match found \"$cap_split\" and cap created ###"
    fi
done
Why will it only work in the terminal on Mint 18.1 and not from rc.local or cron?
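A common first step when a script behaves differently under cron or rc.local is to compare environments, since cron strips most variables (PATH, HOME, DISPLAY in particular). A debugging sketch, not a fix - the file name is arbitrary:
# Hypothetical debug line near the top of the script: dump the environment
# the cron job actually sees, then compare it with `env | sort` from a terminal.
env | sort > /tmp/cron-env.txt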

Related

How do I prevent my bash script (tailing a file) from repeatedly acting on the same line?

I was working on a script that would keep monitoring logins to my server or laptop via ssh. This is the code that I was working with:
slackmessenger() {
    curl -X POST -H 'Content-type: application/json' --data '{"text":"'"$1"'"}' myapilinkwashere
    ## removed the actual API link due to Slack restrictions
}

while true
do
    tail /var/log/auth.log | grep sshd | head -n 1 | while read LREAD
    do
        echo ${LREAD}
        var=$(tail -f /var/log/auth.log | grep sshd | head -n 1)
        slackmessenger "$var"
    done
done
The issue I'm facing is that it keeps sending the old logs because of the while loop. Can there be a condition so that the loop only sends the new/updated entries, as opposed to sending the old ones over and over again? I could not think of a condition that would skip the old entries and only show the new ones.
Instead of using head -n 1 to extract a line at a time, iterate over the filtered output of tail -f /var/log/auth.log | grep sshd and process each line once as it comes through.
#!/usr/bin/env bash
# ^^^^- this needs to be a bash script, not a sh script!
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac
while IFS= read -r line; do
    printf '%s\n' "$line"
    slackmessenger "$line"
done < <(tail -f /var/log/auth.log | grep --line-buffered sshd)
See BashFAQ #9 describing why --line-buffered is necessary.
You could also write this as:
#!/usr/bin/env bash
case $BASH_VERSION in '') echo "Needs bash, not sh" >&2; exit 1;; esac
tail -f /var/log/auth.log |
grep --line-buffered sshd |
tee >(xargs -d $'\n' -n 1 slackmessenger)
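If you later add a filter to either variant that has no --line-buffered switch of its own, GNU coreutils' stdbuf is the usual workaround. A sketch only, assuming the extra filter (cut here) relies on normal stdio buffering:
tail -f /var/log/auth.log |
  grep --line-buffered sshd |
  stdbuf -oL cut -d' ' -f1-3 |   # force line buffering on the extra filter
  while IFS= read -r line; do
      slackmessenger "$line"
  done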

grep command inside EOF doesn't seem to be executing on remote hosts [UNIX BASH]

I have checked the variable values using echo and those look fine, but what I want to achieve is searching logs on remote hosts using grep, which does not give any output. Here is the chunk of code for reference:
for dir in ${log_path}
do
for host in ${Host}
do
if [[ "${userinputserverhost}" == "${host}" ]]
then
ssh -q -T username@userinputserverhost "bash -s" <<-'EOF' 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
`\$(grep -A 5 -s "\${ID}" "\${dir}"/archive/*.log)`
EOF
fi
break
done
done
First, remove all the crap around the grep.
Second, you're overquoting your vars.
Third, skip the "bash -s" if you can.
ssh -q -T username@userinputserverhost <<-'EOF' 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
Fourth, I don't see where $ID is set...so if that's being loaded on the remote system by the login or something, then that one would need the dollar sign backslashed.
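A sketch of that variant, assuming ${ID} exists only on the remote host while ${dir} and ${LogFile} are local - the here-doc delimiter is unquoted so the local shell expands what it knows, and the backslash defers ${ID} to the remote side:
# Sketch only: unquoted here-doc, local expansion of ${dir}, remote expansion of ${ID}.
ssh -q -T username@userinputserverhost <<-EOF 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
grep -A 5 -s "\${ID}" "${dir}"/archive/*.log
EOF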
Finally, be aware that here-docs are great, but sometimes here-strings are simpler if you can spare the quotes.
$: ssh 2>&1 dudeling@sandbox-server '
> date
> whoami
> ' | tee -a foo.txt
Fri Apr 30 09:23:09 EDT 2021
dudeling
$: cat foo.txt
Fri Apr 30 09:23:09 EDT 2021
dudeling
That one is more a matter of taste. Even better, if you can, write your remote-script to a local file & use that. And of course, you can always add set -vx into the script to see what gets remotely executed.
cat >tmpScript <<-'EOF'
echo -e "Fetching details: \n"
set -vx
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
ssh <tmpScript 2>&1 -q -T username@userinputserverhost | tee -a ${LogFile}
Now you have an exact copy of what was issued for debugging.
Thanks Paul for spending time and coming up with suggestions/solutions.
I managed to get it working a couple of days back. I would have been happy to say your solution worked 100%, but I'm still satisfied that I sorted it out on my own, as it helped me learn some new stuff.
FYI - grep -A 5 -s "${ID}" "${dir}"/archive/*.log - this will work, but only by using the shell built-in 'declare -p' to declare the variables within the EOF. Also, I read somewhere that it is recommended to leave the EOF unquoted, as that caters for variable expansion to remote hosts without any trouble.
The piece of code below is working for me in bash:
ssh -q -T username@userinputserverhost <<-EOF 2>&1 | tee -a ${LogFile}
echo -e "Fetching details: \n"
$(declare -p ID)
$(declare -p dir)
grep -A 5 -s "${ID}" "${dir}"/archive/*.log
EOF
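For reference, declare -p prints a reusable assignment, which is why substituting it into the here-doc recreates the variables on the remote side. A quick local check (the values are just for illustration):
ID=12345 dir=/var/tmp
declare -p ID dir
# prints:
#   declare -- ID="12345"
#   declare -- dir="/var/tmp"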

OS version capture script - unexpected results when using awk

I have a small shell script as follows that I am using to login to multiple servers to capture whether the target server is using Redhat or Ubuntu as the OS version.
#!/bin/ksh
if [ -f $HOME/osver.report.txt ];then
    rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
    OSVER=$(ssh $USER@${x} "cat /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
    echo -e "$x \t\t $OSVER" >> osver.report.txt
done
The above script works. However, if I attempt to add in some awk as shown below and the server is a Red Hat server, my results in osver.report.txt will only show the hostname and no OS version. I have played around with the quoting, but nothing seems to work.
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release | awk {'print $1,$2,$6,$7'} 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
If I change the script as suggested to the following:
#!/bin/bash
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(
ssh $USER@${x} bash << 'EOF'
awk '{print "$1,$2,$6,$7"}' /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
Then I get the following errors:
./test.bash: line 9: unexpected EOF while looking for matching `)'
./test.bash: line 16: syntax error: unexpected end of file
You're suffering from a quoting problem. When you pass a quoted command to ssh, you effectively lose one level of quoting (as if you passed the same arguments to sh -c "..."). So the command that you're running on the remote host is actually:
cat /etc/redhat-release | awk '{print ,,,}' 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
One way of resolving this is to pipe your script into a shell, rather than passing it as arguments:
OSVER=$(
ssh $USER@${x} bash <<'EOF'
awk '{print "$1,$2,$6,$7"}' /etc/redhat-release 2>/dev/null ||
grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
The use of <<'EOF' here inhibits any variable expansion in the here document...without that, expressions like $1 would be expanded locally.
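A quick local illustration of the difference, unrelated to the script itself:
# Unquoted delimiter: $1 is expanded by the *local* shell (to nothing here).
cat <<EOF
awk '{print $1}'
EOF
# prints: awk '{print }'

# Quoted delimiter: the text is passed through verbatim.
cat <<'EOF'
awk '{print $1}'
EOF
# prints: awk '{print $1}'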
A better solution would be to look into something like ansible which has built-in facilities for sshing to groups of hosts and collecting facts about them, including distribution version information.
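For example, assuming Ansible is installed and the hostlist file is usable as an inventory, the setup module already reports distribution facts over ssh:
# Gather distribution name/version facts from every host in the inventory file.
ansible all -i hostlist -m setup -a 'filter=ansible_distribution*'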

sed command garbled for bash 3.0

I have gone through other threads like "Sed command garbled", but they didn't help me.
flag=1
echo "enter the folder into which you want to capture"
read logs
mkdir $logs
path=/user/gur40139/shell/angel
for i in $path/*.tra*
do
    value=$( grep -ic \*= $i )
    if [ $value -ge $flag ]
    then
        name=`basename $i .tra\*`
        echo -e "count is $value\n" >> $path/$logs/log_"$name".txt
        sed -n '/\*=/ {n;p}' $i|sed 2n\;G >> $path/$logs/log_"$name".txt
    fi
done
echo -e "\nDone\n"
Error:
sed: command garbled: /\*=/ {n;p}
Additional note: this code works properly on bash 4.1, but I want to test it on 3.0, where many options don't even work, like sed --version.
sed -n '/\*=/ {n;p;}' ...
You need to terminate the command after the p with either a ; or a newline. Your code will certainly work on a recent GNU sed, but not on a POSIX version.
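A quick way to sanity-check the portable form on a toy input (purely illustrative, not the poster's data):
printf 'key*=1\nvalue\nother\n' | sed -n '/\*=/ {n;p;}'
# prints: value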

bash changes execution order of command when run from jenkins

I am using git log to update a RELEASE_NOTES file for my project. When I run the script below on my Mac laptop everything works as expected, but when I run it on Jenkins running on CentOS I see the following execution order:
script
...
FILE=RELEASE_NOTES
TMP_FILE=${FILE}.tmp
VERSION=$(cat pom.xml | grep "<version>" | head -n1 | sed -e "s/.*\>\(.*\)\<.*/\1/" | tr -d "\-SNAPSHOT")
NAME=$(cat pom.xml | grep "<artifactId>" | head -n1 | sed -e "s/.*\>\(.*\)\<.*/\1/")
echo "$NAME-${VERSION}" > ${TMP_FILE}
git log --pretty="%x09* [%h] %s." $(git describe --abbrev=0)..HEAD >> ${TMP_FILE}
echo "" >> ${TMP_FILE}
if [ -e $FILE ]; then
cat ${FILE} >> ${TMP_FILE}
fi
mv ${TMP_FILE} $FILE
...
jenkins output when run with #!/bin/bash -x
+ FILE=RELEASE_NOTES
+ TMP_FILE=RELEASE_NOTES.tmp
++ tr -d '\-SNAPSHOT'
++ head -n1
++ cat pom.xml
++ sed -e 's/.*\>\(.*\)\<.*/\1/'
++ grep '<version>'
+ VERSION='</'
++ head -n1
++ sed -e 's/.*\>\(.*\)\<.*/\1/'
++ cat pom.xml
++ grep '<artifactId>'
+ NAME='</'
+ echo '</-</'
++ git describe --abbrev=0
I can't figure out why the execution order is changing. Any thoughts?
I don't see any inconsistency. You have several commands running in subshells (when it sets VERSION and NAME), and those commands have to be executed before the variable is assigned to, so the /bin/bash -x output above is what I'd expect to see.
If you're talking about the order of the commands within each of those pipelines, keep in mind that they're all run concurrently, and the exact startup order might not be specified.
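You can reproduce the effect with a throwaway pipeline (purely illustrative):
bash -xc 'printf "abc\n" | tr a-z A-Z | head -n1'
# Each stage is traced by its own subshell as it starts, so the '+ ...' lines
# can come out in any order, even though the data still flows left to right.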
The order in which the individual commands in a pipeline are started (which is what set -x is showing you) doesn't matter. Data still flows from the left to the right. However, you can set the variables using a single call to grep instead of a pipeline. (This does assume GNU grep, however).
VERSION=$( grep -oP -m 1 '(?<=<version>).*(?=-SNAPSHOT)' pom.xml )
NAME=$( grep -oP -m 1 '(?<=<artifactId>).*(?=</artifactId>)' pom.xml )
On macOS grep is BSD and on Linux it's GNU, so piping seemed like the best option to make sure it works in each environment. But I found another solution: Maven.
VERSION=$(mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=project.version | egrep -v "(^[INFO]|Download)" | tr -d "-SNAPSHOT")
NAME=$(mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=project.artifactId | egrep -v "(^[INFO]|Download)")
This lets me get the version/name and it works on both macOS and Linux.
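As a side note, if the Jenkins box can use maven-help-plugin 3.1.0 or newer (an assumption about the environment), -q -DforceStdout drops the need for the egrep filtering, with a sed to strip the -SNAPSHOT suffix as before:
VERSION=$(mvn -q -DforceStdout org.apache.maven.plugins:maven-help-plugin:3.2.0:evaluate -Dexpression=project.version | sed 's/-SNAPSHOT$//')
NAME=$(mvn -q -DforceStdout org.apache.maven.plugins:maven-help-plugin:3.2.0:evaluate -Dexpression=project.artifactId)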
