How to compare the same file and send mail in Linux - bash

I am creating a bash script to send a file's diff over mail.
For the case below, I have created two files, "xyz.conf" and "xyz.conf_bkp", to compare.
So far, I have come up with the script below -
file="/a/b/c/xyz.conf"
while true
do
sleep 1
cmp -s "$file" "${file}_bkp"
if [ $? -gt 0 ]; then
diff "$file" "${file}_bkp" > compare.log
mailx -s "file compare" abc@xyz.com < compare.log
sleep 2
cp "$file" "${file}_bkp"
exit
fi
done
I have scheduled the above script via cron (cron's minimum granularity is one minute; the sleep inside the loop provides the per-second check) -
* * * * 0-5 script.sh
This is working fine, but I am looking for a different approach, like below -
I want this to work without creating another backup file. Imagine if I had to handle multiple files; that would lead me to create as many backups, which doesn't look like a good solution.
Can anyone suggest how to implement this approach?

I would write it this way.
while cmp -s "$file" "${file}_bkp"; do
sleep 2
done
diff "$file" "${file}_bkp" | mailx -s "file compare" abc@xyz.com
cp "$file" "${file}_bkp"
I wanted to avoid running both cmp and diff, but I'm not sure that is possible without a temporary file or pipe to hold the data from diff until you determine if mailx should run.
while diff "$file" "${file}_bkp"; do
sleep 2
done | {
mailx -s "file compare" abc@xyz.com
cp "$file" "${file}_bkp"
exit
}
diff will produce no output when its exit status is 0, so when it finally has a non-zero exit status its output is piped (as the output of the while loop) to the compound command that runs mailx and cp. Technically, both mailx and cp can read from the pipe, but mailx will exhaust all the data before cp runs, and this cp command ignores its standard input anyway.
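If keeping per-file backup copies is the main objection, one alternative is to keep only a stored checksum per file. Note, though, that a checksum alone tells you *that* a file changed, not *what* changed, so you would still need a copy (or a version-control checkout) to mail an actual diff. A minimal sketch, assuming a writable state directory (the paths and demo file here are placeholders):

```shell
#!/bin/bash
# Sketch: detect changes via a stored checksum instead of a full _bkp copy.
# state_dir is an assumption for illustration.
state_dir=${STATE_DIR:-/tmp/filewatch}
mkdir -p "$state_dir"

check_file() {
    local file=$1
    local sum_file="$state_dir/$(basename "$file").md5"
    local new_sum
    new_sum=$(md5sum "$file" | awk '{print $1}')
    # Report a change only if a previous checksum exists and differs
    if [ -f "$sum_file" ] && [ "$new_sum" != "$(cat "$sum_file")" ]; then
        echo "changed: $file"      # here you would run diff/mailx instead
    fi
    printf '%s\n' "$new_sum" > "$sum_file"
}

demo=$(mktemp)
echo "first version" > "$demo"
check_file "$demo"                 # first run: just records the checksum
echo "second version" > "$demo"
check_file "$demo"                 # now reports the change
```

One small state file per monitored file replaces a full `_bkp` copy; for many files this scales better, at the cost of losing the diff content.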

Related

Unix script - Comparing number of filename date with my single input date

I am new to Unix scripting; I have been trying to create this Unix script for a week, but I couldn't. Please help me with this.
I have more than 100 files (all with different filenames) whose names contain a date string (ex: 20171101) in the directory. I want to compare these filename dates with my input date (today - 10 days = 20171114), using only the date string in the filename; if it is less than my input date, I have to delete the file. Could anyone please help with this? Thanks
My script:
ten_days_ago=$(date -d "10 days ago" +%Y%m%d)
cd "$destination_dir" ;
ls *.* | awk -F '-' '{print $2}'
ls *.* | awk -F '-' '{print $2}' > removal.txt
while read filedate
do
if [ "$filedate" -lt "$ten_days_ago" ] ; then
cd "$destination_dir" ;
rm *-"$filedate"*
echo "deletion done"
fi
done <removal.txt
This script is working fine, but I need to send an email as well: a "pass" email if the deletion has been done, else a "fail" email.
But if I write the emails inside the while loop, they will be sent on every iteration.
You're probably trying to pipe to mail from the middle of your loop. (Your question should really show this code, otherwise we can't say what's wrong.) A common technique is to redirect the loop's output to a file, and then send that. (Using a temporary file is slightly ugly, but avoids sending an empty message when there is no output from the loop.)
Just loop over the files and decide which to remove.
#!/bin/bash
t=$(mktemp -t tendays.XXXXXXXX) || exit
# Remove temp file if interrupted, or when done
trap 'rm -f "$t"' EXIT HUP INT TERM
ten_days_ago=$(date -d "10 days ago" +%Y%m%d)
for file in *-[1-9]*[1-9]-*; do
date=${file#*-} # strip prefix up through first dash
date=${date%-*} # strip from the last dash to the end
if [ "$date" -lt "$ten_days_ago" ]; then
rm -v "$file"
fi
done >"$t" 2>&1
test -s "$t" || exit # Quit if empty
mail -s "Removed files" recipient@example.net <"$t"
I removed the (repeated!) cd so this can be run in any directory -- just switch to the directory you want before running the script. This also makes it easier to test in a directory with a set of temporary files.
Collecting the script's standard error also means the mail message will contain any error messages if rm fails for some reason or you have other exceptions.
By the by, you should basically never use ls in scripts.
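The two parameter expansions used to pull the date out of the filename can be checked in isolation (the filename here is a made-up example matching the *-date-* pattern):

```shell
file="report-20171114-final.txt"   # hypothetical filename
date=${file#*-}    # strip the shortest prefix through the first dash
date=${date%-*}    # strip from the last dash to the end
echo "$date"       # prints 20171114
```

Because these are plain expansions, no external process (and no ls) is needed to parse each name.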

Shell Scripting: Calling mailx in a loop

I am trying to write a script using which I need to send multiple emails with a file as attachment per email. This is because of mail attachment size limitations.
I have zip files in a directory and they are file01.zip, file02.zip etc. and there will be about 4-5 of these files.
# File count is normally passed in
numFiles=5
fileCounter=1
datestr="`date +"%m/%d/%Y"`"
while [ $fileCounter -le $numFiles ]
do
SUBJECT_LINE="Weekly files ($fileCounter of $numFiles) - $datestr"
echo "[`date`] E-mailing file ($fileCounter of $numFiles) ... "
ZIPFILE="file0$fileCounter.zip"
echo $ZIPFILE
ls -ltr $ZIPFILE
mailx -a "$ZIPFILE" \
-r no-reply@host.com \
-s "$SUBJECT_LINE" \
$TO_LIST < /dev/null
echo "[`date`] Done"
fileCounter=$(( $fileCounter + 1 ))
done
I am trying to call mailx in a loop as you can see. I tried the following as well
for file in file0*.zip
do
...
done
I am able to see the ZIPFILE names when I print them out using echo, but the mailx command in the loop returns the following, although the files are there:
No such file or directory
I can run the same mailx command from console and have the e-mail sent out. I can also send one e-mail without a loop, but doing so inside a loop seems to cause an issue. Am I missing something?
I likely had one or more characters not visible to the eye in the file name ($ZIPFILE) being passed as an attachment to mailx. I retyped parts of the script while troubleshooting, and that fixed the issue. The script above is otherwise good.
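Rather than retyping, invisible characters in a variable can be made visible directly; a couple of quick checks (the stray carriage return here is a constructed example of such a character):

```shell
ZIPFILE=$'file01.zip\r'              # simulated invisible character

printf '%s\n' "$ZIPFILE" | cat -v    # renders the CR as: file01.zip^M
printf '%s' "$ZIPFILE" | od -c | head -n 1   # od -c shows the \r byte

echo "${#ZIPFILE}"                   # 11 characters, not the expected 10
```

A length mismatch between the variable and the name you think it holds is often the quickest tell.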

bash call script with variable

What I want to achieve is the following :
I want the subtitles for my TV Show downloaded automatically.
The script "getSubtitle.sh" is run as soon as the show is downloaded, but it can happen that no subtitles have been released yet.
So what I am doing to counter this :
Creating a file each time "getSubtitle.sh" is run. It contains the location of the script with its arguments, for example:
/Users/theo/logSubtitle/getSubtitle.sh "The Walking Dead - 5x10 - Them.mp4" "The.Walking.Dead.S05E10.480p.HDTV.H264.mp4" "/Volumes/Window HD/Série/The Walking Dead"
If a subtitle has been found, this file will contain only this line; if no subtitle has been found, the file will have two lines (the first being "no subtitle downloaded", and the second being the path to the script, as explained above).
Now, once I get this, I'm planning to run a cron everyday that will do the following :
Remove all files that have only one line (subtitle found), and execute the script again for the remaining files. Here is the full script:
cd ~/logSubtitle/waiting/
for f in *
do nbligne=$(wc -l $f | cut -c 8)
if [ "$nbligne" = "1" ]
then
rm $f
else
command=$(sed -n "2 p" $f)
sh $command 3>&1 1>&2 2>&3 | grep down > $f ; echo $command >> $f
fi
done
This is unfortunately not working; I have the feeling that the script is not called.
When I replace $command with the line from the text file, it works.
I am sure that $command matches the line, because of the "echo $command >> $f" at the end of my script.
So I really don't get what I am missing here, any ideas?
Thanks.
I'm not sure what you're trying to achieve with the cut -c 8 part in wc -l $f | cut -c 8. cut -c 8 will select the 8th character of the output of wc -l.
A suggestion: to check whether your file contains one or two lines (and since you'll need the content of the second line, if any, anyway), use mapfile. This will slurp the file into an array, one line per element. You can use the option -n 2 to read at most 2 lines. This will be much more efficient, safer and nicer than your solution:
mapfile -t -n 2 ary < file
Then:
if ((${#ary[@]}==1)); then
printf 'File contains one line only: %s\n' "${ary[0]}"
elif ((${#ary[@]}==2)); then
printf 'File contains (at least) two lines:\n'
printf ' %s\n' "${ary[@]}"
else
printf >&2 'Error, no lines found in file\n'
fi
Another suggestion: use more quotes!
With this, a better way to write your script:
#!/bin/bash
dir=$HOME/logSubtitle/waiting/
shopt -s nullglob
for f in "$dir"/*; do
mapfile -t -n 2 ary < "$f"
if ((${#ary[@]}==1)); then
rm -- "$f" || printf >&2 "Error, can't remove file %s\n" "$f"
elif ((${#ary[@]}==2)); then
{ sh -c "${ary[1]}" 3>&1 1>&2 2>&3 | grep down; echo "${ary[1]}"; } > "$f"
else
printf >&2 'Error, file %s contains no lines\n' "$f"
fi
done
After the done keyword you can even add the redirection 2>> logfile to a log file if you wish. Make sure the cron job is run with your user: check crontab -l and, if needed, edit it with crontab -e.
Use eval instead of sh. The reason it works with eval and not sh is the number of passes used to evaluate variables: sh treats the first word of the expanded $command as a script file to execute, while eval evaluates the whole string as a command line first and then executes the result.
Briefly explained.
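The difference is easy to reproduce with a toy command string (the string here is contrived):

```shell
command='echo hello world'

# sh treats the first word of the expansion as a script *file* to run,
# so this fails looking for a file named "echo"
sh $command 2>/dev/null
echo "sh exit status: $?"          # non-zero

# eval re-evaluates the string as a command line, then runs it
eval "$command"                    # prints: hello world
```

The same applies to the line read with sed from the log file: it is a command line, not a script path, so it needs the extra evaluation pass.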

How to read output from bzcat instead of specifying a filename

I need to use 'last' to search through a list of users who logged into a system, i.e.
last -f /var/log/wtmp <username>
Considering the number of bzipped archive files in that directory, and considering I am on a shared system, I am trying to include an inline bzcat, but nothing seems to work. I have tried the following combinations with no success:
last -f <"$(bzcat /var/log/wtmp-*)"
last -f <$(bzcat /var/log/wtmp-*)
bzcat /var/log/wtmp-* | last -f -
Driving me bonkers. Any input would be great!
last (assuming the Linux version) can't read from a pipe. You'll need to temporarily bunzip2 the files to read them.
tempfile=`mktemp` || exit 1
for wtmp in /var/log/wtmp-*; do
bzcat "$wtmp" > "$tempfile"
last -f "$tempfile"
done
rm -f "$tempfile"
You can only use < I/O redirection on one file at a time.
If anything is going to work, then the last line of your examples is it, but does last recognize - as meaning standard input? (Comments in another answer indicate "No, last does not recognize -". Now you see why it is important to follow all the conventions - it makes life difficult when you don't.) Failing that, you'll have to do it the classic way with a shell loop.
for file in /var/log/wtmp-*
do
last -f <(bzcat "$file")
done
Well, using process substitution like that is pure Bash...the classic way would be more like:
tmp=/tmp/xx.$$ # Or use mktemp
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
for file in /var/log/wtmp-*
do
bzcat "$file" > "$tmp"
last -f "$tmp"
done
rm -f $tmp
trap 0
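Incidentally, the first two attempts in the question fail for the same underlying reason: `<"$(bzcat ...)"` substitutes bzcat's *output* and then tries to open that text as a *filename*. A contrived demonstration:

```shell
# The command substitution's output becomes the filename after <,
# not the data stream itself.
out=$(echo "some data")
( cat <"$out" ) 2>/dev/null || echo "no file named: $out"
```

The redirection fails because there is no file literally named "some data"; this is why a temporary file or process substitution is needed instead.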

Concurrent or lock access in bash script function

Does anyone know how to lock a function in a bash script?
I want to do something like Java's synchronized, ensuring that every file saved in the monitored folder is put on hold whenever anything tries to use the submit function.
an excerpt from my script:
(...)
ON_EVENT () {
local date=$1
local time=$2
local file=$3
sleep 5
echo "$date $time New file created: $file"
submit $file
}
submit () {
local file=$1
python avsubmit.py -f $file -v
python dbmgr.py -a $file
}
if [ ! -e "$FIFO" ]; then
mkfifo "$FIFO"
fi
inotifywait -m -e "$EVENTS" --timefmt '%Y-%m-%d %H:%M:%S' --format '%T %f' "$DIR" > "$FIFO" &
INOTIFY_PID=$!
trap "on_exit" 2 3 15
while read date time file
do
on_event $date $time $file &
done < "$FIFO"
on_exit
I'm using inotify to monitor a folder for newly saved files. Each file saved (received) is submitted to the VirusTotal service (avsubmit.py) and ThreatExpert (dbmgr.py).
Concurrent access would be ideal, to avoid blocking every new file created in the monitored folder, but locking the submit function should be sufficient.
Thank you guys!
Something like this should work:
if (set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
# Your code here
rm -f "$lockfile"
trap - INT TERM EXIT
else
echo "Failed to acquire $lockfile. Held by $(cat $lockfile)"
fi
Any code using rm in combination with trap or similar facility is inherently flawed against ungraceful kills, panics, system crashes, newbie sysadmins, etc. The flaw is that the lock needs to be manually cleaned after such catastrophic event for the script to run again. That may or may not be a problem for you. It is a problem for those managing many machines or wishing to have an unplugged vacation once in a while.
A modern solution using a file descriptor lock has been around for a while - I detailed it here, and a working example is on GitHub here. If you do not need to track the process ID for monitoring or other reasons, there is an interesting suggestion for a self-lock (I did not try it, and am not sure of its portability guarantee).
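A minimal flock(1)-based sketch of that idea (flock is from util-linux; the lock path and fd number here are arbitrary choices):

```shell
lock=/tmp/submit.lock    # assumption: any path on a local filesystem

(
    # Take an exclusive lock on fd 9, or bail out if it is already held
    flock -n 9 || { echo "another instance holds the lock" >&2; exit 1; }
    echo "lock acquired, doing work"   # critical section goes here
) 9>"$lock"
```

The kernel drops the lock as soon as fd 9 is closed, so a crash or kill cannot leave a stale lock behind, which is exactly the weakness of the rm/trap approach described above.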
You can use a lock file to determine whether or not the file should be submitted.
Inside your ON_EVENT function, you should check if the appropriate lock file exists before calling the submit function. If it does exist, then return, or sleep and check again later to see if it's gone. If it doesn't exist, then create the lock and call submit. After the submit function completes, then delete the lock file.
See this thread for implementation details.
But I liked the idea that files which cannot get the lock stay on a waiting list (cache), to be submitted later.
I currently have something like this:
lockfile="./lock"
on_event() {
local date=$1
local time=$2
local file=$3
sleep 5
echo "$date $time New file created: $file"
if (set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
submit_samples $file
rm -f "$lockfile"
trap - INT TERM EXIT
else
echo "Failed to acquire lockfile: $lockfile."
echo "Held by $(cat $lockfile)"
fi
}
submit_samples() {
local file=$1
python avsubmit.py -f $file -v
python dbmgr.py -a $file
}
Thank you once again ...
I had problems with this approach and found a better solution:
Procmail comes with a lockfile command which does what I wanted:
lockfile -5 -r10 /tmp/lock.file
do something very important
rm -f /tmp/lock.file
lockfile will try to create the specified lockfile. If it exists, it will retry in 5 seconds; this will be repeated a maximum of 10 times. If it can create the file, it goes on with the script.
Another solution is the lockfile-progs package in Debian; an example directly from the man page:
Locking a file during a lengthy process:
lockfile-create /some/file
lockfile-touch /some/file &
# Save the PID of the lockfile-touch process
BADGER="$!"
do-something-important-with /some/file
kill "${BADGER}"
lockfile-remove /some/file
If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you can do this:
inotifywait -q -m -r -e CLOSE_WRITE --format %w%f $DIR |
parallel -u python avsubmit.py -f {}\; python dbmgr.py -a {}
It will run at most one python per CPU when a file is written (and closed). That way you can bypass all the locking, and you get the added benefit of avoiding a potential race condition where a file is immediately overwritten (how do you make sure that both the first and the second version were checked?).
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
