script to check last file modified - shell

SHELL SCRIPT TO GET MAIL IF A FILE GETS MODIFIED
I am writing a script to get mail if a file has been modified:
recip="mungsesagar@gmail.com"
file="/root/sagar/ldapadd.sh"
#stat $file
last_modified=$(stat --printf=%y "$file" | cut -d. -f1)
#echo $last_modified
mail -s "File ldapadd.sh has changed" "$recip"
Now I get mail every time I run this script, but I want to compare two variables so that I get mail only if the file's timestamp or content has changed.
How can I store the output in a variable to compare?
Thanks in advance,
Sagar

I'd do it this way:
recip="you@example.com"
file="/root/sagar/ldapadd.sh"
ref="/var/tmp/mytimestamp.dummy"
if [ "$file" -nt "$ref" ]; then
mail -s "File ldapadd.sh has changed" "$recip"
fi
touch -r "$file" "$ref" # update our dummy file to match
The idea is to store the last seen timestamp of the file of interest by copying it to another file (using touch). Then we always know what the last time was, and can compare it against the current timestamp on the file and email as needed.
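Since the question also asks about content changes (a file can be rewritten with the same timestamp, or touched without changing), a variation of the same idea is to store a checksum instead of a timestamp and compare it on each run. A minimal sketch; the paths, the address, and the `notify_if_changed` helper name are placeholders:

```shell
#!/bin/bash
# Minimal sketch: store a checksum of the file and mail only when it changes.
# notify_if_changed is a hypothetical helper; paths and address are examples.
notify_if_changed() {
    local file=$1 ref=$2 recip=$3
    local current previous
    current=$(md5sum "$file" | cut -d' ' -f1)
    previous=$(cat "$ref" 2>/dev/null || true)   # empty on first run
    if [ "$current" != "$previous" ]; then
        echo "$current" > "$ref"                 # remember what we saw
        echo "File $file has changed" | mail -s "File changed" "$recip"
    fi
}
# Example: notify_if_changed /root/sagar/ldapadd.sh /var/tmp/ldapadd.md5 "$recip"
```

Run it from cron and you get exactly one mail per change, because the stored checksum is updated as soon as the change is reported.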

If I understand your question correctly, the logic can be changed by storing the output of "ls -ltr filename" in a temp file and comparing it with the current "ls -ltr" output.
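That idea can be sketched as follows: save the `ls -l` line for the file and compare it against the current one on each run. The watched path here is illustrative; note also that parsing `ls` output is fragile, so comparing a `stat` timestamp as in the accepted answer is generally more robust:

```shell
#!/bin/bash
# Sketch of the ls-output comparison; the watched path is illustrative.
file="/etc/hosts"             # e.g. /root/sagar/ldapadd.sh in the question
ref="/tmp/lastseen.txt"       # stores the last seen ls -l line

current=$(ls -l "$file")
previous=$(cat "$ref" 2>/dev/null || true)
if [ "$current" != "$previous" ]; then
    echo "$file was modified"
    echo "$current" > "$ref"
fi
```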

I would use stat to see when the file was last modified:
#!/bin/bash
file=/root/sagar/ldapadd.sh   # the file to monitor
store=timestamp.txt           # where the last seen mtime is kept
# Note: "stat -f %Sm" is BSD/macOS syntax; on Linux use "stat -c %y" instead.
if [ ! -f "$store" ]
then
    stat -f %Sm -t %Y%m%d%H%M%S "$file" > "$store"
else
    timestamp=$(stat -f %Sm -t %Y%m%d%H%M%S "$file")
    filetime=$(cat "$store")
    if [ "$filetime" != "$timestamp" ]
    then
        echo "$file has been modified" >> /tmp/email.txt
        mail -s "File has changed" "email@domain.com" < /tmp/email.txt
        echo "$timestamp" > "$store"   # remember the new mtime
    fi
fi
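As noted in the script, `stat -f %Sm -t …` is the BSD/macOS form. Assuming a GNU/Linux system, the equivalent uses `-c`, and `%Y` prints the mtime as seconds since the epoch, which is easy to store and compare numerically:

```shell
# On GNU/Linux, stat takes -c instead of BSD's -f; %Y prints the mtime
# as seconds since the epoch, which is easy to store and compare.
file="/etc/hosts"                     # illustrative path
mtime=$(stat -c %Y "$file")
echo "$file last modified at epoch $mtime"
```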

Related

How to rename files with their folder name and copy all files into one folder

I am new to Unix, and I have a requirement like this:
There is an xml folder on the server. Every day, that folder receives a set of detail files for each employee, one subfolder per employee:
/server/user/home/xml/e1100123/Employeedetails.xml
/server/user/home/xml/e1100123/Employeesalary.xml
/server/user/home/xml/e1100123/Employeeleaves.xml
/server/user/home/xml/e1100123/Employeestatus.xml
/server/user/home/xml/e1100155/Employeedetails.xml
/server/user/home/xml/e1100155/Employeesalary.xml
/server/user/home/xml/e1100155/Employeeleaves.xml
/server/user/home/xml/e1100155/Employeestatus.xml
I have to group all employees in one folder, naming each file filename-employeenumber as shown below:
/server/user/home/xml/allemployees/Employeedetails-e1100123.xml
/server/user/home/xml/allemployees/Employeesalary-e1100123.xml
/server/user/home/xml/allemployees/Employeeleaves-e1100123.xml
/server/user/home/xml/allemployees/Employeestatus-e1100123.xml
/server/user/home/xml/allemployees/Employeedetails-e1100155.xml
/server/user/home/xml/allemployees/Employeesalary-e1100155.xml
/server/user/home/xml/allemployees/Employeeleaves-e1100155.xml
/server/user/home/xml/allemployees/Employeestatus-e1100155.xml
How do I write this in a Unix shell script?
Thank you,
Sai
A bare-bones version of your desired script could look like this:
script.sh
#!/bin/bash
base_dir=/server/user/home/xml
all_dir=${base_dir}/allemployees
e_dirs=$(ls -d ${base_dir}/e*)
e_files=(Employeedetails.xml Employeesalary.xml Employeestatus.xml Employeeleaves.xml)
if [ ! -d "${base_dir}" ]; then
echo "ERROR: ${base_dir} doesn't exist!"
exit 1
fi
if [ ! -d "${all_dir}" ]; then
echo "INFO: Creating all employees directory at ${all_dir}"
mkdir "${all_dir}"
fi
for e in ${e_dirs}; do
pushd "${e}" > /dev/null
for f in "${e_files[@]}"; do
if [ ! -f "${f}" ]; then
echo "ERROR: Missing ${f} file for employee $(basename "${e}")"
else
new_file=${all_dir}/${f%.*}-$(basename ${e}).xml
cp "${e}/${f}" "${new_file}"
fi
done
popd > /dev/null
done
Notes:
The script assumes that the xml/ folder contains only employee folders and that every folder name starts with a lowercase e, hence e_dirs=$(ls -d ${base_dir}/e*).
The script only handles the files Employeedetails.xml, Employeesalary.xml, Employeestatus.xml and Employeeleaves.xml in each employee folder, ignoring any others.
The script creates the allemployees/ folder if it does not exist.
The script makes copies instead of moving (non-destructive); you can use mv instead of cp.
I haven't tested the script! So please do not run it on live data :). Feel free to modify it.
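The same copy-and-rename idea can also be sketched with globs instead of parsing ls, under the same assumptions about the directory layout. The `collect_employees` helper name is mine, and the default path is the one from the question:

```shell
#!/bin/bash
# Glob-based variant of the copy-and-rename idea; collect_employees is a
# hypothetical helper, and the default path is the one from the question.
collect_employees() {
    local base_dir=${1:-/server/user/home/xml}
    local all_dir=${base_dir}/allemployees
    mkdir -p "${all_dir}"
    local f emp name
    for f in "${base_dir}"/e*/*.xml; do
        [ -f "$f" ] || continue                # skip if the glob matched nothing
        emp=$(basename "$(dirname "$f")")      # e.g. e1100123
        name=$(basename "$f" .xml)             # e.g. Employeedetails
        cp "$f" "${all_dir}/${name}-${emp}.xml"
    done
}
# collect_employees                            # or: collect_employees /some/dir
```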
Use this script:
#!/bin/bash
sourceDir=$1
destDir=$2
dirname=$(basename "$1")
if [ ! -d "$2" ] ; then
mkdir "$2"
fi
for I in "$sourceDir"/*
do
fname=$(basename "$I")
prefix=$(echo "$fname" | awk -F "." 'NF{NF-=1};1')
suffix=$(echo "$fname" | awk -F "." '{print $NF}')
target="$destDir/$prefix-$dirname.$suffix"
cp "$I" "$target"
# I use the cp (copy) command; you can replace it with mv (move).
echo "$target"
done
Usage:
./script.sh [SOURCEDIR] [DESTDIR]
Example:
./script.sh /server/user/home/xml/e1100123/ /server/user/home/xml/allemployees/

how to create incremental files with version in shell

How do I create files dynamically when I run a shell script?
Initially I need to check whether a file like CFT_AH-120_v1.txt exists in the /tmp folder, and create CFT_AH-120_v1.txt if it does not. The next time I run the shell script it should create CFT_AH-120_v2.txt, and each run should increment the version number of the file.
In the tmp folder I should end up with files like:
CFT_AH-120_v1.txt
CFT_AH-120_v2.txt
CFT_AH-120_v3.txt
I will get CFT_AH-120 dynamically from a variable.
#!/bin/bash
temp=$(find . -maxdepth 1 -name "CFT_AH-120-V*" | sort -V | tail -1)
if [ -e "$temp" ]
then
echo "ok"
echo "$temp"
fname="${temp%.*}"
echo "$fname"
temp1="${fname%[[:digit:]]*}$((${fname##*[[:alpha:]]} + 1))"
echo "$temp1"
touch "${temp1}.txt"
else
echo "nok"
touch CFT_AH-120-V1.txt
fi
I am not sure of your exact requirement; as I understand it, this is a simple approach:
#!/bin/bash
temp=$(find . -name "CFT_AH-120-V*" -type f | sort -V | tail -1)
echo "$temp"
if [ -z "$temp" ]; then
echo "File not found!"
touch CFT_AH-120-V1.txt
else
num_temp=$(echo "$temp" | cut -d '-' -f3- | sed 's/V//;s/\.txt//')
num_value_incr=$(expr "$num_temp" + 1)
touch "CFT_AH-120-V${num_value_incr}.txt"
echo "New file created!"
fi
Note: this code searches for the largest value after the "V" (e.g. V120) and increments based on that value; intermediate missing numbers won't be created. If you need the missing numbers to be filled in, the logic needs to be changed.
Hope this might help you!
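The increment step can also be isolated into a small helper, which makes it easy to test. This is a sketch: the `next_version` name is mine, and the prefix argument (e.g. CFT_AH-120_v) is just an example:

```shell
#!/bin/bash
# Sketch: compute the next versioned filename for a given prefix.
# next_version is a hypothetical helper; pass e.g. CFT_AH-120_v as the prefix.
next_version() {
    local prefix=$1
    local last n
    last=$(ls "${prefix}"*.txt 2>/dev/null | sort -V | tail -1)
    if [ -z "$last" ]; then
        n=1
    else
        n=${last##*"${prefix}"}        # strip everything up to the prefix
        n=${n%.txt}                    # strip the extension
        n=$((n + 1))
    fi
    echo "${prefix}${n}.txt"
}
# touch "$(next_version CFT_AH-120_v)"   # creates the next file in line
```

`sort -V` does a version-aware sort, so v10 correctly sorts after v9.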

Shell script to check a log file and print the errors that are found in it

I am pretty new to shell scripting. I am trying to write a script that checks a logfile for errors (the error strings are hardcoded) and prints the lines containing the errors. I am able to write the logic, but I need pointers on reading the file path from user input.
Appreciate the help, thanks.
Logic:
Accept the logfile path from the user
Check if the logfile is present or not
If present, search the file for lines containing the error string (e.g. ERROR, ORA)
Print the lines containing error strings, and also write the output to a logfile
Read the log file from user
Set error strings
search="ERROR"
set a path for output file
outfile="file1.txt"
Execution logic
find "$mydir" -type f -name "$filename" |while read file
do
RESULT=$(grep "$search" "$file")
if [[ ! -z $RESULT ]]
then
echo "Error(s) in $file: $RESULT" >> "$outfile"
fi
done
I'm not sure what you mean by "need pointers to read a file from user input". I assume "pointers" means script arguments.
You can use this script:
#!/bin/bash
expected=outfile
for f in "$@"
do
if [ "$expected" = outfile ]; then
OUTFILE=$1
expected=search
elif [ "$expected" = search ]; then
SEARCH=$2
expected=files
elif [[ -f $f ]]; then
RESULT=`grep "$SEARCH" "$f"`
if [ -n "$RESULT" ]; then
echo -e "\n"
echo "Error(s) in "$f":"
echo $RESULT
echo -e "\n" >> $OUTFILE
echo "Error(s) in "$f":" >> $OUTFILE
echo $RESULT >> $OUTFILE
fi
fi
done
Invoke with:
scriptname outfile search files
where:
scriptname: is the name of file containing the script.
outfile: the name of the output file
search: the text to be searched
files: one or more file names or file patterns.
Examples (I assume the name of the script is searcherror and it is in the system path):
searcherror errorsfound.txt primary /var/log/*.log
searcherror moreerrors.txt "ORA-06502" file1.log /var/log/*.log ./mylogs/*
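An equivalent that may be easier to follow handles the positional parameters with shift: the first argument is the output file, the second the search string, and the rest are the files to scan. A sketch (the `search_errors` name is mine), not tested against the exact output format above:

```shell
#!/bin/bash
# Sketch: shift-based argument handling for the same search-and-report task.
# search_errors is a hypothetical helper name.
search_errors() {
    local outfile=$1 search=$2 f result
    shift 2
    for f in "$@"; do
        [ -f "$f" ] || continue
        result=$(grep "$search" "$f" || true)   # grep exits 1 on no match
        if [ -n "$result" ]; then
            # tee -a prints to the terminal and appends to the output file
            { echo "Error(s) in $f:"; echo "$result"; } | tee -a "$outfile"
        fi
    done
}
# search_errors errorsfound.txt ERROR /var/log/*.log
```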

Delete .mp3 file if .m4a file with same name exists in directory

I ended up with many of my songs in both .m4a and .mp3 format. The duplicate .mp3s are in the same folders as their corresponding .m4as, and I'd like to delete the .mp3s. I'm trying to write a bash script to do that for me, but I'm very new to bash scripting and unsure of what I'm doing wrong. Here's my code:
#!/bin/bash
for f in ~/music/artist/album/* ; do
if [ -f f.m4a ] && [ -f f.mp3 ] ; then
rm f.mp3
echo "dup deleted"
fi
done
I'd really appreciate it if someone could figure out what's going wrong here. Thanks!
#!/bin/bash
# No need to loop through unrelated files (*.txt, directories, etc), right?
for f in ~/music/artist/album/*.m4a; do
f="${f%.*}"
if [[ -f ${f}.mp3 ]]; then
rm -f "${f}.mp3" && echo >&2 "${f}.mp3 deleted"
fi
done
#!/bin/bash
for f in ~/music/artist/album/* ; do
f="${f%.*}" # remove extension in simple cases (not tar.gz)
if [[ -f ${f}.m4a && -f ${f}.mp3 ]] ; then
rm -f "${f}.mp3" && echo "dup deleted"
fi
done

Curl not downloading files correctly

So I have been struggling with this task for eternity and still don't get what went wrong. This program doesn't seem to download ANY pdfs. At the same time I checked the file that stores final links - everything stored correctly. The $PDFURL also checked, stores correct values. Any bash fans ready to help?
#!/bin/sh
#create a temporary directory where all the work will be conducted
TMPDIR=`mktemp -d /tmp/chiheisen.XXXXXXXXXX`
echo $TMPDIR
#no arguments given - error
if [ "$#" == "0" ]; then
exit 1
fi
# argument given, but wrong format
URL="$1"
#URL regex
URL_REG='(https?|ftp|file)://[-A-Za-z0-9\+&@#/%?=~_|!:,.;]*[-A-Za-z0-9\+&@#/%=~_|]'
if [[ ! $URL =~ $URL_REG ]]; then
exit 1
fi
# go to directory created
cd $TMPDIR
#download the html page
curl -s "$1" > htmlfile.html
#grep only links into temp.txt
cat htmlfile.html | grep -o -E 'href="([^"#]+)\.pdf"' | cut -d'"' -f2 > temp.txt
# iterate through lines in the file and try to download
# the pdf files that are there
cat temp.txt | while read PDFURL; do
#if this is an absolute URL, download the file directly
if [[ $PDFURL == *http* ]]
then
curl -s -f -O $PDFURL
err="$?"
if [ "$err" -ne 0 ]
then
echo ERROR "$(basename $PDFURL)">&2
else
echo "$(basename $PDFURL)"
fi
else
#update url - it is always relative to the first parameter in script
PDFURLU="$1""/""$(basename $PDFURL)"
curl -s -f -O $PDFURLU
err="$?"
if [ "$err" -ne 0 ]
then
echo ERROR "$(basename $PDFURLU)">&2
else
echo "$(basename $PDFURLU)"
fi
fi
done
#delete the files
rm htmlfile.html
rm temp.txt
P.S. Another minor problem I have just spotted: maybe the problem is with the regex in the if? I would like to have something like this there:
if [[ $PDFURL =~ (https?|ftp|file):// ]]
but this doesn't work, even though the parentheses look correct to me, so why?
P.P.S. I also ran this script on URLs beginning with http, and the program gave the desired output. However, it still doesn't pass the test.
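One likely culprit: the script starts with #!/bin/sh, and [[ ... =~ ... ]] is a bash construct that a plain POSIX sh may not support. Under a bash shebang the regex test works, and keeping the pattern in an unquoted variable sidesteps the usual quoting pitfalls. A minimal sketch (the helper name is mine):

```shell
#!/bin/bash
# [[ ... =~ ... ]] requires bash; it is not guaranteed under #!/bin/sh.
# Keeping the pattern in an unquoted variable avoids quoting surprises.
url_re='^(https?|ftp|file)://'
check_url() {
    [[ $1 =~ $url_re ]]
}
# check_url "$PDFURL" && echo "absolute URL"
```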
