OS X script to send email when new file is created - macos

How can I monitor a directory, and send an email whenever a new file is created?
I currently have a script running daily which uses find to search for all files in a directory with a last modified date newer than an empty timestamp file:
#!/bin/bash
folderToWatch="/Path/to/files"
files=files.$$
find "$folderToWatch"/* -newer timestamp -print > "$files"
if [ -s "$files" ]
then
    # SEND THE EMAIL
    touch timestamp
fi
Unfortunately, this also sends emails when files are modified. I know the creation date is not stored in Unix, but this information is available in Finder, so can I somehow modify my script to use the creation date rather than the last modified date?

Snow Leopard's find command has a -Bnewer primary that compares the file's "birth time" (aka inode creation time) to the timestamp file's modify time, so it should do pretty much what you want. I'm not sure exactly when this feature was added; it's there in 10.6.4, not there in 10.4.11, and I don't have a 10.5 machine handy to look at. If you need this to work on an earlier version, you can use stat to fake it, something like this:
find "$folderToWatch"/* -newer timestamp -print | \
while IFS="" read file; do
if [[ $(stat -f %B "$file") > $(stat -f %m timestamp) ]]; then
printf "%s\n" "$file"
fi
done >"$files"

You could maintain a manifest:
new_manifest=/tmp/new_manifest.$$
(cd "$folderToWatch" && find .) > "$new_manifest"
diff manifest "$new_manifest" | perl -ne 'print "$1\n" if m{^> \./(.*)}' > "$files"
mv -f "$new_manifest" manifest

You may be interested in looking at the change time.
if test `find "text.txt" -cmin +120`
then
echo old enough
fi
See: How do I check in Bash whether a file was created more than x time ago

Related

Identify the files year wise and delete from a dir in unix

I need to list the files which were created in a specific year and then delete them; the year should be the input.
I tried with date and it works for me, but I am not able to convert that date to a year for comparison in the loop to get the list of files.
The code below gives me the 05/07 files, but I want to list the files which were created in 2022, 2021, etc.:
for file in /tmp/abc*txt ; do
[ "$(date -I -r "$file")" == "2022-05-07" ] && ls -lstr "$file"
done
If you end up doing ls -l anyway, you might just parse the date information from the output. (However, generally don't use ls in scripts.)
ls -ltr | awk '$8 ~ /^202[01]$/'
date -r is not portable, though if you have it, you could do
for file in /tmp/abc*txt ; do
case $(date -I -r "$file") in
2020-* | 2021-* ) ls -l "$file";;
esac
done
(The -t and -r flags to ls have no meaning when you are listing a single file anyway.)
If you don't, the tool of choice would be stat, but it too has portability issues; the precise options to get the information you want will vary between platforms. On Linux, try
for file in /tmp/abc*txt ; do
case $(LC_ALL=C stat -c %y "$file") in
2020-* | 2021-* ) ls -l "$file";;
esac
done
On BSD (including MacOS) try stat -f %Sm -t %Y "$file" to get just the year.
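That would make the year-selection loop something like this (a sketch along the same lines, using the BSD stat format flags mentioned above):
for file in /tmp/abc*txt ; do
    # %Sm prints the modification time as a string, -t %Y formats it as the 4-digit year
    case $(stat -f %Sm -t %Y "$file") in
        2020 | 2021 ) ls -l "$file";;
    esac
done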
If you need proper portability, perhaps look for a scripting language with wide support, such as Perl or Python. The stat() system call is the fundamental resource for getting metainformation about a file. The find command also has some features for finding files by age, though its default behavior is to traverse subdirectories, too (you can inhibit that with -maxdepth 1; but then the options to select files by age are again not entirely POSIX portable).
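For example, with GNU find (not POSIX, as noted) the year test can be pushed into find itself via -newermt; the path and name pattern below are the ones from the question:
# select files last modified during 2020 or 2021
find /tmp -maxdepth 1 -type f -name 'abc*txt' \
    -newermt '2020-01-01' ! -newermt '2022-01-01' -exec ls -l {} +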
To list out files which were last modified in a specific year and then to delete those files, you could use a combination of the find -newer and touch commands:
# given a year as input
year=2022
stampdir=$(mktemp -d)
touch -t ${year}01010000 "$stampdir"/beginning
touch -t $((year+1))01010000 "$stampdir"/end
find /tmp -name 'abc*txt' -type f -newer "$stampdir/beginning" ! -newer "$stampdir/end" -print -delete
rm -r "$stampdir"
First, create a temporary working directory to store the timestamp files; we don't want the find command to accidentally find them. Be careful here; mktemp will probably create a directory in /tmp; this use-case is safe only because we're naming the timestamp files such that they don't match the "abc*txt" pattern from the question.
Next, create bordering timestamp files with the touch command: one stamped at the first moment of the given year, named "beginning", and another stamped at the first moment of the following year, named "end".
Then run the find command; here's the breakdown:
start in /tmp (from the question)
files named with the 'abc*txt' pattern (from the question)
only files (not directories, etc -- from the question)
newer than the beginning timestamp file
not newer (i.e. older) than the end timestamp file
if found, print the filename and then delete it
Finally, clean up the temporary working directory that we created.
Try this:
For checking which files are picked up:
echo -e "Give Year :"
read yr
ls -ltr /tmp | grep "^-" | grep -v ":" | grep "$yr" | awk '{ print $9 }'
To delete the picked files instead of listing them, pipe the names on to rm, e.g. append | xargs -I{} rm /tmp/{} to the command above (awk itself has no rm statement; inside awk you would have to call system() instead of print).

GREP date from email header and make it the files creation date

I am on Mac Terminal and want to "grep" a string (which is a UNIX timestamp) out of an email header, convert that into a format the OS can work with and make that the creation date of the file. I want to do that recursively for all mails inside a folder (with multiple possible subfolders).
The structure would probably look something like this:
#!/bin/bash
for i in `ls`
do
# Find the date field (X-Delivery-Time) inside an email header and grep the UNIX timestamp
# convert timestamp to a format the OS can work with
# overwrite the existing creation date with the new one
done
The mail headers look like this:
X-Envelope-From: <some@mail.com>
X-Envelope-To: <my@mail.com>
X-Delivery-Time: 1535436541
...
Some background: Apple Mail uses the date a file was created as the date displayed within Apple Mail. That’s why after moving mails from one server to another all mails now display the same date which makes sorting impossible.
As I am new to Terminal/Bash any help is appreciated. Thanks
On a Mac this should work, but since I have no Mac I cannot test it myself. I assume your email files have the .emlx extension.
For a single directory:
for i in ./*.emlx; do
unixTime=$(grep -m1 '^X-Delivery-Time:' "$i" | grep -Eo '[0-9]+') &&
humanTime=$(date -r "$unixTime" +%Y%m%d%H%M.%S) &&
touch -t "$humanTime" "$i"
done
For a whole directory tree:
fixdate() {
unixTime=$(grep -m1 '^X-Delivery-Time:' "$1" | grep -Eo '[0-9]+') &&
humanTime=$(date -r "$unixTime" +%Y%m%d%H%M.%S) &&
touch -t "$humanTime" "$1"
}
export -f fixdate
find . -name '*.emlx' -exec bash -c 'fixdate "$@"' . {} \;
or, if you have bash 4 or higher installed (macOS still uses 3 by default)
shopt -s globstar
for i in ./**/*.emlx; do
unixTime=$(grep -m1 '^X-Delivery-Time:' "$i" | grep -Eo '[0-9]+') &&
humanTime=$(date -r "$unixTime" +%Y%m%d%H%M.%S) &&
touch -t "$humanTime" "$i"
done
What follows assumes you are using the default macOS utilities (touch, date, ...). As they are quite outdated, some adjustments will be needed if you use more recent versions (e.g. from MacPorts or Homebrew). It also assumes that you are using bash.
If you have sub-folders ls is not the right tool. And anyway, the output of ls is not for computers, it is for humans. So, the first thing to do is find all email files. Guess what? The utility that does this is named find:
$ find . -type f -name '*.emlx'
foo/bar.emlx
baz.emlx
...
searches for regular files (-type f), starting from the current directory (.), whose name is anything.emlx (-name '*.emlx'). Adapt to your situation. If all files are email files you can skip the -name ... part.
Next we need to loop over all these files and process each of them. This is a bit more complex than for f in ... for several reasons (large number of files, file names with spaces...) A robust way to do this is to redirect the output of a find command to a while loop:
while IFS= read -r -d '' f; do
<process file "$f">
done < <(find . -type f -name '*.emlx' -print0)
The -print0 option of find is used to separate the file names with a null character instead of the default newline character. The < <(find...) part is a way to redirect the output of find to the input of the while loop. The while IFS= read -r -d '' f; do reads each file name produced by find, stores it in shell variable f, preserving the leading and trailing spaces if any (IFS=), the backslashes (-r) and using the null character as separator (-d '').
Now we must code the processing of each file. Let's first retrieve the delivery time, assuming it is always the second word of the last line starting with X-Delivery-Time::
awk '/^X-Delivery-Time:/ {t = $2} END {print t}' "$f"
does that. If you don't know awk already, it's time to learn a bit of it: it's one of the very useful Swiss Army knives of text processing (sed is another). But let's improve it a bit so that it returns the first encountered delivery time instead of the last, stops as soon as it has found it, and also checks that the timestamp is a real timestamp (digits):
awk '/^X-Delivery-Time:[[:space:]]+[[:digit:]]+$/ {print $2; exit}' "$f"
The [[:space:]]+ part of the regular expression matches 1 or more spaces, tabs,... and the [[:digit:]]+ matches 1 or more digits. ^ and $ match the beginning and the end of the line, respectively. The result can be assigned to a shell variable:
t="$(awk '/^X-Delivery-Time:[[:space:]]+[[:digit:]]+$/ {print $2; exit}' "$f")"
Note that if there was no match the t variable will store the empty string. We will use this later to skip such files.
Once we have this delivery time, which looks like a UNIX timestamp (seconds since 1970/01/01) in your example, we must use it to change the last modification time of the email file. The command that does this is touch:
$ man touch
...
touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...
...
Unfortunately touch wants a time in the CCYYMMDDhhmm.SS format. No worry, the date utility can be used to convert a UNIX timestamp in any format we like. For instance, with your example timestamp (1535436541):
$ date -r 1535436541 +%Y%m%d%H%M.%S
201808280809.01
We are almost done:
while IFS= read -r -d '' f; do
# uncomment for debugging
# echo "processing $f"
t="$(awk '/^X-Delivery-Time:[[:space:]]+[[:digit:]]+$/ {print $2; exit}' "$f")"
if [ -z "$t" ]; then
echo "no delivery time found in $f"
continue
fi
# uncomment for debugging
# echo touch -t "$(date -r "$t" +%Y%m%d%H%M.%S)" "$f"
touch -t "$(date -r "$t" +%Y%m%d%H%M.%S)" "$f"
done < <(find . -type f -name '*.emlx' -print0)
Note how we test if t is the empty string (if [ -z "$t" ]). If it is, we print a message and jump to the next file (continue). Just put all this in a file with a shebang line and run...
If, instead of the X-Delivery-Time field, you must use a Date field with a more complex and variable format (e.g. Date: Mon, 11 Jun 2018 10:36:14 +0200), the best would be to install a decently recent version of touch from the coreutils package of MacPorts or Homebrew (the awk command below also relies on gensub, a GNU awk extension). Then:
while IFS= read -r -d '' f; do
t="$(awk '/^Date:/ {print gensub(/^Date:[[:space:]+](.*)$/,"\\1","1"); exit}' "$f")"
if [ -z "$t" ]; then
echo "no delivery time found in $f"
continue
fi
touch -d "$t" "$f"
done < <(find . -type f -name '*.emlx' -print0)
The awk command is slightly more complex. It prints the matching line without the Date: prefix. The following sed command would do the same in a more compact form but would not really be more readable:
t="$(sed -rn 's/^Date:\s*(.*)/\1/p;Ta;q;:a' "$f")"

Bash script to copy *.log files into a new directory

In my folder there are different files like this:
stats.log
move_2021-05-24.log
sync_2021-05-24.log
application.log
I want to copy all *.log files dated with a day other than today to a specific folder.
My current script looks like this, but it does not work as I thought. I think it is currently moving all log files, not just the log files with a date older than today's date.
cd /share/CACHEDEV1_DATA/app
for file in *.log
do
day=$(echo ${file} | cut -d"-" -f3)
now="$(date +'%d')"
if [ "$day" != "$now" ];
then
mv ${file} ~/share/CACHEDEV1_DATA/rclone/logs/
fi
done
I would be glad if I could get advice on how my script would need to look like to work correctly.
I hope you consider logrotate. It can do everything you need and more.
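If logrotate is available on that box, a minimal rule might look something like this (the paths are the ones from the question; the rotation settings are only an assumption to illustrate the idea):
/share/CACHEDEV1_DATA/app/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    # olddir must already exist and be on the same filesystem
    olddir /share/CACHEDEV1_DATA/rclone/logs
}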
But if you want to roll your own, here is how you can find files older than a day and move them. Note: this will overwrite files with the same name at the destination. I added an echo statement before mv so you can see if it looks good to you.
find /share/CACHEDEV1_DATA/app -maxdepth 1 -type f -name '*.log' -mtime +0 -print0 | \
while IFS= read -rd $'\0' file; do
    # only echoed for now; drop the echo once the output looks right
    echo mv "$file" ~/share/CACHEDEV1_DATA/rclone/logs/
done

Issue with the for loop not working correctly

I have multiple jobs which run based on file indicators.
I am looking to build a Unix script to flag each job based on whether its file is present for the current day or the previous day.
I am maintaining a csv file with the records below, including an Interval column (in hours).
If the difference between the current time and the file's modification time (in hours) is more than the interval in the csv file, the file will be flagged as an old-day file.
-sh-4.2$ cat scorecard_file_details.csv
Scorecard_Name,Path,FileName,Time_Interval_HRS
Scorecard_LDO_ABC_BTS,/NAS/IDQ/Bank_SEN,ABC.EXT,12
Scorecard_LDO_PQR_BTS,/NAS/IDQ/Bank_Prof,PQR.EXT,6
The files arrive at different paths, given by the Path column in the csv file above.
Now, I want to match the file name from the csv with the file at its corresponding path and write the result, maybe to another file (filename, path, flag).
I have come up with the script below, but it is currently not returning anything at the highlighted (bold) step (it's incomplete as of now).
Can anyone please explain why the for loop below is not returning anything, although the cat works fine?
Also, any help with the logic is appreciated.
set -x
CSV_File_Path=/NAS/Target/DQ
**for FileName in $(cat scorecard_file_details.csv | awk -F "," '{ print $3 }'); do
echo $Filename**
CURTIME=$(date +%s)
File_Path=`awk '{ print $2 }'` $FileName
cd $File_Path
Files_in_Path=`ls -ltr | awk '{ print $9 }'`
for files in $Files_in_Path ; do
if [[ "$Files_in_Path" = $FileName ]]; then
TIMEDIFF=echo $(( ($(date +%s) - $(stat $files -c %Y)) / 3600 ))
echo $files","$TIMEDIFF >> /NAS/Target/DQ/file_with_difference.txt
else
echo "File is not present"
fi
done
<<Logic to flag based on time difference and interval>>
done
set +x
I highly suggest you redesign your script to use the find command with its -printf format option and a combination of the -mtime | -ctime | -mmin | -cmin filters.
Using the find command you can filter on a combination of time differences, type, name, path and more, and also attach actions to the found files.
Please read an intro tutorial here, and the detailed manpage here.
In short, you can push the time difference calculation into the find command, and then operate on the found files.
For example:
$ find /tmp -mmin +60 -and -mmin -150 -printf "%p %AF %AT \n"
Find files in the /tmp folder.
-mmin +60 -and -mmin -150 selects files modified more than 60 minutes ago and less than 150 minutes ago.
Print each found file as: file path = %p, last access date = %AF, last access time = %AT.
Output:
/tmp/Low 2020-11-12 20:16:45.1960274000
/tmp/StructuredQuery.log 2020-11-12 20:55:00.3057165000
/tmp/~DF3465C10364E2CAFE.TMP 2020-11-12 20:16:45.7578495000
/tmp/~DFAC3AC652357DBBED.TMP 2020-11-12 20:16:46.1726618000
/tmp/~DFC2B1A30DCA4CA52A.TMP 2020-11-12 20:16:46.3941610000
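Applied to the question's CSV, that might look something like the sketch below (assumes GNU find and a Unix-format csv; the output file name and the OLD/CURRENT/MISSING flags are just illustrative assumptions):
#!/bin/bash
# Sketch: let find -mmin do the age test instead of computing the
# time difference by hand. Reads the csv shown in the question.
out=/NAS/Target/DQ/file_with_flags.txt    # assumed output path
: > "$out"
tail -n +2 /NAS/Target/DQ/scorecard_file_details.csv |
while IFS=, read -r scorecard path filename interval_hrs; do
    if [ ! -e "$path/$filename" ]; then
        flag="MISSING"
    elif find "$path" -maxdepth 1 -name "$filename" -mmin +"$((interval_hrs * 60))" | grep -q .; then
        flag="OLD"          # older than the allowed interval
    else
        flag="CURRENT"      # present and recent enough
    fi
    printf '%s,%s,%s\n' "$filename" "$path" "$flag" >> "$out"
done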

Shell script to archive & delete files older than 5 days based on created date of the files

I am trying to compress 5 days' worth of logs at a time, move the compressed files to another location, and delete the log files from the original location. I need a bash script to accomplish this. I got the files compressed using the command below, but I am not able to move them to the archive folder. I also need to compress based on the created date of the files. Right now it is compressing all the files starting with a specific name.
#!/bin/bash
cd "C:\Users\ann\logs"
for filename in acap*.log*; do
# this syntax emits the value in lowercase: ${var,,*} (bash version 4)
mkdir -p archive
gzip "$filename_.zip" "$filename"
mv "$filename" archive
done
#!/bin/bash
mkdir -p archive
for file in $(find . -mtime +3 -type f -printf "%f ")
do
if [[ "$file" =~ ^acap.*\.log$ ]]
then
tar -czf archive/${file}.tar.gz $file
rm $file
fi
done
This finds all files in the current directory older than three days whose names match the regex, compresses each one into its own tar archive under archive/, and then deletes the original file.
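A slightly more robust variant of the same idea (a sketch, untested: it lets find match the name, uses -mtime +5 to match the "older than 5 days" requirement, and survives filenames with spaces):
#!/bin/bash
# The log directory below is a placeholder; the question's path is a
# Windows-style path that won't work as-is in bash.
cd /path/to/logs || exit 1
mkdir -p archive
find . -maxdepth 1 -type f -name 'acap*.log' -mtime +5 -print0 |
while IFS= read -r -d '' file; do
    # archive each log into its own .tar.gz, then remove the original
    tar -czf "archive/${file##*/}.tar.gz" "$file" && rm "$file"
done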
