In a bash script, if I want to remove files older than 15 days in a directory, I can run:
find "$DIR" -type f -mtime +15 -exec rm {} \;
Can someone help me with a bash script to remove files older than 15 months in a directory?
Is -mtime the same as -ctime here?
According to the man page:
-mtime n[smhdw]
    If no units are specified, this primary evaluates to true if the difference between the file last modification time and the time find was started, rounded up to the next full 24-hour period, is n 24-hour periods.
    If units are specified, this primary evaluates to true if the difference between the file last modification time and the time find was started is exactly n units. Please refer to the -atime primary description for information on supported time units.
Then, at -atime:
-atime n[smhdw]
    If no units are specified, this primary evaluates to true if the difference between the file last access time and the time find was started, rounded up to the next full 24-hour period, is n 24-hour periods.
    If units are specified, this primary evaluates to true if the difference between the file last access time and the time find was started is exactly n units. Possible time units are as follows:
        s   second
        m   minute (60 seconds)
        h   hour (60 minutes)
        d   day (24 hours)
        w   week (7 days)
    Any number of units may be combined in one -atime argument, for example, ``-atime -1h30m''. Units are probably only useful when used in conjunction with the + or - modifier.
So we have weeks. Approximating a month as 4 weeks, 15 months is about 60 weeks (a slight underestimate, since an average month is closer to 4.3 weeks):
find "$DIR" -type f -mtime +60w -exec rm {} \;
Use 450 (= 15 * 30) as the -mtime parameter:
find "$DIR" -type f -mtime +450 -exec rm {} \;
tmpwatch (http://linux.die.net/man/8/tmpwatch) is designed for exactly this application. It is typically run from cron.
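For example, a possible daily cron entry (the /srv/data path is just a placeholder, and this assumes tmpwatch accepts a d suffix for days and the --mtime switch to test modification time rather than access time):
# /etc/crontab: at 03:00 every day, remove files under /srv/data whose
# modification time is older than 450 days (roughly 15 months)
0 3 * * * root /usr/sbin/tmpwatch --mtime 450d /srv/data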
A funny possibility: you can touch a temporary file with a timestamp of 15 months ago and use it with the (negated) -newer test of find:
a=$(mktemp)
touch -d '15 months ago' -- "$a"
find "$DIR" -type f \! -newer "$a" -exec rm {} +
rm -- "$a"
This, of course, assumes that your touch and find have these capabilities.
If there's a chance that mktemp creates the file in subdirectories of your directory $DIR, it will get very messy, as the file referred to by "$a" could be deleted before the end of the process. In this case, to be 100% sure, use the (negated) -samefile test:
find "$DIR" -type f \! -newer "$a" \! -samefile "$a" -exec rm {} +
You can of course use the -delete command of find if your find supports it. That would give:
a=$(mktemp)
touch -d '15 months ago' -- "$a"
find "$DIR" -type f \! -newer "$a" \! -samefile "$a" -delete
rm -- "$a"
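If you want to double-check before anything is removed, the same expression with -print instead of -delete gives a dry run:
find "$DIR" -type f \! -newer "$a" \! -samefile "$a" -print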
Need to delete log files older than 60 days and compress files that are older than 30 days but less than 60 days. I have to remove and compress files from the two paths mentioned in PURGE_DIR_PATH.
I also have to take the output of the find command and redirect it to a log file; basically, I need to create an entry in the log file whenever a file is deleted. How can I achieve this?
I also have to validate whether the directory path is valid or not, and put a message in the log file saying whether the directory is valid.
I have written a shell script, but it doesn't cover all the scenarios. This is my first shell script and I need some help. How do I keep just one variable, log_retention, and use it to compress files where the condition would be >30 days and <60 days? How do I validate whether the directories are valid? Is my if condition checking that?
Please let me know.
#!/bin/bash
LOG_RETENTION=60
WEB_HOME="/web/local/artifacts"
ENG_DIR="$(dirname $0)"
PURGE_DIR_PATH="$(WEB_HOME)/backup/csvs $(WEB_HOME)/home/archives"
if[[ -d /PURGE_DIR_PATH]] then echo "/PURGE_DIR_PATH exists on your filesystem." fi
for dir_name in ${PURGE_DIR_PATH}
do
echo $PURGE_DIR_PATH
find ${dir_name} -type f -name "*.csv" -mtime +${LOG_RETENTION} -exec ls -l {} \;
find ${dir_name} -type f -name "*.csv" -mtime +${LOG_RETENTION} -exec rm {} \;
done
Off the top of my head -
#!/bin/bash
CSV_DELETE=60
CSV_COMPRESS=30
WEB_HOME="/web/local/artifacts"
PURGE_DIR_PATH=( "${WEB_HOME}/backup/csvs" "${WEB_HOME}/home/archives" ) # array, not single string
# eliminate the oldest
find "${PURGE_DIR_PATH[@]}" -type f -name "*.csv" -mtime +"${CSV_DELETE}" -print0 |
  xargs -0 -n 50 -P 100 rm -f   # up to 100 parallel rm invocations, 50 files each
# compress the old-enough after the oldest are gone
find "${PURGE_DIR_PATH[@]}" -type f -name "*.csv" -mtime +"${CSV_COMPRESS}" -print0 |
  xargs -0 -n 50 -P 100 gzip    # up to 100 parallel gzip invocations, 50 files each
Shouldn't need loops.
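For the logging and directory-validation parts of the question, here is a minimal sketch (the log file location and messages are assumptions, and it relies on find supporting -print and -delete together, as GNU find does):
#!/bin/bash
CSV_DELETE=60
WEB_HOME="/web/local/artifacts"
LOG_FILE="${WEB_HOME}/purge.log"   # assumed log location
PURGE_DIR_PATH=( "${WEB_HOME}/backup/csvs" "${WEB_HOME}/home/archives" )

for dir_name in "${PURGE_DIR_PATH[@]}"; do
    if [[ -d "$dir_name" ]]; then
        echo "$(date '+%F %T') $dir_name is a valid directory" >> "$LOG_FILE"
        # -print before -delete records each file as it is removed
        find "$dir_name" -type f -name "*.csv" -mtime +"$CSV_DELETE" -print -delete >> "$LOG_FILE"
    else
        echo "$(date '+%F %T') $dir_name does not exist" >> "$LOG_FILE"
    fi
done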
I've got a working script:
#!/bin/bash
find ~/.backups/ -type f -name '*.tgz' -mtime +0.5 -exec rm {} \;
Nothing wrong with it. Just wondering what -mtime is and how it's calculated. Can't seem to get a hit on Google.
From man find:
-mtime n
File's data was last modified n*24 hours ago. See the
comments for -atime to understand how rounding affects the
interpretation of file modification times.
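A quick way to see that n*24-hour behaviour for yourself (GNU touch and find assumed; the file names are only illustrative):
touch -d '25 hours ago' f25h    # a bit over one 24-hour period old
touch -d '49 hours ago' f49h    # a bit over two 24-hour periods old
find . -maxdepth 1 -name 'f*h' -mtime 1     # matches f25h only
find . -maxdepth 1 -name 'f*h' -mtime +1    # matches f49h only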
I'm trying to find all files in a given folder that were modified within a certain time frame, say between 5 and 15 minutes ago.
Currently I can find anything modified say up to 15 minutes ago by using find -cmin
#!/bin/bash
minutes="15"
FILETYPES=`find . *PATTERN*.txt* -maxdepth 0 -type f -cmin -$minutes`
How do I give it a time frame?
Try this:
find . -name '*pattern.txt' -maxdepth 1 -type f \( -mmin -15 -a -mmin +5 \)
Notes
the parentheses are not mandatory here with AND (-a), but they are necessary with OR (-o); see the example after these notes
always use single quotes around the pattern to prevent shell expansion of the wildcard
to give a pattern, use -name or -iname
for the date/hour, -mmin is the way to go for minutes and -mtime for days.
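To illustrate the OR case from the notes, a hedged example (the extra -mtime +7 condition is made up purely for demonstration):
# Without the parentheses the implicit ANDs bind tighter than -o,
# so -type f and -name would apply to only one branch
find . -maxdepth 1 -type f \( -mmin -15 -o -mtime +7 \) -name '*pattern.txt'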
Using find, you can add additional conditions to create the range. Each condition is implied as "and" unless -o is used. You also want -mmin instead of -cmin for modified time (but they are often the same).
find . -maxdepth 1 -type f -name '*PATTERN*.txt*' -mmin -15 -mmin +5
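If, as in the question, you want to capture the results in a variable, a bash array filled by mapfile is a safer container than a plain string (a sketch; it assumes bash 4+ and file names without newlines, and the array name is illustrative):
mapfile -t recent_files < <(find . -maxdepth 1 -type f -name '*PATTERN*.txt*' -mmin -15 -mmin +5)
printf 'found: %s\n' "${recent_files[@]}"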
How do I delete files created on a specific date?
ls -ltr | grep "Nov 22" | rm
Why is this not working?
There are three problems with your code:
rm takes its arguments on its command line, but you're not passing any file name on the command line. You are passing data on the standard input, but rm doesn't read that¹. There are ways around this.
The output of ls -ltr | grep "Nov 22" doesn't just consist of file names, it consists of mangled file names plus a bunch of other information such as the time.
The grep filter won't just catch files modified on November 22; it will also catch files whose name contains Nov 22, amongst others. It also won't catch the files you want in a locale that displays dates differently.
The find command lets you search for files according to criteria such as their name matching a certain pattern or their date being in a certain range. For example, the following command will list the files in the current directory and its subdirectories that were modified within the last two days (less than two 24-hour periods before find started). Replace echo by rm -- once you've checked you have the right files.
find . -type f -mtime -2 -exec echo {} +
With GNU find, such as found on Linux and Cygwin, there are a few options that might do a better job:
-maxdepth 1 (must be specified before other criteria) limits the search to the specified directory (i.e. it doesn't recurse).
-mmin -43 matches files modified at most 42 minutes ago.
-newermt "Nov 22" matches files modified on or after November 22 (local time).
Thus:
find . -maxdepth 1 -type f -newermt "Nov 22" \! -newermt "Nov 23" -exec echo {} +
or, further abbreviated:
find -maxdepth 1 -type f -newermt "Nov 22" \! -newermt "Nov 23" -delete
With zsh, the m glob qualifier limits a pattern to files modified within a certain relative date range. For example, *(m1) expands to the files modified within the last 24 hours; *(m-3) expands to the files modified within the last 48 hours (first the number of days is rounded up to an integer, then - denotes a strict inequality); *(mm-6) expands to the files modified within the last 5 minutes, and so on.
¹ rm -i (and plain rm for read-only files) uses it to read a confirmation y before deletion.
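As a concrete use of those qualifiers, a dry run that only lists regular files modified within the last 5 minutes (zsh only; drop the echo to actually delete):
# the . qualifier restricts to regular files, mm-6 means "within the last
# 5 minutes" per the rounding described above
echo rm -- *(.mm-6)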
If you need to use find for any given day, this might help:
touch -d "2010-11-21 23:59:59" /tmp/date.start;
touch -d "2010-11-23 00:00:00" /tmp/date.end;
find ./ -type f -newer /tmp/date.start ! -newer /tmp/date.end -exec rm {} \;
If your find supports it, as GNU find does, you can use:
find -type f -newermt "Nov 21" ! -newermt "Nov 22" -delete
which will find files that were modified on November 21.
You would be better off using the find command:
find . -type f -mtime 1 -exec echo rm {} +
This will delete all files one day old in the current directory, recursing down into its sub-directories. You can use 0 if you want to delete files created today. Once you are satisfied with the output, remove the echo and the files will truly be deleted.
for i in `ls -ltr | grep "Nov 23" | awk '{print $9}'`
do
rm -rf "$i"
done
Maybe better than the previous one:
#!/bin/bash
for i in $(ls -ltr | grep "Nov 23" | awk '{print $9}')
do
rm -rf "$i"
done
I'm writing a bash script that needs to delete old files.
It's currently implemented using :
find $LOCATION -name $REQUIRED_FILES -type f -mtime +1 -delete
This will delete all of the files older than 1 day.
However, what if I need a finer resolution than 1 day, say 6 hours old? Is there a nice clean way to do it, like there is using find and -mtime?
Does your find have the -mmin option? That can let you test the number of mins since last modification:
find $LOCATION -name $REQUIRED_FILES -type f -mmin +360 -delete
Or maybe look at using tmpwatch to do the same job. phjr also recommended tmpreaper in the comments.
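If tmpwatch is available, a roughly equivalent invocation might be the following (assuming its default time unit of hours and its --mtime switch; note that, unlike the find command above, it does not filter by $REQUIRED_FILES):
tmpwatch --mtime 6 "$LOCATION"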
Here is the approach that worked for me (and I don't see it being used above):
$ find /path/to/the/folder -name '*.*' -mmin +59 -delete > /dev/null
This deletes all the files older than 59 minutes while leaving the folders intact.
You could do this trick: create a file with a timestamp of 1 hour ago, and use find's -newer file argument, negated with ! so that it matches the older files.
(Or use touch -t to create such a file).
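A minimal sketch of that trick, adapted to the 6-hour case from the question (GNU touch assumed for the -d option; with a non-GNU touch you would compute a timestamp for -t instead):
marker=$(mktemp)
touch -d '6 hours ago' "$marker"
# ! -newer matches files not modified more recently than the marker,
# i.e. files at least 6 hours old
find "$LOCATION" -name "$REQUIRED_FILES" -type f ! -newer "$marker" -delete
rm -f "$marker"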
-mmin is for minutes.
Try looking at the man page.
man find
for more types.
For SunOS 5.10:
Example 6: Selecting a File Using 24-hour Mode
The descriptions of -atime, -ctime, and -mtime use the terminology n ``24-hour periods''. For example, a file accessed at 23:59 is selected by:
example% find . -atime -1 -print
at 00:01 the next day (less than 24 hours later, not more than one day ago). The midnight boundary between days has no effect on the 24-hour calculation.
If you do not have -mmin in your version of find, then -mtime -0.041667 gets pretty close to "within the last hour" (0.041667 is roughly 1/24, i.e. one hour expressed in days). So in your case, use -mtime +(X * 0.041667), where X is the age in hours. If X is 6 hours, then:
find . -mtime +0.25 -ls
works because 24 hours * 0.25 = 6 hours.
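If you would rather compute the fraction than hard-code it, a small sketch along the same lines (hours is a made-up variable; this still relies on your find accepting fractional -mtime values, as above):
hours=6
frac=$(awk -v h="$hours" 'BEGIN { printf "%.6f", h/24 }')
find . -mtime +"$frac" -ls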
If one's find does not have -mmin and if one also is stuck with a find that accepts only integer values for -mtime, then all is not necessarily lost if one considers that "older than" is similar to "not newer than".
If we were able to create a file that has an mtime of our cut-off time, we can ask find to locate the files that are "not newer than" our reference file.
To create a file that has the correct time stamp is a bit involved, because a system that doesn't have an adequate find probably also has a less-than-capable date command that cannot do things like: date +%Y%m%d%H%M%S -d "6 hours ago".
Fortunately, other old tools can manage this, albeit in a more unwieldy way.
To begin finding a way to delete files that are over six hours old, we first have to find the time that is six hours ago. Consider that six hours is 21600 seconds:
$ date && perl -e '@d=localtime time()-21600; \
printf "%4d%02d%02d%02d%02d.%02d\n", $d[5]+1900,$d[4]+1,$d[3],$d[2],$d[1],$d[0]'
Thu Apr 16 04:50:57 CDT 2020
202004152250.57
Since the perl statement produces the date/time information we need, use it to create a reference file that is exactly six hours old:
$ date && touch -t `perl -e '@d=localtime time()-21600; \
printf "%4d%02d%02d%02d%02d.%02d\n", \
$d[5]+1900,$d[4]+1,$d[3],$d[2],$d[1],$d[0]'` ref_file && ls -l ref_file
Thu Apr 16 04:53:54 CDT 2020
-rw-rw-rw- 1 root sys 0 Apr 15 22:53 ref_file
Now that we have a reference file exactly six hours old, the "old UNIX" solution for "delete all files older than six hours" becomes something along the lines of:
$ find . -type f ! -newer ref_file -a ! -name ref_file -exec rm -f "{}" \;
It might also be a good idea to clean up our reference file...
$ rm -f ref_file
Here is one way to do what @iconoclast was wondering about in their comment on another answer.
Use a user crontab, or /etc/crontab, to create the file /tmp/hour:
# m h dom mon dow user command
0 * * * * root /usr/bin/touch /tmp/hour > /dev/null 2>&1
and then use this to run your command:
find /tmp/ -daystart -maxdepth 1 -not -newer /tmp/hour -type f -name "for_one_hour_files*" -exec do_something {} \;
find "$PATH" -name "${log_prefix}*${log_ext}" -mmin +"$num_mins" -exec rm -f {} \;