Ksh command to find files modified during last hour - shell

Can you please tell me the most efficient command to find files modified in a directory in the last hour (more precisely, the last 60 minutes)?
Or
If that's not a good approach, then please tell me how I can compare the current time with the timestamp of a file's creation/modification.
Thanks

Use find's option -newermt:
find . -newermt 'now -1 hour'
Or simply
find . -newermt '-1 hour'
Read more about the usage in find's manual.
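If your find is GNU find (or any find with -mmin), the same question can be answered without date arithmetic at all; a minimal sketch:
find . -type f -mmin -60
-mmin -60 matches files whose contents were modified less than 60 minutes ago.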
Another way for ksh:
BEFORE=$(date -d '-1 hour' '+%s'); find . -type f -printf '%T@ %p\n' | while read -r TS FILE; do TS=${TS%.*}; [[ $TS -ge $BEFORE ]] && echo "$FILE"; done
Without using -d:
BEFORE=$(( $(date '+%s') - 3600 )); find . -type f -printf '%T@ %p\n' | while read -r TS FILE; do TS=${TS%.*}; [[ $TS -ge $BEFORE ]] && echo "$FILE"; done

This will work on most modern POSIX-conforming systems, using ksh[nn] or bash.
filetime=$(perl -e'
use POSIX qw(strftime);
# one hour (and one second) ago, formatted for touch -t: CCYYMMDDhhmm.SS
$now_string = strftime "%Y%m%d%H%M.%S", localtime(time - 3601);
print $now_string, "\n";' )
touch -t $filetime /tmp/dummy # here for verification
ls -l /tmp/dummy
find . -newer /tmp/dummy -exec ls -l {} \;
The ls -l command is there to show you the dummy file; remove it after you have checked what the touch command does.

Touch a file with timestamp of one hour old, and use find's option -newer.
The hour-old timestamp can be calculated with a timezone trick (this only works as "one hour ago" when your local timezone is GMT):
echo "$(TZ=GMT+1 date '+%Y-%m-%d %H:%M')"
So you do something like
HOUROLDFILE=/tmp/hourold.tmp
touch -t "$(TZ=GMT+1 date '+%y%m%d%H%M')" ${HOUROLDFILE}
find a_directory -newer ${HOUROLDFILE}
rm ${HOUROLDFILE}
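If GNU touch is available, the fence file can also be created without the timezone trick, since -d accepts relative date strings (a sketch, GNU-only):
touch -d '1 hour ago' "${HOUROLDFILE}"
This avoids the GMT assumption entirely.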

Related

Remove files older than the start of the current day

I want logic that uses the find command to find all files older than the start of today.
The command below works from a 24-hour offset from the current time:
find /home/test/ -mtime +1
I am trying to achieve a solution where, no matter what time it executes from cron, it will check all files older than the start of the day at 00:00. I believe this can be achieved using epoch time, but I am struggling to find the best logic for it.
#!/bin/ksh
touch -t "$(date +%Y%m%d0000.00)" fence
find /home/test/ ! -newer fence -exec sh -c '
    for f in "$@"; do
        # -ot: true if "$f" is strictly older than the fence file
        [ "$f" -ot fence ] && printf "%s\n" "$f"
    done
' sh {} +
rm fence
Why find(1) has no -older expression. :-(
UNIX find: opposite of -newer option exists?
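If your find supports -newermt (GNU find does), the fence file can be skipped entirely: a bare date parses as midnight, so the following sketch lists everything older than the start of today (assuming the question's /home/test/):
find /home/test/ -type f ! -newermt "$(date +%Y-%m-%d)"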

How to delete files older than 30 days based on the date in the filename [duplicate]

I have CSV files that get updated every day; we process the files and delete those older than 30 days based on the date in the filename.
Example filename:
XXXXXXXXXXX_xx00xx_20171001.000000_0.csv
I would like to schedule a job in crontab to delete the 30-day-old files daily.
The path could be /mount/store/
XXXXXXXXXXX_xx00xx_20171001.000000_0.csv
if [ $(date -d '-30 days' +%Y%m%d) -gt $D ]; then
rm -rf $D
fi
The script above doesn't seem to help me. Kindly help me with this.
I have been trying this for the last two days.
Using CentOS 7.
Thanks.
For all files:
Extract the date
touch the file with that date
delete files with the -mtime option
Do this in the desired dir for all files:
f=XXXXXXXXXXX_xx00xx_20171001.000000_0.csv
d=$(echo "$f" | sed -r 's/[^_]+_[^_]+_(20[0-9]{6})\.[0-9]{6}_.\.csv/\1/')
touch -d "$d" "$f"
After performing that for the whole dir, delete the older-thans:
find YourDir -type f -mtime +30 -name "*.csv" -delete
GNU find has the -delete option. Other finds might need -exec rm {} + instead.
Test first. Another pitfall is the different kinds of timestamps touch can affect (mtime, ctime, atime).
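Applied to the whole directory, the touch step might look like this sketch (it assumes every *.csv in the directory matches the pattern above):
for f in *.csv; do
    d=$(echo "$f" | sed -r 's/[^_]+_[^_]+_(20[0-9]{6})\.[0-9]{6}_.\.csv/\1/')
    touch -d "$d" "$f"   # stamp each file with the date from its own name
done
After that, the find -mtime +30 ... -delete step above removes the old ones.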
Test, manipulating the date with touch:
touch XXXXXXXXXXX_xx00xx_20171001.000000_0.csv
f=XXXXXXXXXXX_xx00xx_20171001.000000_0.csv; d=$(echo $f | sed -r 's/[^_]+_[^_]+_(20[0-9]{6})\.[0-9]{6}_.\.csv/\1/'); touch -d $d $f
ls -l $f
-rw-rw-r-- 1 stefan stefan 0 Okt 1 00:00 XXXXXXXXXXX_xx00xx_20171001.000000_0.csv
An efficient way to extract the date from the filename is to use variable expansions:
f=XXXXXXXXXXX_xx00xx_20171001.000000_0.csv
d=${f%%.*} # removes largest suffix .*
d=${d##*_} # removes largest prefix *_
Or to use a bash-specific regex:
if [[ $f =~ [0-9]{8} ]]; then echo "$BASH_REMATCH"; fi
Here is a solution if you have dgrep from dateutils.
ls *.csv | dateutils.dgrep -i '%Y%m%d' --le $(date -d "-30 day" +%F) | xargs -d '\n' rm
First we use either ls or find to obtain a list of filenames. We then pipe the results to dgrep to filter the filenames containing a date string that matches our condition (in this case, older than 30 days). Finally, we pipe the result to xargs rm to remove all the matched files.
-i '%Y%m%d' sets the input date format as it appears in your filenames
--le $(date -d "-30 day" +%F) keeps dates less than or equal to the date 30 days ago
You can change rm to printf "%s\n" to test the command before actually deleting anything.
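For instance, a dry run of the same pipeline could look like this sketch, printing the matches instead of removing them:
ls *.csv | dateutils.dgrep -i '%Y%m%d' --le $(date -d "-30 day" +%F) | xargs -d '\n' printf '%s\n'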
The following approach does not look at the file's timestamps at all; it works even if the date in the filename is unrelated to the day the file was created.
#!/usr/bin/env bash
d=$(date -d "-30 days" "+%Y%m%d")
for file in /yourdir/*.csv; do
    # the date begins 21 characters from the end: 8 date digits plus the 13-character tail .000000_0.csv
    date=${file:$((${#file}-21)):8}
    (( date < d )) && rm "$file"
done

shell script: find files older than x days and delete them if they are not listed in log files

I am a newbie to scripting and I need a little shell script that does the following:
find all .txt files that are older than x days
delete them if they are not listed in the logfiles (text files and gzipped text files)
I know the basics about find -mtime, grep, zgrep, etc., but it is very tricky for me to turn this into a working script.
I tried something like this:
#! /bin/sh
for file in $(find /test/ -iname '*.txt')
do
echo "$file" ls -l "$file"
echo $(grep $file /test/log/log1)
done
IFS='
'
for i in `find /test/ -ctime +10`; do
    grep -q "$i" log || echo "$i" # replace echo with rm if satisfied
done
Sets the internal field separator to a newline, for the cases with spaces in filenames.
Finds all files older than 10 days in the /test/ folder.
Greps for each path in the log file.
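A variant that avoids changing IFS globally is to read find's output line by line (a sketch; it still breaks on filenames that contain newlines):
find /test/ -ctime +10 | while IFS= read -r i; do
    grep -q "$i" log || echo "$i" # replace echo with rm if satisfied
done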
I would use something like this:
#!/bin/bash
# $1 is the number of days
log_dir=/var/log
files=$(find /test/ -iname "*.txt" -mtime +"$1")
for f in $files; do
    found="false"
    base=$(basename "$f")
    for logfile in "$log_dir"/*; do
        # zgrep searches both plain and gzipped logfiles
        res=$(zgrep "$base" "$logfile" 2>/dev/null)
        if [ "x$res" != "x" ]; then
            found="true"
            break
        fi
    done
    # delete only the files that appear in no logfile
    if [ "$found" = "false" ]; then
        rm "$f"
    fi
done
and call it:
#> ./find_and_delete.sh 10
You could create a small bash script that checks whether a file is in the logs or not:
$ cat ~/bin/checker.sh
#!/usr/bin/env bash
# succeed if the basename of file $1 occurs in logfile $2
n=$(basename "$1")
grep -q "$n" "$2"
$ chmod +x ~/bin/checker.sh
And then use it in a single find command:
$ find . -type f ! -exec ./checker.sh {} log \; -exec echo {} \;
This should print only the files to be deleted. Once convinced that it does what you want:
$ find . -type f ! -exec ./checker.sh {} log \; -exec rm {} \;
deletes them.
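Since the question also mentions gzipped logfiles, a zgrep variant of the checker is a possible refinement (a sketch; zgrep also handles uncompressed files, so the find invocations above work unchanged):
$ cat ~/bin/checker.sh
#!/usr/bin/env bash
# succeed if the basename of file $1 occurs in (possibly gzipped) logfile $2
n=$(basename "$1")
zgrep -q "$n" "$2"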

How to check if a file is older than 30 minutes in unix

I've written a script to iterate through a directory on Solaris. The script looks for files which are older than 30 minutes and echoes them. However, my if condition always returns true regardless of how old the file is. Can someone please help me fix this issue?
for f in `ls -1`;
# Take action on each file. $f store current file name
do
if [ -f "$f" ]; then
#Checks if the file is a file not a directory
if test 'find "$f" -mmin +30'
# Check if the file is older than 30 minutes after modifications
then
echo $f is older than 30 mins
fi
fi
done
You should not parse the output of ls.
You invoke find for every file, which is unnecessarily slow.
You can replace your whole script with
find . -maxdepth 1 -type f -mmin +30 | while IFS= read -r file; do
[ -e "${file}" ] && echo "${file} is older than 30 mins"
done
or, if your default shell on Solaris supports process substitution
while IFS= read -r file; do
[ -e "${file}" ] && echo "${file} is older than 30 mins"
done < <(find . -maxdepth 1 -type f -mmin +30)
If you have GNU find available on your system the whole thing can be done in one line:
find . -maxdepth 1 -type f -mmin +30 -printf "%p is older than 30 mins\n"
Another option would be to use stat to check the time. Something like the below should work.
for f in *
do
    # act only on regular files, not directories
    if [ -f "$f" ]; then
        fileTime=$(stat --printf "%Y" "$f") # mtime as epoch seconds (GNU stat)
        curTime=$(date +%s)
        if (( (curTime - fileTime) / 60 > 30 ))
        then
            echo "$f is older than 30 mins"
        else
            echo "$f is less than 30 mins old"
        fi
    fi
done
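Note that --printf is a GNU stat option, which stock Solaris may lack. If GNU coreutils are unavailable, perl can supply the epoch mtime instead (a sketch):
fileTime=$(perl -e 'print((stat($ARGV[0]))[9])' "$f")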
Since you are iterating through a directory, you could try the command below, which finds all .log files edited in the past 30 minutes. Using:
-mmin +30 gives all files edited more than 30 minutes ago
-mmin -30 gives all files changed within the last 30 minutes
find ./ -type f -name "*.log" -mmin -30 -exec ls -l {} \;

How to find files containing exactly 16 lines?

I have to find files that containing exactly 16 lines in Bash.
My idea is:
find -type f | grep '/^...$/'
Does anyone know how to utilise find + grep or maybe find + awk?
Then:
Move the matching files to another directory.
Delete all non-matching files.
I would just do:
wc -l **/* 2>/dev/null | awk '$1=="16"'
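Note that in bash the ** glob only recurses when globstar is enabled (zsh has it by default), so a fuller sketch is:
shopt -s globstar
wc -l **/* 2>/dev/null | awk '$1=="16"'
Also remember that wc prints a final "total" line when given several files, which can produce a false match while testing.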
Keep it simple:
find . -type f |
while IFS= read -r file
do
size=$(wc -l < "$file")
if (( size == 16 ))
then
mv -- "$file" /wherever/you/like
else
rm -f -- "$file"
fi
done
If your file names can contain newlines then google for the find and read options to handle that.
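For reference, a newline-safe version of the same loop can be sketched with NUL delimiters (assuming GNU find and bash):
find . -type f -print0 |
while IFS= read -r -d '' file
do
    size=$(wc -l < "$file")
    if (( size == 16 ))
    then
        mv -- "$file" /wherever/you/like
    else
        rm -f -- "$file"
    fi
done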
You should use grep instead of wc, because wc counts newline characters (\n) and will not count the last line if it doesn't end with a newline.
e.g.
grep -cH '' * 2>/dev/null | awk -F: '$2==16'
For a more correct approach (without error messages, and without the "argument list too long" error), you should combine it with the find and xargs commands, like:
find . -type f -print0 | xargs -0 grep -cH '' | awk -F: '$2==16'
If you don't want to count empty lines (i.e. only lines that contain at least one character), you can replace the '' with '.'. And instead of awk, you can use a second grep, like:
find . -type f -print0 | xargs -0 grep -cH '.' | grep ':16$'
This will find all files that contain 16 non-empty lines, and so on.
GNU sed (note: this keeps only the lines that are exactly 16 characters long; it filters lines within a file rather than counting a file's lines):
sed -E '/^.{16}$/!d' file
A pure bash version:
#!/usr/bin/bash
for f in *; do # look for files in the present dir
    [ ! -f "$f" ] && continue # skip anything that is not a regular file
    cnt=0
    # count at most the first 17 lines
    while ((cnt<17)) && IFS= read -r x; do ((++cnt)); done < "$f"
    if [ "$cnt" -eq 16 ]; then echo "Move '$f'"
    else echo "Delete '$f'"
    fi
done
This snippet will do the work:
find . -type f -readable -exec bash -c \
'if(( $(grep -m 17 -c "" "$0")==16 )); then echo "file $0 has 16 lines"; else echo "file $0 doesn'"'"'t have 16 lines"; fi' {} \;
Hence, if you need to delete the files that are not 16 lines long, and move those that are 16 lines long to the folder /my/folder, this will do:
find . -type f -readable -exec bash -c \
'if(( $(grep -m 17 -c "" "$0")==16 )); then mv -nv "$0" /my/folder; else rm -v "$0"; fi' {} \;
Observe the quoting of "$0", so that it's safe with any file name containing funny symbols (spaces, ...).
I'm using the -v option so that rm and mv are verbose (I like to know what's happening). The -n option to mv is no-clobber: a safeguard against overwriting an existing file; this option might not be available on an old system.
The good thing about this method: it's really safe with any filename containing funny symbols.
The bad things: it forks a bash, a grep, and an mv or rm for each file found, which can be quite slow. This can be fixed with trickier stuff (while still remaining safe regarding funny symbols in filenames); if you really need that, I can give you a possible answer. It will also break if a file can't be (re)moved.
Remark: I'm using find's -readable option so that it only considers readable files. If you have this option, use it; you'll get a more robust command!
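For what it's worth, here is one possible batched variant along those lines: it forks a single bash for many files instead of one per file, while keeping the same safety with funny filenames (a sketch, same assumptions as above):
find . -type f -readable -exec bash -c '
for f; do
    if (( $(grep -m 17 -c "" "$f") == 16 )); then
        mv -nv -- "$f" /my/folder
    else
        rm -v -- "$f"
    fi
done' bash {} +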
I would go with
find . -type f | while IFS= read -r f ; do
    [[ $(wc -l < "${f}") -eq 16 ]] && mv "${f}" <any_directory> || rm -f "${f}"
done
or
find . -type f | while IFS= read -r f ; do
    [[ $(grep -c '' "${f}") -eq 16 ]] && mv "${f}" <any_directory> || rm -f "${f}"
done
Replace <any_directory> with the directory you actually want to move the files to.
BTW, the find command will descend into sub-directories; if you don't want this, you should change the find command to fit your needs.
