Checking if a package is older than 24 hours with bash

I would like to check whether my latest file is older than 24 hours or not (the project is in Django).
I have many zip packages in the directory, so I have to 'filter' the latest one with this piece of code: ls -1 | sort -n | tail -n1.
My code in the .sh file:
#!/bin/bash
file="$HOME/path_directory ls -1 | sort -n | tail -n1"
current=`date +%s`;
last_modified=`stat -c "%Y" $file`;
if [ $(($current-$last_modified)) -gt 86400 ]; then
echo "File is older that 24 hours" | mailx noreply#address -s "Older than 24 hours" me#mailmail.com
else
echo "File is up to date.";
fi;
Here is an error, that I got:
stat: invalid option -- '1'
Try 'stat --help' for more information.
/path_directory/imported_file.sh: line 9: 1538734802-: syntax error: operand expected (error token is "-")
If somebody has made something similar, please give me a hint.

To obtain the list of files in the directory that were last modified more than 1440 minutes (86400 seconds) ago, you can use find:
find -maxdepth 1 -mmin +1440
It will thus select all files in this directory (no subdirectories) whose modification time lies more than 1440 minutes in the past.
The + in +1440 is important: without it you would only match files that were modified exactly 1440 minutes ago.
You can also use -mtime to specify the number in days:
find -maxdepth 1 -mtime +1
If you want all files (in this directory and subdirectories), you can remove the -maxdepth 1.
You can add -type f if you only want to match regular files, etc. For more flags and filtering options, please read the find manpage.
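Applied to the question, the whole check then collapses into a single test: find prints the file name only if it is old enough. A minimal sketch along those lines (the directory path is the placeholder from the question, and the mail step is left out):
#!/bin/bash
# Sketch: pick the newest file the same way the question does,
# then let find decide whether it is older than 24 hours (1440 minutes).
dir="$HOME/path_directory"                    # placeholder path
newest=$(ls -1 "$dir" | sort -n | tail -n1)   # assumes the directory is not empty
if [ -n "$(find "$dir/$newest" -mmin +1440)" ]; then
    echo "File is older than 24 hours"
else
    echo "File is up to date."
fi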

I'd advise you to try this (find prints the file name only if it is more than a day old, so the test succeeds exactly in that case):
if test "$(find "$file" -mtime +1)"
but if you insist on your approach, you can fix it by changing it to this:
#!/bin/bash
file="$HOME/path_directory ls -1 | sort -n | tail -n1"
current=$(date +%s);
last_modified=$(stat -c "%Y" $file);
if [ $((current - last_modified)) -gt 86400 ]; then
echo "File is older that 24 hours" | mailx noreply#address -s "Older than 24 hours" me#mailmail.com
else
echo "File is up to date.";
fi;

The file variable is not well formed. I believe that you want something like:
file=`find $HOME/path_directory | sort -n | tail -n1`
or
file=$( find $HOME/path_directory | sort -n | tail -n1)
if you prefer the modern way.
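Putting both fixes together, the corrected script would look something like this (a sketch; the path is the placeholder from the question and the mailx step is omitted):
#!/bin/bash
# Newest file in the directory, selected with find instead of a quoted ls pipeline.
file=$(find "$HOME/path_directory" -maxdepth 1 -type f | sort -n | tail -n1)
current=$(date +%s)
last_modified=$(stat -c "%Y" "$file")
if [ $((current - last_modified)) -gt 86400 ]; then
    echo "File is older than 24 hours"
else
    echo "File is up to date."
fi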

Related

Bash: Find and remove duplicate files from different folders

I have two folders with some common files, and I want to delete the duplicate files from the xyz folder.
folder1:
/abc/file1.csv
/abc/file2.csv
/abc/file3.csv
/abc/file4.csv
folder2:
/xyz/file1.csv
/xyz/file5.csv
I want to compare both folders and remove the duplicates from the /xyz folder. The output should be: file5.csv
For now I am using:
find "/xyz" "/abc" "/abc" -printf '%P\n' | sort | uniq -u | -exec rm {} \;
But it is failing with: if -exec is not a typo you can run the following command to lookup the package that contains the binary:
command-not-found -exec
-bash: -exec: command not found
-exec is an option to find; you had already left the find command when you started the pipe.
Try xargs instead: it takes all the data from stdin and appends it to the program's argument list.
UNTESTED
find "/xyz" "/abc" "/abc" -printf '%P\n' | sort | uniq -u | xargs rm
Find every file in the ./234 and ./123 directories, get the file names with -printf, sort them, let uniq -d print the list of duplicates, put the path back with sed (using the ./123 directory as the one to delete the duplicates from), and pass the files to xargs rm.
Command:
find ./234 ./123 -type f -printf '%P\n' | sort | uniq -d | sed 's/^/.\/123\//g' | xargs rm
sed is not needed if you are in the ./123 directory and use full paths for the folders in find.
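Adapted to the folders from the question, the same pipeline would be (paths as in the example above):
# file1.csv exists in both trees, so its name appears twice in the sorted
# %P output; uniq -d keeps one copy of each repeated name, sed rebuilds the
# /xyz path, and xargs rm deletes that copy. /xyz/file5.csv is left alone.
find /abc /xyz -type f -printf '%P\n' | sort | uniq -d | sed 's|^|/xyz/|' | xargs rm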
Another approach: just find the files in abc and attempt to remove them from xyz:
UNTESTED
find /abc -type f -printf 'rm -f /xyz/%P\n' | sh
Remove duplicate files from a particular directory:
FileList=$(ls)
for D1 in $FileList; do
    if [[ -f $D1 ]]; then
        for D2 in $FileList; do
            if [[ -f $D2 ]]; then
                if [[ $D1 == $D2 ]]; then
                    : # skip the original file
                else
                    if [[ $(md5sum "$D1" | cut -d' ' -f1) == $(md5sum "$D2" | cut -d' ' -f1) ]]; then
                        echo "Duplicate file found: $D2"
                        rm -f "$D2"
                    fi # detect duplicate using MD5
                fi # skip original file
            fi # D2 still exists, then next
        done
    fi # D1 still exists, then next
done
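The nested loop re-hashes every file for every comparison, so it gets slow quickly. A less quadratic sketch under the same assumptions (GNU md5sum, filenames without whitespace): hash each file once, sort by hash, and let awk print every name whose hash has already been seen:
md5sum * | sort | awk 'seen[$1]++ { print $2 }' | xargs -r rm --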

Shell (win32): delete oldest directories (recursive)

I need to delete the oldest folders (including their contents) from a certain path. E.g. if there are more than 10 directories, delete the oldest ones until you are below 8 directories. The log should show the directory count before/after, the filesystem usage before/after, and which directories were deleted.
Thank you in advance!
You should test this first on a backup directory:
#!/bin/bash
DIRCOUNT="$(find . -mindepth 1 -maxdepth 1 -type d -printf x | wc -c)"
if [ "$DIRCOUNT" -gt 10 ]; then
    ls -A1td */ | tail -n +9 | xargs rm -r
fi
If I have not misunderstood your intentions, below is your answer:
#! /usr/bin/env bash
DIRCOUNT="$(find . -mindepth 1 -maxdepth 1 -type d -printf x | wc -c)"
echo "Now you have $DIRCOUNT dirs"
[[ "$DIRCOUNT" -gt 10 ]] && ls -A1td */ | tail -n $((DIRCOUNT-8)) | xargs rm -r && echo "Now you have 8 dirs"

remove files which contain more than 14 lines in a folder

Unix command used:
wc -l * | grep -v "14" | rm -rf
However this grouping doesn't seem to do the job. Can anyone point me towards the correct way?
Thanks
wc -l * 2>&1 | while read -r num file; do ((num > 14)) && echo rm "$file"; done
remove "echo" if you're happy with the results.
Here's one way to print out the names of all files with at least 15 lines (assuming you have GNU awk, for the nextfile command):
awk 'FNR==15{print FILENAME;nextfile}' *
That will produce an error for any subdirectory, so it's not ideal.
You don't actually want to print the filenames, though. You want to delete them. You can do that in awk with the system function:
# The following has been defanged in case someone decides to copy&paste
awk 'FNR==15{system("echo rm "FILENAME);nextfile}' *
for f in *; do if [ "$(wc -l < "$f")" -gt 14 ]; then rm -f "$f"; fi; done
There are a few problems with your solution: rm doesn't take input from stdin, and grep -v "14" filters out every line containing the string 14 (so a 140-line file would be skipped too) rather than selecting files that don't have exactly 14 lines. Try this instead:
find . -maxdepth 1 -type f | while read f; do [ `wc -l $f | tr -s ' ' | cut -d ' ' -f 2` -gt 14 ] && rm $f; done
Here's how it works:
find . -maxdepth 1 -type f #all files (not directories) in the current directory
[ #start comparison
wc -l $f #get line count of file
tr -s ' ' #(on the output of wc) eliminate extra whitespace
cut -d ' ' -f 2 #pick just the line count out of the previous output
-gt 14 ] #test if all that was greater than 14
&& rm $f #if the comparison was true, delete the file
I tried to figure out a solution using just find with -exec, but I couldn't figure out a way to test the line count. Maybe somebody else can come up with a way for it; one possibility is sketched below.
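For the record, one way is to run a tiny shell as the -exec test, so find itself filters on the line count (a sketch; GNU find assumed, as above):
# -exec ... \; acts as a test here: the file is printed only when the inner
# shell exits 0, i.e. when the file has more than 14 lines.
find . -maxdepth 1 -type f -exec sh -c '[ "$(wc -l < "$1")" -gt 14 ]' _ {} \; -print
# once the list looks right, replace -print with -delete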

find oldest file from list

I've a file with a list of files in different directories and want to find the oldest one.
It feels like something that should be easy with some shell scripting but I don't know how to approach this. I'm sure it's really easy in perl and other scripting languages but I'd really like to know if I've missed some obvious bash solution.
Example of the contents of the source file:
/home/user2/file1
/home/user14/tmp/file3
/home/user9/documents/file9
#!/bin/sh
while IFS= read -r file; do
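# ${oldest=$file} assigns the first file read to oldest; -ot then tests "older than"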
[ "${file}" -ot "${oldest=$file}" ] && oldest=${file}
done < filelist.txt
echo "the oldest file is '${oldest}'"
You can use stat to find the last modification time of each file, looping over your source file:
oldest=5555555555
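# arbitrary far-future epoch, so the first real mtime is always smaller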
while read -r file; do
modtime=$(stat -c %Y "$file")
[[ $modtime -lt $oldest ]] && oldest=$modtime && oldestf="$file"
done < sourcefile.txt
echo "Oldest file: $oldestf"
This uses the %Y format of stat, which is the last modification time. You could also use %X for last access time, or %Z for last change time.
Use find to find the oldest file:
find /home/ -type f -printf '%T+ %p\n' | sort | head -1 | cut -d' ' -f2-
And with source file:
find $(cat /path/to/source/file) -type f -printf '%T+ %p\n' | sort | head -1 | cut -d' ' -f2-
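Note that $(cat ...) word-splits, so paths containing spaces break. With GNU xargs and stat you can read the list directly instead (a sketch):
# -a reads arguments from the file, one per line; stat prints "mtime name"
xargs -d '\n' -a /path/to/source/file stat -c '%Y %n' | sort -n | head -1 | cut -d' ' -f2-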

Finding files modified during a specified month

I've built this little script for finding files that were modified in a specified month.
My script worked fine for the last four years, but since a recent update (now with bash v4) it doesn't work anymore, and I don't understand why.
Here is the script (fmonth.sh):
#!/bin/sh
month=$1
shift
printf "now\n%s-01\n%s-01 00:00 +1 month\n" $month $month |
date -f - +%s |
xargs printf "n=%s/60-.5;s=%s/60;e=%s/60;n-s;if (n-e <0) 0 else n-e\n" |
bc -l |
xargs printf "find $# -mmin -%.0f -mmin +%.0f -print0\n" |
sh |
xargs -0 /bin/ls -ltrd |
less -S
And it could be used as follows:
fmonth.sh 2012-09 /etc /home/user /root /var/lib
OK, I found the answer: $# can be replaced by $*:
#!/bin/sh
month=$1
shift
printf "now\n%s-01\n%s-01 00:00 +1 month\n" $month $month |
date -f - +%s |
xargs printf "n=%s/60-.5;s=%s/60;e=%s/60;n-s;if (n-e <0) 0 else n-e\n" |
bc -l |
xargs printf "find $* -mmin -%.0f -mmin +%.0f -print0\n" |
sh |
xargs -0 /bin/ls -ltrd |
less -S
This works fine now!
I'm not sure I really understand why... for now... I'll look into the reason later...
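The difference itself is easy to see, though: inside double quotes $# expands to the number of positional parameters, while $* expands to their values, so the $# version handed find a count instead of the directory list. A two-line experiment:
set -- /etc /home/user /root    # simulate fmonth.sh's arguments after the shift
echo "find $# -mmin ..."        # -> find 3 -mmin ...
echo "find $* -mmin ..."        # -> find /etc /home/user /root -mmin ...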
