I have a folder /var/backup where a cronjob saves a backup of a database/filesystem. It contains a latest.gz.zip and lots of older dumps which are named timestamp.gz.zip.
The folder is getting bigger and bigger, and I would like to create a bash script that does the following:
Keep latest.gz.zip
Keep the youngest 10 files
Delete all other files
Unfortunately, I'm not a good bash scripter so I have no idea where to start. Thanks for your help.
In zsh you can do most of it with glob qualifiers:
files=(*(.Om))
rm $files[1,-11]
(The .Om qualifier selects plain files sorted oldest first, so the slice removes everything except the 10 youngest.)
Be careful with this command; you can check what it would remove first with:
print -rl -- $files[1,-11]
You should learn to use the find command, possibly with xargs; that is, something similar to:
find /var/backup -type f -name 'foo' -mtime +20 -delete
or if your find doesn't have -delete:
find /var/backup -type f -name 'foo' -mtime +20 -print0 | xargs -0 rm -f
Of course you'll need to adapt this a lot; it's just to give you some ideas.
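For this particular layout, a rough sketch of a complete script might look like the following (assuming GNU tools and that the file names contain no newlines; the handling of latest.gz.zip plus the 10 youngest dumps is my reading of the requirements):

#!/bin/bash
# Sketch: keep latest.gz.zip plus the 10 youngest dumps, delete everything else.
cd /var/backup || exit 1

# List *.gz.zip newest first, drop latest.gz.zip from the list,
# skip the 10 youngest, and remove whatever remains.
ls -t -- *.gz.zip | grep -Fvx 'latest.gz.zip' | tail -n +11 |
while IFS= read -r f; do
    rm -v -- "$f"
done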
Related
I need to delete all files with a pattern name: 2020*.js
Inside a specific directory: server/db/migrations/
And then show what has been deleted, by piping to `xargs`.
I'm trying this:
find . -name 'server/db/migrations/2020*.js' #-delete | xargs
But nothing is deleted, and nothing is shown.
What am I doing wrong?
The immediate problem is that -name only looks at the last component of the file name (so 2020xxx.js) and cannot match anything with a slash in it. You can use the -path predicate, but the correct solution is to simply delete these files directly:
rm -v server/db/migrations/2020*.js
The find command is useful when you need to traverse subdirectories.
Also, piping the output from find to xargs does not do anything useful; if find prints the names by itself, xargs does not add any value, and if it doesn't, well, xargs can't do anything with an empty input.
If indeed you want to traverse subdirectories, try
find server/db/migrations/ -type f -name '2020*.js' -print -delete
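Or, using the -path predicate mentioned at the start, a sketch run from the project root might be:

find . -path './server/db/migrations/2020*.js' -type f -print -delete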
If your shell supports ** you could equally use
rm -v server/db/migrations/**/2020*.js
which, however, has a robustness problem if there can be very many matching files (you get "Argument list too long"). In that scenario, probably fall back to find after all.
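Note that in Bash the recursive meaning of ** is off by default, so a sketch would first enable the globstar option:

# Bash only: make ** recurse into subdirectories, then delete verbosely.
shopt -s globstar
rm -v server/db/migrations/**/2020*.js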
You're looking for something like this:
find server/db/migrations -type f -name '2020*.js' -delete -print
You could try this:
find server/db/migrations -name '2020*.js' | xargs rm
I'm trying to find particular files in a directory using a find command pattern in a shell script.
The files below are created in the directory "/data/output" in the following format every time:
PO_ABCLOAD0626201807383269.txt
PO_DEF 0626201811383639.txt
So I need to check whether txt files starting with "PO_ABCLOAD" or "PO_DEF" have been created; if none has been created for four hours, then I need to write to the logs.
I have written a script, but I am stuck on how to match the "PO_ABCLOAD" and "PO_DEF" format text files in the script below.
Please help with this.
What changes do I need to make to the find command?
My script is:
file_path=/data/output
PO_count='find ${file_path}/PO/*.txt -mtime +4 -exec ls -ltr {} + | wc -l'
if [ $PO_count == 0 ]
then
find ${file_path}/PO/*.xml -mtime +4 -exec ls -ltr {} + >
/logs/test/PO_list.txt
fi
Thanks in advance
Welcome to the forum. To search for files which match the names you are looking for, you could try the -iname or -name predicates. However, there are other issues with your script.
Modification times
Firstly, I think that find's -mtime test works in a different way than you expect. From the manual:
-mtime n
File's data was last modified n*24 hours ago.
So if, for example, you run
find . -mtime +4
you are searching for files which are more than four days old. To search for files that are more than four hours old, I think you need to use the -mmin option instead; this will search for files which were modified a certain number of minutes ago.
Command substitution syntax
Secondly, using ' for command substitution in Bash will not work: you need to use backticks instead - as in
PO_COUNT=`find ...`
instead of
PO_COUNT='find ...'
Alternatively - even better (as codeforester pointed out in a comment) - use $(...) - as in
PO_COUNT=$(find ...)
Redundant options
Thirdly, using -exec ls -ltr {} + is redundant in this context - since all you are doing is determining the number of lines in the output.
So the relevant line in your script might become something like
PO_COUNT=$(find $file_path/PO/ -mmin +240 -a -name 'PO_*' | wc -l)
or
PO_COUNT=$(find $file_path/PO/PO_* -mmin +240 | wc -l)
If you wanted tighter matching of filenames, try (as per codeforester's suggestion) something like
PO_COUNT=$(find $file_path/PO/PO_* -mmin +240 -a \( -name 'PO_DEF*' -o -name 'PO_ABCLOAD*' \) | wc -l)
Alternative file-name matching in Bash
One last thing ...
If using bash, you can use brace expansion to match filenames, as in
PO_COUNT=$(find $file_path/PO/PO_{ABCLOAD,DEF}* -mmin +240 | wc -l)
Although this is slightly more concise, I don't think it is compatible with all shells.
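Putting these pieces together, one possible version of the relevant part of the script (keeping the paths from the question and using -mmin +240 for "older than four hours") might be:

#!/bin/bash
file_path=/data/output

# Count PO_ABCLOAD*/PO_DEF* text files older than 4 hours (240 minutes).
PO_COUNT=$(find "$file_path"/PO/ -type f \
    \( -name 'PO_ABCLOAD*.txt' -o -name 'PO_DEF*.txt' \) -mmin +240 | wc -l)

if [ "$PO_COUNT" -eq 0 ]; then
    # No matching files found: note that in the log file from the question.
    echo "$(date): no matching PO_ABCLOAD/PO_DEF files found" >> /logs/test/PO_list.txt
fi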
I am trying to write a shell script to copy files with a specific name and creation/modification date from one folder to another. I am finding it hard to work out how I can do this.
However, this is what I have tried so far:
srcdir="/media/ubuntu/CA52057F5205720D/Users/st4r8_000/Desktop/26 nov"
dstdir="/media/ubuntu/ubuntu"
find ./ -type f -name 'test*.csv' -mtime -1
Now my question is: is it possible to put that find command into an if condition so I can act on the files found by find?
I am very new to shell scripting. Any help would be really appreciated.
What I found useful for this is the following code. I am sharing it here so that someone who is new like me can get some help from it:
#!/bin/bash
srcdir="/media/ubuntu/CA52057F5205720D/Users/st4r8_000/Desktop/office work/26 nov"
dstdir="/media/ubuntu/ubuntu"
find "$srcdir" -type f -name 'test*.csv' -mtime -1 -exec cp -v {} "$dstdir" \;
I have a command on AIX that finds files whose names contain a certain phrase and that are older than a certain age. However, my report is full of "Permission denied" errors. I would like find to only search files that it has permission to read.
I have a command for Linux that works:
find /home/ ! -readable -prune -name 'core.20*' -mtime +7 -print
However, in this case I am unable to use -readable.
find /home/ -name 'core.20*' -mtime +7 -print 2>/dev/null
works rather well, but this still tries to search the directories, which costs time.
Just use your
find /home/ -name 'core.20*' -mtime +7 -print 2>/dev/null
When you want to skip directories without permission, your script must somehow ask Unix for permission. This is exactly what find is doing: when the top level is closed, no time is spent on the tree beneath it. The only cost is the stderr output, and that is what you redirect.
When you want to optimise this for daily use, you might generate a file listing files not changed for over a day in a once-every-6-days crontab, and use that log as input for the daily run. This would not help much and is very dirty. Just stick with your 2>/dev/null.
A really simple fix, if it works for you, would be to filter out the errors using grep; the error messages go to stderr, so they need to be merged into stdout for grep to see them. Something like:
find /home/ -name 'core.20*' -mtime +7 -print 2>&1 | grep -v 'Permission denied'
This will hide any lines containing the phrase 'Permission denied' (case sensitive).
HTH!
If I run the command mv folder2/*.* folder, I get an "Argument list too long" error.
I found some examples for ls and rm dealing with this error using find folder2 -name "*.*", but I have trouble applying them to mv.
find folder2 -name '*.*' -exec mv {} folder \;
-exec runs any command, {} inserts the filename found, \; marks the end of the exec command.
The other find answers work, but are horribly slow for a large number of files, since they execute one command for each file. A much more efficient approach is either to use + at the end of find, or use xargs:
# Using find ... -exec +
find folder2 -name '*.*' -exec mv --target-directory=folder '{}' +
# Using xargs
find folder2 -name '*.*' | xargs mv --target-directory=folder
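If the file names may contain whitespace or other unusual characters, a more robust variant (assuming GNU find and xargs) would be:

find folder2 -name '*.*' -print0 | xargs -0 mv --target-directory=folder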
find folder2 -name '*.*' -exec mv \{\} /dest/directory/ \;
First, thanks to Karl's answer. I have only a minor correction to it.
My scenario:
Millions of folders inside /source/directory, each containing subfolders and files. The goal is to move them while keeping the same directory structure.
To do that I use such command:
find /source/directory -mindepth 1 -maxdepth 1 -name '*' -exec mv {} /target/directory \;
Here:
-mindepth 1 : makes sure you don't move the root folder itself
-maxdepth 1 : makes sure you only match first-level children. All of their content gets moved too, but find does not need to descend into it.
Commands suggested in the answers above made the resulting directory structure flat, which is not what I was looking for, so I decided to share my approach.
This one-liner command should work for you.
Yes, it is quite slow, but works even with millions of files.
for i in /folder1/*; do mv "$i" /folder2; done
It will move all the files from folder /folder1 to /folder2.
find can hit the same "Argument list too long" error if you pass it a shell glob (for example find folder2/*), because the glob is expanded by the shell before find ever runs. Using a combination of ls, grep and xargs worked for me:
$ ls|grep RadF|xargs mv -t ../fd/
It did the trick moving about 50,000 files where mv and find alone failed.
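If the names may contain spaces, a sketch of a more robust variant (assuming GNU find, xargs and mv) would be:

find . -maxdepth 1 -name '*RadF*' -print0 | xargs -0 mv -t ../fd/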