I have a command on AIX that finds files whose names contain a phrase and that are older than a certain age. However, my report is full of "Permission denied" errors. I would like find to search only files and directories it has permission to read.
I have a command for Linux that works:
find /home/ ! -readable -prune -name 'core.20*' -mtime +7 -print
However, in this case I am unable to use -readable.
find /home/ -name 'core.20*' -mtime +7 -print 2>/dev/null
works rather well, but it still tries to search those directories, which costs time.
Just use your
find /home/ -name 'core.20*' -mtime +7 -print 2>/dev/null
When you want to skip directories without permission, your script must somehow ask Unix for permission. That is exactly what find is doing: when the top level is closed, no time is spent on the tree beneath it. The only cost is the stderr output, and that is what you redirect.
If you want to optimise this for daily use, you could build a list of files not changed for over a day from a once-every-6-days crontab and use that list as input for the daily cleaning, but that would not help much and is very dirty. Just stick with your 2>/dev/null.
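If you really do want to prune unreadable directories without GNU's -readable, a rough, hedged sketch is possible with -perm. Note this only checks the world permission bits, so it is an approximation that assumes you are not the owner or a group member of the closed directories, and not root:
find /home/ \( -type d ! -perm -005 \) -prune -o -name 'core.20*' -mtime +7 -print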
A really simple fix, if it works for you, would be to filter out the errors using grep, something like:
find /home/ -name 'core.20*' -mtime +7 -print 2>&1 | grep -v 'Permission denied'
This will hide any output lines containing the phrase 'Permission denied' (case sensitive).
HTH!
I need to delete all files with a pattern name: 2020*.js
Inside a specific directory: server/db/migrations/
And then show what has been deleted: `| xargs`
I'm trying this:
find . -name 'server/db/migrations/2020*.js' #-delete | xargs
But nothing is deleted, and it shows nothing.
What am I doing wrong?
The immediate problem is that -name only looks at the last component of the file name (so 2020xxx.js) and cannot match anything with a slash in it. You can use the -path predicate but the correct solution is to simply delete these files directly:
rm -v server/db/migrations/2020*.js
The find command is useful when you need to traverse subdirectories.
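If you do want find to match against the full path, here is a hedged sketch of the -path predicate mentioned above (assuming your find supports -path; the pattern must match the path exactly as find builds it, including the leading ./):
find . -path './server/db/migrations/2020*.js' -type f -print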
Also, piping the output from find to xargs does not do anything useful; if find prints the names by itself, xargs does not add any value, and if it doesn't, well, xargs can't do anything with an empty input.
If indeed you want to traverse subdirectories, try
find server/db/migrations/ -type f -name '2020*.js' -print -delete
If your shell supports ** you could equally use
rm -v server/db/migrations/**/2020*.js
which however has a robustness problem if there can be very many matching files (you get "command line too long"). In that scenario, probably fall back to find after all.
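For that fallback, a hedged sketch that keeps the verbose output (assuming your rm supports -v, like the rm -v above); -exec ... + batches the file names so the length limit does not apply:
find server/db/migrations -type f -name '2020*.js' -exec rm -v -- {} +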
You're looking for something like this:
find server/db/migrations -type f -name '2020*.js' -delete -print
You can try this:
find . -name 'server/db/migrations/2020*.js' | xargs rm
I have a cron job that runs every 5 minutes, backing up my MySQL database to files ending in .sql.gz. But this produces hundreds of files a day. So I searched the internet and found this bash script, which I expected to work only on the .sql.gz files in the /backup folder specified, but I soon found that it deleted everything in my root folder. :-) I was able to FTP the files back and have my site back up in half an hour, but I still need the script to work as intended. I'm new to bash scripting, so I'm asking: what did I do wrong in editing the script I found on the internet to my needs? What would work?
Here is the rogue script. DO NOT run this as is; it's broken, that's why I'm here:
find /home/user/backups/*.gz * -mmin +60 -exec rm {} \;
I suspect that the last backslash should be /home/user/backups/
and also that I should remove the * before -mmin,
so what I need should be:
find /home/user/backups/*.gz -mmin +60 -exec rm {} /home/user/backups/;
Am I correct? Or am I still missing something?
BTW, I'm running this from cron on DreamHost shared hosting. Their support doesn't really want to help with bash questions; I tried.
The filename arguments to find should be the directories to start the recursive search. Then use -name and other options to filter down to the files that match the criteria you want.
find /home/user/backups -type f -name '*.sql.gz' -mmin +60 -exec rm {} +
-type f means only select ordinary files
-name '*.sql.gz' means only filenames ending in .sql.gz
-mmin +60 means files more than 60 minutes old
And using + instead of \; at the end of -exec means that it should just run the command once with all the selected filenames, rather than separately for each filename; this is a minor efficiency improvement.
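If you want to double-check before deleting anything, run the same search with -print first and only add the -exec rm once the list looks right:
find /home/user/backups -type f -name '*.sql.gz' -mmin +60 -print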
I'm a beginner with this stuff and seem to be running into an issue.
Basically, I have many files with names containing a keyword (let's call it "Category1") within a directory. For example:
ABC-Category1-XYZ.txt
I'm trying to move them from a directory into another directory with the same name as the keyword.
I started with this:
find /path_A -name "*Category1*" -exec mv {} /path_A/Category1 \;
It spit out something like this:
mv: rename /path_A/Category1 to /path_A/Category1/Category1: Invalid argument
So I did some fiddling and hypothesized that the problem was caused by the command trying to move the directory Category1 into itself (maybe). I decided to exclude directories from the search so it would only attempt to move files. I came up with this:
find /path_A -name "*Category1*" \(! -type d \) -exec mv {} /path_A/Category1 \;
This did move the files from their original location to where I wanted them, but it still gave me something like:
mv: /path_A/Category1/ABC-Category1-XYZ.txt and /path_A/Category1/ABC-Category1-XYZ.txt are identical
I'm no expert, so I could be wrong... but I believe the command is finding and moving the files from their original directory, then finding them again. The directory Category1 is a subdirectory of the starting point /path_A, so I believe it is finding the files it just moved into Category1 and attempting to move them again.
Can anyone help me fix this issue?
You are creating new files that find then tries to process. The safest approach is to move them somewhere outside the /path_A tree you are searching with find.
Or you can use -prune to ignore that directory, provided no other directory matches:
find /path_A -name '*Category1*' -prune -type f -exec mv {} /path_A/Category1/ \;
Although another post has been accepted, let me post a proper answer.
Would you please try:
find /path_A -name 'Category1' -prune -o -type f -name '*Category1*' -exec mv -- {} /path_A/Category1/ \;
The option -prune is more of an action than a condition. It tells find to ignore the directory tree selected by the conditions before -prune. In this case it excludes the directory Category1 from the search.
The following -o is a logical OR and may be read as something like instead or else. The order of the options makes a difference.
Please note that the 1st Category1 is the directory name to exclude and the 2nd *Category1* matches the filenames to find.
If you are not sure which files find will select, try executing:
find /path_A -name 'Category1' -prune -o -type f -name '*Category1*' -print
then tweak the options to see the change of output.
I can use -s in grep to suppress errors, but I don't see an equivalent for the find command in the man page... Is the only option to redirect stderr to /dev/null?
Or is there an option that handles this? (open to fancy awk and perl solutions if needed)
Example:
$ for dir in `ls /mnt/16_c/`; do find /mnt/16_c/$dir/data/ -mtime +180 -type f -exec echo {} \;; done
find: `/mnt/16_c/test_container/dat/': No such file or directory
You can redirect stderr with 2>/dev/null, for example:
find /mnt/16_c/$dir/data/ -mtime +180 -type f -exec echo {} \; 2>/dev/null
Btw, the code in your question can be replaced with:
find /mnt/16_c/*/data/ -mtime +180 -type f 2>/dev/null
And if there is at least one matching directory, then you don't even need to suppress stderr, because find will only search in directories that match this pattern.
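If there is a chance that no directory matches the pattern at all, a hedged bash sketch that simply skips the find in that case (nullglob and arrays are bash features):
shopt -s nullglob
dirs=(/mnt/16_c/*/data/)
[ ${#dirs[@]} -gt 0 ] && find "${dirs[@]}" -mtime +180 -type f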
I know this isn't exactly what you asked, but my typical approach is to find a path that definitely exists, then use the -path flag to filter.
So instead of find /home/not-a-path, which raises an error, I would do find /home -path "/home/not-a-path/*", which doesn't raise an error.
I had to do this because when I redirected a failing find to /dev/null in a makefile, the error would still cause the command to fail. The approach I described above works, though.
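Applied to the command from the question, that idea might look like this (a hedged sketch: it walks all of /mnt/16_c and filters by path, so a missing data/ directory simply matches nothing, at the cost of traversing the whole tree):
find /mnt/16_c -path '/mnt/16_c/*/data/*' -mtime +180 -type f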
I have a folder /var/backup where a cronjob saves a backup of a database/filesystem. It contains a latest.gz.zip and lots of older dumps which are named timestamp.gz.zip.
The folder is getting bigger and bigger and I would like to create a bash script that does the following:
Keep latest.gz.zip
Keep the youngest 10 files
Delete all other files
Unfortunately, I'm not a good bash scripter so I have no idea where to start. Thanks for your help.
In zsh you can do most of it with glob qualifiers:
files=(*(.Om))
rm $files[1,-9]
Be careful with this command; you can check what matches were made with:
print -rl -- $files[1,-9]
You should learn to use the find command, possibly with xargs; that is, something similar to:
find /var/backup -type f -name 'foo' -mtime +20 -delete
or if your find doesn't have -delete:
find /var/backup -type f -name 'foo' -mtime +20 -print0 | xargs -0 rm -f
Of course you'll need to improve this a lot; it is just to give you ideas.
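For completeness, a minimal sh sketch along those lines, assuming the dump file names contain no whitespace and that latest.gz.zip must never be touched:
# Keep latest.gz.zip and the 10 newest timestamped dumps; delete everything else.
# ls -t lists newest first, tail -n +11 skips the first 10, xargs removes the rest.
cd /var/backup || exit 1
ls -t -- *.gz.zip | grep -vFx 'latest.gz.zip' | tail -n +11 | xargs rm -f --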