How to make this bash script NOT delete all my files? - bash

I have a cron job that runs every 5 minutes and backs up my MySQL database to files ending in .sql.gz. That adds up to hundreds of files a day, so I searched the internet and found this bash script, which I expected to work only on the .sql.gz files in the backup folder specified. But I soon found that it deleted everything in my root folder. :-) I was able to FTP the files back and have my site up again in half an hour, but I still need the script to work as intended. I'm new to bash scripting, so what did I do wrong when adapting the script I found to my needs? What would work?
Here is the rogue script. DO NOT run this as is. It's broken, that's why I'm here:
find /home/user/backups/*.gz * -mmin +60 -exec rm {} \;
I suspect that last backslash should be /home/user/backups/,
and also that I should remove the * before -mmin,
so what I need should be:
find /home/user/backups/*.gz -mmin +60 -exec rm {} /home/user/backups/;
Am I correct? Or am I still missing something?
BTW, I'm running this from cron on Dreamhost shared hosting. Their support doesn't really want to help with bash questions; I tried.

The filename arguments to find should be the directories to start the recursive search. Then use -name and other options to filter down to the files that match the criteria you want.
find /home/user/backups -type f -name '*.sql.gz' -mmin +60 -exec rm {} +
-type f means only select ordinary files
-name '*.sql.gz' means only filenames ending in .sql.gz
-mmin +60 means files more than 60 minutes old
And using + instead of \; at the end of -exec means that it should just run the command once with all the selected filenames, rather than separately for each filename; this is a minor efficiency improvement.
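For the original cron job, one cautious approach (a sketch, assuming the backups really do live in /home/user/backups) is to run the same find without -exec first, check the listing, and only then add the delete:
# dry run: just list what would be removed
find /home/user/backups -type f -name '*.sql.gz' -mmin +60
# once the listing looks right, actually delete
find /home/user/backups -type f -name '*.sql.gz' -mmin +60 -exec rm {} +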

Related

bash remove files less than string value

A cron job saves database files on an hourly basis and assigns each a file name based on year, month, day and hour:
/$(date +\%m)/$(date +\%y\%m\%d\%H)_thedb.sql
This leads to archive bloat, and the goal is to keep only the last file of the day (i.e. delete all those less than 15050923*) in a separate cron action.
What is an effective way of achieving this?
Before you start with complex bash string substitutions, I suggest you try going after the file date instead. find can help you with that.
For example, to delete all files in a directory that are older than 5 days, you could try something like this:
find <DIR> -mtime +5 -exec rm {} \;
Now if there are subdirectories in <DIR>, you might also want to include the options -type f to limit the matches to regular files, and -maxdepth 1 to avoid descending into subdirectories.
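Putting those together, a sketch restricted to regular files directly inside the directory (keeping <DIR> as the placeholder):
find <DIR> -maxdepth 1 -type f -mtime +5 -exec rm {} \;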
If you have a file and want to delete everything older than that, you could slightly modify this:
find <DIR> -not -newer <FILE> -not -name <FILE> -exec rm {} \;
I simply don't know why there is no -older search term in find; it seems so obvious.
Warning: I strongly recommend first leaving out -exec and everything after it, to check whether the files it finds are all ones you actually want to delete.
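For example, run just the selection part first (same placeholders as above):
find <DIR> -not -newer <FILE> -not -name <FILE>
Only when that listing contains exactly what you expect should you put -exec rm {} \; back on the end.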

find and remove only if the file is present

I am working on a shell script to clear out files from a folder if they are older than 60 days, using the find command. One more requirement is that the script should only remove such files if they exist; if there are no files older than 60 days, I don't want the command to print any error messages.
find AA* -mtime +60 -exec rm {} \;
I am using the command given above. It gives the error message "No such file or directory" if there are no files older than 60 days, as well as if none of them start with "AA". I don't want any such messages reported by my command.
Please help.
TIA.
Perhaps you are missing the target, i.e. the AA* files in the current directory?
How about changing it to
find . -name 'AA*' -mtime +60 -exec rm {} \;
or, if you need to match only files or directories whose names start with 'AA', you could use -regex:
find . -regex './AA.*' -mtime +60 -exec rm {} \;
I don't want any such messages reported for my command.
Redirect the command's standard error to /dev/null:
... 2>/dev/null
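Putting both pieces together for the asker's case, a sketch that deletes old AA* files and suppresses any error messages:
find . -name 'AA*' -mtime +60 -exec rm {} \; 2>/dev/null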

Deleting oldest files with shell

I have a folder /var/backup where a cron job saves a backup of a database/filesystem. It contains a latest.gz.zip and lots of older dumps which are named timestamp.gz.zip.
The folder is getting bigger and bigger, and I would like to create a bash script that does the following:
Keep latest.gz.zip
Keep the youngest 10 files
Delete all other files
Unfortunately, I'm not a good bash scripter, so I have no idea where to start. Thanks for your help.
In zsh you can do most of it with glob qualifiers:
files=(*(.Om))
rm -- $files[1,-11]
Here *(.Om) expands to the plain files sorted oldest first, so the last ten elements of the array are the ten newest files and $files[1,-11] is everything except those.
Be careful with this command; you can check first what would be removed with:
print -rl -- $files[1,-11]
You should learn to use the find command, possibly with xargs; that is, something similar to
find /var/backup -type f -name 'foo' -mtime -20 -delete
or if your find doesn't have -delete:
find /var/backup -type f -name 'foo' -mtime -20 -print0 | xargs -0 rm -f
Of course you'll need to adapt this a lot; it is just to give you ideas.
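If you want a sketch of the full three rules (keep latest.gz.zip, keep the ten newest dumps, delete everything else), something along these lines could work, assuming the timestamped names contain no spaces or newlines:
cd /var/backup || exit 1
# list newest first, skip latest.gz.zip, skip the 10 newest, delete the remainder
ls -t -- *.gz.zip | grep -v '^latest\.gz\.zip$' | tail -n +11 | while read -r f; do rm -- "$f"; done
Replace rm with echo rm on the first run to confirm it selects only the files you expect.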

Moving large number of files [duplicate]

If I run the command mv folder2/*.* folder, I get an "argument list too long" error.
I found some examples for ls and rm that deal with this error using find folder2 -name "*.*", but I have trouble applying them to mv.
find folder2 -name '*.*' -exec mv {} folder \;
-exec runs any command, {} inserts the filename found, \; marks the end of the exec command.
The other find answers work, but are horribly slow for a large number of files, since they execute one command for each file. A much more efficient approach is either to use + at the end of the -exec action, or to use xargs:
# Using find ... -exec +
find folder2 -name '*.*' -exec mv --target-directory=folder '{}' +
# Using xargs
find folder2 -name '*.*' | xargs mv --target-directory=folder
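If any of the filenames can contain spaces, the xargs variant is safer with null separators (a sketch assuming GNU find and xargs, which the --target-directory option already implies):
find folder2 -name '*.*' -print0 | xargs -0 mv --target-directory=folder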
find folder2 -name '*.*' -exec mv \{\} /dest/directory/ \;
First, thanks to Karl's answer. I have only a minor correction to it.
My scenario:
Millions of folders inside /source/directory, each containing subfolders and files. The goal is to move them while keeping the same directory structure.
To do that I use such command:
find /source/directory -mindepth 1 -maxdepth 1 -name '*' -exec mv {} /target/directory \;
Here:
-mindepth 1 : makes sure you don't move the root folder itself
-maxdepth 1 : makes sure you search only for first-level children. All of their contents get moved along with them, but you don't need to search through them.
The commands suggested in the answers above flattened the resulting directory structure, which was not what I was looking for, so I decided to share my approach.
This one-liner should work for you.
Yes, it is quite slow, but it works even with millions of files.
for i in /folder1/*; do mv "$i" /folder2; done
It will move all the files from /folder1 to /folder2.
find didn't work for me with really long lists of files; it gave the same "Argument list too long" error. Using a combination of ls, grep and xargs worked for me:
$ ls|grep RadF|xargs mv -t ../fd/
It did the trick, moving about 50,000 files where mv and find alone had failed.

Shell Command to remove older files

I am making backups of a client's website to a remote FTP location. I have a script (usable without root access on cPanel) which makes backups on a given cron schedule and transfers them to the remote FTP location. Now the real problem starts: since we can't have unlimited gigabytes of disk space on any server, we have to limit the backups. I was looking for a shell command (which can be added to a cron job directly, or put in a bash script that cron calls) to keep one week's worth of daily backups, i.e. to delete any backup in that directory older than one week. I found the following command, which looks promising:
find /path/to/files -mtime +30 -exec rm {}\;
But when I ran this command (for testing, I replaced 'rm' with 'ls -l'), I got the following error:
find: missing argument to `-exec'
Can anybody help resolve this little issue?
I am running CentOS + cPanel
Thank You
Maybe you just have to put a space after the closing brace:
find /path/to/files -mtime +30 -exec rm {} \;
I couldn't test on CentOS, but on my system it doesn't work if I don't put spaces around the braces.
The semi-colon must be a separate argument (and a week is 7 days):
find /path/to/files -mtime +7 -exec rm {} ';'
Actually, you would probably do better to use + in place of ;, as that will combine as many file names as convenient into a single command execution, rather like xargs does but without invoking xargs. Hence:
find /path/to/files -mtime +7 -exec rm {} +
One other advantage of this is that there are no characters that must be protected from the shell.
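For the original goal of keeping a week of daily backups, the same cautious pattern applies (using the asker's placeholder path): list first, then delete once the listing looks right.
find /path/to/files -mtime +7 -exec ls -l {} +
find /path/to/files -mtime +7 -exec rm {} +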
