Retrieve Folder Older than Date Based on Folder Name - bash

I have a set of snapshots. Each snapshot resides in an accountname folder, and each snapshot is named with a date format of YYYY-MM-DD-accountname.
How can I retrieve the names of the snapshot folders that are more than 2 days old (the 2017-05-* directories)?
The folder structure looks like this:
/home/snapshots
/home/snapshots/account1
/home/snapshots/account1/2017-05-01-account1
/home/snapshots/account1/2017-05-02-account1
/home/snapshots/account1/2017-05-03-account1
/home/snapshots/account1/2017-05-04-account1
/home/snapshots/account1/2017-05-05-account1
/home/snapshots/account1/2017-05-06-account1
/home/snapshots/account2
/home/snapshots/account2/2017-05-01-account2
/home/snapshots/account2/2017-05-02-account2
/home/snapshots/account2/2017-05-03-account2
/home/snapshots/account2/2017-05-04-account2
/home/snapshots/account2/2017-05-05-account2
/home/snapshots/account2/2017-05-06-account2
For instance, given that today is 05/06/2017 (US), I want to list /home/snapshots/account1/2017-05-01 through /home/snapshots/account1/2017-05-04, and likewise for account2.
I thought find /home/snapshots/ -type d -mtime +2 -exec ls -la {} \; might do the trick, but that returned all directories older than 2 days... and adding -maxdepth 1 returned nothing...

Continuing from my comment above, the reason you are having problems is that you want to search within /home and then select and delete the snapshots directories found if they are more than two days old. With -execdir, it would be:
find /home -type d -name "snapshots" -mtime +2 -execdir rm -r '{}' +
Let me know if you have problems. (Also, there is no need to use ls -la within find; the -printf option provides complete output format control without spawning a separate subprocess for each ls call; see man find.)
Note: you should quote '{}' to protect against filenames with whitespace, etc.
Edit
Sorry, I misread your question. Obviously, if you only want to delete the account* subdirectories of each snapshots directory, then the search path of /home/snapshots is fine, and you then include the account*/*account* designator, as @BroSlow correctly caught below.
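For instance, a rough sketch of that corrected form, listing first with -printf rather than deleting (the -path pattern is an assumption based on the layout shown above):
find /home/snapshots -type d -path '*account*/*-account*' -mtime +2 -printf '%p\n'
Once the listed directories look right, the same expression can drive the deletion:
find /home/snapshots -type d -path '*account*/*-account*' -mtime +2 -exec rm -r '{}' +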

Related

Removing directories and subdirectories which are not present in the list

This is my directory structure
find -type f -print0 | xargs -0 ls -t
./lisst.txt ./SAMN03272855/SRR1734376/SRR1734376_1.fastq.gz
./SAMN03272854/SRR1734375/SRR1734375_2.fastq.gz ./SAMN07605670/SRR6006890/SRR6006890_2.fastq.gz
./SAMN03272854/SRR1734375/SRR1734375_1.fastq.gz ./SAMN07605670/SRR6006890/SRR6006890_1.fastq.gz
./SAMN03272855/SRR1734376/SRR1734376_2.fastq.gz
This is a small subset of my folders/files; I have around 70 in total.
I have made a list of the files I want to keep, and the others I would like to delete.
My list.txt contains SAMN03272854 and SAMN03272855, but I want to remove SAMN07605670.
I ran this
find . ! -name 'lisst.txt' -type d -exec rm -vrf {} +
It removed everything
QUESTION UPDATE
My list contains the folders I want to keep; any folder not in the list is to be removed.
The folders which are to be removed also contain subdirectories and files. I want to remove everything.
Your command selects every directory in the tree, except any directory with the funny name lisst.txt. Once it finds a directory, it does a recursive remove of that directory. No surprise that your files are gone.
You can't use rm -r when you want to spare certain files from deletion. This means that you also can't remove a directory which, somewhere below in its subtree, has a file you want to keep.
I would run two find commands: the first removes all the files (ignoring directories), and the second removes all directories which are empty (bottom-up). Assuming that SAMN03272854 is indeed a file (as you told us in your question), this would be:
find . -type f \( ! \( -name SAMN03272854 -o -name SAMN03272855 \) \) -exec rm {} \;
find . -depth -type d -exec rmdir {} \; 2>/dev/null
The error redirection in the latter command suppresses messages from rmdir for directories which still contain files you want to keep. Of course, other messages are also suppressed. During debugging, I would run the command without the error redirection to see whether it is basically correct.
Things would get more complicated if you have both files and directories to keep, because keeping a directory likely implies keeping all the files below it. In this case, you can use the -prune action of find, which excludes directories, including their subdirectories, from being processed. See the find man page, which gives examples of this.
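For example, a minimal sketch, assuming the SAMN* names from the question are directories that should be kept in their entirety:
find . \( -name SAMN03272854 -o -name SAMN03272855 \) -prune -o -type f ! -name lisst.txt -exec rm {} +
find . -depth -type d -exec rmdir {} \; 2>/dev/null
The first command prunes the directories to keep and deletes every other file (sparing the list file itself); the second then removes whatever directories are left empty, as before.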

How can I zip all files in the same directory under some condition?

I would like to create a bash script that compresses the files in a folder, for example:
/home/<username>/Desktop/Folder
And for that, if I'm not mistaken, you could do something like this:
zip -r Folder_2021-Jan.zip /home/<username>/Desktop/Folder
But there is one condition: the files to be compressed must be older than 30 days.
I have no idea how to add that condition to the script. I've searched but haven't found anything similar.
Use find with -mtime to check for files older than 30 days:
find /home/<username>/Desktop/Folder -maxdepth 1 -mtime +30 -type f -exec zip Folder_2021-Jan.zip '{}' +
Search only the directory /home/<username>/Desktop/Folder and no child directories (-maxdepth 1), for files only (-type f), and then execute zip on as many returned entries as possible, using + to batch them and {} as a placeholder for the entries.
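As a small script, with the archive name derived from the current month rather than hard-coded (the $(date ...) format here is an assumption, not something from the question):
#!/bin/bash
# Compress files older than 30 days in the folder (non-recursive).
folder="/home/<username>/Desktop/Folder"
archive="Folder_$(date +%Y-%b).zip"   # e.g. Folder_2021-Jan.zip
find "$folder" -maxdepth 1 -type f -mtime +30 -exec zip "$archive" '{}' +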

How to make this bash script NOT delete all my files?

I have a cron job, every 5 minutes, backing up my MySQL to files ending in .sql.gz. But this is hundreds of files a day. So I searched the internet and found this bash script, which I expected to work only on the files in the /backup folder specified and only on .sql.gz files, but I soon found that it deleted everything in my root folder. :-) I was able to FTP the files back and have my site back up in half an hour, but I still need the script to work as intended. I'm new to bash scripting, so I'm asking: what did I do wrong in editing the script I found on the internet to my needs? What would work?
Here is the rogue script. DO NOT run this as is; it's broken, that's why I'm here:
find /home/user/backups/*.gz * -mmin +60 -exec rm {} \;
I'm suspecting it's that the last backslash should be /home/user/backups/
And also that I should remove the * before -mmin
so what I need should be:
find /home/user/backups/*.gz -mmin +60 -exec rm {} /home/user/backups/;
Am I correct? Or still missing something?
BTW, I'm running this on DreamHost shared hosting cron. Their support doesn't really want to help with bash questions; I tried.
The filename arguments to find should be the directories to start the recursive search. Then use -name and other options to filter down to the files that match the criteria you want.
find /home/user/backups -type f -name '*.sql.gz' -mmin +60 -exec rm {} +
-type f means only select ordinary files
-name '*.sql.gz' means only filenames ending in .sql.gz
-mmin +60 means files more than 60 minutes old
And using + instead of \; at the end of -exec means that it should just run the command once with all the selected filenames, rather than separately for each filename; this is a minor efficiency improvement.
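If you want to be cautious after the earlier mishap, you can preview the matches first and only delete once the list looks right:
# preview only, nothing is deleted here
find /home/user/backups -type f -name '*.sql.gz' -mmin +60 -print
# then run the actual removal
find /home/user/backups -type f -name '*.sql.gz' -mmin +60 -exec rm {} +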

bash remove files less than string value

A cron action saves database files on an hourly basis and assigns a file name based on year, month, day and hour
/$(date +\%m)/$(date +\%y\%m\%d\%H)_thedb.sql
This leads to archive bloat, and the goal is to keep only the last file of each day (i.e. delete all those less than 15050923*) in a separate cron action.
What is an effective way of achieving this?
Before you start with complex bash string substitutions, I suggest you try to go after the file date. find can help you with that.
For example, to delete all files in a directory that are older than 5 days, you could try something like this:
find <DIR> -mtime +5 -exec rm {} \;
Now if there are subdirectories in <DIR>, you might also want to include the options -type f to limit the matches to regular files, and -maxdepth 1 to avoid descending into subdirectories.
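Combined, that would look like this (still deleting files older than 5 days, as in the example above):
find <DIR> -maxdepth 1 -type f -mtime +5 -exec rm {} \;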
If you have a file and want to delete everything older than that, you could slightly modify this:
find <DIR> -not -newer <FILE> -not -name <FILE> -exec rm {} \;
I simply don't know why there is no -older search term in find; it seems so obvious.
Warning: I strongly recommend first leaving out -exec and everything after it, to check whether the files it finds can all be deleted.
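Applied to the naming scheme in the question, a rough sketch might be (the reference file 15050923_thedb.sql is an assumption built from the %y%m%d%H pattern; run it without the -exec part first, as advised above):
find <DIR> -maxdepth 1 -type f -name '*_thedb.sql' -not -newer <DIR>/15050923_thedb.sql -not -name 15050923_thedb.sql -exec rm {} \;
This deletes every dump that is not newer than, and not itself, the 23:00 dump of that day.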

Shell Command to remove older files

I am making backups of a client's website to a remote FTP location. I have a script (usable without root access on cPanel) which makes backups on a given cron schedule and transfers them to the remote FTP location. Now the real problem is starting; as we can't have unlimited gigabytes of disk space on any server, we have to limit the backups. I was looking for a shell command (which can be added to a cron job directly, or put in a bash script that is called from cron). I want to keep one week's daily backups, so I want to delete any backup in that directory which is older than one week. I found the following command, which looks promising:
find /path/to/files -mtime +30 -exec rm {}\;
But when I ran this command (for testing I replaced 'rm' with 'ls -l'), I got the following error:
find: missing argument to `-exec'
Can anybody help resolve this little issue?
I am running CentOS + cPanel
Thank You
Maybe you just have to put a space after the closing brace:
find /path/to/files -mtime +30 -exec rm {} \;
I couldn't test on CentOS, but on my system it doesn't work if I don't put spaces around the brackets.
The semi-colon must be a separate argument (and a week is 7 days):
find /path/to/files -mtime +7 -exec rm {} ';'
Actually, you would probably do better to use the notation + in place of ;, as that will combine as many file names as convenient into a single command execution, rather like xargs does but without invoking xargs. Hence:
find /path/to/files -mtime +7 -exec rm {} +
One other advantage of this is that there are no characters that must be protected from the shell.
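Since the question mentions running this from cron, a minimal crontab entry might look like this (the daily 02:30 schedule and the path are assumptions; adjust to your setup):
# m h dom mon dow  command
30 2 * * * find /path/to/files -mtime +7 -exec rm {} +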
