I have a lot of folders I'd like to backup on a remote location.
I'd like to tar.gz and encrypt all of these, [if possible] in a single command line.
So far, I've successfully done half the work with
find . -type d -maxdepth 1 -mindepth 1 -exec tar czf {}.tar.gz {} \;
Now I'd like to add an encryption step to this command, if possible using gnupg.
Can someone help?
No, you can't put a pipeline of multiple commands directly into find's -exec option.
On the other hand, you can easily iterate over the results. For example, in bash you can do:
find . -maxdepth 1 -mindepth 1 -type d | while IFS= read -r dir; do
    tar czf - "${dir}" | gpg --output "${dir}.tar.gz.asc" --encrypt --recipient foo@example.com
done
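To get one of these folders back later, the reverse pipeline would be (a sketch; it assumes you hold the private key for foo@example.com, and somedir.tar.gz.asc stands for whichever archive you want to restore):
gpg --decrypt ./somedir.tar.gz.asc | tar xzf -
gpg writes the decrypted gzipped tar to stdout, and tar unpacks it into the current directory.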
I need to delete log files older than 60 days, and compress files that are older than 30 days but younger than 60 days. I have to remove and compress files from the two paths listed in PURGE_DIR_PATH.
I also have to take the output of the find command and redirect it to a log file; basically, I need to create an entry in the log file whenever a file is deleted. How can I achieve this?
I also have to validate whether each directory path is valid or not and put a message in the log file either way.
I have written a shell script, but it doesn't cover all the scenarios. This is my first shell script and I need some help. How do I keep just one variable, LOG_RETENTION, and
use it to compress files where the condition is >30 days and <60 days? How do I validate whether the directories are valid or not? Is my if condition checking that?
Please let me know.
#!/bin/bash
LOG_RETENTION=60
WEB_HOME="/web/local/artifacts"
ENG_DIR="$(dirname $0)"
PURGE_DIR_PATH="$(WEB_HOME)/backup/csvs $(WEB_HOME)/home/archives"
if[[ -d /PURGE_DIR_PATH]] then echo "/PURGE_DIR_PATH exists on your filesystem." fi
for dir_name in ${PURGE_DIR_PATH}
do
    echo $PURGE_DIR_PATH
    find ${dir_name} -type f -name "*.csv" -mtime +${LOG_RETENTION} -exec ls -l {} \;
    find ${dir_name} -type f -name "*.csv" -mtime +${LOG_RETENTION} -exec rm {} \;
done
Off the top of my head -
#!/bin/bash
CSV_DELETE=60
CSV_COMPRESS=30
WEB_HOME="/web/local/artifacts"
PURGE_DIR_PATH=( "${WEB_HOME}/backup/csvs" "${WEB_HOME}/home/archives" ) # array, not a single string
# eliminate the oldest
find "${PURGE_DIR_PATH[#]}" -type f -name "*.csv" -mtime +${CSV_DELETE} |
xargs -P 100 rm -f # run 100 in bg parallel
# compress the old-enough after the oldest are gone
find "${PURGE_DIR_PATH[#]}" -type f -name "*.csv" -mtime +${CSV_COMPRESS} |
xargs -P 100 gzip # run 100 in bg parallel
Shouldn't need loops.
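The question also asked about logging each deleted file and checking that the directories exist. One way to bolt that onto the script above (a sketch reusing its variables; the log file location is an assumption, and the small loop is only there to write the existence messages):
LOG_FILE="${WEB_HOME}/purge.log"   # assumed location for the log
# validate each directory and record the result
for d in "${PURGE_DIR_PATH[@]}"; do
    if [[ -d "$d" ]]; then
        echo "$d exists on your filesystem." >> "$LOG_FILE"
    else
        echo "$d is not a valid directory." >> "$LOG_FILE"
    fi
done
# -print before the action writes every affected file name into the log
find "${PURGE_DIR_PATH[@]}" -type f -name "*.csv" -mtime +${CSV_DELETE} -print -delete >> "$LOG_FILE"
find "${PURGE_DIR_PATH[@]}" -type f -name "*.csv" -mtime +${CSV_COMPRESS} -print -exec gzip {} + >> "$LOG_FILE"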
I am trying to write a command that looks for files older than 14 days and tars them. I have tried many things, but what happens is that find produces the file names and the tar command just writes the names into one file.
Command used:
find /dir/subdir/ -type f -mtime +14 | tar -cvf data.tar -T -
I am not strictly tied to tar; gzip will also do.
The operating system is AIX.
Please consider the following:
find /dir/subdir/ -type f -mtime +14 > file.list
tar -cvf data.tar -L file.list
You may need to modify the find call, using something like the -print0 switch on Linux, if your file names contain whitespace-like symbols.
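For reference, on Linux with GNU find and GNU tar the whitespace-safe variant would look roughly like this (a sketch assuming GNU tools are available; stock AIX tar takes the list via -L, GNU tar via -T):
find /dir/subdir/ -type f -mtime +14 -print0 > file.list
tar --null -cvf data.tar -T file.list
Here -print0 separates the names with NUL bytes and --null tells tar to read them that way.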
Why not simply use
find /dir/subdir/ -type f -mtime +14 -exec tar -cvf foo.tar {} +
With + instead of \;, find hands tar batches of file names instead of re-running tar (and recreating foo.tar) once per file.
Consider the following folder structure
root/dirA/the_folder
root/dirA/dir2/the_folder
root/dirB/the_folder
root/dirB/dir2/the_folder
I want to recursively find and tar dirA/the_folder and dirB/the_folder. However, when I use
find root/ -name 'the_folder' -type d | xargs tar cvf myTar.tar
it packs all the folders (including dir2/the_folder), and I don't want that. What is the solution?
In your case, wouldn't just this be enough?
tar cfv mytar.tar root/*/the_folder/
Use the -maxdepth option of find to limit the recursion depth:
find root/ -maxdepth 2 -name 'the_folder' -type d
Try man find for lots of useful options that find offers. You will be surprised. For example, you can do away with the | xargs by using find's -exec option:
find root/ -maxdepth 2 -name 'the_folder' -type d -exec tar cvf myTar.tar {} +
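One caveat worth knowing: with {} + find may still invoke tar more than once if there are very many matches, and each tar cvf invocation recreates the archive, so only the last batch would survive. Listing the result is a quick sanity check:
tar tvf myTar.tar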
I have an onsite backup folder /backup/ that has an offsite rsynced copy mounted locally as /mnt/offsite/backup/. My onsite backup drive is getting full, so I'd like to delete files older than 365 days, but first check to see if that file exists offsite, and log to file the filenames that were removed (to exclude from rsync).
I've come close with this:
cd /mnt/offsite/backup && find . -type f -mtime +365 -exec rm /backup/{} \; | >> file.lst
However, the redirection isn't working. I've tried placing the >> in different places and can't get it to work with -exec in there. I've also tried using xargs rm and can get the redirect working, but I can't get xargs to delete from the second path:
cd /mnt/offsite/backup && find . -type f -mtime +365 >> file.lst | xargs rm /backup/
What's the best approach?
Hope this helps
cd /mnt/offsite/backup && find . -type f -mtime +365 -exec rm /backup/{} \; -print >> file.lst
The -print only fires when the preceding rm succeeded, so file.lst ends up with exactly the files that were removed from /backup/.
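If you prefer the existence check to be explicit (and the onsite/offsite distinction visible), a loop over the offsite file list reads more clearly; a sketch using the same paths as the question:
cd /mnt/offsite/backup || exit 1
find . -type f -mtime +365 | while IFS= read -r f; do
    # the file is present offsite (find just listed it); remove the onsite copy
    if [ -e "/backup/$f" ]; then
        rm "/backup/$f" && printf '%s\n' "$f" >> file.lst
    fi
done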
I've so far figured out how to use find to recursively unzip all the files:
find . -depth -name `*.zip` -exec /usr/bin/unzip -n {} \;
But I can't figure out how to remove the zip files one at a time after extraction. Adding rm *.zip in an -a -exec ends up deleting most of the zip files in each directory before they are extracted. Piping through a script containing the rm command (with -i enabled for testing) causes find to not find any *.zips (or at least that's what it complains about). There is, of course, whitespace in many of the filenames, but at this point syntaxing a sed command to add _'s is a bit beyond me. Thanks for your help!
have you tried:
find . -depth -name '*.zip' -exec /usr/bin/unzip -n {} \; -exec rm {} \;
or
find . -depth -name '*.zip' -exec /usr/bin/unzip -n {} \; -delete
or running a second find after the unzip one
find . -depth -name '*.zip' -exec rm {} \;
Thanks for the second command with -delete! It helped me a lot.
Just two (maybe helpful) remarks from my side:
- I had to use '*.zip' instead of `*.zip` on my Debian system.
- Use -execdir instead of -exec: this extracts each zip file within its own folder; otherwise you end up with all the extracted content in the directory you invoked the find command from.
find . -depth -name '*.zip' -execdir /usr/bin/unzip -n {} \; -delete
Thanks & regards,
Nord
As mentioned above, this should work.
find . -depth -name '*.zip' -execdir unzip -n {} \; -delete
However, note two things:
The -n option instructs unzip to not overwrite existing files. You may not know if the zip files differ from the similarly named target files. Even so, the -delete will remove the zip file.
If unzip can't unzip the file, say because of an error, the -delete (or a chained -exec rm {} \;) is normally skipped, since each action only runs when the one before it succeeded. But unzip can exit successfully even when -n made it skip everything, and in that case the archive is still removed.
A safer solution might be to move the files following the unzip to a separate directory that you can trash when you're sure you have extracted all the files successfully.
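A sketch of that safer approach; the holding directory name is made up, and it lives outside the tree being searched so the moved zips aren't picked up again:
mkdir -p ../zip-done    # hypothetical holding area outside the search tree
# the mv only runs when the preceding unzip reported success
find . -depth -name '*.zip' -execdir unzip -n {} \; -exec mv -- {} ../zip-done/ \;
Note that identically named zips from different subdirectories will collide in ../zip-done, so inspect it before emptying it.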
Unzip archives in subdir based on the file name (../file.zip -> ../file/..):
find . -depth -name '*.zip' | while IFS= read -r F; do unzip "$F" -d "${F%.*}/" && rm "$F"; done
I have a directory filling up with zipped csv files. External processes are writing new zipped files to it often. I wish to bulk unzip and remove the originals as you do.
To do that I use:
unzip '*.zip'
find . -type f -name '*.csv' | sed 's/$/.zip/' | xargs -n 1 rm
It works by matching and extracting all the zip files presently in the directory. Later, after it finishes, there may be new zip files mixed in there that arrived in the meantime and have not been unzipped yet, so they must not be deleted.
So I delete only the archives that were actually extracted: I find the successfully unzipped *.csv files, use sed to regenerate the original zip filenames, and feed those to rm via xargs.
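For what it's worth, the same idea can be written whitespace-safely without sed; this sketch assumes, as the command above does, that every archive is named after the csv it contains (file.csv.zip):
unzip '*.zip'
# delete only the zips whose csv really came out; NUL-delimited so spaces survive
find . -type f -name '*.csv' -print0 | while IFS= read -r -d '' csv; do
    rm -f -- "$csv.zip"
done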