How can I bulk-zip folders in subdirectories without including the parent folder in the zip archives? I have a folder structure like this:
folder01
└── folder02
    ├── file01
    └── file02
When I run:
find . -type d -name "folder02" -exec zip -r '{}'.zip '{}' \;
I get "folder02.zip" which always extracts its contents into a parent folder "folder01". How can I prevent this? For me it creates useless parent folder structures when extracting these archives anywhere else.
Using some simple bash:
find . -type d -name "folder02" -exec bash -c 'cd "$(dirname "$1")" && zip -r "$(basename "$1").zip" "$(basename "$1")"' -- {} \;
(Passing the path as $1 instead of splicing {} into the script keeps filenames containing spaces or quotes safe, and && ensures zip only runs if the cd succeeded.)
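If your find supports -execdir (GNU and BSD find both do), a shorter variant runs zip from each match's parent directory, so no cd is needed. Note that replacing {} inside a larger argument such as {}.zip is not guaranteed by POSIX, so treat this as a sketch to verify on your system:
find . -type d -name "folder02" -execdir zip -r '{}.zip' '{}' \;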
I have a script that moves files of type .txt to a particular folder. It looks for the files in a work folder and moves them to a completed folder.
I would like to make the script generic, i.e. enhance it so that it works not just for one particular folder but for other similar folders as well.
Example: if there is a .txt file in folder /tmp/swan/test/work and also in folder /tmp/swan/test11/work, the files should move to /tmp/swan/test/done and /tmp/swan/test11/done respectively.
EDIT: Also, if there is a .txt file in a subfolder like /tmp/swan/test11/work/APX, it should also move to /tmp/swan/test11/done.
Below is the current script.
#!/bin/bash
MY_DIR=/tmp/swan
cd "$MY_DIR" || exit
find . -path "*work*" -iname "*.txt" -type f -execdir mv '{}' /tmp/swan/test/done \;
With -execdir, the mv command is executed in whatever directory the file is found in. Since you just want to move the file to a "sibling" directory, each command can use the same relative path ../done. Note, though, that for a file in a deeper subfolder such as work/APX, -execdir runs in APX, so ../done would not resolve to the intended done directory; see the sketch after the next answer for that case.
find . -path "*work*" -iname "*.txt" -type f -execdir mv '{}' ../done \;
One way to do it:
Background:
$ tree
.
├── a
│   └── work
└── b
    └── work
Renaming:
find . -type f -name work -exec \
sh -c 'echo mv "$1" "$(dirname "$1")"/done' -- {} \;
Output:
mv ./a/work ./a/done
mv ./b/work ./b/done
You can remove the echo if it does what you want it to.
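With the echo removed, the final command is:
find . -type f -name work -exec \
    sh -c 'mv "$1" "$(dirname "$1")"/done' -- {} \;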
What about:
find . -path '*work/*.txt' -exec sh -c 'd=$(dirname "$(dirname "$1")")/done; mkdir -p "$d"; mv "$1" "$d"' _ {} \;
(also creates the target directory if it does not exist already).
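The edit also asks for files in deeper subfolders such as /tmp/swan/test11/work/APX to land in /tmp/swan/test11/done. One way to cover that (a sketch, assuming each relevant path contains exactly one /work/ component) is to cut the path at /work/ and append /done:
find . -path '*/work/*.txt' -type f -exec sh -c '
    for f; do
        d=${f%%/work/*}/done    # everything before the first /work/, plus /done
        mkdir -p "$d"
        mv "$f" "$d"
    done
' _ {} +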
I have a folder /opt/backup in which folders are created every day. To save space I would like to compress all folders that are older than 2 days.
I don't want to create one single archive; rather, each folder should be archived on its own, with its name preserved. I have tried:
#!/bin/bash
# Backup files
files=($(find /opt/backup/ -mtime +"2"))
for files in ${files[*]}
do
echo $files
tar cvfz backup.tar.gz $files
done
But all this does is create a single archive; I would like each folder archived separately.
The script will run every 2 days at 02:00 in the morning. How do I write this script?
You are making it too complicated. You should find the directories that are old enough and simply create a gzipped tar of each.
find /opt/backup/ -mtime +"2" -type d -exec tar cvfz backup.tar.gz {} \;
This will look for all directories (-type d) and execute a command on each of them (tar cvfz backup.tar.gz {}), in which {} is a placeholder for the directory found.
If you want to preserve the name of the dir, simply use {} a second time:
find /opt/backup/ -mtime +"2" -type d -exec tar cvfz {}.tar.gz {} \;
Note that no quotes are required around {}, since special characters are handled safely within find's -exec.
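For the every-2-days-at-02:00 schedule, a crontab entry along these lines is one option (a sketch: */2 in the day-of-month field fires on odd days of the month, which is only roughly every second day, and -mindepth 1 stops find from matching /opt/backup itself):
0 2 */2 * * find /opt/backup/ -mindepth 1 -mtime +2 -type d -exec tar cvfz {}.tar.gz {} \;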
I have a folder with tens of thousands of files of different types. I'd like to copy them all to a new folder (Copy1) but also rename them all to $RANDOM while keeping the extension intact. I realize I could write a line specifying each extension to find and how to name it, but there has got to be a way to do it dynamically, because there are at least 100 file types and there may be more in the future.
I have the following so far:
find ./ -name '*.*' -type f -exec bash -c 'cp "$1" "${1/\/123_//_$RANDOM}"' -- {} \;
but that puts the random number after the extension, and it also puts them all in the same folder. I can't figure out how to do the following two things:
1 - Keep all paths intact, but under a new root folder (Copy1)
2 - Have the name be $RANDOM.extension instead of .extension.$RANDOM
PS: by $RANDOM I mean an actual randomly generated number. I am interested in keeping the folder structure, so we are dealing with a few hundred files at most per directory, but all directories/files need to be renamed to $RANDOM. Another way to look at what I need to do: copy all contents of Folder1, with all subdirectories and files, to Folder2 (where Folder2 is a $RANDOM name), then rename all folders and files to random names but keep all extensions.
EDIT: OK, I figured out how to rename and keep the extension, but I have a problem where it's dumping all of the files into the root directory the script is run from. How do I keep them in their respective folders? The command I'm using is:
find ./ -name '*.*' -type f -exec bash -c 'mv "$1" $RANDOM.${1##*.}' -- {} \;
Thanks!
Change your command to:
PATH=/bin:/usr/bin find . -name '*.*' -type f -execdir bash -c 'mv "$1" "$RANDOM.${1##*.}"' -- {} \;
(GNU find refuses to run -execdir while the current directory is in $PATH, hence the explicit PATH override.)
Or alternatively using uuids instead of random numbers:
PATH=/bin:/usr/bin find . -name '*.*' -type f -execdir bash -c 'mv "$1" "$(uuidgen).${1##*.}"' -- {} \;
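Neither command addresses requirement 1 from the question, keeping the tree intact under a new root. A minimal sketch of that part, assuming bash and a destination literally named Copy1 as in the question (and noting that $RANDOM can collide, which the counter in the next answer works around):
dst=Copy1
find . -path "./$dst" -prune -o -type f -name '*.*' -print0 |
while IFS= read -r -d '' f; do
    rel=${f#./}                        # strip the leading ./
    dir=$dst/$(dirname "$rel")         # mirror the directory structure under Copy1
    mkdir -p "$dir"
    cp "$f" "$dir/$RANDOM.${f##*.}"    # random name, original extension kept
done
The -prune keeps find from descending into Copy1 itself in case it sits inside the source tree.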
Here's what I came up with:
i=1
random="whatever"
find . -name "*.*" -type f | while IFS= read -r f
do
    newbase=${f/*./$random$i.}    # add a counter to the filename
    cp "$f" /Path/Name/"$newbase"
    ((i++))
done
I had to add a counter (i) to the random string; otherwise, if the extensions are similar, the files would overwrite each other when copied.
In your new folder, your files should look like this :
whatever1.txt
whatever2.txt
etc etc
I hope this is what you were looking for.
Here is the command that worked for me.
find . -name '*.pdf' -type f -exec bash -c 'echo "$1" && cp "$1" "./$RANDOM.${1##*.}"' -- {} \;
I have a file structure as follows:
archives/
    zips/
        zipfolder1.zip
        zipfolder2.zip
        zipfolder3.zip
        ...
        zipfolderN.zip
I have a script that unzips the archives to the parent directory "archives", but it is unzipping their contents directly into "archives". I need each zip to become its own folder under the "archives" directory. The resultant file structure should look like this:
archives/
    zips/
        zipfolder1.zip
        zipfolder2.zip
        ...
    zipfolder1/
        contents...
    zipfolder2/
        contents...
    ...
I am currently using the following:
find /home/username/archives/zips/*.zip -type f | xargs -i unzip -d ../ -q '{}'
How can I modify this line to keep the original folder names? Is it as simple as using ../*?
You could use basename to extract each zip into a directory named after it:
find /home/username/archives/zips/*.zip -type f -exec sh -c 'unzip -q -d ../"$(basename "$1" .zip)" "$1"' _ {} \;
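Since the -d ../ target is resolved relative to wherever you run the command, a version with explicit paths is less fragile (a sketch, assuming the archives layout from the question):
find /home/username/archives/zips -name '*.zip' -type f -exec sh -c '
    for z; do
        unzip -q -d /home/username/archives/"$(basename "$z" .zip)" "$z"
    done
' _ {} +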
I've so far figured out how to use find to recursively unzip all the files:
find . -depth -name `*.zip` -exec /usr/bin/unzip -n {} \;
But I can't figure out how to remove the zip files one at a time after extraction. Adding rm *.zip in an -a -exec ends up deleting most of the zip files in each directory before they are extracted. Piping through a script containing the rm command (with -i enabled for testing) makes find complain that it can't find any *.zips. There is, of course, whitespace in many of the filenames, but at this point working a sed command into the syntax to replace it with underscores is a bit beyond me. Thanks for your help!
Have you tried:
find . -depth -name '*.zip' -exec /usr/bin/unzip -n {} \; -exec rm {} \;
or
find . -depth -name '*.zip' -exec /usr/bin/unzip -n {} \; -delete
or running a second find after the unzip one:
find . -depth -name '*.zip' -exec rm {} \;
Thanks for the second command with -delete! It helped me a lot.
Just two (maybe helpful) remarks from my side:
- I had to use '*.zip' instead of `*.zip` on my Debian system.
- Use -execdir instead of -exec: this extracts each zip file within its own folder; otherwise you end up with all the extracted content in the directory from which you invoked the find command.
find . -depth -name '*.zip' -execdir /usr/bin/unzip -n {} \; -delete
THX & Regards,
Nord
As mentioned above, this should work.
find . -depth -name '*.zip' -execdir unzip -n {} \; -delete
However, note two things:
The -n option instructs unzip not to overwrite existing files. You may not know whether the zip files differ from the similarly named target files; even so, -delete will remove the zip file.
If unzip can't extract the file, say because of an error, it might still be deleted. The command will certainly remove it if -exec rm {} \; is used in place of -delete.
A safer solution might be to move the files following the unzip to a separate directory that you can trash when you're sure you have extracted all the files successfully.
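A minimal sketch of that idea, assuming a holding directory such as /tmp/zip-trash (a hypothetical path; archives with the same basename in different directories would overwrite one another there):
mkdir -p /tmp/zip-trash
find . -depth -name '*.zip' \
    -execdir unzip -n {} \; \
    -exec mv {} /tmp/zip-trash/ \;
The mv only runs when unzip exits successfully, so failed archives stay put for inspection.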
Unzip each archive into a subdirectory named after it (../file.zip -> ../file/...); the original for-over-find would break on the whitespace mentioned above, so the loop runs inside sh -c:
find . -depth -name '*.zip' -exec sh -c 'for f; do unzip "$f" -d "${f%.*}/" && rm "$f"; done' _ {} +
I have a directory filling up with zipped csv files. External processes often write new zipped files to it. I wish to bulk-unzip them and remove the originals, as you do.
To do that I use:
unzip '*.zip'
find . -name '*.csv' | sed 's/$/.zip/' | xargs -n 1 rm
It works by expanding all zip files presently in the directory. By the time it finishes, new zip files may have arrived that have not been extracted yet and must not be deleted.
So I delete by finding the successfully extracted *.csv files, using sed to regenerate the original archive filenames, and feeding those to rm via xargs.
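A whitespace-safe sketch of the same idea, assuming bash, that everything sits in the current directory, and that the archives are named like file.csv.zip:
shopt -s nullglob
unzip -q '*.zip'
for csv in *.csv; do
    rm -f -- "$csv.zip"    # delete only the archive whose csv was extracted
done
rm -f ignores archives that are already gone, so re-running the loop is harmless.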