copy files from subdirectories into same subdirectory in another folder - terminal

I have a folder with subdirectories like:
Folder1
|---subfolder1
|   |---file1
|---subfolder2
|   |---file1
|---subfolder3
|   |---file1
...
I have another folder that has some of the same subfolders, but with new files like:
Folder2
|---subfolder2
|   |---file1
|   |---file2
|---subfolder3
|   |---file1
|   |---file2
...
How can I copy all the file2 files from Folder2 into the matching subdirectories in Folder1?
I've been looking for a really long time, and the closest I found was:
find . -name "file2" -exec cp --parents {} Folder2 \;
which performs (from inside Folder1): copy every file named file2 into Folder2 under the same subdirectory (which I think --parents specifies?)... but it doesn't work (maybe because Folder1 has more subfolders than Folder2?).
Thanks.
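One approach (a sketch, untested against your exact layout): run find from inside Folder2 rather than Folder1, so the relative paths that find prints mirror the subdirectory layout you want cp --parents to recreate under Folder1. This assumes Folder1 and Folder2 are sibling directories; adjust ../Folder1 otherwise:

```shell
# Run from inside Folder2, so each match is printed as ./subfolderN/file2
# and cp --parents recreates that path under ../Folder1.
cd Folder2
find . -name "file2" -exec cp --parents {} ../Folder1 \;
```

Your original command has the direction reversed: run from Folder1, it copies Folder1's files into Folder2.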

Related

Move all *.mp4 files from all subfolders to another specified folder

The parent directory has 5 subfolders; each subfolder has .mp4 files, .txt files, and other file extensions. From inside the parent folder, what terminal command pulls only the *.mp4 files into another specified folder, in Bash?
find /path/to/src -type f -name "*.mp4" | xargs -iF mv F /path/to/dst
From inside the specified parent directory, I move the files to the other specified folder, which I assume is ../other-spec-dir (a folder that is not in the search path of find):
find . -type f -name "*.mp4" -exec mv {} ../other-spec-dir \;
Note that if there are files with identical names, only the last one will survive.
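If that overwriting is a concern, a variant of the same command (assuming GNU mv and find) skips any file whose name already exists at the destination instead of clobbering it:

```shell
# Move every .mp4 under the current tree into ../other-spec-dir,
# skipping names that already exist there (mv -n, GNU coreutils).
# -t names the target dir so find can batch files with "{} +".
find . -type f -name "*.mp4" -exec mv -n -t ../other-spec-dir {} +
```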

copying filenames from files of one folder to files of another via bash

I have many different folders with the exact same amount of files in the exact same order.
e.g.:
Folder1 Folder2 Folder3
AFile.jpg 9001.jpg 13004.jpg
BFile.jpg 9002.jpg 13005.jpg
Cfile.jpg 9003.jpg 13006.jpg
I want to copy the filenames of Folder1 to every other folder, so that the outcome is:
Folder1 Folder2 Folder3
AFile.jpg AFile.jpg AFile.jpg
BFile.jpg BFile.jpg BFile.jpg
Cfile.jpg Cfile.jpg Cfile.jpg
However, every suggestion I've found for renaming multiple files only renames them using the same replacement string.
Is there a possibility to do this via a bash script?
Here some test directories and files, just to show one way of doing it.
Create the directories.
mkdir Folder{1..3}
Create the files inside the directories.
touch Folder1/{A..C}file.jpg
touch Folder2/{9001..9003}.jpg
touch Folder3/{13004..13006}.jpg
Check the files.
ls Folder{1..3}
The output
Folder1:
Afile.jpg Bfile.jpg Cfile.jpg
Folder2:
9001.jpg 9002.jpg 9003.jpg
Folder3:
13004.jpg 13005.jpg 13006.jpg
The script
#!/usr/bin/env bash
f1=(Folder1/*.jpg)
for dir in Folder{2..3}; do
( cd "$dir"/ || exit; files=(*.jpg); for i in "${!f1[@]}"; do mv -v "${files[$i]}" "${f1[$i]##*/}"; done )
done
The output of the script.
renamed '9001.jpg' -> 'Afile.jpg'
renamed '9002.jpg' -> 'Bfile.jpg'
renamed '9003.jpg' -> 'Cfile.jpg'
renamed '13004.jpg' -> 'Afile.jpg'
renamed '13005.jpg' -> 'Bfile.jpg'
renamed '13006.jpg' -> 'Cfile.jpg'
Check the files again.
ls Folder{1..3}
The output
Folder1:
Afile.jpg Bfile.jpg Cfile.jpg
Folder2:
Afile.jpg Bfile.jpg Cfile.jpg
Folder3:
Afile.jpg Bfile.jpg Cfile.jpg
A brief explanation
Save the *.jpg files from Folder1 in an array:
f1=(Folder1/*.jpg)
cd into Folder2 and Folder3 using a loop; the || exit makes the subshell exit immediately if cd fails.
Save the *.jpg files inside Folder2 and Folder3 in an array named files: files=(*.jpg)
Loop through the indices of the f1 and files arrays; the ! in "${!f1[@]}" is what expands the indices instead of the values.
"${f1[$i]##*/}" is a form of Parameter Expansion which strips the leading path from the file name.
The code inside the ( ) runs in a subshell, so you don't have to cd back to the directory where you started the script.
( cd "$dir"/ || exit; files=(*.jpg); for i in "${!f1[@]}"; do mv -v "${files[$i]}" "${f1[$i]##*/}"; done )

Unzip Folders to Parent Directory Keeping Zipped Folder Name

I have a file structure as follows:
archives/
    zips/
        zipfolder1.zip
        zipfolder2.zip
        zipfolder3.zip
        ...
        zipfolderN.zip
I have a script that unzips these archives into the parent directory "archives", but it is unzipping each zip's contents directly into "archives". I need each zip to unpack into its own folder under "archives". The resultant file structure should look like this:
archives/
    zips/
        zipfolder1.zip
        zipfolder2.zip
        ...
    zipfolder1/
        contents...
    zipfolder2/
        contents...
    ...
I am currently using the following:
find /home/username/archives/zips/*.zip -type f | xargs -i unzip -d ../ -q '{}'
How can I modify this line to keep the original folder names? Is it as simple as using ../*?
You could use basename to extract each zip into the desired directory; passing the file name to sh as a positional argument avoids quoting problems with {} embedded in the command string:
find /home/username/archives/zips/*.zip -type f -exec sh -c 'unzip -q "$1" -d ../"$(basename "$1" .zip)"' sh {} \;

How to copy a directory structure but only include certain files

I found a solution for my question in Windows but I'm using Ubuntu: How to copy a directory structure but only include certain files using Windows batch files?
As the title says, how can I recursively copy a directory structure but only include some files? For example, given the following directory structure:
folder1
    folder2
        folder3
            data.zip
            info.txt
            abc.xyz
    folder4
        folder5
            data.zip
            somefile.exe
    someotherfile.dll
The files data.zip and info.txt can appear everywhere in the directory structure. How can I copy the full directory structure, but only include files named data.zip and info.txt (all other files should be ignored)?
The resulting directory structure should look like this:
copy_of_folder1
    folder2
        folder3
            data.zip
            info.txt
    folder4
        folder5
            data.zip
Could you tell me a solution for Ubuntu?
$ rsync --recursive --include="data.zip" --include="*.txt" --filter="-! */" dir_1 copy_of_dir_1
To exclude dir3 regardless of where it is in the tree (even if it contains files that would match the --includes):
--exclude 'dir3/' (before `--filter`)
To exclude dir3 only at a specific location in the tree, specify an absolute path, starting from your source dir:
--exclude '/dir1/dir2/dir3/' (before `--filter`)
To exclude dir3 only when it's in dir2, but regardless of where dir2 is:
--exclude 'dir2/dir3/' (before `--filter`)
Wildcards can also be used in the path elements where * means a directory with any name and ** means multiple nested directories.
To specify only files and dirs to include, run two rsyncs, one for the files and one for the dirs. The problem with getting it done in a single rsync is that when you don't include a dir, rsync won't enter the dir and so won't discover any files in that branch that may be matching your include filter. So, you start by copying the files you want while not creating any dirs that would be empty. Then copy any dirs that you want.
$ rsync --recursive --prune-empty-dirs --include="*.txt" --filter="-! */" dir_1 copy_of_dir_1
$ rsync --recursive --include '/dir1/dir2/' --include '/dir3/dir4/' --filter="-! */" dir_1 copy_of_dir_1
You can combine these if you don't mind that your specified dirs don't get copied if they're empty:
$ rsync --recursive --prune-empty-dirs --include="*.txt" --include '/dir1/dir2/' --include '/dir3/dir4/' --filter="-! */" dir_1 copy_of_dir_1
The --filter="-! */" is necessary because rsync includes all files and folders that match none of the filters (imagine it as an invisible --include filter at the end of the list of filters). rsync checks each item to be copied against the list of filters and includes or excludes the item depending on the first match it finds. If there's no match, it hits that invisible --include and goes on to include the item. We wanted to change this default to --exclude, so we added an exclude filter (the - in -! */), then we negate the match (!) and match all dirs (*/). Since this is a negated match, the result is that we allow rsync to enter all the directories (which, as I mentioned earlier, allows rsync to find the files we want).
We use --filter instead of --exclude for the final filter because --exclude does not allow specifying negated matches with the ! operator.
I don't have a beautiful one-liner, but since nobody else has answered, you can always run:
find . -name 'file_name.extension' -print | cpio -pavd /path/to/receiving/folder
for each specific file name, after copying the directories.
(Make sure you're in the original folder first, of course! :) )
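A small sketch of that idea, with hypothetical src/dst paths: cpio -p (pass-through mode) reads the file list from find on stdin and copies it into the target, and -d creates any intermediate directories, so the structure around each match is preserved without a separate directory-copying pass:

```shell
# Copy every data.zip out of src/ into dst/, recreating the
# directory structure around each match (paths are hypothetical).
mkdir -p dst
cd src
find . -name 'data.zip' -print | cpio -pd ../dst
```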
Here is a one-liner using rsync:
rsync -a -f"+ info.txt" -f"+ data.zip" -f'-! */' folder1/ copy_of_folder1/
If you already have a file list, and want a more scalable solution
cat file.list | xargs -i rsync -a -f"+ {}" -f'-! */' folder1/ copy_of_folder1/
cp -pr folder1 copy_of_folder1; find copy_of_folder1 -type f ! \( -name data.zip -o -name info.txt \) -exec rm -f {} \;
First step: copy folder1 entirely to copy_of_folder1.
Second step: delete every file other than data.zip and info.txt.
At the end, you have your complete structure with only the files data.zip and info.txt.

BASH script to compare two directories & copy contents into a third directory?

Basically I'm trying to copy the contents of both dir1 and dir2 (excluding subdirectories) into dir3. The caveat is that if a file exists in both dir1 and dir2, I need to copy the newer one. Let's say the newer file exists in dir2.
I had:
find dir1 -type f -exec cp {} dir3 \;
find dir2 -type f -exec cp -u {} dir3 \;
Doing it this way leads to this problem: since the files from dir1 are copied before dir2, all files from dir1 (that are now in dir3) are considered newer, and won't be overwritten.
I'm thinking that you have to process each file in dir1, check if it exists in dir2, and then check which is newer. However, I'm not sure how to do this, besides that you could use "-nt". I'm thinking that I'm just going about this the wrong way.
cp -vfudp dir1/* dir3/
cp -vfudp dir2/* dir3/
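A minimal check of what -u buys you here, using a hypothetical shared.txt present in both source dirs:

```shell
# -u only overwrites a destination file when the source is newer,
# so whichever dir holds the newer copy wins regardless of copy order.
mkdir -p dir1 dir2 dir3
printf 'old\n' > dir1/shared.txt
touch -d '2020-01-01' dir1/shared.txt    # backdate dir1's copy
printf 'new\n' > dir2/shared.txt         # dir2's copy is newer
cp -vfudp dir1/* dir3/
cp -vfudp dir2/* dir3/                   # shared.txt is replaced: source is newer
```

Note that -d and -p preserve symlinks and attributes, and without -r, subdirectories of dir1 and dir2 are skipped, which matches the "excluding subdirectories" requirement.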
