Copy multiple files from one directory to multiple other directories - bash

I have a directory structure
Dir_1
Dir_2
Dir_3
Source
The directory Source contains the files File_1.txt and File_2.txt.
I want to copy all the files from the directory Source to all the remaining directories, in this case Dir_1, Dir_2 and Dir_3.
For this, I used
for i in $(ls -d */ | grep -v 'Source'); do echo $i | xargs -n 1 cp ./Source/*; done
I, however, keep getting the message
cp: target ‘5’ is not a directory
It seems cp has problems with the directory names which have spaces in them. How do I resolve this (keeping the spaces in the directory names, obviously)?

Using find you could do something like this:
find . -mindepth 1 -maxdepth 1 -type d ! -name Source -exec cp Source/*.txt {} \;
This command searches the current directory for all subdirectories one level deep, excluding Source, and copies the text files into each of them.
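If you'd rather keep a shell loop, a plain glob also copes with directory names containing spaces, as long as the variable stays quoted (a minimal sketch, run from the parent directory, copying everything in Source):
for dir in */; do
    # skip the source directory itself
    [ "$dir" = "Source/" ] && continue
    # quoting "$dir" keeps directory names with spaces intact
    cp ./Source/* "$dir"
done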
Hope this helps :)

Related

Making a directory and moving files into that directory that match a pattern

I can pattern match files and move them into a directory using the line below. But I need to make the directory first.
(must make testdir directory first)
find . -type f -name '*-bak*' -exec mv '{}' ./testdir ';'
What I'm trying to do now is have the line of code also create the directory and move the files that match that pattern into that directory using the same line of code.
mkdir -p testdir && find . -type f -name '*-bak*' -exec mv {} testdir/ ';'
Be careful though: if you have two backups with the same name in different folders, you'll be left with only a single copy and all the other copies overwritten!
EDIT: use mv -i to get prompted in that case instead of overwriting the files
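That interactive variant would look something like this (a sketch; with GNU mv, -n or --backup=numbered are non-interactive alternatives that also avoid silent overwrites):
mkdir -p testdir && find . -type f -name '*-bak*' -exec mv -i {} testdir/ ';'   # -i prompts before overwriting an existing file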

How to copy all files with the same name into another directory using cp command

I have directory named "Documents". In this directory I have 5 files:
User1.txt
User2.txt
User3.txt
User4.txt
User5.txt
Users-info.zip
index.html
I want to copy only those files in whose names there is a word "user" to another directory. How I can do this with cp command?
Try this, it will be enough:
cp User* /path/to/dir
If you want a more unusual way:
find . -type f -name 'User*' -print0 | xargs -0 cp -t /path/to/dir/for/copies/
For your case it is:
cp User[1-9].txt /dst_dir
This copies only the files that start with User, followed by a digit, and end in .txt.
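If the match should also be case-insensitive (the question says "user" while the files start with "User"), find's -iname test is one way to express that (a sketch, assuming GNU cp for the -t option):
find . -maxdepth 1 -type f -iname 'user[1-9].txt' -exec cp -t /dst_dir {} +   # -iname matches regardless of case; -t names the target dir up front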

Get all content from a folder with subfolder copied to another directory with bash

I have a directory that has many folders in it that have subfolders again, and then there are some files. Can I write a bash script that copies all the files from the given directory into one single folder? So that I do not have to navigate through every single folder and copy the content to another folder.
In the topmost dir under which you want the files to be copied:
find . -type f -exec cp {} /some/new/location \;
Finds all the normal files and then copies them to /some/new/location
You could use find to list all files inside the folder:
find ./source -type f
and then use the output as argument for cp, it would look like this:
cp $(find ./source -type f) destination
There would be a problem if there are files within the original directory tree with conflicting names. In that case cp would refuse to copy additional files with the same name, with an error like:
cp: will not overwrite just-created 'destination/t22' with './source/test/t2/t22'
To make copies of files with same name, you can use backup option, like this:
cp --backup=numbered $(find ./source -type f) destination
If you want to see what is happening, use the -v (verbose) option:
cp -v --backup=numbered $(find ./source -type f) destination
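Note that the command substitution word-splits the file list, so names containing spaces would break it; a more robust sketch hands the names from find to cp directly (assuming GNU cp for -t and --backup):
find ./source -type f -exec cp -v --backup=numbered -t destination {} +   # -t takes the target dir first; {} + passes many files per cp call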

How to remove intermediate folders containing only one folder each?

I had been playing around with mv, and now I have a situation.
Earlier, say
Folder1 had file1,2,3.
Now Folder1 has Folder2 which has Folder3 which has Folder4 which contains file1,2,3.
I am trying to write a bash script such that it identifies intermediate folders containing only 1 directory and moves all its contents up one level, ultimately giving back only Folder1->file1,2,3, and rest folders deleted.
I tried to write something like the script below, but I am:
1. unable to distinguish between a file and a folder
2. unable to find the file/directory name stored inside the current folder
3. not sure how to do this recursively.
#!/bin/bash
echo "Directory Name?"
read dir_name
no_files=`ls -A| wc -l`
if [ $no_file==1 ] && [ itisaDirectory()];
then `mv folder_name/* dir_name`
fi
When you do not care for error messages and want to move all files in subdirs to the current dir and remove the remaining empty dir, do something like
find . -type f -exec mv {} "${dir_name}" \; 2>/dev/null
rm -r */
You ask for something else: only move files where an intermediate directory is unique. That is the case when exactly one subdir has that dir as its parent. The parent of a dir can be found with dirname.
When a dir has one subdir, only one subdir will have it as a parent. You can list all dirs, look for the parent and select the unique paths.
find . -type d -exec dirname {} \; | sort | uniq -u | while read dir; do
echo "${dir} has exactly one subdir"
done
The problem is that the dir can have files as well. We try to improve the above solution:
find . -exec dirname {} \; | sort | uniq -u | while read dir; do
echo "${dir} has exactly one subdir or one file"
done
You can test the content of the dir with if [ -d "${dir}/*" ], but we do not actually need to know:
find . -exec dirname {} \; | sort | uniq -u | while read dir; do
echo "${dir} has exactly one subdir or one file"
find "${dir}"/*/ -type f -exec mv {} "${dir_name}" \; 2>/dev/null
done
The path ${dir}/*/ will only exist when ${dir} has a subdirectory in it, so the inner find will move the files beneath it. When $dir only has one file, the find command will find nothing.
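Once the files have been moved up, the now-empty intermediate folders can be removed in a second pass (a sketch, assuming GNU find; -delete implies a depth-first walk, and -empty makes sure nothing that still has content is touched):
find . -mindepth 1 -type d -empty -delete   # deletes only directories that are already empty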

shell entering each folder and zip content

So I have some folder
|-Folder1
||-SubFolder1
||-SubFolder2
|-Folder2
||-SubFolder3
||-SubFolder4
Each subfolder contains several jpgs that I want to zip to the root folder...
I'm a little bit stuck on "How to enter each folder"
Here is my code:
find ./ -type f -name '*.jpg' | while IFS= read i
do
foldName=${PWD##*/}
zip ../../foldName *
done
Ideally I would store FolderName+SubFolderName and give that to the zip command as the archive name...
Zipping JPEGs (for Compression) is Usually Wasted Effort
First of all, attempting to compress already-compressed formats like JPEG files is usually a waste of time, and can sometimes result in archives that are larger than the original files. However, it is sometimes useful to do so for the convenience of having a bunch of files in a single package.
Just something to keep in mind. YMMV.
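If the goal is only the packaging, zip can store the files without compressing them, which skips the wasted effort (a sketch; photos.zip is just a placeholder name):
zip -0 -r photos.zip Folder1   # -0 means "store only", -r recurses into subfolders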
Use Find's -execdir Flag
What you need is the find utility's -execdir flag. The GNU find man page says:
-execdir command {} +
Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find.
For example, given the following test corpus:
cd /tmp
mkdir -p foo/bar/baz
touch foo/bar/1.jpg
touch foo/bar/baz/2.jpg
you can zip the entire set of files with find while excluding the path information with a single invocation. For example:
find /tmp/foo -name \*jpg -execdir zip /tmp/my.zip {} +
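You can then check that only the bare file names ended up in the archive, without any path components (assuming unzip is available):
unzip -l /tmp/my.zip   # lists the archive contents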
Use Zip's --junk-paths Flag
The zip utility on many systems supports a --junk-paths flag. The man page for zip says:
--junk-paths
Store just the name of a saved file (junk the path), and do not store directory names.
So, if your find utility doesn't support -execdir, but you do have a zip that supports junking paths, you could do this instead:
find /tmp/foo -name \*jpg -print0 | xargs -0 zip --junk-paths /tmp/my.zip
You can use dirname to get the name of the directory that a file or directory is located in.
You can also simplify the find command to search only for directories by using -type d. Then you should use basename to get only the name of the subdirs:
find ./*/* -type d | while read line; do
zip --junk-paths "$(basename "$line")" "$line"/*.jpg
done
Explanation
find ./*/* -type d
will print out all directories located in ./*/* which will result in all subdirs of directories located in the current dir
while read line reads each line from the stream and stores it in the variable "line". Thus $line will be the relative path to the subdir, e.g. "Folder1/Subdir2"
"$(basename "$line")" returns only the name of the subdir, e.g. "Subdir2"
Update: add --junk-paths to the zip command if you do not want the directory paths to be stored in the zip file
So, a little update: I finally got something working:
find ./*/* -type d | while read line; do
#printf '%s\n' "$line"
zip ./"$line" "$line"/*.jpg
done
But this creates an archive containing:
Subfolder.zip
Folder
|-Subfolder
||-File1.jpg
||-File2.jpg
||-File3.jpg
Instead I would like it to be:
Subfolder.zip
|-File1.jpg
|-File2.jpg
|-File3.jpg
So I tried using basename and dirname in different combinations... Always got some error...
And just to learn how: what if I would like the new archive to be created in the same root directory as "Folder"?
Ok finally got it!
find ./* -name \*.zip -type f -print0 | xargs -0 rm -rf
find ./*/* -type d | while read line; do
#printf '%s\n' "$line"
zip --junk-paths ./"$line" "$line"/*.jpg
done
find . -name \*.zip -type f -mindepth 2 -exec mv -- '{}' . \;
In the first row I simply remove all .zip files,
then I zip everything, and in the final row I move all the zips to the root directory!
Thanks everybody for your help!
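For the follow-up question about creating the archives directly in the top-level directory, one sketch zips straight into ./ so the final mv pass is no longer needed (this assumes the subfolder names are unique across folders, otherwise two subfolders would contend for the same zip name):
find ./*/* -type d | while IFS= read -r line; do
    # "$(basename "$line").zip" lands in the current (top-level) directory
    zip --junk-paths "./$(basename "$line").zip" "$line"/*.jpg
done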
