How to put a copy of a file in all folders using a bash script?

I have one config file that I want to copy into all the folders at some location using a bash script. Previously I used a line like this:
aws s3 cp ${CONFIG_FILE} ${S3_URI}config.json
It copied one file to my location on the server. Now I have multiple folders in this location and each one needs the config file.
How can I write a loop for that? I'm new to bash, so it's a bit hard for me to figure out.

for d in $(find /base/path/of/your/target/dirs -type d); do cp your_file "$d"; done

Another solution that avoids the explicit for loop, since find can do this out of the box:
find /lookup/path/ -type d -exec cp config.json {} \;
This command searches the given path for directories, then uses -exec to copy the file into each one in turn.
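Since the original question targets S3 rather than a local filesystem, here is a minimal sketch of the same loop against S3 prefixes. It assumes ${S3_URI} ends with a slash and that each subfolder appears as a PRE entry in the listing; the bucket and variable values are placeholders:
CONFIG_FILE=config.json
S3_URI=s3://my-bucket/some/location/   # placeholder; must end with a slash
# List the immediate "subfolders" (common prefixes) and copy the config into each
aws s3 ls "$S3_URI" | awk '$1 == "PRE" {print $2}' | while read -r prefix; do
    aws s3 cp "$CONFIG_FILE" "${S3_URI}${prefix}config.json"
done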

Related

Batch copy files from subdirectories to a new folder?

I would like to batch copy specific files that end with fastq.gz from each folder (each with a unique name) to a new directory, but it keeps giving me an error saying that the files cannot be found. Is it because I am using the wildcard wrong?
for f in ./*/split-adapter-quality-trimmed/*.fastq.gz; do
cp *fastq.gz ../../new;
done
The loop for f in ./*/split-adapter-quality-trimmed/*.fastq.gz already puts each filename ending in .fastq.gz into the variable f, one at a time. So use it directly in cp (cp "$f" destination) inside the loop. If you put an echo "$f" inside the loop, you can see all the files and verify them before running cp.
for f in ./*/split-adapter-quality-trimmed/*.fastq.gz; do
cp "$f" ../../new;
done
Unless you absolutely want to use a for loop, you can do it all with one find command:
find ./*/split-adapter-quality-trimmed -name "*fastq.gz" -exec cp {} ../../new \;
It browses the directories matching ./*/split-adapter-quality-trimmed, looks for every file ending in fastq.gz, and runs the needed cp command for each one (the -exec command line is terminated by the escaped semicolon):
cp <found-path> ../../new
(The wildcard pattern *fastq.gz is quoted to stop Bash from expanding it before find sees it, just in case; the semicolon is escaped for the same reason.)
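If any of those paths might contain spaces, a null-delimited pipeline is a safer sketch of the same copy (assuming bash and a find that supports -print0):
find . -path './*/split-adapter-quality-trimmed/*' -name '*.fastq.gz' -print0 |
while IFS= read -r -d '' f; do
    cp "$f" ../../new
done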

Loop through and unzip directories and then unzip items in subdirectories

I have a folder laid out in the following way:
-parentDirectory
---folder1.zip
----item1
-----item1.zip
-----item2.zip
-----item3.zip
---folder2.zip
----item1
-----item1.zip
-----item2.zip
-----item3.zip
---folder3.zip
----item1
-----item1.zip
-----item2.zip
-----item3.zip
I would like to write a bash script that will loop through and unzip the folders and then go into each subdirectory of those folders and unzip the files and name those files a certain way.
I have tried the following:
cd parentDirectory
find ./ -name \*.zip -exec unzip {} \;
count=1
for fname in *
do
unzip
mv $fname $attempt{count}.cpp
count=$(($count + 1))
done
I thought the first two lines would go into the parentDirectory folder and unzip all zips in that folder and then the for loop would handle the unzipping and renaming. But instead, it unzipped everything it could and placed it in the parentDirectory. I would like to maintain the same directory structure I have.
Any help would be appreciated
An excerpt from man unzip:
[-d exdir]
An optional directory to which to extract files. By default, all files and subdirectories are recreated in the current directory; the -d option allows extraction in an arbitrary directory (always assuming one has permission to write to the directory).
It's doing exactly what you told it, and what would happen if you had done the same on the command line. Just tell it where to extract, since you want it to extract there.
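For example, to extract folder1.zip into a matching directory instead of the current one (the names here are just illustrative):
unzip folder1.zip -d folder1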
See Ubuntu bash script: how to split path by last slash? for an example of splitting the path out of fname.
Putting it all together, your command executed in parentDirectory is:
find ./ -name \*.zip -exec unzip {} \;
But you want unzip to extract into the directory where it found the file. I was going to just put backticks around dirname {}, but I can't get it to work right: it either runs on the literal {} before find does, or never runs at all.
The easiest workaround was to write my own script for unzip which does it in place.
> cat unzip_in_place
unzip "$1" -d "$(dirname "$1")"
> find . -name "*.zip" -exec ./unzip_in_place {} \;
You could probably alias unzip to do that automatically, but that is unwise in case you ever use other tools that expect unzip to work as documented.
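If your find supports -execdir (GNU and BSD find both do), you can skip the wrapper script entirely: -execdir runs the command from the directory containing each match, so a plain unzip extracts in place:
find . -name "*.zip" -execdir unzip {} \;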

How to copy recursively files with multiple specific extensions in bash

I want to copy all files with specific extensions recursively in bash.
Edit:
I've written the full script. I have a list of names in a CSV file. For each name in the list, I create a directory with that name somewhere else, then look in my source directory for the directory with that name; inside it there are a few files ending in xlsx, tsv, html, and gz, and I'm trying to copy all of them into the newly created directory.
sample_list_filepath=/home/lists/papers
destination_path=/home/ds/samples
source_directories_path=/home/papers_final/new
cat $sample_list_filepath/sample_list.csv | while read line
do
echo $line
cd $source_directories_path/$line
cp -r *.{tsv,xlsx,html,gz} $source_directories_path/$line $destination_path
done
This works, but it copies all the files there, with no discrimination for specific extension.
What is the problem?
An easy way to solve your problem is to use find with a regex:
find src/ -regex '.*\.\(tsv\|xlsx\|gz\|html\)$' -exec cp {} dest/ \;
find looks recursively through the directory you specify (src/ in this example), lets you filter matches with -regex, and applies a command to each match with -exec.
For the regex part:
.*\.
matches the file name up to and including the dot before the extension, and
\(tsv\|xlsx\|gz\|html\)$
checks that the extension is one of those you want, anchored at the end of the name.
The -exec part says what to do with each file the regex matched:
-exec cp {} dest/ \;
Here, each match ({} stands for the found path) is copied to the destination directory.
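Note that -regex syntax differs between find implementations. A more portable sketch of the same filter groups several -name tests with -o:
find src/ -type f \( -name '*.tsv' -o -name '*.xlsx' -o -name '*.gz' -o -name '*.html' \) -exec cp {} dest/ \;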

How to copy files in a directory to another directory using a shell script

I have lots of directories like Tmp/a-1, a-2 ... up to a-1000.
Each Tmp directory contains a file called log.
I have to go into every directory and rename the file log to log_orig.
How to do this via script?
find -name log -type f -exec mv {} {}_orig \; will do (as long as you don't have log files in other directories that you don't want to touch).
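One caveat: expanding {} inside a larger word like {}_orig is a GNU find extension. A sketch that should work with any POSIX find hands the renaming to a small inline shell instead:
# Portable variant: the inline sh builds the new name from its first argument
find . -name log -type f -exec sh -c 'mv "$1" "${1}_orig"' sh {} \;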

bash script for copying files between directories

I am writing a script to copy *.nzb files to a folder to queue them for download.
I wrote the following:
#!/bin/bash
#This script copies NZB files from Downloads folder to HellaNZB queue folder.
${DOWN}="/home/user/Downloads/"
${QUEUE}="/home/user/.hellanzb/nzb/daemon.queue/"
for a in $(find ${DOWN} -name *.nzb)
do
cp ${a} ${QUEUE}
rm *.nzb
done
It gives me the following errors:
HellaNZB.sh: line 5: =/home/user/Downloads/: No such file or directory
HellaNZB.sh: line 6: =/home/user/.hellanzb/nzb/daemon.queue/: No such file or directory
The thing is that those directories exist, and I do have the rights to access them.
Any help would be nice.
Please and thank you.
Variable names on the left side of an assignment should be bare.
foo="something"
echo "$foo"
Here are some more improvements to your script:
#!/bin/bash
#This script copies NZB files from Downloads folder to HellaNZB queue folder.
down="/home/myusuf3/Downloads/"
queue="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
find "${down}" -name "*.nzb" | while read -r file
do
mv "${file}" "${queue}"
done
Using while instead of for, and quoting variables that contain filenames, protects filenames containing spaces from being interpreted as multiple names. Removing the rm keeps the script from repeatedly producing errors and from deleting the remaining files before any but the first is copied. The file glob for -name needs to be quoted so that find, not the shell, expands it. Habitually using lowercase variable names reduces the chance of collisions with shell or environment variables.
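If you also want to survive newlines in filenames, a null-delimited variant of the same pipeline closes that gap (a sketch assuming bash and a find with -print0):
find "${down}" -name "*.nzb" -print0 | while IFS= read -r -d '' file
do
    mv "${file}" "${queue}"
done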
If all your files are in one directory (and not in multiple subdirectories) your whole script could be reduced to the following, by the way:
mv /home/myusuf3/Downloads/*.nzb /home/myusuf3/.hellanzb/nzb/daemon.queue/
If you do have files in multiple subdirectories:
find /home/myusuf3/Downloads/ -name "*.nzb" -exec mv -t /home/myusuf3/.hellanzb/nzb/daemon.queue/ {} +
As you can see, there's no need for a loop.
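(One hedge on the + form: -exec … {} + requires {} to come immediately before the +, which is why the destination is given first via GNU mv's -t above. If your mv lacks -t, the per-file form is portable:)
find /home/myusuf3/Downloads/ -name "*.nzb" -exec mv {} /home/myusuf3/.hellanzb/nzb/daemon.queue/ \;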
The correct syntax is:
DOWN="/home/myusuf3/Downloads/"
QUEUE="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
for a in $(find ${DOWN} -name *.nzb)
# escape the * or it will be expanded in the current directory
# let's just hope no file has blanks in its name
do
cp ${a} ${QUEUE} # ok, although I'd normally add a -p
rm *.nzb # again, this is expanded in the current directory
# when you fix that, it will remove ${a}s before they are copied
done
Why don't you just use rm ${a}?
Why use a combination of cp and rm anyway, instead of mv?
Do you realize all files will end up in the same directory, and files with the same name from different directories will overwrite each other?
What if the cp fails? You'll lose your file.