I am writing the following script to copy *.nzb files to a folder to queue them for download:
#!/bin/bash
#This script copies NZB files from Downloads folder to HellaNZB queue folder.
${DOWN}="/home/user/Downloads/"
${QUEUE}="/home/user/.hellanzb/nzb/daemon.queue/"
for a in $(find ${DOWN} -name *.nzb)
do
cp ${a} ${QUEUE}
rm *.nzb
done
It gives me the following error:
HellaNZB.sh: line 5: =/home/user/Downloads/: No such file or directory
HellaNZB.sh: line 6: =/home/user/.hellanzb/nzb/daemon.queue/: No such file or directory
The thing is, those directories exist and I do have the rights to access them.
Any help would be nice.
Please and thank you.
Variable names on the left side of an assignment should be bare.
foo="something"
echo "$foo"
Here are some more improvements to your script:
#!/bin/bash
#This script copies NZB files from Downloads folder to HellaNZB queue folder.
down="/home/myusuf3/Downloads/"
queue="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
find "${down}" -name "*.nzb" | while read -r file
do
mv "${file}" "${queue}"
done
Using while instead of for, and quoting variables that contain filenames, protects filenames that contain spaces from being interpreted as more than one filename (see the sketch below).
Removing the rm keeps the loop from repeatedly producing errors and failing to copy any but the first file.
The file glob for -name needs to be quoted, so that find rather than the shell expands it.
Habitually using lowercase variable names reduces the chance of name collisions with shell variables.
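To see why the quoting matters, here is a minimal sketch with a hypothetical filename containing a space:

f="two words.nzb"
for a in $f; do echo "$a"; done     # unquoted: "two" and "words.nzb" come out as separate items
for a in "$f"; do echo "$a"; done   # quoted: "two words.nzb" stays one item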
If all your files are in one directory (and not in multiple subdirectories) your whole script could be reduced to the following, by the way:
mv /home/myusuf3/Downloads/*.nzb /home/myusuf3/.hellanzb/nzb/daemon.queue/
If you do have files in multiple subdirectories:
find /home/myusuf3/Downloads/ -name "*.nzb" -exec mv -t /home/myusuf3/.hellanzb/nzb/daemon.queue/ {} +
As you can see, there's no need for a loop.
The correct syntax is:
DOWN="/home/myusuf3/Downloads/"
QUEUE="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
for a in $(find ${DOWN} -name *.nzb)
# escape the * or it will be expanded in the current directory
# let's just hope no file has blanks in its name
do
cp ${a} ${QUEUE} # ok, although I'd normally add a -p
rm *.nzb # again, this is expanded in the current directory
# when you fix that, it will remove ${a}s before they are copied
done
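To see what the comments mean by expansion in the current directory, here is a small demonstration (the file a.nzb is hypothetical):

touch a.nzb
echo find "$DOWN" -name *.nzb    # the shell expands *.nzb to a.nzb before find ever runs
echo find "$DOWN" -name \*.nzb   # find receives the literal pattern *.nzb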
Why don't you just use rm ${a}?
Why use a combination of cp and rm anyway, instead of mv?
Do you realize all files will end up in the same directory, and files with the same name from different directories will overwrite each other?
What if the cp fails? You'll lose your file.
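If you do want to keep the cp/rm pair rather than switching to mv, a safer sketch makes the removal conditional on the copy succeeding:

cp -p "$a" "$QUEUE" && rm "$a"   # remove the source only if the copy succeeded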
Related
I have a bash script I'm trying to write
I have 2 base directories:
./tmp/serve/
./src/
I want to go through all the directories in ./tmp and copy the *.html files into the same folder path in ./src
i.e. if I have an HTML file at ./tmp/serve/app/components/help/help.html, copy it to ./src/app/components/help/, and do this recursively for all subdirectories in ./tmp/.
NOTE: the folder structures should already exist, so the files just need to be copied. If a folder doesn't exist, then hopefully the script could create it for me (not what I want, but with Git I can track these folders and manually handle those loose HTML files).
I got as far as
echo $(find . -name "*.html")\n
But I'm not sure how to actually extract the file path with pwd and do what I need to; maybe it's not a one-liner and needs to be done with some vars.
something like
for i in `echo $(find /tmp/ -name "*.html")`
do
cp -r $i /src/app/components/help/
done
Going so far as to create the directories would take some more time for me. I'll try to do it on my own and see if I come up with something. But for argument's sake, if you run pwd and get a response, the pseudocode for that would be:
pwd
get response
if that directory does not exist in src create that directory
copy all the original directories contents into the new folder at /src/$newfolder
(possibly running two for loops, one to check the directory tree, and then one to go through each original directory, copying all the html files)
You can use process substitution to loop over the output of your find command, create the destination directory(ies), and then copy the file(s):
#!/bin/bash
# accept the first two parameters to the script as the src_dir and dest
# values, or simply use the default values if no parameters are passed
src_dir=${1:-/tmp/serve}
dest=${2:-src}
while read -r orig_path ; do
# To replace the first occurrence of a pattern with a given string,
# use ${parameter/pattern/string}
dest_path="${orig_path/tmp\/serve/${dest}}"
# Use dirname to remove the filename from the destination path
# and create the destination directory.
dest_dir=$(dirname "${dest_path}")
mkdir -p "${dest_dir}"
cp "${orig_path}" "${dest_path}"
done < <(find "${src_dir}" -name '*.html')
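For example, assuming the script is saved as copy_html.sh (a hypothetical name) and made executable, you could run it with the paths from the question:

./copy_html.sh ./tmp/serve src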
This script copies .html files from the src directory to the des directory (creating subdirectories if they do not exist). Find the files, strip the src directory name from each path, and copy them into the destination directory.
#!/bin/bash
for i in $(find src/ -name "*.html")
do
file="${i#src/}"                    # strip the leading "src/" from the path
mkdir -p "des/$(dirname "$file")"   # create the destination subdirectory
cp "$i" "des/$file"
done
Not sure if you must use bash constructs or not, but here is a GNU tar solution (if you use GNU tar), which IMHO is the best way to handle this situation because all the metadata for the files (permissions, etc.) are preserved:
find ./tmp/serve -name '*.html' -type f -print0 | tar --null -T - -c | tar -x -v -C ./src --strip-components=3
This finds all the .html files (-type f) in the ./tmp/serve directory and prints them nul-terminated (-print0). These filenames are sent via stdin to tar as nul-terminated literals (--null) for inclusion (-T -), creating (-c) an archive. That archive is then sent to another tar instance, which extracts it (-x), printing its contents along the way (optional: -v), changing directory to the destination (-C ./src) before commencing, and stripping (--strip-components=3) the ./tmp/serve/ prefix from the files. (You could also cd ./tmp/serve beforehand, using find . instead, and change -C to ../../src.)
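Spelled out, that last variant might look like this; note that --strip-components is no longer needed, because the paths produced by find . no longer carry the tmp/serve prefix:

cd ./tmp/serve
find . -name '*.html' -type f -print0 | tar --null -T - -c | tar -x -v -C ../../src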
Using Ubuntu 18.04. Say we have a file called debug.log. You can create a copy called debug_BACKUP.log with either of these commands:
cp debug.log debug_BACKUP.log
cp debug{,_BACKUP}.log
Alternatively, substitute cp with mv to rename the file.
Now suppose we have debug1.log and debug2.log. We would like to create copies called debug1_BACKUP.log and debug2_BACKUP.log. Is there a single command to achieve this?
When I tried either of the following:
cp debug*.log debug*_BACKUP.log
cp debug*{,_BACKUP}.log
the error is cp: target 'debug*_BACKUP.log' is not a directory.
Brace expansions are an instruction for the shell about how to rewrite your command before glob expansion takes place. They aren't passed to the command itself -- cp has no idea if a brace expansion was used. For that matter, cp doesn't even have any idea if a wildcard is used; when you run cp *.txt dir/, the shell generates an array of C strings corresponding to something like cp foo.txt bar.txt baz.txt dir/ before running it.
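A quick way to see this, assuming the directory contains only debug1.log and debug2.log, is to put echo in front of the command:

echo cp debug*{,_BACKUP}.log
# prints: cp debug1.log debug2.log debug*_BACKUP.log

The brace expansion produced two patterns, but only the first matched any files, so the second is passed to cp as a literal string.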
This means that if you want to rewrite content after wildcard expansion takes place, you need to do it by hand.
for f in debug*.log; do
[[ $f = *_BACKUP.log ]] && continue # skip things that are already backup files
cp "$f" "${f%.log}_BACKUP.log"
done
There are a few excellent bulk rename programs, including the Perl-based file-rename. You can achieve your bulk copy in three steps (sketched after the list):
Copy the files to tmp sub folder
Perform bulk rename, moving the files back into the current folder
Remove the tmp folder
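A minimal sketch of those three steps, assuming the Perl-based rename utility is installed as rename and the files match debug*.log:

mkdir tmp
cp debug*.log tmp/                                     # 1) copy the files to a tmp sub folder
rename 's|^tmp/|./|; s|\.log$|_BACKUP.log|' tmp/*.log  # 2) bulk rename, moving them back with a _BACKUP suffix
rmdir tmp                                              # 3) remove the now-empty tmp folder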
I would like to batch copy specific files that end with fastq.gz from each folder (with unique names) to a new directory, but it keeps giving me an error saying that the files cannot be found. Is it because I am using the wildcard wrong?
for f in ./*/split-adapter-quality-trimmed/*.fastq.gz; do
cp *fastq.gz ../../new;
done
The loop for f in ./*/split-adapter-quality-trimmed/*.fastq.gz already puts each filename ending in .fastq.gz into the variable f, so use it directly in the cp inside the loop (cp "$f" destination). If you put an echo "$f" inside the loop, you can see all the files and verify them before the cp.
for f in ./*/split-adapter-quality-trimmed/*.fastq.gz; do
cp "$f" ../../new;
done
Unless you absolutely want to use a for loop, you could perform this with a single find command:
find ./*/split-adapter-quality-trimmed -name "*fastq.gz" -exec cp {} ../../new \;
It will browse the directories matching ./*/split-adapter-quality-trimmed, looking for every file whose name ends in fastq.gz, and then execute the needed cp command for each one (relative to the shell's current directory; the command line ends with a semicolon):
cp <found-path> ../../new
(The wildcard pattern *fastq.gz is quoted to prevent Bash from interpreting it, just in case. The semicolon is escaped for the same reason.)
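If your cp supports -t (GNU coreutils does), you can also batch the copies instead of running one cp per file:

find ./*/split-adapter-quality-trimmed -name "*fastq.gz" -exec cp -t ../../new {} +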
I'm trying to create a script that retrieves files (including subfolders) from CVS and stores them into a temporary directory /tmp/projectdir/ (OK), then removes copies of those files from my project directory /home/projectdir/ (not OK) without touching any other files in the project directory or the folder structure itself.
I've been attempting two methods, but I'm running into problems with both. Here's my script so far:
#!/usr/bin/bash
cd /tmp/
echo "removing /tmp/projectdir/"
rm -rf /tmp/projectdir
# CVS login goes here, code redacted
# export files to /tmp/projectdir/dir_1/file_1 etc
cvs export -kv -r $1 projectdir
# method 1
for file in /tmp/projectdir/*
do
# check for zero-length string
if [ -n "$file" ]; then
echo "removing $file"
rm /home/projectdir/"$file"
fi
done
# method 2
find /tmp/projectdir/ -exec rm -i /home/projectdir/{} \;
Neither method works as intended, because I need some way of stripping /tmp/projectdir/ from the filename (to be replaced with /home/projectdir/), and to prevent either method from executing rm /home/projectdir/dir_1 (i.e. removing the directory rather than a specific file), but I'm not sure how to achieve this.
(In case anybody is wondering, the zero-length string bit was an attempt to avoid rm'ing the directory, before I realised /tmp/projectdir/ would also be a part of the string)
You can cd into the source tree first, so that find produces relative paths that map directly onto /home/projectdir/; restricting the search to -type f skips the directories:
cd /tmp/projectdir/
find . -type f -exec rm -i /home/projectdir/{} \;
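To preview what would be removed before committing to it, you can prefix the rm with echo as a dry run:

cd /tmp/projectdir/
find . -type f -exec echo rm /home/projectdir/{} \;   # prints the rm commands instead of running them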
I have a few files with names in the format ReportsBackup-20140309-04-00, and I would like to move each file into a folder named after its year and month, e.g. the example file into a 201403 folder.
I can already create the directories based on the filenames; I would just like to move the files into their correct folders based on the name.
I use this to create the directories:
old="directory where are the files" &&
year_month=`ls ${old} | cut -c 15-20`&&
for i in ${year_month}; do
if [ ! -d ${old}/$i ]
then
mkdir ${old}/$i
fi
done
You can use find:
find /path/to/files -name "*201403*" -exec mv {} /path/to/destination/ \;
Here’s how I’d do it. It’s a little verbose, but hopefully it’s clear what the program is doing:
#!/bin/bash
SRCDIR=~/tmp
DSTDIR=~/backups
for bkfile in "$SRCDIR"/ReportsBackup*; do
# Get just the filename, and read the year/month variable
filename=$(basename "$bkfile")
yearmonth=${filename:14:6}
# Create the folder for storing this year/month combination. The '-p' flag
# means that:
# 1) We create $DSTDIR if it doesn't already exist (this flag actually
# creates all intermediate directories).
# 2) If the folder already exists, continue silently.
mkdir -p "$DSTDIR/$yearmonth"
# Then we move the report backup to the directory. The '.' at the end of the
# mv command means that we keep the original filename
mv "$bkfile" "$DSTDIR/$yearmonth/."
done
A few changes I’ve made to your original script:
I’m not trying to parse the output of ls. This is generally not a good idea. Parsing ls will make it difficult to get the individual files, which you need for copying them to their new directory.
I’ve simplified your if ... mkdir line: the -p flag is useful for “create this folder if it doesn’t exist, or carry on”.
I’ve slightly changed the slicing command which gets the year/month string from the filename (see the example below).
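For example, with one of the filenames from the question:

filename="ReportsBackup-20140309-04-00"
echo "${filename:14:6}"   # prints 201403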