I'm trying to create a script that retrieves files (including subfolders) from CVS and stores them into a temporary directory /tmp/projectdir/ (OK), then removes copies of those files from my project directory /home/projectdir/ (not OK) without touching any other files in the project directory or the folder structure itself.
I've been attempting two methods, but I'm running into problems with both. Here's my script so far:
#!/usr/bin/bash
cd /tmp/
echo "removing /tmp/projectdir/"
rm -rf /tmp/projectdir
# CVS login goes here, code redacted
# export files to /tmp/projectdir/dir_1/file_1 etc
cvs export -kv -r $1 projectdir
# method 1
for file in /tmp/projectdir/*
do
# check for zero-length string
if [ -n "$file" ]; then
echo "removing $file"
rm /home/projectdir/"$file"
fi
done
# method 2
find /tmp/projectdir/ -exec rm -i /home/projectdir/{} \;
Neither method works as intended, because I need some way of stripping /tmp/projectdir/ from the filename (to be replaced with /home/projectdir/) and to prevent them from executing rm /home/projectdir/dir_1 (i.e. the directory and not a specific file), but I'm not sure how to achieve this.
(In case anybody is wondering, the zero-length string bit was an attempt to avoid rm'ing the directory, before I realised /tmp/projectdir/ would also be a part of the string)
You can use:
cd /tmp/projectdir/
find . -type f -exec rm -i /home/projectdir/{} \;
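If you'd rather keep absolute paths instead of changing directory, a sketch using bash prefix stripping (the ${var#prefix} expansion) to map each temp path onto the project directory:
find /tmp/projectdir/ -type f | while read -r file; do
    # strip the leading /tmp/projectdir/ so only the relative path remains
    rm -i /home/projectdir/"${file#/tmp/projectdir/}"
done
The -type f test ensures only files, never directories, are passed to rm.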
I have a bash script I'm trying to write
I have 2 base directories:
./tmp/serve/
./src/
I want to go through all the directories in ./tmp and copy the *.html files into the same folder path in ./src
i.e. if I have an HTML file in ./tmp/serve/app/components/help/help.html -->
copy it to ./src/app/components/help/, and recursively do this for all subdirectories in ./tmp/.
NOTE: the folder structures should already exist, so the files just need to be copied. If a folder doesn't exist, then hopefully the script could create it for me (not what I want), but with GIT I can track these folders and manually handle those loose html files.
I got as far as
echo $(find . -name "*.html")\n
But I'm not sure how to actually extract the file path (with pwd) and do what I need to; maybe it's not a one-liner and needs to be done with some vars.
something like
for i in $(find ./tmp/ -name "*.html")
do
cp -r $i /src/app/components/help/
done
Going so far as to create the directories would take some more time for me.
I'll try to do it on my own and see if I come up with something,
but for argument's sake, if you do run pwd and get a response, the pseudocode for that would be:
pwd
get response
if that directory does not exist in src create that directory
copy all the original directories contents into the new folder at /src/$newfolder
(possibly running two for loops, one to check the directory tree, and then one to go through each original directory, copying all the html files)
You can use process substitution to loop over the output of your find command, create the destination directory(ies), and then copy the file(s):
#!/bin/bash
# accept first parameters to script as src_dir and dest values or
# simply use default values if no parameter(s) passed
src_dir=${1:-/tmp/serve}
dest=${2:-src}
while read -r orig_path ; do
# To replace the first occurrence of a pattern with a given string,
# use ${parameter/pattern/string}
dest_path="${orig_path/tmp\/serve/${dest}}"
# Use dirname to remove the filename from the destination path
# and create the destination directory.
dest_dir=$(dirname "${dest_path}")
mkdir -p "${dest_dir}"
cp "${orig_path}" "${dest_path}"
done < <(find "${src_dir}" -name '*.html')
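Assuming the script is saved as copy_html.sh (a hypothetical name) and marked executable, it could be run with the defaults or with explicit directories:
./copy_html.sh                 # copy from /tmp/serve into src
./copy_html.sh /tmp/serve src  # same thing, spelled out explicitly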
This script copies .html files from the src directory to the des directory (creating subdirectories if they do not exist).
Find the files, then remove the src directory name and copy them into the destination directory.
#!/bin/bash
for i in $(find src/ -name "*.html")
do
    # strip the leading src/ so the path is relative to the destination
    file="${i#src/}"
    # recreate the subdirectory under des, then copy the file into it
    mkdir -p "des/$(dirname "$file")"
    cp "$i" "des/$file"
done
Not sure if you must use bash constructs or not, but here is a GNU tar solution (if you use GNU tar), which IMHO is the best way to handle this situation because all the metadata for the files (permissions, etc.) are preserved:
find ./tmp/serve -name '*.html' -type f -print0 | tar --null -T - -c | tar -x -v -C ./src --strip-components=3
This finds all the .html files (-type f) in the ./tmp/serve directory and prints them nul-terminated (-print0), then sends these filenames via stdin to tar as nul-terminated literals (--null) for inclusion (-T -), creating (-c) an archive which is then sent to another tar instance which extracts (-x) the archive printing its contents along the way (optional: -v), changing directory to the destination (-C ./src) before commencing and stripping (--strip-components=3) the ./tmp/serve/ prefix from the files. (You could also cd ./tmp/serve beforehand, using find . instead, and change -C to ../../src.)
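For example, the cd variant mentioned above might look like this (a sketch; the subshell keeps your working directory unchanged, and no --strip-components is needed because the stored names are already relative):
( cd ./tmp/serve && find . -name '*.html' -type f -print0 | tar --null -T - -c ) | tar -x -v -C ./src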
My download program automatically unrars rar archives, which is all well and good as Sonarr and Radarr need that original video file to import. But now my download HDD fills up with all these video files I no longer need.
I've tried playing around with modifying existing scripts I have, but every step seems to take me further from the goal.
Here's what I have so far (it isn't working, and I clearly don't know what I'm doing). My main problem is that I can't get it to find the files correctly yet: this script jumps right to "no files found", so I'm doing the search wrong at the very least. Or I might need to completely rewrite it from scratch using a different method I'm not aware of.
#!/bin/bash
# Find video files and if it came from a rar, remove it.
# If no directory is given, work in local dir
if [ "$1" = "" ]; then
DIR="."
else
DIR="$1"
fi
# Find all the MKV files in this dir and its subdirs
find "$DIR" -type f -name '*.mkv' | while read filename
do
# If video file and rar file exists, delete mkv.
for f in ...
do
if [[ -f "$DIR/*.mkv" ]] && [[ -f "$DIR/*.rar" ]]
then
# rm $filename
printf "[Dry run delete]: $filename\n"
else
printf "No files found\n"
exit 1
fi
done
done
Example of the directory structure before and after. Note the file names are often different from the extracted file, and I want to leave other folders that don't have rars in them alone.
Before:
/folder/moviename/Movie.that.came.from.rar.2021.dvdrip.mkv
/folder/moviename/movie.rar
/folder/moviename/movie.r00
/folder/moviename/movie.r01
/folder/moviename2/Movie.that.lives.alone.2021.dvdrip.mkv
/folder/moviename2/Movie.2021.dvdrip.nfo
After
# (deleted the mkv only from the first folder)
/folder/moviename/movie.rar
/folder/moviename/movie.r00
/folder/moviename/movie.r01
# (this mkv survives)
/folder/moviename2/Movie.that.lives.alone.2021.dvdrip.mkv
/folder/moviename2/Movie.2021.dvdrip.nfo
TL;DR: I would like a script to look recursively in my download drive for video files and rar files, and if it sees both in the same folder, delete the video file.
With GNU find, you can condense this to one command:
find "${1:-.}" -type f -name '*.rar' -execdir sh -c 'echo rm *.mkv' \;
${1:-.} says "use $1, or . if $1 is undefined or empty".
For each .rar file found, this starts a new shell in the directory of the file found (that's what -execdir sh -c '...' does) and runs echo rm *.mkv.
If the list of files to delete looks correct, you can actually delete them by dropping the echo:
find "${1:-.}" -type f -name '*.rar' -execdir sh -c 'rm *.mkv' \;
Two remarks, though:
-execdir rm *.mkv \; would be shorter, but then the glob might be expanded prematurely in case there are .mkv files in the current directory
if a directory contains a .rar file, but no .mkv, this will try to delete a file called literally *.mkv and cause an error message
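If that error message bothers you, a sketch of a guard that only removes matches when the glob actually expanded to existing files:
find "${1:-.}" -type f -name '*.rar' -execdir sh -c 'for f in *.mkv; do [ -e "$f" ] && rm -- "$f"; done' \;
The [ -e "$f" ] test skips the literal *.mkv string that the shell leaves behind when nothing matches.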
I have a few files with names in the format ReportsBackup-20140309-04-00 and I would like to move files that share the same year/month pattern into the matching folder, in this example 201403.
I can already create the folders based on the filenames; I would just like to move the files into their correct folders, based on the name.
I use this to create the directories
old="directory where are the files" &&
year_month=`ls ${old} | cut -c 15-20`&&
for i in ${year_month}; do
if [ ! -d ${old}/$i ]
then
mkdir ${old}/$i
fi
done
You can use find:
find /path/to/files -name "*201403*" -exec mv {} /path/to/destination/ \;
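If you'd rather derive the destination folder from each file name instead of hard-coding 201403, a sketch (assuming the fixed ReportsBackup-YYYYMMDD-... layout from the question):
for f in /path/to/files/ReportsBackup-*; do
    ym=$(basename "$f" | cut -c 15-20)   # e.g. 201403
    mkdir -p "/path/to/destination/$ym"
    mv "$f" "/path/to/destination/$ym/"
done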
Here’s how I’d do it. It’s a little verbose, but hopefully it’s clear what the program is doing:
#!/bin/bash
SRCDIR=~/tmp
DSTDIR=~/backups
for bkfile in "$SRCDIR"/ReportsBackup*; do
# Get just the filename, and read the year/month variable
filename=$(basename "$bkfile")
yearmonth=${filename:14:6}
# Create the folder for storing this year/month combination. The '-p' flag
# means that:
# 1) We create $DSTDIR if it doesn't already exist (this flag actually
# creates all intermediate directories).
# 2) If the folder already exists, continue silently.
mkdir -p "$DSTDIR/$yearmonth"
# Then we move the report backup to the directory. The '.' at the end of the
# mv command means that we keep the original filename
mv "$bkfile" "$DSTDIR/$yearmonth/."
done
A few changes I’ve made to your original script:
I’m not trying to parse the output of ls. This is generally not a good idea. Parsing ls will make it difficult to get the individual files, which you need for copying them to their new directory.
I’ve simplified your if ... mkdir line: the -p flag is useful for “create this folder if it doesn’t exist, or carry on”.
I’ve slightly changed the slicing command which gets the year/month string from the filename.
I'm trying to write a simple script that takes the file names from one folder, searches for them in another folder, and removes them there if it finds them.
Got two folder like
/home/install/lib
/home/install/bin
/home/install/include
and
/usr/local/lib
/usr/local/bin
/usr/local/include
I want to remove all files from /usr/local/{lib,bin,include} that are also present in /home/install/{lib,bin,include}. For example, having
/home/install/lib/test1
/usr/local/lib/test1
the script should remove /usr/local/lib/test1. I tried to do it from each separate directory:
/home/install/lib:ls -f -exec rm /usr/local/lib/{} \;
but nothing happened. Can you help me with this simple script?
Create script rmcomm
#!/bin/bash
a="/home/install/$1"
b="/usr/local/$1"
# comm -12 prints only the lines common to both inputs; ls output is
# already sorted, which comm requires
comm -12 <(ls "$a") <(ls "$b") | while read -r file; do
rm "$b/$file"
done
Then call this script for every pair:
for dir in lib bin include; do rmcomm "$dir"; done
Here's something simple. Remove the echo from the line containing rm to run it after you've ensured it's doing what you want:
#!/bin/bash
dirs[0]=lib
dirs[1]=bin
dirs[2]=include
pushd /home/install
for dir in "${dirs[@]}"
do
for file in $(find "$dir" -type f)
do
# Remove 'echo' below once you're satisfied the correct files
# are being removed
echo rm /usr/local/$file
done
done
popd
I am writing the following script to copy *.nzb files to a folder to queue them for Download.
I wrote the following script
#!/bin/bash
#This script copies NZB files from Downloads folder to HellaNZB queue folder.
${DOWN}="/home/user/Downloads/"
${QUEUE}="/home/user/.hellanzb/nzb/daemon.queue/"
for a in $(find ${DOWN} -name *.nzb)
do
cp ${a} ${QUEUE}
rm *.nzb
done
it gives me the following error saying:
HellaNZB.sh: line 5: =/home/user/Downloads/: No such file or directory
HellaNZB.sh: line 6: =/home/user/.hellanzb/nzb/daemon.queue/: No such file or directory
Thing is, those directories exist and I do have the rights to access them.
Any help would be nice.
Please and thank you.
Variable names on the left side of an assignment should be bare.
foo="something"
echo "$foo"
Here are some more improvements to your script:
#!/bin/bash
#This script copies NZB files from Downloads folder to HellaNZB queue folder.
down="/home/myusuf3/Downloads/"
queue="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
find "${down}" -name "*.nzb" | while read -r file
do
mv "${file}" "${queue}"
done
Using while instead of for and quoting variables that contain filenames protects against filenames that contain spaces from being interpreted as more than one filename. Removing the rm keeps it from repeatedly producing errors and failing to copy any but the first file. The file glob for -name needs to be quoted. Habitually using lowercase variable names reduces the chances of name collisions with shell variables.
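To see why the while read loop matters, consider a hypothetical file name containing a space:
touch "/home/myusuf3/Downloads/two words.nzb"
A for loop over $(find ...) would split this into two and words.nzb and fail on both, while the read -r loop receives the whole line as a single file name.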
If all your files are in one directory (and not in multiple subdirectories) your whole script could be reduced to the following, by the way:
mv /home/myusuf3/Downloads/*.nzb /home/myusuf3/.hellanzb/nzb/daemon.queue/
If you do have files in multiple subdirectories:
find /home/myusuf3/Downloads/ -name "*.nzb" -exec mv -t /home/myusuf3/.hellanzb/nzb/daemon.queue/ {} +
(With -exec ... +, the {} must come last, so this uses GNU mv's -t option to name the destination directory first.)
As you can see, there's no need for a loop.
The correct syntax is:
DOWN="/home/myusuf3/Downloads/"
QUEUE="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
for a in $(find ${DOWN} -name *.nzb)
# escape the * or it will be expanded in the current directory
# let's just hope no file has blanks in its name
do
cp ${a} ${QUEUE} # ok, although I'd normally add a -p
rm *.nzb # again, this is expanded in the current directory
# when you fix that, it will remove ${a}s before they are copied
done
Why don't you just use rm ${a}?
Why use a combination of cp and rm anyway, instead of mv?
Do you realize all files will end up in the same directory, and files with the same name from different directories will overwrite each other?
What if the cp fails? You'll lose your file.
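Putting those points together, a sketch of a corrected script (it moves the files instead of copying and deleting, and quotes everything):
#!/bin/bash
DOWN="/home/myusuf3/Downloads/"
QUEUE="/home/myusuf3/.hellanzb/nzb/daemon.queue/"
# move every .nzb file found under DOWN into the queue directory
find "$DOWN" -name '*.nzb' -exec mv {} "$QUEUE" \;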