I have a command that seems to successfully replace double quotes (") with single quotes (') for filenames within the current directory. However, I need to do this recursively (i.e., all files within all subdirectories). Here is the code I am working with:
for f in *; do
    if [[ -f "$f" ]]; then
        new_name=$(echo "$f" | sed "s/\"/'/g")
        mv "$f" "$new_name"
    fi
done
Any advice would be greatly appreciated.
Best practice (and the much more efficient solution when you're dealing with a large and deeply-nested directory tree) is to use find for this, not a for loop in bash at all. The UsingFind wiki page goes into the tradeoffs between -print0, -exec and -execdir; here, I'm using the last of these:
#!/usr/bin/env bash
# define the function we're going to export for access from a subprocess
do_replace() {
    local name new_name old_quotes='"' new_quotes="'"
    for name do # loops by default over "$@", the argument list given to the function
        new_name=${name//$old_quotes/$new_quotes}
        mv -- "$name" "$new_name"
    done
}
export -f do_replace # actually export that function
# tell find to start a new copy of bash that can run the function we exported
# in each directory that contains one or more file or directory names with quotes.
# Using `-execdir ... {} ';'` to work around a macOS bug
# ...use `-execdir ... {} +` instead with GNU find for better performance.
find . -depth -name '*"*' -execdir bash -c 'do_replace "$@"' _ {} ';'
That way there's a new copy of bash for each directory, so you aren't operating on names with /s in them; this avoids some security holes that can happen if you're renaming files in directories a different user can write to.
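With GNU find, the same rename can batch names for better throughput, as the comment above notes (a sketch using the + terminator; each bash invocation then handles many names at once rather than one per file):
find . -depth -name '*"*' -execdir bash -c 'do_replace "$@"' _ {} +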
That said, the easy thing (in bash 4.0 or later) is to enable globstar, after which ** will recurse:
#!/usr/bin/env bash
# WARNING: This calculates the whole glob before it runs any renames; this can be very
# inefficient in a large directory tree.
case $BASH_VERSION in ''|[123].*) echo "ERROR: Bash 4.0+ required" >&2; exit 1;; esac
shopt -s globstar
for f in **; do
    : "put your loop's content here as usual"
done
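For this question's rename, the loop body might be filled in like this (a hedged sketch: it renames plain files only and rewrites just the basename, since renaming a directory mid-walk would invalidate later glob results):
#!/usr/bin/env bash
shopt -s globstar
old_quotes='"' new_quotes="'"
for f in **; do
    [[ -f $f ]] || continue                     # plain files only in this sketch
    base=${f##*/}                               # rewrite only the basename...
    [[ $base == *"$old_quotes"* ]] || continue  # ...and only if it contains a quote
    dir=${f%"$base"}                            # ...so the parent path stays valid
    mv -- "$f" "${dir}${base//$old_quotes/$new_quotes}"
done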
Related
I'm trying to create a bash script that finds all the files in my dotfiles directory, and symlinks them into ~. If the directory a file is in does not exist, it shall be created.
The script as it is now is just trying to find files and create the two paths; when that works, "real" and "symlink" will be used with ln -s. However, when trying to save the strings in "real" and "symlink", all I get is line 12: ./.zshrc: Permission denied
What am I doing wrong?
#!/bin/bash
dotfiles=()
readarray -d '' dotfiles < <(find . -type f ! -path '*/.git/*' ! -name "README.md" -type f -print0)
for file in ${dotfiles[@]}; do
dir=$(dirname $file | sed s#.#$HOME#1)
[ ! -d $dir ] && echo "directory $dir not exist!" && mkdir -p $dir
# Create path strings
real=$("$file" | sed s#.#$PWD#1)
symlink=$("$file" | sed s#.#$HOME#1)
echo "Real path: $cur"
echo "Symbolic link path: $new"
done
exit
P.S. I'm a bash noob and am mostly doing this script as a learning experience.
Here is a refactoring which attempts to fix the obvious problems. See comments with # ... for details. Probably also check with http://shellcheck.net/ before you ask for human assistance.
The immediate reason for the error message is that $("$file" ...) does indeed attempt to run $file as a command and capture its output. You probably meant $(echo "$file" ...) but that can often be avoided.
#!/bin/bash
dotfiles=()
readarray -d '' dotfiles < <(find . -type f ! -path '*/.git/*' ! -name "README.md" -type f -print0)
# ... Fix broken quoting throughout
for file in "${dotfiles[#]}"; do
dir=$(dirname "$file" | sed "s#^\.#$HOME#") # ... 1 is not required or useful
# ... Use standard error for diagnostics
[ ! -d "$dir" ] && echo "$0: directory $dir does not exist!" >&2 && mkdir -p "$dir"
# Create path strings
# ... Use parameter substitution
real=$PWD/${file#./}
symlink=$HOME/${file#./}
# ... You mean $real, not $cur?
echo "Real path: $real"
# ... You mean $symlink, not $new?
echo "Symbolic link path: $symlink"
done
# ... No need to explicitly exit; trust me, you will anyway
#exit
These are just syntactic fixes; I would probably avoid storing the results from find in an array and just loop over them directly, and I haven't checked if the logic actually does what (we might guess) you are perhaps trying to do.
See also Looping over pairs of values in bash for a similar topic, and When to wrap quotes around a shell variable?.
The script still has a pesky assumption that it will be run in the dotfiles directory. Probably a better design would be to explicitly run find in that directory, and refactor accordingly.
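A sketch of that refactoring (hedged: the default dotfiles location is an assumption, and ln -s will fail if the target already exists):
#!/bin/bash
dotdir=${1:-"$HOME/dotfiles"}   # assumed location; pass the real one as $1
find "$dotdir" -type f ! -path '*/.git/*' ! -name README.md -print0 |
while IFS= read -r -d '' file; do
    rel=${file#"$dotdir"/}                # path relative to the dotfiles dir
    mkdir -p "$HOME/$(dirname "$rel")"    # create the target directory if needed
    ln -s "$file" "$HOME/$rel"            # fails if the target already exists
done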
sed 's/x/y/' will replace the first occurrence of x with y on each line by definition and by design; there is no reason or need to explicitly add 1 after the final delimiter. (Some sed variants allow a number there to select a different match than the first, but this is not portable; and of course specifying the first match when that's already the default is simply silly. There is a g flag to replace all matches instead of just the first which many beginners want to use everywhere, but of course that's not necessary or useful here either.)
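A quick demonstration of the difference:
$ echo 'a.b.c' | sed 's/\./_/'    # first match on the line
a_b.c
$ echo 'a.b.c' | sed 's/\./_/g'   # every match on the line
a_b_c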
I have hundreds of files that I need to recursively replace as the files are currently stored like so:
/2019/01/
    file1.pdf
    file2.pdf
/2019/02/
    file3.pdf
    file4.pdf
etc
I then have all of the updated files in another directory like so:
/new-files
    file1.pdf
    file2.pdf
    file3.pdf
    file4.pdf
Could someone please tell me the best way of doing this with a bash script? I'd basically like to read the new-files directory and then replace any matching file names in the other folders.
Thanks in advance for any help!
Assuming that the 'new-files' directory and all the directory trees containing PDF files are under the current directory, try this Shellcheck-clean Bash code:
#! /bin/bash -p
find . -path ./new-files -prune -o -type f -name '*.pdf' -print0 \
| while IFS= read -r -d '' pdfpath; do
    pdfname=${pdfpath##*/}
    new_pdfpath=new-files/$pdfname
    if [[ -f $new_pdfpath ]]; then
        printf "Replace '%s' with '%s'\n" "$pdfpath" "$new_pdfpath" >&2
        # cp -- "$new_pdfpath" "$pdfpath"
    fi
done
The -path ./new-files -prune in the find command stops the 'new-files' directory from being searched.
The -o in the find command causes the next test and actions to be tried after checking for 'new-files'.
See BashFAQ/001 (How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?) for an explanation of the use of the -print0 option to find and the while IFS= read -r -d '' .... In short, the code can handle arbitrary file paths, including ones with whitespace and newline characters in them.
See Removing part of a string (BashFAQ/100 (How do I do string manipulation in bash?)) for an explanation of ${pdfpath##*/}.
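For example:
$ pdfpath=2019/01/file1.pdf
$ echo "${pdfpath##*/}"    # strip everything up to and including the last /
file1.pdf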
It's not clear to me if you want to copy or move the new file to replace the old file, or do something else. Run the code as it is to check if it is identifying the correct replacements to be done. If you are happy with it, uncomment the cp line, and modify it to do something different if that is what you want.
The -- in the cp command protects against arguments beginning with dash characters being interpreted as options. It's unnecessary in this case, but I always use it when arguments begin with variable (or other) expansions so the code will remain safe if it is used in other contexts.
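A quick illustration of the class of problem -- guards against (the filename here is contrived):
$ touch -- '-n.pdf'
$ cp '-n.pdf' copy.pdf     # cp tries to parse -n.pdf as options and fails
$ cp -- '-n.pdf' copy.pdf  # works: -- ends option parsing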
I think this calls for a bash array.
#!/usr/bin/env bash
# Make an associative array
declare -A files=()
# Populate array as $files[file.pdf]="/path/to/file.pdf"
for f in 20*/*/*.pdf; do
    files[${f##*/}]="$f"
done
# Step through files and replace
for f in new-files/*.pdf; do
    if [[ ! -e "${files[${f##*/}]}" ]]; then
        echo "ERROR: missing $f" >&2
        continue
    fi
    mv -v "$f" "${files[${f##*/}]}"
done
Note that associative arrays require bash version 4 or above. If you're using the native bash on a Mac, this won't work as-is.
Note also that if you remove continue in the final loop, the mv command will NOT safely handle files that have no counterpart in the dated directories: the array lookup expands to nothing, so mv has no target.
If you wanted further protection you might use test -nt or friends to confirm that an update is happening in the right direction.
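For example, the mv in the final loop could be guarded like this (a sketch, assuming "replace only when the new file has a newer mtime" is the direction you want):
if [[ "$f" -nt "${files[${f##*/}]}" ]]; then
    mv -v "$f" "${files[${f##*/}]}"
else
    echo "SKIP: $f is not newer than its target" >&2
fi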
#!/bin/bash
for filenames in $( ls $1 )
do
    echo $filenames | grep "\.old$"
    if [ ! $filenames = 0 ]
    then
        $( mv "$1/$filenames" "$1/$filenames.old" )
    fi
done
So I think most of the script works. It is intended to take the output of ls for a directory given as the first parameter, and search for any files with .old at the end. Any files that do not contain .old will then be renamed.
The script successfully renames the files, but it will add .old to a file already containing the extension. I assume the variable in the if test is wrong, but I cannot figure out which variable to use in this case.
The answer is in the key, but if anyone needs to do this, here is an even easier way:
#!/bin/bash
for filenames in $( ls $1 | grep -v "\.old$" )
do
    mv "$1/$filenames" "$1/$filenames.old"
done
Use `find` for this
find /directory/here -type f ! -iname "*.old" -exec mv {} {}.old \;
Problems with the original approach
for filenames in $( ls $1 ): never parse ls output. See the ParsingLs page on the Wooledge wiki for why.
Variables are not double quoted, say in if [ ! $filenames = 0 ]. This results in word-splitting. Use "$filenames" unless you expect word splitting.
So the final script would be
#!/bin/bash
if [ -d "$1" ]
then
find "$1" -type f ! -iname "*.old" -exec mv {} {}.old \;
# use -maxdepth 1 with find if you don't wish to recursively check subdirectories
else
echo "Directory : $1 doesn't exist !"
fi
Usage
./script '/path/to/directory'
Don't use ls in scripts.
#!/bin/bash
for filename in "$1"/*
do
    case $filename in *.old) continue;; esac
    mv "$filename" "$filename.old"
done
I prefer case over if because it supports wildcard matching naturally and portably. (You could run this with /bin/sh just as well.) If you wanted to use if instead, that'd be
if echo "$filename" | grep -q '\.old$'; then
or more idiomatically, but recent shells only,
if [[ "$filename" == *.old ]]; then
You want to avoid calling additional utilities if simple shell builtins will do. Why? Each additional utility you call (grep, etc.) spawns and runs as a separate process of its own. (If you spawn an extra process for every iteration of your loop, things will really slow down.) If the shell doesn't provide a feature, then sure... calling a utility is the right thing to do.
As mentioned above, shell globbing along with parameter expansion with substring removal provides a simple test for determining if a file has an .old extension. All you need is:
for i in "$1"/*; do
    [ "${i##*.}" = "old" ] || mv "$i" "${i}.old"
done
(note: this will skip a single file named just 'old' rather than adding the .old extension to it, but that can be handled separately if needed -- unlikely. Additionally, the solution with find is a fine approach as well)
I solved the problem, as I was misled by my instructor!
$? is the variable which holds the exit status of the most recently executed foreground pipeline (which here would be the grep). The new code is unedited except for
if [ ! $? = 0 ]
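For anyone following along, a quick demonstration of $? tracking the last command's exit status:
$ echo foo.old | grep -q "\.old$"; echo $?
0
$ echo foo.txt | grep -q "\.old$"; echo $?
1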
Hello, I have the following problem: this code runs in a shell script without syntax errors on my PC, but on another server I get a syntax error: "(" unexpected
export data=( $(find "~/" -iname "*.png") )
for i in ${data[@]}
do
    mogrify -crop 1024x794 ${i}
    rm ${i%.png}-1.png
    mv ${i%.png}-0.png ${i}
done
Be sure you are using Bash to execute your script. Maybe add a shebang, #! /usr/bin/bash, as the first line of your script. You could also check this with an echo $SHELL command in your script. The Bash version might also be different; check with bash --version on the command line.
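A minimal sketch of such a check near the top of the script (note that $SHELL reports your login shell, not necessarily the interpreter running the script, so BASH_VERSION is the more reliable probe):
echo "login shell: $SHELL"
echo "bash version: ${BASH_VERSION:-not running under bash}"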
The accepted answer is not compliant with best practices -- it'll fail badly with filenames with spaces or newlines in their names. To do it right, if you have a find with -print0:
#!/bin/bash
data=( )
while IFS= read -r -d '' filename; do
    data+=( "$filename" )
done < <(find ~ -iname '*.png' -print0)
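The collected names can then be processed with the question's loop, properly quoted (a sketch reusing the mogrify/rm/mv steps from the question):
for filename in "${data[@]}"; do
    mogrify -crop 1024x794 "$filename"
    rm "${filename%.png}-1.png"
    mv "${filename%.png}-0.png" "$filename"
done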
By the way, you don't really need arrays for this at all -- or even find:
#!/bin/sh
# This version DOES NOT need bash
# function to run for all PNGs in a single directory
translate_all() {
    for i in "${1:-$HOME}"/*.png; do
        mogrify -crop 1024x794 "$i"
        rm "${i%.png}-1.png"
        mv "${i%.png}-0.png" "$i"
    done
}

# function to run for all PNGs in a single directory and children thereof
translate_all_recursive() {
    dir=${1:-$HOME}; dir=${dir%/}
    translate_all "$dir"
    for d in "$dir"/*/; do
        [ -d "$d" ] || continue  # skip the unexpanded pattern when there are no subdirectories
        translate_all_recursive "$d"
    done
}
# actually invoke the latter
translate_all_recursive
References:
BashFAQ #001 ("How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?", showing the correct practice; search for -print0).
UsingFind (likewise).
BashPitfalls #1
Don't Read Lines With For (from the Wooledge wiki)
Part of my Bash script's intended function is to accept a directory name and then iterate through every file.
Here is part of my code:
#! /bin/bash
# sameln --- remove duplicate copies of files in specified directory
D=$1
cd $D #go to directory specified as default input
fileNum=0 #save file numbers
DIR=".*|*"
for f in $DIR #for every file in the directory
do
    files[$fileNum]=$f #save that file into the array
    fileNum=$((fileNum+1)) #increment the fileNum
    echo aFile
done
The echo statement is for testing purposes. I passed as an argument the name of a directory with four regular files, and I expected my output to look like:
aFile
aFile
aFile
aFile
but the echo statement only shows up once.
A single operation
Use find for this, it's perfect for it.
find <dirname> -maxdepth 1 -type f -exec echo "{}" \;
The flags explained: -maxdepth defines how deep in the hierarchy you want to look (dirs in dirs in dirs), -type f selects files, as opposed to -type d for dirs. And -exec allows you to process the found file/dir, which can be accessed through {}. You can alternatively pass it to a bash function to perform more tasks.
This simple bash script takes a dir as argument and lists all its files:
#!/bin/bash
find "$1" -maxdepth 1 -type f -exec echo "{}" \;
Note that the last line is effectively identical to find "$1" -maxdepth 1 -type f -print.
Performing multiple tasks
Using find one can also perform multiple tasks by either piping to xargs or while read, but I prefer to use a function. An example:
#!/bin/bash
function dostuff {
    # echo filename
    echo "filename: $1"
    # remove extension from file
    mv "$1" "${1%.*}"
    # get containing dir of file
    dir="${1%/*}"
    # get filename without containing dirs
    file="${1##*/}"
    # do more stuff like echoing results
    echo "containing dir = $dir and file was called $file"
}; export -f dostuff
# export the function so you can call it in a subshell (important!!!)
find . -maxdepth 1 -type f -exec bash -c 'dostuff "$1"' _ {} \;
Note that the function needs to be exported, as you can see. This is so you can call it in a subshell, which is opened by executing bash -c 'dostuff ...' (the filename is passed as an argument rather than spliced into the command string, which keeps odd filenames safe). To test it out, I suggest you comment out the mv command in dostuff, otherwise you will remove all your extensions haha.
Also note that this is safe for weird characters like spaces in filenames so no worries there.
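For comparison, the same traversal using the while read approach mentioned earlier (a sketch; no export is needed here because the function runs in a subshell of the same script):
find . -maxdepth 1 -type f -print0 |
while IFS= read -r -d '' f; do
    dostuff "$f"
done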
Closing note
If you decide to go with the find command, which is a great choice, I advise you to read up on it because it is a very powerful tool. A simple man find will teach you a lot, and you will learn a lot of useful options to find. You can, for instance, make find quit once it has found a result; this can be handy to rapidly check whether dirs contain videos or not, for example. It's truly an amazing tool that can be used on various occasions, and often you'll be done with a one-liner (kinda like awk).
You can directly read the files into the array, then iterate through them:
#! /bin/bash
cd "$1" || exit    # quote the argument; bail out if the directory is missing
files=(*)
for f in "${files[@]}"
do
    echo "$f"
done
If you are iterating only files below a single directory, you are better off using simple filename/path expansion to avoid certain uncommon filename issues. The following will iterate through all files in a given directory passed as the first argument (default ./):
#!/bin/bash
srchdir="${1:-.}"
for i in "$srchdir"/*; do
printf " %s\n" "$i"
done
If you must iterate below an entire subtree that includes numerous branches, then find will likely be your only choice. However, be aware that using find or ls to populate a for loop brings with it the potential for problems with embedded characters such as a \n within a filename, etc. See Why for i in $(find . -type f) is wrong even though unavoidable at times.