How can I replace all underscore characters with a whitespace in multiple file names using a Bash script? Using the code below we can replace underscores with dashes, but how can I make it work with whitespace?
for i in *.mp3;
do x=$(echo $i | grep '_' | sed 's/_/\-/g');
if [ -n "$x" ];
then mv $i $x;
fi;
done;
Thank you!
This should do:
for i in *.mp3; do
[[ "$i" = *_* ]] && mv -nv -- "$i" "${i//_/ }"
done
The test [[ "$i" = *_* ]] checks whether the file name contains any underscore; if it does, the file is moved (mv) to "${i//_/ }", which expands to the value of i with every underscore replaced by a space (see shell parameter expansions).
The option -n to mv means no-clobber: it will not overwrite any existing file (quite safe). Optional.
The option -v to mv is for verbose: it will report what it is doing (if you want to see what's happening). Very optional.
The -- tells mv that the options end there and that everything after it is a file name, not an option. This is always good practice: if a file name starts with a -, mv would otherwise try to interpret it as an option and your script would fail. Very good practice.
Another comment: when using globs (i.e., for i in *.mp3), it's always a good idea to set either shopt -s nullglob or shopt -s failglob. The former makes *.mp3 expand to nothing if no files match the pattern (so the loop body is never executed); the latter raises an explicit error instead. Without either option, if no files matching *.mp3 are present, the loop runs once with i holding the verbatim value *.mp3, which can cause problems. (There won't be any problem here thanks to the guard [[ "$i" = *_* ]], but it's a good habit to always use one of them.)
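As a quick illustration with a made-up name, here is how the expansion, the guard, and nullglob behave:
i='my_track_01.mp3'               # hypothetical name, just for the demo
echo "${i//_/ }"                  # prints: my track 01.mp3
[[ "$i" = *_* ]] && echo "contains an underscore, so it would be renamed"
shopt -s nullglob                 # after this, *.mp3 expands to nothing when there are no matches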
Hope this helps!
The reason your script is failing with spaces is that the filename gets treated as multiple arguments when passed to mv. You'll need to quote the filenames so that each filename is treated as a single argument. Update the relevant line in your script with:
mv "$i" "$x"
# where $i is your original filename, and $x is the new name
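Applied to the loop from the question (with the sed substitution switched to a space), a minimal sketch with the quoting in place might look like this:
for i in *.mp3; do
    x=$(printf '%s\n' "$i" | sed 's/_/ /g')   # build the new name
    if [ "$x" != "$i" ]; then                 # rename only if something changed
        mv -- "$i" "$x"
    fi
done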
As an aside, if you have the Perl version of the rename command installed, you can skip the script and achieve the same thing with:
rename 's/_/ /g' *.mp3
Or if you have the more classic (util-linux) rename command, which by default replaces only the first underscore in each name:
rename "_" " " *.mp3
Using tr
tr '_' ' ' <file1 >file2
Note that this form translates a file's contents rather than its name; to rename files, apply tr to each name itself, as in the sketch below.
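For example, a minimal sketch that applies tr to each name inside a rename loop (equivalent to the parameter-expansion approach above):
for i in *_*.mp3; do
    [ -e "$i" ] || continue                 # skip if the glob matched nothing
    new=$(printf '%s' "$i" | tr '_' ' ')    # build the new name from the old one
    mv -n -- "$i" "$new"                    # -n: do not overwrite an existing file
done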
I want to read the full file name, but it always gets split on spaces.
#!/bin/dash
for file in `ls | grep -E '(jpg)$' | sed 's/\.jpg//g'`
do
echo ${file}
done
This works when the file name does not contain a space or other special characters.
But it does not work when the file name is something like: A B C.jpg
This will print
A
B
C
It splits the file name on spaces; how can I avoid this situation?
Unquoted command substitutions are split on IFS (white space by default).
You can use shell glob expansion, and shell suffix removal:
for file in *.jpg; do
[ -f "$file" ] || continue
name=${file%.jpg}
echo "$name"
done
*.[Jj][Pp][Gg] for a case-insensitive match (bash also has shopt -s nocaseglob for case-insensitive globbing).
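For instance, a minimal sketch of the case-insensitive variant (this part assumes bash rather than the question's /bin/dash):
shopt -s nullglob nocaseglob      # nocaseglob: *.jpg also matches .JPG, .Jpg, ...
for file in *.jpg; do
    name=${file%.[Jj][Pp][Gg]}    # strip the suffix whatever its case
    printf '%s\n' "$name"
done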
I have a bash script in my application that configures a few API paths based on the docker environment running the script. The following code works great if the filename is static, however when Angular CLI builds a production version of the application it adds a hash to the filename. How would I tell the bash script to use any filename that fits a pattern? For example how would I find the filename main.28ce25b7ef460b2617ae.bundle.js where the hash could be anything?
My script works with non-prod builds that don't add the hash
#!/bin/bash
sed -i 's|placeholder_base_service_endpoint|'$BASE_SERVICE_ENDPOINT'|g' main.bundle.js
I tried using a wildcard but it doesn't appear to have been able to find the file as the string wasn't replaced.
main.*.bundle.js
On Glob Expansions
If you want a glob to be expanded during an assignment, one way to do so is to expand into an array:
files=( main.*.bundle.js )
As some extra paranoia, you might consider sanity-checking the resulting array:
die() { echo "$*" >&2; exit 1; } # define a "die" function to exit if a test fails
[[ -L ${files[0]} || -e ${files[0]} ]] || die "Glob main.*.bundle.js had no matches"
(( ${#files[@]} == 1 )) || die "More than one match for main.*.bundle.js found"
If instead of asserting that there's only one match you want to put all matches on your sed (or another) command line, you can do that with "${files[@]}".
On sed -i
Note that on some platforms (notably MacOS), sed -i requires an argument giving a file extension to use for backups. To work reliably on such platforms, passing an empty string is appropriate:
sed -i '' \
-e 's|placeholder_base_service_endpoint|'"$BASE_SERVICE_ENDPOINT"'|g' \
"${files[#]}"
Note that after the single-quoted string is ended, a double-quoted string is started before expanding BASE_SERVICE_ENDPOINT; that way we avoid issues if that string can be treated as a glob or split into multiple shell words.
Of course, "${files[@]}" could also just be the literal glob main.*.bundle.js.
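To see why the quoting around BASE_SERVICE_ENDPOINT matters, a tiny demonstration with a hypothetical value containing a space:
BASE_SERVICE_ENDPOINT='https://api.example.test/v1 beta'   # made-up value
sed -e 's|placeholder_base_service_endpoint|'"$BASE_SERVICE_ENDPOINT"'|g' \
    <<<'endpoint: placeholder_base_service_endpoint'
# prints: endpoint: https://api.example.test/v1 beta
# unquoted, the value would be word-split and sed would treat "beta|g" as a file name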
I'm trying to write a bash script that read user's input (some files so user can use TAB completion) and copy them into a specific folder.
#!/bin/bash
read -e files
for file in $files
do
echo $file
cp "$file" folder/"$file"
done
It's ok for: file1 file2 ...
Or with : file* (even if there is a filename with space in the folder).
But it does not work for filenames with spaces escaped with a backslash, like file\ with\ space: the escaped spaces are ignored and the string is still split on every space, even the escaped ones.
I saw information on quoting, printf, IFS, read and while... I think it's very basic bash script but I can't find a good solution. Can you help me?
Clearing IFS prior to your unquoted expansion will allow globbing to proceed while preventing string-splitting:
IFS=$' \t\n' read -e -a globs # read glob expressions into an array
IFS=''
for glob in "${globs[@]}"; do # these aren't filenames; don't claim that they are.
files=( $glob ) # expand the glob into filenames
# detect the case where no files matched by checking whether the first result exists
# these *would* need to be quoted, but [[ ]] turns off string-splitting and globbing
[[ -e $files || -L $files ]] || {
printf 'ERROR: Glob expression %q did not match any files!\n' "$glob" >&2
continue
}
printf '%q\n' "${files[@]}" # print one line per file matching
cp -- "${files[@]}" folder/ # copy those files to the target
done
Note that we're enforcing the default IFS=$' \t\n' during the read operation, which ensures that unquoted whitespace is treated as a separator between array elements at that stage. Later, with files=( $glob ), by contrast, we have IFS='', so whitespace no longer can break individual names apart.
You can read the filenames into an array, then loop over the array elements:
read -e -a files
for file in "${files[@]}"; do
echo "$file"
cp "$file" folder/"$file"
done
Reading into a single string won't work no matter how you quote: the string will either be split up at each space (when unquoted) or not at all (when quoted). See this canonical Q&A for details (your case is the last item in the list).
This prevents globbing, i.e., file* is not expanded. For a solution that takes this into account, see Charles' answer.
Here is a fully functional solution for both plain file names and globs.
It works with the help of xargs (which preserves quoted strings), but you need to write file names that contain spaces inside quotes:
"file with spaces"
To use the script interactively, uncomment the read and comment out the listOfFiles assignment.
I am also taking advantage of some ideas from @CharlesDuffy's post (thanks, Charles).
#!/bin/bash
# read -e listOfFiles
listOfFiles='file1 file* "file with spaces"'
IFS=''
while IFS='' read -r glob; do # read each glob expression, one per line
files=( $glob ) # try to expand the glob into filenames
# If no file match the split glob
# Then assume that the glob is a file and test its existence
[[ -e $files || -L $files ]] || {
files="$glob"
[[ -e $files || -L $files ]] || {
printf 'ERROR: Glob "%q" did not match any file!\n' "$glob" >&2
continue
}
}
printf '%q\n' "${files[@]}" # print one line per file matching
cp -- "${files[@]}" folder/ # copy those files to the target
done < <(xargs -n1 <<<"$listOfFiles")
Note that the answers of both Charles Duffy and user2350426 do not preserve escaped *s; they will expand them, too.
Benjamin's approach, however, won't do globbing at all. But he is mistaken that reading into a single string can't work: you can first put your globs in a string and then load them into an array.
Then it will work as desired:
globs='file1 file\ 2 file-* file\* file\"\"' # or read -re here
# Do splitting and globbing:
shopt -s nullglob
eval "files=( $globs )"
shopt -u nullglob
# Now we can use ${files[@]}:
for file in "${files[@]}"; do
printf "%s\n" "$file"
done
Also note the use of nullglob to ignore non-expandable globs.
You may also want to use failglob or, for more fine-grained control, code like in the aforementioned answers.
Inside functions, you probably want to declare the variables (e.g. with local) so they stay local to the function, as in the sketch below.
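For example, a sketch of the same idea wrapped in a function with local variables (the function name is made up for illustration):
read_globs_into_array() {
    local globs
    local -a files
    read -re globs               # read the glob string (e.g. 'file1 file\ 2 file-*')
    shopt -s nullglob
    eval "files=( $globs )"      # do splitting and globbing, honoring quotes and escapes
    shopt -u nullglob
    printf '%s\n' "${files[@]}"  # one matched name per line
}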
The following sed command from commandline returns what I expect.
$ echo './Adobe ReaderScreenSnapz001.jpg' | sed -e 's/.*\./After-1\./'
After-1.jpg <--- result
However, in the following bash script, sed seems not to act as I expect.
#!/bin/bash
beforeNamePrefix=$1
i=1
while IFS= read -r -u3 -d '' base_name; do
echo $base_name
rename=`(echo ${base_name} | sed -e s/.*\./After-$i./g)`
echo 'Renamed to ' $rename
i=$((i+1))
done 3< <(find . -name "$beforeNamePrefix*" -print0)
Result (with several files with similar names in the same directory):
./Adobe ReaderScreenSnapz001.jpg
Renamed to After-1. <--- file extension is missing.
./Adobe ReaderScreenSnapz002.jpg
Renamed to After-2.
./Adobe ReaderScreenSnapz003.jpg
Renamed to After-3.
./Adobe ReaderScreenSnapz004.jpg
Renamed to After-4.
Where am I wrong? Thank you.
You have omitted the single quotes around the sed program in your script. Without quoting, the shell strips the backslash from .*\., yielding a regular expression with quite a different meaning. You will need double quotes for the $i substitution to work, though: either mix single and double quotes, as in 's/.*\./'"After-$i./", or add enough backslashes to escape the escaped escape sequence (sic).
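Concretely, the relevant line from the script could read (a sketch using the mixed quoting just described):
rename=$(echo "${base_name}" | sed -e 's/.*\./'"After-$i./")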
Just use Parameter Expansion
#!/bin/bash
beforeNamePrefix="$1"
i=1
while IFS= read -r -u3 -d '' base_name; do
echo "$base_name"
rename="After-$((i++)).${base_name##*.}"
echo "Renamed to $rename"
done 3< <(find . -name "$beforeNamePrefix*" -print0)
I also fixed some quoting to prevent unwanted word splitting
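As a quick illustration of the expansion used above, with the sample name from the question:
base_name='./Adobe ReaderScreenSnapz001.jpg'
i=1
echo "${base_name##*.}"                  # prints: jpg (everything up to the last dot removed)
echo "After-$((i++)).${base_name##*.}"   # prints: After-1.jpg, and i becomes 2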
The following code doesn't work because of spaces in the file names. How can I fix it?
IFS = '\n'
for name in `ls `
do
number=`echo "$name" | grep -o "[0-9]\{1,2\}"`
if [[ ! -z "$number" ]]; then
mv "$name" "./$number"
fi
done
Just don't use command substitution: use for name in *.
Replace
for name in `ls`
with:
ls | while read name
Notice: bash variable scoping is awkward here: because the pipe runs the while loop in a subshell, if you change a variable inside the loop it won't take effect outside the loop (in my version it won't; in your version it will). In this example, it doesn't matter.
Notice 2: This works for file names with spaces, but fails for some other strange but valid file names. See Charles Duffy's comment below.
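Applied to the question's loop, a sketch of this approach (using IFS= and -r so leading blanks and backslashes survive; names containing newlines will still misbehave, as noted above):
ls | while IFS= read -r name; do
    number=$(printf '%s\n' "$name" | grep -o "[0-9]\{1,2\}" | head -n1)  # first 1-2 digit group, if any
    if [[ -n "$number" ]]; then
        mv -- "$name" "./$number"
    fi
done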
Looks like two potential issues:
First, the IFS assignment must not have spaces in it, and the newline needs ANSI-C quoting. Instead of
IFS = '\n' it should be IFS=$'\n'
Secondly, for name in `ls` will cause issues with filenames that contain spaces or newlines. If you just wish to handle filenames with spaces, then do something like this:
for name in *
I don't understand the significance of the line
number=`echo "$name" | grep -o "[0-9]\{1,2\}"`
For a filename containing spaces, this gives you the numbers found in the name, each on its own line. Maybe that's what you want.
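For the filenames-with-spaces case, a sketch of the loop using a glob instead of ls (no IFS tweaking is needed, since glob results are never word-split):
for name in *; do
    number=$(grep -o "[0-9]\{1,2\}" <<<"$name" | head -n1)   # first 1-2 digit group, if any
    if [[ -n "$number" ]]; then
        mv -- "$name" "./$number"
    fi
done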
In my case, I had to switch to using find.
find /foo/path/ -maxdepth 1 -type f -name "*.txt" | while read name
do
#do your stuff with $name
done
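A more robust variant of the same idea (a sketch) uses NUL separators so that even names containing newlines survive, and IFS=/-r so that leading blanks and backslashes do too:
find /foo/path/ -maxdepth 1 -type f -name "*.txt" -print0 |
while IFS= read -r -d '' name; do
    printf '%s\n' "$name"   # do your stuff with "$name"
done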