bash rename files issue?

I know nothing about Linux commands or bash scripts, so please help me.
I have a lot of files in different directories and I want to rename all those files from "name" to "name.xml" using a bash script. Is it possible to do that? I only find useless code on the internet, like this:
shopt -s globstar # enable ** globstar/recursivity
for i in **/*.txt; do
echo "$i" "${i/%.txt}.xml";
done
It does not even work.

The prename utility comes in handy for this purpose; it is installed by default on many Linux distributions, usually distributed with the Perl package. You can use it like this:
find . -iname '*.txt' -exec prename 's/\.txt$/.xml/' {} \;
or this much faster alternative:
find . -iname '*.txt' -print0 | xargs -0 prename 's/\.txt$/.xml/'
The \. and $ anchor the match so only a trailing .txt is replaced, and the -print0/xargs -0 pair keeps file names with whitespace intact.
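Before committing to the rename, prename's -n flag (assuming your prename is Perl's File::Rename) does a dry run, printing what would be renamed without touching anything:
find . -iname '*.txt' -exec prename -n 's/\.txt$/.xml/' {} +
Once the output looks right, drop the -n.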

Explanation
Move/rename all files, whatever the extension, in the current directory and below from name to name.xml. You should test using echo before running the real script.
shopt -s globstar # enable ** globstar/recursivity
for i in **; do # **/*.txt will look only for .txt files
[[ -d "$i" ]] && continue # skip directories
echo "$i" "$i.xml"; # replace 'echo' by 'mv' when validated
#echo "$i" "${i/%.txt}.xml"; # replace .txt by .xml
done

Output showing **/*.txt **/*.xml verbatim means effectively there are no files matching the given pattern, as by default bash will use the pattern itself, unexpanded, if no matches are found.
To prevent this issue you'd have to additionally set shopt -s nullglob to have bash just return nothing when there is no match at all.
After verifying the echoed lines look somewhat reasonable you'll have to replace
echo "$i" "${i/%.txt}.xml"
with
mv "$i" "${i/%.txt}.xml"
to rename the files.
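Putting both shell options together, a minimal sketch of the final loop:
shopt -s globstar nullglob
for i in **/*.txt; do
mv -- "$i" "${i%.txt}.xml"
done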

You can use this bash script.
#!/bin/bash
DIRECTORY=/your/base/dir/here
for i in $(find "$DIRECTORY" -type f -name '*.txt'); do
mv "$i" "$i.xml"
done
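If any of the paths contain whitespace, the unquoted command substitution above will split them apart. A null-delimited sketch of the same idea avoids that:
find "$DIRECTORY" -type f -name '*.txt' -print0 |
while IFS= read -r -d '' i; do
mv -- "$i" "$i.xml"
done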

Related

Recursively Rename Files and Directories with Bash on macOS

I'm writing a script that will perform some actions, and one of those actions is to find all occurrences of a string in both file names and directory names, and replace it with another string.
I have this so far
find . -name "*foo*" -type f -depth | while read file; do
newpath=${file//foo/bar}
mv "$file" "$newpath"
done
This works fine as long as the path to the file doesn't also contain foo, but that isn't guaranteed.
I feel like the way to approach this is to ONLY change the file names first, then go back through and change the directory names, but even then, if you have a structure that has more than one directory with foo in it, it will not work properly.
Is there a way to do this with built in macOS tools? (I say built-in, because this script is going to be distributed to some other folks in our organization and it can't rely on any packages to be installed).
Separating the path_name from the file_name, something like:
#!/usr/bin/env bash
while read -r file; do
path_name="${file%/*}"; printf 'Path is %s\n' "$path_name"
file_name="${file#"$path_name"}"; printf 'Filename is %s\n' "$file_name"
newpath="$path_name${file_name//foo/bar}"
echo mv -v "$file" "$newpath"
done < <(find . -name "*foo*" -type f)
Have a look at basename and dirname as well.
The printf calls are just there to show which part is the path and which is the filename.
The script just replaces foo with bar in the file_name. It can be done with the path_name as well, using the same syntax:
newpath="${path_name//bar/more}${file_name//foo/bar}"
so renaming both path_name and file_name.
Or renaming the path_name and then the file_name, as in your idea, is also an option:
path_name="${file%/*}"
file_name="${file#"$path_name"}"
new_pathname="${path_name//bar/more}"
mv -v "$path_name" "$new_pathname"
new_filename="${file_name//foo/bar}"
mv -v "$new_pathname$file_name" "$new_pathname$new_filename"
No additional external tools/utilities are used, apart from the ones your script already uses.
Remove the echo if you're satisfied with the result/output.
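For a concrete feel of the two parameter expansions, here is a hypothetical path run through them:
file="./some_foo_dir/report_foo.txt"
path_name="${file%/*}"             # ./some_foo_dir (everything before the last /)
file_name="${file#"$path_name"}"   # /report_foo.txt (what remains after stripping the path)
echo "${path_name//foo/bar}${file_name//foo/bar}"   # ./some_bar_dir/report_bar.txt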
You can use -execdir to run a command on just the filename (basename) in the relevant directory:
find . -depth -name '*foo*' -execdir bash -c 'mv -- "${1}" "${1//foo/bar}"' _ {} \;
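To preview the result before anything moves, the same command with echo prepended (a sketch):
find . -depth -name '*foo*' -execdir bash -c 'echo mv -- "$1" "${1//foo/bar}"' _ {} \;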

Trying to rename certain file types within recursive directories

I have a bunch of files within a directory structure as such:
Dir
    SubDir
        File
        File
    Subdir
        SubDir
            File
            File
        File
Sorry for the messy formatting, but as you can see there are files at all different directory levels. All of these file names have a string of 7 digits prefixed to them, as such: 1234567_filename.ext. I am trying to remove the number and underscore at the start of the filename.
Right now I am using bash and using this oneliner to rename the files using mv and cut:
for i in *; do mv "$i" "$(echo $i | cut -d_ -f2-10)"; done
This is being run while I am CD'd into the directory. I would love to find a way to do this recursively, so that it only renamed files, not folders. I have also used a foreach loop in the shell, outside of bash for directories that have a bunch of folders with files in them and no other subdirectories as such:
foreach$ set p=`echo $f | cut -d/ -f1`
foreach$ set n=`echo $f | cut -d/ -f2 | cut -d_ -f2-10`
foreach$ mv $f $p/$n
foreach$ end
But that only works when there are no other subdirectories within the folders.
Is there a loop or oneliner I can use to rename all files within the directories? I even tried using find but couldn't figure out how to incorporate cut into the code.
Any help is much appreciated.
With Perl's rename (standalone command):
shopt -s globstar
rename -n 's|/[0-9]{7}_([^/]+$)|/$1|' **/*
If everything looks fine remove -n.
globstar: If set, the pattern ** used in a pathname expansion context will
match all files and zero or more directories and subdirectories. If
the pattern is followed by a /, only directories and subdirectories
match.
bash does provide functions, and these can be recursive, but you don't need a recursive function for this job. You just need to enumerate all the files in the tree. The find command can do that, but turning on bash's globstar option and using a shell glob to do it is safer:
#!/bin/bash
shopt -s globstar
# enumerate all the files in the tree rooted at the current working directory
for f in **; do
# ignore directories
test -d "$f" && continue
# separate the base file name from the path
name=$(basename "$f")
dir=$(dirname "$f")
# perform the rename, using a pattern substitution on the name part
mv "$f" "${dir}/${name/#???????_/}"
done
Note that this does not verify that file names actually match the pattern you specified before performing the rename; I'm taking you at your word that they do. If such a check were wanted then it could certainly be added.
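If you do want that check, a one-line guard (a sketch) placed right after the basename line would skip any file that doesn't start with seven digits and an underscore:
[[ $name == [0-9][0-9][0-9][0-9][0-9][0-9][0-9]_* ]] || continue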
How about this small tweak to what you have already:
for i in $(find . -type f); do mv "$i" "${i%/*}/$(basename "$i" | cut -d_ -f2-10)"; done
Basically just swapping the * with $(find . -type f), and taking the basename first so the directory part of each path is kept intact. (Note this still breaks on file names containing whitespace.)
Should be possible to do this using find...
find -E . -type f \
-regex '.*/[0-9]{7}_.*\.txt' \
-exec sh -c 'f="${0#*/}"; mv -v "$0" "${0%/*}/${f#*_}"' {} \;
Your find options may be different -- I'm doing this in FreeBSD. The idea here is:
-E instructs find to use extended regular expressions.
-type f causes only normal files (not directories or symlinks) to be found.
-regex ... matches the files you're looking for. You can make this more specific if you need to.
-exec ... \; runs a command, using {} (the file we've found) as an argument.
The command we're running uses parameter expansion first to grab the target directory and second to strip the filename. Note the temporary variable $f, which is used to address the possibility of extra underscores being part of the filename.
Note that this is NOT a bash command, though you can of course run it from the bash shell. If you want a bash solution that does not require use of an external tool like find, you may be able to do the following:
$ shopt -s extglob # use extended glob format
$ shopt -s globstar # recurse using "**"
$ for f in **/+([0-9])_*.txt; do f="./$f"; b="${f##*/}"; echo mv "$f" "${f%/*}/${b#*_}"; done
This uses the same logic as the find solution, but uses bash v4 extglob to provide better filename matching and globstar to recurse through subdirectories.
Hope these help.

Globbing for only files in Bash

I'm having a bit of trouble with globs in Bash. For example:
echo *
This prints out all of the files and folders in the current directory.
e.g. (file1 file2 folder1 folder2)
echo */
This prints out all of the folders with a / after the name.
e.g. (folder1/ folder2/)
How can I glob for just the files?
e.g. (file1 file2)
I know it could be done by parsing ls but I also know that it is a bad idea. I tried using extended globbing but couldn't get that to work either.
Without using any external utility, you can try a for loop with glob support:
for i in *; do [ -f "$i" ] && echo "$i"; done
I don't know if you can solve this with globbing, but you can certainly solve it with find:
find . -maxdepth 1 -type f
You can do what you want in bash like this:
shopt -s extglob
echo !(*/)
But note that what this actually does is match "not directory-likes."
It will still match dangling symlinks, symlinks pointing to not-directories, device nodes, fifos, etc.
It won't match symlinks pointing to directories, though.
If you want to iterate over normal files and nothing more, use find -maxdepth 1 -type f.
The safe and robust way to use it goes like this:
find -maxdepth 1 -type f -print0 | while IFS= read -r -d '' file; do
printf "%s\n" "$file"
done
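On bash 4.4 or newer, mapfile -d '' can collect the same NUL-delimited results into an array, avoiding the pipeline's subshell (a sketch):
mapfile -d '' files < <(find . -maxdepth 1 -type f -print0)
printf '%s\n' "${files[@]}"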
My go-to in this scenario is to use the find command. I just had to use it to find/replace dozens of instances in a given directory. I'm sure there are many other ways of skinning this cat, but the pure glob-based for loop above isn't recursive.
for file in $( find path/to/dir -type f -name '*.js' );
do sed -i -e 's#FIND#REPLACEMENT#g' "$file";
done
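Since find can invoke sed itself, you can sidestep the loop and its word-splitting problem entirely; a sketch, assuming GNU sed:
find path/to/dir -type f -name '*.js' -exec sed -i -e 's#FIND#REPLACEMENT#g' {} +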

Bash files or directories readable?

In the bash shell, how do I list files readable by ALL users (that is, user, group, and others)?
I tried find -readable, but it outputs those that are readable by at least one of the users.
Any idea?
There are many factors that could affect whether a particular user can read a file, but basically you can search for files with the read bit set for the others class. This is one way to do it using find:
find -perm -o=r
That would include both files and directories. To be specific to files, add -type f:
find -perm -o=r -type f
And it's equivalent to the octal form -perm -0004:
find -perm -0004 -type f
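Since the question asks for files readable by user, group, and others all at once, you can require all three read bits; a sketch using the octal form:
find . -perm -444 -type f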
Try this:
for i in *; do
[[ $(stat -c %A "$i") =~ ^.r..r..r. ]] && echo "$i"
done
The regex checks that the read bit is set for user, group, and others.
If you use bash 4, you can do it recursively with:
shopt -s globstar
for i in **; do
...

command line: ignore particular file while using wildcards

Consider I have lots of shell scripts in a folder named test. I want to execute all the files in it except one particular file. What do I do? Relocating the file or executing the files one after another manually is not an option. Is there any way I could do this in a single line, perhaps by adding something to sh path/to/test/*.sh, which executes all files?
for file in test/*; do
[ "$file" != "test/do-not-run.sh" ] && sh "$file"
done
If you are using bash, you can use extended patterns to skip the undesired script:
shopt -s extglob
for file in test/!(do-not-run).sh; do
sh "$file"
done
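Another bash-only route (a sketch) is GLOBIGNORE, which removes any matching names from glob expansions:
GLOBIGNORE="test/do-not-run.sh"
for file in test/*.sh; do
sh "$file"
done
unset GLOBIGNORE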
for FILE in "$YOURPATH"/*; do
test "${FILE##*/}" != "do-not-run.sh" && sh "$FILE"
done
find path/to/test -name "*.sh" \! -name "$pattern_for_unwanted_scripts" -exec {} \;
Find will recursively execute all entries in the directory which end in .sh (-name "*.sh") and don't match the unwanted pattern (\! -name "$pattern_for_unwanted_scripts"). Note that the scripts need to be executable for -exec {} \; to run them directly.
In bash, provided you do shopt -s extglob, you can use "extended globbing", which allows !(pattern-list) to match anything except one of the given patterns.
In your case:
shopt -s extglob
for f in !(do-not-run.sh); do if [ "${f##*.}" == "sh" ]; then sh "$f"; fi; done
