How can I iterate over the subdirectories of a given directory and get the files within those subdirectories in bash? Can I do that using the grep command?
This will go one subdirectory deep. The inner for loop will iterate over enclosed files and directories. The if statement will exclude directories. You can set options to include hidden files and directories (shopt -s dotglob).
shopt -s nullglob
for dir in /some/dir/*/
do
    for file in "$dir"/*
    do
        if [[ -f $file ]]
        then
            do_something_with "$file"
        fi
    done
done
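For instance, if you also want hidden files and directories, a minimal variant of the same loop (reusing the do_something_with placeholder from above):
shopt -s nullglob dotglob    # dotglob: globs now match hidden entries too
for dir in /some/dir/*/
do
    for file in "$dir"/*
    do
        [[ -f $file ]] && do_something_with "$file"
    done
done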
This will be recursive. You can limit the depth using the -maxdepth option.
find /some/dir -mindepth 2 -type f -exec do_something {} \;
Using -mindepth excludes files in the current directory, but it includes files in the next level down (and below, depending on -maxdepth).
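For example, to act only on files exactly one level down (a sketch, reusing the hypothetical do_something from above):
find /some/dir -mindepth 2 -maxdepth 2 -type f -exec do_something {} \;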
Well, you can do that using grep:
grep -rl ^ /path/to/dir
But why? find is better.
You are probably looking for find(1).
I'm trying to write a shell script that moves all files except for the ones that end with .sh and .py. I also don't want to move directories.
This is what I've got so far:
cd FILES/user/folder
shopt -s extglob
mv !(*.sh|*.py) MoveFolder/ 2>/dev/null
shopt -u extglob
This moves all files except the ones that end with .sh or .py, but all directories get moved into MoveFolder as well.
I guess I could rename the folders, but other scripts already reference those folders, so renaming might cause me more trouble. I could also add the folder names to the exclusion, but whenever someone else creates a folder, I would have to add its name to the script or it would be moved as well.
How can I improve this script to skip all folders?
Use find for this:
find -maxdepth 1 \! -type d \! -name "*.py" \! -name "*.sh" -exec mv -t MoveFolder {} +
What it does:
find: find things...
-maxdepth 1: that are in the current directory...
\! -type d: and that are not a directory...
\! -name "*.py: and whose name does not end with .py...
\! -name "*.sh: and whose name does not end with .sh...
-exec mv -t MoveFolder {} +: and move them to directory MoveFolder
The -exec flag is special: unlike the prior flags, which were conditions, this one is an action. The + that ends the command directs find to aggregate file names at the end of the command, at the place marked with {}. When all the files have been found, find executes the resulting command (i.e. mv -t MoveFolder file1 file2 ... fileN).
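For contrast, ending -exec with \; instead of + runs one mv per file, which is slower but sometimes easier to reason about (a sketch):
find -maxdepth 1 \! -type d \! -name "*.py" \! -name "*.sh" -exec mv {} MoveFolder/ \;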
You'll have to check every element to see if it is a directory or not, as well as its extension:
for f in FILES/user/folder/*
do
    extension="${f##*.}"
    if [ ! -d "$f" ] && [[ ! "$extension" =~ ^(sh|py)$ ]]; then
        mv "$f" MoveFolder
    fi
done
Otherwise, you can also use find with -type f, -maxdepth, and a regexp; a sketch follows below.
Regexp for the file name based on Check if a string matches a regex in Bash script, extension extracted through the solution to Extract filename and extension in Bash.
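A sketch of that find-based variant, mirroring the original script's cd and using -name tests rather than a regexp (mv -t assumes GNU coreutils):
cd FILES/user/folder
find . -maxdepth 1 -type f ! -name '*.sh' ! -name '*.py' -exec mv -t MoveFolder {} +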
From the current directory I have multiple subdirectories:
subdir1/
    001myfile001A.txt
    002myfile002A.txt
subdir2/
    001myfile001B.txt
    002myfile002B.txt
where I want to strip every character from the filenames before myfile, so I end up with
subdir1/
    myfile001A.txt
    myfile002A.txt
subdir2/
    myfile001B.txt
    myfile002B.txt
I have some code to do this...
#!/bin/bash
for d in `find . -type d -maxdepth 1`; do
    cd "$d"
    for f in `find . "*.txt"`; do
        mv "$f" "$(echo "$f" | sed -r 's/^.*myfile/myfile/')"
    done
done
however the newly renamed files end up in the parent directory
i.e.
myfile001A.txt
myfile002A.txt
myfile001B.txt
myfile002B.txt
subdir1/
subdir2/
In which the sub-directories are now empty.
How do I alter my script to rename the files and keep them in their respective subdirectories? As you can see, the first loop changes directory to the subdirectory, so I'm not sure why the files end up getting sent up a directory...
Your script has multiple problems. In the first place, your outer find command doesn't do quite what you expect: it outputs not only each of the subdirectories, but also the search root, ., which is itself a directory. You could have discovered this by running the command manually, among other ways. You don't really need to use find for this, but supposing that you do use it, this would be better:
for d in $(find * -maxdepth 0 -type d); do
Moreover, . is the first result of your original find command, and your problems continue there. Your initial cd is without meaningful effect, because you're just changing to the same directory you're already in. The find command in the inner loop is rooted there, and descends into both subdirectories. The path information for each file you choose to rename is therefore stripped by sed, which is why the results end up in the initial working directory (./subdir1/001myfile001A.txt --> myfile001A.txt). By the time you process the subdirectories, there are no files left in them to rename.
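You can see that stripping in isolation:
$ echo "./subdir1/001myfile001A.txt" | sed -r 's/^.*myfile/myfile/'
myfile001A.txt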
But that's not all: the find command in your inner loop is incorrect. Because you do not specify an option before it, find interprets "*.txt" as designating a second search root, in addition to .. You presumably wanted to use -name "*.txt" to filter the find results; without it, find outputs the name of every file in the tree. Presumably you're suppressing or ignoring the error messages that result.
But supposing that your subdirectories have no subdirectories of their own, as shown, and that you aren't concerned with dotfiles, even this corrected version ...
for f in `find . -name "*.txt"`;
... is an awfully heavyweight way of saying this ...
for f in *.txt;
... or even this ...
for f in *?myfile*.txt;
... the latter of which will avoid attempts to rename any files whose names do not, in fact, change.
Furthermore, launching a sed process for each file name is pretty wasteful and expensive when you could just use bash's built-in substitution feature:
mv "$f" "${f/#*myfile/myfile}"
You will also find that your working directory gets messed up: the working directory is a characteristic of the overall shell environment, so it does not automatically reset on each loop iteration. You'll need to handle that manually in some way; pushd / popd would do that, as would running the outer loop's body in a subshell (sketched below, after the corrected script).
Overall, this will do the trick:
#!/bin/bash
for d in $(find * -maxdepth 0 -type d); do
    pushd "$d"
    for f in *.txt; do
        mv "$f" "${f/#*myfile/myfile}"
    done
    popd
done
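Alternatively, running the loop body in a subshell avoids pushd / popd altogether; a sketch, using a plain */ glob for the directories:
#!/bin/bash
for d in */; do
    (
        cd "$d" || exit    # the cd affects only this subshell
        for f in *.txt; do
            mv "$f" "${f/#*myfile/myfile}"
        done
    )
done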
You can do it without find and sed:
$ for f in */*.txt; do echo mv "$f" "${f/\/*myfile/\/myfile}"; done
mv subdir1/001myfile001A.txt subdir1/myfile001A.txt
mv subdir1/002myfile002A.txt subdir1/myfile002A.txt
mv subdir2/001myfile001B.txt subdir2/myfile001B.txt
mv subdir2/002myfile002B.txt subdir2/myfile002B.txt
If you remove the echo, it'll actually rename the files.
This uses shell parameter expansion to replace a slash and anything up to myfile with just a slash and myfile.
Notice that this breaks if there is more than one level of subdirectories. In that case, you could use extended pattern matching (enabled with shopt -s extglob) and the globstar shell option (shopt -s globstar):
$ for f in **/*.txt; do echo mv "$f" "${f/\/*([!\/])myfile/\/myfile}"; done
mv subdir1/001myfile001A.txt subdir1/myfile001A.txt
mv subdir1/002myfile002A.txt subdir1/myfile002A.txt
mv subdir1/subdir3/001myfile001A.txt subdir1/subdir3/myfile001A.txt
mv subdir1/subdir3/002myfile002A.txt subdir1/subdir3/myfile002A.txt
mv subdir2/001myfile001B.txt subdir2/myfile001B.txt
mv subdir2/002myfile002B.txt subdir2/myfile002B.txt
This uses the *([!\/]) pattern ("zero or more characters that are not a forward slash"). The slash has to be escaped in the bracket expression because we're still inside of the pattern part of the ${parameter/pattern/string} expansion.
Maybe you want to use the following command instead:
rename 's#(.*/).*(myfile.*)#$1$2#' subdir*/*
You can use rename -n ... to check the outcome without actually renaming anything.
Regarding your actual question:
The find command from the outer loop returns 3 (!) directories:
.
./subdir1
./subdir2
The unwanted . is the reason why all files end up in the parent directory (that is .). You can exclude . by using the option -mindepth 1.
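With that option the outer command returns only the subdirectories; for the example tree above:
$ find . -mindepth 1 -maxdepth 1 -type d
./subdir1
./subdir2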
Unfortunately, this was only the reason the files landed in the wrong place, not the only problem. Since you already accepted one of the answers, there is no need to list them all.
A slight modification should fix your problem:
#!/bin/bash
for f in `find . -maxdepth 2 -name "*.txt"`; do
    mv "$f" "$(echo "$f" | sed -r 's,[^/]+(myfile),\1,')"
done
Note: this sed uses , instead of / as the delimiter.
However, there are much faster ways.
Here is a version with the rename utility, available or easily installed wherever there is bash and perl:
find . -maxdepth 2 -name "*.txt" | rename 's,[^/]+(myfile),/$1,'
Here are tests on 1000 files:
for `find`; do mv                 9.176s
rename                            0.099s
That's 100x as fast.
John Bollinger's accepted answer is twice as fast as the OP's, but 50x as slow as this rename solution:
for|for|mv "$f" "${f/#*myfile/myfile}"    4.316s
Also, it won't work if there is a directory with too many items for a shell glob; the same goes for any answer that uses for f in *.txt, for f in */*.txt, find *, or rename ... subdir*/*. Answers that begin with find ., on the other hand, will also work on directories with any number of items.
I'm having a bit of trouble with globs in Bash. For example:
echo *
This prints out all of the files and folders in the current directory.
e.g. (file1 file2 folder1 folder2)
echo */
This prints out all of the folders with a / after the name.
e.g. (folder1/ folder2/)
How can I glob for just the files?
e.g. (file1 file2)
I know it could be done by parsing ls, but I also know that that is a bad idea. I tried using extended globbing but couldn't get that to work either.
Without using any external utility, you can try a for loop with glob support:
for i in *; do [ -f "$i" ] && echo "$i"; done
I don't know if you can solve this with globbing, but you can certainly solve it with find:
find . -type f -maxdepth 1
You can do what you want in bash like this:
shopt -s extglob
echo !(*/)
But note that what this actually does is match "not directory-likes."
It will still match dangling symlinks, symlinks pointing to not-directories, device nodes, fifos, etc.
It won't match symlinks pointing to directories, though.
If you want to iterate over normal files and nothing more, use find -maxdepth 1 -type f.
The safe and robust way to use it goes like this:
find -maxdepth 1 -type f -print0 | while IFS= read -r -d '' file; do
    printf "%s\n" "$file"
done
My go-to in this scenario is the find command. I just had to use it to find/replace dozens of instances in a given directory. I'm sure there are many other ways of skinning this cat, but the pure-glob for example above isn't recursive.
for file in $( find path/to/dir -type f -name '*.js' ); do
    sed -i -e 's#FIND#REPLACEMENT#g' "$file"
done
In the bash shell, how do I output files readable by ALL users (that is, user, group and others)?
I tried find -readable, but it outputs those that are readable by at least one of the users.
Any idea?
There are many factors that could affect whether one user can read a file, but basically you could search for files with the read bit set for the others group. And this is one way to do it using find:
find -perm -o=r
That would include both files and directories. To be specific to files, add -type f:
find -perm -o=r -type f
And it's probably the same as -0004:
find -perm -0004 -type f
Try this:
for i in *; do
    [[ $(stat -c %A "$i") =~ (-r..r..r..|dr..r..r..) ]] && echo "$i"
done
If you use bash 4, you can do it recursively with:
shopt -s globstar
for i in **; do
    ...
When using sudo rm -r, how can I delete all files, with the exception of the following:
textfile.txt
backup.tar.gz
script.php
database.sql
info.txt
find [path] -type f -not -name 'textfile.txt' -not -name 'backup.tar.gz' -delete
If you don't specify -type f find will also list directories, which you may not want.
Or a more general solution using the very useful combination find | xargs:
find [path] -type f -not -name 'EXPR' -print0 | xargs -0 rm --
For example, to delete all non-txt files in the current directory:
find . -type f -not -name '*txt' -print0 | xargs -0 rm --
The -print0 and -0 combination is needed if there are spaces in any of the filenames that should be deleted.
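To see why, consider a file whose name contains a space: without the null delimiters, xargs splits it into two bogus arguments (a hypothetical demonstration; output approximate):
$ touch 'old notes.txt'
$ find . -name '*.txt' -print | xargs rm
rm: cannot remove './old': No such file or directory
rm: cannot remove 'notes.txt': No such file or directory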
rm !(textfile.txt|backup.tar.gz|script.php|database.sql|info.txt)
The extglob (Extended Pattern Matching) option needs to be enabled in bash (if it's not already):
shopt -s extglob
find . | grep -v "excluded files criteria" | xargs rm
This will list all files in the current directory, then list all those that don't match your criteria (beware of it matching directory names), and then remove them.
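Adding -type f at least keeps the directories themselves out of the rm list, though grep can still match the directory part of a path, so anchor your pattern accordingly; a sketch:
find . -type f | grep -v "excluded files criteria" | xargs rm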
Update: based on your edit, if you really want to delete everything from the current directory except the files you listed, this can be used:
mkdir /tmp_backup && mv textfile.txt backup.tar.gz script.php database.sql info.txt /tmp_backup/ && rm -r && mv /tmp_backup/* . && rmdir /tmp_backup
It will create a backup directory /tmp_backup (you've got root privileges, right?), move the files you listed to that directory, recursively delete everything in the current directory (you know you're in the right directory, don't you?), move everything from /tmp_backup back into the current directory, and finally delete /tmp_backup.
I chose the backup directory to be in the root, because if you're trying to delete everything recursively from the root, your system will have big problems.
Surely there are more elegant ways to do this, but this one is pretty straightforward.
I prefer to use a subquery list:
rm -r `ls | grep -v "textfile.txt\|backup.tar.gz\|script.php\|database.sql\|info.txt"`
-v, --invert-match select non-matching lines
\| Separator
Assuming that files with those names exist in multiple places in the directory tree and you want to preserve all of them:
find . -type f ! -regex ".*/\(textfile.txt\|backup.tar.gz\|script.php\|database.sql\|info.txt\)" -delete
You can use GLOBIGNORE environment variable in Bash.
Suppose you want to delete all files except php and sql, then you can do the following -
export GLOBIGNORE=*.php:*.sql
rm *
export GLOBIGNORE=
Setting GLOBIGNORE like this makes wildcards such as ls * or rm * ignore the php and sql files. So, using rm * after setting the variable will delete only the txt and tar.gz files.
Since nobody mentioned it:
copy the files you don't want to delete in a safe place
delete all the files
move the copied files back in place
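A minimal sketch of those three steps, using the question's file list and a hypothetical /tmp/keep scratch directory:
mkdir /tmp/keep
cp textfile.txt backup.tar.gz script.php database.sql info.txt /tmp/keep/
rm -r ./*
mv /tmp/keep/* . && rmdir /tmp/keep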
You can write a for loop for this... %)
for x in *
do
    if [ "$x" != "exclude_criteria" ]
    then
        rm -f "$x"
    fi
done
A little late for the OP, but hopefully useful for anyone who gets here much later via Google...
I found the answer by @awi and the comment on -delete by @Jamie Bullock really useful. Here is a simple utility so you can do this in different directories, ignoring different file names/types each time, with minimal typing:
rm_except (or whatever you want to name it)
#!/bin/bash
ignore=()
for fignore in "$@"; do
    ignore+=( -not -name "$fignore" )
done
find . -type f "${ignore[@]}" -delete
e.g. to delete everything except for text files and foo.bar (quote the glob so the shell doesn't expand it before the script sees it):
rm_except "*.txt" foo.bar
Similar to @mishunika, but without the if clause.
If you're using zsh (which I highly recommend):
rm -rf ^<file-or-folder-pattern-to-avoid>
With extended_glob
setopt extended_glob
rm -- ^*.txt
rm -- ^*.(sql|txt)
Trying it, this worked:
rm -r !(Applications|"Virtualbox VMs"|Downloads|Documents|Desktop|Public)
but names with spaces are (as always) tough. I tried also with Virtualbox\ VMs instead of the quotes, but it always deletes that directory (Virtualbox VMs).
Just:
rm $(ls -I "*.txt")    # Deletes all files except *.txt
Or:
rm $(ls -I "*.txt" -I "*.pdf")    # Deletes all files except *.txt and *.pdf
Make the files immutable. Not even root will be allowed to delete them.
chattr +i textfile.txt backup.tar.gz script.php database.sql info.txt
rm *
All other files have been deleted.
Afterwards, you can make them mutable again.
chattr -i *
I believe you can use:
rm -v !(filename)
Except for filename, all the other files in the directory will be deleted; make sure you are using it in the right directory.
This is similar to the comment from @siwei-shen, but you need the -o flag to do multiple patterns. The -o flag stands for 'or'; group the name tests so the negation applies to both:
find . -type f -not \( -name '*ignore1' -o -name '*ignore2' \) | xargs rm
You can do this with two command sequences.
First, define an array with the names of the files you do not want to delete:
files=( backup.tar.gz script.php database.sql info.txt )
After that, loop through all files in the directory, checking whether each filename is in the array of files you want to keep; if it's not, delete the file.
for file in *; do
    if [[ ! " ${files[*]} " =~ " $file " ]]; then
        rm "$file"
    fi
done
The answer I was looking for was to run a script, but I wanted to avoid deleting the script itself. So in case someone is looking for a similar answer, do the following.
Create a .sh file and write the following code:
cp my_run_build.sh ../../
rm -rf *
cp ../../my_run_build.sh .
# amend rest of the script
Since no one has mentioned this yet, in one particular case:
OLD_FILES=`echo *`
... create new files ...
rm -r $OLD_FILES
(or just rm $OLD_FILES)
or
OLD_FILES=`ls *`
... create new files ...
rm -r $OLD_FILES
You may need to use shopt -s nullglob if some of the files may or may not be there:
SET_OLD_NULLGLOB=`shopt -p nullglob`
shopt -s nullglob
FILES=`echo *.sh *.bash`
$SET_OLD_NULLGLOB
Without nullglob, echo *.sh *.bash may give you "a.sh b.sh *.bash".
(Having said all that, I myself prefer this answer, even though it does not work in OSX)
Rather than going for a direct command, move the required files to a temp dir outside the current dir, then delete all files using rm * or rm -r *.
Then move the required files back into the current dir.
Remove everything except file.name:
ls -d /path/to/your/files/* | grep -v file.name | xargs rm -rf