In the bash shell, how do I output files readable by ALL users (that is, user, group and others)?
I tried find -readable, but it outputs those that are readable by at least one of the users.
Any idea?
There are many factors that can affect whether a user can read a file, but basically you can search for files with the read bit set for "others". Here is one way to do it using find:
find -perm -o=r
That would include both files and directories. To restrict it to files, add `-type f`:
find -perm -o=r -type f
And it's equivalent to the octal form -0004:
find -perm -0004 -type f
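As a quick sanity check, here's a sketch (the filenames are made up) that creates two files and confirms which one find reports:
touch public.txt private.txt
chmod 644 public.txt     # -rw-r--r--: readable by others
chmod 640 private.txt    # -rw-r-----: not readable by others
find . -perm -o=r -type f    # prints only ./public.txt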
Try this:
for i in *; do
    [[ $(stat -c %A "$i") =~ ^.r..r..r. ]] && echo "$i"    # read bit set for user, group and others
done
If you use bash 4, you can do it recursively with:
shopt -s globstar
for i in **; do
...
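Filled in, a sketch of the complete recursive version (same readability test as above):
shopt -s globstar
for i in **; do
    [[ $(stat -c %A "$i") =~ ^.r..r..r. ]] && echo "$i"
done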
Related
I have a directory that contains subdirectories, each containing a particular script and supporting files. I need to verify that the proper files are in place in each of these directories. These directories can change at any time, so I'd like to use bash (I think) and store the output of the following command, which returns the proper subdirectories, in an array:
find . -maxdepth 1 -type d -not -name home -not -name lhome -print
and then verify that each of those directories contains the proper files:
file1 file2 file3.sh file4.conf
If a particular directory does not contain those files, I need to know which directory is the issue and which files are missing. What is the best/proper way to achieve that goal? Maybe bash is the wrong tool and Perl or something would be better?
There may be a more integrated way, but here's my shot at it:
while IFS= read -rd '' directory; do
    name=${directory##*/}    # strip the leading ./ so the names come out bare
    files=("file1" "file2" "$name.sh" "$name.conf")
    for file in "${files[@]}"; do
        if [ ! -e "$directory/$file" ]; then
            echo "$directory is missing $file"
        fi
    done
done < <(find . -maxdepth 1 -type d -not -name home -not -name lhome -print0)
Note that this find also returns the current directory. If you wish to avoid that, you might want to add a -mindepth 1 option.
Also, to make it into a script, you might want to replace the find location . with $1 so you can specify the target more flexibly.
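Putting both suggestions together, a hedged sketch of the script form (the name check_dirs.sh is made up):
#!/bin/bash
# usage: ./check_dirs.sh /path/to/parent  (defaults to the current directory)
target=${1:-.}
while IFS= read -rd '' directory; do
    name=${directory##*/}
    files=("file1" "file2" "$name.sh" "$name.conf")
    for file in "${files[@]}"; do
        [ -e "$directory/$file" ] || echo "$directory is missing $file"
    done
done < <(find "$target" -mindepth 1 -maxdepth 1 -type d -not -name home -not -name lhome -print0)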
I think something like this might work:
shopt -s nullglob extglob
diff <(while IFS= read -r f; do printf "%s\n" "$f/"{file1,file2,file3.sh,file4.conf}; done < <(printf "%s\n" !(home|lhome))) \
     <(printf "%s\n" !(home|lhome)/{file1,file2,file3.sh,file4.conf})
Basically what is happening is that a list of all possible files is generated by the while loop, something like:
c/file1
c/file2
c/file3.sh
c/file4.conf
d/file1
d/file2
d/file3.sh
d/file4.conf
Then another list is generated with the existing files:
c/file1
d/file2
Now all that is missing is to compare the two lists to find the differences:
2,5d1
< c/file2
< c/file3.sh
< c/file4.conf
< d/file1
7,8d2
< d/file3.sh
< d/file4.conf
As you can see this has some serious drawbacks: for one, the list of expected files is written twice. And each list is stored in memory, which would cause problems if many directories are present.
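One way around both drawbacks, as a sketch (the array name expected is my own), is to keep the expected names in a single array and skip diff entirely:
shopt -s nullglob extglob
expected=(file1 file2 file3.sh file4.conf)
for dir in !(home|lhome)/; do
    for f in "${expected[@]}"; do
        [ -e "$dir$f" ] || echo "${dir%/} is missing $f"
    done
done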
I'm having a bit of trouble with globs in Bash. For example:
echo *
This prints out all of the files and folders in the current directory.
e.g. (file1 file2 folder1 folder2)
echo */
This prints out all of the folders with a / after the name.
e.g. (folder1/ folder2/)
How can I glob for just the files?
e.g. (file1 file2)
I know it could be done by parsing ls, but I also know that's a bad idea. I tried using extended globbing but couldn't get that to work either.
Without using any external utility, you can try a for loop with globbing:
for i in *; do [ -f "$i" ] && echo "$i"; done
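If you need the file names later rather than just printed, a small variation (the array name files is my own) collects them first:
files=()
for i in *; do [ -f "$i" ] && files+=("$i"); done
printf '%s\n' "${files[@]}"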
I don't know if you can solve this with globbing, but you can certainly solve it with find:
find . -maxdepth 1 -type f
You can do what you want in bash like this:
shopt -s extglob
echo !(*/)
But note that what this actually does is match "not directory-likes."
It will still match dangling symlinks, symlinks pointing to not-directories, device nodes, fifos, etc.
It won't match symlinks pointing to directories, though.
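If you want to stay with globbing, a sketch that filters those cases out with a file test:
shopt -s extglob nullglob
for f in !(*/); do
    [[ -f $f ]] && echo "$f"    # skips fifos, device nodes and dangling symlinks
done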
If you want to iterate over normal files and nothing more, use find -maxdepth 1 -type f.
The safe and robust way to use it goes like this:
find -maxdepth 1 -type f -print0 | while IFS= read -r -d '' file; do
    printf "%s\n" "$file"
done
My go-to in this scenario is to use the find command. I just had to use it to find/replace dozens of instances in a given directory. I'm sure there are many other ways of skinning this cat, but the pure for example above isn't recursive.
find path/to/dir -type f -name '*.js' -print0 |
while IFS= read -r -d '' file; do
    sed -i -e 's#FIND#REPLACEMENT#g' "$file"
done
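Alternatively (a sketch with the same FIND/REPLACEMENT placeholders), find can invoke sed directly and skip the shell loop; {} + batches many files per sed call:
find path/to/dir -type f -name '*.js' -exec sed -i -e 's#FIND#REPLACEMENT#g' {} +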
I have a really easy question. I have found a bunch of similar questions answered, but none that solved this for me.
I have a shell script that goes through a directory and prints out the number of files and directories in each subdirectory, followed by the directory name.
However, it fails on directories with spaces; it treats each word as a new argument. I have tried putting $dir in quotes, but that doesn't help. Perhaps because it's already inside the echo quotes.
for dir in `find . -mindepth 1 -maxdepth 1 -type d`
do
    echo -e "`ls -1 $dir | wc -l`\t$dir"
done
Thanks in advance for your help :)
Warning: Two of the three code samples below use bashisms. Please take care to use the correct one if you need POSIX sh rather than bash.
Don't do any of those things. If your real problem does involve using find, you can use it like so:
shopt -s nullglob
while IFS='' read -r -d '' dir; do
    files=( "$dir"/* )
    printf '%s\t%s\n' "${#files[@]}" "$dir"
done < <(find . -mindepth 1 -maxdepth 1 -type d -print0)
However, for iterating over only immediate subdirectories, you don't need find at all:
shopt -s nullglob
for dir in */; do
    files=( "$dir"/* )
    printf '%s\t%s\n' "${#files[@]}" "$dir"
done
If you're trying to do this in a way compatible with POSIX sh, you can try the following:
for dir in */; do
    [ "$dir" = "*/" ] && continue                      # no directories at all
    set -- "$dir"/*
    [ "$#" -eq 1 ] && [ "$1" = "$dir/*" ] && continue  # empty directory
    printf '%s\t%s\n' "$#" "$dir"
done
You shouldn't ever use ls in scripts: http://mywiki.wooledge.org/ParsingLs
You shouldn't ever use for to read lines: http://mywiki.wooledge.org/DontReadLinesWithFor
Use arrays and globs when counting files to do this safely, robustly, and without external commands: http://mywiki.wooledge.org/BashFAQ/004
Always NUL-terminate file lists coming out of find -- otherwise, filenames containing newlines (yes, they're legal in UNIX!) can cause a single name to be read as multiple files, or (in some find versions and usages) your "filename" to not match the real file's name. http://mywiki.wooledge.org/UsingFind
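Put together, the canonical NUL-safe loop looks like this (a minimal sketch):
while IFS= read -r -d '' file; do
    printf '%s\n' "$file"
done < <(find . -type f -print0)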
I know nothing about Linux commands or bash scripts, so please help me.
I have a lot of files in different directories and I want to rename all those files from "name" to "name.xml" using a bash script. Is it possible to do that? I just find useless code snippets on the internet, like this:
shopt -s globstar # enable ** globstar/recursivity
for i in **/*.txt; do
echo "$i" "${i/%.txt}.xml";
done
It does not even work.
For this purpose the prename utility comes in handy; it is installed by default on many Linux distributions, usually distributed with the Perl package. You can use it like this:
find . -iname '*.txt' -exec prename 's/\.txt$/.xml/' {} \;
or this much faster alternative:
find . -iname '*.txt' -print0 | xargs -0 prename 's/\.txt$/.xml/'
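Before renaming anything for real, you can do a dry run; prename's -n flag prints what would be renamed without touching the files:
find . -iname '*.txt' -print0 | xargs -0 prename -n 's/\.txt$/.xml/'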
Explanation
Move/rename all files, whatever the extension is, from name to name.xml in the current directory and below. You should test using echo before running the real script.
shopt -s globstar                # enable ** globstar/recursivity
for i in **; do                  # **/*.txt would look only at .txt files
    [[ -d "$i" ]] && continue    # skip directories
    echo "$i" "$i.xml"           # replace 'echo' by 'mv' when validated
    #echo "$i" "${i/%.txt}.xml"  # replace .txt by .xml
done
Showing **/*.txt **/*.xml in the output effectively means there are no files matching the given pattern, as by default bash will use the verbatim pattern if no matches are found.
To prevent this issue you'd have to additionally set shopt -s nullglob to have bash just return nothing when there is no match at all.
After verifying the echoed lines look somewhat reasonable you'll have to replace
echo "$i" "${i/%.txt}.xml"
with
mv "$i" "${i/%.txt}.xml"
to rename the files.
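Combining the globstar loop with the nullglob fix, a minimal sketch of the final rename:
shopt -s globstar nullglob
for i in **/*.txt; do
    mv "$i" "${i%.txt}.xml"
done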
You can use this bash script.
#!/bin/bash
DIRECTORY=/your/base/dir/here
# one find is enough; -print0 plus read -d '' keeps names with spaces intact
find "$DIRECTORY" -type f -name '*.txt' -print0 |
while IFS= read -r -d '' i; do
    mv "$i" "$i.xml"
done
How can we iterate over the subdirectories of a given directory and get the files within those subdirectories in bash? Can I do that using the grep command?
This will go one subdirectory deep. The inner for loop will iterate over enclosed files and directories. The if statement will exclude directories. You can set options to include hidden files and directories (shopt -s dotglob).
shopt -s nullglob
for dir in /some/dir/*/
do
    for file in "$dir"/*
    do
        if [[ -f $file ]]
        then
            do_something_with "$file"
        fi
    done
done
This will be recursive. You can limit the depth using the -maxdepth option.
find /some/dir -mindepth 2 -type f -exec do_something {} \;
Using -mindepth excludes files in the current directory, but it includes files in the next level down (and below, depending on -maxdepth).
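For example, to stay exactly one level down, mirroring the glob version above:
find /some/dir -mindepth 2 -maxdepth 2 -type f -exec do_something {} \;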
Well, you can do that using grep:
grep -rl ^ /path/to/dir
But why? find is better. (Note that grep -l skips empty files, since they contain no lines for ^ to match.)
You are probably looking for find(1).