How can I exclude all "permission denied" messages from "find"? - bash

I need to hide all permission denied messages from:
find . > files_and_folders
I am experimenting to see when such messages arise. I need to gather all folders and files for which the message does not arise.
Is it possible to direct the permission levels to the files_and_folders file?
How can I hide the errors at the same time?

Use:
find . 2>/dev/null > files_and_folders
This hides not just the Permission denied errors, of course, but all error messages.
If you really want to keep other possible errors, such as too many hops on a symlink, but not the permission denied ones, then you'd probably have to take a flying guess that you don't have many files called 'permission denied' and try:
find . 2>&1 | grep -v 'Permission denied' > files_and_folders
If you strictly want to filter just standard error, you can use the more elaborate construction:
find . 2>&1 > files_and_folders | grep -v 'Permission denied' >&2
The I/O redirection on the find command is: 2>&1 > files_and_folders |.
The pipe redirects standard output to the grep command and is applied first. The 2>&1 sends standard error to the same place as standard output (the pipe). The > files_and_folders sends standard output (but not standard error) to a file.
The net result is that messages written to standard error are sent down the pipe and the regular output of find is written to the file. The grep filters the standard output (you can decide how selective you want it to be, and may have to change the spelling depending on locale and O/S), and the final >&2 means that the surviving error messages (written to standard output) go to standard error once more. The final redirection could be regarded as optional at the terminal, but it would be a very good idea to use it in a script so that error messages appear on standard error.
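For instance, when this is used inside a script, the surviving (non-permission) errors can be collected separately, precisely because they end up on standard error; remaining_errors.log is just an assumed file name here:
sh -c "find . 2>&1 > files_and_folders | grep -v 'Permission denied' >&2" 2> remaining_errors.log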
There are endless variations on this theme, depending on what you want to do. This will work on any variant of Unix with any Bourne shell derivative (Bash, Korn, …) and any POSIX-compliant version of find.
If you wish to adapt to the specific version of find you have on your system, there may be alternative options available. GNU find in particular has a myriad options not available in other versions — see the currently accepted answer for one such set of options.

Note:
This answer probably goes deeper than the use case warrants, and find 2>/dev/null may be good enough in many situations. It may still be of interest for a cross-platform perspective and for its discussion of some advanced shell techniques in the interest of finding a solution that is as robust as possible, even though the cases guarded against may be largely hypothetical.
If your shell is bash or zsh, there's a solution that is robust while being reasonably simple, using only POSIX-compliant find features; while bash itself is not part of POSIX, most modern Unix platforms come with it, making this solution widely portable:
find . > files_and_folders 2> >(grep -v 'Permission denied' >&2)
Note:
If your system is configured to show localized error messages, prefix the find calls below with LC_ALL=C (LC_ALL=C find ...) to ensure that English messages are reported, so that grep -v 'Permission denied' works as intended. Invariably, however, any error messages that do get displayed will then be in English as well.
>(...) is a (rarely used) output process substitution that allows redirecting output (in this case, stderr output, 2>) to the stdin of the command inside >(...).
In addition to bash and zsh, ksh supports them as well in principle, but trying to combine them with redirection from stderr, as is done here (2> >(...)), appears to be silently ignored (in ksh 93u+).
grep -v 'Permission denied' filters out (-v) all lines (from the find command's stderr stream) that contain the phrase Permission denied and outputs the remaining lines to stderr (>&2).
Note: There's a small chance that some of grep's output may arrive after find completes, because the overall command doesn't wait for the command inside >(...) to finish. In bash, you can prevent this by appending | cat to the command.
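Assembled per that note, the bash command would then read:
find . > files_and_folders 2> >(grep -v 'Permission denied' >&2) | cat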
This approach is:
robust: grep is only applied to error messages (and not to a combination of file paths and error messages, potentially leading to false positives), and error messages other than permission-denied ones are passed through, to stderr.
side-effect free: find's exit code is preserved: the inability to access at least one of the filesystem items encountered results in exit code 1 (although that won't tell you whether errors other than permission-denied ones occurred (too)).
POSIX-compliant solutions:
Fully POSIX-compliant solutions either have limitations or require additional work.
If find's output is to be captured in a file anyway (or suppressed altogether), then the pipeline-based solution from Jonathan Leffler's answer is simple, robust, and POSIX-compliant:
find . 2>&1 >files_and_folders | grep -v 'Permission denied' >&2
Note that the order of the redirections matters: 2>&1 must come first.
Capturing stdout output in a file up front allows 2>&1 to send only error messages through the pipeline, which grep can then unambiguously operate on.
The only downside is that the overall exit code will be the grep command's, not find's, which in this case means: if there are no errors at all or only permission-denied errors, the exit code will be 1 (signaling failure), otherwise (errors other than permission-denied ones) 0 - which is the opposite of the intent.
That said, find's exit code is rarely used anyway, as it often conveys little information beyond fundamental failure such as passing a non-existent path.
However, the specific case of even only some of the input paths being inaccessible due to lack of permissions is reflected in find's exit code (in both GNU and BSD find): if a permissions-denied error occurs for any of the files processed, the exit code is set to 1.
The following variation addresses that:
find . 2>&1 >files_and_folders | { grep -v 'Permission denied' >&2; [ $? -eq 1 ]; }
Now, the exit code indicates whether any errors other than Permission denied occurred: 1 if so, 0 otherwise.
In other words: the exit code now reflects the true intent of the command: success (0) is reported, if no errors at all or only permission-denied errors occurred.
This is arguably even better than just passing find's exit code through, as in the solution at the top.
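For instance, a script could branch on that exit code (a sketch; the echoed messages are just placeholders):
if find . 2>&1 >files_and_folders | { grep -v 'Permission denied' >&2; [ $? -eq 1 ]; }; then
  echo "no errors, or only permission-denied errors"
else
  echo "errors other than permission-denied ones occurred" >&2
fi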
gniourf_gniourf in the comments proposes a (still POSIX-compliant) generalization of this solution using sophisticated redirections, which works even with the default behavior of printing the file paths to stdout:
{ find . 3>&2 2>&1 1>&3 | grep -v 'Permission denied' >&3; } 3>&2 2>&1
In short: Custom file descriptor 3 is used to temporarily swap stdout (1) and stderr (2), so that error messages alone can be piped to grep via stdout.
Without these redirections, both data (file paths) and error messages would be piped to grep via stdout, and grep would then not be able to distinguish between error message Permission denied and a (hypothetical) file whose name happens to contain the phrase Permission denied.
As in the first solution, however, the exit code reported will be grep's, not find's, but the same fix as above can be applied.
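If the exit-code fix is also wanted here, the two techniques can presumably be combined (a sketch merging the two commands above):
{ find . 3>&2 2>&1 1>&3 | { grep -v 'Permission denied' >&3; [ $? -eq 1 ]; }; } 3>&2 2>&1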
Notes on the existing answers:
There are several points to note about Michael Brux's answer, find . ! -readable -prune -o -print:
It requires GNU find; notably, it won't work on macOS. Of course, if you only ever need the command to work with GNU find, this won't be a problem for you.
Some Permission denied errors may still surface: find ! -readable -prune reports such errors for the child items of directories for which the current user does have r permission, but lacks x (executable) permission. The reason is that because the directory itself is readable, -prune is not executed, and the attempt to descend into that directory then triggers the error messages. That said, the typical case is for the r permission to be missing.
Note: The following point is a matter of philosophy and/or specific use case, and you may decide it is not relevant to you and that the command fits your needs well, especially if simply printing the paths is all you do:
If you conceptualize the filtering of the permission-denied error messages as a separate task that you want to be able to apply to any find command, then the opposite approach of proactively preventing permission-denied errors requires introducing "noise" into the find command, which also introduces complexity and logical pitfalls.
For instance, the most up-voted comment on Michael's answer (as of this writing) attempts to show how to extend the command by including a -name filter, as follows:
find . ! -readable -prune -o -name '*.txt'
This, however, does not work as intended, because the trailing -print action is required (an explanation can be found in this answer). Such subtleties can introduce bugs.
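Presumably the intended command would therefore need to read:
find . ! -readable -prune -o -name '*.txt' -print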
The first solution in Jonathan Leffler's answer, find . 2>/dev/null > files_and_folders, as he himself states, blindly silences all error messages (and the workaround is cumbersome and not fully robust, as he also explains). Pragmatically speaking, however, it is the simplest solution, as you may be content to assume that any and all errors would be permission-related.
mist's answer, sudo find . > files_and_folders, is concise and pragmatic, but ill-advised for anything other than merely printing filenames, for security reasons: because you're running as the root user, "you risk having your whole system being messed up by a bug in find or a malicious version, or an incorrect invocation which writes something unexpectedly, which could not happen if you ran this with normal privileges" (from a comment on mist's answer by tripleee).
The 2nd solution in viraptor's answer, find . 2>&1 | grep -v 'Permission denied' > some_file runs the risk of false positives (due to sending a mix of stdout and stderr through the pipeline), and, potentially, instead of reporting non-permission-denied errors via stderr, captures them alongside the output paths in the output file.

Use:
find . ! -readable -prune -o -print
or more generally
find <paths> ! -readable -prune -o <other conditions like -name> -print
to avoid "Permission denied"
AND do NOT suppress (other) error messages
AND get exit status 0 ("all files are processed successfully")
Works with: find (GNU findutils) 4.4.2.
Background:
The -readable test matches readable files. The ! operator returns true when the test is false, so ! -readable matches directories (and files) that are not readable.
The -prune action prevents find from descending into the directory.
! -readable -prune can be translated to: if directory is not readable, do not descend into it.
The -readable test takes into account access control lists and other permissions artefacts which the -perm test ignores.
See also find(1) manpage for many more details.

If you want to start the search from the root "/", you will probably see output something like:
find: /./proc/1731/fdinfo: Permission denied
find: /./proc/2032/task/2032/fd: Permission denied
This is because of permissions. To solve this:
You can use the sudo command:
sudo find /. -name 'toBeSearched.file'
It asks for the superuser's password; when you enter the password you will see the results you really want. If you don't have permission to use the sudo command, which means you don't have the superuser's password, first ask your system admin to add you to the sudoers file.
You can redirect the standard error output (generally the display/screen) to some file and avoid seeing the error messages on the screen; redirect it to the special file /dev/null:
find /. -name 'toBeSearched.file' 2>/dev/null
You can redirect the standard error output to standard output and then pipe to the grep command with its -v ("invert match") option, so that lines containing the phrase 'Permission denied' are not shown:
find /. -name 'toBeSearched.file' 2>&1 | grep -v 'Permission denied'

I had to use:
find / -name expect 2>/dev/null
specifying the name of what I wanted to find and then telling find to redirect all errors to /dev/null,
expect being the name of the expect program I was searching for.

Pipe stderr to /dev/null by using 2>/dev/null
find . -name '...' 2>/dev/null

You can also use the -perm and -prune predicates to avoid descending into unreadable directories (see also How do I remove "permission denied" printout statements from the find program? - Unix & Linux Stack Exchange):
find . -type d ! -perm -g+r,u+r,o+r -prune -o -print > files_and_folders

Redirect standard error. For instance, if you're using bash on a unix machine, you can redirect standard error to /dev/null like this:
find . 2>/dev/null >files_and_folders

While the above approaches do not address the case of Mac OS X (because Mac OS X's find does not support the -readable switch), this is how you can avoid 'Permission denied' errors in your output there. This might help someone.
find / -type f -name "your_pattern" 2>/dev/null
If you're using some other command with find, for example to find the size of files of a certain pattern in a directory, 2>/dev/null would still work, as shown below.
find . -type f -name "your_pattern" -exec du -ch {} + 2>/dev/null | grep total$
This will return the total size of the files of the given pattern. Note the 2>/dev/null at the end of the find command.

Those errors are printed out to the standard error output (fd 2). To filter them out, simply redirect all errors to /dev/null:
find . 2>/dev/null > some_file
or first join stderr and stdout and then grep out those specific errors:
find . 2>&1 | grep -v 'Permission denied' > some_file

Simple answer:
find . > files_and_folders 2>&-
2>&- closes (-) the standard error file descriptor (2) so all error messages are silenced.
Exit code will still be 1 if any 'Permission denied' errors would otherwise be printed
Robust answer for GNU find:
find . -type d \! \( -readable -executable \) -prune -print -o -print > files_and_folders
Pass extra expressions to find that -prune (prevent descending into) but still -print any directory (-type d) that does not (\!) have both -readable and -executable permissions, or (-o) -print any other file.
-readable and -executable options are GNU extensions, not part of the POSIX standard
May still return 'Permission denied' on abnormal/corrupt files (e.g., see bug report affecting container-mounted filesystems using lxcfs < v2.0.5)
Robust answer that works with any POSIX-compatible find (GNU, OSX/BSD, etc)
{ LC_ALL=C find . 3>&2 2>&1 1>&3 > files_and_folders | grep -v 'Permission denied'; [ $? = 1 ]; } 3>&2 2>&1
Use a pipeline to pass the standard error stream to grep, removing all lines containing the 'Permission denied' string.
LC_ALL=C sets the POSIX locale using an environment variable, 3>&2 2>&1 1>&3 and 3>&2 2>&1 duplicate file descriptors to pipe the standard-error stream to grep, and [ $? = 1 ] uses [] to invert the error code returned by grep to approximate the original behavior of find.
Will also filter any 'Permission denied' errors due to output redirection (e.g., if the files_and_folders file itself is not writable)
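A usage sketch, branching on the approximated exit code (the echoed message is a placeholder):
if { LC_ALL=C find . 3>&2 2>&1 1>&3 > files_and_folders | grep -v 'Permission denied'; [ $? = 1 ]; } 3>&2 2>&1; then
  echo "only permission-denied errors (if any) were encountered"
fi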

To avoid just the permission denied warnings, tell find to ignore the unreadable files by pruning them from the search. Add an expression as an OR to your find, such as
find / \! -readable -prune -o -name '*.jbd' -ls
This mostly says to (match an unreadable file and prune it from the list) OR (match a name like *.jbd and display it [with ls]). (Remember that by default the expressions are AND'd together unless you use -or.) You need the -ls in the second expression or else find may add a default action to show either match, which will also show you all the unreadable files.
But if you're looking for real files on your system, there is usually no reason to look in /dev, which has many many files, so you should add an expression that excludes that directory, like:
find / -mount \! -readable -prune -o -path /dev -prune -o -name '*.jbd' -ls
So (match unreadable file and prune from list) OR (match path /dev and prune from list) OR (match file like *.jbd and display it).

use
sudo find / -name file.txt
It's stupid (because you elevate the search) and insecure, but far shorter to write.

Simply use this to search for a file in your system.
find / -name YOUR_SEARCH_TERM 2>&1 | grep YOUR_SEARCH_TERM
Let's not do unnecessary over-engineering; you just want to search for your file, right? Then that is the command which will list the files for you, if they are present in an area accessible to you.

None of the above answers worked for me. Whatever I found on the Internet focuses on one thing: hide the errors. None properly handles the process return code / exit code. I use the find command within bash scripts to locate some directories and then inspect their contents. I evaluate find's success using the exit code: a value of zero means it worked; otherwise it failed.
The answer provided above by Michael Brux works sometimes. But I have one scenario in which it fails! I discovered the problem and fixed it myself. I need to prune files when:
it is a directory AND has no read access AND/OR has no execute access
The key issue here is AND/OR. One good suggested condition sequence I read is:
-type d ! -readable ! -executable -prune
This does not always work. It means a prune is triggered when the match is:
it is a directory AND has no read access AND has no execute access
This sequence of expressions fails when read access is granted but execute access is not.
After some testing I realized that and changed my shell script solution to:
nice find /home*/ -maxdepth 5 -follow \
    \( -type d -a ! \( -readable -a -executable \) \) -prune \
    -o \
    \( -type d -a -readable -a -executable -a -name "${m_find_name}" \) -print
The key here is to place the "not" around the combined expression:
has read access AND has execute access
Otherwise it does not have full access, which means: prune it. This proved to work for me in one scenario in which the previously suggested solutions failed.
Below I provide technical details in response to questions from the comments section. I apologize if the details are excessive.
Why use the nice command? I got the idea here. Initially I thought it would be nice to reduce the process priority when scanning an entire filesystem. I realized it makes no sense for me, as my script is limited to a few directories. I reduced -maxdepth to 3.
Why search within /home*/? This is not really relevant for this thread. I install all applications by hand, via source-code compilation, with non-privileged users (not root). They are installed within "/home". I can have multiple binaries and versions living together. I need to locate all those directories, then inspect and back them up in a master-slave fashion. I can have more than one "/home" (several disks running within a dedicated server).
Why use -follow? Users might create symbolic links to directories. Its usefulness depends on your needs; I need to keep a record of the absolute paths found.

You can use grep -v (invert match):
-v, --invert-match select non-matching lines
like this:
find . > files_and_folders 2>&1
grep -v "Permission denied" files_and_folders > files_and_folders.clean
That should do the magic. (The errors must first be merged into the file with 2>&1, and grep has to write to a different file so that it does not truncate its own input.)

For macOS:
Make a new command using an alias: just add this line in ~/.bash_profile:
alias search='find / -name $file 2>/dev/null'
and in new Terminal window you can call it:
$ file=<filename or mask>; search
for example:
$ file=etc; search

If you are using CSH or TCSH, here is a solution:
( find . > files_and_folders ) >& /dev/null
If you want output to the terminal:
( find . > /dev/tty ) >& /dev/null
However, as the "csh-whynot" FAQ describes, you should not use CSH.

Optimized solutions for GNU find†
At least for some system+filesystem combinations, find doesn't need to stat a file to get its type. Then you can check whether it's a directory before testing readability to speed up the search‡; I got some 30% improvement in tests I did. So for long searches, or searches that run often enough, use one of these:
Print everything visible
$ find . -print -type d ! -readable -prune
$ find . -type d ! -readable -prune , [expression] -print
Print visible files
$ find . -type d \( ! -readable -prune -o -true \) -o [expression] -print
Print visible directories
$ find . -type d -print ! -readable -prune
$ find . -type d \( ! -readable -prune , [expression] -print \)
Print only readable directories
$ find . -type d ! -readable -prune -o [expression] -print
Notes
† The -readable and , (comma) operators are GNU extensions. This expression
$ find . [expression] , [expression]
is logically equivalent to
$ find . \( [expression] -o -true \) [expression]
‡ This is because find implementations with this optimization enabled will avoid calling stat on non-directory files at all in the discussed use case.
Edit: shell function
Here is a POSIX shell function I ended up with to prepend this test to any expression. It seems to work fine with the implicit -print and command-line options:
findr () {
j=$#; done=
while [ $j -gt 0 ]; do
j=$(($j - 1))
arg="$1"; shift
test "$done" || case "$arg" in
-[A-Z]*) ;; # skip options
-*|\(|!) # find start of expression
set -- "$#" \( -type d ! -readable -prune -o -true \)
done=true
;;
esac
set -- "$#" "$arg"
done
find "$#"
}
The other two alternatives listed in the answers caused either a syntax error in POSIX shell (couldn't even source a file containing the function definition) or bad output in ZSH... Running time seems to be equivalent.
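A hypothetical invocation (path and pattern are placeholders):
findr /var/log -name '*.log'
# roughly equivalent to:
# find /var/log \( -type d ! -readable -prune -o -true \) -name '*.log'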

To search the entire file system for some file, e.g. hosts, except the /proc tree, which causes all kinds of errors, I use the following:
# find / -path /proc ! -prune -o -name hosts -type f
/etc/hosts
Note: Because -prune is always true, you have to negate it to avoid seeing the line /proc in the output. I tried using the ! -readable approach and found it returns all kinds of things under /proc that the current user can read. So the "OR" condition doesn't do what you expect/want there.
I started with the example given by the find man page, See the -prune option.

A minimal solution (with GNU find) is just to add the -readable test:
find . -name foo -readable

Related

Find Command Exclude Hidden files when using empty flag

I am looking for a way to use the find command to tell if a folder has no files in it. I have tried using the -empty flag, but since I am on macOS the system files the OS places in the directory such as .DS_Store cause find to not consider the directory empty. I have tried telling find to ignore .DS_Store but it still considers the directory not empty because that file is present.
Is there a way to have find exclude certain files from what it considers -empty? Also is there a way to have find return a list of directories with no visible files?
The -empty predicate is rather simple: it's true for a directory if it has no entries other than . or ..
Kind of an ugly solution, but you can use -exec to run another find in each directory which will implement your criteria for deciding what directories you want to include.
Below:
the outer find will execute sh -c for each directory in /starting/point
sh will execute another find with different criteria.
the inner find will print the first match and then quit
read will consume the output (if any) of the inner find. read will have an exit status of 0 only if the inner find printed at least one line, non-zero otherwise
if there was no output from the inner find, the outer find's -exec predicate will evaluate to false
since -exec is followed by -o, the following -print action will be executed only for those directories which do not match the inner find's criteria
find /starting/point \
    -type d \( \
        -exec sh -c \
            'find "$1" -mindepth 1 -maxdepth 1 ! -name ".*" -print -quit | read' \
            sh {} \; \
        -o -print \
    \)
Also note that the 'find FOLDER -empty' is somewhat tricky. It will consider FOLDER empty even if it contains files, as long as these are empty.
Maybe not exactly what was asked, but I prefer the brute force approach if I want to avoid a no-match error on using FOLDER/*. In tcsh:
ls -d FOLDER/* >& /dev/null
if !($status) COMMANDS FOLDER/* ...
A variation of this might be usable here (like also using
ls -d FOLDER/.* | wc -l
and drawing the desired conclusions from the combined results).

How to suppress stderr while using 'ls' command along with assigning the output to a variable?

I am trying to create a script that would search for files on the server as per their name and time stamp and take count of those files into a variable. It works fine when the file is available, but throws an stderr when the file is not available.
In order to suppress the stderr I am trying to redirect it to /dev/null but even that is not helping and the error still shows on the screen. I know I can first check whether the file is available or not using the 'if' statement and then take the count but that would unnecessarily make the script lengthy.
So is there a way I can take the file count and suppress the stderr (if any) along with assigning the output to a variable in just one line of code?
This command runs successfully when a file with name 'example' is present on the server:
file_name=example
file_date=20190901
file_count=`ls -lrt "$PWD"/"$file_name"*"$file_date"* | wc -l`
But when the file is not present, it throws a stderr on the screen like below:
ls: cannot access /home/saurap01/example*20190901*: No such file or directory
In order to suppress this error, I tried redirecting it to /dev/null as below:
file_count=`ls -lrt "$PWD"/"$file_name"*"$file_date"* | wc -l` > /dev/null 2>&1
But even this is not helping in suppressing the error.
Can I just try to hide the stderr by using below:
file_count=`ls -lrt "$PWD"/"$file_name"*"$file_date"* | wc -l` 2>&-
You're not redirecting stderr correctly; 2>/dev/null should come right after the command whose stderr you want to suppress, like:
file_count=`ls -lrt "$PWD"/"$file_name"*"$file_date"* 2>/dev/null | wc -l`
But, parsing the output of ls is not a good idea at all, you should use find for tasks like this. For example, using GNU find:
find -maxdepth 1 \
    -type f \
    -name "$file_name*$file_date*" \
    -printf '.' | wc -c
or using any POSIX-compliant find:
find . \
    ! \( -type d -path '*/*' -prune \) \
    -type f \
    -name "$file_name*$file_date*" \
    -exec printf '.%.s' {} + | wc -c
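Either pipeline can be captured into a variable, as the question asks; a sketch using the GNU variant above:
file_count=$(find -maxdepth 1 -type f -name "$file_name*$file_date*" -printf '.' | wc -c)
echo "$file_count"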
The immediate solution is to add
2>/dev/null
inside the command substitution.
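That is, keeping the question's own command otherwise unchanged:
file_count=`ls -lrt "$PWD"/"$file_name"*"$file_date"* 2>/dev/null | wc -l`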
Your code exhibits a number of antipatterns, so let's also discuss those.
As explained in useless use of ls, the shell already performs wildcard expansions before passing the arguments to ls. Moreover, the options you pass to ls perform a nontrivial amount of additional work; you ignore the size, owner etc yet you cause these to be looked up with the -l option; and you don't care about the sort order, yet you force sorting by timestamp with -rt.
Additionally, as explained in http://mywiki.wooledge.org/ParsingLs, the command ls has a number of features for human-readable output which make it unsuitable for use in scripts.
The shell, out of the box, is usually configured with globbing set up to return the glob pattern itself if it doesn't match any files. In Bash (but not ksh) you can avoid this with
shopt -s nullglob
and then simply print the files:
### BUG; see below
printf '%s\n' ./"$file_name"*"$file_date"* |
wc -l
However, this will produce the wrong result if one of the file names contains a newline - then, the number of lines of output will be different than the number of files. An easy workaround is to avoid the printf and use the shell's features entirely.
set -- ./"$file_name"*"$file_date"*
This will simply assign the wildcard matches to the list of arguments $1, $2, etc., and hence
echo $#
will print the number of files.
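Putting those pieces together in bash (a sketch reusing the question's variable names):
shopt -s nullglob
set -- ./"$file_name"*"$file_date"*
file_count=$#
echo "$file_count"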
If you don't have Bash (and thus not shopt) you can look for whether the glob expands to itself by checking whether the first file exists.
set -- ./"$file_name"*"$file_date"*
if [ -e "$1" ]; then
    echo "$#"
else
    echo 0
fi
As a minor aside, notice also that "$PWD" is rarely useful to spell out. If you need to convert a relative file name to absolute, or need to know the full path of the current directory for other reasons, it's occasionally useful; but outside of these scenarios, just use the relative path to refer to things in the current directory.

bash find command for directories without permission denied errors

I want to find all files with the name java in folders containing /current/jre/bin/ and without the many permission denied errors.
So I thought find / -type d '*/current/jre/bin/*' 2>/dev/null should do the job.
But the return is nothing. I also tried it without the *, with -wholename (with and without *), with an additional -name, -name but without -type d and some other commands.
If I instead search for the java files with find / -name 'java' 2>/dev/null I receive eleven path, from which I only need three.
Putting the '*/current/jre/bin/*' after -type d confuses find so it cannot determine which path you want to search. If you removed the 2>/dev/null you would see the error find: paths must precede expression.
Instead, use a pipe to grep:
find / -name 'java' 2>/dev/null | grep '/current/jre/bin/'
The proper way to say "the path must contain this" is with -path
find / -type d -path '*/current/jre/bin/*' 2>/dev/null
Specifying a bare string in the predicates is an error, which you would easily have found out if you didn't redirect error messages to /dev/null. Even then, having the command return immediately even though you are scanning the entire file tree should be a dead giveaway.
Pro tip: also add -xdev to the options, to keep find from descending into other filesystems such as /dev, /proc, and so on. If your files are split across multiple partitions, you will then need to specify each partition you want to search in the path list before the predicates.
(The general syntax is find path1 path2 path3 ... -list -of -predicates.)
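For example (assuming /home and /opt are separate mount points you also want searched):
find / /home /opt -xdev -type d -path '*/current/jre/bin/*' 2>/dev/null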

How to search for *~ as in anything ending with ~ in a bash script

I'm writing a Bash script and I need to find and move/delete all files with names ending in ~ or beginning and ending with #, that is file~ or #file#, emacs junk files.
I'm trying to use [ -f *~ ] && ( ... move or delete those files ... ) to determine if any files of this kind exist before I try to do anything to them, so as not to get error messages from the rm or mv function if they don't find the files. However, this results in "binary operator expected". I think it has something to do with the fact that ~ is an unary operator. Is there a way to make it work as intended?
Nothing wrong with what you were doing originally for current directory (not any slower than find), though not as one-liney.
#!/bin/bash
for file in *"~"; do
    if [ -f "$file" ]; then
        : # do something with "$file"
    fi
done
Also, "binary operator expected" is just coming from bash expecting a single argument for the "-f" operator, whereas *~ can expand to multiple arguments, e.g.
$ mkdir test && cd test
$ touch "1~"
$ if [ -f *"~" ]; then echo "Confirmed file ending in ~"; fi
Confirmed file ending in ~
$ touch {2..10}"~" && echo *"~"
1~ 10~ 2~ 3~ 4~ 5~ 6~ 7~ 8~ 9~
$ if [ -f *"~" ]; then echo "Confirmed file ending in ~"; fi
bash: [: too many arguments
$ if [ -f "arg1" "arg2" ]; then echo "Confirmed file ending in ~"; fi
bash: [: arg1: binary operator expected
Not positive why errors are different for the two cases, but pretty sure either error can result depending on expansion.
Your problem stems from the fact that file-testing operators such as -f are not designed to be used with globbing patterns - only with a single, literal path.
You can simply let bash's path expansion (globbing) do the work:
Note: The approaches below are an alternative to using a loop (as demonstrated in @BroSlow's answer).
Simplest approach:
rm -f *'~' '#'*'#'
This removes all matching files, if any, and, if there are no matches, does nothing (and outputs nothing and reports exit code 0) - thanks to the -f option (tip of the hat to @chris).
Caveat: This also silently removes files marked as read-only, IF you have sufficient permissions to make them writable. In other words: if files match that you have intentionally marked as read-only, they will still get removed.
Also, if directories happen to match, they will NOT be removed, an error message will be displayed and the exit code will be 1 - matching files, however, are still removed.
At your own peril you may add -r to also quietly remove any matching directories (whether they're empty or not).
Using find, if explicitly ruling out directories is desired:
To avoid matching directories, you can use find, but to make it safe, the command gets lengthy:
# delete
find . -maxdepth 1 -type f -name '*~' -delete -or -name '#*#' -delete
# move
find . -maxdepth 1 -type f \
    -name '*~' -exec mv {} /tmp/ \; -or \
    -name '#*#' -exec mv {} /tmp/ \;
(Two general notes on find:
The path itself (., in this case) is by default included in the set of items (not a concern in this particular case due to excluding directories from matching) - to avoid that, add -mindepth 1.
Terminating the command passed to the -exec primary with + rather than \; is generally preferable, as find then substitutes as many matches as will safely fit for {}, resulting in much fewer invocations (typically just 1) of the command (assuming, of course, that your command can take argument lists of variable length) - this is similar to xargs' behavior.
Here's the catch: -exec only accepts commands terminated with + if {} is the command's last argument (and will otherwise fail with the misleading error message find: missing argument to '-exec').
Thus, in the case at hand + cannot be used, because the mv command's last argument must be the target.
)
The shell will expand your *~ to a list of all files ending in ~. So if you have more than one of them, they all will be in the parameter list of -f, but -f handles only one parameter.
Try
find . -name "*~" -print | xargs rm
and read about the parameters to find if you want to stop it from recursing your whole directory structure.
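For instance, with GNU or BSD find, -maxdepth 1 restricts the search to the current directory only:
find . -maxdepth 1 -name "*~" -print | xargs rm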
The find command is generally used for things of this nature. It even has a built-in -delete flag.
find -name '*~' -delete
or, with xargs (to move, for example)
# Moves files to /tmp using the replacement string specified with the -I flag
find -name '*~' -print0 | xargs -0 -I _ mv _ /tmp/
If you prefer to use xargs for deletion as well, you can do away with the use of -I
find -name '*~' -print0 | xargs -0 rm
Note the use of the -print0 and -0 flags to specify null-terminated paths. This allows paths with spaces to run properly. Without -0, filenames with spaces (including spaces anywhere in the path) will be treated as two separate (possibly invalid) paths.

Understand pipe and redirection command

I want to understand the real power of pipes and redirection. As per my understanding, | takes the output of one command and feeds it in as the input of the next, and > redirects output. If that is so,
find . -name "*.swp" | rm
find . -name "*.swp" > rm
why are these commands not working as expected? To me, the above commands mean:
Find all files recursively whose extension is .swp in the current directory.
Take the output of 1. and remove all of the resulting files.
FYI, yes, I know how to accomplish this task. It can be done by passing the -exec flag:
find . -name "*.swp" -exec rm -rf {} \;
But as I already mentioned, I want to accomplish it with > or |.
If I am wrong and going in the wrong direction, please correct me and explain redirection and pipes. Where do we use which? Please don't mention simple textbook examples; I have read all of those. Try to explain something more involved.
I'll break this down by the three methods you have shown:
> will redirect all output from find into a file named rm (will not work, because you're just writing to a file).
| will pipe output from find into the rm command (will not work, because rm does not read on stdin)
-exec rm -rf {} \; will run rm -rf on each item ({}) that find finds (will work, because it passes the files as argument to rm).
You will want to use -exec flag, or pipe into the xargs command (man xargs), not | or > in order to achieve the desired behavior.
EDIT: as @dmckee said, you can also use the $() operator (command substitution), i.e.: rm -rf $(find . -name "*.swp") (this will fail if you have a large number of files, due to argument length limits).
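A sketch of the xargs approach mentioned above (GNU/BSD find and xargs; null-delimited so unusual filenames are handled safely, as the next answer discusses):
find . -name "*.swp" -print0 | xargs -0 rm -f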
> simply redirects to a file named rm.
Piping via | to rm doesn't work because rm doesn't expect filenames via STDIN.
So you have to use xargs, which passes values from STDIN as arguments:
find . -name "*.swp"|xargs rm
This is dangerous because the filename may contain characters your shell considers a field separator ($IFS).
So, you use:
find . -name "*.swp" -print0|xargs -0 rm
Which causes find to print the filenames \0-separated to STDOUT and xargs to read the filenames \0-separated and pass them as arguments to rm.
Of course, the easiest way to achieve this would have been:
rm **/*.swp
assuming you use bash with the globstar shell option enabled (shopt -s globstar).
You should take some time and read about the basics of shell redirection again :) I think this is a good document: http://wiki.bash-hackers.org/howto/redirection_tutorial
I'll try to explain what went wrong for you:
find . -name "*.swp" | rm
This command redirects the find results, i.e. the stdout of find, to the stdin of the program rm. However, rm does not read on stdin (this is something you can read in the documentation of rm). rm is controlled via command line arguments, not via stdin. I think there is no way to make rm read from stdin at all. That's why nothing is deleted.
find . -name "*.swp" > rm
This command redirects newline-delimited find results (stdout of find) to a file called 'rm'. Again, nothing is deleted :)
Basically the <, >, >>, &>, &>> operators perform redirection from/to a file that actually exists in the file system. The pipe | redirects the standard output from one command to the standard input of another command. Simply spoken there are no files involved here. However, this approach only makes sense if the program to the left of the pipe actually writes something to stdout and the program to the right of the pipe reads from stdin and both programs understand each other, i.e. the reading program (the consumer) understands the output of the feeding program.
Redirection creates a file. So your >rm example just creates a file named ./rm into which the output of your command is saved.
Pipes are essentially a shorthand. one | two is like one >tmp; two <tmp except without the (explicit) temporary file.
Of course, rm doesn't read file names from standard input, so cmd | rm is basically useless (apart from situations where the pipeline continues with yet another command which does something with the input which rm didn't read). If you want that, there's xargs.
find . -name "*.swp" | xargs rm
