Suppress error message from bash loop - bash

I've got the following line in my bash script:
for i in $(find ./TTDD* -type f)
do
It works when there are files in the directory, but when it's empty I get the following:
find ... No such file or directory
How can I suppress that exact error message? I'm logging output and don't care about that particular error.

The problem is that globs that don't have any matches expand to themselves, so find is handed the literal path ./TTDD*, and since there's no file with that name, you get this error.
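You can see this expansion behavior directly in an interactive shell (shopt -s nullglob is shown only to illustrate the contrast):
$ echo ./TTDD*          # no match, so the glob is passed through literally
./TTDD*
$ shopt -s nullglob
$ echo ./TTDD*          # with nullglob, an unmatched glob expands to nothing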
You can rewrite it in different ways. The most straightforward is:
find . -path './TTDD*' -type f
This will show the same files.
If there are other directories in the current dir, it will waste some time going through their files even if they'll never match. If required, you can short-circuit such directories with a less readable find . -path . -o -not -path './TTDD*' -prune -o -type f -print.
NB: iterating over these files with a for loop will break for files with spaces and various other special characters. You can combine this with anubhava's answer to safely read all filenames while also not suppressing all of find's other potentially useful error messages.
while IFS= read -rd '' f; do
printf "Processing [%s]\n" "$f"
done < <(find . -path './TTDD*' -type f -print0 2>/dev/null)

How about you use "-exec"?
This way you can feed every result of your find command to another command. For example, say I want to find all the files within a given folder that are owned by the root user and change their ownership; here is the code:
find ${DataFS}/Data -user root -exec chown someuser:someuser {} \;
The approach you are using is similar to
find ${DataFS}/Data -user root -print0 | xargs -0 chown someuser:someuser
which is not ideal because it will fail when find matches no files (i.e., when it prints nothing): chown: missing operand after `someuser:someuser'
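With GNU xargs you can also sidestep that failure with -r (--no-run-if-empty), which skips running chown entirely when find produces no output; a minimal sketch:
find ${DataFS}/Data -user root -print0 | xargs -0 -r chown someuser:someuser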

Related

Find Command Exclude Hidden files when using empty flag

I am looking for a way to use the find command to tell if a folder has no files in it. I have tried using the -empty flag, but since I am on macOS, system files that the OS places in the directory, such as .DS_Store, cause find to not consider the directory empty. I have tried telling find to ignore .DS_Store, but it still considers the directory not empty because that file is present.
Is there a way to have find exclude certain files from what it considers -empty? Also is there a way to have find return a list of directories with no visible files?
The -empty predicate is rather simple: for a directory it is true only if the directory has no entries other than . and ..
Kind of an ugly solution, but you can use -exec to run another find in each directory which will implement your criteria for deciding what directories you want to include.
Below:
the outer find will execute sh -c for each directory in /starting/point
sh will execute another find with different criteria.
the inner find will print the first match and then quit
read will consume the output (if any) of the inner find. read will have an exit status of 0 only if the inner find printed at least one line, non-zero otherwise
if there was no output from the inner find, the outer find's -exec predicate will evaluate to false
since -exec is followed by -o, the following -print action will be executed only for those directories which do not match the inner find's criteria
find /starting/point \
    -type d \( \
        -exec sh -c \
            'find "$1" -mindepth 1 -maxdepth 1 ! -name ".*" -print -quit | read' \
            sh {} \; \
        -o -print \
    \)
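If you want to act on each such directory rather than just list it, one variation (assuming GNU or BSD find for -print0) is:
find /starting/point \
    -type d \( \
        -exec sh -c \
            'find "$1" -mindepth 1 -maxdepth 1 ! -name ".*" -print -quit | read' \
            sh {} \; \
        -o -print0 \
    \) | while IFS= read -rd '' dir; do
    printf 'No visible files in: %s\n' "$dir"
done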
Also note that checking emptiness with 'find FOLDER -empty' is somewhat tricky: -empty also matches empty files, so it will still report matches inside FOLDER even when FOLDER contains files, as long as those files are themselves empty.
Maybe not exactly what was asked, but I prefer the brute force approach if I want to avoid a no-match error on using FOLDER/*. In tcsh:
ls -d FOLDER/* >& /dev/null
if !($status) COMMANDS FOLDER/* ...
A variation of this might be usable here (like also using
ls -d FOLDER/.* | wc -l
and drawing the desired conclusions from the combined results).
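For reference, a rough bash equivalent of the same brute-force check (COMMANDS is just a placeholder for whatever you want to run):
if ls -d FOLDER/* >/dev/null 2>&1; then
    COMMANDS FOLDER/* ...
fi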

Why is my `find` command giving me errors relating to ignored directories?

I have this find command:
find . -type f -not -path '**/.git/**' -not -path '**/node_modules/**' | xargs sed -i '' s/typescript-library-skeleton/xxx/g;
for some reason it's giving me these warnings/errors:
find: ./.git/objects/3c: No such file or directory
find: ./.git/objects/3f: No such file or directory
find: ./.git/objects/41: No such file or directory
I even tried using:
-not -path '**/.git/objects/**'
and got the same thing. Anybody know why the find is searching in the .git directory? Seems weird.
why is the find searching in the .git directory?
GNU find is clever and supports several optimizations over a naive implementation:
It can flip the order of -size +512b -name '*.txt' and check the name first, because querying the size will require a second syscall.
It can count the hard links of a directory to determine the number of subdirectories, and once it has seen them all it no longer needs to check the remaining entries for -type d or recurse into them.
It can even rewrite (-B -or -C) -and -A so that if the checks are equally costly and free of side effects, the -A will be evaluated first, hoping to reject the file after 1 test instead of 2.
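GNU find can show some of this reordering if you're curious; the exact debug options and output vary by findutils version, so treat this line as a sketch:
find -D opt -O3 . -name '*.txt' -size +512k    # -D opt prints diagnostics about the optimised expression tree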
However, it is not yet clever enough to realize that -not -path '*/.git/*' means that if you find a directory .git then you don't even need to recurse into it because all files inside will fail to match.
Instead, it dutifully recurses, finds each file and matches it against the pattern as if it was a black box.
To explicitly tell it to skip a directory entirely, you can instead use -prune. See How to exclude a directory in find . command
Both more efficient and more correct would be to avoid the default -print action, change -not -path ... to -prune, and ensure that xargs is only used with NUL-delimited input:
find . -name .git -prune -o \
-name node_modules -prune -o \
-type f -print0 | xargs -0 sed -i '' s/typescript-library-skeleton/xxx/g
Note the following points:
We use -prune to tell find to not even recurse down the undesired directories, rather than -not -path ... to tell it to discard names in those directories after they were found.
We put the -prunes before the -type f, so we're able to match directories for pruning.
We have an explicit action, not depending on the default -print. This is important because the default -print effectively adds a set of parentheses: when no explicit action is given, find ... behaves like find '(' ... ')' -print, not like find ... -print.
We use xargs only with the -0 argument enabling NUL-delimited input, and the -print0 action on the find side to generate a NUL-delimited list of names. NUL is the only character which cannot be present in an arbitrary file path (yes, newlines can be present) -- and thus the only character which is safe to use to separate paths. (If the -0 extension to xargs and the -print0 extension to find are not guaranteed to be available, use -exec sed -i '' ... {} + instead).
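For reference, that -exec variant might look like the sketch below (keeping the BSD/macOS-style sed -i '' from the question; GNU sed would take just -i):
find . -name .git -prune -o \
       -name node_modules -prune -o \
       -type f -exec sed -i '' s/typescript-library-skeleton/xxx/g {} +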

find - suppress "No such file or directory" errors

I can use -s in grep to suppress errors, but I don't see an equivalent for the find command in the man page... Is the only option to redirect stderr to /dev/null (2>/dev/null)?
Or is there an option that handles this? (open to fancy awk and perl solutions if needed)
Example:
$ for dir in `ls /mnt/16_c/`; do find /mnt/16_c/$dir/data/ -mtime +180 -type f -exec echo {} \;; done
find: `/mnt/16_c/test_container/dat/': No such file or directory
You can redirect stderr with 2>/dev/null, for example:
find /mnt/16_c/$dir/data/ -mtime +180 -type f -exec echo {} \; 2>/dev/null
Btw, the code in your question can be replaced with:
find /mnt/16_c/*/data/ -mtime +180 -type f 2>/dev/null
And if there is at least one matching directory,
then you don't even need to suppress stderr,
because find will only search in directories that match this pattern.
I know this isn't exactly what you asked, but my typical approach is to find a path that definitely exists, then use the -path flag to filter.
So instead of find /home/not-a-path, which raises an error, I would do find /home -path "/home/not-a-path/*", which doesn't raise an error.
I had to do this because when I redirected a failing find to /dev/null in a makefile, the non-zero exit status would still cause the target to fail. The approach I described above works, though.
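In other words (the paths below are made up for illustration), the difference is in the exit status, which redirecting stderr doesn't change:
find /home/not-a-path -name '*.log' 2>/dev/null       # still exits non-zero, so make stops the recipe
find /home -path "/home/not-a-path/*" -name '*.log'   # exits 0 even though nothing matches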

using find with rsync is failing in a way I can't understand

I am trying to synchronise two directories, omitting certain files.
find . -type f -print0 |rsync -vupt -0 --files-from=- . /tmp/test
This copies every file of course, but works as expected
The problem arises when I try to limit the files copied. So if I want to copy all but the files ending ".part" :
find . -type f -\! -iname "*.part" -print0 |rsync -vupt -0 --files-from=- . /tmp/test
But this fails to copy anything at all.
If I remove the pipe I can see that find is outputting a stream that looks just like the kind of output from the first command but minus the file names I don't want to copy.
(I've tried -not and -name alternatives too without luck)
What am I doing wrong?
Thanks for any help.
The find negation is not a command, but an operator. You can use ! -iname "*.part" to negate that single expression.
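Applied to the command in the question, that would be something along these lines:
find . -type f ! -iname "*.part" -print0 | rsync -vupt -0 --files-from=- . /tmp/test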

How to return the absolute path of recursively matched arguments? (BASH)

OK, so simple enough: I want to recursively search a directory for files with a specific extension, and then perform an action on those files.
# pwd
/dir
# ls -R | grep .txt | xargs -I {} open {}
The file /dir/reallyinsubfolder.txt does not exist. ⬅ fails (bad)
No output, but it succeeds: /dir/fileinthisfolder.txt ⬅ opens silently (good)
This does find ALL the files I am interested in… but only opens those which happen to be one level deep. In this case, the attempt to open /dir/reallyinsubfolder.txt fails, as reallyinsubfolder.txt is actually /dir/sub/reallyinsubfolder.txt.
I understand that grep is simply returning the matched filename… which then chokes (in this case), the open command, as it fails to reach down to the correct sub-directory to execute the file..
How do I get grep to return the full path of a match?
How about using the find command -
find /path/to/dir -type f -iname "*.txt" -exec action to perform {} \;
find . -name '*.txt' -exec open {} \;
(The pattern is quoted and the semicolon escaped so the shell doesn't expand or interpret them.)
I believe you're asking the wrong question; parsing ls(1) output in this fashion is far more trouble than it is worth.
What would work far more reliably:
find /dir -name '*.txt' -print0 | xargs -0 open
or
find /dir -name '*.txt' -exec open {} \;
find(1) does not mangle names nearly as much as ls(1) and makes executing programs on matched files far more reliable.
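If you'd rather run a single open for all matches at once, the + terminator (part of POSIX find) does that:
find /dir -name '*.txt' -exec open {} +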
