Find files that have write permission for the current user - bash

I need to find files in a directory which have "write" permission for the current user.
I tried using find like below:
find $DIR -type f -user $(whoami) -perm -u+w
But the above find looks for files owned by $(whoami).
The files I am looking for may not be owned by $(whoami) but may still be writable by me (e.g., files with 666 permissions).
I also tried the following (sorry if it looks stupid):
find $DIR -type f -exec test -w {} \;

Your second approach seems to be on the right track; you just forgot to print the filename.
You could make a script
#!/bin/sh
test -w "$1" && echo "$1"
and then use -exec on this script.
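For concreteness, a minimal sketch of how that would be wired up (the script name is-writable.sh is just an example):
chmod +x ./is-writable.sh
find "$DIR" -type f -exec ./is-writable.sh {} \;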
UPDATE: If you don't like the idea of having a separate script, you can also put it into one line:
find $DIR -type f -exec bash -c 'test -w "$1" && echo "$1"' x {} \;
The lone x just serves as a placeholder, because this is what bash sees as $0, and you want to assign the current file to $1. Another possibility would be
find $DIR -type f -exec bash -c 'test -w "$0" && echo "$0"' {} \;
but for me, this looks less clear and like a misuse of $0.
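If there are many files, the same idea can also be batched; this is a sketch (not part of the original answer) that terminates -exec with + and loops over the arguments inside a single bash invocation:
find "$DIR" -type f -exec bash -c 'for f; do test -w "$f" && printf "%s\n" "$f"; done' _ {} +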

Your second command is correct; you just have to print the paths. Usually -print doesn't have to be mentioned, but once you use actions such as -exec, you have to say explicitly that you want the matched paths printed.
find "$DIR" -type f -exec test -w {} \; -print
You may wonder: »Why does this print only writeable files?«
find uses short-circuit evaluation – the next option is only evaluated if the preceding option succeeded.
Example: In the command find -type f -user USER the check -user USER will only be performed on files, not on directories as -type f fails for directories.
The -exec cmd option also acts as a check – the exit status of cmd will be used to determine whether the check passed or not.
Example: find -exec false \; -user USER won't ever perform the check -user USER since the program false never succeeds.
In your case that means that -print will only be executed if test -w succeeded.
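Because -exec acts as a check, it can also be negated; a small variation (a sketch, not from the answer above) that lists the files the current user can not write to:
find "$DIR" -type f ! -exec test -w {} \; -print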

Related

Why does "find . -name somedir -exec rm -r {} \;" exit nonzero whenever it successfully deletes a directory?

I want to find a folder in a known folder but with an unknown exact path, so I must use find. When I find it, I use -exec to remove it, but I cannot evaluate whether this has succeeded. The behavior somewhat confuses me; if I don't find the folder, I get return code 0:
>find . -name "testuuu" -exec rm -r {} \;
find: ‘./.docker’: Permission denied
>echo $?
0
But when I do find the folder and manage to delete it, it returns error code 1:
> find . -name "test" -exec rm -r {} \;
find: ‘./.docker’: Permission denied
find: ‘./test’: No such file or directory
> echo $?
1
Is this expected behavior? Why?
Further, how can I get the return code of "rm" rather than "find", so I can tell whether the folder was deleted?
Use the -depth option to request a depth-first traversal -- that way find doesn't try to find things under your directory after the directory has already been deleted (an operation which, by nature, will always fail).
find . -depth -name "test" -exec rm -r -- '{}' \;
This option is also turned on by default when you use the -delete action in GNU find to delete content.
By the way -- if there are lots of test directories, you could get better performance by making that -exec rm -r -- {} + (which passes each copy of rm as many filenames as will fit on its command line) instead of -exec rm -r -- {} \; (which starts a new copy of rm for each test that is found), at the expense of no longer collecting individual error codes.
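If you really need rm's exit status rather than find's, one option (a sketch, assuming bash and a find with -print0) is to drive rm from a loop and collect its status:
status=0
while IFS= read -r -d '' dir; do
    rm -r -- "$dir" || status=$?
done < <(find . -depth -name "test" -type d -print0)
exit "$status"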

Adding a simple piped command - bash

I'm learning bash scripting and needed some simple help.
Here is what I have thus far:
find . -type d -empty -not -path "./.git/*" -exec touch {}/.gitkeep \;
So what this does is start from a root path, find all directories inside this root path that are empty and do not have a .git folder, and then, when that operation is successful, run -exec touch {}/.gitkeep to create a file .gitkeep inside that empty directory to ensure proper git commits.
What I want now is to echo out the current file path for the gitkeep file just created.
My first question is:
Should I be piping with | like so:
find . -type d -empty -not -path "./.git/*" -exec touch {}/.gitkeep | outputFilenameDisplayFunction \;
Or maybe repeat what -exec does, like so:
find . -type d -empty -not -path "./.git/*" -exec touch {}/.gitkeep - exec outputFilenameDisplayFunction \;
Or maybe use >
find . -type d -empty -not -path "./.git/*" -exec touch {}/.gitkeep > outputFilenameDisplayFunction \;
None of these commands has been tested yet. I really am looking for explanations so I can be knowledgeable in the future.
As mentioned here, find accepts multiple -exec portions to the command.
In your case, the second one can call a script, as in here:
find . -type d -empty -not -path "./.git/*" -exec touch {}/.gitkeep \; -exec myscript {} \;
Note the \;.
The script would be:
#!/bin/sh
echo "$1" > "afile"
Charles Duffy actually proposes in the comments, for the second -exec:
-exec sh -c 'echo "$1" >>aFile' _ {} \;
avoiding the need for an external file to store your script.
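Putting the two -exec actions together, the whole command would then look roughly like this (a sketch; aFile is just the example log name from that comment):
find . -type d -empty -not -path "./.git/*" \
    -exec touch {}/.gitkeep \; \
    -exec sh -c 'echo "$1" >>aFile' _ {} \;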
Let's start from your stated requirements:
So what this does is start from a root path, find all directories inside this root path that are empty and do not have a .git folder, and then, when that operation is successful, run -exec touch {}/.gitkeep to create a file .gitkeep inside that empty directory to ensure proper git commits.
If a directory is empty, it "can't have a .git folder" in the sense of having a child named .git by definition -- if it had any subdirectory, it wouldn't be empty. So we can completely ignore that part of your description in prose -- or interpret it to refer to what the code actually appears to be intended to do: pruning any directory which is under .git.
Should that be your intent, -path is the wrong tool for that job altogether, as it still searches the .git tree (and then excludes all the things that it found); instead, use -prune to stop find from recursing down that path at all:
while IFS= read -r -d '' dirname; do
    touch -- "${dirname}/.gitkeep"
    printf '%q\n' "$dirname"   # this goes to the logfile, since we open it for the whole loop
done < <(find . -name .git -prune -o -type d -empty -print0) >logFile
Why prefer this approach?
Instead of starting a shell per directory found (as would happen if you used -exec to start a shell script or a shell), it keeps your initial/primary shell running, and iterates through the loop once per item found.
Because it's running code in that shell, you can use shell functions; modify shell variables (for example, (( ++directoriesFound )) to keep a counter); or perform redirections scoped to the loop (i.e. >logFile) to open an output file just once and use it repeatedly within.
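As a small illustration of that last point, here is a sketch with a counter (directoriesFound is just an example variable name):
directoriesFound=0
while IFS= read -r -d '' dirname; do
    touch -- "${dirname}/.gitkeep"
    (( ++directoriesFound ))   # survives the loop because it runs in the current shell
done < <(find . -name .git -prune -o -type d -empty -print0)
echo "created .gitkeep in $directoriesFound directories"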
On GNU/anything, find has -printf, which makes doing what you want straightforward:
find -name .git -prune \
    -o -type d -empty -printf '%p/.gitkeep\n' -execdir touch {}/.gitkeep \;
(note: fixed the omitted {}/; GNU find's -execdir doesn't change the behavior here, but it is safer than -exec on systems that may find themselves under attack: the exec'd command runs directly in the directory find reached, rather than making the executed command re-walk the full path).

Shell: stop script if find command fails

Good day.
In a script of mine I have the following find command:
find -maxdepth 1 \! -type d -name "some_file_name_*" -name "*.txt" -name "*_${day_month}_*" -exec cp {} /FILES/directory1/directory2/directory3/ +
I want to know how to stop the script if the command doesn't find anything.
Use GNU xargs with the -r switch and a pipeline to ensure the output of find is passed to cp only if it is non-empty.
find -maxdepth 1 \! -type d -name "some_file_name_*" -name "*.txt" -name "*_${day_month}_*" \
| xargs -r -I{} cp {} /FILES/directory1/directory2/directory3/
-I{} makes {} a placeholder for each name read from find's output, which is then passed to cp.
The flags -r and -I mean the following, according to the xargs man page:
-r, --no-run-if-empty
If the standard input does not contain any nonblanks, do not run
the command. Normally, the command is run once even if there is
no input. This option is a GNU extension.
-I replace-str
Replace occurrences of replace-str in the initial-arguments with
names read from standard input.
You may add -exec false {} + so that find returns a non-zero exit status when something is found (which makes the logic a bit upside-down, though):
if find . -name foo -exec echo ok ';' -exec false {} +
then
    echo 'not found'
    exit
fi
echo found
See the similar question on Stack Exchange: How to detect whether “find” found any matches?, in particular this answer, which suggests the false trick.
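Another common pattern, not from the linked answer but sketched here for completeness, is to capture find's output first and stop when it is empty (this assumes file names without embedded newlines):
matches=$(find -maxdepth 1 \! -type d -name "some_file_name_*" -name "*.txt" -name "*_${day_month}_*")
if [ -z "$matches" ]; then
    echo "nothing found, stopping" >&2
    exit 1
fi
printf '%s\n' "$matches" | xargs -r -I{} cp {} /FILES/directory1/directory2/directory3/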

How to cd into grep output?

I have a shell script which basically searches all folders inside a location and I use grep to find the exact folder I want to target.
for dir in /root/*; do
grep "Apples" "${dir}"/*.* || continue
While grep successfully finds my target directory, I'm stuck on how to move the folders I want into that target directory. An idea I had was to cd into the grep output, but that's where I got stuck. I tried some Google results; none helped with my case.
Example grep output: Binary file /root/ant/containers/secret/Documents/2FD412E0/file.extension matches
I want to cd into 2FD412E0 and move two folders inside that directory.
dirname is the key to that:
cd $(dirname $(grep "...." ...))
will let you enter the directory.
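In practice you will want grep's -l (list only matching file names) and proper quoting so paths with spaces survive; a minimal sketch, assuming only the first match matters:
cd "$(dirname "$(grep -Rl "Apples" /root/ | head -n 1)")"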
As people mentioned, dirname is the right tool to strip off the file name from the path.
I would use find for such kind of task:
while read -r file
do
    target_dir=$(dirname "$file")
    # do something with "$target_dir"
done < <(find /root/ -type f \
    -exec grep "Apples" --files-with-matches {} \;)
Consider using find's -maxdepth option. See the man page for find.
Well, there is actually a simpler solution :) I just like writing bash scripts. You might simply use a single find command like this:
find /root/ -type f -exec grep Apples {} ';' -exec ls -l {} ';'
Note the second -exec. It will only be executed if the previous -exec command exited with status 0 (success). From the man page:
-exec command ;
Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of ; is encountered. The string {} is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find.
Replace the ls -l command with your stuff.
And if you want to execute dirname within the -exec command, you may do the following trick:
find /root/ -type f -exec grep -q Apples {} ';' \
    -exec sh -c 'cd "$(dirname "$0")"; pwd' {} ';'
Replace pwd with your stuff.
When find is not available
In the comments you write that find is not available on your system. The following solution works without find:
grep -R --files-with-matches Apples "${dir}" | while read -r file
do
    target_dir=$(dirname "$file")
    # do something with "$target_dir"
    echo "$target_dir"
done
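To tie this back to the original goal (moving two folders into the matched directory), the "do something" line in either loop could become something like the following (a sketch; folderA and folderB are hypothetical names for the folders you actually want to move):
mv ./folderA ./folderB "$target_dir"/   # hypothetical folder names; adjust the paths to your two folders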

bash script rm cannot delete folder created by php mkdir

I cannot delete a folder created by php mkdir:
for I in `echo $*`
do
find $I -type f -name "sess_*" -exec rm -f {} \;
find $I -type f -name "*.bak" -exec rm -f {} \;
find $I -type f -name "Thumbs.db" -exec rm -f {} \;
find $I -type f -name "error.log" -exec sh -c 'echo -n > "{}"' -f {} \;
find $I -type f -path "*/cache/*" -name "*.*" -exec rm -f {} \;
find $I -path "*/uploads/*" -exec rm -rdf {} \;
done
I want to delete all files and folders under /uploads/. Please help me, thanks.
You should consider changing your find commands to use the -o operator to join your conditions together, since the final -exec is basically the same for each. This avoids walking the file system repeatedly.
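A hedged sketch of what that combined traversal could look like with GNU find (simplified: it keeps the sess_*, *.bak and Thumbs.db cleanup plus the uploads/ purge, and leaves out the error.log truncation and the cache cleanup):
find "$I" \
    \( -type f \( -name 'sess_*' -o -name '*.bak' -o -name 'Thumbs.db' \) -exec rm -f -- {} + \) \
    -o \( -path '*/uploads/*' -prune -exec rm -rf -- {} + \)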
The other answers address your concern about php mkdir. I'll just add that it has nothing to do with the fact that the folder was created with php mkdir rather than any other code or command; it is due to the ownership and permissions.
I think this is most likely because php is running in apache or another http server under a different user than the one invoking the bash script. Or perhaps the files uploaded to uploads/ are owned by the http server's user and not by the user invoking it.
Make sure that you run the bash script under the same user as your http server.
To find out which user owns which file do:
ls -l
If you run your bash script as root, you should be able to delete it anyway, but that is not recommended.
Update
To run it as root for nautilus script use the following as your nautilus script:
gksudo runmydeletescript
Then put all the other code into another file with the same path as whatever you have put for runmydeletescript and run chmod +x on it. This is extremely dangerous!
You should probably add -depth to the command to delete sub-directories of uploads/ before the directory itself.
I worry about the -path but I'm not familiar with it.
Also consider using + instead of \; to reduce the number of commands executed.
