Can xargs execute a subshell command for each argument? - bash

I have a command which is attempting to generate UUIDs for files:
find -printf "%P\n"|sort|xargs -L 1 echo $(uuid)
But in the result, xargs is only executing the $(uuid) subshell once:
8aa9e7cc-d3b2-11e4-83a6-1ff1acc22a7e file1
8aa9e7cc-d3b2-11e4-83a6-1ff1acc22a7e file2
8aa9e7cc-d3b2-11e4-83a6-1ff1acc22a7e file3
Is there a one-liner (i.e not a function) to get xargs to execute a subshell command on each input?

This is because the $(uuid) gets expanded in the current shell. You could explicitly call a shell:
find -printf "%P\n"| sort | xargs -I '{}' bash -c 'echo $(uuid) {}'
Btw, I would use the following command:
find -exec bash -c 'echo "$(uuid) ${1#./}"' -- '{}' \;
without xargs.

hek2mgl's answer explains the problem well and his solution works; this answer looks at performance.
The accepted answer is a tad slow, because it creates a bash process for every input line.
While xargs is generally preferable to and faster than a shell-code loop, in this particular case the roles are reversed, because shell functionality is needed in each iteration.
The following alternative solution uses a while loop to process the input lines, and, on my machine, is about twice as fast as the xargs solution.
find . -printf "%P\n" | sort | while IFS= read -r f; do echo "$(uuid) $f"; done
Note the use of while rather than for, because for cannot robustly parse command output (in short: filenames with embedded whitespace would break the command - see http://mywiki.wooledge.org/DontReadLinesWithFor).
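A quick illustration of the problem, using a hypothetical file name containing a space:
$ for f in $(printf 'a b.txt\n'); do echo "[$f]"; done
[a]
[b.txt]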
If you're concerned about filenames with embedded newlines (very rare) and use GNU utilities, you could use NUL bytes as separators:
find . -printf "%P\0" | sort -z | while IFS= read -d '' -r f; do echo "$(uuid) $f"; done
Update: The fastest approach is to not use a shell loop at all, as evidenced by ᴳᵁᴵᴰᴼ's clever answer.
See below for a portable version of his answer.
Compatibility note:
The OP's find command implies the use of GNU find (Linux), and uses features (-printf) that may not work on other platforms.
Here's a portable version of ᴳᵁᴵᴰᴼ's answer that uses only POSIX-compliant features of find (and awk).
Note, however, that uuid is not a POSIX utility; since Linux and BSD-like systems (including OSX) have a uuidgen utility, the command uses that instead:
find . -exec printf '%s\t' {} \; -exec uuidgen \; |
awk -F '\t' '{ sub(/.+\//,"", $1); print $2, $1 }' | sort -k2

With a for loop:
for i in $(find -printf "%P\n" | sort) ; do echo "$(uuid) $i"; done
Edit: another way to do this:
find -printf "%P\0" -exec uuid -v 4 \; | sort | awk -F'\0' '{ print $2 " " $1}'
This outputs the filename followed by the UUID (no subshell required) so that the sort happens on the filename, then swaps the two NUL-separated columns.
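To make that concrete, the intermediate stream (before awk) looks roughly like this, one record per file, where \0 stands in for the NUL byte and <uuid-N> for the UUIDs generated by the first -exec (names hypothetical):
file1\0<uuid-1>
file2\0<uuid-2>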

Related

Bash: recursively rename part of a file [duplicate]

I want to go through a bunch of directories and rename all files that end in _test.rb to end in _spec.rb instead. It's something I've never quite figured out how to do with bash so this time I thought I'd put some effort in to get it nailed. I've so far come up short though, my best effort is:
find spec -name "*_test.rb" -exec echo mv {} `echo {} | sed s/test/spec/` \;
NB: there's an extra echo after exec so that the command is printed instead of run while I'm testing it.
When I run it the output for each matched filename is:
mv original original
i.e. the substitution by sed has been lost. What's the trick?
The solution closest to the original approach is probably to use xargs' "args per command line" option:
find . -name "*_test.rb" | sed -e "p;s/test/spec/" | xargs -n2 mv
It finds the files in the current working directory recursively, prints the original file name (p) and then a modified name (s/test/spec/), and feeds the pairs to mv (xargs -n2). Beware that in this case the path itself must not contain the string test.
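To see exactly what xargs receives, run the pipeline without it (file name hypothetical):
$ find . -name "*_test.rb" | sed -e "p;s/test/spec/"
./a_test.rb
./a_spec.rb
xargs -n2 then consumes these two at a time, producing mv ./a_test.rb ./a_spec.rb.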
This happens because sed receives the string {} as input, as can be verified with:
find . -exec echo `echo "{}" | sed 's/./foo/g'` \;
which prints foofoo for each file in the directory, recursively. The reason for this behavior is that the pipeline is executed once, by the shell, when it expands the entire command.
There is no way of quoting the sed pipeline in such a way that find will execute it for every file, since find doesn't execute commands via the shell and has no notion of pipelines or backquotes. The GNU findutils manual explains how to perform a similar task by putting the pipeline in a separate shell script:
#!/bin/sh
echo "$1" | sed 's/_test.rb$/_spec.rb/'
(There may be some perverse way of using sh -c and a ton of quotes to do all this in one command, but I'm not going to try.)
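For completeness, a sketch of one way to wire that helper into find, assuming the script above is saved as ./newname.sh and made executable:
find spec -name "*_test.rb" -exec sh -c 'mv "$1" "$(./newname.sh "$1")"' _ {} \;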
You might want to consider another way, like:
for file in $(find . -name "*_test.rb")
do
echo mv "$file" "$(echo "$file" | sed 's/_test\.rb$/_spec.rb/')"
done
I find this one shorter
find . -name '*_test.rb' -exec bash -c 'echo mv "$0" "${0/test.rb/spec.rb}"' {} \;
You can do it without sed, if you want:
for i in `find -name '*_test.rb'` ; do mv $i ${i%%_test.rb}_spec.rb ; done
${var%%suffix} strips suffix from the value of var.
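A quick demonstration:
$ i=spec/a_test.rb
$ echo "${i%%_test.rb}_spec.rb"
spec/a_spec.rb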
or, to do it using sed:
for i in `find -name '*_test.rb'` ; do mv $i `echo $i | sed 's/test/spec/'` ; done
You mention that you are using bash as your shell, in which case you don't actually need find and sed to achieve the batch renaming you're after...
Assuming you are using bash as your shell:
$ echo $SHELL
/bin/bash
$ _
... and assuming you have enabled the so-called globstar shell option:
$ shopt -p globstar
shopt -s globstar
$ _
... and finally assuming you have installed the rename utility (found in the util-linux-ng package)
$ which rename
/usr/bin/rename
$ _
... then you can achieve the batch renaming in a bash one-liner as follows:
$ rename _test _spec **/*_test.rb
(the globstar shell option will ensure that bash finds all matching *_test.rb files, no matter how deeply they are nested in the directory hierarchy... use help shopt to find out how to set the option)
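If the option isn't enabled yet, a sketch of turning it on and previewing the matches before renaming:
$ shopt -s globstar
$ printf '%s\n' **/*_test.rb   # preview which files would be renamed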
The easiest way:
find . -name "*_test.rb" | xargs rename s/_test/_spec/
The fastest way (assuming you have 4 processors):
find . -name "*_test.rb" | xargs -P 4 rename s/_test/_spec/
If you have a large number of files to process, it is possible that the list of filenames piped to xargs would cause the resulting command line to exceed the maximum length allowed.
You can check your system's limit using getconf ARG_MAX
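For example (the value shown is typical of a modern Linux system; yours may differ):
$ getconf ARG_MAX
2097152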
On most Linux systems you can use free -b or cat /proc/meminfo to find how much RAM you have to work with; otherwise, use top or your system's activity monitor app.
A safer way (assuming you have 1000000 bytes of ram to work with):
find . -name "*_test.rb" | xargs -s 1000000 rename s/_test/_spec/
Here is what worked for me when the file names had spaces in them. The example below recursively renames all .dar files to .zip files:
find . -name "*.dar" -exec bash -c 'mv "$0" "`echo \"$0\" | sed s/.dar/.zip/`"' {} \;
For this you don't need sed. You can get along perfectly well with a while loop fed with the result of find through a process substitution.
So if you have a find expression that selects the needed files, then use the syntax:
while IFS= read -r file; do
echo "mv $file ${file%_test.rb}_spec.rb" # remove "echo" when OK!
done < <(find -name "*_test.rb")
This will find the files and rename all of them, stripping the string _test.rb from the end and appending _spec.rb.
For this step we use Shell Parameter Expansion where ${var%string} removes the shortest matching pattern "string" from $var.
$ file="HELLOa_test.rbBYE_test.rb"
$ echo "${file%_test.rb}" # remove _test.rb from the end
HELLOa_test.rbBYE
$ echo "${file%_test.rb}_spec.rb" # remove _test.rb and append _spec.rb
HELLOa_test.rbBYE_spec.rb
See an example:
$ tree
.
├── ab_testArb
├── a_test.rb
├── a_test.rb_test.rb
├── b_test.rb
├── c_test.hello
├── c_test.rb
└── mydir
└── d_test.rb
$ while IFS= read -r file; do echo "mv $file ${file/_test.rb/_spec.rb}"; done < <(find -name "*_test.rb")
mv ./b_test.rb ./b_spec.rb
mv ./mydir/d_test.rb ./mydir/d_spec.rb
mv ./a_test.rb ./a_spec.rb
mv ./c_test.rb ./c_spec.rb
if you have Ruby (1.9+)
ruby -e 'Dir["**/*._test.rb"].each{|x|test(?f,x) and File.rename(x,x.gsub(/_test/,"_spec") ) }'
In ramtam's answer (which I like), the find portion works OK but the remainder does not if the path has spaces. I am not too familiar with sed, but I was able to modify that answer to:
find . -name "*_test.rb" | perl -pe 's/^((.*_)test.rb)$/"\1" "\2spec.rb"/' | xargs -n2 mv
I really needed a change like this because in my use case the final command looks more like
find . -name "olddir" | perl -pe 's/^((.*)olddir)$/"\1" "\2new directory"/' | xargs -n2 mv
I haven't the heart to do it all over again, but I wrote this in answer to Commandline Find Sed Exec. There the asker wanted to know how to move an entire tree, possibly excluding a directory or two, and rename all files and directories containing the string "OLD" to instead contain "NEW".
Besides describing the how with painstaking verbosity below, this method may also be unique in that it incorporates built-in debugging. It basically doesn't do anything at all as written except compile and save to a variable all commands it believes it should do in order to perform the work requested.
It also explicitly avoids loops as much as possible. Besides the sed recursive search for more than one match of the pattern there is no other recursion as far as I know.
And last, this is entirely NUL-delimited: it doesn't trip on any character in any filename except the NUL byte, which you shouldn't have in a filename anyway.
By the way, this is REALLY fast. Look:
% _mvnfind() { mv -n "${1}" "${2}" && cd "${2}"
> read -r SED <<SED
> :;s|${3}\(.*/[^/]*${5}\)|${4}\1|;t;:;s|\(${5}.*\)${3}|\1${4}|;t;s|^[0-9]*[\t]\(mv.*\)${5}|\1|p
> SED
> find . -name "*${3}*" -printf "%d\tmv %P ${5} %P\000" |
> sort -zg | sed -nz ${SED} | read -r ${6}
> echo <<EOF
> Prepared commands saved in variable: ${6}
> To view do: printf ${6} | tr "\000" "\n"
> To run do: sh <<EORUN
> $(printf ${6} | tr "\000" "\n")
> EORUN
> EOF
> }
% rm -rf "${UNNECESSARY:=/any/dirs/you/dont/want/moved}"
% time ( _mvnfind ${SRC=./test_tree} ${TGT=./mv_tree} \
> ${OLD=google} ${NEW=replacement_word} ${sed_sep=SsEeDd} \
> ${sh_io:=sh_io} ; printf %b\\000 "${sh_io}" | tr "\000" "\n" \
> | wc - ; echo ${sh_io} | tr "\000" "\n" | tail -n 2 )
<actual process time used:>
0.06s user 0.03s system 106% cpu 0.090 total
<output from wc:>
Lines Words Bytes
115 362 20691 -
<output from tail:>
mv .config/replacement_word-chrome-beta/Default/.../googlestars \
.config/replacement_word-chrome-beta/Default/.../replacement_wordstars
NOTE: The above function will likely require GNU versions of sed and find to properly handle the find printf and sed -z -e and :;recursive regex test;t calls. If these are not available to you the functionality can likely be duplicated with a few minor adjustments.
This should do everything you wanted from start to finish with very little fuss. I did fork with sed, but I was also practicing some sed recursive branching techniques so that's why I'm here. It's kind of like getting a discount haircut at a barber school, I guess. Here's the workflow:
rm -rf ${UNNECESSARY}
I intentionally left out any functional call that might delete or destroy data of any kind. You mention that ./app might be unwanted. Delete it or move it elsewhere beforehand, or, alternatively, you could build in a \( -path PATTERN -exec rm -rf \{\} \) routine to find to do it programmatically, but that one's all yours.
_mvnfind "${#}"
Declare its arguments and call the worker function. ${sh_io} is especially important in that it saves the return from the function. ${sed_sep} comes in a close second; this is an arbitrary string used to reference sed's recursion in the function. If ${sed_sep} is set to a value that could potentially be found in any of your path- or file-names acted upon... well, just don't let it be.
mv -n $1 $2
The whole tree is moved from the beginning. It will save a lot of headache; believe me. The rest of what you want to do - the renaming - is simply a matter of filesystem metadata. If you were, for instance, moving this from one drive to another, or across filesystem boundaries of any kind, you're better off doing so at once with one command. It's also safer. Note the -n (no-clobber) option set for mv; as written, this function will not put ${SRC_DIR} where a ${TGT_DIR} already exists.
read -r SED <<HEREDOC
I located all of sed's commands here to save on escaping hassles and read them into a variable to feed to sed below. Explanation below.
find . -name ${OLD} -printf
We begin the find process. With find we search only for anything that needs renaming because we already did all of the place-to-place mv operations with the function's first command. Rather than take any direct action with find, like an exec call, for instance, we instead use it to build out the command-line dynamically with -printf.
%dir-depth :tab: 'mv '%path-to-${SRC}' '${sed_sep}'%path-again :null delimiter:'
After find locates the files we need it directly builds and prints out (most) of the command we'll need to process your renaming. The %dir-depth tacked onto the beginning of each line will help to ensure we're not trying to rename a file or directory in the tree with a parent object that has yet to be renamed. find uses all sorts of optimization techniques to walk your filesystem tree and it is not a sure thing that it will return the data we need in a safe-for-operations order. This is why we next...
sort -general-numerical -zero-delimited
We sort all of find's output based on %directory-depth so that the paths nearest in relationship to ${SRC} are worked first. This avoids possible errors involving mving files into non-existent locations, and it minimizes the need for recursive looping (in fact, you might be hard-pressed to find a loop at all).
sed -ex :rcrs;srch|(save${sep}*til)${OLD}|\saved${SUBSTNEW}|;til ${OLD=0}
I think this is the only loop in the whole script, and it only loops over the second %Path printed for each string in case it contains more than one ${OLD} value that might need replacing. All other solutions I imagined involved a second sed process, and while a short loop may not be desirable, certainly it beats spawning and forking an entire process.
So basically what sed does here is search for ${sed_sep}, then, having found it, saves it and all characters it encounters until it finds ${OLD}, which it then replaces with ${NEW}. It then heads back to ${sed_sep} and looks again for ${OLD}, in case it occurs more than once in the string. If it is not found, it prints the modified string to stdout (which it then catches again next) and ends the loop.
This avoids having to parse the entire string, and ensures that the first half of the mv command string, which needs to include ${OLD} of course, does include it, and the second half is altered as many times as is necessary to wipe the ${OLD} name from mv's destination path.
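A minimal standalone sketch of that branch-and-test idiom, with SEP, OLD, and NEW as stand-ins for ${sed_sep}, ${OLD}, and ${NEW}:
$ echo 'SEP/x/OLD/y/OLD' | sed -e :l -e 's|SEP\(.*\)OLD|SEP\1NEW|' -e tl
SEP/x/NEW/y/NEW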
sed -ex...-ex search|%dir_depth(save*)${sed_sep}|(only_saved)|out
The two -exec calls here happen without a second fork. In the first, as we've seen, we modify the mv command as supplied by find's -printf function command as necessary to properly alter all references of ${OLD} to ${NEW}, but in order to do so we had to use some arbitrary reference points which should not be included in the final output. So once sed finishes all it needs to do, we instruct it to wipe out its reference points from the hold-buffer before passing it along.
AND NOW WE'RE BACK AROUND
read will receive a command that looks like this:
% mv /path2/$SRC/$OLD_DIR/$OLD_FILE /same/path_w/$NEW_DIR/$NEW_FILE \000
It will read it into ${msg} as ${sh_io} which can be examined at will outside of the function.
Cool.
-Mike
I was able to handle filenames with spaces by following the examples suggested by onitake.
This doesn't break if the path contains spaces or the string test:
find . -name "*_test.rb" -print0 | while read -d $'\0' file
do
echo mv "$file" "$(echo $file | sed s/test/spec/)"
done
This is an example that should work in all cases.
It works recursively, needs just the shell, and supports file names with spaces.
find spec -name "*_test.rb" -print0 | while read -d $'\0' file; do mv "$file" "$(echo "$file" | sed s/test/spec/)"; done
$ find spec -name "*_test.rb"
spec/dir2/a_test.rb
spec/dir1/a_test.rb
$ find spec -name "*_test.rb" | xargs -n 1 /usr/bin/perl -e '($new=$ARGV[0]) =~ s/test/spec/; system(qq(mv),qq(-v), $ARGV[0], $new);'
`spec/dir2/a_test.rb' -> `spec/dir2/a_spec.rb'
`spec/dir1/a_test.rb' -> `spec/dir1/a_spec.rb'
$ find spec -name "*_spec.rb"
spec/dir2/b_spec.rb
spec/dir2/a_spec.rb
spec/dir1/a_spec.rb
spec/dir1/c_spec.rb
Your question seems to be about sed, but to accomplish your goal of recursive rename, I'd suggest the following, shamelessly ripped from another answer I gave here: recursive rename in bash
#!/bin/bash
IFS=$'\n'
function RecurseDirs
{
for f in "$#"
do
newf=echo "${f}" | sed -e 's/^(.*_)test.rb$/\1spec.rb/g'
echo "${f}" "${newf}"
mv "${f}" "${newf}"
f="${newf}"
if [[ -d "${f}" ]]; then
cd "${f}"
RecurseDirs $(ls -1 ".")
fi
done
cd ..
}
RecurseDirs .
A more robust way to rename, using find and sed extended regular expressions:
mkdir ~/practice
cd ~/practice
touch classic.txt.txt
touch folk.txt.txt
Remove the ".txt.txt" extension as follows -
cd ~/practice
find . -name "*txt" -execdir sh -c 'mv "$0" `echo "$0" | sed -r 's/\.[[:alnum:]]+\.[[:alnum:]]+$//'`' {} \;
If you use + in place of ; to work in batch mode, the above command will rename only the first matching file, not the entire list of files matched by find:
find . -name "*txt" -execdir sh -c 'mv "$0" `echo "$0" | sed -r 's/\.[[:alnum:]]+\.[[:alnum:]]+$//'`' {} +
Here's a one-liner that does the trick.
sed can't handle this cleanly, especially if multiple arguments are passed by xargs with -n 2.
A bash substitution handles it easily:
find ./spec -type f -name "*_test.rb" -print0 | xargs -0 -I {} bash -c 'file=$1; mv "$file" "${file/_test.rb/_spec.rb}"' _ {}
Adding -type f limits the move operations to files only, and -print0 handles whitespace in paths.
I share this post as it is somewhat related to the question. Sorry for not providing more details; hope it helps someone else.
http://www.peteryu.ca/tutorials/shellscripting/batch_rename
This is my working solution:
for FILE in {{FILE_PATTERN}}; do mv "${FILE}" "$(echo "${FILE}" | sed 's/{{SOURCE_PATTERN}}/{{TARGET_PATTERN}}/g')"; done

From UNIX shell, how to find all files containing a specific string, then print the 4th line of each file?

I want to find all files within the current directory that contain a given string, then print just the 4th line of each file.
grep --null -l "$yourstring" * | # List all the files containing your string
xargs -0 -n1 sed -n '4{p;q}' # Print the fourth line of each such file.
Different editions of grep have slightly different incantations of --null, but it's usually there in some form. Read your manpage for details.
Update: I believe one of the null-file-list incantations of grep is a reasonable solution that will cover the vast majority of real-world use cases. But to be entirely portable: if your version of grep does not support null output, it is not perfectly safe to use it with xargs, so you must resort to find.
find . -maxdepth 1 -type f -exec grep -q "$yourstring" {} \; -exec sed -n '4{p;q}' {} \;
Because find arguments can almost all be used as predicates, the -exec grep -q… part filters the files that are eventually fed to sed down to only those that contain the required string.
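A tiny sketch of the same gating with -print instead of the second -exec (search string hypothetical):
find . -maxdepth 1 -type f -exec grep -q "needle" {} \; -print
Only the names of files containing the string make it past the grep -q predicate.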
From another user:
grep -Frl string . | xargs -n 1 sed -n 4p
Give the GNU find command below a try:
find . -maxdepth 1 -type f -exec grep -l 'yourstring' {} \; | xargs -I {} awk 'NR==4{print; exit}' {}
It finds all the files in the current directory that contain the specific string, and prints line 4 of each such file.
This while loop should work:
while read -d '' -r file; do
echo -n "$file: "
sed '4q;d' "$file"
done < <(grep --null -l "some-text" *.txt)

xargs with multiple commands

In the current directory, I'd like to print the filename and contents in it.
I can print filenames or contents separately by
find . | grep "file_for_print" | xargs echo
find . | grep "file_for_print" | xargs cat
but what I want is printing them together like this:
file1
line1 inside file1
line2 inside file1
file2
line1 inside file2
line2 inside file2
I read xargs with multiple commands as argument
and tried
find . | grep "file_for_print" | xargs -I % sh -c 'echo; cat;'
but it doesn't work.
I'm not familiar with xargs, so don't know what exactly "-I % sh -c" means.
Could anyone help me? Thank you!
find . | grep "file_for_print" | xargs -I % sh -c 'echo %; cat %;' (OP was missing %s)
To start with, there is virtually no difference between:
find . | grep "file_for_print" | xargs echo
and
find . -name "file_for_print*"
except that the second one will not match filenames like this_is_not_the_file_for_print, and it will print the filenames one per line. It will also be a lot faster, because it doesn't need to generate and print the entire recursive directory structure just in order for grep to toss most of it away.
find . -name "file_for_print*"
is actually exactly the same as
find . -name "file_for_print*" -print
where the -print action prints each matched filename followed by a newline. If you don't provide find with any actions, it assumes you wanted -print. But it has more tricks up its sleeve than that. For example:
find . -name "file_for_print*" -exec cat {} \;
The -exec action causes find to execute the following command, up to the \;, replacing {} with each matching file name.
find does not limit itself to a single action. You can tell it to do however many you want. So:
find . -name "file_for_print*" -print -exec cat {} \;
will probably do pretty well what you want.
For lots more information on this very useful utility, type:
man find
or
info find
and read all about it.
Since it's not been said yet: -I % tells xargs to replace '%' with the arguments in the command you give it. The sh -c '...' just means run the commands '...' in a new shell.
So
xargs -I % sh -c 'echo %; cat %;'
will run echo [filename] followed by cat [filename] for every filename given to xargs. The echo and cat commands will be executed inside a different shell process but this usually doesn't matter. Your version didn't work because it was missing the % signs inside the command passed to xargs.
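One caveat: splicing % straight into the shell string breaks on filenames containing quotes or $. A safer sketch passes the name as a positional parameter instead:
find . -name "file_for_print*" -print0 | xargs -0 -I {} sh -c 'echo "$1"; cat "$1"' _ {}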
For what it's worth I would use this command to achieve the same thing:
find -name "*file_for_print*" | parallel 'echo {}; cat {};'
because it's simpler (parallel automatically uses {} as the substitution character and can take multiple commands by default).
In this specific case, each command is executed for each individual file anyway, so there's no advantage in using xargs. You may just append -exec twice to your 'find':
find . -name "*file_for_print*" -exec echo {} \; -exec cat {} \;
In this case, -print could be used instead of the first -exec echo, as pointed out by rici, but this example shows the ability to execute two arbitrary commands with a single find.
What about writing your own bash function?
#!/bin/bash
myFunction() {
while read -r file; do
echo "$file"
cat "$file"
done
}
find . -name "file_for_print*" | myFunction

What is the best way to count "find" results?

My current solution would be find <expr> -exec printf '.' \; | wc -c, but this takes far too long when there are more than 10000 results. Is there no faster/better way to do this?
Why not
find <expr> | wc -l
as a simple portable solution? Your original solution spawns a new printf process for every individual file found, and that's very expensive (as you've just discovered).
Note that this will overcount if you have filenames with newlines embedded, but if you have that then I suspect your problems run a little deeper.
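If that edge case matters to you, a newline-proof sketch (assuming GNU find) counts NUL bytes instead of lines:
find <expr> -print0 | tr -dc '\0' | wc -c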
Try this instead (requires find's -printf support):
find <expr> -type f -printf '.' | wc -c
It will be more reliable and faster than counting the lines.
Note that I use the find's printf, not an external command.
Let's benchmark a bit:
$ ls -1
a
e
l
ll.sh
r
t
y
z
My snippet benchmark:
$ time find -type f -printf '.' | wc -c
8
real 0m0.004s
user 0m0.000s
sys 0m0.007s
With full lines:
$ time find -type f | wc -l
8
real 0m0.006s
user 0m0.003s
sys 0m0.000s
So my solution is faster =) (the important part is the real line)
This solution is certainly slower than some of the other find -> wc solutions here, but if you were inclined to do something else with the file names in addition to counting them, you could read from the find output.
n=0
while read -r -d ''; do
((n++)) # count
# maybe perform another act on file
done < <(find <expr> -print0)
echo $n
It is just a modification of a solution found in the BashGuide that properly handles files with nonstandard names, by making the find output delimiter a NUL byte with -print0 and reading from it with '' (a NUL byte) as the read delimiter.
This is my countfiles function in my ~/.bashrc (it's reasonably fast, should work for Linux & FreeBSD find, and does not get fooled by file paths containing newline characters; the final wc just counts NUL bytes):
countfiles ()
{
command find "${1:-.}" -type f -name "${2:-*}" -print0 |
command tr -dc '\0' | command wc -c;
return 0
}
countfiles
countfiles ~ '*.txt'
POSIX compliant and newline-proof:
find /path -exec printf %c {} + | wc -c
And, from my tests in /, not even two times slower than the other solutions, which are either not newline-proof or not portable.
Note the + instead of \;. That is crucial for performance, as \; spawns one printf command per file name, whereas + gives as many file names as it can to a single printf command. (And in the possible case where there are too many arguments, find intelligently spawns new printf processes on demand to cope with it, so it would be as if
{
printf %c very long argument list1
printf %c very long argument list2
printf %c very long argument list3
} | wc -c
were called.)
I needed something where I couldn't just take all of find's output, because other commands in the same run also print output.
Without the need for temporary files this is only possible with a big caveat: you might get (far) more than one line of output, as the counting command is executed once per batch of roughly 800-1600 files.
find . -print -exec sh -c 'printf %c "$@" | wc -c' '' '{}' + # just print the numbers
find . -print -exec sh -c 'echo "Processed `printf %c "$@" | wc -c` items."' '' '{}' +
Generates this result:
Processed 1622 items.
Processed 1578 items.
Processed 1587 items.
An alternative is to use a temporary file:
find . -print -fprintf tmp.file .
wc -c <tmp.file # using the file as argument instead causes the file name to be printed after the count
echo "Processed `wc -c <tmp.file` items." # sh variant
echo "Processed $(wc -c <tmp.file) items." # bash variant
The -print in each of the find commands will not influence the count at all.

"grep -R" replacement?

I have a machine with grep installed but option -R is not compiled-in and there is also no replacement switch.
How can I replace it in bash?
I tried:
for i in `find *`; do
grep 'pattern' $i;
done
but that's not quite the right reimplementation, is it?
Try piping the output of find to xargs so that grep only gets invoked a few times (xargs keeps reading input until it gets so much that more would not fit in an argument list):
find -type f | xargs grep foo
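If your find and xargs support NUL separators, here is a slightly more robust sketch; the extra /dev/null argument forces grep to print the file name even when xargs happens to pass it a single file:
find . -type f -print0 | xargs -0 grep 'pattern' /dev/null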
We usually use
find . -exec grep 'pattern' {} \;
That usually works similarly to grep -R.
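If your find supports the + terminator, you can get xargs-like batching without a pipe; /dev/null again keeps the file-name prefix in the output (a sketch, assuming POSIX find):
find . -type f -exec grep 'pattern' /dev/null {} +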
