assigning a pipeline result to a variable - bash

I would like to count the number of *.sav files in the $1 folder.
I tried to use the following command but it doesn't work. What is the problem?
numOfFiles=`ls $1 | grep -o *\.sav | wc -w`

Do not parse the output from ls
Use bash arrays instead.
shopt -s nullglob
sav_files=(*.sav)
echo "${#sav_files[@]}"
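The same idea extends to the directory passed as $1 rather than the current directory; a minimal sketch, anchoring the glob to the argument:
shopt -s nullglob
# nullglob makes this an empty array when nothing matches,
# so the count is correctly 0 instead of 1
sav_files=("$1"/*.sav)
echo "${#sav_files[@]}"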

The main problem is that *.sav is not a regular expression, it's a glob. You likely wanted to grep for \.sav, which you have to escape once for the shell and once again because of the antiquated backtick syntax:
numOfFiles=`ls $1 | grep -o \\\\.sav | wc -w`
Additionally, you should not parse the output of ls, and wc -w will cause filenames with spaces to be counted multiple times.
A better solution would be to use find:
numOfFiles=$(find "$1" -maxdepth 1 -name '*.sav' | wc -l)
For an even better, Bash-specific solution, see 1_CR's answer.

I don't want to hunt for hypothetical problems in file counting, but parsing the output of ls is sometimes simply the wrong idea. Try it yourself; create a file:
touch 'aa bb
cc.sav'
This creates a file with a space and a newline in its name. Nearly any parsing of ls output will fail on it.
1_CR's solution is nice and elegant.
If for some reason you don't like it, you can use find, but avoid wc: wc -l fails if the last line doesn't end with a newline, because wc counts \n characters rather than lines.
The following gives the correct answer:
numOfFiles=$(find . -maxdepth 1 -name \*.sav -print0 | grep -zc .)
find with -print0 prints all found files NUL-terminated, and grep -z reads NUL-delimited "lines". The -c simply counts the lines matching the pattern . (any character).
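A quick check against the pathological file created above, in an otherwise empty directory (assuming GNU grep for the -z option):
touch 'aa bb
cc.sav'
# the newline inside the name does not inflate the count,
# because the names are NUL-delimited end to end
numOfFiles=$(find . -maxdepth 1 -name \*.sav -print0 | grep -zc .)
echo "$numOfFiles"    # prints 1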

Your regex is wrong: * is a repetition operator to grep, but you don't specify what to repeat.
Anyway, the grep is unnecessary; just specify the pattern as a shell glob.
numOfFiles=`ls "$1"/*.sav | wc -l`
Note the quotes around $1 and the use of line count rather than word count; wc -w would return a count of two for a file named like my file.sav.
If you do use grep for something, it's usually a good idea to put the search pattern in single quotes, so you don't have to backslash-escape shell metacharacters. (Even with the backslash, your pattern would end up unescaped because the shell eats one level of escapes in unquoted strings.)
If you have any matches on *.sav in the current directory, the unquoted regex would be expanded as a glob by the shell, and produce interesting results which could be very hard to debug.
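You can see that failure mode for yourself; a small demonstration in a scratch directory containing exactly these two files:
touch one.sav two.sav
ls | grep -o *\.sav
# the shell expands the unquoted *\.sav to "one.sav two.sav",
# so grep takes one.sav as its pattern and two.sav as a file to
# search, silently ignoring the ls output on stdin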

Personally, I would recommend quoting the variables so that spaces don't cause problems.
Additionally, wc -l is a better solution since it counts lines instead of words (file names containing spaces might otherwise be counted twice).
So try this instead:
numOfFiles=$(ls -1 "$1/"*.sav | wc -l)

Related

Bash: displaying wc with three digit output?

I'm conducting a file count of a directory:
ls | wc -l
If the output is "17", I would like it to display as "017".
I have played with | printf with little luck.
Any suggestions would be appreciated.
printf is the way to go to format numbers:
printf "There were %03d files\n" "$(ls | wc -l)"
ls | wc -l will tell you how many lines it encountered parsing the output of ls, which may not be the same as the number of (non-dot) filenames in the directory. What if a filename has a newline? One reliable way to get the number of files in a directory is
x=(*)
printf '%03d\n' "${#x[@]}"
But that will only work with a shell that supports arrays. If you want a POSIX-compatible approach, use a shell function:
countargs() { printf '%03d\n' $#; }
countargs *
This works because when a glob expands the shell maintains the words in each member of the glob expansion, regardless of the characters in the filename. But when you pipe a filename the command on the other side of the pipe can't tell it's anything other than a normal string, so it can't do any special handling.
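A quick demonstration of the difference, using two hypothetical file names in an otherwise empty directory:
touch 'two words.txt' $'line\nbreak.txt'
ls | wc -l        # prints 3: the embedded newline adds a line
countargs *       # prints 002: the glob keeps each name intact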
You could use sed.
ls | wc -l | sed 's/^17$/017/'
And this version applies to all two-digit numbers:
ls | wc -l | sed '/^[0-9][0-9]$/s/.*/0&/'
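If single-digit counts should be padded too, the printf approach above handles any width for free; an equivalent sed sketch needs one rule per width:
ls | wc -l | sed -E 's/^[0-9]$/00&/; s/^[0-9][0-9]$/0&/'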

perform an operation for *each* item listed by grep

How can I perform an operation for each item listed by grep individually?
Background:
I use grep to list all files containing a certain pattern:
grep -l '<pattern>' directory/*.extension1
I want to delete all listed files but also all files having the same file name but a different extension: .extension2.
I tried using the pipe, but it seems to take the output of grep as a whole.
In find there is the -exec option, but grep has nothing like that.
If I understand your specification, you want:
grep --null -l '<pattern>' directory/*.extension1 | \
xargs -n 1 -0 -I{} bash -c 'rm "$1" "${1%.*}.extension2"' -- {}
This is essentially the same as what @triplee's comment describes, except that it's newline-safe.
What's going on here?
grep with --null will return output delimited with NULs instead of newlines. Since file names can have newlines in them, delimiting with newlines makes it impossible to parse the output of grep safely, but NUL is not a valid character in a file name and thus makes a nice delimiter.
xargs will take a stream of whitespace-delimited items and execute a given command, passing as many of those items as possible (one per argument) to it (or to echo if no command is given). Thus if you said:
printf 'one\ntwo three \nfour\n' | xargs echo
xargs would execute echo one two three four, splitting on the spaces as well as the newlines, so you can no longer tell that two three was a single item. This is not safe for file names because, again, file names might contain embedded spaces and newlines.
The -0 switch to xargs changes it from splitting on whitespace to splitting on a NUL delimiter. This makes it match the output we got from grep --null and makes it safe for processing a list of file names.
Normally xargs simply appends the input to the end of a command. The -I switch to xargs changes this to substituting the input for the specified replacement string, treating each input line as a single item. To get the idea, try this experiment:
printf 'one\ntwo three \nfour\n' | xargs -I{} echo foo {} bar
And note the difference from the earlier printf | xargs command.
In the case of my solution the command I execute is bash, to which I pass -c. The -c switch causes bash to execute the commands in the following argument (and then terminate) instead of starting an interactive shell. The next block, 'rm "$1" "${1%.*}.extension2"', is the first argument to -c and is the script which will be executed by bash. Any arguments following the script argument to -c are assigned as the arguments to the script. Thus, if I were to say:
bash -c 'echo $0' "Hello, world"
Then Hello, world would be assigned to $0 (the first argument to the script) and inside the script I could echo it back.
Since $0 is normally reserved for the script name I pass a dummy value (in this case --) as the first argument and, then, in place of the second argument I write {}, which is the replacement string I specified for xargs. This will be replaced by xargs with each file name parsed from grep's output before bash is executed.
The mini shell script might look complicated but it's rather trivial. First, the entire script is single-quoted to prevent the calling shell from interpreting it. Inside the script I invoke rm and pass it two file names to remove: the $1 argument, which was the file name passed in when the replacement string was substituted above, and ${1%.*}.extension2. The latter is a parameter expansion on the $1 variable. The important part is %.*, which says:
% "Match from the end of the variable and remove the shortest string matching the pattern.
.* The pattern is a single period followed by anything.
This effectively strips the extension, if any, from the file name. You can observe the effect yourself:
foo='my file.txt'
bar='this.is.a.file.txt'
baz='no extension'
printf '%s\n'"${foo%.*}" "${bar%.*}" "${baz%.*}"
Since the extension has been stripped I concatenate the desired alternate extension .extension2 to the stripped file name to obtain the alternate file name.
If this does what you want, pipe the output through /bin/sh.
grep -l 'RE' folder/*.ext1 | sed 's/\(.*\).ext1/rm "&" "\1.ext2"/'
Or if sed makes you itchy:
grep -l 'RE' folder/*.ext1 | while read -r file; do
echo rm "$file" "${file%.ext1}.ext2"
done
Remove echo if the output looks like the commands you want to run.
But you can do this with find as well:
find /path/to/start -name \*.ext1 -exec grep -q 'RE' {} \; -print | ...
where ... is either the sed script or the three lines from while to done.
The idea here is that find will ... well, "find" things based on the qualifiers you give it -- namely, that things match the file glob "*.ext1", AND that the result of the "exec" is successful. The -q tells grep to look for RE in {} (the file supplied by find), and exit with a TRUE or FALSE without generating any of its own output.
The only real difference between doing this in find vs doing it with grep is that you get to use find's awesome collection of conditions to narrow down your search further if required. man find for details. By default, find will recurse into subdirectories.
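Spelled out with the while loop from above (a sketch using the same RE, .ext1, and .ext2 placeholders):
find /path/to/start -name \*.ext1 -exec grep -q 'RE' {} \; -print |
while IFS= read -r file; do
    echo rm "$file" "${file%.ext1}.ext2"
done
# remove echo once the output looks like the commands you want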
You can pipe the list to xargs:
grep -l '<pattern>' directory/*.extension1 | xargs rm
As for the second set of files with a different extension, I'd do this (as usual use xargs echo rm when testing to make a dry run; I haven't tested it, it may not work correctly with filenames with spaces in them):
filelist=$(grep -l '<pattern>' directory/*.extension1)
echo $filelist | xargs rm
echo ${filelist//.extension1/.extension2} | xargs rm
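A newline-safe variant of the same idea, assuming GNU grep for the --null option (a sketch; keep the echo for a dry run, as above):
grep --null -l '<pattern>' directory/*.extension1 |
while IFS= read -r -d '' file; do
    echo rm "$file" "${file%.extension1}.extension2"
done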
Pipe the result to xargs, it will allow you to run a command for each match.

Bash: escape characters in backticks

I'm trying to escape characters within backticks in my bash command, mainly to handle spaces in filenames which cause my command to fail.
The command I have so far is:
grep -Li badword `grep -lr goodword *`
This command should result in a list of files that do not contain the word "badword" but do contain "goodword".
Your approach, even if you get the escaping right, will run into problems when the number of files output by the goodword grep reaches the limits on command-line length. It is better to pipe the output of the first grep onto a second grep, like this
grep -lr -- goodword * | xargs grep -Li -- badword
This will correctly handle files with spaces in them, but it will fail if a file name has a newline in it. At least GNU grep and xargs support separating the file names with NUL bytes, like this
grep -lrZ -- goodword * | xargs -0 grep -Li -- badword
EDIT: Added double dashes -- to grep invocations to avoid the case when some file names start with - and would be interpreted by grep as additional options.
How about rewriting it as:
grep -lr goodword * | xargs grep -Li badword

How to apply shell command to each line of a command output?

Suppose I have some output from a command (such as ls -1):
a
b
c
d
e
...
I want to apply a command (say echo) to each one, in turn. E.g.
echo a
echo b
echo c
echo d
echo e
...
What's the easiest way to do that in bash?
It's probably easiest to use xargs. In your case:
ls -1 | xargs -L1 echo
The -L flag ensures the input is read properly. From the man page of xargs:
-L number
Call utility for every number non-empty lines read.
A line ending with a space continues to the next non-empty line. [...]
You can use a basic prepend operation on each line:
ls -1 | while read -r line ; do echo "$line" ; done
Or you can pipe the output to sed for more complex operations:
ls -1 | sed 's/^\(.*\)$/echo \1/'
for s in $(cmd); do echo "$s"; done
If cmd has a large output:
cmd | xargs -L1 echo
You can use a for loop:
for file in * ; do
echo "$file"
done
Note that if the command in question accepts multiple arguments, then using xargs is almost always more efficient as it only has to spawn the utility in question once instead of multiple times.
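You can see the difference in the number of invocations directly:
seq 1 3 | xargs echo        # one echo invocation: prints "1 2 3"
seq 1 3 | xargs -n 1 echo   # three invocations: one number per line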
You actually can use sed to do it, provided it is GNU sed.
... | sed 's/match/command \0/e'
How it works:
Substitute match with command match
On substitution execute command
Replace substituted line with command output.
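For instance, a toy run (GNU sed only; & stands for the matched text):
printf '%s\n' a b c | sed 's/.*/echo item: &/e'
# prints:
# item: a
# item: b
# item: c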
A solution that works with filenames that have spaces in them is:
ls -1 | xargs -I %s echo %s
The following is equivalent, but has a clearer divide between the precursor and what you actually want to do:
ls -1 | xargs -I %s -- echo %s
Where echo is whatever it is you want to run, and the subsequent %s is the filename.
Thanks to Chris Jester-Young's answer on a duplicate question.
xargs fails with backslashes and quotes. It needs to be something like:
ls -1 |tr \\n \\0 |xargs -0 -iTHIS echo "THIS is a file."
xargs -0 option:
-0, --null
Input items are terminated by a null character instead of by whitespace, and the quotes and backslash are
not special (every character is taken literally). Disables the end of file string, which is treated like
any other argument. Useful when input items might contain white space, quote marks, or backslashes. The
GNU find -print0 option produces input suitable for this mode.
ls -1 terminates the items with newline characters, so tr translates them into null characters.
This approach is about 50 times slower than iterating manually with for ... (see Michael Aaron Safyan's answer) (3.55s vs. 0.066s). But for other input commands like locate, find, reading from a file (tr \\n \\0 <file) or similar, you have to work with xargs like this.
I like to use gawk for running multiple commands on a list, for instance:
ls -l | gawk '{system("/path/to/cmd.sh "$1)}'
However, the escaping of special characters can get a little hairy.
Better result for me:
ls -1 | xargs -L1 -d "\n" CMD

Use lines in a file as filenames for grep?

I have a file which contains filenames (and the full path to them) and I want to search for a word within all of them.
some pseudo-code to explain:
grep keyword <all files specified in files.txt>
or
cat files.txt > grep keyword
cat files.txt | grep keyword
The problem is that I can only get grep to search the filenames, not the contents of the actual files.
cat files.txt | xargs grep keyword
or
grep keyword `cat files.txt`
or (equivalent to previous but harder to mis-read)
grep keyword $(cat files.txt)
should do the trick.
Pitfalls:
If files.txt contains file names with spaces, either solution will malfunction, because "This is a filename.txt" will be interpreted as four files, "This", "is", "a", and "filename.txt". A good reason why you shouldn't have spaces in your filenames, ever.
There are ways around this, but none of them is trivial. (find ... -print0 / xargs -0 is one of them.)
The second (cat) version can result in a very long command line (which might fail when exceeding the limits of your environment). The first (xargs) version handles long input automatically; xargs offers several options to control the details.
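One relatively simple workaround for the spaces pitfall, assuming GNU xargs: split the input on newlines only, so spaces within a name survive (newlines within names remain a problem):
xargs -d '\n' grep keyword < files.txt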
Both of the answers from DevSolar work (tested on Linux Ubuntu), but the xargs version is preferable if there may be many files, since it will avoid running into command line length limits.
so:
cat files.txt | xargs grep keyword
is the way to go
tr '\n' '\0' <files.txt | LANG=C xargs -r0 grep -F keyword
tr delimits the names with the NUL character so that spaces are not significant (note the corresponding -0 option to xargs).
xargs -r will start a single grep process for a "large" number of files, but not start any grep process if there are no files.
LANG=C means use quick routines for matching, rather than slow locale ones
grep -F means use quick string matching rather than slow regular expression matching
bash, ksh & zsh version:
grep keyword $(<files.txt)
It's been a long time since I last created a bash shell script, but you could store the file names in an array and iterate over it, issuing a grep command for each one (sketched below).
A good starting point should be the bash scripting guide.
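A sketch of that array-based idea (mapfile needs bash 4; it assumes no newlines inside the file names):
# read each line of files.txt into one array element
mapfile -t files < files.txt
for f in "${files[@]}"; do
    grep keyword "$f"
done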
