How to preserve newline characters in grep result? - bash

I want to assign the grep result to a variable for further use:
lines=$(cat abc.txt | grep "hello")
but it seems the newline characters are removed from the result; when I do
echo $lines
only one line is printed. How can I preserve the newline characters, so that echo $lines produces the same output as cat abc.txt | grep "hello" does?

You want to say
echo "$lines"
instead of
echo $lines
To elaborate:
echo $lines means "form a new command by replacing $lines with the contents of the variable named lines, split up on whitespace into zero or more new arguments to the echo command." For example:
lines='1 2 3'
echo $lines # equivalent to "echo 1 2 3"
lines='1    2      3'
echo $lines # also equivalent to "echo 1 2 3"
lines="1
2
3"
echo $lines # also equivalent to "echo 1 2 3"
All these examples are equivalent, because the shell ignores the specific kind of whitespace between the individual words stored in the variable lines. Actually, to be more precise, the shell splits the contents of the variable on the characters of the special IFS (Internal Field Separator) variable, which defaults (at least on my version of bash) to the three characters space, tab, and newline.
echo "$lines", on the other hand, means to form a single new argument from the exact value of the variable lines.
For more details, see the "Expansion" and "Word Splitting" sections of the bash manual page.
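For instance, a minimal demonstration of the difference (assuming the abc.txt from the question contains several matching lines):
lines=$(grep "hello" abc.txt)
echo $lines    # word splitting: all matches run together on one line
echo "$lines"  # quoted: newlines preserved, one match per line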

Using the Windows port of grep (not the original question, I know, and not applicable to *nix), I found that -U solved the problem nicely.
From the --help:
-U, --binary do not strip CR characters at EOL (MSDOS)

Related

POSIX/Bash pad variable with trailing newlines

I have a variable with some lines in it and I would like to pad it with a number of newlines defined in another variable. However, it seems that the subshell may be stripping the trailing newlines. I cannot just use '\n' with echo -e, as the lines may already contain escaped characters which need to be printed as-is.
I have found I can print an arbitrary number of newlines using this.
n=5
yes '' | sed -n "1,${n}p;${n}q"
But if I run this in a subshell to store it in the variable, the subshell appears to strip the trailing newlines.
I can approximate the functionality, but it's clumsy, and given how I am using it I would much rather be able to just call echo "$var", or even use $var itself for things like string concatenation. This approximation runs into the same issue with subshells as soon as the last (filler) line of the variable is removed.
This is my approximation:
n=5
var="test"
#I could also just set n=6
cmd="1,$((n+1))p;$((n+1))q"
var="$var$(yes '' | sed -n $cmd; echo .)"
#Now I can use it with
echo "$var" | head -n -1
Essentially I need a good way of appending a number of newlines to a variable which can then be printed with echo.
I would like to keep this POSIX compliant if at all possible, but at this stage a bash solution would also be acceptable. I am also using this as part of a tool for which I have set a challenge of minimizing line and character count while maintaining readability, but I can work that out once I have a workable solution.
Command substitutions with either $( ) or backticks will trim trailing newlines. So don't use them; use the shell's built-in string manipulation:
n=5
var="test"
while [ "$n" -gt 0 ]; do
  var="$var
"
  n=$((n-1))
done
Note that there must be nothing after the var="$var (before the newline), and nothing before the " on the next line (no indentation!).
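To convince yourself that the trailing newlines really are in $var, inspect the bytes (a quick check, reusing the $var built above):
printf %s "$var" | od -c   # the dump ends in five \n characters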
A sequence of n newlines (bash-only; printf -v and pattern substitution are not POSIX):
printf -v spaces "%*s" $n ""
newlines=${spaces// /$'\n'}
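A usage sketch (bash-only: printf -v, $'\n' and ${var//...} are not POSIX):
n=5
printf -v spaces "%*s" "$n" ""   # a string of n spaces
newlines=${spaces// /$'\n'}      # each space becomes a newline
var="test$newlines"
printf %s "$var" | wc -l         # 5: the padding survives as long as no $( ) re-captures it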

bash - IFS changes behavior of echo -n in for loop

I have code that requires a response within a for loop.
Prior to the loop I set IFS="\n"
Within the loop echo -n is ignored (except for the last line).
Note: this is just an example of the behavior of echo -n
Example:
IFS='\n'
for line in `cat file`
do
  echo -n $line
done
This outputs:
this is a test
this is a test
this is a test$
with the user prompt occurring only at the end of the last line.
Why is this occurring and is there a fix?
Neither IFS="\n" nor IFS='\n' set $IFS to a newline; instead they set it to literal \ followed by literal n.
You'd have to use an ANSI C-quoted string in order to assign an actual newline: IFS=$'\n'; alternatively, you could use a normal string literal that contains an actual newline (spans 2 lines).
Assigning literal \n had the effect that the output from cat file was not split into lines, because an actual newline was not present in $IFS; potentially - though not with your sample file content - the output could have been split into fields by embedded \ and n characters.
Without either, the entire file contents were passed at once, resulting in a single iteration of your for loop.
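The mis-assignment is easy to inspect (a minimal check using od):
IFS='\n'
printf %s "$IFS" | od -c   # two literal characters: \ and n
IFS=$'\n'
printf %s "$IFS" | od -c   # a single newline character, shown as \n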
That said, your approach to looping over lines from a file is ill-advised; try something like the following instead:
while IFS= read -r line; do
  echo -n "$line"
done < file
Never use for loops when parsing files in bash. Use while loops instead. Here is a really good tutorial on that.
http://mywiki.wooledge.org/BashFAQ/001
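To see why the for loop is fragile in the first place, compare the two (a sketch with hypothetical file contents, default IFS):
printf 'one two\nthree\n' > file
for word in $(cat file); do echo "[$word]"; done          # 3 iterations: [one] [two] [three]
while IFS= read -r line; do echo "[$line]"; done < file   # 2 iterations: [one two] [three]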

how to count the number of lines in a variable in a shell script

Having some trouble here. I want to capture the output from an ls command into a variable, then later use that variable and count the number of lines in it. I've tried a few variations:
This works, but then if there are NO .txt files, it says the count is 1:
testVar=`ls -1 *.txt`
count=`wc -l <<< "$testVar"`
echo "$count"
This works when there are no .txt files, but the count comes up short by 1 when there are .txt files:
testVar=`ls -1 *.txt`
count=`printf "$testVar" | wc -l`
echo "$count"
This variation also says the count is 1 when NO .txt files exist:
testVar=`ls -1 *.txt`
count=`echo "$testVar" | wc -l`
echo "$count"
Edit: I should mention this is korn shell.
The correct approach is to use an array.
# Use ~(N) so that if the match fails, the array is empty instead
# of containing the pattern itself as the single entry.
testVar=( ~(N)*.txt )
count=${#testVar[@]}
This little question actually includes the result of three standard shell gotchas (both bash and korn shell):
Here-strings (<<<...) have a newline added to them if they don't end with a newline. That makes it impossible to send a completely empty input to a command with a here-string.
All trailing newlines are removed from the output of a command used in command substitution (backticks or, preferably, $(cmd)). So you have no way to know how many blank lines were at the end of the output.
(Not really a shell gotcha, but it comes up a lot.) wc -l counts the number of newline characters, not the number of lines. So the last "line" is not counted if it is not terminated with a newline. (A non-empty file which does not end with a newline character is not a POSIX-conformant text file, so weird results like this are not unexpected.)
So, when you do:
var=$(cmd)
utility <<<"$var"
The command substitution in the first line removes all trailing newlines, and then the here-string expansion in the second line puts exactly one trailing newline back. That converts an empty output to a single blank line, and otherwise removes blank lines from the end of the output.
So if utility is wc -l, you will get the correct count unless the output was empty, in which case it will be 1 instead of 0.
On the other hand, with
var=$(cmd)
printf %s "$cmd" | utility
The trailing newline(s) are removed, as before, by the command substitution, so the printf leaves the last line (if any) unterminated. Now if utility is wc -l, you'll end up with 0 if the output was empty, but for non-empty output the count will not include its last line.
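Both effects are easy to reproduce interactively (a minimal sketch):
var=$(printf 'a\nb\n\n')     # both trailing newlines stripped; var holds a<newline>b
wc -l <<< "$var"             # 2: the here-string adds one newline back
printf %s "$var" | wc -l     # 1: the last line, b, is left unterminated
empty=$(printf '')           # empty output
wc -l <<< "$empty"           # 1: a single blank line, not empty input
printf %s "$empty" | wc -l   # 0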
One possible shell-independent work-around is to use the second option, but with grep '' as a filter:
var=$(cmd)
printf %s "${var}" | grep '' | utility
The empty pattern '' will match every line, and grep always terminates every line of output. (Of course, this still won't count blank lines at the end of the output.)
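For example, continuing the sketch above:
var=$(printf 'a\nb')                  # last line unterminated after substitution
printf %s "$var" | wc -l              # 1
printf %s "$var" | grep '' | wc -l    # 2: grep terminates the final line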
Having said all that, it is always a bad idea to try to parse the output of ls, even just to count the number of files. (A filename might include a newline character, for example.) So it would be better to use a glob expansion combined with some shell-specific way of counting the number of objects in the glob expansion (and some other shell-specific way of detecting when no file matches the glob).
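In bash, a rough equivalent of the ksh ~(N) idiom shown earlier would be nullglob (a sketch, not taken from any of the answers here):
shopt -s nullglob    # a non-matching glob expands to nothing instead of itself
files=( *.txt )
count=${#files[@]}   # correct even for filenames containing newlines
echo "$count"        # 0 when no .txt files exist
shopt -u nullglob    # restore the default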
I was going to suggest this, which is a construct I've used in bash:
f=($(</path/to/file))
echo ${#f[@]}
To handle multiple files, you'd just... add more files:
f=($(</path/to/file))
f+=($(</path/to/otherfile))
or
f=($(</path/to/file) $(</path/to/otherfile))
To handle lots of files, you could loop:
f=()
for file in *.txt; do
f+=($(<"$file"))
done
Then I saw chepner's response, which I gather is more korn-y than mine.
NOTE: loops are better than parsing ls.
You can also do it like this:
#!/bin/bash
testVar=`ls -1 *.txt`
if [ -z "$testVar" ]; then
  # Empty
  count=0
else
  # Not empty
  count=`wc -l <<< "$testVar"`
fi
echo "Count : $count"

Why does `echo -e` not work when used with a variable?

#!/bin/bash
RESULT=$(ls)
echo -e "$RESULT" # prints the result of 'ls' with new lines
echo -e $RESULT # prints the result of 'ls' in one line
Why does the second approach print everything on one line instead of a new line for each item? Shouldn't the -e option trigger the interpretation of the \n characters?
When you run this command:
echo -e $RESULT
Bash performs word splitting on $RESULT; that is, it splits it up by whitespace, and passes the resulting tokens as separate arguments to echo. So you're essentially running this:
echo -e file1.txt file2.txt file3.txt
and echo has no way of knowing that $RESULT contained newlines.
(The -e, by the way, isn't really relevant here. -e doesn't affect the treatment of newline characters, only of an actual sequence like \n — a backslash followed by an n.)
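A quick way to see the distinction (sketch):
s='a\nb'       # two characters: backslash, n
echo "$s"      # prints: a\nb
echo -e "$s"   # prints a and b on separate lines; -e interprets the sequence
t=$'a\nb'      # t contains a real newline
echo "$t"      # two lines even without -e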
-e is irrelevant.
echo "$RESULT" sees one parameter, a string with embedded newlines.
echo $RESULT sees as many parameters as there are words in $RESULT. The whitespace (including newlines) that separate these words is eaten by the shell.
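You can count the arguments each version passes with a tiny helper function (hypothetical):
count_args() { echo "$#"; }
RESULT=$(printf 'a.txt\nb.txt\nc.txt\n')
count_args $RESULT      # 3: word splitting produced three arguments
count_args "$RESULT"    # 1: a single argument with embedded newlines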
It's treating the different lines as separate arguments in this case.

grep a pattern and output non-matching part of line

I know it is possible to invert grep output with the -v flag. Is there a way to only output the non-matching part of the matched line? I ask because I would like to use the return code of grep (which sed won't have). Here's sort of what I've got:
tags=$(grep "^$PAT" >/dev/null 2>&1)
[ "$?" -eq 0 ] && echo $tags
You could use sed:
$ sed -n "/$PAT/s/$PAT//p" $file
The only problem is that it'll return an exit code of 0 as long as the pattern is good, even if the pattern can't be found.
Explanation
The -n parameter tells sed not to print out any lines (sed's default is to print every line of the file). Now let's look at each slash-delimited part of the sed program /$PAT/s/$PAT//p:
/$PAT/: This says to look only at lines that match the pattern $PAT when running your substitution command. Otherwise, sed would operate on all lines, even when there is no substitution to make.
/s/: This says you will be doing a substitution
/$PAT/: This is the pattern you will be substituting. It's $PAT. So, you're searching for lines that contain $PAT and then you're going to substitute the pattern for something.
//: This is what you're substituting for $PAT. It is null. Therefore, you're deleting $PAT from the line.
/p: This final p says to print out the line.
Thus:
You tell sed not to print out the lines of the file as it processes them.
You're searching for all lines that contain $PAT.
On these lines, you're using the s command (substitution) to remove the pattern.
You're printing out the line once the pattern is removed from the line.
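A concrete run, with a hypothetical pattern and input file:
$ PAT="ERROR: "
$ printf 'ERROR: disk full\nall good\n' > log.txt
$ sed -n "/$PAT/s/$PAT//p" log.txt
disk full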
How about using a combination of grep, sed and PIPESTATUS to get the correct exit status?
$ echo "Humans are not proud of their ancestors, and rarely invite them round to dinner" | grep dinner | sed -n "/dinner/s/dinner//p"
Humans are not proud of their ancestors, and rarely invite them round to
$ echo ${PIPESTATUS[1]}
0
The members of the PIPESTATUS array hold the exit status of each respective command executed in a pipe: ${PIPESTATUS[0]} holds the exit status of the first command in the pipe, ${PIPESTATUS[1]} that of the second command, and so on.
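One caveat worth knowing: PIPESTATUS is overwritten by every subsequent command, so copy it immediately if you need more than one element (a sketch):
false | true | true
status=( "${PIPESTATUS[@]}" )   # capture at once; the next command resets PIPESTATUS
echo "${status[0]}"             # 1: exit status of false
echo "${status[2]}"             # 0: exit status of the last true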
Your $tags will never have a value because you send the output to /dev/null. Aside from that little problem, there is no input to grep.
echo hello | grep -q "^he"
ret=$?
if [ $ret -eq 0 ]; then
  echo "there is he in hello"
fi
A successful return code is 0. Here is one take at your 'problem':
pat="most of ";
data="The apples are ripe. I will use most of them for jam.";
echo $data |grep "$pat" -q;
ret=$?;
[ $ret -eq 0 ] && echo $data |sed "s/$pat//"
The apples are ripe. I will use them for jam.
... and the exact same thing in one line:
echo "The apples are ripe. I will use most of them for jam." | sed 's/most of //'
It seems to me you have confused the basic concepts. What are you trying to do anyway?
I am going to answer the title of the question directly instead of considering the detail of the question itself:
"grep a pattern and output non-matching part of line"
The title to this question is important to me because the pattern I am searching for contains characters that sed will assign special meaning to. I want to use grep because I can use -F or --fixed-strings to cause grep to interpret the pattern literally. Unfortunately, sed has no literal option, but both grep and bash have the ability to interpret patterns without considering any special characters.
Note: In my opinion, trying to backslash or escape special characters in a pattern appears complex in code and is unreliable because it is difficult to test. Using tools which are designed to search for literal text leaves me with a comfortable 'that will work' feeling without considering POSIX.
I used both grep and bash to produce the result because bash is slow and my use of fast grep creates a small output from a large input. This code searches for the literal twice, once during grep to quickly extract matching lines and once during =~ to remove the match itself from each line.
while IFS= read -r || [[ -n "$REPLY" ]]; do
  if [[ "$REPLY" =~ (.*)("$LITERAL_PATTERN")(.*) ]]; then
    printf '%s\n' "${BASH_REMATCH[1]}${BASH_REMATCH[3]}"
  else
    printf 'NOT-REFOUND\n' >&2   # should never happen
    exit 1
  fi
done < <(grep -F "$LITERAL_PATTERN" < "$INPUT_FILE")
Explanation:
IFS= Placing this assignment directly before read applies it to that read command only. Assigning the empty string to IFS causes read to accept each line literally, keeping all spaces and tabs up to the end of the line (with the default IFS of space, tab and newline, read would strip leading and trailing whitespace).
-r Tells read to accept backslashes in the input stream literally instead of considering them as the start of an escape sequence.
$REPLY Is created by read to store characters from the input stream. The newline at the end of each line will NOT be in $REPLY.
|| [[ -n "$REPLY" ]] The logical OR causes the while loop to accept input that is not newline-terminated. It is not strictly needed here, because grep always provides a trailing newline for every match. But I habitually use it in my read loops because, without it, any characters between the last newline and the end of file would be ignored: an unterminated final line causes read to fail even though the content is successfully read.
=~ (.*)("$LITERAL_PATTERN")(.*) ]] Is a standard bash regex test, but anything in quotes is taken as a literal. If I wanted =~ to honor regex characters contained in $LITERAL_PATTERN, I would need to eliminate the double quotes.
"${BASH_REMATCH[#]}" Is created by [[ =~ ]] where [0] is the entire match and [N] is the contents of the match in the Nth set of parentheses.
Note: I do not like to reassign stdin to a while loop because it is easy to get wrong and difficult to see what is happening later. I usually create a function for this type of operation which behaves conventionally and expects file-name parameters or a reassignment of stdin during the call.
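Following that note, a function-wrapped version of the loop might look like this (a sketch; the name strip_literal is hypothetical):
strip_literal() {   # usage: strip_literal LITERAL FILE
  local pat="$1" file="$2"
  while IFS= read -r || [[ -n "$REPLY" ]]; do
    [[ "$REPLY" =~ (.*)("$pat")(.*) ]] || return 1
    printf '%s\n' "${BASH_REMATCH[1]}${BASH_REMATCH[3]}"
  done < <(grep -F "$pat" < "$file")
}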
