Print regex string found with grep - bash

I have this crazy long command-turned-(bash)script that outputs a tiny table with an object and some other data related to it. I want to single out the name of the object to use it as a variable for further commands.
It outputs something like this:
ID PATH NAME VERSION UPGRADE STATUS
vm-13034 /abc/def somethingelse vX.X.X-XXX-XXXXXXX Up to date
The value I'm interested in is the ID, which is always vm-#####. With an online tool I came up with the regex ^vm-\d{5,}$ ({5,} because the number keeps incrementing). So I figured vosxls | grep -E ^vm-\d{5,}$ (vosxls is the script's name for the long command) would work, but it returns nothing. I tried it with -w, -e, grep -e "vm-\d{5,}" <(vosxls), and a few more variations, adding and removing the enclosing ^ and $ characters when they would throw an error, and enclosing the string in single quotes, double quotes, or nothing at all; whatever would run.
Everything else that didn't error, returned nothing.
I read some examples with Perl and tools similar to that, but I stayed away from them and have generally kept things as basic as possible so I can run this on another system without issues. The furthest I've strayed from the most basic stuff (which is all I know anyway) was egrep, which I believe is the same as grep -e, if I'm not mistaken.
How can the string be printed out? Does the fact that I'm already using a script somehow affect how grep works? The script has no variables; it is merely shorthand to avoid typing a lot of hard-to-remember syntax and values.
Thanks.

Could be as simple as
vosxls | grep -Eo "^vm-[0-9]{5,}"
(note [0-9] rather than \d, which grep -E does not understand). Specifying -o makes grep return only the part of the line that matches the regex:
-o, --only-matching
Prints only the matching part of the lines.

Perhaps something like vosxls | sed -n 's/\(vm.*\ \)\(\/.*\)/\1/p'?
Or maybe vosxls | awk '/vm/ {print $1}'

foo | perl -ne 'print $1 if /^(vm-\d+)/'
Your version of grep might not support \d; with -E, use a bracket expression like [0-9] instead:
$:/mnt/c/Users/sukuj$ cat /tmp/s.txt
ID PATH NAME VERSION UPGRADE STATUS
vm-13034 /abc/def somethingelse vX.X.X-XXX-XXXXXXX Up to date
vm-23034 /abc/def somethingelse vX.X.X-XXX-XXXXXXX Up to date
$:/mnt/c/Users/sukuj$ grep -oE 'vm-[0-9]+' /tmp/s.txt
vm-13034
vm-23034
$:/mnt/c/Users/sukuj$ grep -oE 'vm-[0-9]{5,}' /tmp/s.txt
vm-13034
vm-23034
$:/mnt/c/Users/sukuj$ grep -V
grep (GNU grep) 3.4
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Mike Haertel and others; see
<https://git.sv.gnu.org/cgit/grep.git/tree/AUTHORS>.
suku@DESKTOP-GID8MQV:/mnt/c/Users/sukuj$

You can't use \d with the -E option of grep to denote a digit; -P (or -Po) will work. But if you already know that you need the first field of the last line of the output, you don't need grep at all. A
vosxls | tail -n 1 | cut -d ' ' -f 1
would do as well.
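To then actually use the ID as a variable for further commands (the original goal), a minimal sketch, assuming vosxls prints exactly one vm- row; the variable name id is just an example:
# Capture the matched ID into a shell variable, then reuse it.
id=$(vosxls | grep -oE '^vm-[0-9]{5,}')
echo "Running follow-up commands against $id"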


How do I trim whitespace, but not newlines, from subshell output in bash?

There are many tens, maybe a hundred or more, previous questions that seem "identical" to this already here, but after extensive searching I found nothing that even came close to working (though I did learn quite a lot), and so I decided to just RTFM and figure this out on my own.
The Problem
I wanted to search the output of a ps auxwww command to find processes of interest, and the issue was that I couldn't simply use cut to extract the exact data I wanted. ps, it turns out, columnates its output, adding extra spaces or tabs that get in the way of using cut to pull out the correct field.
So, since I'm not a master at bash, I did a search... The answers I found were all focused either on variables (a "backup strategy" from my point of view, which didn't itself solve the whole problem) or on trimming only leading or trailing space, or all "whitespace" including newlines. None of that works for cut, and neither does removing trailing newlines and so forth.
So, restated, the question is: how do we efficiently end up with whitespace reduced to a single space between other characters, without eliminating newlines?
Below, I will give my answer, but I welcome others to give theirs - who knows, maybe someone has a better answer?!
Answer:
At least MY answer - please leave your own, too! - was to do this:
ps auxwww | grep <program> | tr -s '[:blank:]' ' ' | cut -d ' ' -f <field_of_interest>
This worked great!
Obviously, there are many ways to adapt this to other needs.
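For instance, a concrete sketch of that pipeline pulling the VSZ column (field 5) for firefox; the [f]irefox bracket trick is a common way to keep grep from matching its own process:
ps auxwww | grep '[f]irefox' | tr -s '[:blank:]' ' ' | cut -d ' ' -f 5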
As an alternative to all of the pipes and grep with cut, you could simply use awk. The benefit of awk with its default field separator (FS), which breaks on whitespace, is that it treats any run of whitespace between fields as a single separator.
So using awk does away with needing tr -s to "squeeze" whitespace to define fields. Further, awk gives far greater control over field matching using regular expressions, rather than relying on grep of a full line and cut to locate a pre-determined field number (though to some extent you will still have to tell awk which field of the ps output you are interested in).
Using bash, you can also eliminate the pipe | by using process substitution to send the output of ps auxwww to awk on stdin using redirection, e.g. awk ... < <(ps auxwww) for a single tidy command line.
To get your "program" and "field_of_interest" into awk you have two options. You can initialize awk variables using the -v var=value option (there can be multiple -v options given), or you can use the BEGIN rule to initialize the variables. The only differences are that with -v you can provide a shell variable for value and no whitespace is allowed around the = sign, while within BEGIN any whitespace is ignored.
So in your case a couple of examples to get the virtual memory size for firefox processes, you could use:
awk -v prog="firefox" -v fnum="5" '
$11 ~ prog {print $fnum}
' < <(ps auxwww)
(above if you had myprog=firefox as a shell variable, you could use -v prog="$myprog" to initialize the prog variable for awk)
or using the BEGIN rule, you could do:
awk 'BEGIN {prog = "firefox"; fnum = "5"}
$11 ~ prog {print $fnum }
' < <(ps auxwww)
In each command above, it locates the COMMAND field from ps (field 11) and checks whether it contains firefox; if so, it outputs field no. 5, the virtual memory size used by each process.
Both work fine as one-liners as well, e.g.
awk -v prog="firefox" -v fnum="5" '$11 ~ prog {print $fnum}' < <(ps auxwww)
Don't get me wrong, the pipeline is perfectly fine; it will just be slower. For short commands with limited output there won't be much difference, but when the output is large, awk will provide orders-of-magnitude improvement over having tr, grep, and cut read over the same records three times.
The reason is that every pipe, and the process on each side of it, requires a separate process to be spawned by the shell, so minimizing their use improves the efficiency of what your script is doing. If the data is small, as are the processes, there isn't much of a difference. However, if you are reading a 3G file three times over, that is a difference of orders of magnitude: hours versus minutes or seconds.
I had to use single quotes on CentOS Linux to get tr working as described above:
ps -o ppid= $$ | tr -d '[:space:]'
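The reason the quotes matter: unquoted, [:space:] is treated by the shell as a glob that matches any single-character file name consisting of one of the letters s, p, a, c, e or a colon, so a stray file in the current directory can silently replace the argument. A quick sketch (the file name e is just an example):
touch e                          # a file whose name happens to match the glob [:space:]
echo 'a  b' | tr -s [:space:]    # shell expands [:space:] to e; tr now squeezes e's, not spaces
echo 'a  b' | tr -s '[:space:]'  # quoted: tr receives the character class as intended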
You can reduce the number of pipes using this Perl one-liner, which uses Perl regexes instead of a separate grep process. This combines grep, tr, and cut into a single command, with an easy way to manipulate the output (@F is the array of fields, 0-indexed):
Examples:
# Start an example process to provide the input for `ps` in the next commands:
/Applications/Emacs.app/Contents/MacOS/Emacs-x86_64-10_14 --geometry 109x65 /tmp/foo &
# Print single space-delimited output of `ps` for all emacs processes:
ps auxwww | perl -lane 'print "@F" if $F[10] =~ /emacs/i'
# Prints:
# bar 72144 0.0 0.5 4610272 82320 s006 SN 11:15AM 0:01.31 /Applications/Emacs.app/Contents/MacOS/Emacs-x86_64-10_14 --geometry 109x65 /tmp/foo
# Print emacs PID and file name opened with emacs:
ps auxwww | perl -lane 'print join "\t", @F[1, -1] if $F[10] =~ /emacs/i'
# Prints:
# 72144 /tmp/foo
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace, or on the regex specified in the -F option.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perldoc perlre: Perl regular expressions (regexes)

grep pipe searching for one word, not line

For some reason I cannot get this to output just the version of this line. I suspect it has something to do with how grep interprets the dash.
This command:
admin@DEV:~/TEMP$ sendemail
Yields the following:
sendemail-1.56 by Brandon Zehm
More output below omitted
The first line is of interest. I'm trying to store the version in a variable.
TESTVAR=$(sendemail | grep '\s1.56\s')
Does anyone see what I am doing wrong? Thanks
TESTVAR is just empty. Even without TESTVAR, the output is empty.
I just tried the following too, thinking this might work.
sendemail | grep '\<1.56\>'
I just tried it again while editing, and I think I have another issue; perhaps I'm not handling the output correctly. It's outputting the entire line, but I can see that grep is finding 1.56 because it highlights it in the line.
$ TESTVAR=$(echo 'sendemail-1.56 by Brandon Zehm' | grep -Eo '1.56')
$ echo $TESTVAR
1.56
The point is grep -Eo '1.56'
from grep man page:
-E, --extended-regexp
Interpret PATTERN as an extended regular expression (ERE, see below). (-E is specified by POSIX.)
-o, --only-matching
Print only the matched (non-empty) parts of a matching line, with each such part on a separate output
line.
Your regular expression doesn't match the form of the version. You have specified that the version is surrounded by spaces, yet in front of it you have a dash.
Replace the first \s with the capitalized form \S, or with an explicit set of characters, and it should work.
I'm wondering: In your example you seem to know the version (since you grep for it), so you could just assign the version string to the variable. I assume that you want to obtain any (unknown) version string there. The regular expression for this in sed could be (using POSIX character classes):
sendemail | sed -n -r '1 s/sendemail-([[:digit:]]+\.[[:digit:]]+).*/\1/ p'
The -n suppresses the normal default output of every line; -r enables extended regular expressions; the leading 1 tells sed to only work on line 1 (I assume the version appears in the first line). I anchored the version number to the telltale string sendemail- so that potential other numbers elsewhere in that line are not matched. If the program name changes or the hyphen goes away in future versions, this wouldn't match any longer though.
Both the grep solution above and this one have the disadvantage of reading the whole output, which (as emails go these days) may be long. In addition, grep would find all other lines in the program's output that contain the pattern (if it's indeed emails, somebody might discuss this very problem in them, with examples!). If the version is indeed on the first line, piping through head -1 first would be efficient and prudent.
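A minimal sketch combining the two, assuming (as above) that the version string appears on line 1 of the output:
TESTVAR=$(sendemail | head -n 1 | grep -Eo '[0-9]+\.[0-9]+')
echo "$TESTVAR"    # -> 1.56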
jayadevan@jayadevan-Vostro-2520:~$ echo $sendmail
sendemail-1.56 by Brandon Zehm
jayadevan@jayadevan-Vostro-2520:~$ echo $sendmail | cut -f2 -d "-" | cut -f1 -d" "
1.56

bash: shortest way to get n-th column of output

Let's say that during your workday you repeatedly encounter the following form of columnized output from some command in bash (in my case from executing svn st in my Rails working directory):
? changes.patch
M app/models/superman.rb
A app/models/superwoman.rb
In order to work with the output of your command (in this case, the filenames), some sort of parsing is required so that the second column can be used as input for the next command.
What I've been doing is using awk to get at the second column; e.g., when I want to remove all files (not that that's a typical use case :), I would do:
svn st | awk '{print $2}' | xargs rm
Since I type this a lot, a natural question is: is there a shorter (thus cooler) way of accomplishing this in bash?
NOTE:
What I am asking is essentially a shell command question, even though my concrete example is from my svn workflow. If you feel that the workflow is silly and suggest an alternative approach, I probably won't vote you down, but others might, since the question here is really how to get the n-th column of command output in bash, in the shortest manner possible. Thanks :)
You can use cut to access the second field:
cut -f2
Edit:
Sorry, didn't realise that SVN doesn't use tabs in its output, so that's a bit useless. You can tailor cut to the output but it's a bit fragile - something like cut -c 10- would work, but the exact value will depend on your setup.
Another option is something like: sed 's/.\s\+//'
To accomplish the same thing as:
svn st | awk '{print $2}' | xargs rm
using only bash you can use:
svn st | while read a b; do rm "$b"; done
Granted, it's not shorter, but it's a bit more efficient and it handles whitespace in your filenames correctly.
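A slightly hardened variant of the same loop (the added flags are standard: -r stops read from mangling backslashes, and -- protects against file names that begin with a dash):
svn st | while read -r a b; do rm -- "$b"; done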
I found myself in the same situation and ended up adding these aliases to my .profile file:
alias c1="awk '{print \$1}'"
alias c2="awk '{print \$2}'"
alias c3="awk '{print \$3}'"
alias c4="awk '{print \$4}'"
alias c5="awk '{print \$5}'"
alias c6="awk '{print \$6}'"
alias c7="awk '{print \$7}'"
alias c8="awk '{print \$8}'"
alias c9="awk '{print \$9}'"
Which allows me to write things like this:
svn st | c2 | xargs rm
Try the zsh. It supports global aliases (alias -g), so you can define X in your .zshrc to be
alias -g X="| cut -d' ' -f2"
then you can do:
cat file X
You can take it one step further and define it for the nth column:
alias -g X1="| cut -d' ' -f1"
alias -g X2="| cut -d' ' -f2"
alias -g X3="| cut -d' ' -f3"
which will output the nth column of file "file". You can do this for grep output or less output, too. This is very handy and a killer feature of the zsh.
You can go one step further and define D to be:
alias -g D="|xargs rm"
Now you can type:
cat file X1 D
to delete all files mentioned in the first column of file "file".
If you know the bash, the zsh is not much of a change except for some new features.
HTH Chris
Because you seem to be unfamiliar with scripts, here is an example.
#!/bin/sh
# usage: svn st | x 2 | xargs rm
col=$1
shift
awk -v col="$col" '{print $col}' "${@--}"
If you save this in ~/bin/x and make sure ~/bin is in your PATH (now that is something you can and should put in your .bashrc), you have the shortest possible command for generally extracting column n: x n.
The script should do proper error checking and bail if invoked with a non-numeric argument or the incorrect number of arguments, etc; but expanding on this bare-bones essential version will be in unit 102.
Maybe you will want to extend the script to allow a different column delimiter. Awk by default parses input into fields on whitespace; to use a different delimiter, use -F ':' where : is the new delimiter. Implementing this as an option to the script makes it slightly longer, so I'm leaving that as an exercise for the reader (though a possible sketch follows after the usage examples below).
Usage
Given a file file:
1 2 3
4 5 6
You can either pass it via stdin (using a useless cat merely as a placeholder for something more useful);
$ cat file | sh script.sh 2
2
5
Or provide it as an argument to the script:
$ sh script.sh 2 file
2
5
Here, sh script.sh is assuming that the script is saved as script.sh in the current directory; if you save it with a more useful name somewhere in your PATH and mark it executable, as in the instructions above, obviously use the useful name instead (and no sh).
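For the impatient, here is one possible sketch of that delimiter option (a hypothetical -F flag bolted onto the same script; the names and layout are just one way to do it):
#!/bin/sh
# usage: x [-F delim] col [file...]
if [ "$1" = "-F" ]; then
    fs=$2
    shift 2
    col=$1
    shift
    awk -F "$fs" -v col="$col" '{print $col}' "${@--}"
else
    col=$1
    shift
    awk -v col="$col" '{print $col}' "${@--}"
fi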
It looks like you already have a solution. To make things easier, why not just put your command in a bash script (with a short name) and just run that instead of typing out that 'long' command every time?
If you are ok with manually selecting the column, you could be very fast using pick:
svn st | pick | xargs rm
Just go to any cell of the 2nd column, press c, and then hit enter.
Note that the file path does not have to be in the second column of svn st output. For example, if you modify a file and also modify its property, it will be in the 3rd column.
See possible output examples in:
svn help st
Example output:
M wc/bar.c
A + wc/qax.c
I suggest cutting the first 8 characters with:
svn st | cut -c8- | while read FILE; do echo whatever with "$FILE"; done
If you want to be 100% sure, and deal with fancy filenames with white space at the end for example, you need to parse xml output:
svn st --xml | grep -o 'path=".*"' | sed 's/^path="//; s/"$//'
Of course you may want to use some real XML parser instead of grep/sed.
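For example, a minimal sketch with xmlstarlet (assuming it is installed; svn st --xml wraps each item in an <entry path="..."> element):
# Print the path attribute of every <entry> element, one per line.
svn st --xml | xmlstarlet sel -t -m '//entry' -v '@path' -n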

Case-insensitive search and replace with sed

I'm trying to use SED to extract text from a log file. I can do a search-and-replace without too much trouble:
sed 's/foo/bar/' mylog.txt
However, I want to make the search case-insensitive. From what I've googled, it looks like appending i to the end of the command should work:
sed 's/foo/bar/i' mylog.txt
However, this gives me an error message:
sed: 1: "s/foo/bar/i": bad flag in substitute command: 'i'
What's going wrong here, and how do I fix it?
Update: Starting with macOS Big Sur (11.0), sed now does support the I flag for case-insensitive matching, so the command in the question should now work (BSD sed doesn't report its version, but you can go by the date at the bottom of the man page, which should be March 27, 2017 or more recent); a simple example:
# BSD sed on macOS Big Sur and above (and GNU sed, the default on Linux)
$ sed 's/ö/#/I' <<<'FÖO'
F#O # `I` matched the uppercase Ö correctly against its lowercase counterpart
Note: I (uppercase) is the documented form of the flag, but i works as well.
Similarly, starting with macOS Big Sur (11.0) awk now is locale-aware (awk --version should report 20200816 or more recent):
# BSD awk on macOS Big Sur and above (and GNU awk, the default on Linux)
$ awk '{print tolower($0)}' <<<'FÖO'
föo # non-ASCII character Ö was properly lowercased
The following applies to macOS up to Catalina (10.15):
To be clear: on macOS, sed - which is the BSD implementation - does NOT support case-insensitive matching; hard to believe, but true. The formerly accepted answer, which itself shows a GNU sed command, gained that status because of the Perl-based solution mentioned in its comments.
To make that Perl solution work with foreign characters as well, via UTF-8, use something like:
perl -C -Mutf8 -pe 's/öœ/oo/i' <<< "FÖŒ" # -> "Foo"
-C turns on UTF-8 support for streams and files, assuming the current locale is UTF-8-based.
-Mutf8 tells Perl to interpret the source code as UTF-8 (in this case, the string passed to -pe); this is the shorter equivalent of the more verbose -e 'use utf8;'. (Thanks, Mark Reed.)
(Note that using awk is not an option either, as awk on macOS (i.e., BWK awk / BSD awk) appears to be completely unaware of locales: its tolower() and toupper() functions ignore foreign characters, and sub()/gsub() don't have case-insensitivity flags to begin with.)
A note on the relationship of sed and awk to the POSIX standard:
BSD sed and awk limit their functionality mostly to what the POSIX sed and POSIX awk specs mandate, whereas their GNU counterparts implement many more extensions.
Editor's note: This solution doesn't work on macOS (out of the box), because it only applies to GNU sed, whereas macOS comes with BSD sed.
Capitalize the 'I'.
sed 's/foo/bar/I' file
Another workaround for sed on Mac OS X is to install gsed from MacPorts or Homebrew and then create the alias sed='gsed'.
If you are doing pattern matching first, e.g.,
/pattern/s/xx/yy/g
then you want to put the I after the pattern:
/pattern/Is/xx/yy/g
Example:
echo Fred | sed '/fred/Is//willma/g'
returns willma; without the I, it returns the string untouched (Fred).
The sed FAQ addresses the closely related case-insensitive search. It points out that a) many versions of sed support a flag for it, and b) it's awkward to do in sed; you should rather use awk or Perl.
But to do it in POSIX sed, they suggest three options (adapted for substitution here):
Convert to uppercase and store the original line in the hold space; this won't work for substitutions, though, as the original content will be restored before printing, so it's only good for inserting or appending lines based on a case-insensitive match (see the sketch after this list).
Maybe the possibilities are limited to FOO, Foo and foo. These can be covered by
s/FOO/bar/;s/[Ff]oo/bar/
To search for all possible matches, one can use bracket expressions for each character:
s/[Ff][Oo][Oo]/bar/
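As a sketch of the first option (the marker text is just an example): this appends a tag to lines matching foo case-insensitively, by uppercasing a scratch copy while keeping the original line in the hold space:
sed 'h; y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/; /FOO/{x; s/$/ <-- matched/; x;}; x' mylog.txt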
The Mac version of sed seems a bit limited. One way to work around this is to use a Linux container (via Docker), which has a usable version of sed:
cat your_file.txt | docker run -i busybox /bin/sed -r 's/[0-9]{4}/****/Ig'
Use following to replace all occurrences:
sed 's/foo/bar/gI' mylog.txt
I had a similar need, and came up with this:
this command to simply find all the files:
grep -i -l -r foo ./*
this one to exclude this_shell.sh (in case you put the command in a script called this_shell.sh), tee the output to the console to see what happened, and then use sed on each file name found to replace the text foo with bar:
grep -i -l -r --exclude "this_shell.sh" foo ./* | tee /dev/fd/2 | while read -r x; do sed -b -i 's/foo/bar/gi' "$x"; done
I chose this method because I didn't like having timestamps changed on files that were not modified. Feeding in the grep result means only files containing the target text are touched (which likely improves performance/speed as well).
Be sure to back up your files and test before using. This may not work in some environments for files with embedded spaces.
The following should be fine:
sed -i 's/foo/bar/gi' mylog.txt

Colorized grep -- viewing the entire file with highlighted matches

I find grep's --color=always flag to be tremendously useful. However, grep only prints lines with matches (unless you ask for context lines). Given that each line it prints has a match, the highlighting doesn't add as much capability as it could.
I'd really like to cat a file and see the entire file with the pattern matches highlighted.
Is there some way I can tell grep to print every line being read regardless of whether there's a match? I know I could write a script to run grep on every line of a file, but I was curious whether this was possible with standard grep.
Here are some ways to do it:
grep --color 'pattern\|$' file
grep --color -E 'pattern|$' file
egrep --color 'pattern|$' file
The | symbol is the OR operator. Either escape it using \ or tell grep that the search text has to be interpreted as regular expressions by adding -E or using the egrep command instead of grep.
The search text "pattern|$" is actually a trick, it will match lines that have pattern OR lines that have an end. Because all lines have an end, all lines are matched, but the end of a line isn't actually any characters, so it won't be colored.
To also pass the colored parts through pipes, e.g. towards less, provide the always parameter to --color:
grep --color=always 'pattern\|$' file | less -r
grep --color=always -E 'pattern|$' file | less -r
egrep --color=always 'pattern|$' file | less -r
Here's something along the same lines. Chances are, you'll be using less anyway, so try this:
less -p pattern file
It will highlight the pattern and jump to the first occurrence of it in the file.
You can jump to the next occurrence with n and to the previous occurrence with p. Quit with q.
I'd like to recommend ack -- better than grep, a power search tool for programmers.
$ ack --color --passthru --pager="${PAGER:-less -R}" pattern files
$ ack --color --passthru pattern files | less -R
$ export ACK_PAGER_COLOR="${PAGER:-less -R}"
$ ack --passthru pattern files
I love it because it defaults to recursive searching of directories (and does so much smarter than grep -r), supports full Perl regular expressions (rather than the POSIXish regex(3)), and has a much nicer context display when searching many files.
You can use my highlight script from https://github.com/kepkin/dev-shell-essentials
It's better than grep because you can highlight each match with its own color.
$ command_here | highlight green "input" | highlight red "output"
You can also create a small helper function. Add this to your .bashrc (or .bash_profile on macOS):
function grepe {
grep --color -E "$1|$" ${2:+"$2"}
}
You can now use it like this: ifconfig | grepe inet or grepe css index.html.
(PS: don't forget to source ~/.bashrc to reload it in the current session.)
Use colout program: http://nojhan.github.io/colout/
It is designed to add color highlights to a text stream. Given a regex and a color (e.g. "red"), it reproduces a text stream with matches highlighted. e.g:
# cat logfile but highlight instances of 'ERROR' in red
colout ERROR red <logfile
You can chain multiple invocations to add multiple different color highlights:
tail -f /var/log/nginx/access.log | \
colout ' 5\d\d ' red | \
colout ' 4\d\d ' yellow | \
colout ' 3\d\d ' cyan | \
colout ' 2\d\d ' green
Or you can achieve the same thing by using a regex with N groups (parenthesised parts of the regex), followed by a comma separated list of N colors.
vagrant status | \
colout \
'\''(^.+ running)|(^.+suspended)|(^.+not running)'\'' \
green,yellow,red
The -z option for grep is also pretty slick!
cat file1 | grep -z "pattern"
As grep -E '|pattern' has already been suggested, I just wanted to clarify that it's possible to highlight a whole line too.
For example, tail -f somelog | grep --color -E '| \[2\].*' (specifically, the leading | in the pattern, which matches the empty string on every line).
I use rcg from "Linux Server Hacks", O'Reilly. It's perfect for what you want and can highlight multiple expressions each with different colours.
#!/usr/bin/perl -w
#
# regexp coloured glasses - from Linux Server Hacks from O'Reilly
#
# e.g. rcg "fatal" "BOLD . YELLOW . ON_WHITE" /var/adm/messages
#
use strict;
use Term::ANSIColor qw(:constants);
my %target = ( );
while (my $arg = shift) {
my $clr = shift;
if (($arg =~ /^-/) || !$clr) {
print "Usage: rcg [regex] [color] [regex] [color] ...\n";
exit(2);
}
#
# Ugly, lazy, pathetic hack here.
#
$target{$arg} = eval($clr);
}
my $rst = RESET;
while(<>) {
foreach my $x (keys(%target)) {
s/($x)/$target{$x}$1$rst/g;
}
print
}
I added this to my .bash_aliases:
highlight() {
grep --color -E "$1|\$"
}
The sed way
As there is already a lot of different solution, but none show sed as solution,
and because sed is lighter and quicker than grep, I prefer to use sed for this kind of job:
sed 's/pattern/\o33[47;31;1m&\o033[0m/' file
This may seem less intuitive, so here is how it works:
\o33 is the sed syntax to generate the character octal 033 -> Escape.
(Some shells and editors also allow entering <Ctrl>-<V> followed by <Esc>, to type the character directly.)
Esc [ 47 ; 31 ; 1 m is an ANSI escape code: Background grey, foreground red and bold face.
& will re-print the pattern.
Esc [ 0 m returns the colors to default.
You could also highlight the entire line, but mark the pattern as red:
sed -E <file -e \
's/^(.*)(pattern)(.*)/\o33[30;47m\1\o33[31;1m\2\o33[0;30;47m\3\o33[0m/'
Dynamic tail -f, following logfiles
One advantage of using sed: you can send an alarm beep to the console, using the bell ASCII character 0x7. I often use sed like:
sudo tail -f /var/log/kern.log |
sed -ue 's/[lL]ink .*\([uU]p\|[dD]own\).*/\o33[47;31;1m&\o33[0m\o7/'
-u stands for unbuffered; this ensures each line is processed immediately.
So I hear a beep instantly when I connect or disconnect my ethernet cable.
Of course, instead of the link up pattern, you could watch for USB in the same file, or even search for from=.*alice@bobserver.org in /var/log/mail.log (if you're Charlie, anxiously awaiting an email from Alice ;)...
To highlight patterns while viewing the whole file, h can do this.
Plus it uses different colors for different patterns.
cat FILE | h 'PAT1' 'PAT2' ...
You can also pipe the output of h to less -R for better reading.
To grep and use 1 color for each pattern, cxpgrep could be a good fit.
Use ripgrep, aka rg: https://github.com/BurntSushi/ripgrep
rg --passthru...
Color is the default:
rg -t tf -e 'key.*tfstate' -e dynamodb_table
--passthru
Print both matching and non-matching lines.
Another way to achieve a similar effect is by modifying your pattern to
match the empty string.
For example, if you are searching using rg foo then using
rg "^|foo" instead will emit every line in every file searched, but only
occurrences of foo will be highlighted.
This flag enables the same behavior without needing to modify the pattern.
Sacrilege, granted, but grep has gotten complacent.
brew/apt/rpm/whatever install ripgrep
You'll never go back.
another dirty way:
grep -A80 -B80 --color FIND_THIS IN_FILE
I did an
alias grepa='grep -A80 -B80 --color'
in bashrc.
Here is a shell script that uses awk's gsub function to replace the text you're searching for with the proper escape sequence to display it in bright red:
#! /bin/bash
# Wrap every match of the pattern ($1) in ANSI bright-red escapes; read from
# the file given as $2, or from stdin if no file is given.
awk -v str="$1" 'BEGIN{repltext=sprintf("%c[1;31;40m&%c[0m", 0x1B, 0x1B);} {gsub(str, repltext); print}' "${2:--}"
Use it like so:
$ ./cgrep pattern [file]
Unfortunately, it doesn't have all the functionality of grep.
For more information , you can refer to an article "So You Like Color" in Linux Journal
One other answer mentioned grep's -Cn switch, which includes n lines of context. I sometimes do this with n=99 as a quick-and-dirty way of getting [at least] a screenful of context when the egrep pattern seems too fiddly, or when I'm on a machine on which I haven't installed rcg and/or ccze.
I recently discovered ccze which is a more powerful colorizer. My only complaint is that it is screen-oriented (like less, which I never use for that reason) unless you specify the -A switch for "raw ANSI" output.
+1 for the rcg mention above. It is still my favorite since it is so simple to customize in an alias. Something like this is usually in my ~/.bashrc:
alias tailc='tail -f /my/app/log/file | rcg send "BOLD GREEN" receive "CYAN" error "RED"'
Alternatively you can use The Silver Searcher and do
ag <search> --passthrough
I use the following command for a similar purpose:
grep -C 100 searchtext file
This tells grep to print 100 lines of context before and after each highlighted match.
It might seem like a dirty hack.
grep "^\|highlight1\|highlight2\|highlight3" filename
This means: match the beginning of the line (^) or highlight1 or highlight2 or highlight3. As a result, all highlight* pattern matches get highlighted, even when they occur on the same line.
Ok, this is one way:
wc -l filename
will give you the line count, say NN; then you can do:
grep -C NN --color=always pattern filename
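Or, as a single sketch combining the two steps (assuming the file is not huge, since all of it becomes "context"):
grep -C "$(wc -l < filename)" --color=always pattern filename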
If you want to highlight several patterns with different colors, see this bash script.
Basic usage:
echo warn error debug info 10 nil | colog
You can change patterns and colors while it runs by pressing one key and then the enter key.
Here's my approach, inspired by @kepkin's solution:
# Adds ANSI colors to matched terms, similar to grep --color but without
# filtering unmatched lines. Example:
# noisy_command | highlight ERROR INFO
#
# Each argument is passed into sed as a matching pattern and matches are
# colored. Multiple arguments will use separate colors.
#
# Inspired by https://stackoverflow.com/a/25357856
highlight() {
# color cycles from 0-5, (shifted 31-36), i.e. r,g,y,b,m,c
local color=0 patterns=()
for term in "$@"; do
patterns+=("$(printf 's|%s|\e[%sm\\0\e[0m|g' "${term//|/\\|}" "$(( color+31 ))")")
color=$(( (color+1) % 6 ))
done
sed -f <(printf '%s\n' "${patterns[@]}")
}
This accepts multiple arguments (but doesn't let you customize the colors). Example:
$ noisy_command | highlight ERROR WARN
Is there some way I can tell grep to print every line being read
regardless of whether there's a match?
Option -C999 will do the trick in the absence of an option to display all context lines. Most other grep variants support this too. However: 1) no output is produced when no match is found, and 2) this option has a negative impact on grep's efficiency: when the -C value is large, this many lines may have to be temporarily stored in memory for grep to determine which lines of context to display when a match occurs. Note that grep implementations do not load whole input files, but rather read a few lines at a time or use a sliding window over the input; the "before" part of the context has to be kept in a window (memory) to output the "before" context lines later, when a match is found.
A pattern such as ^|PATTERN or PATTERN|$ or any empty-matching sub-pattern for that matter such as [^ -~]?|PATTERN is a nice trick. However, 1) these patterns don't show non-matching lines highlighted as context and 2) this can't be used in combination with some other grep options, such as -F and -w for example.
So none of these approaches is satisfying to me. I'm using ugrep, an enhanced grep with option -y to efficiently display all non-matching output as color-highlighted context lines. Other grep-like tools such as ag and ripgrep also offer a pass-through option, but ugrep is compatible with GNU/BSD grep and offers a superset of grep options like -y and -Q. For example, here is what option -y shows when combined with -Q (an interactive query UI to enter patterns):
ugrep -Q -y FILE ...
Also try:
egrep 'pattern1|pattern2' FILE.txt | less -Sp 'pattern1|pattern2'
This will give you paged output with the pattern(s) highlighted.
