I am trying to find the lines in a file in which none of the numbers appear in the preceding line. The file has around 400,000 lines. This is an example of the input file:
320 5120
240 326 5120
240 326 5120
241 333 514
240 326 5120
240 326 5120
320 5120
240
100 112
240 326 5120
240 326 5120
320 5120
The expected output is:
241 333 514
240 326 5120
240
100 112
240 326 5120
So far I have found this command:
$ awk '!seen[$1]++' file
320 5120
240 326 5120
241 333 514
100 112
which gives me the unique values of column 1, and I can do the same separately for the other columns. Can I somehow get the information I want from this command? Any help would be appreciated.
A Perl command-line program ("one"-liner), allowing for things other than numbers in the file:
perl -wnE'
@n = /([0-9]+)/g;
say "@n" if not grep { exists $seen_nums{$_} } @n;
%seen_nums = map { $_ => 1 } @n
' data.txt
This prints the desired output. It also prints the very first line (correctly). Since the program parses lines for numbers it can be used for files with headers, text-only (comment?) lines, etc.
But if the data is sure to have only numbers then we can use Perl's -a switch, with which the words on each line are available in the @F array. Also shrunk a little to actually fit on a line:
perl -wlanE'grep exists $n{$_}, @F or say; %n = map { $_=>1 } @F' data.txt
A brief explanation of the switches (see perlrun for the full documentation):
-w turns on warnings
-l strips the newline, and can tack it back on, with few more subtleties
-a turns on "autosplit" (when used with -n or -p), so that @F, which contains the words on the line, is available in the program. On newer Perls this sets -n as well
-n Critical for processing files or STDIN -- opens the resource and sets up a loop over lines. Run with -MO=Deparse to see what it does
-E The -e is what makes it evaluate everything between the following quotes as Perl code. The capital (E) also turns on features, which I use mostly for say. (Doing this has drawbacks, since it enables all features and makes things not backwards compatible anymore.)
Note: The first line can be omitted by adding the condition $. != 1 to the print.
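For instance, the shorter one-liner above could become (a sketch with that condition folded in, not part of the original answer):
perl -wlanE'say if $. != 1 and not grep { exists $n{$_} } @F; %n = map { $_=>1 } @F' data.txt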
Here is an awk solution:
$ awk 'NR>1{p=1; for (i=1;i<=NF;i++){if($i in a)p=0}} {delete a; for (i=1;i<=NF;i++)a[$i]} p' file
241 333 514
240 326 5120
240
100 112
240 326 5120
How it works
NR>1{...}
Perform the commands in braces for all except the first line. Those commands are:
p=1
Initialize p to true (nonzero)
for (i=1;i<=NF;i++){if($i in a)p=0}
If any field is a key in array a, then set p to false (zero).
delete a
Delete array a.
for (i=1;i<=NF;i++)a[$i]
Create a key in array a for every field on the current line.
p
If p is true, print the line.
Multiple line version
Or, for those who prefer their code spread over multiple lines:
awk '
NR>1{
p=1
for (i=1;i<=NF;i++){
if($i in a)p=0}
}
{
delete a
for (i=1;i<=NF;i++)
a[$i]
}
p' file
Here's a perl one-liner:
$ perl -M-warnings -lane 'print unless @F ~~ %prev; %prev = map { $_ => 1 } @F;' input.txt
320 5120
241 333 514
240 326 5120
240
100 112
240 326 5120
It uses the frowned-upon smart match operator in the name of conciseness. With smartmatch, ARRAY ~~ HASH returns true if any elements of the array are keys in the hash, which is perfect for this use case. If this was a standalone script and not a one-liner I'd probably use a different approach, though.
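For instance, a smartmatch-free version of the same one-liner could use any from List::Util (a sketch, not part of the original answer):
$ perl -MList::Util=any -lane 'print unless any { $prev{$_} } @F; %prev = map { $_ => 1 } @F;' input.txt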
(Is there a reason the first line of your sample input isn't in your expected output even though it meets the criteria?)
Here is a perl solution that does that. It tests whether any of the numbers were seen on the previous line.
This includes printing the first line, as noted by Shawn, which might be needed. If not, just exclude the print join(... line in the code.
#!/usr/bin/perl
use strict;
use warnings;
use List::Util 'any';
open my $fh, '<', 'f0.txt' or die $!;
my @nums = split ' ', <$fh>;
my %seen = map { $_ => 1 } @nums;
print join(' ', @nums), "\n"; # print the first line
while (<$fh>) {
    @nums = split;
    print unless any { $seen{$_} } @nums;
    %seen = map { $_ => 1 } @nums;
}
close $fh or die $!;
Output is:
320 5120
241 333 514
240 326 5120
240
100 112
240 326 5120
A simple awk that checks, by means of a regex match, whether each number is in the previous line. The idea is:
the previous line is stored in variable t
if any of the fields is found in the previous line, we can skip to the next line.
This is done in the following way:
$ awk '{for(i=1;i<=NF;++i) if (FS t FS ~ FS $i FS) {t=$0; next}; t=$0}1'
320 5120
241 333 514
240 326 5120
240
100 112
240 326 5120
The trick to make it work is to ensure that the line starts and ends with a field separator. If we just did the test t ~ $i, the number 25 could match against the number 255. But by ensuring that all numbers are sandwiched between field separators, we can simply do the test FS t FS ~ FS $i FS.
Note: if you don't want the first line to be printed, replace the last 1 by (FNR>1).
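With that change, the command becomes (a sketch):
$ awk '{for(i=1;i<=NF;++i) if (FS t FS ~ FS $i FS) {t=$0; next}; t=$0}(FNR>1)' file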
Given your updated input:
$ awk '$0 !~ p; {gsub(/ /,"|"); p="(^| )("$0")( |$)"}' file
241 333 514
240 326 5120
240
100 112
240 326 5120
The above just converts the previous line read into a regexp like (^| )(320|5120)( |$) and then does a regexp comparison to see if the current line matches it, printing the current line if it doesn't match the modified previous line. This approach would only lead to false matches if your fields contained RE metacharacters, which yours obviously don't since they're all digits.
Related
I need to divide my text file. It contains numbers from 29026 to 58050; this is a small fragment of my input file:
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 ...........................................................
................................................58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
I must create 225 index groups. Every group must have 129 numbers. So my output will look like
[ Lipid 1 ]
29026 29027 29028 29029 ...................................
...............
...........................29150 29151 29152 29153 29154
[ Lipid 2 ]
...
...
[ Lipid 225 ]
57921 57922 57923 57924 57925 57926......
.....
.......................
58044 58045 58046 58047 58048 58049 58050
Do you have any idea?
Edit
My text file
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 29046 29047 29048 29049 29050 29051 29052 29053 29054 29055
29056 29057 29058 29059 29060 29061 29062 29063 29064 29065 29066 29067 29068 29069 29070
29071 29072 29073 29074 29075 29076 29077 29078 29079 29080 29081 29082 29083 29084 29085
29086 29087 29088 29089 29090 29091 29092 29093 29094 29095 29096 29097 29098 29099 29100
29101 29102 29103 29104 29105 29106 29107 29108 29109 29110 29111 29112 29113 29114 29115
29116 29117 29118 29119 29120 29121 29122 29123 29124 29125 29126 29127 29128 29129 29130
29131 29132 29133 29134 29135 29136 29137 29138 29139 29140 29141 29142 29143 29144 29145
29146 29147 29148 29149 29150 29151 29152 29153 29154 29155 29156 29157 29158 29159 29160
29161 29162 29163 29164 29165 29166 29167 29168 29169 29170 29171 29172 29173 29174 29175
29176 29177 29178 29179 29180 29181 29182 29183 29184 29185 29186 29187 29188 29189 29190
29191 29192 29193 29194 29195 29196 29197 29198 29199 29200 29201 29202 29203 29204 29205
29206 29207 29208 29209 29210 29211 29212 29213 29214 29215 29216 29217 29218 29219 29220
29221 29222 29223 29224 29225 29226 29227 29228 29229 29230 29231 29232 29233 29234 29235
29236 29237 29238 29239 29240 29241 29242 29243 29244 29245 29246 29247 29248 29249 29250
29251 29252 29253 29254 29255 29256 29257 29258 29259 29260 29261 29262 29263 29264 29265
29266 29267 29268 29269 29270 29271 29272 29273 29274 29275 29276 29277 29278 29279 29280
29281 29282 29283 29284 29285 29286 29287 29288 29289 29290 29291 29292 29293 29294 29295
29296 29297 29298 29299 29300 29301 29302 29303 29304 29305 29306 29307 29308 29309 29310
29311 29312 29313 29314 29315 29316 29317 29318 29319 29320 29321 29322 29323 29324 29325
29326 29327 29328 29329 29330 29331 29332 29333 29334 29335 29336 29337 29338 29339 29340
29341 29342 29343 29344 29345 29346 29347 29348 29349 29350 29351 29352 29353 29354 29355
29356 29357 29358 29359 29360 29361 29362 29363 29364 29365 29366 29367 29368 29369 29370
29371 29372 29373 29374 29375 29376 29377 29378 29379 29380 29381 29382 29383 29384 29385
29386 29387 29388 29389 29390 29391 29392 29393 29394 29395 29396 29397 29398 29399 29400
29401 29402 29403 29404 29405 29406 29407 29408 29409 29410 29411 29412 29413 29414 29415
29416 29417 29418 29419 29420 29421 29422 29423 29424 29425 29426 29427 29428 29429 29430
here I have thousands of lines, but I will not paste all of this text
57736 57737 57738 57739 57740 57741 57742 57743 57744 57745 57746 57747 57748 57749 57750
57751 57752 57753 57754 57755 57756 57757 57758 57759 57760 57761 57762 57763 57764 57765
57766 57767 57768 57769 57770 57771 57772 57773 57774 57775 57776 57777 57778 57779 57780
57781 57782 57783 57784 57785 57786 57787 57788 57789 57790 57791 57792 57793 57794 57795
57796 57797 57798 57799 57800 57801 57802 57803 57804 57805 57806 57807 57808 57809 57810
57811 57812 57813 57814 57815 57816 57817 57818 57819 57820 57821 57822 57823 57824 57825
57826 57827 57828 57829 57830 57831 57832 57833 57834 57835 57836 57837 57838 57839 57840
57841 57842 57843 57844 57845 57846 57847 57848 57849 57850 57851 57852 57853 57854 57855
57856 57857 57858 57859 57860 57861 57862 57863 57864 57865 57866 57867 57868 57869 57870
57871 57872 57873 57874 57875 57876 57877 57878 57879 57880 57881 57882 57883 57884 57885
57886 57887 57888 57889 57890 57891 57892 57893 57894 57895 57896 57897 57898 57899 57900
57901 57902 57903 57904 57905 57906 57907 57908 57909 57910 57911 57912 57913 57914 57915
57916 57917 57918 57919 57920 57921 57922 57923 57924 57925 57926 57927 57928 57929 57930
57931 57932 57933 57934 57935 57936 57937 57938 57939 57940 57941 57942 57943 57944 57945
57946 57947 57948 57949 57950 57951 57952 57953 57954 57955 57956 57957 57958 57959 57960
57961 57962 57963 57964 57965 57966 57967 57968 57969 57970 57971 57972 57973 57974 57975
57976 57977 57978 57979 57980 57981 57982 57983 57984 57985 57986 57987 57988 57989 57990
57991 57992 57993 57994 57995 57996 57997 57998 57999 58000 58001 58002 58003 58004 58005
58006 58007 58008 58009 58010 58011 58012 58013 58014 58015 58016 58017 58018 58019 58020
58021 58022 58023 58024 58025 58026 58027 58028 58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
Here is how I understood your problem:
The input is a text file of several lines, with fifteen numbers on each line, separated by spaces or tabs. Some lines (perhaps the last one) may have fewer than fifteen numbers. (In fact, in the solution below it doesn't matter how many numbers are on each line.)
You must group the numbers into sets of 129 numbers each, in sequence. The last group may have fewer than 129 numbers, if the input cardinality is not an exact multiple of 129. In the solution below, it doesn't matter how many input numbers there are (and therefore how many groups there will be in the output).
For each group of 129 numbers, you must produce a few lines of output. First, a title or label that says [Lipid n] where n is the group number, and then the numbers in that group, shown fifteen per line (so there will be eight full lines and a ninth line with only 9 numbers on it: 129 = 15 * 8 + 9).
Here's how you can do this. First let's start with a small example, and then we can look at what must be changed for a more general solution.
I will assume that your inputs can be arbitrary numbers of any length; of course, if they are consecutive numbers like you showed in your sample data, then the problem is trivial and completely uninteresting. So let's assume your numbers are in fact any numbers at all. (Not really; I wrote the solution for non-negative integers; but it can be re-written for "tokens" of non-blank characters separated by blanks.)
I start with the following input file:
$ cat lipid-inputs
124 150 178 111 143 177 116
154 194 139 183 132 180 133
185 142 101 159 122 184 151
120 188 161 136 113 189 170
We want to group the 28 input numbers into sets of ten numbers each, and present the output with (at most) seven numbers per line. So: There will be two full groups, and a third group with only eight member numbers (since we have only 28 inputs). The desired output looks like this:
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
Strategy: First write the input numbers one per line, so we can then arrange them ten per line (ten: cardinality of desired groups in the output). Then add line numbers (which will go into the label lines). Then edit the "line number" lines to add the "lipid" stuff, and break the data lines into shorter lines, showing seven tokens each (possibly fewer on the last line in each group).
Implementation: tr to break up the tokens one per line; paste reading repeatedly from standard input, ten stdin lines for each output line; then sed = to add the line numbers (on separate lines); and finally a standard sed for the final editing. The command looks like this:
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' - - - - - - - - - - |
> sed = | sed -E 's/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){7}) /\1\n/g'
The output is the one I showed already.
To generalize (so you can apply it to your problem): The number of tokens per line in the input file is irrelevant. To get 15 tokens per line in the output, change the hard-coded number 7 to 15 on the last line of the command shown above. And to allocate 129 tokens per group, instead of 10, what needs to be changed is the paste command: I show it reading ten times from stdin; you need 129. So it would be better to create a string of 129 dashes separated by spaces in a simple command, rather than hard-coding it, and to use that string as an argument to paste. I show how to do this for my example; you can adapt it for yours.
Define variables to hold your relevant values: how many tokens per lipid (129 in your case, 10 in mine) and how many tokens per line in the output (15 in your case, 7 in mine).
$ tokens_per_lipid=10
$ tokens_per_line=7
Then create a variable to hold the string - - - - [...] needed in the paste command. There are several ways to do this, here's just one:
$ paste_arg=$(yes '-' | head -n $tokens_per_lipid | tr '\n' ' ')
Let's check it:
$ echo $paste_arg
- - - - - - - - - -
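Another way to build the same string, for what it's worth, is printf's %.0s trick (a sketch, assuming bash):
$ paste_arg=$(printf -- '- %.0s' $(seq $tokens_per_lipid))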
OK, so let's re-write the command that does what you need. We must use double-quotes for the argument to sed to allow variable expansion.
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' $paste_arg |
> sed = | sed -E "s/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){$tokens_per_line}) /\1\n/g"
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
I have no clue what you really are trying to do, but maybe this does what you want
< input sed -zE 's/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
It uses Sed to insert the [ Lipid # ] string (with some newlines) every 129 occurrences of [0-9]+[^0-9]+ (which is 1 or more digits followed by 1 or more non-digits); then it uses Awk to substitute # with increasing numbers starting from one (to do so, it interprets ] as the record separator, so it can change # to the number of the record, NR); finally it uses Sed again to remove the last line, which is the trailing record separator left over from the Awk processing.
I used Awk for inserting the increasing numbers as there's no easy way to do maths in Sed; I used Sed to break the file and insert text in between as requested as I find it easier than doing it in Awk.
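To see the mechanics on a toy input, here is the same pipeline with a group size of 3 instead of 129 (a sketch; it assumes GNU sed for -z):
$ printf '1 2 3 4 5 6 7\n' | sed -zE 's/(([0-9]+[^0-9]+){3})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
[ Lipid 1 ]
1 2 3
[ Lipid 2 ]
4 5 6
7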
If you need to have all numbers on one line in the output, you can do
< input sed -zE 's/[^0-9]+/ /g;s/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
where I have just added s/[^0-9]+/ /g; to collapse whatever happens to be between numbers to a single whitespace.
I have a big file whose entries are like this.
Input:
1113
1113456
11134567
12345
1734
123
194567
From these entries, I need to find the minimal set of prefixes that can represent all the entries.
Expected output:
1113
123
1734
194567
If we have 1113 then there is no need to use 1113456 or 11134567.
Things I have tried:
I can use grep -v ^123, compare with the input file, and store the unique results in the output file. If I use a while loop, I don't know how I can delete the entries from the input file itself.
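A rough sketch of that idea (with hypothetical file names; it works, but it rescans the file once per prefix, so it is slow for a big file):
sort file > work.txt
while [ -s work.txt ]; do
    prefix=$(head -n 1 work.txt)   # lexicographically smallest entry is a minimal prefix
    echo "$prefix"
    grep -v "^$prefix" work.txt > tmp.txt   # drop every entry covered by that prefix
    mv tmp.txt work.txt
done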
I will assume that the input file is:
790234
790835
795023
79788
7985904
7902713
791
7987
7988
709576
749576
7902712
790856
79780
798599
791453
791454
791455
791456
791457
791458
791459
791460
You can use
awk '!(prev && $0~prev){prev = "^" $0; print}' <(sort file)
Returns
709576
749576
790234
7902712
7902713
790835
790856
791
795023
79780
79788
7985904
798599
7987
7988
How does it work? First it sorts the file in lexicographic order (1 < 10 < 2). Then it keeps the current minimal prefix and checks whether the following lines match it. If they do, they are skipped. If a line doesn't match, the minimal prefix is updated and the line is printed.
Let's say that input is
71
82
710
First it orders the lines, and the input becomes (lexicographic order: 71 < 710 < 82):
71
710
82
The first line is printed because the awk variable prev is not set, so the condition !(prev && $0~prev) is true. prev becomes ^71. On the next row, 710 matches the regexp ^71, so the line is skipped and prev stays ^71. On the next row, 82 does not match ^71, the condition !(prev && $0~prev) is true again, the line is printed, and prev is set to ^82.
You may use this awk command:
awk '{
n = (n != "" && index($1, n) == 1 ? n : $1)
}
p != n {
print p = n
}' <(sort file)
1113
123
1734
194567
$ awk 'NR==1 || (index($0,n)!=1){n=$0; print}' <(sort file)
1113
123
1734
194567
I have the following two files:
sequences.txt
158333741 Acaryochloris_marina_MBIC11017_uid58167 158333741 432 1 432 COG0001 0
158339504 Acaryochloris_marina_MBIC11017_uid58167 158339504 491 1 491 COG0002 0
379012832 Acetobacterium_woodii_DSM_1030_uid88073 379012832 430 1 430 COG0001 0
302391336 Acetohalobium_arabaticum_DSM_5501_uid51423 302391336 441 1 441 COG0003 0
311103820 Achromobacter_xylosoxidans_A8_uid59899 311103820 425 1 425 COG0004 0
332795879 Acidianus_hospitalis_W1_uid66875 332795879 369 1 369 COG0005 0
332796307 Acidianus_hospitalis_W1_uid66875 332796307 416 1 416 COG0005 0
allids.txt
COG0001
COG0002
COG0003
COG0004
COG0005
Now I want to read each line in allids.txt, search all lines in sequences.txt (specifically in column 7), and write, for each line in allids.txt, a file with the filename $line.txt.
My approach is to use a simple grep:
while read line; do
grep "$line" sequences.txt
done <allids.txt
but where do I incorporate the command for the output?
If there is a command that is faster, feel free to suggest!
My expected output:
COG0001.txt
158333741 Acaryochloris_marina_MBIC11017_uid58167 158333741 432 1 432 COG0001 0
379012832 Acetobacterium_woodii_DSM_1030_uid88073 379012832 430 1 430 COG0001 0
COG0002.txt
158339504 Acaryochloris_marina_MBIC11017_uid58167 158339504 491 1 491 COG0002 0
[and so on]
It is quite simple to do it using awk:
awk 'NR==FNR{ids[$1]; next} $7 in ids{print > ($7 ".txt")}' allids.txt sequences.txt
Reference: Effective AWK Programming
I suspect all you really need is:
awk '{print > ($7".txt")}' sequences.txt
That suspicion is based on your IDs file being named allids.txt (note the all) and there being no IDs in sequences.txt that don't exist in allids.txt.
Extending your approach, this seemed to work:
while read line; do
# touching is not necessary, as pointed out by @123
# touch "$line.txt"
grep "$line" sequences.txt > "$line.txt"
done <allids.txt
It produces text files with the required output. But I cannot comment on the efficiency of this approach.
EDIT:
As has been pointed out in the comments, this method is slow and would break for any file that violates the unstated assumptions used in the answer. I'm leaving it here for people to see how a quick and hacky solution can backfire.
I have a file which contains numbers, say 1 to 300. But the numbers are not continuous. A sample file looks like this:
042
043
044
045
078
198
199
200
201
202
203
212
213
214
215
238
239
240
241
242
256
257
258
Now I need to check the continuity of the number series and write the output accordingly. For example, the first 4 numbers are in series, so the output should be
042-045
Next, 078 is a lone number, so the output should be
078
for convenience it can be made to look like
078-078
Then 198 to 203 are continuous. So, next output should be
198-203
and so on. The final output should be like
042-045
078-078
198-203
212-215
238-242
256-258
I just need to know the first and last members of each continuous series and jump to the next series when a discontinuity is encountered; the output format can be manipulated. I am inclined to use an if statement and can think of a complicated thing like this:
num=`cat file | wc -l`
out1=`head -1 file`
for ((i=2;i<=$num;i++))
do
j=`echo $i-1 | bc`
var1=`cat file | awk 'NR='$j'{print}'`
var2=`cat file | awk 'NR='$i'{print}'`
var3=`echo $var2 - $var1 | bc`
if [ $var3 -gt 1 ]
then
out2=$var1
echo $out1-$out2
out1=$var2
fi
done
which works but is too lengthy. I am sure there is a shorter way of doing this.
I am also open to another straightforward command (or a few commands) in shell or awk, or a few lines of Fortran code that can do it.
Thanking you in anticipation.
This awk one-liner works for the given example:
awk 'p+1!=$1{printf "%s%s-",NR==1?"":p"\n",$1}{p=$1}END{print $1}' file
It gives the output for your data as input:
042-045
078-078
198-203
212-215
238-242
256-258
Here is a simple program in Fortran:
program test
   implicit none
   integer :: first, last, uFile, i, stat

   open( file='numbers.txt', newunit=uFile, action='read', status='old' )
   read(uFile,*,iostat=stat) i
   if ( stat /= 0 ) stop
   first = i ; last = i
   do
      read(uFile,*,iostat=stat) i
      if ( stat /= 0 ) exit
      if ( i == last+1 ) then
         last = i
      else
         ! Discontinuity: print the finished range and start a new one
         write(*,'(i3.3,a,i3.3)') first,'-',last
         first = i ; last = i
      endif
   enddo
   ! Print the final range
   write(*,'(i3.3,a,i3.3)') first,'-',last
end program
The output is
042-045
078-078
198-203
212-215
238-242
256-258
I created a script that auto-logins to a router and checks the current CPU load; if the load exceeds a certain threshold, I need it to print the current CPU value to standard output.
I would like to search the script output for a certain pattern (the value 80 in this case, which is the threshold for high CPU load), and then, for each instance of the pattern, check whether the current value is greater than 80; if so, print the 5 lines before the pattern followed by the current line containing the pattern.
Question1: how to loop over each instance of the pattern and apply some code on each of them separately?
Question2: How to print n lines before the pattern followed by x lines after the pattern?
For example, I used awk to search for the pattern "health" and print 6 lines after it, as below:
awk '/health/{x=NR+6}(NR<=x){print}' ./logs/CpuCheck.log
I would like to do the same for the pattern "80", but this time print 5 lines before it and one line after, and only if $3 (representing the current CPU load) exceeds the value 80.
Below is the output of the auto-login script (file name: CpuCheck.log):
ABCD-> show health xxxxxxxxxx
* - current value exceeds threshold
1 Min 1 Hr 1 Hr
Cpu Limit Curr Avg Avg Max
-----------------+-------+------+------+-----+----
01 80 39 36 36 47
WXYZ-> show health xxxxxxxxxx
* - current value exceeds threshold
1 Min 1 Hr 1 Hr
Cpu Limit Curr Avg Avg Max
-----------------+-------+------+------+-----+----
01 80 29 31 31 43
Thanks in advance for the help
Rather than use awk, you could use the -B and -A switches to grep, which print a number of lines before and after a pattern is matched:
grep -E -B 5 -A 1 '^[0-9]+[[:space:]]+80[[:space:]]+(100|9[0-9]|8[1-9])' CpuCheck.log
The pattern matches lines which start with some numbers, followed by spaces, followed by 80, followed by a number between 81 and 100. The -E switch enables extended regular expressions (EREs), which are needed if you want to use the + character to mean "one or more". If your version of grep doesn't support EREs, you can instead use the slightly more verbose \{1,\} syntax:
grep -B 5 -A 1 '^[0-9]\{1,\}[[:space:]]\{1,\}80[[:space:]]\{1,\}\(100\|9[0-9]\|8[1-9]\)' CpuCheck.log
If grep isn't an option, one alternative would be to use awk. The easiest way would be to store all of the lines in a buffer:
awk 'f-->0;{a[NR]=$0}/^[0-9]+[[:space:]]+80[[:space:]]+(100|9[0-9]|8[1-9])/{for(i=NR-5;i<=NR;++i)print i, a[i];f=1}'
This stores every line in an array a. When the third column is greater than 80, it prints the previous 5 lines from the array. It also sets the flag f to 1, so that f-->0 is true for the next line, causing it to be printed.
Originally I had opted for a comparison $3>80 instead of the regular expression but this isn't a good idea due to the varying format of the lines.
If the log file is really big, meaning that reading the whole thing into memory is unfeasible, you could implement a circular buffer so that only the previous 5 lines were stored, or alternatively, read the file twice.
Unfortunately, awk is stream-oriented and doesn't have a simple way to get the lines before the current line. But that doesn't mean it isn't possible:
awk '
BEGIN {
bufferSize = 6;
}
{
buffer[NR % bufferSize] = $0;
}
$2 == 80 && $3 > 80 {
# print the five lines before the match and the line with the match
for (i = 1; i <= bufferSize; i++) {
print buffer[(NR + i) % bufferSize];
}
}
' ./logs/CpuCheck.log
I think the easiest way is with awk, reading the file twice.
This should use essentially 0 memory except whatever is used to store the line numbers.
If there is only one occurrence:
awk 'NR==FNR&&$2=="80"{to=NR+1;from=NR-5}NR!=FNR&&FNR<=to&&FNR>=from' file{,}
If there is more than one occurrence:
awk 'NR==FNR&&$2=="80"{to[++x]=NR+1;from[x]=NR-5}
NR!=FNR{for(i in to)if(FNR<=to[i]&&FNR>=from[i]){print;next}}' file{,}
Input/output
Input
1
2
3
4
5
6
7
8
9
10
11
12
01 80 39 36 36 47
13
14
15
16
17
01 80 39 36 36 47
18
19
20
Output
8
9
10
11
12
01 80 39 36 36 47
13
14
15
16
17
01 80 39 36 36 47
18
How it works
NR==FNR&&$2=="80"{to[++x]=NR+1;from[x]=NR-5}
In the first pass over the file, if the second field is 80, set to and from to the record number plus or minus whatever offsets you want (here +1 and -5).
Increment the occurrence variable x.
NR!=FNR
In the second file
for(i in to)
For each occurrence
if(FNR<=to[i]&&FNR>=from[i]){print;next}
If the current record number (in this file) is between this occurrence's to and from, then print the line. next prevents the line from being printed multiple times if occurrences of the pattern are close together.
file{,}
Pass the file twice as two args; the {,} brace expansion expands to file file.
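For instance, in bash:
$ echo CpuCheck.log{,}
CpuCheck.log CpuCheck.log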