Select full block of text delimited by some chars - bash

I have a very large text file (40GB gzipped) where blocks of data are separated by //.
How can I select blocks of data where a certain line matches some criterion? That is, can I grep a pattern and extend the selection in both directions to the // delimiter? I can make no assumptions on the size of the block and the position of the line.
not interesting 1
not interesting 2
//
get the whole block 1
MATCH THIS LINE
get the whole block 2
get the whole block 3
//
not interesting 1
not interesting 2
//
I want to select the block of data with MATCH THIS LINE:
get the whole block 1
MATCH THIS LINE
get the whole block 2
get the whole block 3
I tried sed but can't get my head around the pattern definition. This for example should match from // to MATCH THIS LINE:
sed -n -e '/\/\//,/MATCH THIS LINE/ p' file.txt
But it fails matching the //.
Is it possible to achieve this with GNU command line tools?

With GNU awk (due to multi-char RS), you can set the record separator to //, so that every record is a //-delimited set of characters:
$ awk -v RS="//" '/MATCH THIS LINE/' file

get the whole block 1
MATCH THIS LINE
get the whole block 2
get the whole block 3

Note this leaves an empty line above and below the block: the record keeps the newline just after the opening // and the one just before the closing //, and both get printed back. To remove them you can pipe to awk 'NF', as shown below.
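For example, combining the two steps on the sample file:
$ awk -v RS="//" '/MATCH THIS LINE/' file | awk 'NF'
get the whole block 1
MATCH THIS LINE
get the whole block 2
get the whole block 3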
To print the separator between blocks of data you can say (thanks 123):
awk -v RS="//" '/MATCH THIS LINE/{print RT $0 RT}' file
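Since the real file in the question is gzipped, you can also stream it through zcat instead of decompressing to disk first — a sketch, with file.gz standing in for the actual archive name:
zcat file.gz | awk -v RS="//" '/MATCH THIS LINE/{print RT $0 RT}'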

Related

How to get all lines from a file after the last empty line?

Having a file like foo.txt with content
1
2
3

4
5
How do I get the lines starting with 4 and 5 out of it (everything after the last empty line), assuming the number of lines can vary?
Updated
Let's try a slightly simpler approach with just sed.
$: sed -n '/^$/{g;D;}; N; $p;' foo.txt
4
5
-n says don't print unless I tell you to.
/^$/{g;D;}; says on each blank line, clear it all out with this:
g : Replace the contents of the pattern space with the contents of the hold space. Since we never put anything in, this erases the (possibly long accumulated) pattern space. Note that I could have used z since this is GNU, but I wanted to break it out for non-GNU seds below, and in this case this works for both.
D : remove the now empty line from the pattern space, and go read the next.
Now previously accumulated lines have been wiped if (and only if) we saw a blank line. The D loops back to the beginning, so N will never see a blank line.
N : Add a newline to the pattern space, then append the next line of input to the pattern space. This is done on every line except blanks, after which the pattern space will be empty.
This accumulates all nonblanks until either 1) a blank is hit, which will clear and restart the buffer as above, or 2) we reach EOF with a buffer intact.
Finally, $p says on the LAST line (which will already have been added to the pattern space unless the last line was blank, which will have removed the pattern space...), print the pattern space. The only time this will have nothing to print is if the last line of the file was a blank line.
So the whole logic boils down to: clean the buffer on empty lines, otherwise pile the non-empty lines up and print at the end.
If you don't have GNU sed, just put the commands on separate lines.
sed -n '
/^$/{
g
D
}
N
$p
' foo.txt
Alternate
The method above is efficient, but could potentially build up a very large pattern buffer on certain data sets. If that's not an issue, go with it.
Or, if you want it in simple steps, don't mind more processes doing less work each, and prefer less memory consumed:
last=$( sed -n /^$/= foo.txt|tail -1 ) # find the last blank
next=$(( ${last:-0} + 1 )) # get the number of the line after
cmd="$next,\$p" # compose the range command to print
sed -n "$cmd" foo.txt # run it to print the range you wanted
This runs a lot of small, simple tasks outside of sed so that it can give sed the simplest, most direct and efficient description of the task possible. It will read the target file twice, but won't have to manage filling, flushing, and refilling the accumulation of data in the pattern buffer with records before a blank line. Still likely slower unless you are memory bound, I'd think.
Reverse the file, print everything up to the first blank line, reverse it again.
$ tac foo.txt | awk '/^$/{exit}1' | tac
4
5
Using GNU awk:
awk -v RS='\n\n' 'END{printf "%s",$0}' file
RS is the record separator, here set to \n\n, i.e. the blank line between paragraphs.
The END statement prints the last record.
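A quick check against foo.txt from the question:
$ awk -v RS='\n\n' 'END{printf "%s",$0}' foo.txt
4
5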
try this:
tail -n +$(($(grep -nE '^$' test.txt | tail -n1 | sed -e 's/://g')+1)) test.txt
grep your input file for empty lines.
get the last one with tail => 4:
remove the unnecessary :
add 1 to 4 => 5
tail prints everything starting from line 5 (intermediate outputs sketched below)
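Assuming test.txt holds the same sample as foo.txt above, the intermediate steps look like this:
$ grep -nE '^$' test.txt | tail -n1
4:
$ tail -n +5 test.txt
4
5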
You can try with sed :
sed -n ':A;$bB;/^$/{x;s/.*//;x};H;n;bA;:B;H;x;s/^..//;p' infile
With GNU sed:
sed ':a;/$/{N;s/.*\n\n//;ba;}' file
This keeps appending lines, deleting everything up to the most recent blank line as it goes; it relies on GNU sed printing the pattern space when N runs out of input.

Remove all lines except the last which start with the same string

I'm using awk to process a file to filter lines to specific ones of interest. With the output which is generated, I'd like to be able to remove all lines except the last which start with the same string.
Here's an example of what is generated:
this is a line
duplicate remove me
duplicate this should go too
another unrelated line
duplicate but keep me
example remove this line
example but keep this one
more unrelated text
Lines 2 and 3 should be removed because they start with duplicate, as does line 5. Therefore line 5 should be kept, as it is the last line starting with duplicate.
The same follows for line 6, since it begins with example, as does line 7. Therefore line 7 should be kept, as it is the last line which starts with example.
Given the example above, I'd like to produce the following output:
this is a line
another unrelated line
duplicate but keep me
example but keep this one
more unrelated text
How could I achieve this?
I tried the following, however it doesn't work correctly:
awk -f initialProcessing.awk largeFile | awk '{currentMatch=$1; line=$0; getline; nextMatch=$1; if (currentMatch != nextMatch) {print line}}' -
Why don't you read the file from the end to the beginning and print the first line containing duplicate? This way you don't have to worry about what was printed or not, hold the line, etc.
tac file | awk '/duplicate/ {if (f) next; f=1}1' | tac
This sets a flag f the first time duplicate is seen. From the second time on, the flag makes those lines be skipped.
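For the sample input, a quick check (note this variant only handles the duplicate lines; the example pair is untouched):
$ tac file | awk '/duplicate/ {if (f) next; f=1}1' | tac
this is a line
another unrelated line
duplicate but keep me
example remove this line
example but keep this one
more unrelated text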
If you want to make this generic in a way that every first word is printed just the last time, use an array approach:
tac file | awk '!seen[$1]++' | tac
This keeps track of the first words that have appeared so far. They are stored in the array seen[], so that by saying !seen[$1]++ we make it True just when $1 occurs for the first time; from the second time on, it evaluates as False and the line is not printed.
Test
$ tac a | awk '!seen[$1]++' | tac
this is a line
another unrelated line
duplicate but keep me
example but keep this one
more unrelated text
You could use an (associative) array to always keep the last occurrence:
awk '{last[$1]=$0;} END{for (i in last) print last[i];}' file
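Note that for (i in last) visits the keys in an unspecified order, so the original line order is not preserved. If order matters, a two-pass sketch can record the line number of each key's last occurrence on the first pass and print only those lines on the second:
awk 'NR==FNR{last[$1]=FNR; next} FNR==last[$1]' file file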

Merge two blank lines into one

I am looking for a way of turning file A into file B, which requires merging two blank lines into one.
File-A:
// Comment 1


// Comment 2


// Comment 3


// Comment 4


// Comment 5
File-B:
// Comment 1

// Comment 2

// Comment 3

// Comment 4

// Comment 5
From this post I know how to delete empty lines; now I am wondering how to merge two consecutive blank lines into one.
PS: blank means that it could be empty OR there might be a tab or a space in the line.
sed -r 's/^\s+$//' infile | cat -s > outfile
sed removes any whitespace on a blank line. The -s option to cat squeezes consecutive blank lines into one.
This might work for you (GNU sed):
sed '$!N;s/^\s*\n\s*$//;P;D' file
This will convert 2 blank lines into one.
If you want to replace multiple blank lines with one:
sed ':a;$!N;s/^\s*\n\s*$//;ta;P;D' file
On reflection a far simpler solution is:
sed ':a;N;$!ba;s/\n\s*\n/\n\n/g' file
which slurps the file into the pattern space and squeezes each run of blank lines between text down to a single blank line.
An even easier solution uses the range condition:
sed '/\S/,/^\s*$/!d' file
This deletes every blank line except the one directly following a non-blank line, so each run of blanks collapses to a single blank line (and blank lines at the very top of the file are removed).
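For instance, run against the sample (assuming File-A above is saved as fileA), it reproduces File-B:
sed '/\S/,/^\s*$/!d' fileA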
Here is a simple solution with awk:
awk '!NF && !a++; NF {print;a=0}' file
// Comment 1

// Comment 2

// Comment 3

// Comment 4

// Comment 5
NF counts the number of fields; note that a line composed entirely of spaces and tabs counts as a blank line, too.
a counts consecutive blank lines; once it is past the first one, the blank line is skipped.
This page might come in handy. TL;DR as follows:
# delete all CONSECUTIVE blank lines from file except the first; also
# deletes all blank lines from top and end of file (emulates "cat -s")
sed '/./,/^$/!d' # method 1, allows 0 blanks at top, 1 at EOF
sed '/^$/N;/\n$/D' # method 2, allows 1 blank at top, 0 at EOF
This should work:
sed 'N;s/^\([[:space:]]*\)\n\([[:space:]]*\)$/\1\2/;P;D' file
awk -v RS='([[:blank:]]*\n){2,}' -v ORS="\n\n" 1 file
I had hoped to produce a shorter Perl version, but Perl does not use regular expressions for its record separator.
awk does not edit in-place. You would have to do this:
awk -v RS='([[:blank:]]*\n){2,}' -v ORS="\n\n" 1 file > tmp && mv tmp file
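If you have GNU awk 4.1 or later, the inplace extension avoids the temporary file (a sketch of the same command):
gawk -i inplace -v RS='([[:blank:]]*\n){2,}' -v ORS='\n\n' 1 file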

BASH - Selective deletion

I have a file which looks like this:
Guest-List 1
All present
Guest-list 2
All present
Guest-List 3
Guest-list 4
All present
Guest-list 5
I want to remove the line containing "All present" and its title (the line just above "All present"). The desired output would be:
Guest-List 3
Guest-list 5
I am interested in implementing this using sed. Because I am a rookie, other possible solutions without sed will be appreciated as well (when answering please provide detailed explanation so I can learn) : )
(I know I can delete a line matching a regex, and I could store the line above it by sending it to the hold buffer, something like this: sed '/^.*present$/d; h' ... then the "g" command would copy the hold buffer back to the pattern space... but how do I tell sed to delete that as well?)
Thanks in advance!
You can use fgrep (i.e. grep -F) like this: the inner fgrep collects every All present line together with the line just above it (-B1), and the outer fgrep -v -f then removes exactly those lines from the file:
fgrep -v -f <(fgrep 'All present' -B1 file) file
Guest-List 3
Guest-list 5
sed -n '/All present$/{s/.*//;x;d;};x;p;${x;p;}' file | sed '/^$/d'
Where file is your file.
This is an adapted example from here.
It has a great explanation:
In order to delete the line prior to the pattern, we store every line in a buffer called the hold space. Whenever the pattern matches, we delete the content present in both the pattern space, which contains the current line, and the hold space, which contains the previous line.
Let me explain this command: x;p gets executed for every line.
x exchanges the content of the pattern space with the hold space. p prints the pattern space. As a result, each time, the current line goes to the hold space and the previous line comes into the pattern space and gets printed. When the pattern /All present/ matches, we empty (s/.*//) the pattern space and exchange (x) with the hold space (as a result of which the hold space becomes empty), then delete (d) the pattern space, which contains the previous line. Hence both the current and the previous line get deleted on encountering the pattern. The ${x;p;} is there to print the last line, which would otherwise remain in the hold space.
The second part of sed is to remove the empty lines created by the first sed command.
If you are using more than the s, g, and p (with -n) commands in sed then you are using language constructs that became obsolete in the mid-1970s when awk was invented.
sed is an excellent tool for simple substitutions on a single line, for anything else just use awk:
$ cat file
Guest-List 1
All present
Guest-list 2
All present
Guest-List 3
Guest-list 4
All present
Guest-list 5
$ awk 'NR==FNR{ if (/All present/) {skip[FNR-1]; skip[FNR]} next} !(FNR in skip)' file file
Guest-List 3
Guest-list 5
The above just parses the file twice - first time to create an array named skip of the line numbers (FNR) you do not want output, and the second time to print the lines that are not in that array. Simple, clear, maintainable, extensible, ....

Number of total records

I'm quite new to using AWK. I just discovered the FNR variable, and I wonder if it is possible to get the total number of records before processing the file.
So the FNR at the end of the file.
I just need it to do something like this:
awk 'FNR<TOTALRECORDS-4 {print}'
in order to delete the last 4 lines of the file.
Thanks
If you merely want to print all but the last 4 lines of a file, use a different tool. But if you are doing some other processing with awk and need to incorporate this, just store the lines in a buffer and print them as needed. That is, keep the most recent 4 lines, and print the oldest one in the buffer each time you read a new line. For example:
awk 'NR>4 { print a[i%4]} {a[i++%4]=$0}' input
This keeps 4 lines in the array a. If we are in the first 4 lines of the file, do nothing but store the line in a. If we are on a line greater than 4, the first thing we do is print the line from 4 lines back (stored in a at index i%4). You can put commands that manipulate $0 between these two action statements as needed.
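For example, feeding it the numbers 1 through 8 drops the last four:
$ seq 8 | awk 'NR>4 { print a[i%4]} {a[i++%4]=$0}'
1
2
3
4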
To remove the last 4 lines from a file, you can just use head:
head -n -4 somefile > outputfile
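Note that the negative count to head -n is a GNU extension. If it is unavailable, or you really do want the total record count inside awk as the question asks, a two-pass sketch reads the file twice, counting records on the first pass:
awk 'NR==FNR{n=FNR; next} FNR<=n-4' somefile somefile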
