BASH - Selective deletion

I have a file which looks like this:
Guest-List 1
All present
Guest-list 2
All present
Guest-List 3
Guest-list 4
All present
Guest-list 5
I want to remove the line containing "All present" and its title (the line just above "All present"). The desired output would be:
Guest-List 3
Guest-list 5
I am interested in implementing this using sed. Because I am a rookie, other possible solutions without sed will be appreciated as well (when answering please provide detailed explanation so I can learn) : )
(I know I can delete a line matching a regex, and could store the line above it by sending it to the hold buffer, something like this: sed '/^.*present$/d; h' ... then the "g" command would copy the hold buffer back to the pattern space... but how do I tell sed to delete that as well?)
Thanks in advance!

You can use fgrep (a deprecated alias for grep -F) like this:
fgrep -v -f <(fgrep 'All present' -B1 file) file
Guest-List 3
Guest-list 5

sed -n '/All present$/{s/.*//;x;d;};x;p;${x;p;}' file | sed '/^$/d'
Where file is your file.
This is an adapted example from here.
It has a great explanation:
In order to delete the line prior to the pattern, we store every line in a buffer called the hold space. Whenever the pattern matches, we delete the content of both the pattern space, which contains the current line, and the hold space, which contains the previous line.
Let me explain this command: x;p; gets executed for every line.
x exchanges the content of the pattern space with the hold space. p prints the pattern space. As a result, every time, the current line goes to the hold space and the previous line comes to the pattern space and gets printed. When the pattern /All present$/ matches, we empty (s/.*//) the pattern space and exchange (x) with the hold space (as a result of which the hold space becomes empty), then delete (d) the pattern space, which contains the previous line. And hence, the current and the previous line get deleted on encountering the pattern. The ${x;p;} is there to print the last line, which would otherwise remain in the hold space.
The second part of sed is to remove the empty lines created by the first sed command.
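Putting the two stages together on the sample file (guests.txt is an illustrative name for the file from the question):

```shell
# Sample input from the question (illustrative file name)
cat > guests.txt <<'EOF'
Guest-List 1
All present
Guest-list 2
All present
Guest-List 3
Guest-list 4
All present
Guest-list 5
EOF

# First sed: blank out matched pairs via the hold space;
# second sed: drop the blank lines the first stage left behind.
out=$(sed -n '/All present$/{s/.*//;x;d;};x;p;${x;p;}' guests.txt | sed '/^$/d')
printf '%s\n' "$out"
```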

If you are using more than the s, g, and p (with -n) commands in sed then you are using language constructs that became obsolete in the mid-1970s when awk was invented.
sed is an excellent tool for simple substitutions on a single line, for anything else just use awk:
$ cat file
Guest-List 1
All present
Guest-list 2
All present
Guest-List 3
Guest-list 4
All present
Guest-list 5
$ awk 'NR==FNR{ if (/All present/) {skip[FNR-1]; skip[FNR]} next} !(FNR in skip)' file file
Guest-List 3
Guest-list 5
The above just parses the file twice - first time to create an array named skip of the line numbers (FNR) you do not want output, and the second time to print the lines that are not in that array. Simple, clear, maintainable, extensible, ....
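The same idea also works in a single pass, at the cost of buffering one line. This is a sketch that assumes genuinely empty lines never need to be kept (an empty buffered line is indistinguishable from "no previous line"):

```shell
# Recreate the sample guest list (illustrative file name)
printf '%s\n' 'Guest-List 1' 'All present' 'Guest-list 2' 'All present' \
  'Guest-List 3' 'Guest-list 4' 'All present' 'Guest-list 5' > guests.txt

# Buffer the previous line; flush it only when the current line is
# not "All present", and drop both when it is.
out=$(awk '/All present/ {prev=""; next}
           prev != ""    {print prev}
                         {prev=$0}
           END           {if (prev != "") print prev}' guests.txt)
printf '%s\n' "$out"
```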


How to get all lines from a file after the last empty line?

Having a file like foo.txt with content
1
2
3

4
5
How do I get the lines starting with 4 and 5 out of it (everything after the last empty line), assuming the number of lines can be different?
Updated
Let's try a slightly simpler approach with just sed.
$: sed -n '/^$/{g;D;}; N; $p;' foo.txt
4
5
-n says don't print unless I tell you to.
/^$/{g;D;}; says on each blank line, clear it all out with this:
g : Replace the contents of the pattern space with the contents of the hold space. Since we never put anything in, this erases the (possibly long accumulated) pattern space. Note that I could have used z since this is GNU, but I wanted to break it out for non-GNU sed's below, and in this case this works for both.
D : remove the now empty line from the pattern space, and go read the next.
Now previously accumulated lines have been wiped if (and only if) we saw a blank line. The D loops back to the beginning, so N will never see a blank line.
N : Add a newline to the pattern space, then append the next line of input to the pattern space. This is done on every line except blanks, after which the pattern space will be empty.
This accumulates all nonblanks until either 1) a blank is hit, which will clear and restart the buffer as above, or 2) we reach EOF with a buffer intact.
Finally, $p says on the LAST line (which will already have been added to the pattern space unless the last line was blank, which will have removed the pattern space...), print the pattern space. The only time this will have nothing to print is if the last line of the file was a blank line.
So the whole logic boils down to: clean the buffer on empty lines, otherwise pile the non-empty lines up and print at the end.
If you don't have GNU sed, just put the commands on separate lines.
sed -n '
/^$/{
g
D
}
N
$p
' foo.txt
Alternate
The method above is efficient, but could potentially build up a very large pattern buffer on certain data sets. If that's not an issue, go with it.
Or, if you want it in simple steps, don't mind more processes doing less work each, and prefer less memory consumed:
last=$( sed -n /^$/= foo.txt|tail -1 ) # find the last blank
next=$(( ${last:-0} + 1 )) # get the number of the line after
cmd="$next,\$p" # compose the range command to print
sed -n "$cmd" foo.txt # run it to print the range you wanted
This runs a lot of small, simple tasks outside of sed so that it can give sed the simplest, most direct and efficient description of the task possible. It will read the target file twice, but won't have to manage filling, flushing, and refilling the accumulation of data in the pattern buffer with records before a blank line. Still likely slower unless you are memory bound, I'd think.
Reverse the file, print everything up to the first blank line, reverse it again.
$ tac foo.txt | awk '/^$/{exit}1' | tac
4
5
Using GNU awk:
awk -v RS='\n\n' 'END{printf "%s",$0}' file
RS is the record separator set to empty line.
The END statement prints the last record.
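A quick check of that approach (an awk that treats a multi-character RS as a regex is assumed, such as gawk or mawk; note that a run of more than two consecutive newlines would leave a leading blank line in the last record):

```shell
# foo.txt as in the question: a blank line between 3 and 4
printf '1\n2\n3\n\n4\n5\n' > foo.txt

# RS='\n\n' splits records on blank lines; END sees the last record.
out=$(awk -v RS='\n\n' 'END{printf "%s", $0}' foo.txt)
printf '%s\n' "$out"
```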
try this:
tail +$(($(grep -nE '^$' foo.txt | tail -n1 | sed -e 's/://g')+1)) foo.txt
grep your input file for empty lines, printing their line numbers.
get the last one with tail => 4:
remove the unnecessary :
add 1 to 4 => 5
tail starting from line 5
You can try with sed :
sed -n ':A;$bB;/^$/{x;s/.*//;x};H;n;bA;:B;H;x;s/^..//;p' infile
With GNU sed:
sed ':a;/$/{N;s/.*\n\n//;ba;}' file
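If GNU extensions aren't available, a small awk buffer does the same job in one pass. A sketch, using foo.txt as in the question:

```shell
printf '1\n2\n3\n\n4\n5\n' > foo.txt

# Reset the buffer on every blank line; whatever is left at EOF
# is exactly the part after the last empty line.
out=$(awk '/^$/ {n=0; next}
           {buf[++n]=$0}
           END {for (i=1; i<=n; i++) print buf[i]}' foo.txt)
printf '%s\n' "$out"
```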

Select full block of text delimited by some chars

I have a very large text file (40GB gzipped) where blocks of data are separated by //.
How can I select blocks of data where a certain line matches some criterion? That is, can I grep a pattern and extend the selection in both directions to the // delimiter? I can make no assumptions on the size of the block and the position of the line.
not interesting 1
not interesting 2
//
get the whole block 1
MATCH THIS LINE
get the whole block 2
get the whole block 3
//
not interesting 1
not interesting 2
//
I want to select the block of data with MATCH THIS LINE:
get the whole block 1
MATCH THIS LINE
get the whole block 2
get the whole block 3
I tried sed but can't get my head around the pattern definition. This for example should match from // to MATCH THIS LINE:
sed -n -e '/\/\//,/MATCH THIS LINE/ p' file.txt
But it fails matching the //.
Is it possible to achieve this with GNU command line tools?
With GNU awk (due to multi-char RS), you can set the record separator to //, so that every record is a //-delimited set of characters:
$ awk -v RS="//" '/MATCH THIS LINE/' file
get the whole block 1
MATCH THIS LINE
get the whole block 2
get the whole block 3
Note this leaves an empty line above and below because it catches the newline just after // and prints it back, as well as the last one before the // at the end. To remove them you can pipe to awk 'NF'.
To print the separator between blocks of data you can say (thanks 123):
awk -v RS="//" '/MATCH THIS LINE/{print RT $0 RT}' file
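For awks without multi-character RS support, a portable single-pass sketch is to accumulate each //-delimited block and print it only if it matched (file.txt and the sample data are taken from the question):

```shell
# Recreate the sample input (illustrative file name)
printf '%s\n' 'not interesting 1' 'not interesting 2' '//' \
  'get the whole block 1' 'MATCH THIS LINE' 'get the whole block 2' \
  'get the whole block 3' '//' 'not interesting 1' 'not interesting 2' '//' > file.txt

# On each // delimiter, flush the buffered block if it contained the
# pattern; the END rule handles a final block not closed by //.
out=$(awk '
  $0 == "//" { if (found) printf "%s", block; block=""; found=0; next }
             { block = block $0 "\n"; if (/MATCH THIS LINE/) found=1 }
  END        { if (found) printf "%s", block }
' file.txt)
printf '%s\n' "$out"
```

Because the block is buffered in memory, this stays cheap as long as individual blocks are small, even for a very large file overall.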

Sed replace the first value

I want to replace the first value (the one in the first column of the first line, here 1) by adding one to it. I have a file like this:
1
1 1
2 5
1 6
I use these commands:
read -r a < file
echo $a
sed "s/$a/$(($a + 1))/" file
# or
sed 's/$a/$(($a + 1))/' file
But when I do that, it changes every 1 at the start of a line into 2. I have tried changing the quotes but it makes no difference.
restrict the script to first line only, i.e.
sed '1s/old/new/'
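Combined with the read from the question, that looks like this (nums.txt is an illustrative name; note the double quotes so the shell expands $a, and the 1 address so only the first line is touched):

```shell
# Sample input from the question
printf '1\n1 1\n2 5\n1 6\n' > nums.txt

read -r a < nums.txt                     # a=1, the first value
out=$(sed "1s/$a/$((a + 1))/" nums.txt)  # substitute only on line 1
printf '%s\n' "$out"
```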
awk might be a better tool for this.
awk 'NR==1{$1=$1+1}1'
for the first line add 1 to the first field and print. Can be rewritten as
awk 'NR==1{$1+=1}1'
or
awk 'NR==1{$1++}1'
perl -p0e 's/(\d+)/$1+1/e' file

Reverse lines in a file two by two

I'm trying to reverse the lines in a file, but I want to do it two lines by two lines.
For the following input:
1
2
3
4
…
97
98
I would like the following output:
97
98
…
3
4
1
2
I found lots of ways to reverse a file line by line (especially on this topic: How can I reverse the order of lines in a file?).
tac. The simplest. Doesn't seem to have an option for what I want, even if I tried to play around with options -r and -s.
tail -r. Not POSIX compliant, and my version doesn't seem to support it anyway.
That leaves three sed formulas, and I think a little modification would do the trick. But I don't even understand what they're doing, so I'm stuck here.
sed '1!G;h;$!d'
sed -n '1!G;h;$p'
sed 'x;1!H;$!d;x'
Any help would be appreciated. I'll try to understand these formulas and to answer this question by myself.
Okay, I'll bite. In pure sed, we'll have to build the complete output in the hold buffer before printing it (because we see the stuff we want to print first last). A basic template can look like this:
sed 'N;G;h;$!d' filename # Incomplete!
That is:
N # fetch another line, append it to the one we already have in the pattern
# space
G # append the hold buffer to the pattern space.
h # save the result of that to the hold buffer
$!d # and unless the end of the input was reached, start over with the next
# line.
The hold buffer always contains the reversed version of the input processed so far, and the code takes two lines and glues them to the top of that. In the end, it is printed.
This has two problems:
If the number of input lines is odd, it prints only the last line of the file, and
we get a superfluous empty line at the end of the input.
The first is because N bails out if no more lines exist in the input, which happens with an odd number of input lines; we can solve the problem by executing it conditionally, only when the end of the input has not yet been reached. Just like the $!d above, this is done with $!N, where $ is the end-of-input condition and ! inverts it.
The second is because at the very beginning, the hold buffer contains an empty line that G appends to the pattern space when the code is run for the very first time. Since with $!N we don't know if at that point the line counter is 1 or 2, we should inhibit it conditionally on both. This can be done with 1,2!G, where 1,2 is a range spanning from line 1 to line 2, so that 1,2!G will run G if the line counter is not between 1 and 2.
The whole script then becomes
sed '$!N;1,2!G;h;$!d' filename
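A quick check of that script on an even-length sample (pairs.txt is an illustrative name):

```shell
# Eight lines, to be reversed in pairs
printf '%s\n' 1 2 3 4 5 6 7 8 > pairs.txt

# $!N pairs lines, 1,2!G prepends the pair to what we've collected,
# h saves the result, $!d defers printing until the end.
out=$(sed '$!N;1,2!G;h;$!d' pairs.txt)
printf '%s\n' "$out"
```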
Another approach is to combine sed with tac, such as
tac filename | sed -r 'N; s/(.*)\n(.*)/\2\n\1/' # requires GNU sed
That is not the shortest possible way to use sed here (you could also use tac filename | sed -n 'h;$!{n;G;};p'), but perhaps easier to understand: Every time a new line is processed, N fetches another line, and the s command swaps them. Because tac feeds us the lines in reverse, this restores pairs of lines to their original order.
The key difference to the first approach is the behavior for an odd number of lines: with the second approach, the first line of the file will be alone without a partner, whereas with the first it'll be the last.
I would go with this:
tac file | while read -r a && read -r b; do echo "$b"; echo "$a"; done
Here is an awk you can use:
cat file
1
2
3
4
5
6
7
8
awk '{a[NR]=$0} END {for (i=NR;i>=1;i-=2) print a[i-1]"\n"a[i]}' file
7
8
5
6
3
4
1
2
It stores all lines in an array a, then prints them out in reverse, two by two.

Using Bash to Manually Edit a Text or Fastq file

I would like to edit a Fastq file using Bash, applying the same change to multiple similar lines.
In Fastq files a sequence read starts on line 2 and then is found every fourth line (ie lines 2,6,10,14...).
I would like to create an edited text file that is identical to a Fastq file except the first 6 characters of the sequencing reads are trimmed off.
Unedited Fastq:
#M03017:21:000000000
GAGAGATCTCTCTCTCTCTCT
+
111>>B1FDFFF
Edited Fastq:
#M03017:21:000000000
TCTCTCTCTCTCTCT
+
111>>B1FDFFF
GNU sed can do that:
sed -i~ '2~4s/^.\{6\}//' file
The address 2~4 means "start on line 2, repeat each 4 lines".
s means replace, ^ matches the line beginning, . matches any character, \{6\} specifies the length (a "quantifier"). The replacement string is empty (//).
-i~ replaces the file in place, leaving a backup with the ~ appended to the filename.
I guess awk is perfect for this:
$ awk 'NR%4==2 {gsub(/^.{6}/,"")} 1' file
#M03017:21:000000000
TCTCTCTCTCTCTCT
+
111>>B1FDFFF
This removes the first 6 characters of all lines at positions of the form 4k+2.
Explanation
NR%4==2 {} do things if the record number (line number) is of the form 4k+2.
gsub(/^.{6}/,"") replaces the first 6 chars with the empty string.
1 evaluates to true, so the line is printed.
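For completeness, the same trim can be done in pure bash, using the ${line:6} substring expansion to drop the first 6 characters of every 4k+2 line (a sketch; reads.fq is an illustrative name, and this will be much slower than sed or awk on large files):

```shell
# Sample Fastq from the question (illustrative file name)
printf '%s\n' '#M03017:21:000000000' 'GAGAGATCTCTCTCTCTCTCT' '+' '111>>B1FDFFF' > reads.fq

out=$(
  n=0
  while IFS= read -r line; do
    n=$((n + 1))
    if [ $((n % 4)) -eq 2 ]; then
      printf '%s\n' "${line:6}"   # sequence line: drop first 6 chars
    else
      printf '%s\n' "$line"       # header, +, and quality lines pass through
    fi
  done < reads.fq
)
printf '%s\n' "$out"
```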
