Delete 4 consecutive lines after a match in a file - bash

I am in the process of deleting around 33k zones on a DNS server. I used this awk string to find the matching rows in my zones.conf file:
awk -v RS= -v ORS='\n\n' '/domain.com/' zones.conf
This gives me the output below, which is what I want.
zone "domain.com" {
type master;
file "/etc/bind/db/domain.com";
};
The problem I am facing now is how to delete those 4 lines.
Is it possible to use sed or awk to perform this action?
EDIT:
I have decided that I want to run it in a while loop. list.txt contains the domains which I want to remove from the zones.conf file.
Each row is read into the variable '${line}' and used in the awk command (which was provided by "l'L'l").
The string was originally:
awk -v OFS='\n\n' '/domain.com/{n=4}; n {n--; next}; 1' < zones.conf > new.conf
I tried to modify it so it would accept a variable, but without success:
#!/bin/bash
while read line
do
awk -v OFS='\n\n' '/"'${line}'"/{n=4}; n {n--; next}; 1' zones.conf > new.conf
done<list.txt
Thanks in advance

This is quite easy with sed (note that the addr,+N address form is a GNU sed extension):
sed -i '/zone "domain.com"/,+4d' zones.conf
With a variable:
sed -i '/zone "'"$domain"'"/,+4d' zones.conf
Full working example:
#!/bin/bash
while read domain
do
sed -i '/zone "'"$domain"'"/,+4d' zones.conf
done<list.txt
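With 33k zones, invoking sed once per domain re-reads the whole file each time. A single-pass awk alternative is possible; this is a sketch that assumes each zone block is 4 lines followed by a blank separator line, as in the question's output (the sample data here is recreated from the question):

```shell
# Recreate sample data: two zone blocks separated by a blank line.
printf 'zone "domain.com" {\n\ttype master;\n\tfile "/etc/bind/db/domain.com";\n};\n\nzone "other.com" {\n\ttype master;\n\tfile "/etc/bind/db/other.com";\n};\n' > zones.conf
printf 'domain.com\n' > list.txt

# Load every domain to delete, then skip each matching zone block
# (the zone line, its 3 body lines, and the blank separator = 5 lines).
awk 'NR==FNR { del["\"" $0 "\""]; next }    # first file: domains to delete
     $1 == "zone" && ($2 in del) { n = 5 }  # quoted name matched: start skip
     n { n--; next }                        # swallow the block + blank line
     1' list.txt zones.conf > new.conf

cat new.conf    # only the other.com block remains
```

Matching the quoted name in $2 also avoids accidental substring hits (e.g. domain.com matching mydomain.com).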

You should be able to modify your existing awk command to remove a specified number of lines once the match is found, for example:
awk -v OFS='\n\n' '/domain.com/{n=4}; n {n--; next}; 1' < zones.conf > new.conf
This removes 4 lines starting at the first domain.com match (the zone line plus the 3 lines after it), leaving the remaining blocks and their newlines intact.
Output:
zone "other.com" {
type master;
file "/etc/bind/db/other.com";
};
zone "foobar.com" {
type master;
file "/etc/bind/db/foobar.com";
};
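For the EDIT: the cleanest fix is to pass the shell variable with awk -v rather than splicing it into the quoted script, and to feed each iteration's output back in (the loop in the question re-reads the original zones.conf every time, so only the last domain's deletion survives in new.conf). A sketch, using sample data shaped like the output above:

```shell
# Sample data shaped like the answer's output:
printf 'zone "other.com" {\ntype master;\nfile "/etc/bind/db/other.com";\n};\nzone "foobar.com" {\ntype master;\nfile "/etc/bind/db/foobar.com";\n};\n' > zones.conf
printf 'foobar.com\n' > list.txt

cp zones.conf new.conf
while read -r line; do
  # -v passes the domain in safely; index() looks for the quoted name
  awk -v dom="$line" 'index($0, "\"" dom "\"") { n = 4 }
                      n { n--; next }   # drop the matched line + next 3
                      1' new.conf > tmp.conf && mv tmp.conf new.conf
done < list.txt

cat new.conf    # only the other.com block remains
```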

My sed solution would be
sed '/zone "domain.com"/{:l1;/};\n$/!{N;bl1};d}' file > newfile
#But the above would be on the slower end if you're dealing with 33k zones
For in-place editing, use the -i option with sed as below:
sed -i.bak '/zone "domain.com"/{:l1;/};\n$/!{N;bl1};d}' file
#Above will create a backup of the original file with a '.bak' extension
To use variables:
#!/bin/bash
while read domain #capitalized variables are usually reserved for the system
do
sed '/zone "'"${domain}"'"/{:l1;/};\n$/!{N;bl1};d}' file > newfile
# for inplace edit use below
# sed -i.bak '/zone "'"${domain}"'"/{:l1;/};\n$/!{N;bl1};d}' file
done<list.txt

Related

grep text after keyword with unknown spaces and remove comments

I am having trouble saving variables from a file using grep/sed/awk.
The text in file.txt is of the form:
NUM_ITER = 1000 # Number of iterations
NUM_STEP = 1000
And I would like to save these to bash variables without the comments.
So far, I have attempted this:
grep -oP "^NUM_ITER[ ]*=\K.*#" file.txt
which yields
1000 #
Any suggestions?
I would use awk, like this:
awk -F'[=[:blank:]#]+' '$1 == "NUM_ITER" {print $2}' file
To store it in a variable:
NUM_ITER=$(awk -F'[=[:blank:]#]+' '$1 == "NUM_ITER" {print $2}' file)
As long as a line can only contain a single match, this is easy with sed.
sed -n '# Remove comments
s/[ ]*#.*//
# If keyword found, remove keyword and print value
s/^NUM_ITER[ ]*=[ ]*//p' file.txt
This can be trimmed down to a one-liner if you remove the comments.
sed -n 's/[ ]*#.*//;s/^NUM_ITER[ ]*=[ ]*//p' file.txt
The -n option turns off automatic printing, and the p flag after the final substitution prints the line only if that substitution actually succeeded.
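To round this off, a quick sketch capturing both values into shell variables with the awk approach above (file.txt recreated from the question):

```shell
# file.txt recreated from the question:
printf 'NUM_ITER = 1000 # Number of iterations\nNUM_STEP = 1000\n' > file.txt

# Split on runs of '=', blanks, or '#'; the value is then field 2.
NUM_ITER=$(awk -F'[=[:blank:]#]+' '$1 == "NUM_ITER" {print $2}' file.txt)
NUM_STEP=$(awk -F'[=[:blank:]#]+' '$1 == "NUM_STEP" {print $2}' file.txt)
echo "$NUM_ITER $NUM_STEP"    # prints: 1000 1000
```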

Add the first line of a file to the beginning of each line with shell

I have a lot of files with the first line of them as an identifier. The subsequent lines are products of the identifier. Here is an example of the file:
0G000001:
Product_2221
Product_2222
Product_2122
...
I want to put the identifier at the beginning of every line of the file. The final output would be like this:
0G000001: Product_2221
0G000001: Product_2222
0G000001: Product_2122
....
I want to make a loop for all the files that I have. I've been trying with:
for i in $(echo `head -n1 file.$i.txt`);
do
cat - file.$i.txt > file_id.$i.txt;
done
But I only duplicate the first line of the file. I know that sed can add specific text at the beginning of a file, but I can't figure out how to specify that the text is the first line of that same file, in a loop context.
No explicit loop necessary:
awk '
FNR==1 { close(out); out=FILENAME; sub(/\./,"_id&",out); hdr=$0; next }
{ print hdr, $0 > out }
' file.*.txt
With awk:
awk 'NR==1 { prod = $0 } NR>1 { print prod, $0 }' infile
Output:
0G000001: Product_2221
0G000001: Product_2222
0G000001: Product_2122
A sed command to do what you want could look like this:
$ sed '1{h;d};G;s/\(.*\)\n\(.*\)/\2 \1/' infile
0G000001: Product_2221
0G000001: Product_2222
0G000001: Product_2122
This does the following:
1 { # On the first line
h # Copy the pattern space to the hold space
d # Delete the line, move to next line
}
G # Append the hold space to the pattern space
s/\(.*\)\n\(.*\)/\2 \1/ # Swap the lines in the pattern space
Some seds might complain about {h;d} and require an extra semicolon, {h;d;}.
To do this in-place for a file, you can use
sed -i '1{h;d};G;s/\(.*\)\n\(.*\)/\2 \1/' infile
for GNU sed, or
sed -i '' '1{h;d};G;s/\(.*\)\n\(.*\)/\2 \1/' infile
for macOS sed. Or, if your sed doesn't support -i at all:
sed '1{h;d};G;s/\(.*\)\n\(.*\)/\2 \1/' infile > tmpfile && mv tmpfile infile
To do it in a loop over all files in a directory:
for f in /path/to/dir/*; do
sed -i '1{h;d};G;s/\(.*\)\n\(.*\)/\2 \1/' "$f"
done
or even directly with a glob:
sed -i '1{h;d};G;s/\(.*\)\n\(.*\)/\2 \1/' /path/to/dir/*
The latter works for sure with GNU sed; not sure about other seds.
sed + head solution:
for f in *.txt; do sed -i '1d; s/^/'"$(head -n1 "$f")"' /' "$f"; done
-i - to modify file in-place
1d; - delete the 1st line
$(head -n1 $f) - extract the 1st line from file (getting identifier)
s/^/<identifier> / - prepend identifier to each line in file (note: this breaks if the identifier contains sed metacharacters such as / or &)
This might work for you (GNU sed):
sed -ri '1h;1d;G;s/(.*)\n(.*)/\2 \1/' file ...
Save the first line in the hold space (HS) and then delete it from the pattern space (PS). For every line (other than the first), append the HS to the PS and then swap the lines and replace the newline with a space.
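For completeness, a runnable sketch of the per-file loop the question was aiming at, using the simple NR==1 awk variant (file.1.txt is a made-up sample name in the question's file.$i.txt scheme):

```shell
# Made-up sample in the question's naming scheme:
printf '0G000001:\nProduct_2221\nProduct_2222\n' > file.1.txt

# First line becomes the prefix; every later line gets it prepended.
for f in file.*.txt; do
  awk 'NR==1 { hdr = $0; next } { print hdr, $0 }' "$f" > "file_id.${f#file.}"
done

cat file_id.1.txt
# 0G000001: Product_2221
# 0G000001: Product_2222
```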

How to delete matching lines using sed in while loop

Below is the content of claim_note file
B|2050013344207770
B|2050013344157085
I have Input file which has values
B|2050013344207770|xxx|xxx
B|2050013344157085|xxx|xxx
B|2050013344157999|xxx|xxx
I am using the below code to delete matching lines in the Input file, but my code deletes only the first matching pattern
cat claim_note | while read FILE
do
echo $FILE
sed -n "/$FILE/!p" Input > TempInput
mv TempInput Input
done
Rather than looping and running sed on every line, you can use awk:
awk -F'|' 'FNR==NR{a[$1,$2]; next} !(($1,$2) in a)' claim_note Input
B|2050013344157999|xxx|xxx
You can use this grep:
grep -vf claim.txt input.txt
Output:
B|2050013344157999|xxx|xxx
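A note on that grep: the patterns in claim_note contain |, which plain grep treats as a literal character under basic regular expressions, so the command above works; adding -F makes the fixed-string intent explicit and is typically faster for long pattern lists. A runnable sketch with the question's data:

```shell
# Files from the question:
printf 'B|2050013344207770\nB|2050013344157085\n' > claim_note
printf 'B|2050013344207770|xxx|xxx\nB|2050013344157085|xxx|xxx\nB|2050013344157999|xxx|xxx\n' > Input

# -F: patterns are fixed strings; -v: keep non-matching lines; -f: pattern file
grep -vFf claim_note Input
# B|2050013344157999|xxx|xxx
```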

String handling and looping using awk and sed

I am trying to AWK a file to parse two column values as a pair and then use them in a loop to check the status of the particular application with respect to the server.
Syntax of the file:
CELL **NAME_OF_CELL** MC **SERVERNAME.COM**/PORT_NUMBER
FILE:
#Cells
cell app_dynamics_21 mc dynamics21.xxxx.com/5021
cell windows_app mc windows_app.app.com/5041
I am interested in NAME_OF_CELL and SERVERNAME.COM, so my command looks like:
sed '/^\s*$/d' $FILE |grep -v -e"#" -e"server" -e"gw_ps" | awk '{print $2" "$4}' | grep -i -e"windows" -e"app" -e"smartphone" > $DIRECTORY/CLEAN_FILE
CLEAN_FILE looks as mentioned below,
server unixhost2.test.com:3115
app_dynamics_21 dynamics21.xxxx.com/5021
windows_app windows_app.app.com/5041
As per my grep -v, I shouldn't be seeing server in my clean_file. Next, I would like to read each line and hold NAME_OF_CELL in one variable and SERVERNAME.COM in another to verify the status of the application and server.
Need help with SED and AWK to extract these NAME_OF_CELL and SERVERNAME.COM from the file.
awk can do what sed and grep can do, so to put it all into one script (note that IGNORECASE is a GNU awk feature):
awk -v IGNORECASE=1 '
/#/ || /server/ || /gw_ps/ {next}
/windows/ || /app/ || /smartphone/ {print $2, $4}
' "$FILE" > clean_file
But you probably want /^[[:blank:]]*#/ to remove comments, not blindly remove any line with a hash anywhere.
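To then hold NAME_OF_CELL in one variable and SERVERNAME.COM in another, a plain while read loop over the cleaned file is enough. A sketch with the two sample rows (the echo stands in for whatever status check you actually run):

```shell
# clean_file with the two rows the asker actually wants:
printf 'app_dynamics_21 dynamics21.xxxx.com/5021\nwindows_app windows_app.app.com/5041\n' > clean_file

# read splits on whitespace: first word -> cell, remainder -> server
while read -r cell server; do
  echo "cell=$cell server=${server%/*}"   # ${server%/*} strips the /port
done < clean_file
# cell=app_dynamics_21 server=dynamics21.xxxx.com
# cell=windows_app server=windows_app.app.com
```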

replacing strings in a configuration file with shell scripting

I have a configuration file with fields separated by semicolons ;. Something like:
user@raspberrypi /home/pi $ cat file
string11;string12;string13;
string21;string22;string23;
string31;string32;string33;
I can get the strings I need with awk:
user@raspberrypi /home/pi $ cat file | grep 21 | awk -F ";" '{print $2}'
string22
And I'd like to change string22 to hello_world via a script.
Any idea how to do it? I think it should be with sed but I have no idea how.
I prefer perl over sed. Here's a one-liner that modifies the file in place.
perl -i -F';' -lane '
BEGIN { $" = q|;| }
if ( m/21/ ) { $F[1] = q|hello_world| };
print qq|@F|
' infile
Use -i.bak instead of -i to create a backup file with .bak as suffix.
It yields:
string11;string12;string13
string21;hello_world;string23
string31;string32;string33
First drop the useless use of cat and grep so:
$ cat file | grep 21 | awk -F';' '{print $2}'
Becomes:
$ awk -F';' '/21/{print $2}' file
To change this value you would do:
$ awk '/21/{$2="hello_world"}1' FS=';' OFS=';' file
To store the changes back to the file:
$ awk '/21/{$2="hello_world"}1' FS=';' OFS=';' file > tmp && mv tmp file
However if all you want to do is replace string22 with hello_world I would suggest using sed instead:
$ sed 's/string22;/hello_world;/g' file
With sed you can use the -i option to store the changes back to the file:
$ sed -i 's/string22;/hello_world;/g' file
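A quick runnable check of the awk field replacement shown above, with the sample file from the question (the trailing ; survives because the empty trailing field is rejoined with OFS):

```shell
# Sample file from the question:
printf 'string11;string12;string13;\nstring21;string22;string23;\nstring31;string32;string33;\n' > file

# Assigning $2 makes awk rebuild $0 with OFS, trailing ';' included.
awk '/21/{$2="hello_world"}1' FS=';' OFS=';' file
# string11;string12;string13;
# string21;hello_world;string23;
# string31;string32;string33;
```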
Even though we can do this in awk easily as Sudo suggested, I prefer perl since it does in-place replacement.
perl -pe 's/(^[^\;]*;)[^\;]*(;.*)/$1hello_world$2/g if(/21/)' your_file
For in-place editing, just add an i:
perl -pi -e 's/(^[^\;]*;)[^\;]*(;.*)/$1hello_world$2/g if(/21/)' your_file
Tested below:
> perl -pe 's/(^[^\;]*;)[^\;]*(;.*)/$1"hello_world"$2/g if(/21/)' temp
string11;string12;string13;
string21;"hello_world";string23;
string31;string32;string33;
> perl -pe 's/(^[^\;]*;)[^\;]*(;.*)/$1hello_world$2/g if(/21/)' temp
string11;string12;string13;
string21;hello_world;string23;
string31;string32;string33;
>
