replace and store a value in the same file, shell - bash

How can I replace the value of a column and store the output into the same file?
example input, file:
#car|year|model
toyota|1998|corrola
toyota|2006|yaris
opel|2001|corsa
replace "corrola" with "corrolacoupe"
and store it to the input file
#car|year|model
toyota|1998|corrolacoupe
toyota|2006|yaris
opel|2001|corsa
I have tried this
awk -F '|' -v col=$column -v val=$value '/^[^#]/ FNR==NR {print $col = val }' OFS='|' $FILE >> $FILE

To simply replace the value in (row,col) with a new value:
$ awk -F'|' -v OFS='|' -v row=2 -v col=3 -v val=corollacoupe 'NR==row {$col=val} 1' file
#car|year|model
toyota|1998|corollacoupe
toyota|2006|yaris
opel|2001|corsa
This will set the value of input field col to val, but only in input record row. The 1 at the end ensures each record is printed by default. The input and output field separators are set via the -F option and the OFS variable.
If you need to make these changes in-place, create a temporary output file and then copy it over the original:
$ awk ... file >file.tmp && cp file{.tmp,}
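As a self-contained sketch (using a scratch file named cars rather than your real data; note that mv replaces the file wholesale, while the cp form above preserves the original file's inode and permissions):

```shell
# Build the sample input in a scratch file.
printf '%s\n' '#car|year|model' 'toyota|1998|corrola' \
    'toyota|2006|yaris' 'opel|2001|corsa' > cars

# Rewrite row 2, column 3; replace the original only if awk succeeded.
tmp=$(mktemp)
awk -F'|' -v OFS='|' -v row=2 -v col=3 -v val=corollacoupe \
    'NR==row {$col=val} 1' cars > "$tmp" && mv "$tmp" cars
```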
Alternatively, in GNU awk, you can use the inplace library via -i inplace option:
$ awk -i inplace -F'|' -v OFS='|' -v row=2 -v col=3 -v val=corollacoupe 'NR==row {$col=val} 1' file
If you wish to skip the comments, and count only non-comment rows:
$ awk -F'|' -v OFS='|' -v row=1 -v col=3 -v val=x '/^[^#]/ {nr++} nr==row {$col=val} 1' file
#car|year|model
toyota|1998|x
toyota|2006|yaris
opel|2001|corsa

An ed solution that modifies the file in-place without any temporary files could be something like:
ed "$FILE" <<< $',s/|corrola$/|corrolacoupe/g\nw'
which uses an ANSI-C quoted string to prevent special characters from being treated specially, then matches |corrola at the end of any line and replaces it with |corrolacoupe. The w command then tells ed to write the file back out.
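A quick end-to-end check of the same edit on a scratch file; this sketch pipes the commands in rather than using a here-string, and adds -s to silence ed's byte counts:

```shell
# Build the sample input in a scratch file.
printf '%s\n' '#car|year|model' 'toyota|1998|corrola' \
    'toyota|2006|yaris' 'opel|2001|corsa' > cars.txt

# ,s/…/…/ substitutes across every line; w writes the buffer back in place.
printf ',s/|corrola$/|corrolacoupe/\nw\nq\n' | ed -s cars.txt
```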

A really simple solution.
darby@Debian:~/Scrivania$ cat file
#car|year|model
toyota|1998|corrola
toyota|2006|yaris
opel|2001|corsa
darby@Debian:~/Scrivania$ sed -ri 's#^(.+)\|(.+)\|corrola$#\1|\2|corrolacoupe#' file
darby@Debian:~/Scrivania$ cat file
#car|year|model
toyota|1998|corrolacoupe
toyota|2006|yaris
opel|2001|corsa
darby@Debian:~/Scrivania$

Related

Inserting the filename before the first line of a text file - File extension

~$ awk -i inplace -v ORS='\r\n' 'FNR==1{print FILENAME}1' *
This works great.
But I want to exclude the file extension.
.... sub(/\.[^.]+$/, "", FILENAME) ...
I don't know how to mix it. Or is there another way?
How should I fix it?
1st solution (editing the FILENAME variable itself, without a new variable): based on your shown samples, you could try the following. I would recommend running this program on a single file first, since it does in-place updates; once you are happy with that file's result, run the command on all the files.
awk -i inplace -v ORS='\r\n' 'FNR==1{sub(/\.[^.]+$/,"",FILENAME);print FILENAME}1' Input_file
OR
awk -i inplace -v ORS='\r\n' '
FNR==1{
match(FILENAME,/.*\./)
print substr(FILENAME,RSTART,RLENGTH-1)
}
1' Input_file
2nd solution (using a separate variable, keeping FILENAME intact in case the file's name is needed later): do NOT change the default FILENAME variable; instead create a temporary variable holding the filename, so the name is still available later in the program:
awk -i inplace -v ORS='\r\n' '
FNR==1{
file=FILENAME
sub(/\.[^.]+$/,"",file)
print file
}
1' Input_file
OR we could use match function also as follows:
awk -i inplace -v ORS='\r\n' '
FNR==1{
file=FILENAME
match(file,/.*\./)
print substr(file,RSTART,RLENGTH-1)
}
1' Input_file
NOTE: Change Input_file to * (or to a glob matching your files' extensions) to pass the files to your awk program.
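A dry run of the same substitution without -i inplace, so nothing is overwritten (sample.txt is a scratch file created just for the demo):

```shell
# Create a throwaway input file.
printf 'hello\n' > sample.txt

# On the first record of each file, print the filename minus its extension;
# the trailing 1 then prints every record unchanged.
awk 'FNR==1{f=FILENAME; sub(/\.[^.]+$/,"",f); print f} 1' sample.txt
```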
With GNU awk, you can use gensub:
awk -i inplace -v ORS='\r\n' 'FNR==1{print gensub(/\.[^.]+$/, "", 1, FILENAME)}1' *
The gensub(/\.[^.]+$/, "", 1, FILENAME) means:
/\.[^.]+$/ - matches a . followed by one or more non-dot chars, up to the end of the string
"" - replaces with an empty string
1 - searches only once
FILENAME - this is the input text.
See gawk string functions reference.
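Because gensub() returns the modified string rather than editing anything in place, it is easy to try in isolation (GNU awk, invoked here as gawk):

```shell
# Strip the extension from a literal name; prints "report".
gawk 'BEGIN { print gensub(/\.[^.]+$/, "", 1, "report.txt") }'
```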

Text processing - Split the first column into two columns based on a character

Split the first column in the file into two columns based on a character.
The data inside the brackets () should be moved to the new column removing the brackets.
Given csv file:
Col1(col2),col3,col4,col5
a(23),12,test(1),test2
b(30),15,test1(2),test3
Expected File:
Col1,col2,col3,col4,col5
a,23,12,test(1),test2
b,30,15,test1(2),test3
I tried the code below, but I am not able to extract the data between the brackets, and it matches every occurrence of "()".
awk -F"(" '$1=$1' OFS="," filename
Take your pick:
$ sed 's/(\([^)]*\))/,\1/' file
Col1,col2,col3,col4,col5
a,23,12,test(1),test2
b,30,15,test1(2),test3
$ sed 's/(/,/; s/)//' file
Col1,col2,col3,col4,col5
a,23,12,test(1),test2
b,30,15,test1(2),test3
$ awk '{sub(/\(/,","); sub(/\)/,"")} 1' file
Col1,col2,col3,col4,col5
a,23,12,test(1),test2
b,30,15,test1(2),test3
$ awk 'match($0,/\([^)]*\)/){$0= substr($0,1,RSTART-1) "," substr($0,RSTART+1,RLENGTH-2) substr($0,RSTART+RLENGTH) } 1' file
Col1,col2,col3,col4,col5
a,23,12,test(1),test2
b,30,15,test1(2),test3
$ awk 'BEGIN{FS=OFS=","} split($1,a,/[()]/) > 1{$1=a[1] "," a[2]} 1' file
Col1,col2,col3,col4,col5
a,23,12,test(1),test2
b,30,15,test1(2),test3
$ gawk '{$0=gensub(/\(([^)]*)\)/,",\\1",1)} 1' file
Col1,col2,col3,col4,col5
a,23,12,test(1),test2
b,30,15,test1(2),test3
$ gawk 'match($0,/([^(]*)\(([^)]*)\)(.*)/,a){$0=a[1] "," a[2] a[3]} 1' file
Col1,col2,col3,col4,col5
a,23,12,test(1),test2
b,30,15,test1(2),test3
Those last 2 require GNU awk for gensub() and the 3rd arg to match() respectively. There are alternatives too.
For completeness...
posix shell
while IFS= read -r line; do
car=${line%%)*}
caar=${car%%(*}
cdar=${car##*(}
cdr=${line#*)}
printf '%s\n' "$caar,$cdar$cdr"
done < file
I don't think you can solve this using cut alone.
Here is one more sed solution that may help:
sed 's/\([^(]*\)(\([^)]*\))\(.*\)/\1,\2\3/' Input_file
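All of these can be verified against a scratch copy of the sample; here is the capture-group sed run end to end (a sketch, with data.csv as the scratch file):

```shell
# Build the sample CSV.
printf '%s\n' 'Col1(col2),col3,col4,col5' 'a(23),12,test(1),test2' \
    'b(30),15,test1(2),test3' > data.csv

# \1 = text before the first '(', \2 = text inside the first (...);
# only the first parenthesised group on each line is rewritten.
sed 's/\([^(]*\)(\([^)]*\))\(.*\)/\1,\2\3/' data.csv
```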

Delete 4 consecutive lines after a match in a file

I am in the process of deleting around 33k of zones on a DNS server. I used this awk string to find the matching rows in my zones.conf file:
awk -v RS= -v ORS='\n\n' '/domain.com/' zones.conf
This gives me the output below, which is what I want.
zone "domain.com" {
type master;
file "/etc/bind/db/domain.com";
};
The problem I am facing now is deleting those 4 lines.
Is it possible to use sed or awk to perform this action?
EDIT:
I have decided that I want to run it in a while loop. list.txt contains the domains I want to remove from the zones.conf file.
Each row is read into the variable '${line}' and passed into the awk command (which was provided by "l'L'l").
The command was originally:
awk -v OFS='\n\n' '/domain.com/{n=4}; n {n--; next}; 1' < zones.conf > new.conf
I tried to modify it so it would accept a variable, but without result:
#!/bin/bash
while read line
do
awk -v OFS='\n\n' '/"'${line}'"/{n=4}; n {n--; next}; 1' zones.conf > new.conf
done<list.txt
Thanks in advance
This is quite easy with sed:
sed -i '/zone "domain.com"/,+4d' zones.conf
With a variable:
sed -i '/zone "'$domain'"/,+4d' zones.conf
Full working example:
#!/bin/bash
while read domain
do
sed -i '/zone "'$domain'"/,+4d' zones.conf
done<list.txt
You should be able to modify your existing awk command to remove a specified number of lines once the match is found, for example:
awk -v OFS='\n\n' '/domain.com/{n=4}; n {n--; next}; 1' < zones.conf > new.conf
This would remove 4 lines after the initial domain.com is found, giving you the correct newlines.
Output:
zone "other.com" {
type master;
file "/etc/bind/db/other.com";
};
zone "foobar.com" {
type master;
file "/etc/bind/db/foobar.com";
};
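An alternative to splicing the shell variable into the awk script (a sketch, not from the answers above): pass the domain with -v, and match it with index() so dots in domain names are treated literally rather than as regex metacharacters. zones.conf and list.txt are scratch files here:

```shell
# Build a two-zone config and a deletion list.
printf '%s\n' 'zone "domain.com" {' '    type master;' \
    '    file "/etc/bind/db/domain.com";' '};' \
    'zone "other.com" {' '    type master;' \
    '    file "/etc/bind/db/other.com";' '};' > zones.conf
printf 'domain.com\n' > list.txt

while read -r domain; do
    # n=4 arms a countdown that swallows the matching line plus the next 3.
    awk -v dom="$domain" \
        'index($0, "zone \"" dom "\""){n=4} n{n--; next} 1' \
        zones.conf > new.conf && mv new.conf zones.conf
done < list.txt
```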
My sed solution would be
sed '/zone "domain.com"/{:l1;/};\n$/!{N;bl1};d}' file > newfile
#But the above would be on the slower end if you're dealing with 33k zones
For inplace editing use the -i option with sed like below :
sed -i.bak '/zone "domain.com"/{:l1;/};\n$/!{N;bl1};d}' file
#Above will create a backup of the original file with a '.bak' extension
For using variables
#!/bin/bash
while read domain #capitalized variables are usually reserved for the system
do
sed '/zone "'"${domain}"'"/{:l1;/};\n$/!{N;bl1};d}' file > newfile
# for inplace edit use below
# sed -i.bak '/zone "'"${domain}"'"/{:l1;/};\n$/!{N;bl1};d}' file
done<list.txt

Changing column value in CSV file

In a CSV file I have the columns below, and I try to change the second column's value with
awk -F ',' -v OFS=',' '$1 { $2=$2*2; print}' path/file.csv > output.csv.
But it returns zero and removes the double quotes.
file.csv
"sku","0.47","supplierName"
"sku","3.14","supplierName"
"sku","3.56","supplierName"
"sku","4.20","supplierName"
output.csv
"sku",0,"supplierName"
"sku",0,"supplierName"
"sku",0,"supplierName"
"sku",0,"supplierName"
You may specify more than one character in the FS value.
$ awk -v FS="\",\"" -v OFS="\",\"" '{$2=$2*2}1' file
"sku","0.94","supplierName"
"sku","6.28","supplierName"
"sku","7.12","supplierName"
"sku","8.4","supplierName"
Try this if you want to round to two decimal places.
$ awk -v FS="\",\"" -v OFS="\",\"" '{$2=sprintf("%.2f",$2*2)}1' file
"sku","0.94","supplierName"
"sku","6.28","supplierName"
"sku","7.12","supplierName"
"sku","8.40","supplierName"

replacing strings in a configuration file with shell scripting

I have a configuration file with fields separated by semicolons ;. Something like:
user@raspberrypi /home/pi $ cat file
string11;string12;string13;
string21;string22;string23;
string31;string32;string33;
I can get the strings I need with awk:
user@raspberrypi /home/pi $ cat file | grep 21 | awk -F ";" '{print $2}'
string22
And I'd like to change string22 to hello_world via a script.
Any idea how to do it? I think it should be with sed but I have no idea how.
I prefer perl over sed. Here is a one-liner that modifies the file in-place.
perl -i -F';' -lane '
BEGIN { $" = q|;| }
if ( m/21/ ) { $F[1] = q|hello_world| };
print qq|@F|
' infile
Use -i.bak instead of -i to create a backup file with .bak as suffix.
It yields:
string11;string12;string13
string21;hello_world;string23
string31;string32;string33
First drop the useless use of cat and grep so:
$ cat file | grep 21 | awk -F';' '{print $2}'
Becomes:
$ awk -F';' '/21/{print $2}' file
To change this value you would do:
$ awk '/21/{$2="hello_world"}1' FS=';' OFS=';' file
To store the changes back to the file:
$ awk '/21/{$2="hello_world"}1' FS=';' OFS=';' file > tmp && mv tmp file
However if all you want to do is replace string22 with hello_world I would suggest using sed instead:
$ sed 's/string22;/hello_world;/g' file
With sed you can use the -i option to store the changes back to the file:
$ sed -i 's/string22;/hello_world;/g' file
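One caveat worth noting (my addition, not part of the answers above): the plain sed substitution rewrites string22 wherever it occurs on a line, while the awk version only replaces field 2 of lines matching /21/. A sketch of the field-anchored edit on scratch data:

```shell
# Build the sample config.
printf '%s\n' 'string11;string12;string13;' \
    'string21;string22;string23;' 'string31;string32;string33;' > conf

# Replace field 2 only on lines matching /21/; the trailing 1 prints all lines.
awk 'BEGIN{FS=OFS=";"} /21/{$2="hello_world"} 1' conf > conf.new && mv conf.new conf
```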
Even though we can do this in awk easily, as Sudo suggested, I prefer perl since it does in-place replacement.
perl -pe 's/(^[^\;]*;)[^\;]*(;.*)/$1hello_world$2/g if(/21/)' your_file
For in-place editing, just add -i:
perl -pi -e 's/(^[^\;]*;)[^\;]*(;.*)/$1hello_world$2/g if(/21/)' your_file
Tested below:
> perl -pe 's/(^[^\;]*;)[^\;]*(;.*)/$1"hello_world"$2/g if(/21/)' temp
string11;string12;string13;
string21;"hello_world";string23;
string31;string32;string33;
> perl -pe 's/(^[^\;]*;)[^\;]*(;.*)/$1hello_world$2/g if(/21/)' temp
string11;string12;string13;
string21;hello_world;string23;
string31;string32;string33;
>
