I'm trying to learn how to use awk's gsub on a particular field, but by passing the column's name rather than its number, on this data:
especievalida,nom059
Rhizophora mangle,Amenazada (A)
Avicennia germinans,Amenazada (A)
Laguncularia racemosa,Amenazada (A)
Cedrela odorata,Sujeta a protección especial (Pr)
Litsea glaucescens,En peligro de extinción (P)
Conocarpus erectus,Amenazada (A)
Magnolia schiedeana,Amenazada (A)
Carpinus caroliniana,Amenazada (A)
Ostrya virginiana,Sujeta a protección especial (Pr)
I tried
awk -F, -v OFS="," '{gsub("\\(.*\\)", "", $2 ) ; print $0}'
which removes everything between parentheses in the second ($2) column, but I'd really like to be able to pass "nom059" to the expression and get the same result.
When reading the first line of your input file (the header line), build an array (f[] below) that maps each field name to its field number. Then you can access the fields by using their names as indices into f[] to get their numbers and, from those, their contents:
$ cat tst.awk
BEGIN {
    FS = OFS = ","
}
NR==1 {
    for (i=1; i<=NF; i++) {
        f[$i] = i
    }
}
{
    gsub(/\(.*\)/,"",$(f["nom059"]))
    print
}
$ awk -f tst.awk file
especievalida,nom059
Rhizophora mangle,Amenazada
Avicennia germinans,Amenazada
Laguncularia racemosa,Amenazada
Cedrela odorata,Sujeta a protección especial
Litsea glaucescens,En peligro de extinción
Conocarpus erectus,Amenazada
Magnolia schiedeana,Amenazada
Carpinus caroliniana,Amenazada
Ostrya virginiana,Sujeta a protección especial
By the way, read https://www.gnu.org/software/gawk/manual/gawk.html#Computed-Regexps for why you should be using gsub(/.../ (a constant or literal regexp) instead of gsub("..." (a dynamic or computed regexp).
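For example, both of these remove the parenthesised part from $2 of the sample above, but the string form has to double every backslash because the string is parsed once as a string and then again as a regexp (a small sketch; "file" stands for your data file):
awk -F, -v OFS=',' '{ gsub(/\(.*\)/, "", $2) } 1' file      # constant/literal regexp
awk -F, -v OFS=',' '{ gsub("\\(.*\\)", "", $2) } 1' file    # dynamic/computed regexp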
Could you please try the following. I have made an awk variable named header_value in which you can specify the field name on which you want to run gsub.
awk -v header_value="nom059" '
BEGIN{
  FS=OFS=","
}
FNR==1{
  for(i=1;i<=NF;i++){
    if($i==header_value){
      field_value=i
    }
  }
  print
  next
}
{
  gsub(/\(.*\)/, "",$field_value)
}
1
' Input_file
Explanation: Adding an explanation of the above code.
awk -v header_value="nom059" ' ##Starting awk program here and creating a variable named header_value whose value is set as nom059.
BEGIN{ ##Starting BEGIN section of this program here.
FS=OFS="," ##Setting FS and OFS value as comma here.
} ##Closing BEGIN section here.
FNR==1{ ##If FNR==1, i.e. this is the 1st line, then do the following.
for(i=1;i<=NF;i++){ ##Starting a for loop from i=1 till NF.
if($i==header_value){ ##Checking if the current field is equal to the variable header_value; if so, do the following.
field_value=i ##Creating variable field_value whose value is the field number i.
}
}
print ##Printing 1st line here.
next ##next will skip all further statements from here.
}
{
gsub(/\(.*\)/, "",$field_value) ##Now using gsub to Globally substituting everything between ( to ) with NULL in all lines.
}
1 ##The bare 1 prints the edited/non-edited line.
' Input_file ##Mentioning Input_file name here.
Related
File a contains the field names:
timestamp,name,ip
File b contains values:
2021-12-17 16:01:19.970,app1,10.0.0.0
2021-12-17 16:01:19.260,app1,10.0.0.1
When I use awk as follows:
awk 'BEGIN{FS=",";OFS="\n"} {if(NR%3==0){print "----"};$1=$1;print;}' b
I get:
----
2021-12-17 16:01:19.970
app1
10.0.0.0
----
2021-12-17 16:01:19.260
app1
10.0.0.1
Any way to merge key:value in each line?
The output I want is:
----
timestamp:2021-12-17 16:01:19.970
app:app1
ip:10.0.0.0
----
timestamp:2021-12-17 16:01:19.260
app:app1
ip:10.0.0.1
With your shown samples, please try the following awk program.
awk '
BEGIN{ FS="," }
FNR==NR{
  for(i=1;i<=NF;i++){
    heading[i]=$i
  }
  next
}
{
  print "----"
  for(i=1;i<=NF;i++){
    print heading[i]":"$i
  }
}
' filea fileb
Explanation: Adding detailed explanation for above.
awk ' ##Starting awk program from here.
BEGIN{ FS="," } ##Stating BEGIN section of this program and set FS to , here.
FNR==NR{ ##Checking condition which will be TRUE when filea is being read.
for(i=1;i<=NF;i++){ ##Traversing through all fields here.
heading[i]=$i ##Setting heading array index as i and value as current field.
}
next ##next will skip all further statements from here.
}
{
print "----" ##printing ---- here.
for(i=1;i<=NF;i++){ ##Traversing through all fields here.
print heading[i]":"$i ##Printing heading with index i and colon and value of current field.
}
}
' filea fileb ##Mentioning Input_file names here.
I want to copy one CSV's header to another CSV row-wise, with some modifications.
Input csv
name,"Mobile Number","mobile1,mobile2",email2,Address,email21
test, 123456789,+123456767676,a#test.com,testaddr,a1#test.com
test1,7867778,8799787899898,b#test,com, test2addr,b2#test.com
In the new csv it should look like this, and the file should also be created. For the string column I will pass the column name, so only that column will be converted to string:
name.auto()
Mobile Number.auto()
mobile1,mobile2.string()
email2.auto()
Address.auto()
email21.auto()
As you can see above, each of these headers, with its type modification, should be inserted on a separate row.
I have tried the command below, but it only copies the first row:
sed '1!d' input.csv > output.csv
You may try this alternative GNU awk command as well:
awk -v FPAT='"[^"]+"|[^,]+' 'NR == 1 {
for (i=1; i<=NF; ++i)
print gensub(/"/, "", "g", $i) "." ($i ~ /,/ ? "string" : "auto") "()"
exit
}' file
name.auto()
Mobile Number.auto()
mobile1,mobile2.string()
email2.auto()
Address.auto()
email21.auto()
Or using sed:
sed -i -e '1i 1234567890.string(),My address is test.auto(),abc3#gmail.com.auto(),120000003.auto(),abc-003.auto(),3.com.auto()' -e '1d' test.csv
EDIT: As per OP's comment, to print only the first line (the header), please try the following.
awk -v FPAT='[^,]*|"[^"]+"' '
FNR==1{
for(i=1;i<=NF;i++){
if($i~/^".*,.*"$/){
gsub(/"/,"",$i)
print $i".string()"
}
else{
print $i".auto()"
}
}
exit
}
' Input_file > output_file
Could you please try the following, written and tested with GNU awk on the shown samples.
awk -v FPAT='[^,]*|"[^"]+"' '
FNR==1{
  for(i=1;i<=NF;i++){
    if($i~/^".*,.*"$/){
      gsub(/"/,"",$i)
      print $i".string()"
    }
    else{
      print $i".auto()"
    }
  }
  next
}
1
' Input_file
Explanation: Adding detailed explanation for above.
awk -v FPAT='[^,]*|"[^"]+"' ' ##Starting awk program and setting FPAT to [^,]*|"[^"]+".
FNR==1{ ##Checking condition if this is first line then do following.
for(i=1;i<=NF;i++){ ##Running a for loop from i=1 till NF.
if($i~/^".*,.*"$/){ ##Checking if the current field starts and ends with " and has a comma in between; if so, do the following.
gsub(/"/,"",$i) ##Substituting all occurrences of " with the empty string in the current field.
print $i".string()" ##Printing current field and .string() here.
}
else{ ##else do following.
print $i".auto()" ##Printing current field dot auto() string here.
}
}
next ##next will skip all further statements from here.
}
1 ##1 will print current line.
' Input_file ##Mentioning Input_file name here.
I'm trying to:
print the first 3 columns
find all fields with "Eury_gr1_" and print them to the 4th column
if there are no "Eury_gr1_" in the whole row print 0 in the 4th column.
The input, named "final_pcs_mod_test.csv", looks like this:
PC_00001,143,143.0,Eury_gr2_(111),Eury_gr5_(19),Unk_unclust_(1),Eury_gr1_(6),MAV_eury_(6)
PC_00004,137,137.0,Eury_gr6_(20),Eury_gr11_(24),Eury_gr14_(24),Eury_gr8_(8),Eury_gr12_(13)
PC_00027,109,109.0,Eury_gr1_(80),MAV_eury_(8)
The desired output, named "eury1", will look like this:
PC_00001,143,143.0,Eury_gr1_(6)
PC_00004,137,137.0,0
PC_00027,109,109.0,Eury_gr1_(80)
The command I'm using is:
awk 'BEGIN {FS=","};{for(i=4;i<=NF;i++){if($i~/^Eury_gr1_/){a=$i} else {a="0"}} print $1,$2,$3,a}' final_pcs_mod_test.csv > eury1
The actual output is:
PC_00001,143,143.0,0
PC_00004,137,137.0,0
PC_00027,109,109.0,Eury_gr1_(80)
As you can see, the first output row is missing its "Eury_gr1_" entry. It looks like the code is only looking at the first specified column and not searching all columns as I want. I've been messing around with for(i=4;i<=4;i++) etc., but so far I cannot seem to get it to find entries in the last columns of the input. The whole input file has a maximum of 17 columns. What am I doing wrong?
Could you please try the following, written and tested with GNU awk on the shown samples. The output will be the same as the shown samples.
awk '
BEGIN{
  FS=OFS=","
}
{
  for(i=4;i<=NF;i++){
    if($i~/Eury_gr1_\([0-9]+\)/){
      found=(found?found OFS:"")$i
    }
  }
  if(found==""){ $4="0" }
  else { $4=found }
  print $1,$2,$3,$4
  found=""
}' Input_file
OR
awk '
BEGIN{
  FS=OFS=","
}
{
  for(i=1;i<=NF;i++){
    if(i<=3){
      val1=(val1?val1 OFS:"")$i
    }
    else if(i>3){
      if($i~/Eury_gr1_\([0-9]+\)/){
        found=(found?found OFS:"")$i
      }
    }
  }
  if(found==""){ $4="0" }
  else { $4=found }
  print val1,$4
  found=val1=""
}' Input_file
Explanation: Adding detailed explanation for above.
awk ' ##Starting awk program from here.
BEGIN{ ##Starting the BEGIN section of this program here.
FS=OFS="," ##Setting field separator and output field separator to comma here.
}
{
for(i=1;i<=NF;i++){ ##Traversing through all the fields of current line here.
if(i<=3){ ##Checking if the field number is less than or equal to 3; if so, do the following.
val1=(val1?val1 OFS:"")$i ##Creating val1 and keep adding values there.
}
else if(i>3){ ##else if field number is greater than 3 then do following.
if($i~/Eury_gr1_\([0-9]+\)/){ ##Checking if current field is Eury_gr1_(digits) then do following.
found=(found?found OFS:"")$i ##Creating variable found and keep adding values there.
}
}
}
if(found==""){ $4="0" } ##Checking condition if found is NULL then make 4th field as zero.
else { $4=found } ##else set found value to 4th field here.
print val1,$4 ##Printing val1 and 4th field here.
found=val1="" ##Nullifying val1 and found here.
}' Input_file ##Mentioning Input_file name here.
OP's attempt, fixed: as per OP's comments, here is a fix of OP's attempt. But this will keep only 1 occurrence of Eury_gr1 per line; to collect all occurrences please refer to my solution above.
awk '
BEGIN{
  FS=OFS=","
}
{
  a1="0"
  for(i=4;i<=NF;i++){
    if($i~/^Eury_gr1_\([0-9]+\)$/){ a1=$i }
  }
  print $1,$2,$3,a1
}' Input_file
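With that fix in place, a run against the shown sample (the same script condensed to one line, as a quick check) gives the expected result:
$ awk 'BEGIN{FS=OFS=","} {a1="0"; for(i=4;i<=NF;i++){if($i~/^Eury_gr1_\([0-9]+\)$/){a1=$i}} print $1,$2,$3,a1}' final_pcs_mod_test.csv
PC_00001,143,143.0,Eury_gr1_(6)
PC_00004,137,137.0,0
PC_00027,109,109.0,Eury_gr1_(80)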
I have a use case where I need to replace the values of certain fields with a masking string. The field to change has to be picked up from a config file at runtime, and each character in that field should be replaced with 'X'.
Input:
Hello~|*World Good~|*Bye
Output:
Hello~|*XXXXX Good~|*XXX
To do this I am using the command below:
awk -F "~\|\*" -v OFS="~|*" '{gsub(/[a-zA-Z0-9]/,"X",$ordinal_position)}1' $temp_directory/$file_basename
Here I would like to use an ordinal_position variable through which I will pass the field number.
I have already tried the command below, but it is not working.
awk -F '~\|\*' -v var="$"25 -v OFS='~|*' '{gsub(/[a-zA-Z0-9]/,"X",var)}1' $temp_directory/$file_basename
Pass the field number as an integer and, in the awk program, precede the variable name with a $ (or enclose it in $(...) for better readability) to reference that field. Like:
awk -v var=25 '{ gsub(/regex/, "replacement", $var) } 1' file
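For example, applied to the shown sample and masking the 3rd field (a sketch only; the separator is written as the bracket expression ~[|][*] here to sidestep escaping, and the field number is passed in var):
$ echo 'Hello~|*World Good~|*Bye' | awk -F '~[|][*]' -v OFS='~|*' -v var=3 '{ gsub(/[a-zA-Z0-9]/, "X", $var) } 1'
Hello~|*World Good~|*XXX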
Could you please try the following. In the awk variable named fields you can list all the fields you want to change, and the rest is taken care of by the solution (OP has shown the 2nd and 3rd fields in the samples, so 2,3 is used here; OP can change the values as needed). Written and tested with GNU awk on the shown samples.
awk -v fields="2,3" '
BEGIN{
FS=OFS="|"
num=split(fields,fieldIn,",")
for(i=1;i<=num;i++){
arrayfieldsIn[fieldIn[i]]
}
}
function fieldChange(field_number){
delete array
num=split($field_number,array," ")
gsub(/[a-zA-Z0-9]/,"X",array[1])
for(i=2;i<=num;i++){
val=val array[i]
}
$field_number=array[1] " " val
val=""
}
{
for(j=1;j<=NF;j++){
if(j in arrayfieldsIn){
fieldChange(j)
}
}
}
1
' Input_file
Explanation: Adding detailed explanation for above.
awk -v fields="2,3" ' ##Starting awk program from here and setting value of variable fields with value of 2,3.
BEGIN{ ##Starting BEGIN section of this program here.
FS=OFS="|" ##Setting FS and OFS values as | here.
num=split(fields,fieldIn,",") ##Splitting the fields variable into array fieldIn, using comma as the delimiter.
for(i=1;i<=num;i++){ ##Starting a for loop from 1 till num here.
arrayfieldsIn[fieldIn[i]] ##Creating array arrayfieldsIn with index fieldIn[i] here.
}
}
function fieldChange(field_number){ ##Creating function here for changing field values.
delete array ##Deleting array here.
num=split($field_number,array," ") ##Splitting the value of field $field_number into array, with space as the delimiter.
gsub(/[a-zA-Z0-9]/,"X",array[1]) ##Globally substituting alphabets and digits with X in array[1] here.
for(i=2;i<=num;i++){ ##Running for loop from 2 to till num here.
val=val array[i] ##Appending the remaining array elements to val here.
}
$field_number=array[1] " " val ##Setting the field to array[1], a space, and val here.
val="" ##Nullify val here.
}
{
for(j=1;j<=NF;j++){ ##Running loop till value of NF here.
if(j in arrayfieldsIn){ ##Checking if j is present in array then do following.
fieldChange(j) ##Calling fieldChange with variable j here.
}
}
}
1 ##1 will print line here.
' Input_file ##Mentioning Input_file name here.
I have a file called bin.001.fasta looking like this:
>contig_655
GGCGGTTATTTAGTATCTGCCACTCAGCCTCGCTATTATGCGAAATTTGAGGGCAGGAGGAAACCATGAC
AGTAGTCAAGTGCGACAAGC
>contig_866
CCCAGACCTTTCAGTTGTTGGGTGGGGTGGGTGCTGACCGCTGGTGAGGGCTCGACGGCGCCCATCCTGG
CTAGTTGAAC
...
What I want to do is get a new file where the 1st column is the retrieved contig IDs and the 2nd column is the filename without .fasta:
contig_655 bin.001
contig_866 bin.001
Any ideas how to do it?
Could you please try the following.
awk -F'>' '
FNR==1{
  split(FILENAME,array,".")
  file=array[1]"."array[2]
}
/^>/{
  print $2,file
}
' Input_file
OR, more generically, if your Input_file name has more than 2 dots then run the following.
awk -F'>' '
FNR==1{
  match(FILENAME,/.*\./)
  file=substr(FILENAME,RSTART,RLENGTH-1)
}
/^>/{
  print $2,file
}
' Input_file
Explanation: Adding detailed explanation for above code.
awk -F'>' ' ##Starting awk program from here and setting field separator as > here for all lines.
FNR==1{ ##Checking condition if this is first line then do following.
split(FILENAME,array,".") ##Splitting filename which is passed to this awk program into an array named array with delimiter .
file=array[1]"."array[2] ##Creating variable file whose value is 1st and 2nd element of array with DOT in between as per OP shown sample.
}
/^>/{ ##Checking condition if a line starts with > then do following.
print $2,file ##Printing 2nd field and variable file value here.
}
' Input_file ##Mentioning Input_file name here.
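A quick run against the shown sample, for the two records shown (note that the real file name has to be passed, since FILENAME is taken from it; Input_file above is just a placeholder):
$ awk -F'>' 'FNR==1{match(FILENAME,/.*\./); file=substr(FILENAME,RSTART,RLENGTH-1)} /^>/{print $2,file}' bin.001.fasta
contig_655 bin.001
contig_866 bin.001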