How can I make awk process the BEGIN block for each file it parses? - bash

I have an awk script that I'm running against a pair of files. I'm calling it like this:
awk -f script.awk file1 file2
script.awk looks something like this:
BEGIN { FS = ":" }
{
    if (NR == 1) {
        var = $2
        FS = " "
    }
    else
        print var, "|", $0
}
The first line of each file is colon-delimited. For every other line, I want it to return to the default whitespace field separator.
This works fine for the first file, but fails for the second, because FS is not reset to ":" for each file: the BEGIN block is only processed once.
tl;dr: is there a way to make awk process the BEGIN block once for each file I pass it?
I'm running this on Cygwin bash, in case that matters.

If you're using gawk version 4 or later there's the BEGINFILE block. From the manual:
BEGINFILE and ENDFILE are additional special patterns whose bodies are executed before reading the first record of each command line input file and after reading the last record of each file. Inside the BEGINFILE rule, the value of ERRNO will be the empty string if the file could be opened successfully. Otherwise, there is some problem with the file and the code should use nextfile to skip it. If that is not done, gawk produces its usual fatal error for files that cannot be opened.
For example:
touch a b c
awk 'BEGINFILE { print "Processing: " FILENAME }' a b c
Output:
Processing: a
Processing: b
Processing: c
Edit - a more portable way
As noted by DennisWilliamson you can achieve a similar effect with FNR == 1 at the beginning of your script. In addition to this you could change FS from the command-line directly, e.g.:
awk -f script.awk FS=':' file1 FS=' ' file2
Here the FS variable will retain whatever value it had previously.

Instead of:
BEGIN {FS=":"}
use:
FNR == 1 {FS=":"}

The FNR variable should do the trick for you. It's the same as NR except it is scoped within the file, so it resets to 1 for every input file.
http://unstableme.blogspot.ca/2009/01/difference-between-awk-nr-and-fnr.html
http://www.unix.com/shell-programming-scripting/46931-awk-different-between-nr-fnr.html
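A small illustration of the difference between NR and FNR (file names and contents invented):

```sh
printf 'a\nb\n' > f1
printf 'c\n' > f2
awk '{ print FILENAME, NR, FNR }' f1 f2
# f1 1 1
# f1 2 2
# f2 3 1   <- NR keeps counting across files, FNR resets per file
```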

When you want a POSIX-compliant version, the best is to do:
(FNR == 1) { FS=":"; $0=$0 }
This states that, if the file record number (FNR) equals one, we reset the field separator FS. The reassignment $0=$0 then re-splits the record with the new separator, updating the values of all the fields and the NF built-in variable.
This is equivalent to the GNU awk 4.x BEGINFILE if and only if the record separator (RS) stays unchanged.
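Putting this together with the original script, a sketch of the portable version (sample file contents invented for illustration):

```sh
cat > script.awk <<'EOF'
FNR == 1 {
    FS = ":"; $0 = $0    # re-split the first line of each file on ":"
    var = $2
    FS = " "             # back to default whitespace splitting for the rest
    next
}
{ print var, "|", $0 }
EOF
printf 'head:first:x\na b\n' > file1
printf 'head:second:y\nc d\n' > file2
awk -f script.awk file1 file2
# first | a b
# second | c d
```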

Related

Different results when running awk from command line versus writing awk script using #!/bin/awk -f

I am writing a simple awk script to read a file holding a single number (a single line with a single field), subtract a constant, and then write the result to another file. This is a warmup exercise for a more complex problem. So, if the input file has X, then the output file has X-C.
When I write the following in the command line, it works:
awk '{$1 = $1 - 10; print $0}' test.dat > out.dat
The output looks like this (for X = 30 and C = 10):
20
However, I wrote the following awk script:
#!/bin/awk
C=10
{$1 = $1 - C; print $0}
Next, when I run the awk script using:
./script.awk test.dat > out.dat
I get an output file with two lines as follows :
X
X-C
for example, if X=30 and C=10 I get an output file having
30
20
Why is the result different in both cases? I tried removing "-f" in the shebang but I receive an error when I do this.
This is your awk program:
C=10
{$1 = $1 - C; print $0}
Recall that awk programs take the form of a list of pattern-action pairs.
A missing action results in the default action being performed (printing the input record); a missing pattern is considered to always return true.
Your program is equivalent to:
C=10 { print $0 }
1 { $1 = $1 -C ; print $0 }
The first pattern, C=10, assigns 10 to the variable C and, because an assignment returns the value assigned, evaluates to 10. 10 is not false, so the pattern matches, and the default action happens.
The second line has a default pattern that returns true. So the action always happens.
These two pattern-action pairs are invoked for every input record. So, with one record of input, two lines are printed: the unmodified record, then the modified one.
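A sketch of the fix: move the assignment into a BEGIN block (and keep -f in the shebang) so it runs once instead of acting as a pattern:

```sh
cat > script.awk <<'EOF'
#!/bin/awk -f
BEGIN { C = 10 }           # runs once, before any input is read
{ $1 = $1 - C; print $0 }  # missing pattern: applies to every record
EOF
echo 30 | awk -f script.awk
# 20
```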

Take string from multiple files and copy to new file and print filename into second column in bash

I have multiple files containing this information:
sP12345.txt
COMMENT Method: conceptual translation.
FEATURES Location/Qualifiers
source 1..3024
/organism="H"
/isolate="sP12345"
/isolation_source="blood"
/host="Homo sapiens"
/db_xref="taxon:11103"
/collection_date="31-Mar-2014"
/note="genotype: 3"
sP4567.txt
COMMENT Method: conceptual translation.
FEATURES Location/Qualifiers
source 1..3024
/organism="H"
/isolate="sP4567"
/isolation_source="blood"
/host="Homo sapiens"
/db_xref="taxon:11103"
/collection_date="31-Mar-2014"
/note="genotype: 2"
Now I would like to take the /note="genotype: 3" line, copy only the number that comes after genotype: to a new text file, and print the filename it was taken from as the second column.
Expected Output:
3 sP12345
2 sP4567
I tried this code, but it only prints the first column and not the filename:
awk -F'note="genotype: ' -v OFS='\t' 'FNR==1{++c} NF>1{print $2, c}' *.txt > output_file.txt
You may use:
awk '/\/note="genotype: /{gsub(/^.* |"$/, ""); f=FILENAME; sub(/\.[^.]+$/, "", f); print $0 "\t" f}' sP*.txt
3 sP12345
2 sP4567
$ awk -v OFS='\t' 'sub(/\/note="genotype:/,""){print $0+0, FILENAME}' sP12345.txt sP4567.txt
3 sP12345.txt
2 sP4567.txt
You can do:
awk '/\/note="genotype:/{split($0,a,": "); print a[2]+0,"\t",FILENAME}' sP*.txt
3 sP12345.txt
2 sP4567.txt
With the samples shown, please try the following GNU awk code.
awk -v RS='/note="genotype: [0-9]*"' '
RT{
  gsub(/.*: |"$/,"",RT)
  print RT,FILENAME
  nextfile
}
' *.txt
Explanation: all .txt files are passed to the GNU awk program. RS (the record separator) is set to /note="genotype: [0-9]*" to match the shown samples and requirement. In the main program, gsub (global substitution) removes everything up to the colon and space, as well as the trailing ", from the value of RT. The value of RT is then printed, followed by the current file's name. nextfile sends the program straight to the next file, skipping the rest of the current one to save some time.

How do pipes inside awk work (Sort with keeping header)

The following command outputs the header of a file and sorts the records after the header. But how does it work? Can anyone explain this command?
awk 'NR == 1; NR > 1 {print $0 | "sort -k3"}'
Please go through the following, provided for explanation purposes only. For learning more awk concepts, I suggest going through Stack Overflow's awk learning material.
awk '                     ##Start the awk program here.
NR == 1;                  ##If this is the first line, print it.
                          ##awk works on a condition-then-action model; since no action is given here, the default action (printing the current line) happens.
NR > 1{                   ##For every line after the first, do the following.
  print $0 | "sort -k3"   ##Pipe each line to sort, which buffers the lines and sorts them by their 3rd field before printing.
}'
Understanding the awk command:
Overall, an awk program is built from (pattern){action} pairs, which state that if pattern returns a non-zero value, action is executed. One does not necessarily need to write both: if pattern is omitted, it defaults to 1, and if action is omitted, it defaults to print $0.
When looking at the command in question:
awk 'NR == 1; NR > 1 {print $0 | "sort -k3"}'
We notice that there are two pattern-action pairs. The first reads NR == 1 and states that if we are processing the first record (pattern), the record is printed (default action). The second is a bit more tricky: the pattern is clear, but the action needs some explaining.
awk supports four output statements that can redirect the output. One of these reads expression | cmd. It essentially means that awk will write output to a stream that is piped as input to the command cmd. It will keep writing to that stream until the stream is explicitly closed using a close(cmd) statement or until awk terminates.
In the OP's case, the action reads { print $0 | "sort -k3" }, meaning that it prints all records $0 to a stream that is used as input to the shell command sort -k3. Only when that stream is closed (here, when the program finishes) will sort write its output.
Recap: the command of the OP will print the first line of a file, and sort the subsequent lines according to the third column.
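To make the buffering visible, here is a sketch where the piped command writes to a file, so the sorted result only appears once the pipe is closed (file names and data invented). Note that awk identifies the pipe by the exact command string, so close() must be given the identical string:

```sh
printf 'header\nc 1 3\na 2 1\nb 3 2\n' > data.txt
awk 'NR > 1 { print $0 | "sort -k3 > sorted.txt" }
     END    { close("sort -k3 > sorted.txt") }' data.txt
cat sorted.txt
# a 2 1
# b 3 2
# c 1 3
```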
Alternative solutions:
Using GNU awk, it is better to do:
awk 'FNR==1 { print; next }
     { a[$3] = $0 }
     END { PROCINFO["sorted_in"]="@ind_str_asc"
           for(i in a) print a[i]
     }' file
Using pure shell, it is better to do:
cat file | (read -r; printf "%s\n" "$REPLY"; sort -k3)
Related questions:
Is there a way to ignore header lines in a UNIX sort?
| is one of the redirections supported by print and printf - in this case a pipe to the command sort -k3. You might also use redirection to write to a file using >:
awk 'NR == 1; NR > 1 {print $0 > "output.txt"}'
or append to file using >>:
awk 'NR == 1; NR > 1 {print $0 >> "output.txt"}'
The first will write all lines but the first to the file output.txt (overwriting it); the second will append them to output.txt instead.

setting the NR to 1 does not work (awk)

I have the following script in bash.
awk -F ":" '{if($1 ~ "^fall")
{ NR==1
{{printf "\t<course id=\"%s\">\n",$1} } } }' file1.txt > container.xml
So, what I have is a small file. If ANY line starts with fall, then I want the first field of the VERY first line.
So I did that in the code and set NR==1. However, it does not do the job!
Try this:
awk -F: 'NR==1 {id=$1} $1~/^fall/ {printf "\t<course id=\"%s\">\n",id}' file1.txt > container.xml
Notes:
NR==1 {id=$1}
This saves the course ID from the first line
$1~/^fall/ {printf "\t<course id=\"%s\">\n",id}
If any line begins with fall, then the course ID is printed.
The above code illustrates that awk commands can be preceded by conditions. Thus, id=$1 is executed only if we are on the first line: NR==1. In this way, it is often unnecessary to have explicit if statements.
In awk, assignment with done with = while tests for equality are done with ==.
If this doesn't do what you want, then please add sample input and corresponding desired output to the question.
awk -F: 'NR==1{x=$1}/^fall/{printf "\t<course id=\"%s\">\n",x;exit}' file
Note:
if the file has any line beginning with fall, this prints the 1st field of the very first line in the given format (an xml tag).
no matter how many lines start with fall, the xml tag is output only once (because of exit).
if no line in the file starts with fall, it outputs nothing.
#!/usr/bin/awk -f
BEGIN {
FS = ":"
}
NR==1 {
foo = $1
}
/^fall/ {
printf "\t<course id=\"%s\">\n", foo
}
Also note
BUGS
The -F option is not necessary given the command line variable assignment
feature; it remains only for backwards compatibility.
awk man page

How to set FS to eof?

I want to read the whole file, not line by line. How do I change the field separator to an eof symbol?
I do:
awk "^[0-9]+∆DD$1[PS].*$" $(ls -tr)
$1 - param (some integer), .* - the message that I want to print. The problem is that the message can contain \n, so this code prints only the first line of the file. How can I scan the whole file rather than line by line?
Can I do this using awk, sed, grep? Script must have length <= 60 characters (include spaces).
Assuming you mean record separator, not field separator, with GNU awk you'd do:
gawk -v RS='^$' '{ print "<" $0 ">" }' file
Replace the print with whatever you really want to do and update your question with some sample input and expected output if you want help with that part too.
The portable way to do this, by the way, is to build up the record line by line and then process it in the END section:
awk '{rec = rec (NR>1?RS:"") $0} END{ print "<" rec ">" }' file
using nf = split(rec,flds) to create fields if necessary.
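A sketch of the portable slurp with split applied to the collected record (sample input invented):

```sh
printf 'a b\nc d\n' |
awk '{ rec = rec (NR > 1 ? RS : "") $0 }   # append each line, joined by RS ("\n")
     END {
         nf = split(rec, flds, /[ \n]+/)   # split the whole file into fields
         print nf, flds[1], flds[nf]
     }'
# 4 a d
```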
