How to do an if/else match on a pattern in awk - shell

I've tried the below command:
awk '/search-pattern/ {print $1}'
How do I write the else part for the above command?

Classic way:
awk '{if ($0 ~ /pattern/) {then_actions} else {else_actions}}' file
$0 represents the whole input record.
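A minimal illustration of that form (the sample lines here are made up):
$ printf 'one two\nthree four\n' | awk '{if ($0 ~ /two/) {print "match:", $0} else {print "no match:", $0}}'
match: one two
no match: three four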
Another idiomatic way is based on the ternary operator syntax selector ? if-true-exp : if-false-exp
awk '{print ($0 ~ /pattern/)?text_for_true:text_for_false}'
awk '{x == y ? a[i++] : b[i++]}'
awk '{print ($0 ~ /two/)?NR "yes":NR "No"}' <<<$'one two\nthree four\nfive six\nseven two'
1yes
2No
3No
4yes

A straightforward method is,
/REGEX/ {action-if-matches...}
! /REGEX/ {action-if-does-not-match}
Here's a simple example,
$ cat test.txt
123
456
$ awk '/123/{print "O",$0} !/123/{print "X",$0}' test.txt
O 123
X 456
Equivalent to the above, but without violating the DRY principle (the regex appears only once):
awk '/123/{print "O",$0; next} {print "X",$0}' test.txt
The next skips the unconditional block for matching lines, making this functionally equivalent to awk '/123/{print "O",$0} !/123/{print "X",$0}' test.txt

Depending on what you want to do in the else part, and on other aspects of your script, choose between these options:
awk '/regexp/{print "true"; next} {print "false"}'
awk '{if (/regexp/) {print "true"} else {print "false"}}'
awk '{print (/regexp/ ? "true" : "false")}'

The default action of awk is to print the current line. You're encouraged to use more idiomatic awk:
awk '/pattern/' filename
#prints all lines that contain the pattern.
awk '!/pattern/' filename
#prints all lines that do not contain the pattern.
# If you find if(condition){}else{} to be overkill
awk '/pattern/{print "yes";next}{print "no"}' filename
# Same as if(pattern){print "yes"}else{print "no"}

This command checks whether the values in the 1st, 2nd and 7th columns are greater than the given thresholds (1, 1 and 5 here).
If a line does not match, it is ignored by the filter we declared in awk.
(You can combine conditions with the logical operators && for AND and || for OR; an example follows below.)
awk '($1 > 1) && ($2 > 1) && ($7 > 5)'
You can monitor your system with the "vmstat 3" command, where "3" means a 3-second delay between samples:
vmstat 3 | awk '($1 > 1) && ($2 > 1) && ($7 > 5)'
I stressed my computer with a 13 GB copy between USB-connected hard disks and by scrolling a YouTube video in the Chrome browser.
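As a sketch of the OR form (the thresholds here are arbitrary), the following prints samples where either of the first two columns exceeds 1, skipping vmstat's two header lines:
vmstat 3 | awk 'NR > 2 && ($1 > 1 || $2 > 1)'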

Related

Convert slurm accounting output

I'm looking for a way to get the elapsed time output to always include days. At the moment I can't see a way of defining an output format, so I'm looking at using cut, awk, sed or similar command(s) to do this after the output has been generated.
So, any ideas how I can change output such as:
JobID|Partition|User|State|Elapsed|
902464|interactive-a|bob|COMPLETED|10-00:10:40
968491|interactive-a|bob|COMPLETED|12:49:20
970801|interactive-a|sam|COMPLETED|07:00:46
912973|interactive-a|tom|COMPLETED|41-02:34:41
971356|interactive-a|mat|COMPLETED|04:36:35
971912|interactive-a|mat|COMPLETED|02:12:02
972668|interactive-a|mat|COMPLETED|00:09:06
Into this format (the last column has 0- added where needed)
JobID|Partition|User|State|Elapsed|
902464|interactive-a|bob|COMPLETED|10-00:10:40|
968491|interactive-a|bob|COMPLETED|0-12:49:20|
970801|interactive-a|sam|COMPLETED|0-07:00:46|
912973|interactive-a|tom|COMPLETED|41-02:34:41|
971356|interactive-a|mat|COMPLETED|0-04:36:35|
971912|interactive-a|mat|COMPLETED|0-02:12:02|
972668|interactive-a|mat|COMPLETED|0-00:09:06|
Thanks
$ sed 's/|\([0-9:]\{1,\}\)$/|0-\1/' file
JobID|Partition|User|State|Elapsed|
902464|interactive-a|bob|COMPLETED|10-00:10:40
968491|interactive-a|bob|COMPLETED|0-12:49:20
970801|interactive-a|sam|COMPLETED|0-07:00:46
912973|interactive-a|tom|COMPLETED|41-02:34:41
971356|interactive-a|mat|COMPLETED|0-04:36:35
971912|interactive-a|mat|COMPLETED|0-02:12:02
972668|interactive-a|mat|COMPLETED|0-00:09:06
In awk:
$ awk -F\| '$5 ~ /-|E/ || ($5 = "0-" $5) && gsub(/ /,"|")' file
-F\| sets FS to |
$5 ~ /-|E/ matches and prints records that already have a - (or an E, for the header) in the fifth field
|| logical OR, i.e. if the previous didn't match, then:
($5 = "0-" $5) prepends 0- to the fifth field
&& gsub(/ /,"|") AND, since the rebuilt record now uses the default OFS (a space) between fields, replaces those spaces with |s.
The gsub above could be removed if -v OFS="|" were used:
$ awk -v OFS=\| -F\| '$5 ~ /-|E/ || ($5 = "0-" $5)' file
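The whole thing is just a pattern with no action, so whenever the expression evaluates to true, awk falls back to its default action of printing the record. The simplest form of that idiom:
awk 'NF' file
prints only the non-blank lines, because NF (the number of fields) is 0, i.e. false, on blank lines.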

Using awk to search for a line that starts with but also contains a string

I have a file that has multiple lines that start with a keyword. I only want to modify one of them, and it's easy to distinguish the two: I want the one that is under the [dbinfo] section. The domain name is static, so I know that won't change.
awk -F '=' '$1 ~ /^dbhost/ {print $NF};' myfile.txt
myfile.txt
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=ec0001.us-east-1.localdomain
dbname=ez098sf_default
dbpass=XXXXXX
You can use this awk command to first check for presence of [dbinfo] section and then modify dbhost parameter:
awk -v h='newhost' 'BEGIN{FS=OFS="="}
$0 == "[dbinfo]" {sec=1} sec && $1 == "dbhost"{$2 = h; sec=0} 1' file
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=newhost
dbname=ez098sf_default
dbpass=XXXXXX
You want to utilize a little bit of a state machine here:
awk -F '=' '
$0 ~ /^\[.*\]/ {in_db_info=($0=="[dbinfo]")}
$0 ~ /^dbhost/{if (in_db_info) print $2;}' myfile.txt
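Run against myfile.txt shown above, this prints only the dbhost value from the [dbinfo] section:
ec0001.us-east-1.localdomain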
You can also do it with sed:
sed '/\[dbinfo\]/,/\[/s/\(^dbhost=\).*/\1domain.com/' myfile.txt
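On the sample file this leaves the [ual] entry alone and rewrites only the dbhost under [dbinfo] (domain.com is just the stand-in new host from the command):
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=domain.com
dbname=ez098sf_default
dbpass=XXXXXX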

awk: sort file based on user input

I have this simple awk code:
awk -F, 'BEGIN{OFS=FS} {print $2,$1,$3}' $1
Works great, except I've hardcoded how I want to sort the comma-delimited fields of my plaintext file. I want to be able to specify at run time in which order I'd like to sort my fields.
One hacky way I thought about doing this was this:
read first
read second
read third
TOTAL=$first","$second","$third
awk -F, 'BEGIN{OFS=FS} {print $TOTAL}' $1
But this doesn't actually work:
awk: illegal field $(), name "TOTAL"
Also, I know a bit about awk's ability to accept user input:
BEGIN {
getline first < "-"
}
$1 == first {
}
But I wonder whether the variables created can in turn be used as variables in the original print command? Is there a better way?
You have to let bash expand $TOTAL before awk is called, so that awk sees the value of $TOTAL, not the literal string $TOTAL. This means using double, not single, quotes.
read first
read second
read third
# Dynamically construct the awk script to run
TOTAL="\$$first,\$$second,\$$third"
SCRIPT="BEGIN{OFS=FS} {print $TOTAL}"
awk -F, "$SCRIPT" "$1"
A safer method is to pass the field numbers as awk variables.
awk -F, -v c1="$first" -v c2="$second" -v c3="$third" 'BEGIN{OFS=FS} {print $c1, $c2, $c3}' "$1"
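For example, reordering a single line with the field numbers 2, 1, 3 (chosen arbitrarily):
$ echo 'a,b,c' | awk -F, -v c1=2 -v c2=1 -v c3=3 'BEGIN{OFS=FS} {print $c1, $c2, $c3}'
b,a,c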
All you need is:
awk -v order='3 1 2' 'BEGIN{split(order,o)} {for (i=1;i<=NF;i++) printf "%s%s", $(o[i]), (i<NF?OFS:ORS)}'
e.g.:
$ echo 'a b c' | awk -v order='3 1 2' 'BEGIN{split(order,o)} {for (i=1;i<=NF;i++) printf "%s%s", $(o[i]), (i<NF?OFS:ORS)}'
c a b
$ echo 'a b c' | awk -v order='2 3 1' 'BEGIN{split(order,o)} {for (i=1;i<=NF;i++) printf "%s%s", $(o[i]), (i<NF?OFS:ORS)}'
b c a

Creating an array with awk and passing it to a second awk operation

I have a column file and I want to print all the lines that do not contain the string SOL, and to print only the lines that do contain SOL but have the 5th column <1.2 or >4.8.
The file is structured as: MOLECULENAME ATOMNAME X Y Z
Example:
151SOL OW 6554 5.160 2.323 4.956
151SOL HW1 6555 5.188 2.254 4.690 ----> as you can see this atom is out of the threshold, but it needs to be printed
151SOL HW2 6556 5.115 2.279 5.034
What I thought is to save a vector with all the MOLECULENAMEs that I want, and then tell awk to match all the MOLECULENAMEs saved in vector "a" against the file, and print the complete output. (If I only do the first awk I end up having bad atom linkage near the threshold.)
The problem is that I have to pass the vector from the first awk to the second... I tried like this with a[], but of course it doesn't work.
How can I do this?
Here is the code I have so far:
a[] = (awk 'BEGIN{i=0} $1 !~ /SOL/{a[i]=$1;i++}; /SOL/ && $5 > 4.8 {a[i]=$1;i++};/SOL/ &&$5<1.2 {a[i]=$1;i++}')
awk -v a="$a[$i]" 'BEGIN{i=0} $1 ~ $a[i] {if (NR>6540) {for (j=0;j<3;j++) {print $0}} else {print $0}
You can put all lines of the same molecule name on one row by using sort on the file and then running this awk, which basically uses printf to keep printing on the same line until a different molecule name is found; then a new line starts. The second awk script is used to detect which molecule names have 3 valid lines in the original file. I hope this can help you to solve your problem:
sort your_file | awk 'BEGIN{ molname=""; } ( $0 !~ "SOL" || ( $0 ~ "SOL" && ( $5<1.2 || $5>4.8 ) ) ){ if($1!=molname){printf("\n");molname=$1}for(i=1;i<=NF;i++){printf("%s ",$i);}}' | awk 'NF>12 {print $0}'
awk '!/SOL/ || $5 < 1.2 || $5 > 4.8' inputfile.txt
Print (default behaviour) lines where:
"SOL" is not found
SOL is found and fifth column < 1.2
SOL is found and fifth column > 4.8
SOLVED! Thanks to all, here is how I solved it.
#!/bin/bash
file=$1
awk 'BEGIN {molecola=""; i=0; j=1}
{
  if ($1 !~ /SOL/) {print $0}
  else if ($1 != molecola && $1 ~ /SOL/) {
    for (j in arr_comp) {
      if (arr_comp[j] < 1.2 || arr_comp[j] > 5) {
        for (j in arr_comp) {print arr_mol[j]}
        break
      }
    }
    delete arr_comp
    delete arr_mol
    arr_mol[0]=$0
    arr_comp[0]=$5
    molecola=$1
    j=1
  }
  else {arr_mol[j]=$0; arr_comp[j]=$5; j++}
}' "$file"

creating a ":" delimited list in bash script using awk

I have the following lines:
380:<CHECKSUM_VALIDATION>
393:</CHECKSUM_VALIDATION>
437:<CHECKSUM_VALIDATION>
441:</CHECKSUM_VALIDATION>
I need to format it as below
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Is it possible to achieve the above output using "awk"? [I'm using bash]
Thank you!
Here you go:
awk -F '[:<>/]+' '{ n = $1; getline; print $2 ":" n ":" $1 }'
Explanation:
Set the field separator with -F to be a sequence of a mix of :<>/ characters; this way the first field will be the number, and the second will be CHECKSUM_VALIDATION
Save the first field in variable n and read the next line (which would overwrite $1)
Print the line: a combination of the number from the previous line, and the fields on the current line
Another approach without using getline:
awk -F '[:<>/]+' 'NR % 2 { n = $1 } NR % 2 == 0 { print $2 ":" n ":" $1 }'
This one uses the record counter NR to determine whether it's time to print: if NR is odd, save the first field in n, if NR is even, then print.
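Both commands produce the desired output on the sample input, e.g.:
$ awk -F '[:<>/]+' 'NR % 2 { n = $1 } NR % 2 == 0 { print $2 ":" n ":" $1 }' file.txt
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441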
You can try this sed,
sed 'N; s/\([0-9]\+\):<\(.*\)>\n\([0-9]\+\):<\(.*\)>/\2:\1:\3/' file.txt
Test:
sat:~$ sed 'N; s/\([0-9]\+\):<\(.*\)>\n\([0-9]\+\):<\(.*\)>/\2:\1:\3/' file.txt
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Another way:
awk -F: '/<C/ {printf "CHECKSUM_VALIDATION:%d:",$1; next} {print $1}'
Here is one with GNU awk:
awk -F"[:\n<>]" 'NR==1{print $3,$1,$5;f=$3;next} $3{print f,$3,$7}' OFS=":" RS="</CH" file
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Based on Jonas' post and avoiding getline, this awk should do:
awk -F '[:<>/]+' '/<C/ {f=$1;next} { print $2,f,$1}' OFS=\: file
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
