I have a YML file with content similar to,
test:
  volumes:
    - /u01/test-service/conf:/root/config

testmanager:
  port:
    - "2222:80"
I want to delete the test or testmanager block based on some conditions. Here is the awk expression I found:
awk '{sub(/\r$/, "")}
$1 == "test:"{t=1}
t==1 && $1 != "test:" {t++; next}
t==2 && /:\s*$/{t=0}
t != 2'
This deletes everything under test but keeps the string "test:", like this:
test:
testmanager:
  port:
    - "2222:80"
How to fix this? Please help.
With the shown input you can use this awk command with empty RS:
awk -v RS= '!/^[[:blank:]]*test:/' file.yml
testmanager:
  port:
    - "2222:80"
This assumes there is an empty line between each block. If that is not the case, you can modify your existing command like this:
awk '{sub(/\r$/, "")}
$1 == "test:"{t=1}
t==1 && $1 != "test:" {t++; next}
t==2 && /:\s*$/{t=0}
!t' file.yml
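Since you want to delete either the test or the testmanager block depending on a condition, here is a hedged sketch that parameterizes the empty-RS approach (blk is an assumed shell variable name, and blank-line-separated blocks are assumed):
blk="test"    # or "testmanager", chosen by your condition
awk -v RS= -v ORS='\n\n' -v blk="$blk" '$1 != (blk ":")' file.yml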
I'm trying to edit 3 columns in a file if the value in column 1 equals a specific string. This is my current attempt:
cp file file.copy
awk -F':' 'OFS=":" { if ($1 == "root1") $2="test"; print}' file.copy>file
rm file.copy
I've only been able to get the awk command working with one column being changed; I want to be able to edit $3 and $8 as well. Is this possible in the same command, or only with separate awk commands or with a different command altogether?
Edit note: in the real command I'll be passing variables to the columns, e.g. $2=$var
It'll be used to edit the /etc/passwd file, sample input/output:
root:$6$fR7Vrjyp$irnF38R/htMSuk0efLSnAten/epf.5v7gfs0q.NcjKcFPeJmB/4TnnmgaAoTUE9.n4p4UyWOgFwB1guJau8AL.:17976::::::
You can create multiple statements for the if condition with a block {}.
awk -F':' 'OFS=":" { if ($1 == "root1") {$2="test"; $3="test2";} print}' file.copy>file
You can also improve your command by using awk's default workflow of condition{commands}. For this you need to pass OFS as an input variable (-v flag); the trailing 1 prints every record, so non-matching lines are kept as well:
awk -F':' -v OFS=":" '$1=="root1"{$2="test"; $3="test2"}1' file.copy>file
You may use
# Fake sample values
v1=pass1
v2=pass2
awk -v var1="$v1" -v var2="$v2" 'BEGIN{FS=OFS=":"} $1 == "root1" { $2 = var1; $3 = var2}1' file > tmp && mv tmp file
See the awk demo:
s="root1:xxxx:yyyy
root11:xxxx:yyyy
root1:zzzz:cccc"
v1=pass1
v2=pass2
awk -v var1="$v1" -v var2="$v2" 'BEGIN{FS=OFS=":"} $1 == "root1" { $2 = var1; $3 = var2}1' <<< "$s"
Output:
root1:pass1:pass2
root11:xxxx:yyyy
root1:pass1:pass2
Note:
-v var1="$v1" -v var2="$v2" pass the variables you need to use in the awk command
BEGIN{FS=OFS=":"} set the input and output field separators to :
$1 == "root1" check if Field 1 is equal to some value
{ $2 = var1; $3 = var2 } set Field 2 and 3 values
1 triggers the default action, which prints the (possibly modified) record
file > tmp && mv tmp file emulates in-place editing: write to a temporary file, then replace the original.
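Applying the same pattern to the question's own use case (editing $2, $3 and $8 with shell variables; newval2, newval3 and newval8 are placeholder names, not from the original), a sketch could be:
newval2='hash'; newval3='0'; newval8='flag'
awk -v a="$newval2" -v b="$newval3" -v c="$newval8" 'BEGIN{FS=OFS=":"}
$1 == "root1" { $2 = a; $3 = b; $8 = c }
1' file > tmp && mv tmp file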
I have a property file meant for Java like:
oracle {
  username = "bla"
  password = "blabla"
  driver = "driver1"
}

postgres {
  username = "pg"
  password = "pg"
  driver = "pg-driver"
}
When read into Java I can extract the oracle.driver property, which returns driver1.
Now I want to extract the same string in a bash script.
I have tried something like:
grep -A5 oracle application.conf | grep -Po 'driver = ".*?"' | grep -Po '".*"'
returning "driver1" (including the quotes). I also tried using sed substitute but that also did not yield the driver1 string.
How can I retriever only driver1?
Whenever you have name -> value mappings in your data, first creating an array to store those mappings (f[] below) and then accessing the data by its name provides the simplest, clearest and easiest-to-enhance solution:
$ awk -v RS= '$1=="oracle"{ for (i=3;i<=NF;i+=3) f[$i]=$(i+2); print f["username"]}' file
"bla"
$ awk -v RS= '$1=="oracle"{ for (i=3;i<=NF;i+=3) f[$i]=$(i+2); print f["password"]}' file
"blabla"
$ awk -v RS= '$1=="oracle"{ for (i=3;i<=NF;i+=3) f[$i]=$(i+2); print f["driver"]}' file
"driver1"
$ awk -v name="driver" -v RS= '$1=="oracle"{ for (i=3;i<=NF;i+=3) f[$i]=$(i+2); print f[name]}' file
"driver1"
With a single awk command that will work in any awk implementation:
awk '/oracle/{ f=1 }f && $1=="driver"{ gsub(/"/,""); print $3; exit }' file
/oracle/{ f=1 } - on encountering a line matching the pattern oracle, set the active flag f
f && $1=="driver" - if we are inside the "active" section ("oracle") and the 1st field $1 equals driver:
gsub(/"/,"") - remove the double quotes from the line
print $3 - print the 3rd field, which is the driver value
exit - exit the script immediately, avoiding redundant processing
The output:
driver1
Using awk, you can do this with an empty record separator:
awk -v RS= '/^[[:blank:]]*oracle/{
gsub(/.*driver[[:blank:]]*=[[:blank:]]*|\n.*$|"/, ""); print}' application.conf
driver1
An empty RS makes each run of contiguous non-empty lines a single record.
You can try it with sed too:
database='oracle'
search='driver'
sed -n '
/'"$database"'/!d
:A
n
/'"$search"'/!bA
s/[^"]*"\([^"]*\)"/\1/
p
q
' application.conf
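For instance, switching the variables should extract the postgres username instead:
database='postgres'
search='username'
Running the same sed script then prints pg.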
I've tried the below command:
awk '/search-pattern/ {print $1}'
How do I write the else part for the above command?
Classic way:
awk '{if ($0 ~ /pattern/) {then_actions} else {else_actions}}' file
$0 represents the whole input record.
Another idiomatic way is based on the ternary operator syntax selector ? if-true-exp : if-false-exp:
awk '{print ($0 ~ /pattern/)?text_for_true:text_for_false}'
awk '{x == y ? a[i++] : b[i++]}'
awk '{print ($0 ~ /two/)?NR "yes":NR "No"}' <<<$'one two\nthree four\nfive six\nseven two'
1yes
2No
3No
4yes
A straightforward method is,
/REGEX/ {action-if-matches...}
! /REGEX/ {action-if-does-not-match}
Here's a simple example,
$ cat test.txt
123
456
$ awk '/123/{print "O",$0} !/123/{print "X",$0}' test.txt
O 123
X 456
Equivalent to the above, but without violating the DRY principle:
awk '/123/{print "O",$0}{print "X",$0}' test.txt
This is functionally equivalent to awk '/123/{print "O",$0} !/123/{print "X",$0}' test.txt
Depending what you want to do in the else part and other things about your script, choose between these options:
awk '/regexp/{print "true"; next} {print "false"}'
awk '{if (/regexp/) {print "true"} else {print "false"}}'
awk '{print (/regexp/ ? "true" : "false")}'
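Applied to the question's own one-liner, the else branch could look like this (the "no match" text is just a placeholder):
awk '/search-pattern/ {print $1; next} {print "no match"}' file
awk '{print (/search-pattern/ ? $1 : "no match")}' file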
The default action of awk when a pattern matches is to print the line. You're encouraged to use more idiomatic awk:
awk '/pattern/' filename
#prints all lines that contain the pattern.
awk '!/pattern/' filename
#prints all lines that do not contain the pattern.
# If you find if(condition){}else{} overkill to use
awk '/pattern/{print "yes";next}{print "no"}' filename
# Same as if(pattern){print "yes"}else{print "no"}
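For instance, a quick sanity check of the last idiom prints yes and then no:
printf 'foo\nbar\n' | awk '/foo/{print "yes";next}{print "no"}'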
This command checks whether the values in columns $1, $2 and $7 are greater than 1, 1 and 5, respectively.
If the values do not match, the lines are filtered out by the condition we declared in awk.
(You can combine conditions with the logical operators && for "and" and || for "or".)
awk '($1 > 1) && ($2 > 1) && ($7 > 5)'
You can monitor your system with the "vmstat 3" command, where "3" means a 3-second delay between updates:
vmstat 3 | awk '($1 > 1) && ($2 > 1) && ($7 > 5)'
I stressed my computer with a 13 GB copy between USB-connected hard disks and by scrolling a YouTube video in the Chrome browser.
I have a file that has multiple lines that start with the same keyword. I only want to modify one of them, and it's easy to distinguish the two: I want the one under the [dbinfo] section. The domain name is static, so I know that won't change.
awk -F '=' '$1 ~ /^dbhost/ {print $NF};' myfile.txt
myfile.txt
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=ec0001.us-east-1.localdomain
dbname=ez098sf_default
dbpass=XXXXXX
You can use this awk command to first check for presence of [dbinfo] section and then modify dbhost parameter:
awk -v h='newhost' 'BEGIN{FS=OFS="="}
$0 == "[dbinfo]" {sec=1} sec && $1 == "dbhost"{$2 = h; sec=0} 1' file
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=newhost
dbname=ez098sf_default
dbpass=XXXXXX
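If you also want to write the change back to the file instead of just printing it, a sketch reusing the tmp-and-move idiom (not part of the original answer) could be:
awk -v h='newhost' 'BEGIN{FS=OFS="="}
$0 == "[dbinfo]" {sec=1} sec && $1 == "dbhost"{$2 = h; sec=0} 1' file > tmp && mv tmp file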
You want to utilize a little bit of a state machine here:
awk -F '=' '
$0 ~ /^\[.*\]/ {in_db_info=($0=="[dbinfo]")}
$0 ~ /^dbhost/{if (in_db_info) print $2;}' myfile.txt
You can also do it with sed:
sed '/\[dbinfo\]/,/\[/s/\(^dbhost=\).*/\1domain.com/' myfile.txt
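If you want sed to edit the file in place rather than print to stdout, and assuming GNU sed is available, a variant would be:
sed -i '/\[dbinfo\]/,/\[/s/\(^dbhost=\).*/\1domain.com/' myfile.txt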
I'm having problems when running a script from cron. First I had difficulties accessing SQLite; now it's awk commands that are driving me crazy.
The problematic line is this:
sens=`awk -F, '{ if($2 == '${num}' && $4 == '$tipogalis' && $9 == "0")print $1 }' /usr/xbow/xserve/galtel/relasens`
I don't want to bother you with the details; it's the main line of a while loop that has to read the value of a column inside a file. It works perfectly from the command line, but running as a cron job leaves the variable "sens" empty.
I already checked that all the variables inside the line are read OK (num, tipogalis, etc.), so I'm pretty sure the problem is related to the number of "&&" operators or to the "print" function.
Just in case someone wants to suggest something about the environment variables, I already added the following lines at the beginning of the script:
LANG=en_US.UTF-8
export LANG
But it made no difference.
Any other suggestions, please? I know the problem must be really tiny. The devil is always in the details...
I assume you have verified that the num and tipogalis variables hold the correct values when you run in cron.
I'm guessing you're missing double quotes around the shell-expanded values in the awk if statement.
sens=`awk -F, '{ if($2 == "'${num}'" && $4 == "'$tipogalis'" && $9 == "0")print $1 }' /usr/xbow/xserve/galtel/relasens`
I'd use the -v option to pass the value into awk instead of piecing together the quoting.
sens=$(awk -F, -v val2="$num" -v val4="$tipogalis" '$2 == val2 && $4 == val4 && $9 == "0" {print $1}' /usr/xbow/xserve/galtel/relasens)
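As a quick check of the -v approach outside cron, here is a sketch with fake sample data (testdata is an assumed file name):
printf 'a,5,x,t1,e,f,g,h,0\n' > testdata
awk -F, -v val2="5" -v val4="t1" '$2 == val2 && $4 == val4 && $9 == "0" {print $1}' testdata
This should print a.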
In the end the correct way was this one:
sens=`awk -F, '{ if($2 == '${num}' && $4 == '$tipogalis' && $9 == '0')print $1 }' /usr/xbow/xserve/galtel/relasens`
but the problem was not the line. My call was made to a $9 that was never equal to '0' due to an internal problem.
I'm sorry. This post can even be completely erased to avoid confusing other users.