I have a command that prints a range of values from a CSV file to another file:
date1var="mm/dd/yyyy hh:mm:ss"
date2var="mm/dd/yyyy hh:mm:ss"
awk -F, -v d1var="$date1var" -v d2var="$date2var" '$1 > d1var && $1 <= d2var {print $0 }' OFS=, plot_data.csv > graph1.csv
Is it possible to include my variables in the output filename?
The final filename should look something like:
graph_d1var-d2var.csv
Any ideas?
You can redirect the output of the print command to a file name, like this:
awk -F, -v d1var="$date1_var" -v d2var="$date2var" '
$1 > d1var && $1 <= d2var {
print > ("graph_" d1var "-" d2var ".csv")
}' OFS=, plot_data.csv
This uses the values of d1var and d2var to build the name of the output file. If you want the literal variable names in the filename instead, surround the whole name in double quotes.
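For example, inside the awk script the two forms look like this (a minimal sketch):
print > "graph_d1var-d2var.csv"              # literal names: always writes graph_d1var-d2var.csv
print > ("graph_" d1var "-" d2var ".csv")    # values: writes e.g. graph_<date1>-<date2>.csv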
Let the shell handle it: you're starting with shell variables, after all.
date1var="mm/dd/yyyy hh:mm:ss"
date2var="mm/dd/yyyy hh:mm:ss"
awk -F, -v OFS=, -v d1var="$date1var" \
-v d2var="$date2var" \
'
# awk script is unchanged
' plot_data.csv > "graph1_${date1var}-${date2var}.csv"
#!/bin/bash
date1var="1234"
date2var="5678"
awk -F, -v d1="$date1var" -v d2="$date2var" '{print > ("graph" d1 "-" d2 ".txt")}' OFS=, plot_data.csv
Note that you can't compare date strings in awk the way you are trying to: awk compares them as plain strings, and mm/dd/yyyy hh:mm:ss values don't sort chronologically as strings. Also double-check your variable names; it's easy to end up with date1_var in one place and date1var in another.
The short answer is that you can print to a named file with print > "filename", and that you can concatenate (join) strings simply by placing them next to each other, like this: string2 = string1 "and" string3
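A minimal standalone demo of both points (the names here are made up for illustration):
awk 'BEGIN { a = "graph"; b = "2020"; out = a "_" b ".csv"; print "hello" > out }'
This creates a file named graph_2020.csv containing the word hello.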
I have semicolon-separated columns, and I would like to add some characters to a specific column.
aaa;111;bbb
ccc;222;ddd
eee;333;fff
I want to add '#' to the second column, so the output should be:
aaa;#111;bbb
ccc;#222;ddd
eee;#333;fff
I tried
awk -F';' -OFS=';' '{ $2 = "#" $2}1' file
It adds the character, but it replaces all the semicolons with spaces.
You could use sed to do the job:
# replaces just the first occurrence of ';', note the absence of `g` that
# would have made it a global replacement
sed 's/;/;#/' file > file.out
or, to do it in place:
sed -i 's/;/;#/' file
Or, use awk:
awk -F';' '{$2 = "#"$2}1' OFS=';' file
All the above commands result in the same output for your example file:
aaa;#111;bbb
ccc;#222;ddd
eee;#333;fff
@atb: Try:
1st:
awk -F";" '{print $1 FS "#" $2 FS $3}' Input_file
The above will work only when your Input_file has exactly 3 fields.
2nd:
awk -F";" -vfield=2 '{$field="#"$field} 1' OFS=";" Input_file
Above code you could put any field number and could make it as per your request.
Here I am making field separator as ";" and then taking a variable named field which will have the field number in it and then that concatenating "#" in it's value and 1 is for making condition TRUE and not making and action so by default print action will happen of current line.
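For example, to prefix the 3rd field instead, run:
awk -F";" -v field=3 '{$field="#"$field} 1' OFS=";" Input_file
which turns the sample input into:
aaa;111;#bbb
ccc;222;#ddd
eee;333;#fff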
You just misunderstood how to set variables. Change -OFS to -v OFS:
awk -F';' -v OFS=';' '{ $2 = "#" $2 }1' file
but really you should set them both to the same value in one place:
awk 'BEGIN{FS=OFS=";"} { $2 = "#" $2 }1' file
I have a file with multiple lines that start with the same keyword. I only want to modify one of them, and it's easy to distinguish the two: I want the one under the [dbinfo] section. The domain name is static, so I know that won't change.
awk -F '=' '$1 ~ /^dbhost/ {print $NF};' myfile.txt
myfile.txt
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=ec0001.us-east-1.localdomain
dbname=ez098sf_default
dbpass=XXXXXX
You can use this awk command to first check for the presence of the [dbinfo] section and then modify the dbhost parameter:
awk -v h='newhost' 'BEGIN{FS=OFS="="}
$0 == "[dbinfo]" {sec=1} sec && $1 == "dbhost"{$2 = h; sec=0} 1' file
Output:
[ual]
path=/web/
dbhost=ez098sf
[dbinfo]
dbhost=newhost
dbname=ez098sf_default
dbpass=XXXXXX
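If you want to write the change back to the file itself and you have GNU awk 4.1 or later (an assumption about your environment), the inplace extension can do that:
gawk -i inplace -v h='newhost' 'BEGIN{FS=OFS="="}
$0 == "[dbinfo]" {sec=1} sec && $1 == "dbhost"{$2 = h; sec=0} 1' file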
You want to utilize a little bit of a state machine here:
awk -F '=' '
$0 ~ /^\[.*\]/ {in_db_info=($0=="[dbinfo]")}
$0 ~ /^dbhost/{if (in_db_info) print $2;}' myfile.txt
You can also do it with sed, using an address range that starts at [dbinfo] and ends at the next section header:
sed '/\[dbinfo\]/,/\[/s/\(^dbhost=\).*/\1domain.com/' myfile.txt
I have this input file: file_in.txt (delimited by pipe)
3345:tyg|rty|27|0|0|ty6|{89|io|}62|0
3346:tyg|rtyuio|63|0|1|ty6|{89|gh|}45|0
3347:tyu|ray|24|0|0|ty6|{89|uh|}27|0
3348:tyg|rtoy|93|0|1|ty6|{89|yh|}1|0
3349:tyo|rtert|28|0|0|ty6|{89|gh|}27|0
I want to get only the lines whose 9th field is }27, using '|' as the delimiter, so my output should be:
3347:tyu|ray|24|0|0|ty6|{89|uh|}27|0
3349:tyo|rtert|28|0|0|ty6|{89|gh|}27|0
The command below works fine:
awk -F"|" '{ if ($9 == "}27") print $0 }' file_in.txt
But I want to use a shell variable instead of "}27", so I tried this:
taskid="}27"
awk -v tid="$taskid" -F"|" '{ if ($9 == "}tid") print $0 }' file_in.txt
Please help me figure out where I am going wrong with this command.
Any other command suggestions to achieve the same are appreciated.
This should work; note that tid has to be used outside the double quotes, since inside quotes it is just literal text, not your variable:
taskid="}27"
awk -F'|' -v tid="$taskid" '$9 == tid' file
Output:
3347:tyu|ray|24|0|0|ty6|{89|uh|}27|0
3349:tyo|rtert|28|0|0|ty6|{89|gh|}27|0
Assuming your shell variable $taskid has the value 27, you want to use one of these forms:
Build the string with the opening brace in the shell:
awk -v tid="}$taskid" -F"|" '$9 == tid' file
Or do it in awk; awk's string concatenation is just placing strings side by side, with optional whitespace in between:
awk -v tid="$taskid" -F"|" '$9 == "}" tid' file
Your own command should have worked with this change:
$ ksh
$ taskid=}27
$ awk -v tid=$taskid -F"|" '{ if ($9 == tid) print $0}' file_in.txt
Output:
3347:tyu|ray|24|0|0|ty6|{89|uh|}27|0
3349:tyo|rtert|28|0|0|ty6|{89|gh|}27|0
I'm trying to print the date inside an awk command. I cannot find a way around the fact that the gawk script is enclosed in single quotes, which prevents the command substitution I need for date:
gawk '/.*(ge|ga).*/ { print $1 "," $2 "," date } ' >> file.csv
gawk '/.*(ge|ga).*/ { print $1 "," $2 "," echo date } ' >> file.csv
gawk '/.*(ge|ga).*/ { print $1 "," $2 "," `date` } ' >> file.csv
What is a way around this inside the gawk command? Thanks.
It's not 100% clear what you're trying to do here (some input and desired output would be useful) but I think this is what you want:
gawk -v date="$(date)" -v OFS=, '/g[ea]/ { print $1, $2, date }'
This sets an awk variable date based on the output of the date command and prints it after the first and second field. I've set the output field separator OFS to make your print command neater.
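If you only need part of the date, you can also pass a format to date on the shell side; %F (YYYY-MM-DD) below is just one choice:
gawk -v date="$(date +%F)" -v OFS=, '/g[ea]/ { print $1, $2, date }'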
Alternatively (and probably preferred) is to use the strftime function available in GNU awk:
gawk -v OFS=, '/g[ea]/ { print $1, $2, strftime() }'
The format of the output is slightly different but can be adjusted by passing a format string to the function. See the GNU awk documentation for more details on that.
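For example, strftime accepts a strftime(3)-style format string; the one below is only an illustration:
gawk -v OFS=, '/g[ea]/ { print $1, $2, strftime("%Y-%m-%d %H:%M:%S") }'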
I have also simplified your regular expression, based on the suggestions made in the comments (thanks).
I read a few topics but still cannot solve the problem.
Here is a test file as an example:
1:abc:100:/k/ll
2:abd:120:/k/gg
3:www:3:/k/ll
4:rrr:66:/k/gg
5:ddd:140:/k/ll
This is my code:
ZM=${2:-test}
VAR=$1
awk -F':' -v one="$VAR" '$4 ~ one $3 > 100' $ZM
I want this script to print the lines where the 3rd field is greater than 100 and the 4th field contains the string passed in as an argument, e.g. "ll".
For example:
./test.sh ll
Output:
1:abc:100:/k/ll
5:ddd:140:/k/ll
What am I doing wrong? Thanks for your responses!
For $3 > 100:
awk -v FS=":" -v one="$VAR" '{if($3>100 && $4~one){print $0}}' my_file
For $3 >= 100 (since your expected output includes the line where the 3rd field is exactly 100):
awk -v FS=":" -v one="$VAR" '{if($3>=100 && $4~one){print $0}}' my_file