AWK: Passing two arguments and weird error - bash

I have made an awk implementation of grep -c ^str file and I want to pass the file and str arguments from a shell script. I am using awk -v twice to pass the arguments, but I get an awk: cannot open var1 (No such file or directory) error.
I just can't get around it, I've been trying for almost an hour.
My code:
read -p "Give me a file name " file
read -p "Give me a string " str
awk -v var1="$file" -v var2="$str" 'BEGIN{print var1; print var2}{/^var2/}' var1 |
awk '{if ($0 != "/s") {count++}} END {print count}'

It should be:
awk -v var1="$file" -v var2="$str" 'BEGIN{print var1; print var2}{/^var2/}' "$file"
awk variables can only be accessed inside the awk code (delimited by single quotes in this case), not at the shell level, where var1 means nothing.
Note also that inside /^var2/ the text var2 is just a literal string between slashes; use $0 ~ "^"var2 instead to match against the value of var2.
In fact, your awk code can be rewritten as:
awk -v var="$str" '$0 ~ "^"var && $0 != "/s"{count++}END{print count}' "$file"
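For a quick sanity check, here is a made-up run (fruits.txt and the prefix ap are invented for the example):
$ printf 'apple\napricot\nbanana\n' > fruits.txt
$ awk -v var="ap" '$0 ~ "^"var && $0 != "/s"{count++}END{print count}' fruits.txt
2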

Related

awk issue inside for loop

I have many files with different names that end in .txt:
rtfgtq56.txt
fgutr567.txt
..
So I am running this command
for i in *txt
do
awk -F "\t" '{print $2}' $i | grep "K" | awk '{print}' ORS=';' | awk -F "\t" '{OFS="\t"; print $i, $1}' > ${i%.txt*}.k
done
My problem is that I want to add the name of every file in the first column, so I run this part:
awk -F "\t" '{OFS="\t"; print $i, $1}' > ${i%.txt*}
$i means the file currently being processed in the for loop,
but it did not work because awk can't read the $i from the for loop.
Do you know how I can solve it?
You want to refactor everything into a single Awk script anyway, and take care to quote your shell variables.
for i in *.txt
do
awk -F "\t" '/K/{a = a ";" $2}
END { print FILENAME, substr(a, 2) }' "$i" > "${i%.txt*}.k"
done
... assuming I untangled your logic correctly. The FILENAME Awk variable contains the current input file name.
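For illustration, with a made-up rtfgtq56.txt containing tab-separated fields (the redirection is dropped here so the result stays visible):
$ printf 'x\tK10\ny\tZ99\nz\tK20\n' > rtfgtq56.txt
$ awk -F "\t" '/K/{a = a ";" $2} END { print FILENAME, substr(a, 2) }' rtfgtq56.txt
rtfgtq56.txt K10;K20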
More generally, if you genuinely want to pass a variable from a shell script to Awk, you can use
awk -v awkvar="$shellvar" ' .... # your awk script here
# Use awkvar to refer to the Awk variable'
Perhaps see also useless use of grep.
Using the -v option of awk, you can create an awk variable based on a shell variable.
awk -v i="$i" ....
Another possibility would be to make i an environment variable, which means that awk can access it via the predefined ENVIRON array, i.e. as ENVIRON["i"].
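A minimal sketch of the ENVIRON route (the variable name i is only for illustration):
$ export i=rtfgtq56.txt
$ awk 'BEGIN { print "current file: " ENVIRON["i"] }'
current file: rtfgtq56.txt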

Awk output to shell variable with shell variable as search pattern

I'm trying to find (with awk) the IP of a specific ethernet interface, using the hostname as a search pattern (suffixed by the name of the ethernet interface). I wrote this little script but it outputs nothing and I don't understand why...
#!/bin/bash
name=$(hostname -s)-eth3
IP1=`awk -v var=$name '/var/ {print $1}' /etc/hosts`
echo $IP1
Could you please try making the few changes shown below; they may help you.
#!/bin/bash
name=$(hostname -s)-eth3
IP1=$(awk -v var="$name" '$0 ~ var{print $1}' "/etc/hosts")
echo "$IP1"
Changes: backticks are not encouraged for storing command output in bash variables; $( ... ) should be used instead, and use echo "$var" (with quotes) too.
Inside slashes, awk treats var as a literal string, not a variable.
Thus, replace:
/var/
With:
$0 ~ var
Thus use:
#!/bin/bash
name=$(hostname -s)-eth3
ip1=`awk -v var=$name '$0 ~ var {print $1}' /etc/hosts`
echo "$ip1"
Example
The first awk script below produces no match but the second does:
$ echo host-eth1 | awk -v var='host-eth1' '/var/ {print $1}'
$ echo host-eth1 | awk -v var='host-eth1' '$0 ~ var {print $1}'
host-eth1

Bash: AWK - $1 as first parameter of shell script

I have spent 2 hours on this and got nothing. I want to take $1 and $2 from the command line input of the shell script, but I couldn't manage it, and $3 and $0 should be columns in awk. I have tried different methods but nothing works for me.
awk -F':' -v "limit=1000" '{ if ( $3 >=limit ) gsub("~/$1/",~/$2/); print \$0}' file.txt
The cleanest method is to explicitly pass the values from the shell to awk with awk's -v option:
awk -F: -v limit=1000 -v patt="~/$1/" -v repl="~/$2/" '
$3 >=limit {gsub(patt,repl); print}
' file.txt
When your awk line is part of a script file and you want to use $1 and $2 from the script in your awk command, you should temporarily stop the literal string with a single quote and start it again.
awk -F':' -v "limit=1000" '{ if ( $3 >=limit ) gsub("~/'$1'/",~/'$2'/); print $0}' file.txt
You didn't post any sample input or expected output so this is a guess but you probably want something like this:
awk -F':' -v limit=1000 -v arg1="$1" -v arg2="$2" '$3 >= limit{gsub("~/" arg1 "/","~/" arg2 "/"); print}' file.txt
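Since there was no sample input, here is a hedged example of how that behaves, with invented file.txt contents and olddir/newdir standing in for $1 and $2:
$ cat file.txt
alice:x:1200:~/olddir/data
bob:x:500:~/olddir/data
$ awk -F':' -v limit=1000 -v arg1="olddir" -v arg2="newdir" '$3 >= limit{gsub("~/" arg1 "/","~/" arg2 "/"); print}' file.txt
alice:x:1200:~/newdir/data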

awk printf with variable

The following expression works as expected:
$ awk 'BEGIN {print 4/3}'
1.33333
However if I use a variable in place of the literal value then it does not print
as expected:
$ awk -v foo=4/3 'BEGIN {print foo}'
4/3
How can I use a variable with an awk printf expression?
This is a workaround:
$ printf 'BEGIN {print %s}' 4/3 | awk -f-
1.33333
Note that foo=4/3 sets foo to the string 4/3. When that is printed via %f, '4/3' is treated as 4; when that is printed with %s, it is printed as 4/3. If you want to evaluate the expression, you need it evaluated inside the script.
For example:
awk 'END {printf "%f\n", foonum/fooden }' foonum=4 fooden=3 /dev/null
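To see the string-versus-number behaviour described above in one line:
$ awk -v foo=4/3 'BEGIN { printf "%s %f\n", foo, foo }'
4/3 4.000000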
Note that bash does not do floating point arithmetic. Thus this produces 1 as the output:
awk 'END {printf "%s\n", foo }' foo=$((4/3)) /dev/null
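The shell's arithmetic expansion already truncates to an integer before awk ever sees the value:
$ echo $((4/3))
1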
Maybe you want to use bc:
$ bc -l <<< "4/3"
1.33333333333333333333
$
You have a syntax issue; the following works:
awk 'END {foo=4/3; printf "%f", foo}' /dev/null
OR to pass the value from shell to awk:
foo=$(bc -l <<< 4/3) && awk -v foo=$foo 'END {printf "%f\n", foo}' /dev/null
Please note that awk doesn't have an eval() function, so if you want to evaluate a maths expression inside awk by passing the whole expression in as a string, as in your example, use something like this:
awk 'END {("echo " foo " | bc -l") | getline foo; printf("%f\n", foo)}' foo='4/3' /dev/null
Edit: without the use of bc:
awk 'END {split(foo, a, "/"); printf("%f\n", a[1]/a[2])}' foo='4/3' /dev/null
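Running the bc-free version (output from a typical awk, at the default %f precision):
$ awk 'END {split(foo, a, "/"); printf("%f\n", a[1]/a[2])}' foo='4/3' /dev/null
1.333333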

Passing bash input variables to awk

I'm trying to pass a variable into awk from user input.
I have tried variations of awk -v, but get errors stating 'awk: invalid -v option', even though the option is listed in the man pages.
#! /bin/bash
read -p "Enter ClassID:" CLASS
read -p "Enter FacultyName:" FACULTY
awk '/FacultyName/ {print}' data-new.csv > $FACULTY.csv
awk -vclass=${CLASS} '/class/ {print}' data-new.csv >> $FACULTY.csv
echo Class is $CLASS
echo Faculty member is $FACULTY
Some versions of awk require a space between the -v and the variable assignment. Also, you should put the bash variable in double-quotes to prevent unwanted parsing by the shell (e.g. word splitting, wildcard expansion) before it's passed to awk. Finally, in awk /.../ is a constant regular expression (i.e. /class/ will search for the string "class", not the value of the variable "class"). With all of this corrected, here's the awk command that I think will do what you want:
awk -v class="${CLASS}" '$0 ~ class {print}' data-new.csv >> $FACULTY.csv
Now, is there any reason you're using this instead of:
grep "$CLASS" data-new.csv >> $FACULTY.csv
Your script is not clear to me, but these all work:
CLASS=ec123
echo | awk -vclass=$CLASS '{print class}'
echo | awk -vclass=${CLASS} '{print class}'
