Need help with awk
I am trying to assign awk output to a variable, but I get the following error.
var= $(awk '{print $2;next;}' <<< "$OLD_ADDR")
echo $var
where OLD_ADDR="gateway 192.168.1.1"
******** Error from console log using set -x *******
+++ awk '{print $2;next;}'
++ var=
++ 192.168.1.21
-bash: 192.168.1.21: command not found
++ echo
Thank you,
Spaces matter.
Consider:
var= $(awk '{print $2;next;}' <<< "$OLD_ADDR")
This tells the shell to set the variable var to empty (for that one command) and then run whatever the command substitution prints as a command. In your case, that is the IP address awk printed (192.168.1.21 in your trace). Since the shell can find no such command on the PATH, the error is generated:
-bash: 192.168.1.21: command not found
The solution is to write:
var=$(awk '{print $2;next;}' <<< "$OLD_ADDR")
This will, by contrast, assign the output of the awk command to var.
When making an assignment in shell, no spaces are allowed on either side of the equal sign.
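To spell out the rule, here is how the shell reads each spacing variant (a quick illustration, not from the original exchange):
var=value     # correct: assigns "value" to var
var =value    # runs a command named "var" with the argument "=value"
var= value    # sets var to empty for this command only, then tries to run "value"
var = value   # runs a command named "var" with the arguments "=" and "value"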
At the beginning I have a file, file.txt, which contains several pieces of information that I extract with the grep commands you see in the script.
What I want is to give the script the file it should use instead of file.txt, without editing the script each time. For example, if the file is named Me.txt, I don't want to go into the script and write Me.txt into each grep command, especially if I have dozens of commands.
Is there a way to do this?
#!/bin/bash
grep teste file.txt > testline.txt
awk '{print $2}' testline.txt > test.txt
echo '#'
echo '#'
grep remote file.txt > remoteline.txt
awk '{print $3}' remoteline.txt > remote.txt
echo '#'
echo '#'
grep adresse file.txt > adresseline.txt
awk '{print $2}' adresseline.txt > adresse.txt
Using a parameter, as many contributors here have suggested, is of course the obvious approach and the one usually taken in such a case, so I want to extend this idea:
If you do it naively as
filename=$1
you have to supply the name on every invocation. You can improve on this by providing a default value for the case the parameter is missing:
filename=${1:-file.txt}
But sometimes you are in a situation where, for a while (working on a specific task), you need the same filename over and over, and the default value happens not to be the one you need. Another possibility for passing information to a program is the environment. If you set the filename with
filename=${MOOFOO:-file.txt}
it means that - assuming your script is called myscript.sh - if you invoke your script by
MOOFOO=myfile.txt myscript.sh
it uses myfile.txt, while if you call it by
myscript.sh
it uses the default file.txt. You can also set MOOFOO in your shell, as
export MOOFOO=myfile.txt
and then, even a lone execution of
myscript.sh
will use myfile.txt instead of the default file.txt.
The most flexible approach is to combine both, and this is what I often do in such a situation. If your script contains
filename=${1:-${MOOFOO:-file.txt}}
it takes the name from the 1st parameter, but if there is no parameter, takes it from the variable MOOFOO, and if this variable is also undefined, uses file.txt as the last fallback.
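A quick demonstration of the precedence, reusing the names from above (otherfile.txt is just a stand-in for whatever name you pass):
myscript.sh otherfile.txt        # 1st parameter wins: uses otherfile.txt
MOOFOO=myfile.txt myscript.sh    # no parameter, so MOOFOO applies: uses myfile.txt
myscript.sh                      # neither is set: falls back to file.txt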
You should pass the filename as a command line parameter so that you can call your script like so:
script <filename>
Inside the script, you can access the command line parameters in the variables $1, $2,.... The variable $# contains the number of command line parameters passed to the script, and the variable $0 contains the path of the script itself.
As with all variables, you can choose to put the variable name in curly brackets which has advantages sometimes: ${1}, ${2}, ...
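One concrete case where the braces matter, as a small side note not from the original answer: positional parameters beyond the ninth.
set -- a b c d e f g h i j
echo "$10"    # prints "a0": the shell expands $1 and appends a literal 0
echo "${10}"  # prints "j": the braces select the tenth parameter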
#!/bin/bash
if [ $# = 1 ]; then
filename=${1}
else
echo "USAGE: $(basename ${0}) <filename>"
exit 1
fi
grep teste "${filename}" > testline.txt
awk '{print $2}' testline.txt > test.txt
echo '#'
echo '#'
grep remote "${filename}" > remoteline.txt
awk '{print $3}' remoteline.txt > remote.txt
echo '#'
echo '#'
grep adresse "${filename}" > adresseline.txt
awk '{print $2}' adresseline.txt > adresse.txt
By the way, you don't need two different files to achieve what you want; you can pipe the output of grep straight into awk, e.g.:
grep teste "${filename}" | awk '{print $2}' > test.txt
but then again, awk can do the regex match itself, reducing it all to just one command:
awk '/teste/ {print $2}' "${filename}" > test.txt
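Pulling those ideas together with the default-parameter trick from the answer above, the whole script could shrink to a sketch like this (same output files as before):
#!/bin/bash
filename=${1:-file.txt}    # use the first argument, or fall back to file.txt

awk '/teste/   {print $2}' "$filename" > test.txt
awk '/remote/  {print $3}' "$filename" > remote.txt
awk '/adresse/ {print $2}' "$filename" > adresse.txt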
Is it possible to store an awk script inside a shell variable; for example:
export script="'{printf(\$2); printf("\"\\n\"");}'"
echo $script
'{printf($2); printf("\n");}'
The script functions properly when I call it directly as such:
awk '{printf($2); printf("\n");}' testFile.txt
prints proper output
When I try and pass the script as a shell variable, I run into issues.
awk $script testFile.txt
awk: syntax error at source line 1
context is
>>> ' <<<
missing }
awk: bailing out at source line 1
I get a slightly different error when I wrap the variable in double quotes
awk "$script" testFile.txt
awk: syntax error at source line 1
context is
>>> ' <<<
awk: bailing out at source line 1
I'm still learning exactly how shell expansions work, I would appreciate any suggestions about what I am missing here.
The error is in your quoting. Try:
export script='{printf($2); printf("\n");}'
awk "${script}" YourFile
I am not sure about the proper answer to this, but a very ugly (and probably unstable depending on the $script contents) workaround would be:
echo $script | awk '{print "awk "$0" testFile.txt"}' | bash
This is just printing the contents of $script in an awk statement that is then executed by bash. I am not particularly proud of this, but maybe it helps!
When you type
awk '{printf($2); printf("\n");}' testFile.txt
awk only sees {printf($2); printf("\n");} -- the shell removes the quotes
(see Quote Removal in the bash manual)
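One way to see exactly what the shell hands to awk after quote removal is to print each argument on its own line (a quick illustration, not part of the original answers):
$ printf '<%s>\n' '{printf($2); printf("\n");}' testFile.txt
<{printf($2); printf("\n");}>
<testFile.txt>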
Heed @NeronLeVelu's answer.
I have this script that's designed to map variable names to commands that collect information about a system and then echo the results back. This works very well for the first few commands, but the last one keeps returning the value with "PRETTY_NAME=" still in the output.
Is there some problem with this that I'm not seeing?
I have tried using grep in front of awk:
grep PRETTY_NAME /etc/*-release | awk -F '=' '{print $2}'
Using escaped quotes:
awk -F \"=\" '/PRETTY_NAME/ {print $2}' /etc/*-release
Whole block (edited somewhat for relevance)
declare -A CMDS=(
[primaryMacAddress]="cat /sys/class/net/$(ip route show default | awk '/default/ {print $5}')/address"
[primaryIpAddress]="hostname --ip-address"
[hostname]="hostname"
[osType]="awk -F '=' '/PRETTY_NAME/ {print $2}' /etc/*-release"
)
#This bit is actually nested in another function
for kpair in "${!CMDS[@]}"; do
echo "$kpair=\"$( eval ${CMDS[$kpair]} )\""
done
Results when run from .sh file:
osType="PRETTY_NAME="Red Hat Enterprise Linux Server 7.4 (Maipo)""
expected:
osType=""Red Hat Enterprise Linux Server 7.4 (Maipo)""
When this command is run by itself, it seems to work as intended:
$ awk -F '=' '/PRETTY_NAME/ {print $2}' /etc/*-release
"Red Hat Enterprise Linux Server 7.4 (Maipo)"
Because your Awk command is specified in double quotes, interior dollar signs are subject to special treatment: the $2 is treated as a parameter substitution by your shell, and so the array element doesn't store the text $2 but rather its expansion. The Awk interpreter never sees the $2 syntax.
However, you have a second problem in your command dispatcher. Your eval command does not prevent word splitting:
eval ${CMDS[$kpair]}
you want this:
eval "${CMDS[$kpair]}"
without the quotes, your command is arbitrarily chopped into fields on whitespace. Then eval catenates the pieces together, using one space between them, and evaluates the resulting syntax. The difference can be demonstrated with the following example:
$ cmd="awk '/foo/ { print \$1\" \"\$2 }'"
$ echo 'foo a' | eval $cmd
foo a
$ echo 'foo a' | eval "$cmd"
foo a
We can just use echo to understand the issue:
$ echo $cmd
awk '/foo/ { print $1" "$2 }'
$ echo "$cmd"
awk '/foo/ { print $1" "$2 }'
The substitution of $cmd and the subsequent word splitting are done irrespective of any shell syntax that $cmd contains. We can see the pieces like this:
$ for x in $cmd ; do echo "<$x>" ; done
<awk>
<'/foo/>
<{>
<print>
<$1">
<"$2>
<}'>
When we execute eval $cmd, the above pieces are generated and re-combined by eval and evaluated. Needless to say, you don't want your command syntax to be chopped up and re-combined like this; who knows what sort of hidden bug will arise. It may be okay for the commands you have now, but as a generic command dispatch mechanism, it is flawed.
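Putting both points together, a sketch of the corrected block might look like the following (only the relevant array entry shown; this is an illustration based on the explanation above, not the poster's final code). The awk program is stored with a backslash-escaped \$2 so the shell leaves it alone, and the eval argument is quoted:
declare -A CMDS=(
  [osType]="awk -F '=' '/PRETTY_NAME/ {print \$2}' /etc/*-release"
)

for kpair in "${!CMDS[@]}"; do
  echo "$kpair=\"$( eval "${CMDS[$kpair]}" )\""
done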
I have a bash variable: agent1.ip with 192.168.100.137 as its value. When I refer to it in echo like this:
echo $agent1.ip
the result is:
.ip
How can I access the value?
UPDATE: my variables are:
Bash itself doesn't understand variable names with dots in them, but that doesn't mean you can't have such a variable in your environment. Here's an example of how to set it and get it all in one:
env 'agent1.ip=192.168.100.137' bash -c 'env | grep ^agent1\\.ip= | cut -d= -f2-'
Since agent1.ip is not a valid identifier in bash, the environment string agent1.ip=192.168.100.137 is not used to create a shell variable on shell startup.
I would use awk, a standard tool, to extract the value from the environment.
agent1_ip=$(awk 'BEGIN {print ENVIRON["agent1.ip"]}')
The cleanest solution is:
echo agent1.ip | awk '{print ENVIRON[$1]}'
Try this:
export myval=`env | grep agent1.port | awk -F'=' '{print $2}'`;echo $myval
Is your code nested, and using functions or scripts that use ksh?
Dotted variable names are an advanced feature in ksh93. A simple case is
$ a=1
$ a.b=123
$ echo ${a.b}
123
$ echo $a
1
If you first attempt to assign to a.b, you'll get
-ksh: a.b=123: no parent
IHTH
I would like to replace a variable inside the the awk command with a bash variable.
For example:
var="one two three"
echo $var | awk "{print $2}"
I want to replace the 2 in $2 with a shell variable holding the field number. I have tried awk -v as well as something like awk "{ print ${$wordnum} }", to no avail.
Sightly different approach:
$ echo $var
one two three
$ field=3
$ echo $var | awk -v f="$field" '{print $f}'
three
$ field=2
$ echo $var | awk -v f="$field" '{print $f}'
two
You've almost got it...
$ myfield='$3'
$ echo $var | awk "{print $myfield}"
three
The hard (single) quotes on the first line prevent the shell from interpreting $3. The soft (double) quotes on the second line allow the variable substitution.
You can concatenate parts of awk statements with variables. Maybe this is what you want in your script file:
echo $1|awk '{print($'$2');}'
Here the parts '{print($', the value of the positional parameter $2, and ');}' are concatenated and handed to awk.
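For example, if that line is the body of a script (call it field.sh, a name made up for this illustration), it could be invoked like this:
$ ./field.sh "one two three" 2
two
Here $1 is the string to split and $2 is the number of the field to print.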
EDIT: After some advice, I'd rather you didn't use this, except maybe as a one-time solution. It's better to get accustomed to doing it right from the start - see the link in the first comment.