Bash - nested variable expansion inside command assignment

I'm sure this is simple, but I'm new to bash scripts and the syntax here is beyond me. I can't seem to find the right search terms for what I need. This script is really just a stepping stone to my final version.
Invocation: ./myscript.sh testFile
Script:
#!/bin/bash
file=$1
awk='{print $9}' # do not expand $9
awk="'/$file/$awk'" # DO expand file argument
echo "$awk" # prints '/graphic/{print $9}' (as expected)
echo "ls -l | awk $awk" # prints ls -l | awk '/graphic/{print $9}' (as expected)
test="$(ls -l | awk $awk)" # error
echo "$test"
Output:
'/testFile/{print $9}'
ls -l | awk '/testFile/{print $9}'
awk: syntax error at source line 1
context is
>>> ' <<<
missing }
awk: bailing out at source line 1
Even though I can copy the second echoed line and run it successfully, the failure of the command leads me to believe this is not simple string concatenation but some crazier voodoo.
I've tried some other versions as well, like making a variable containing the whole command, but then I get even less expected output.
If I do test="$($awk)" I get
'/testFile/{print $9}'
ls -l | awk '/testFile/{print $9}'
ls: $9}': No such file or directory
ls: '/testFile/{print: No such file or directory
ls: awk: No such file or directory
ls: |: No such file or directory
If I do test=$(awk) I get
'/testFile/{print $9}'
ls -l | awk '/testFile/{print $9}'
usage: awk [-F fs] [-v var=value] [-f progfile | 'prog'] [file ...]
Since my Google queries basically only contain the words "bash command variable assignment", I can't get anything related to the nested variable expansion that I have here. I understand what it's doing based on the error, but I couldn't say why or how to fix it.
If someone could provide a fix as well as explain or point me to a resource explaining what's going on here, it would be greatly appreciated. Or maybe there's even another approach that would simplify the logic.
Thanks!

Change to:
test="$(ls -l | awk "$awk")" # error
awk requires the script to be a single argument. But when you expand a variable outside double quotes, the shell performs word splitting, so $awk is expanded into two arguments:
'/testFile/{print
$9}'
The quotes keep the expansion as a single argument.
Also, take the single quotes out of
awk="'/$file/$awk'"
Single quotes are not processed after expanding a variable, so they'll be passed literally to awk. Putting double quotes around $awk achieves the result you were trying to get with these quotes.
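Putting both fixes together, a minimal sketch of the corrected script (same variable names and goal as the original) would be:
#!/bin/bash
file=$1
awk='{print $9}'                 # do not expand $9
awk="/$file/$awk"                # DO expand the file argument; no literal single quotes
echo "$awk"                      # prints /testFile/{print $9}
test="$(ls -l | awk "$awk")"     # quote the expansion so awk receives one argument
echo "$test"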

Related

give a file without changing the name in script [duplicate]

This question already has answers here:
How to pass parameters to a Bash script?
At the beginning I have a file, file.txt, which contains several pieces of information that I extract using the grep command, as you can see in the script.
What I want is to give the script the file I want instead of file.txt, without changing the file name in the script each time. For example, if the file is named Me.txt, I don't want to go into the script and write Me.txt in each grep command, especially if I have dozens of commands.
Is there a way to do this?
#!/bin/bash
grep teste file.txt > testline.txt
awk '{print $2}' testline.txt > test.txt
echo '#'
echo '#'
grep remote file.txt > remoteline.txt
awk '{print $3}' remoteline.txt > remote.txt
echo '#'
echo '#'
grep adresse file.txt > adresseline.txt
awk '{print $2}' adresseline.txt > adresse.txt
Using a parameter, as many contributors here suggested, is of course the obvious approach, and the one which is usually taken in such cases, so I want to extend this idea:
If you do it naively as
filename=$1
you have to supply the name on every invocation. You can improve on this by providing a default value for the case the parameter is missing:
filename=${1:-file.txt}
But sometimes you are in a situation where, for some time (while working on a specific task), you always need the same filename over and over, and the default value happens not to be the one you need. Another possibility for passing information to a program is via the environment. If you set the filename with
filename=${MOOFOO:-file.txt}
it means that - assuming your script is called myscript.sh - if you invoke your script by
MOOFOO=myfile.txt myscript.sh
it uses myfile.txt, while if you call it by
myscript.sh
it uses the default file.txt. You can also set MOOFOO in your shell, as
export MOOFOO=myfile.txt
and then, even a lone execution of
myscript.sh
will use myfile.txt instead of the default file.txt.
The most flexible approach is to combine both, and this is what I often do in such a situation. If you do in your script a
filename=${1:-${MOOFOO:-file.txt}}
it takes the name from the 1st parameter, but if there is no parameter, takes it from the variable MOOFOO, and if this variable is also undefined, uses file.txt as the last fallback.
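For illustration, assuming the script is saved as myscript.sh, the combined fallback behaves like this (hypothetical invocations):
./myscript.sh Me.txt            # the 1st parameter wins
MOOFOO=Me.txt ./myscript.sh     # no parameter: the environment variable is used
./myscript.sh                   # neither: file.txt is the last fallback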
You should pass the filename as a command line parameter so that you can call your script like so:
script <filename>
Inside the script, you can access the command line parameters in the variables $1, $2,.... The variable $# contains the number of command line parameters passed to the script, and the variable $0 contains the path of the script itself.
As with all variables, you can choose to put the variable name in curly brackets which has advantages sometimes: ${1}, ${2}, ...
#!/bin/bash
if [ $# = 1 ]; then
    filename=${1}
else
    echo "USAGE: $(basename ${0}) <filename>"
    exit 1
fi
grep teste "${filename}" > testline.txt
awk '{print $2}' testline.txt > test.txt
echo '#'
echo '#'
grep remote "${filename}" > remoteline.txt
awk '{print $3}' remoteline.txt > remote.txt
echo '#'
echo '#'
grep adresse "${filename}" > adresseline.txt
awk '{print $2}' adresseline.txt > adresse.txt
By the way, you don't need two different files to achieve what you want; you can just pipe the output of grep straight into awk, e.g.:
grep teste "${filename}" | awk '{print $2}' > test.txt
but then again, awk can do the regex match itself, reducing it all to just one command:
awk '/teste/ {print $2}' "${filename}" > test.txt
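Putting those two simplifications together, the body of the original script could shrink to three lines; a sketch keeping the same output files (and omitting the echo '#' separators):
awk '/teste/ {print $2}' "${filename}" > test.txt
awk '/remote/ {print $3}' "${filename}" > remote.txt
awk '/adresse/ {print $2}' "${filename}" > adresse.txt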

BASH: parse a variable with awk

I have a variable that contains the result of the command whereis ls which is:
ls: /bin/ls /usr/share/man/man1/ls.1.gz
I need to search through this variable and retrieve this specific portion and save it into a new variable, newVar:
/bin
I have tried echo $var | awk '{print $2}'
but this grabs /bin/ls
I then need to search through my $PATH variable for the substring /bin: (I was thinking of using my newVar as the match somehow), remove that portion of $PATH, and update $PATH to reflect the change.
Quite new to bash scripting and any help would be greatly appreciated.
You might just use dirname and which:
dirname "$(which ls)"
You may use this awk:
whereis ls | awk '{sub(/\/ls$/, "", $2); print $2}'
sub function strips trailing /ls from 2nd column of whereis output.
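To land the result in newVar, as asked, either approach can be wrapped in a command substitution (a small sketch):
newVar="$(dirname "$(which ls)")"
# or, using the awk variant above:
newVar="$(whereis ls | awk '{sub(/\/ls$/, "", $2); print $2}')"
echo "$newVar"    # /bin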
$ echo "$var" | cut -d/ -f2
bin
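For the second part of the question (stripping that directory from $PATH), a rough bash sketch might be the following; it assumes the directory appears in $PATH with a trailing colon, i.e. it is not the last entry:
newVar="$(dirname "$(which ls)")"   # e.g. /bin
PATH="${PATH//"$newVar:"/}"         # remove every "/bin:" occurrence
export PATH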

Is it possible to pass a script to awk inside a shell variable?

Is it possible to store an awk script inside a shell variable; for example:
export script="'{printf(\$2); printf("\"\\n\"");}'"
echo $script
'{printf($2); printf("\n");}'
The script functions properly when I call it directly as such:
awk '{printf($2); printf("\n");}' testFile.txt
prints proper output
When I try to pass the script as a shell variable, I run into issues.
awk $script testFile.txt
awk: syntax error at source line 1
context is
>>> ' <<<
missing }
awk: bailing out at source line 1
I get a slightly different error when I wrap the variable in double quotes
awk "$script" testFile.txt
awk: syntax error at source line 1
context is
>>> ' <<<
awk: bailing out at source line 1
I'm still learning exactly how shell expansions work, I would appreciate any suggestions about what I am missing here.
The error is in your quoting:
export script='{printf($2); printf("\n");}'
awk "${script}" YourFile
I am not sure about the proper answer to this, but a very ugly (and probably unstable depending on the $script contents) workaround would be:
echo $script | awk '{print "awk "$0" testFile.txt"}' | bash
This is just printing the contents of $script in an awk statement that is then executed by bash. I am not particularly proud of this, but maybe it helps!
When you type
awk '{printf($2); printf("\n");}' testFile.txt
awk only sees {printf($2); printf("\n");} -- the shell removes the quotes
(see Quote Removal in the bash manual)
Heed @NeronLeVelu's answer.
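If it helps to see the expansions directly, here is a small diagnostic sketch (using the corrected assignment from the answer above); each <...> line is one argument exactly as the shell would hand it to awk:
script='{printf($2); printf("\n");}'
printf '<%s>\n' $script      # unquoted: split on whitespace into two arguments
printf '<%s>\n' "$script"    # double-quoted: the whole program as one argument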

BASH - add prefix (file path) to each line in text file using awk

I am trying to get the full paths of the files within a directory. So far this is what I have in bash.
prefix="s3://${s3_bucket}/${s3_folder}/$(date --date="$i days ago" +"%Y/%m/%d")/"
#echo $prefix
aws s3 ls s3://${s3_bucket}/${s3_folder}/$(date --date="$i days ago" +"%Y/%m/%d")/ | sed -n 's/.*\([0-9][0-9]-h.*gz\)/\1/p' | awk '$0="${prefix}"$0' >> ${s3_files_1}
In my output, I am getting the following:
${prefix}file1.gz
${prefix}file2.gz
The output I am looking for is something like below.
s3://my_bucket/my_folder/file1.gz
s3://my_bucket/my_folder/file2.gz
My issue is with the way the awk command is interpreting the variable ${prefix}. Can anyone please help?
You can use -v to pass shell variable contents to awk:
prefix="s3://my_bucket/my_folder/"
echo "file1.gz" | awk -v myprefix="${prefix//\\/\\\\}" '{ print myprefix $0 }'
Sadly, awk -v is not data safe. This example uses parameter expansion to escape backslashes to avoid them being mangled.
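Applied to the original command, a sketch (assuming the same s3_bucket, s3_folder, i and s3_files_1 variables are already set) could look like:
prefix="s3://${s3_bucket}/${s3_folder}/$(date --date="$i days ago" +"%Y/%m/%d")/"
aws s3 ls "$prefix" \
  | sed -n 's/.*\([0-9][0-9]-h.*gz\)/\1/p' \
  | awk -v myprefix="${prefix//\\/\\\\}" '{ print myprefix $0 }' >> "${s3_files_1}"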

How do I count 1 or more items in comma-separated input in Shell

Here's my issue: I know how to count the files using the following two strategies, but I have a problem with each one.
I am using the '.sh' extension.
First:
count=`echo $2 | awk -F, {'print NF'}`
causes my program to throw an error at me: awk: cannot execute - No such file or directory
Secondly:
count=`echo $2 | tr -cd , | wc -c`
This works if you have multiple values separated by commas; however, it only counts commas, so a single item with no commas gives 0 instead of 1.
Like I said, this was previously working with awk, but for some reason, when I ran it on the physical device instead of the virtual machine, it gave me that error.
Any ideas?
Things I know are NOT the issue:
The version of the shell is the same.
Try count=$(echo ${2} | awk -F, '{print NF}') instead - you have your braces and quotes inside-out.
Although, it seems your bigger problem is that awk appears to not be executable... You might try which awk and ls -l $(which awk) to see what's up with that...
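If awk turns out to be unusable on the device, a pure-bash sketch (assuming $2 is a simple comma-separated list with no commas embedded inside the items) avoids the external commands entirely:
IFS=',' read -r -a items <<< "$2"
count=${#items[@]}
echo "$count"    # prints 1 for a single item, N for N comma-separated items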
