How to print the result of a system command in awk - bash

I have the following single line in my bash script:
echo "foo" | awk -F"=" '{char=system("echo $1 | cut -c1");}{print "this is the result: "$char;}' >> output.txt
I want to print the first letter of "foo" using awk, such that I would get:
this is the result: f
in my output file, but instead, I get:
this is the result: foo
What am I doing wrong?
Thanks

No, that is not how the system command works inside awk.
What's happening in the OP's code:
You are passing a shell command to system, which is fine in some cases, but here it would have to be written as system("echo " $0 " | cut -c1") to get the first character, and you do not need a variable to save its result and print it from awk.
You are trying to save the result in a variable, but the variable does not receive the command's output; it receives the command's exit status. system in awk does not behave like command substitution in the shell.
So your variable char holds 0 (the success status from the system command), and when you print $char awk prints $0, i.e. the whole line (in awk, print $0 means print the whole line).
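As an aside, if you really did want the output of a shell command inside awk, you could read it back through getline instead of system; a minimal sketch along the lines of the OP's pipeline:
echo "foo" | awk '{cmd = "echo " $0 " | cut -c1"; cmd | getline char; close(cmd); print "this is the result: " char}'
this is the result: f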
You could do this in a single awk by doing:
echo "foo" | awk '{print substr($0,1,1)}'
OR with GNU awk specifically:
echo "foo" | awk 'BEGIN{FS=""} {print $1}'

You're not using much of awk; the same can be done with printf:
$ echo "foo" | xargs printf "this is the result: %.1s\n"
this is the result: f
or, directly
$ printf "this is the result: %.1s\n" foo
this is the result: f

Related

Assign bash value from value in specific line

I have a file that looks like:
>ref_frame=1
TPGIRYQYNVLPQGWKGSPAIFQSSMTKILEPFRKQNPDIVIYQYMDDLYVGSD
>ref_frame=2
HQGLDISTMCFHRDGKDHQQYSKVA*QKS*SLLENKIQT*LSINTWMICM*DLT
>ref_frame=3
TRD*ISVQCASTGMERITSNIPK*HDKNLRAF*KTKSRHSYLSIHG*FVCRI*
>test_3_2960_3_frame=1
TPGIRYQYNVLPQGWKGSPAIFQSSMTKILEPSRKQNPDIVIYQYMDDLYVGSD
I want to assign a bash variable so that echo $variable gives test_3_2960
The line/row that I want to assign the variable to will always be line 7. How can I accomplish this using bash?
So far I have:
variable=`cat file.txt | awk 'NR==7'`
echo $variable then gives >test_3_2960_3_frame=1 instead of test_3_2960.
Using sed
$ variable=$(sed -En '7s/>(([^_]*_){2}[0-9]+).*/\1/p' input_file)
$ echo "$variable"
test_3_2960
No pipes needed here...
$: variable=$(awk -F'[>_]' 'NR==7{ OFS="_"; print $2, $3, $4; exit; }' file)
$: echo $variable
test_3_2960
-F is using either > or _ as field separators, so your data starts in field 2.
OFS="_" sets the Output Field Separator, but you could also just use "_" instead of commas.
exit keeps it from wasting time bothering to read beyond line 7.
If you wish to continue with awk
$ variable=$(awk 'NR==7' file.txt | awk -F "[>_]" '{print $2"_"$3"_"$4}')
$ echo $variable
test_3_2960
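If you prefer plain bash with no awk or sed, a small sketch along the same lines, assuming line 7 always has the form >name_N_NNNN_..._frame=1 and bash 4+ for mapfile:
mapfile -t lines < file.txt
line=${lines[6]#>}                  # 7th line, leading ">" stripped
IFS=_ read -r a b c _ <<< "$line"   # split on "_"
variable="${a}_${b}_${c}"
echo "$variable"                    # test_3_2960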

Running awk command and print $1 with script that get arguments

I have my script (called test.sh) as follow:
#!/bin/bash
for i in "cat myfile | awk -F',' '{print $1}'"; do
.....
My problem is that my script receives arguments (./test.sh arg1 arg2) and '{print $1}' takes the script argument (arg1) instead of the awk result. How can I solve this?
Your original problem is that you wrote the $1 between double quotes:
"cat myfile | awk -F',' '{print $1}'"
Bash still substitutes variables inside a double-quoted string, even when they sit between single quotes nested within the double quotes. That is why $1 is replaced by arg1.
The second problem is that you want to execute the command
cat myfile | awk -F',' '{print $1}'
but for that you need command substitution, $( command ) or `command`; the latter is not advised, however, because nesting backticks is difficult.
So, your for-loop should read something like:
#!/usr/bin/env bash
for i in $(awk -F ',' '{print $1}' myfile); do
...
done
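Note that looping over $( ... ) is subject to word splitting and globbing; if each line should be handled whole, a while read loop is the safer sketch (assuming the same comma-separated myfile):
#!/usr/bin/env bash
# print the first comma-separated field of every line in myfile
while IFS=',' read -r first _; do
    echo "$first"
done < myfile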

AWK -F with print all but last record

/Home/in/test_file.txt
echo /Home/in/test_file.txt | awk -F'/' '{ print $2,$3 }'
Gives the result as:
Home in
But I need /Home/in/ as the result. I have to get everything except test_file.txt.
How to achieve this?
$ echo '/Home/in/test_file.txt' | awk '{sub("/[^/]+$","")} 1'
/Home/in
$ echo '/Home/in/test_file.txt' | awk '{sub("[^/]+$","")} 1'
/Home/in/
$ echo '/Home/in/test_file.txt' | sed 's:/[^/]*$::'
/Home/in
$ echo '/Home/in/test_file.txt' | sed 's:[^/]*$::'
/Home/in/
$ dirname '/Home/in/test_file.txt'
/Home/in
Your attempt awk -F'/' '{ print $2,$3 }' didn't do what you wanted as -F'/' is telling awk to split the input into fields at every / and then print $2,$3 is telling awk to print the 2nd and 3rd fields separated by a blank char (the default value for OFS). You could do:
$ echo '/Home/in/test_file.txt' | awk 'BEGIN{FS=OFS="/"} { print "",$2,$3,"" }'
/Home/in/
to get the expected output, but it'd be the wrong approach since it's removing the field you don't want AND removing the input separators AND then adding new output separators which happen to have the same value as the input separators, rather than simply removing the field you don't want like the other solutions above do.
echo /Home/in/test_file.txt | awk -F'/[^/]*$' '{ print $1 }'
.. will print everything before the last slash (no trailing slash).
There are several ways to achieve this:
Using dirname:
$ dirname /home/in/test_file.txt
/home/in
Using Shell substitution:
$ var="/home/in/test_file.txt"
$ echo "${var%/*}"
/home/in
Using sed: (See Ed Morton)
Using AWK:
$ echo "/home/in/test_file.txt" | awk -F'/' '{OFS=FS;$NF=""}1'
/home/in/
Remark: all these work since you can't have a filename with a forward slash (Is it possible to use "/" in a filename?)
Note: all but dirname will fail if you just have a single file name without a path: dirname foo returns . while all the others return foo.
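For example:
$ dirname foo
.
$ var=foo; echo "${var%/*}"
foo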
awk behaves as it should.
When you define slash / as a separator, the fields in your expression become the content between the separators.
If you need the separator to be printed as well, you need to do it explicitly, like:
echo /Home/in/test_file.txt | awk -F'/' '{ printf "%s/%s/",$2,$3 }'
Replace your last field with an empty string and put the slash back in as the (built-in) Output Field Separator (OFS):
echo /Home/in/test_file.txt | awk -F'/' -vOFS='/' '{$NF="";print}'
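For the sample path above this prints:
/Home/in/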

Awk: Drop last record separator in one-liner

I have a simple command (part of a bash script) that I'm piping through awk but can't seem to suppress the final record separator without then piping to sed. (Yes, I have many choices and mine is sed.) Is there a simpler way without needing the last pipe?
dolls=$(egrep -o 'alpha|echo|november|sierra|victor|whiskey' /etc/passwd \
| uniq | awk '{IRS="\n"; ORS=","; print}' | sed s/,$//);
Without the sed, this produces output like echo,sierra,victor, and I'm just trying to drop the last comma.
You don't need awk, try:
egrep -o ... | uniq | paste -d, -s
Here is another example:
kent$ echo "a
b
c"|paste -d, -s
a,b,c
Also, I think your chained command could be simplified; awk can do all of this in a one-liner.
Instead of egrep, uniq, awk, sed etc, all this can be done in one single awk command:
awk -F":" '!($1 in a){l=l $1 ","; a[$1]} END{sub(/,$/, "", l); print l}' /etc/password
Here is a small and quite straightforward one-liner in awk that suppresses the final record separator:
echo -e "alpha\necho\nnovember" | awk 'y {print s} {s=$0;y=1} END {ORS=""; print s}' ORS=","
Gives:
alpha,echo,november
So, your example becomes:
dolls=$(egrep -o 'alpha|echo|november|sierra|victor|whiskey' /etc/passwd | uniq | awk 'y {print s} {s=$0;y=1} END {ORS=""; print s}' ORS=",");
The benefit of using awk over paste or tr is that this also works with a multi-character ORS.
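For instance, with a two-character separator:
echo -e "alpha\necho\nnovember" | awk 'y {print s} {s=$0;y=1} END {ORS=""; print s}' ORS=", "
alpha, echo, november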
Since you tagged it bash, here is one way of doing it:
#!/bin/bash
# Read the /etc/passwd file in to an array called names
while IFS=':' read -r name _; do
names+=("$name");
done < /etc/passwd
# Assign the content of the array to a variable
dolls=$( IFS=, ; echo "${names[*]}")
# Display the value of the variable
echo "$dolls"
echo "a
b
c" |
mawk 'NF-= _==$NF' FS='\n' OFS=, RS=
a,b,c
Here RS= (paragraph mode) reads the whole input as one record, FS='\n' makes each line a field, and assigning to NF makes mawk rebuild the record joined with OFS=, before the default print; the _==$NF term would only drop a trailing empty field, if one existed.

Calling Awk in a shell script

I have this command which executes correctly if run directly on the terminal.
awk '/word/ {print NR}' file.txt | head -n 1
The purpose is to find the line number of the line on which the word 'word' first appears in file.txt.
But when I put it in a script file, it doesn't seem to work.
#! /bin/sh
if [ $# -ne 2 ]
then
echo "Usage: $0 <word> <filename>"
exit 1
fi
awk '/$1/ {print NR}' $2 | head -n 1
So what did I do wrong?
Thanks,
Replace the single quotes with double quotes so that the $1 is evaluated by the shell:
awk "/$1/ {print NR}" $2 | head -n 1
In the shell, single-quotes prevent parameter-substitution; so if your script is invoked like this:
script.sh word
then you want to run this AWK program:
/word/ {print NR}
but you're actually running this one:
/$1/ {print NR}
and needless to say, AWK has no idea what $1 is supposed to be.
To fix this, change your single-quotes to double-quotes:
awk "/$1/ {print NR}" $2 | head -n 1
so that the shell will substitute word for $1.
You should use AWK's variable passing feature:
awk -v patt="$1" '$0 ~ patt {print NR; exit}' "$2"
The exit makes the head -1 unnecessary.
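Putting it together, the corrected script might look like this (a sketch using the -v form above):
#!/bin/sh
if [ $# -ne 2 ]
then
    echo "Usage: $0 <word> <filename>"
    exit 1
fi
awk -v patt="$1" '$0 ~ patt {print NR; exit}' "$2"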
You could also pass the value as a variable to awk:
awk -v varA="$1" '{if(match($0,varA)>0){print NR;}}' "$2" | head -n 1
This seems more cumbersome than the above, but it illustrates passing variables.
