Return two variables in awk - bash

At the moment, here is what I'm doing:
ret=$(ls -la | awk '{print $3 " " $9}')
usr=$(echo $ret | awk '{print $1}')
fil=$(echo $ret | awk '{print $2}')
The problem is that I'm not actually running ls; I'm running a command that takes time, so you can understand the logic.
Is there a way I can have the return value set two external variables, something such as:
ls -la | awk -r usr=x -r fil=y '{x=$3; y=$9}'
This way the command will be run only once and I can keep it down to one line.

It's not pretty, but if you really need to do this in one line you can use awk/bash's advanced meta-programming capabilities :)
eval $(ls -la | awk '{usr = $3 " " usr;fil = $9 " " fil} END{print "usr=\""usr"\";fil=\""fil"\""}')
To print:
echo -e $usr
echo -e $fil
Personally, I'd stick with what you have - it's much more readable, and the performance overhead compared to the one-liner is tiny:
$time <three line approach>
real 0m0.017s
user 0m0.006s
sys 0m0.011s
$time <one line approach>
real 0m0.009s
user 0m0.004s
sys 0m0.007s

A workaround using read
usr=""
fil=""
while read u f; do usr="$usr\n$u"; fil="$fil\n$f"; done < <(ls -la | awk '{print $3 " " $9}')
For performance you could use <<< instead, but avoid it if the returned text is large:
while read u f; do usr="$usr\n$u"; fil="$fil\n$f"; done <<< $(ls -la | awk '{print $3 " " $9}')
A more portable way, inspired by @WilliamPursell's answer:
$ usr=""
$ fil=""
$ while read u f; do usr="$usr\n$u"; fil="$fil\n$f"; done << EOF
> $(ls -la | awk '{print $3 " " $9}')
> EOF
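Because the \n sequences are stored literally in $usr and $fil, print the accumulated lists with echo -e, as in the first answer:
echo -e "$usr"
echo -e "$fil"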

What you want to do is capture the output of ls or any other command and then process it later.
ls=$(ls -l)
first=$(echo $ls | awk '{print $1}')
second=$(echo $ls | awk '{print $2}')
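If the captured output has several lines and you want every line processed, quote the variable when echoing it so the newlines survive (a general quoting point, not spelled out in the answer above):
first=$(echo "$ls" | awk '{print $1}')
second=$(echo "$ls" | awk '{print $2}')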

Using a bash v4 associative array:
unset FILES
declare -A FILES
while read -r fil usr; do FILES["$fil"]=$usr; done < <(ls -la | awk '{print $9 " " $3}')
Print the list of owner & file:
for fil in "${!FILES[@]}"
do
    usr=${FILES["$fil"]}
    echo -e "$usr" "\t" "$fil"
done
My apologies, I cannot test this on my computer because my bash v3.2 does not support associative arrays :-(.
Please report any issues...

The accepted answer uses process substitution, which is a bashism that only works on certain platforms. A more portable solution is to use a heredoc:
read u f << EOF
$( ls ... )
EOF
It is tempting to try:
ls ... | read u f
but the read then runs in a subshell. A common technique is:
ls ... | { read u f; ...; }  # use $u and $f inside the braces
but to make the variables available in the remainder of the script, the interpolated heredoc is the most portable approach. Note that it requires the shell to read all of the output of the program into memory, so is not suitable if the output is expected to be large or if the process is long running.
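Applied to the usr/fil example from the question, a sketch of this approach might look like the following (NR==2 just picks one data line for illustration; combine it with a while read loop, as in the earlier answers, if you need every line):
read usr fil << EOF
$(ls -la | awk 'NR==2 {print $3, $9}')
EOF
echo "$usr"
echo "$fil"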

You could use a bash array or the positional parameters as a temporary holding place:
ret_ary=( $(command | awk '{print $3, $9}') )
usr=${ret_ary[0]}
fil=${ret_ary[1]}
set -- $(command | awk '{print $3, $9}')
usr=$1
fil=$2
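One caveat with the set -- form: it overwrites the script's own positional parameters, so save and restore them if you still need them later (a generic precaution, not part of the answer above):
old_args=("$@")
set -- $(command | awk '{print $3, $9}')
usr=$1
fil=$2
set -- "${old_args[@]}"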

Related

How to get the second word of a string in shell?

I want to get the size of the directory's content. If I use the command line, I can get it like this:
ls -l | head -n 1 | awk '{ print $2 }'
So the output is the size of the directory's content:
16816
But I want to do this in a bash script:
x="ls -l DBw | head -n 1 | awk '{ print $2 }'"
while sleep 1; do
y=$(eval "$x");
echo $y
done
But the output of this script is the full line:
total 16816
Why is this happening and how can I get just the second word?
x="ls -l DBw | head -n 1 | awk '{ print $2 }'"
It's happening because $2 is inside double quotes, so the shell expands it immediately when the assignment happens, before awk ever sees it. You could escape it with \$2, but better yet:
Don't store code in variables. Use a function.
x() {
ls -l DBw | head -n 1 | awk '{ print $2 }'
}
Then you can call x many times.
while sleep 1; do
x
done
That said, it's better not to parse the output of ls in the first place. Use du to compute the size of a directory:
$ du -sh /dir
1.3M /dir
$ du -sh /dir | cut -f1
1.3M
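If the goal is to print the size repeatedly, the du form drops straight into the original loop (assuming the directory is DBw as in the question; note that du -s reports disk usage in blocks, which is not the same number that ls -l shows in its total line):
while sleep 1; do
    du -s DBw | cut -f1
done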

Need to generate files based on the value available in a variable in shell

In my script I have a variable $var which holds the value "00135 00136 00137". I want to generate three files based on the values available in $var - if possible without using a loop.
For example, I need to touch files with these names:
test.00135.txt
test.00136.txt
test.00137.txt
Avoiding a while loop is possible with xargs.
First split the variable into lines, use the string num as a placeholder, and touch the files:
var="000135 00136 00137 00138 00139"
echo "${var}" | tr " " "\n" | xargs -I num touch test.num.txt
Edit:
Avoid tr with:
echo -n "$var" | xargs -d' ' -n1 -Inum touch test.num.txt
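As a variation on the same loop-free idea (not from the original answer), printf can do the per-word repetition and xargs only has to run touch:
printf 'test.%s.txt\n' $var | xargs touch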
The awk utility makes processing columnar data quite simple:
var="00135 00136 00137"
var1=$(echo "$var" | awk '{print $1}')
var2=$(echo "$var" | awk '{print $2}')
var3=$(echo "$var" | awk '{print $3}')
touch "test.${var1}.txt"
touch "test.${var2}.txt"
touch "test.${var3}.txt"

Save output of awk to two different variables

Okay. I am kind of lost and google search isn't helping me much.
I have a command like:
filesize_filename=$(echo $line | awk ' ''{print $5":"$9}')
echo $filesize_filename
1024:/home/test
Now this saves the two awk'ed items into one variable. I'd like to achieve something like this:
filesize,filename=$(echo $line | awk ' ''{print $5":"$9}')
So I can access them individually like
echo $filesize
1024
echo $filename
/home/test
How do I achieve this?
Thanks.
Populate a shell array with the awk output and then do whatever you like with it:
$ fileInfo=( $(echo "foo 1024 bar /home/test" | awk '{print $2, $4}') )
$ echo "${fileInfo[0]}"
1024
$ echo "${fileInfo[1]}"
/home/test
If the file name can contain spaces then you'll have to adjust the FS and OFS in awk and the IFS in shell appropriately.
You may not need awk at all of course:
$ line="foo 1024 bar /home/test"
$ fileInfo=( $line )
$ echo "${fileInfo[1]}"
1024
$ echo "${fileInfo[3]}"
/home/test
but beware of globbing chars in $line matching on local file names in that last case. I expect there's a more robust way to populate a shell array from a shell variable but off the top of my head I can't think of it.
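One such option (not in the original answer) is read -a, which splits on IFS without performing any glob expansion:
IFS=' ' read -ra fileInfo <<< "$line"
echo "${fileInfo[1]}"   # 1024
echo "${fileInfo[3]}"   # /home/test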
Use bash's read for that:
read size name <<< "$(awk '{print $5, $9}' <<< "$line")"
# Now you can output them separately
echo "$size"
echo "$name"
You can use process substitution on awk's output:
read filesize filename < <(echo "$line" | awk '{print $5,$9}')
You can totally avoid awk by doing:
read _ _ _ _ filesize _ _ _ filename _ <<< "$line"

Echo without a newline character results in a syntax error

I am writing a shell script and I would like to have this code
echo $(awk '{print $1}' /proc/uptime) / 3600 | bc
without the newline character at the end.
I wanted to write it using echo -n, but this code
echo -n $(awk '{print $1}' /proc/uptime) / 3600 | bc
results in a syntax error:
(standard_in) 1: syntax error
Can you help me with this?
Thank you very much!
echo $(awk '{print $1}' /proc/uptime) / 3600 | bc | tr -d "\n"
Alternatives:
echo -n $(($(cut -d . -f 1 /proc/uptime)/3600))
mapfile A </proc/uptime; echo -n $((${A%%.*}/3600))
A solution using echo -n:
echo -n $(echo $(awk '{print $1}' /proc/uptime) / 3600 | bc)
In general, if foo produces a line of output, you can print the same output without a newline using echo -n $(foo), even if foo is complicated.
A more straightforward solution using pure awk (since awk does arithmetic and output formatting, there's not much point in using both awk and bc):
awk '{printf("%d", $1 / 3600)}' /proc/uptime

Bash awk one-liner not printing

Expecting this to print out abc - but I get nothing, every time, nothing.
echo abc=xyz | g="$(awk -F "=" '{print $1}')" | echo $g
A pipeline isn't a set of separate assignments: each stage of a pipeline runs in its own subshell, so the g assigned in the middle stage is gone by the time the final echo $g runs. However, you could rewrite your current code as follows:
result=$(
echo 'abc=xyz' | awk -F '=' '{print $1}'
)
echo "$result"
However, a more Bash-centric solution without intermediate assignments could take advantage of a here-string. For example:
awk -F '=' '{print $1}' <<< 'abc=xyz'
Other solutions are possible, too, but this should be enough to get you started in the right direction.
