Would it be possible to print the file used to redirect STDERR? - shell

Would it be possible to print the filename used to redirect STDERR, given the sample command below:
command.sh 2>file.err
Code in command.sh:
#!/bin/sh
ls -l non_existing_file.txt
echo "STDERR file is: $stderrFilename" # variable should print file.err

It's a little risky, but you could try parsing AIX's procfiles output. It involves capturing the major and minor numbers of the stderr device, along with the inode number, then looking for the corresponding device, its mountpoint, and then using find to look for the file with the given inode number:
#!/bin/sh
dev=$(procfiles $$ | awk '$1 == "2:" { print substr($4, 5) }')
inode=$(procfiles $$ | awk '$1 == "2:" { print substr($5, 5) }')
major=${dev%%,*}
minor=${dev##*,}
if [ "$major" -eq 0 ]
then
    echo I give up, the major number is zero
    exit 1
fi
for file in /dev/*
do
    [ -b "$file" ] || continue
    if istat "$file" | grep -q "^Major Device ${major}.*Minor Device ${minor}$"
    then
        break
    fi
done
fs=$(mount | awk '$1 == "'"${file}"'" { print $2 }')
stderrFilename=$(find "$fs" -inum "$inode")
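If that sketch sits at the top of command.sh, the rest of the script from the question can then just use the variable; a hypothetical usage (note that find returns the full path, not just file.err):
ls -l non_existing_file.txt
echo "STDERR file is: $stderrFilename"   # prints the full path to file.err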

I made a solution using history. Not sure if there is an easier way to do this (or a proper one).
#!/bin/sh
stderrfname=`history | tail -1 | awk '{ print $3 }' | sed "s/.*>//"`
echo "STDERR file is: $stderrfname"

Related

Unexpected behaviour with awk exit

I have the following code:
process_mem() {
    used=`sed -n -e '/^Cpu(s):/p' $temp_data_file | awk '{print $2}' | sed 's/\%us,//'`
    idle=`sed -n -e '/^Cpu(s):/p' $temp_data_file | awk '{print $5}' | sed 's/\%id,//'`
    awk -v used=$used \
        -v custom_cpu_thres=$custom_cpu_thres \
        '{
            if(used>custom_cpu_thres){
                exit 1
            }else{
                exit 0
            }
        }'
    return=$?
    echo $return
    if [[ $return -eq 1 ]]; then
        echo $server_name"- High CPU Usage (Used:"$used".Idle:"$idle"). "
        out=1
    else
        echo $server_name"- Normal CPU Usage (Used:"$used".Idle:"$idle"). "
    fi
}
while IFS='' read -r line || [[ -n "$line" ]]; do
    server_name=`echo $line | awk '{print $1}'`
    custom_cpu_thres=`echo $line | awk '{print $3}'`
    if [ "$custom_cpu_thres" = "-" ]; then
        custom_cpu_thres=$def_cpu_thres
    fi
    expect -f "$EXPECT_SCRIPT" "$command" >/dev/null 2>&1
    result=$?
    if [[ $result -eq 0 ]]; then
        process_mem
    else
        echo $server_name"- Error in Expect Script. "
        out=1
    fi
    echo $server_name
done < $conf_file
exit $out
The problem is that the read loop should be executed 4 times (once per line read). However, if I write the awk code with an exit inside, the read loop exits after the first iteration.
Why is this happening? In my opinion, an exit inside the awk code shouldn't affect the bash script.
Regards.
I believe the statement you make is false.
You stated:
The problem is that the read loop should be executed 4 times (once per line read). However, if I write the awk code with an exit inside, the read loop exits after the first iteration.
I do not believe that the script exits after the first iteration; rather, it is stuck in the first one. The reason I make this statement is that your awk script is flawed. The way you wrote it is:
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
'{ if(used>custom_cpu_thres){ exit 1 }
else{ exit 0 } }'
The problem here is that Awk did not get an input file. If no input file is provided to Awk, it reads stdin (similar to processing a pipe or keyboard input). Since no information is sent to stdin (unless you pressed a couple of keys and accidentally hit Enter), the script will not move forward and Awk keeps waiting for input.
The standard input shall be used only if no file operands are specified, or if a file operand is '-', or if a progfile option-argument is '-'; see the INPUT FILES section. If the awk program contains no actions and no patterns, but is otherwise a valid awk program, standard input and any file operands shall not be read and awk shall exit with a return status of zero.
source : Awk POSIX Standard
The following bash-line demonstrates the above statement:
$ while true; do awk '{print "woot!"; exit }'; done
Only when you press some keys followed by Enter is the word "woot!" printed on the screen!
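For contrast, the same loop with the work moved into a BEGIN block never waits for input; it spins, printing "woot!" continuously, so interrupt it with Ctrl-C:
$ while true; do awk 'BEGIN { print "woot!"; exit }'; done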
How to solve your problem:
The easiest way to solve your problem using Awk is by making use of the BEGIN block. This block is executed before it reads any input line (or stdin). If you tell Awk to exit in a begin block, it will terminate Awk without reading any input. Thus:
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
'BEGIN{ if(used>custom_cpu_thres){ exit 1 }
else{ exit 0 } }'
or shorter
awk -v used=$used -v custom_cpu_thres=$custom_cpu_thres \
'BEGIN{ exit (used>custom_cpu_thres) }'
However, Awk is a bit of overkill here. A simple bash test would suffice:
[[ "$used" -le "$custom_cpu_thres" ]]
result=$?
or
(( used <= custom_cpu_thres ))
result=$?
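For instance, a minimal sketch of process_mem's test rewritten with the BEGIN form, keeping the variable names from the question (awk compares fractional values such as 3.1 correctly, which matters if the %us figure is not an integer):
awk -v used="$used" -v custom_cpu_thres="$custom_cpu_thres" \
    'BEGIN{ exit (used>custom_cpu_thres) }'
return=$?
if [[ $return -eq 1 ]]; then
    echo $server_name"- High CPU Usage (Used:"$used".Idle:"$idle"). "
    out=1
else
    echo $server_name"- Normal CPU Usage (Used:"$used".Idle:"$idle"). "
fi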

awk command variable NF not working on NULL input

I run my safe shell script to make sure a binary is running. To check whether the binary is running, I use the following command:
pidof prog.bin | awk '{print NF}'
On some systems it gives me 0 when the binary is not running, and on other systems it gives me NULL (nothing).
I can check for NULL using the -z option, but why is the awk command acting this way?
Instead of pidof you can use:
pgrep -qf prog.bin
And check its exit status.
As per man pgrep:
-f Match against full argument lists. The default is to match against process names.
-q Do not write anything to standard output.
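A hedged usage sketch of that exit-status check (it assumes a pgrep that supports -q, as in the man page excerpt above; with a pgrep that lacks -q, redirect its output to /dev/null instead):
if pgrep -qf prog.bin; then
    echo "Running"
else
    echo "Not Running"
fi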
You can use this:
if [ `pidof 'NetworkManager'` ]; then
    echo "Running"
else
    echo "Not Running"
fi
One way to handle this sort of thing (undefined variables) in awk is like this:
echo hi | awk '{print a}'
which prints an empty line, compared with:
echo hi | awk '{print a || 0}'
0
One-liner for if/else:
[[ $(pidof 'NetworkManager') ]] && echo "Running" || echo "Not Running"
Try this:
pidof prog.bin | awk '{ if (NF!=0) print NF }'
Here's some tests with awk and NF:
$ # regular line of input
$ echo foo | awk '{print NF}'
1
$ # empty line
$ echo | awk '{print NF}'
0
$ # a word on input with no newline
$ printf "%s" nonewline | awk '{print NF}'
1
$ # no input, not even a newline
$ printf %s | awk '{print NF}'
# no output from awk
I suspect the pidof case is the last: not even a newline. To force a newline:
echo $(pidof prog) | ...
printf "%s\n" "$(pidof prog)" | ...

how to change a string to a path in ksh

ls -lAtr /data/log.* | tail -1 | awk '{ printf $9 }' > $logfile
echo $logfile
cat $logfile # I want to cat the content of this log file, but this wouldn't work
logfile2=/usr/some/path/text.log
echo $logfile2
cat $logfile2 # This work
I am new to shell programming. I am wondering how to convert logfile into something like logfile2 (did I ask the right question?), so that I can treat it like a file and read from it.
I think you're looking for this (it works in bash as well):
logfile2="$(</usr/some/path/text.log)"
From ksh man page
$(cat file) can be replaced by the equivalent but faster $(<file).
e.g.
> cat text.log
line 1
line 2
> ksh
> logfile2="$(<text.log)"
> echo "$logfile2"
line 1
line 2
Are you trying to store the result of ls|tail|awk in $logFile? If so:
logFile=$(ls -lAtr /data/log.* | tail -1 | awk '{ printf $9 }')
However, you shouldn't parse the output of ls.
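One ls-free sketch under the same assumptions (the /data/log.* glob from the question matches at least one regular file, and the shell's test supports -nt, which both ksh and bash do):
newest=
for f in /data/log.*; do
    [ -f "$f" ] || continue
    # keep whichever file has the newer modification time
    if [ -z "$newest" ] || [ "$f" -nt "$newest" ]; then
        newest=$f
    fi
done
logFile=$newest
cat "$logFile"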

How can I specify a row in awk in a for loop?

I'm using the following awk command:
my_command | awk -F "[[:space:]]{2,}+" 'NR>1 {print $2}' | egrep "^[[:alnum:]]"
which successfully returns my data like this:
fileName1
file Name 1
file Nameone
f i l e Name 1
So as you can see some file names have spaces. This is fine as I'm just trying to echo the file name (nothing special). The problem is calling that specific row within a loop. I'm trying to do it this way:
i=1
for num in $rows
do
    fileName=$(my_command | awk -F "[[:space:]]{2,}+" 'NR==$i {print $2}' | egrep "^[[:alnum:]])"
    echo "$num $fileName"
    $((i++))
done
But my output is always null
I've also tried using awk -v record=$i and then printing $record but I get the below results.
f i l e Name 1
EDIT
Sorry for the confusion: rows is a variable that lists ids like this: 11 12 13
and each one of those ids ties to a file name. My command without doing any parsing looks like this:
id    File Info       OS
11    File Name1      OS1
12    Fi leNa me2     OS2
13    FileName 3      OS3
I can only use the id field to run the command that I need, but I want to use the File Info field to notify the user of the actual file that the command is being executed against.
I think your $i does not expand as expected. You should quote your arguments this way:
fileName=$(my_command | awk -F "[[:space:]]{2,}+" "NR==$i {print \$2}" | egrep "^[[:alnum:]]")
And you forgot the other ).
EDIT
As an update to your requirement, you could just pass the rows to a single awk command instead of a repetitive one inside a loop:
#!/bin/bash
ROWS=(11 12)
function my_command {
# This function just emulates my_command and should be removed later.
echo " id File Info OS
11 File Name1 OS1
12 Fi leNa me2 OS2
13 FileName 3 OS3"
}
awk -- '
BEGIN {
    input = ARGV[1]
    while (getline line < input) {
        sub(/^ +/, "", line)
        split(line, a, / +/)
        for (i = 2; i < ARGC; ++i) {
            if (a[1] == ARGV[i]) {
                printf "%s %s\n", a[1], a[2]
                break
            }
        }
    }
    exit
}
' <(my_command) "${ROWS[@]}"
That awk command could be condensed to one line as:
awk -- 'BEGIN { input = ARGV[1]; while (getline line < input) { sub(/^ +/, "", line); split(line, a, / +/); for (i = 2; i < ARGC; ++i) { if (a[1] == ARGV[i]) { printf "%s %s\n", a[1], a[2]; break; }; }; }; exit; }' <(my_command) "${ROWS[@]}"
Or better yet just use Bash instead as a whole:
#!/bin/bash
shopt -s extglob  # needed for the +( ) pattern in the parameter expansion below
ROWS=(11 12)
while IFS=$' ' read -r LINE; do
    IFS='|' read -ra FIELDS <<< "${LINE// +( )/|}"
    for R in "${ROWS[@]}"; do
        if [[ ${FIELDS[0]} == "$R" ]]; then
            echo "${R} ${FIELDS[1]}"
            break
        fi
    done
done < <(my_command)
It should give an output like:
11 File Name1
12 Fi leNa me2
Shell variables aren't expanded inside single-quoted strings. Use the -v option to set an awk variable to the shell variable:
fileName=$(my_command | awk -v i=$i -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]")
This method avoids having to escape all the $ characters in the awk script, as required in konsolebox's answer.
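For reference, a hedged sketch of the question's loop with that fix applied (it also replaces the stray $((i++)) command with a plain assignment):
i=1
for num in $rows
do
    fileName=$(my_command | awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]")
    echo "$num $fileName"
    i=$((i + 1))
done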
As you already heard, you need to populate an awk variable from your shell variable to be able to use the desired value within the awk script, so this:
awk -F "[[:space:]]{2,}+" 'NR==$i {print $2}' | egrep "^[[:alnum:]]"
should be this:
awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]"
Also, though, you don't need awk AND grep since awk can do anything grep can do, so you can change this part of your script:
awk -v i="$i" -F "[[:space:]]{2,}+" 'NR==i {print $2}' | egrep "^[[:alnum:]]"
to this:
awk -v i="$i" -F "[[:space:]]{2,}+" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}'
and you don't need a + after a numeric range so you can change {2,}+ to just {2,}:
awk -v i="$i" -F "[[:space:]]{2,}" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}'
Most importantly, though, instead of invoking awk once for every invocation of my_command, you can just invoke it once for all of them, i.e. instead of this (assuming this does what you want):
i=1
for num in $rows
do
    fileName=$(my_command | awk -v i="$i" -F "[[:space:]]{2,}" '(NR==i) && ($2~/^[[:alnum:]]/){print $2}')
    echo "$num $fileName"
    $((i++))
done
you can do something more like this:
for num in $rows
do
    my_command
done |
awk -F '[[:space:]]{2,}' '$2~/^[[:alnum:]]/{print NR, $2}'
I say "something like" because you don't tell us what "my_command", "rows" or "num" are so I can't be precise but hopefully you see the pattern. If you give us more info we can provide a better answer.
It's pretty inefficient to rerun my_command (and awk) every time through the loop just to extract one line from its output. Especially when all you're doing is printing out part of each line in order. (I'm assuming that my_command really is exactly the same command and produces the same output every time through your loop.)
If that's the case, this one-liner should do the trick:
paste -d' ' <(printf '%s\n' $rows) <(my_command |
    awk -F '[[:space:]]{2,}+' '($2 ~ /^[[:alnum:]]/) {print $2}')

Dynamic Patch Counter for Shell Script

I am developing a script on a Solaris 10 SPARC machine to calculate how many patches got installed successfully during a patch delivery. I would like to display to the user:
(X) of 33 patches were successfully installed
I would like my script to output dynamically, replacing the "X" so the user knows there is activity occurring; sort of like a counter. I am able to show counts, but only on a new line. How can I make the number in the brackets update dynamically as the script performs its checks? Don't worry about the "pass/fail" ... I am mainly concerned with making my output update in the brackets.
for x in `cat ${PATCHLIST}`
do
    if ( showrev -p $x | grep $x > /dev/null 2>&1 ); then
        touch /tmp/patchcheck/* | echo "pass" >> /tmp/patchcheck/$x
        wc /tmp/patchcheck/* | tail -1 | awk '{print $1}'
    else
        touch /tmp/patchcheck/* | echo "fail" >> /tmp/patchcheck/$x
        wc /tmp/patchcheck/* | tail -1 | awk '{print $1}'
    fi
done
The usual way to do that is to emit a \r carriage return (CR) at some point and to omit the \n newline or line feed (LF) at the end of the line. Since you're using awk, you can try:
awk '{printf "\r%s", $1} END {print ""}'
For most lines, it outputs a carriage return and the data in field 1 (without a newline at the end). At the end of the input, it prints an empty string followed by a newline.
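To see the effect in isolation, here is a tiny hypothetical demo of the carriage-return trick (the total of 5 and the sleep are only for illustration):
for i in 1 2 3 4 5
do
    printf '\r(%s) of 5 patches were successfully installed' "$i"
    sleep 1
done
printf '\n'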
One other possibility is that you should place the awk script outside your for loop:
for x in `cat ${PATCHLIST}`
do
    if ( showrev -p $x | grep $x > /dev/null 2>&1 ); then
        touch /tmp/patchcheck/* | echo "pass" >> /tmp/patchcheck/$x
        wc /tmp/patchcheck/* | tail -1
    else
        touch /tmp/patchcheck/* | echo "fail" >> /tmp/patchcheck/$x
        wc /tmp/patchcheck/* | tail -1
    fi
done | awk '{ printf "\r%s", $1} END { print "" }'
I'm not sure but I think you can apply similar streamlining to the rest of the repetitious code in the script:
for x in `cat ${PATCHLIST}`
do
    if showrev -p $x | grep -s $x
    then echo "pass"
    else echo "fail"
    fi >> /tmp/patchcheck/$x
    wc /tmp/patchcheck/* | tail -1
done | awk '{ printf "\r%s", $1} END { print "" }'
This eliminates the touch, which doesn't seem to do much, especially since its empty output is piped to echo, which ignores its standard input anyway. It eliminates the sub-shell in the if line, and it uses the -s option of grep to keep it quiet.
I'm still a bit dubious about the wc line. I think you're looking to count the number of files, in effect, since each file should contain one line (pass or fail), unless you listed some patch twice in the file identified by ${PATCHLIST}. In which case, I'd probably use:
for x in `cat ${PATCHLIST}`
do
    if showrev -p $x | grep -s $x
    then echo "pass"
    else echo "fail"
    fi >> /tmp/patchcheck/$x
    ls /tmp/patchcheck | wc -l
done | awk '{ printf "\r%s", $1} END { print "" }'
This lists the files in /tmp/patchcheck and counts the number of lines output. It means you could simply print $0 in the awk script, since $0 and $1 are the same. To the extent efficiency matters (not a lot), this is more efficient because ls only scans a directory, rather than having wc open each file. More importantly, it is a more accurate description of what you are trying to do. If you later want to count the passes, you can use:
for x in `cat ${PATCHLIST}`
do
    if showrev -p $x | grep -s $x
    then echo "pass"
    else echo "fail"
    fi >> /tmp/patchcheck/$x
    grep '^pass$' /tmp/patchcheck/* | wc -l
done | awk '{ printf "\r%s", $1} END { print "" }'
Of course, this goes back to reading each file, but you're getting more refined information out of it now (and that's the penalty for the more refined information).
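Tying this back to the message format from the question, a hedged variant of that last loop feeds the running pass count into the final awk along with an assumed total of 33:
for x in `cat ${PATCHLIST}`
do
    if showrev -p $x | grep -s $x
    then echo "pass"
    else echo "fail"
    fi >> /tmp/patchcheck/$x
    grep '^pass$' /tmp/patchcheck/* | wc -l
done | awk -v total=33 '{ printf "\r(%s) of %s patches were successfully installed", $1, total } END { print "" }'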
Here is how I got my patch installation script working the way I wanted:
while read pkgline
do
    patchadd -d ${pkgline} >> /var/log/patch_install.log 2>&1
    # Create audit file for progress indicator
    for x in ${pkgline}
    do
        if ( showrev -p ${x} | grep -i ${x} > /dev/null 2>&1 ); then
            echo "${x}" >> /tmp/pass
        else
            echo "${x}" >> /tmp/fail
        fi
    done
    # Progress indicator
    for y in `wc -l /tmp/pass | awk '{print $1}'`
    do
        printf "\r${y} out of `wc -l /patchdir/master | awk '{print $1}'` packages installed for `hostname`. Last patch installed: (${pkgline})"
    done
done < /patchdir/master
