Why does this pipeline behave differently in ksh93 compared to bash?

I had an issue with some ksh93 code. As it was very complex, I started reducing it to an example that reproduces the same issue. I ended up with this:
set -o pipefail;
{
echo "progress" 1>&3 | false
} 3>&1 | cat | \
while read pv_output; do
echo "meanwhile ... we got "
echo $pv_output | cat
done
echo $?
When I run this code with ksh93 it outputs "0"; when I run it with bash it outputs "1".
# echo "ksh93";ksh93 ./x1.sh ;echo "bash"; bash ./x1.sh
ksh93
meanwhile ... we got
progress
0
bash
meanwhile ... we got
progress
1
However, if I start fiddling with the code and remove the first cat, both shells return "1":
set -o pipefail;
{
echo "progress" 1>&3 | false
} 3>&1 | \
while read pv_output; do
echo "meanwhile ... we got "
echo $pv_output | cat
done
echo $?
Or, if I leave in the first cat but remove the second one from inside the while loop, they behave the same way again:
set -o pipefail;
{
echo "progress" 1>&3 | false
} 3>&1 | cat | \
while read pv_output; do
echo "meanwhile ... we got "
echo $pv_output
done
echo $?
Now, for this example I used cat for simplicity's sake. In real life the first cat is actually an awk processing the output of a complicated command, and the second cat is actually a sed. I mention these so that it is clear that the cat command itself is not the culprit.
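Not an answer as such, but a useful diagnostic for this kind of discrepancy: bash (unlike ksh93) records the per-stage exit statuses of the last pipeline in its PIPESTATUS array. A minimal sketch, reusing the pipeline from above:
set -o pipefail;
{
echo "progress" 1>&3 | false
} 3>&1 | cat | \
while read pv_output; do
echo "meanwhile ... we got "
echo $pv_output | cat
done
# bash-only: PIPESTATUS holds one exit status per stage of the last
# foreground pipeline; the brace group reports false's failure, so
# bash prints "stages: 1 0 0 overall: 1" here with pipefail set
echo "stages: ${PIPESTATUS[*]} overall: $?"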

Related

bash optional command in variable

I have this code:
L12(){
echo -e "/tftpboot/log/archive/L12/*/*$sn*L12*.log /tftpboot/log/diag/*$sn*L12*.log"
command="| grep -v hdd"
}
getlog(){
echo $(ls -ltr $(${1}) 2>/dev/null `${command}` | tail -1)
}
However, $command does not seem to insert | grep -v hdd correctly.
I need $command to be either empty or | grep.
Is there a simple solution to my issue, or should I go for a different approach?
Edit: there may be another problem in there. I am loading a few "modules".
EVAL.sh
ev(){
case "${1}" in
*FAIL*) paint $red "FAIL";;
*PASS*) paint $green "PASS";;
*)echo;;
esac
result=${1}
}
rackinfo.sh (the "main script")
#! /bin/bash
#set -x
n=0
for src in $(ls modules/)
do
source modules/$src && ((n++))
## debugging
# source src/$src || ((n++)) || echo "there may be an issue in $src"
done
## debugging
# x=($n - $(ls | grep src | wc -l))
# echo -e "$x plugin(s) failed to load correctly"
# echo -e "loaded $n modules"
########################################################################
command=cat
tests=("L12" "AL" "BI" "L12-3")
while read sn
do
paint $blue "$sn\t"
for test in ${tests[@]}
do
log="$(ev "$(getlog ${test})")"
if [[ -z ${log} ]]
then
paint $cyan "${test} "; paint $red "!LOG "
else
paint $cyan "${test} ";echo -ne "$log "
fi
done
echo
done <$1
The results I get still contain "hdd" for L12().
Set command to cat as a default.
Also, it's best to use an array for commands with arguments, in case any of the arguments contains multiple words.
There's rarely a reason to write echo $(command). That's essentially the same as just writing command.
#default command does nothing
command=(cat)
L12(){
echo -e "/tftpboot/log/archive/L12/*/*$sn*L12*.log /tftpboot/log/diag/*$sn*L12*.log"
command=(grep -v hdd)
}
getlog(){
ls -ltr $(${1}) 2>/dev/null | "${command[@]}" | tail -1
}
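To see why the original string form cannot work: after expansion, the | inside $command is just a literal argument, not a pipe operator. A minimal sketch contrasting the two forms (throwaway input, no real log files involved):
# string form: "|", "grep", "-v" and "hdd" become plain arguments
command="| grep -v hdd"
printf '%s\n' foo hdd bar $command
# array form: the filter actually runs, printing only foo and bar
command=(grep -v hdd)
printf '%s\n' foo hdd bar | "${command[@]}"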

Ignoring all but the (multi-line) results of the last query sent to a program

I have an executable that accepts queries from stdin and responds to them, reading until EOF. Additionally I have an input file and a special command, let's call those EXEC, FILE and CMD respectively.
What I need to do is:
Pass FILE to EXEC as input.
Disregard all the output corresponding to commands read from FILE (send it to /dev/null).
Pass CMD as the last command.
Fetch output for the last command and save it in a variable.
EXEC's output can be multiline for each query.
I know how to pass FILE + CMD into the EXEC:
echo ${CMD} | cat ${FILE} - | ${EXEC}
but I have no idea how to fetch only output resulting from CMD.
Is there a magical one-liner that does this?
After looking around I've found the following partial solution:
mkfifo mypipe
(tail -f mypipe) | ${EXEC} &
cat ${FILE} | while read line; do
echo ${line} > mypipe
done
echo ${CMD} > mypipe
This allows me to redirect my input, but now the output gets printed to screen. I want to ignore all the output produced by EXEC in the while loop and get only what it prints for the last line.
I tried what first came into my mind, which is:
(tail -f mypipe) | ${EXEC} > somefile &
But it didn't work, the file was empty.
This is race-prone -- I'd suggest putting in a delay after the kill, or using an explicit sigil to determine when it's been received. That said:
#!/usr/bin/env bash
# route FD 4 to your output routine
exec 4> >(
output=; trap 'output=1' USR1
while IFS= read -r line; do
[[ $output ]] && printf '%s\n' "$line"
done
); out_pid=$!
# Capture the PID for the process substitution above; note that this requires a very
# new version of bash (4.4?)
[[ $out_pid ]] || { echo "ERROR: Your bash version is too old" >&2; exit 1; }
# Run your program in another process substitution, and close the parent's handle on FD 4
exec 3> >("$EXEC" >&4) 4>&-
# cat your file to FD 3...
cat "$file" >&3
# UGLY HACK: Wait to let your program finish flushing output from those commands
sleep 0.1
# notify the subshell writing output to disk that the ignored input is done...
kill -USR1 "$out_pid"
# UGLY HACK: Wait to let the subprocess actually receive the signal and set output=1
sleep 0.1
# ...and then write the command for which you actually want content logged.
echo "command" >&3
In validating this answer, I'm doing the following:
EXEC=stub_function
stub_function() {
local count line
count=0
while IFS= read -r line; do
(( ++count ))
printf '%s: %s\n' "$count" "$line"
done
}
cat >file <<EOF
do-not-log-my-output-1
do-not-log-my-output-2
do-not-log-my-output-3
EOF
file=file
export -f stub_function
export file EXEC
Output is only:
4: command
You could pipe it into a sed:
var=$(YOUR COMMAND | sed '$!d')
This will put only the last line into the variable.
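For reference, $!d deletes every line except the last ($ addresses the last line, ! negates the address):
printf '%s\n' one two three | sed '$!d'
# prints: three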
I think that your program EXEC does something special (opens a connection or remembers state). When that is not the case, you can use:
${EXEC} < ${FILE} > /dev/null
myvar=$(echo ${CMD} | ${EXEC})
Or with normal commands:
# Do not use (printf "==%s==\n" 1 2 3 ; printf "oo%soo\n" 4 5 6) | cat
printf "==%s==\n" 1 2 3 | cat > /dev/null
myvar=$(printf "oo%soo\n" 4 5 6 | cat)
When you need to give all input to one process, perhaps you can think of a marker that you can filter on:
(printf "==%s==\n" 1 2 3 ; printf "%s\n" "marker"; printf "oo%soo\n" 4 5 6) | cat | sed '1,/marker/ d'
You should examine your EXEC to see what could serve as a marker. When it is running SQL, you might use something like
(cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d'
and capture it in a variable with
myvar=$( (cat ${FILE}; echo 'select "DamonMarker" from dual;' ; echo ${CMD} ) |
${EXEC} | sed '1,/DamonMarker/ d' )

Why is my exit status valid in the command line but not within a bash script? (Bash)

There are a few layers here, so bear with me.
My docker-container ssh -c"echo 'YAY!'; exit 25;" command executes echo 'YAY!'; exit 25; in my docker container. It returns:
YAY
error: message=YAY!
, code=25
I need to know if the command within the container was successful, so I append the following to the command:
docker-container ssh -c"echo 'YAY!'; exit 25;" >&1 2>/tmp/stderr; cat /tmp/stderr | grep 'code=' | cut -d'=' -f2 | { read exitStatus; echo $exitStatus; }
This sends stderr to /tmp/stderr and, with the echo $exitStatus, returns:
YAY!
25
So, this is exactly what I want: the $exitStatus saved to a variable. My problem is that I am placing this into a bash script (a Git pre-commit hook), and when this exact code is executed, the exit status is null.
Here is my bash script:
# .git/hooks/pre-commit
if [ -z ${DOCKER_MOUNT+x} ];
then
docker-container ssh -c"echo 'YAY!'; exit 25;" >&1 2>/tmp/stderr; cat /tmp/stderr | grep 'code=' | cut -d'=' -f2 | { read exitStatus; echo $exitStatus; }
exit $exitStatus;
else
echo "Container detected!"
fi;
That's because you're setting the variable in a pipeline. Each command in the pipeline is run in a subshell, and when the subshell exits the variable is no longer available.
bash allows you to run the pipeline's last command in the current shell (the lastpipe option), but you also have to turn off job control.
An example
# default bash
$ echo foo | { read x; echo x=$x; } ; echo x=$x
x=foo
x=
# with "lastpipe" configuration
$ set +m; shopt -s lastpipe
$ echo foo | { read x; echo x=$x; } ; echo x=$x
x=foo
x=foo
Add set +m; shopt -s lastpipe to your script and you should be good.
And as Charles comments, there are more efficient ways to do it, like this:
source <(docker-container ssh -c "echo 'YAY!'; exit 25;" 2>&1 1>/dev/null | awk -F= '/code=/ {print "exitStatus=" $2}')
echo $exitStatus
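Putting the lastpipe approach into the hook itself, a minimal sketch (assuming the docker-container command behaves exactly as in the question):
#!/usr/bin/env bash
set +m; shopt -s lastpipe # run the pipeline's last command in this shell
docker-container ssh -c "echo 'YAY!'; exit 25;" 2>/tmp/stderr
grep 'code=' /tmp/stderr | cut -d'=' -f2 | read exitStatus
echo "$exitStatus" # 25
exit "$exitStatus"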

Add timestamp to tee'd output, but not original output

I'm writing a little budget script to keep an eye on my finances. I'd like to keep a log of all of my transactions and when they happened.
Currently, I input spendings as an argument:
f)
echo "$OPTARG spent on food" | tee spendinglogs.log
... # take away money from monthly budget
echo "$REMAINING_FOOD_BUDGET remaining" | tee spendinglogs.log
m)
echo "$OPTARG spent on miscellaneous" | tee spendinglogs.log
... # take away money from monthly budget
echo "$REMAINING_MISC_BUDGET remaining" | tee spendinglogs.log
... #etc
I don't want to timestamp output to the terminal, but I do want to timestamp output to the logs. Is there a way to do this?
For example
echo "$OPTARG spent on food" | tee `date %d-%m-%y %H_%M_%S` spendinglogs.log
But I can't imagine that working.
EDIT: Tested and updated with correct info
Check out ts from the moreutils package.
If you're using bash, you can tee into a process substitution, which behaves like a file:
echo "$OPTARG spent on food" | tee >(ts "%d-%m-%y %H_%M_%S" > spendinglogs.log)
An earlier version of this answer also suggested pee, likewise from moreutils, but that was incorrect: pee appears to buffer stdin before sending it to the output pipes, so it will not work for timestamping (it will work with commands where timing is not important).
Try this:
echo something 2>&1 | while read line; do echo $line; echo "$(date) $line" >> something.log; done
function tee_with_timestamps () {
local logfile=$1
while read data; do
echo "${data}" | sed -e "s/^/$(date '+%T') /" >> "${logfile}"
echo "${data}"
done
}
echo "test" | tee_with_timestamps "file.txt"
The snippet below prepends a timestamp to each line in the log file, while the console output carries no timestamp.
exec &> >(tee -a >(sed "s/^/$(date) /" >> "${filename}.log"))
&> : redirects both stdout and stderr
>> : appends to a file
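One caveat with the sed version above: $(date) is expanded once, when exec sets up the redirection, so every logged line carries the same timestamp. A per-line timestamp needs something that runs for each line; a sketch substituting a while read loop for the sed (the log file name is illustrative):
exec &> >(tee -a >(while IFS= read -r line; do
printf '%s %s\n' "$(date '+%d-%m-%y %H:%M:%S')" "$line"
done >> "spendinglogs.log"))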

Adding spaces to stdout

Is it possible to add spaces to the left of every output to stdout (and stderr if possible) when I run commands in a bash shell script?
I'd like to do something like:
#!/bin/bash
echo Installing: Something
echo " => installing prerequisite1"
## INSERT MAGICAL LEFT SPACES COMMAND HERE ##
apt-get install -q -y prerequisite
## ANOTHER MAGICAL CANCELLING LEFT SPACES COMMAND HERE ##
echo " => installing prerequisite2"
# ... the padding again ...
wget http://abc.com/lostzilla.tar.gz
tar vzxf lostzilla.tar.gz
cd lostzilla-1.01
./configure
make && make install
# ... end of padding ...
echo Done.
Any idea?
EDIT: Added quotes to the echo command, otherwise they won't be padded.
Yes, you can quote them for simple things:
echo ' => installing prerequisite1'
and pipe the output through sed for complex things:
tar vzxf lostzilla.tar.gz 2>&1 | sed 's/^/   /'
The 2>&1 puts stdout and stderr onto the stdout stream and the sed replaces every start-of-line marker with three spaces.
How well this will work on something like wget which does all sorts of cursor manipulations I'm not sure.
Example shown here:
pax> ls -1 p*
phase1.py
phase1.sh
phase2.py
phase2.sh
primes.c
primes.exe
primes.sh
primes.stat
pax> ls -1 p* | sed 's/^/   /'
   phase1.py
   phase1.sh
   phase2.py
   phase2.sh
   primes.c
   primes.exe
   primes.sh
   primes.stat
One trick I've used in the past is to ensure that the scripts themselves take care of the indentation:
#!/bin/bash
if [[ "${DONT_EVER_SET_THIS_VAR}" = "" ]] ; then
export DONT_EVER_SET_THIS_VAR=except_for_here
$0 | sed 's/^/   /'
exit
fi
ls -1 p*
This will re-run the script with indentation through sed if it's not already doing so. That way, you don't have to worry about changing all your output statements. A bit of a hack, I know, but I tend to just do what's necessary for quick-and-dirty shell scripts.
If you want to turn spacing on and off, use the following awk script:
#!/usr/bin/gawk -f
/^#SPACEON/ { spaces=1; }
/^#SPACEOFF/ { spaces=0; }
!/^#SPACE/ {
if(spaces) {
print "   " $0;
} else {
print $0;
}
}
Note that there are slight problems with your bash script. Notably, the use of => in your echo statements will output the character = to a file named "installing".
#!/bin/bash
echo Installing: Something
echo '=> installing prerequisite1'
echo '#SPACEON'
echo You would see apt-get install -q -y prerequisite
echo '#SPACEOFF'
echo '=> installing prerequisite2'
echo '#SPACEON'
echo You would see wget http://abc.com/lostzilla.tar.gz
echo You would see tar vzxf lostzilla.tar.gz
echo You would see cd lostzilla-1.01
echo You would see ./configure
echo You would see make \&\& make install
echo '#SPACEOFF'
echo Done.
Combining the two gives me:
$ ./do-stuff | ./magic-spacing
Installing: Something
=> installing prerequisite1
   You would see apt-get install -q -y prerequisite
=> installing prerequisite2
   You would see wget http://abc.com/lostzilla.tar.gz
   You would see tar vzxf lostzilla.tar.gz
   You would see cd lostzilla-1.01
   You would see ./configure
   You would see make && make install
Done.
Where do-stuff is your bash script and magic-spacing is my awk script above.
Depending on how the command writes to stdout, you can just indent with a simple awk script:
$ echo -e 'hello\nworld' | awk '{print " ",$0}'
  hello
  world
Quite un-magically, you can use printf to do the following:
# space padding for single string
printf "%-4s%s\n" "" "=> installing prerequisite1"
# space padding for single command output
# use of subshell leaves original IFS intact
( IFS=$'\n'; printf " %s\n" $(command ls -ld * 2>&1) )
# note: output to stderr is unbuffered
( IFS=$'\n'; printf " %s\n" $(command ls -ld * 1>&2) )
It's also possible to group commands by enclosing them in curly braces and space-pad their output like so:
{
cmd1 1>&2
cmd2 1>&2
cmd3 1>&2
} 2>&1 | sed 's/.*/ &/'
It's possible to redirect stdout to stderr script/shell-wide using exec ...
(
exec 1>&2
command ls -ld *
) 2>&1 | sed 's/^/ /'
Use python pyp (The Pyed Piper):
ls -ld | pyp "' '+p"
