Is there a way to use `script` with shell functions? (colorized output) - bash

I'm using a script to run several tests (npm, python, etc.).
These have colored output.
I'm running some of these tests in parallel by sending processes to the background, and capturing the output in a variable to display when done (as opposed to letting the output go to the TTY and having multiple outputs mixed up together).
All works well, but the output is not colored, and I would like to keep the colors. I understand this is because it is not output to a TTY, so color is stripped, and I looked for tricks to avoid this.
This answer:
Can colorized output be captured via shell redirect?
offers a way to do this, but it doesn't work with shell functions.
If I do:
OUTPUT=$(script -q /dev/null npm test | cat)
echo -e $OUTPUT
I get the output in the variable and the echo command output is colored.
But if I do:
function run_test() { npm test; }
OUTPUT=$(script -q /dev/null run_test | cat)
echo -e $OUTPUT
I get:
script: run_test: No such file or directory
If I call the run_test function passing it to script like:
function run_test() { npm test; }
OUTPUT=$(script -q /dev/null `run_test` | cat)
echo -e $OUTPUT
it's like passing the already-evaluated output, without the colors, so the script output is not colored.
Is there a way to make shell functions work with script?
I could put the script call inside the function, like:
function run_test() { script -q /dev/null npm run test | cat; }
but there are several issues with that:
Sometimes I need to run multiple commands in series and send that whole sequence to the background to run in parallel, which becomes messy: I want to wrap the sequence in a shell function and run that with script.
This would also run the function itself right away and return when done. What I want is to pass the function to another function that runs it in the background and logs its output with script.
PS: I also tried npm config set color always to force npm to always output colors, but that doesn't seem to help; plus I have other functions to call that are not all npm, so it would not work for everything anyway.

You can use a program such as unbuffer, which simulates a TTY, to get color output from software whose output is eventually going to a pipeline.
In the case of:
unbuffer npm test | cat
...there's a TTY simulated by unbuffer, so npm doesn't see the FIFO going to cat on its output.
If you want to run a shell function behind a shim of this type, be sure to export it to the environment, as with export -f.
Demonstrating how to use this with a shell function:
myfunc() { echo "In function"; (( $# )) && { echo "Arguments:"; printf ' - %s\n' "$@"; }; }
export -f myfunc
unbuffer bash -c '"$@"' _ myfunc "Argument one" "Argument two"
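To tie this back to the question, the same shim works inside a command substitution. A minimal sketch, assuming unbuffer (from the expect package) is installed and that npm emits color when it sees a TTY:
run_test() { npm test; }
export -f run_test
OUTPUT=$(unbuffer bash -c '"$@"' _ run_test)
echo -e "$OUTPUT"
Because unbuffer gives run_test a pseudo-TTY, the captured output should keep its color codes (you may see extra carriage returns added by the pseudo-TTY).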

I tried unbuffer and it doesn't seem to work with shell functions either
script doesn't work when passed a shell function; however, it's possible to feed it input on stdin, so what ended up working for me was:
script -q /dev/null <<< "run_test"
or
echo "run_test" | script -q /dev/null
so I could capture this into a shell variable, even using a variable as the command, like:
OUTPUT=$(echo "$COMMAND" | script -q /dev/null)
and later output the colored output with
echo -e $OUTPUT
Unfortunately, this still outputs some extra garbage (i.e. the shell name, the command name, and the exit command at the end).
Since I wanted to capture the exit code, I could not pipe the output somewhere else, so I went this way:
run() {
    run_in_background "$@" &
}

run_in_background() {
    COMMAND="$*" # whatever is passed to the function
    CODE=0
    OUTPUT=$(echo "$COMMAND" | script -q /dev/null) || CODE=$(( CODE + $? ))
    echo -e $OUTPUT | grep -v "bash" | grep -v "$COMMAND"
    if [ "$CODE" != "0" ]; then exit 1; fi
}
and use like:
# test suites shell functions
run_test1() { npm test; }
run_test2() { python manage.py test; }
# queue tests to run in background jobs
run run_test1
run run_test2
# wait for all to finish
wait
I'm skipping the part where I catch the errors and propagate failure to the top PID, but you get the gist.
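For reference, one way that propagation could look (a sketch of the idea, not my exact code): have run record each job's PID and wait on the jobs individually, so that any failing suite fails the whole script:
pids=()
run() {
    run_in_background "$@" &
    pids+=($!)
}
# ... queue tests as above, then:
status=0
for pid in "${pids[@]}"; do
    wait "$pid" || status=1   # run_in_background exits 1 on failure
done
exit "$status"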

Related

Redirect copy of stdin to file from within bash script itself

In reference to https://stackoverflow.com/a/11886837/1996022 (whose title I also shamelessly stole), where the question is how to capture a script's output, I would like to know how I can additionally capture the script's input, mainly so that scripts which also take user input produce complete logs.
I tried things like
exec 3< <(tee -ia foo.log <&3)
exec <&3 <(tee -ia foo.log <&3)
But nothing seems to work. I'm probably just missing something.
Maybe it'd be easier to use the script command? You could either have your users run the script with script directly, or do something kind of funky like this:
#!/bin/bash
main() {
    read -r -p "Input string: "
    echo "User input: $REPLY"
}

if [ "$1" = "--log" ]; then
    # If the first argument is "--log", shift the arg
    # out and run main
    shift
    main "$@"
else
    # If run without --log, re-run this script within a
    # script command so all script I/O is logged
    script -q -c "$0 --log $*" test.log
fi
Unfortunately, you can't pass a function to script -c, which is why the double call is necessary in this method.
If it's acceptable to have two scripts, you could also have a user-facing script that just calls the non-user-facing script with script:
script_for_users.sh
--------------------
#!/bin/sh
script -q -c "/path/to/real_script.sh" <log path>
real_script.sh
---------------
#!/bin/sh
<Normal business logic>
It's simpler:
#! /bin/bash
tee ~/log | your_script
The wonderful thing is that your_script can be a function, a command, or a {} command block!
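For example, a minimal sketch that logs everything the user types to ~/log while the function reads it as normal input (note that read -p only shows its prompt when stdin is a terminal, so the prompt is written to stderr explicitly here):
#!/bin/bash
main() {
    printf 'Input string: ' >&2   # prompt goes straight to the terminal via stderr
    read -r reply
    echo "User input: $reply"
}
# everything typed by the user is appended to ~/log and still reaches main
tee -a ~/log | main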

how to execute/terminate command in the pipe conditionally

I'm working on a script which needs to detect the first call to FFMPEG in a program and run a script from then on.
The core code is like:
strace -f -etrace=execve <program> 2>&1 | grep <some_pattern> | <run_some_script>
The desired behaviour is: when the first grepped result comes out, the script should start, and if nothing matches before <program> terminates, the script should not run at all.
The main problem is how to conditionally execute the script based on grep's output, and how to terminate the script after the program terminates.
I think the first one could be solved using read, since the grepped text is only used as a signal and its contents are irrelevant:
... | read -N 1 && <run_some_script>
and the second could be solved using the broken-pipe mechanism:
<run_some_script> > >(...)
but I don't know how to make them work together. Or is there a better solution?
You could ask grep to match the pattern just once and exit, returning a success exit code. Putting this together in an if conditional:
if strace -f -etrace=execve <program> 2>&1 | grep -q <some_pattern>; then
    echo 'run a program'
fi
The -q flag suppresses grep's usual stdout output; as you've mentioned, you only want to use the grep result to trigger an action, not the matched text itself.
Or maybe you need to use coproc to run the command in the background and check every line of output it produces. Just write a wrapper over the command you want to run, as below. The function is not needed for a single command, but for multiple commands a function is more relevant.
wrapper() { strace -f -etrace=execve <program> 2>&1 ; }
Using coproc is similar to running the command in the background, but it provides an easy way to capture the output of the command:
coproc outputfd { wrapper; }
Now watch the output of the commands run inside wrapper by reading from the file descriptor provided by coproc. The code below watches the output and, on the first match of the pattern, starts the command as a background job and stores its process id in pid.
flag=1
while IFS= read -r -u "${outputfd[0]}" output; do
    if [[ $output == *"pattern"* && $flag -eq 1 ]]; then
        flag=0
        command_to_run & pid=$!
    fi
done
When the loop terminates, it means the command started by coproc has completed. At that point, kill the script you started; for safety, errors are suppressed in case it has already exited:
kill "$pid" >/dev/null 2>&1
Using the ifne util (from moreutils):
strace -f -etrace=execve <program> 2>&1 |
grep <some_pattern> | ifne <some_script>
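ifne runs the given command only if standard input is not empty, passing that input through to it. A quick way to see the behaviour, assuming moreutils is installed:
printf '' | ifne echo "script would start"     # empty input: the command is not run, nothing printed
printf 'x\n' | ifne echo "script would start"  # non-empty input: prints "script would start"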

bash: want errors from piped commands going to stderr, not to screen

In my script, if I want to set a variable to the output of a command and keep any errors from a failing command off the screen, I can do something like:
var=$(command 2>/dev/null)
If I have commands piped together, i.e.
var=$(command1 | command2 | command3 2>/dev/null)
what's an elegant way to suppress any errors coming from any of the commands? I don't mind if var doesn't get set; I just don't want the user to see the errors from these "lower level commands" on the screen. I want to test var separately afterwards.
Here's an example with two commands, but I've got a longer chain, so I don't want to capture and echo the intermediate results into the next command each time.
res=$(ls bogusfile | grep morebogus 2>/dev/null)
Put the whole pipeline in a group:
res=$( { ls bogusfile | grep morebogus; } 2>/dev/null)
You need to redirect stderr for each command in the pipeline:
res=$(ls bogusfile 2>/dev/null | grep morebogus 2>/dev/null)
Or you could wrap everything in a subshell whose output is redirected:
res=$( (ls bogusfile | grep morebogus) 2>/dev/null)
You should be able to use {} to group multiple commands:
var=$( { command1 | command2 | command3; } 2>/dev/null)
You can also just redirect it for the entire script, using exec 2>/dev/null, e.g.
#!/bin/bash
return 2>/dev/null # stop here if the script is sourced, so the exec below can't affect the caller's shell
exec 3>&2 2>/dev/null
# file descriptor 2 is directed to /dev/null for any commands here
exec 2>&3
# fd 2 is directed back to where it was originally for any commands here
Note: this will suppress interactive output, including the prompt, so you can execute the script, but you shouldn't run these commands in an interactive shell or source the script without the initial return line. You also won't be able to use read normally without first redirecting the file descriptor back.
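A minimal sketch of that save/restore pattern, using the example from the question:
#!/bin/bash
return 2>/dev/null                      # stop here if the script is sourced
exec 3>&2 2>/dev/null                   # keep a copy of stderr on fd 3, then silence it
res=$(ls bogusfile | grep morebogus)    # any errors from the pipeline are discarded
exec 2>&3 3>&-                          # restore stderr and close the spare descriptor
if [ -z "$res" ]; then echo "res is empty" >&2; fi   # stderr works again from here on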

Copy *unbuffered* stdout to file from within bash script itself

I want to copy stdout to a log file from within a bash script, meaning I don't want to call the script with its output piped to tee; I want the script itself to handle it. I've successfully used this answer to accomplish this, using the following code:
#!/bin/bash
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"
This works, but has the downside of output being buffered until the script is completed, as is also discussed in the linked answer. In the above example, both "hello" and "world" will show up in the log only after the 10 seconds have passed.
I am aware of the stdbuf command, and if running the script with
stdbuf -oL ./myscript.sh
then stdout is indeed continuously printed both to the file and the terminal.
However, I'd like this to be handled from within the script as well. Is there any way to combine these two solutions? I'd rather not resort to a wrapper script that simply calls the original script enclosed with "stdbuf -oL".
You can use a workaround and make the script execute itself with stdbuf, if a special argument is present:
#!/bin/bash
if [[ "$1" != __BUFFERED__ ]]; then
    prog="$0"
    stdbuf -oL "$prog" __BUFFERED__ "$@"
else
    shift # discard __BUFFERED__
    exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
    exec 2>&1
    # <rest of script>
    echo "hello"
    sleep 1
    echo "world"
fi
This will mostly work:
if you run the script with ./test, it shows unbuffered [] hello\n[] world.
if you run the script with ./test 123 456, it shows [123] hello\n[123] world like you want.
it won't work, however, if you run it with bash test: $0 is set to test, which is not your script. Fixing this is not in the scope of this question, though.
The delay in your first solution is caused by sed, not by tee. Try this instead:
#!/bin/bash
exec 6>&1 2>&1>&>(tee -a myscript.log)
To "undo" the tee effect:
exec 1>&6 2>&6 6>&-
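If you want to keep the sed prefix from the question's version, another hedged option (assuming GNU sed, or coreutils stdbuf) is to unbuffer just sed inside the process substitution, since sed is what introduces the delay:
exec > >(sed -u "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
stdbuf -oL sed "s/^/[${1}] /" in place of sed -u should behave the same way.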

Bash piped commands and its returns

Is there any way for a piped command to replicate the previous command's exit status?
For example:
#!/bin/bash
(...)
function customizedLog() {
    # do something with the piped command output
    exit <returned value from the last piped command/script (script.sh)>
}
script.sh | customizedLog
echo ${?} # here I wanna show the script exit value
(...)
I know I could simply check the return using ${PIPESTATUS[0]}, but I really want to do this as if the customizedLog function wasn't there.
Any thoughts?
In bash:
set -o pipefail
With this option, the exit status of a pipeline is the last non-zero exit status among its commands, or zero if all commands in the pipeline succeed.
set -o pipefail
script.sh | customizedLog
echo ${?}
Just make sure customizedLog succeeds (return 0), and you should pick up the exit status of script.sh. Test with false | customizedLog and true | customizedLog.
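A quick way to see it, with a stand-in logger that always succeeds (the function body here is illustrative):
set -o pipefail
customizedLog() { cat > /dev/null; return 0; }   # consume input, always succeed

false | customizedLog; echo "$?"   # prints 1: pipefail reports false's failure
true  | customizedLog; echo "$?"   # prints 0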
script.sh | customizedLog
The above will run in two separate processes (or 3, actually -- customizedLog will run in a bash fork, as you can verify with something like ps -T --forest). As far as I know, with the UNIX process model, the only process that has access to a process's return information is its parent, so there's no way customizedLog will be able to retrieve it.
So no, unless the previous command is run from a wrapper command that passes the exit status through the pipe (e.g., as the last line):
( command ; echo $? ) | piped_command_that_is_aware_of_such_an_arrangement
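A minimal sketch of what such an arrangement could look like, assuming line-oriented output (the last line the logger reads is the status echoed by the wrapper; everything else is treated as output to log):
customizedLog() {
    local line prev have_prev=0
    while IFS= read -r line; do
        (( have_prev )) && printf 'LOG: %s\n' "$prev"   # do something with the output
        prev=$line
        have_prev=1
    done
    # the final line is the exit status echoed after script.sh finished
    exit "${prev:-1}"
}

( script.sh; echo $? ) | customizedLog
echo "$?"   # now reflects script.sh's exit status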
