I'm working on a script which needs to detect the first call to FFMPEG in a program and run a script from then on.
The core code is something like:
strace -f -etrace=execve <program> 2>&1 | grep <some_pattern> | <run_some_script>
The desired behaviour is: when the first grepped result comes out, the script should start; and if nothing matches before <program> terminates, the script should not be run at all.
The main problems are how to conditionally execute the script based on grep's output, and how to terminate the script after the program terminates.
I think the first one could be solved using read, since the grepped text is only used as a signal and its contents are irrelevant:
... | read -N 1 && <run_some_script>
and the second could be solved using the broken-pipe mechanism:
<run_some_script> > >(...)
but I don't know how to make them work together. Or is there a better solution?
You could ask grep to match the pattern just once and exit with a success status. Putting this together in an if conditional:
if strace -f -etrace=execve <program> 2>&1 | grep -q <some_pattern>; then
    echo 'run a program'
fi
The -q flag suppresses grep's usual stdout output; as you've mentioned, you only want to use the grep result to trigger an action, not to use the matched text itself.
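A quick illustration of that exit-status behaviour (plain grep, nothing specific to strace):
printf 'foo\nbar\n' | grep -q bar && echo "matched"    # prints "matched"; grep itself prints nothing
printf 'foo\n' | grep -q bar || echo "no match"        # prints "no match"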
Or maybe you need to use coproc to run the command in the background and check every line of the output produced. Just write a wrapper around the command you want to run, as below. The function is not needed for a single command, but for multiple commands a function is more convenient.
wrapper() { strace -f -etrace=execve <program> 2>&1 ; }
Using coproc is similar to running the command in the background, but it provides an easy way to capture the output of the command:
coproc outputfd { wrapper; }
Now watch the output of the commands run inside wrapper by reading from the file descriptor provided by coproc. The code below watches the output; on the first match of the pattern it starts the command as a background job and stores its process id in pid.
flag=1
while IFS= read -r -u "${outputfd[0]}" output; do
    if [[ $output == *"pattern"* && $flag -eq 1 ]]; then
        flag=0
        command_to_run & pid=$!
    fi
done
When the loop terminates, the background job started by coproc is complete. At that point, kill the script you started. For safety, check whether it's still alive before doing the kill:
kill "$pid" >/dev/null 2>&1
Using the ifne util:
strace -f -etrace=execve <program> 2>&1 |
grep <some_pattern> | ifne <some_script>
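ifne (from moreutils) runs the given command only if its standard input is not empty, so <some_script> starts only once grep has produced at least one line of output; a quick check of that behaviour:
printf '' | ifne echo "would run script"    # prints nothing: empty input, command never runs
echo match | ifne echo "would run script"   # prints "would run script"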
I'm using a script to run several tests (npm, python, etc.).
These have colored output.
I'm running some of these tests in parallel by sending processes to the background, and capturing their output in a variable to display when done (as opposed to letting the output go to the TTY and having multiple outputs mixed up together).
All works well, but the output is not colored, and I would like to keep the colors. I understand this is because the output is not going to a TTY, so color is stripped, and I looked for tricks to avoid this.
This answer:
Can colorized output be captured via shell redirect?
offers a way to do this, but it doesn't work with shell functions.
If I do:
OUTPUT=$(script -q /dev/null npm test | cat)
echo -e $OUTPUT
I get the output in the variable and the echo command output is colored.
but if I do:
function run_test() { npm test; }
OUTPUT=$(script -q /dev/null run_test | cat)
echo -e $OUTPUT
I get:
script: run_test: No such file or directory
If I call the run_test function passing it to script like:
function run_test() { npm test; }
OUTPUT=$(script -q /dev/null `run_test` | cat)
echo -e $OUTPUT
it's like passing output that has already been evaluated, without the colors, so the script output is not colored.
Is there a way to make shell functions work with script?
I could put the script call inside the function, like:
function run_test() { script -q /dev/null npm run test | cat; }
but there are several issues with that:
Sometimes I need to run multiple commands in series and send that to the background to run in parallel; it gets messy. I want to wrap such sequences in a shell function and run that with script.
This would already run the function itself and return when done. What I want is to pass the function to call to another function that runs it in the background and logs the output with script.
PS: I also tried npm config set color always to force npm to always output colors, but that doesn't seem to help; plus I have other functions to call that are not all npm, so it would not work for everything anyway.
You can use a program such as unbuffer that simulates a TTY to get color output from software whose output is actually eventually going to a pipeline.
In the case of:
unbuffer npm test | cat
...there's a TTY simulated by unbuffer, so it doesn't see the FIFO going to cat on its output.
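You can see the simulated TTY directly; assuming unbuffer (from the expect package) is installed:
sh -c '[ -t 1 ] && echo "stdout is a TTY" || echo "stdout is a pipe"' | cat             # stdout is a pipe
unbuffer sh -c '[ -t 1 ] && echo "stdout is a TTY" || echo "stdout is a pipe"' | cat    # stdout is a TTY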
If you want to run a shell function behind a shim of this type, be sure to export it to the environment, as with export -f.
Demonstrating how to use this with a shell function:
myfunc() { echo "In function"; (( $# )) && { echo "Arguments:"; printf ' - %s\n' "$@"; }; }
export -f myfunc
unbuffer bash -c '"$@"' _ myfunc "Argument one" "Argument two"
I tried unbuffer and it doesn't seem to work with shell functions either.
script doesn't work when you pass it a shell function; however, it's possible to feed it the command on stdin, so what ended up working for me was
script -q /dev/null <<< "run_test"
or
echo "run_test" | script -q /dev/null
so I could capture this into a shell variable, even using a variable as the COMMAND, like:
OUTPUT=$(echo "$COMMAND" | script -q /dev/null)
and later output the colored output with
echo -e $OUTPUT
Unfortunately, this still outputs some extra garbage (i.e. the shell name, the command name and the exit command at the end).
Since I wanted to capture the exit code, I could not pipe the output somewhere else, so I went this way:
run() {
    run_in_background "$@" &
}
run_in_background() {
    COMMAND="$@"  # whatever is passed to the function
    CODE=0
    OUTPUT=$(echo "$COMMAND" | script -q /dev/null) || CODE=$(( CODE + $? ))
    echo -e $OUTPUT | grep -v "bash" | grep -v "$COMMAND"
    if [ "$CODE" != "0" ]; then exit 1; fi
}
and use like:
# test suites shell functions
run_test1() { npm test; }
run_test2() { python manage.py test; }
# queue tests to run in background jobs
run run_test1
run run_test2
# wait for all to finish
wait
I'm skipping the part where I catch the errors and propagate failure to the top PID, but you get the gist.
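A rough sketch of that part (not my exact code): have run record each background pid, then wait on them individually and fail if any of them failed:
pids=()
run() {
    run_in_background "$@" &
    pids+=($!)
}
# ... queue the tests as above, then:
failed=0
for pid in "${pids[@]}"; do
    wait "$pid" || failed=1
done
exit "$failed"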
In my script, if I want to set a variable to the output of a command and avoid any errors from the command going to the screen, I can do something like:
var=$(command 2>/dev/null)
If I have commands piped together, i.e.
var=$(command1 | command2 | command3 2>/dev/null)
what's an elegant way to suppress any errors coming from any of the commands? I don't mind if var doesn't get set; I just don't want the user to see the errors from these "lower level commands" on the screen. I want to test var separately afterwards.
Here's an example with two, but I've got a chain of commands, so I don't want to echo the variable results into the next command every time.
res=$(ls bogusfile | grep morebogus 2>/dev/null)
Put the whole pipeline in a group:
res=$( { ls bogusfile | grep morebogus; } 2>/dev/null)
You need to redirect stderr for each command in the pipeline:
res=$(ls bogusfile 2>/dev/null | grep morebogus 2>/dev/null)
Or you could wrap everything in a subshell whose output is redirected:
res=$( (ls bogusfile | grep morebogus) 2>/dev/null)
You should be able to use {} to group multiple commands:
var=$( { command1 | command2 | command3; } 2>/dev/null)
You can also just redirect it for the entire script, using exec 2>/dev/null, e.g.
#!/bin/bash
return 2>/dev/null # prevent sourcing
exec 3>&2 2>/dev/null
# file descriptor 2 is directed to /dev/null for any commands here
exec 2>&3
# fd 2 is directed back to where it was originally for any commands here
Note: This will prevent interactive output and display of the prompt. So you can execute the script, but you shouldn't just run these commands in an interactive shell or source the script without the initial return line. You also won't be able to use read normally without redirecting the file descriptor back first.
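Applied to the example from the question, that might look like this (a sketch; fd 3 is just a spare descriptor used to restore stderr afterwards):
#!/bin/bash
exec 3>&2 2>/dev/null
res=$(ls bogusfile | grep morebogus)    # the "No such file or directory" error is discarded
exec 2>&3 3>&-
echo "res='$res'"                       # from here on, errors are displayed normally again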
I want to copy stdout to a log file from within a bash script, meaning I don't want to call the script with its output piped to tee; I want the script itself to handle it. I've successfully used this answer to accomplish this, using the following code:
#!/bin/bash
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"
This works, but has the downside of output being buffered until the script is completed, as is also discussed in the linked answer. In the above example, both "hello" and "world" will show up in the log only after the 10 seconds have passed.
I am aware of the stdbuf command, and if running the script with
stdbuf -oL ./myscript.sh
then stdout is indeed continuously printed both to the file and the terminal.
However, I'd like this to be handled from within the script as well. Is there any way to combine these two solutions? I'd rather not resort to a wrapper script that simply calls the original script enclosed with "stdbuf -oL".
You can use a workaround and make the script execute itself with stdbuf, if a special argument is present:
#!/bin/bash
if [[ "$1" != __BUFFERED__ ]]; then
    prog="$0"
    stdbuf -oL "$prog" __BUFFERED__ "$@"
else
    shift # discard __BUFFERED__
    exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
    exec 2>&1
    # <rest of script>
    echo "hello"
    sleep 1
    echo "world"
fi
This will mostly work:
if you run the script with ./test, it shows unbuffered [] hello\n[] world.
if you run the script with ./test 123 456, it shows [123] hello\n[123] world like you want.
it won't work, however, if you run it with bash test: $0 is then set to test, which is not your script. Fixing this is not in the scope of this question, though.
The delay in your first solution is caused by sed, not by tee. Try this instead:
#!/bin/bash
exec 6>&1 2>&1>&>(tee -a myscript.log)
To "undo" the tee effect:
exec 1>&6 2>&6 6>&-
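If you still want the per-line prefix from the original approach, GNU sed's -u (unbuffered) option may avoid the delay as well (a sketch, assuming GNU sed):
exec > >(sed -u "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1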
For the following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but didn't quit to the bash prompt, then if I input something into /tmp/report, I could quit from this script and get into bash prompt.
I think that's reasonable. the 'exit' make the 'while' statement quit, but the 'tail' still alive. If something input into /tmp/report, the 'tail' will output to pipe, then 'tail' will detect the pipe is close, then 'tail' quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything to the 'while' statement so the whole pipeline quits immediately? I know I could save the pid of 'tail' into a temporary file, read that file in the 'while', and kill the 'tail'. Is there a simpler way?
Let me enlarge my question. If I use this tail|while in a script file, is it possible to fulfill the following requirements simultaneously?
a. If Ctrl-C is pressed or the main shell process is signaled, the main shell and the various subshells and background processes it spawned all quit.
b. I can quit from the tail|while only on a trigger condition, while the other subprocesses keep running.
c. Preferably without using a temporary file or named pipe.
You're correct. The while loop is executing in a subshell because it is part of a pipeline, and exit just exits from that subshell.
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
echo "pre"
break
echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
echo "pre"
break
echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
echo "pre"
sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid sending the signal to the script itself, it would be wise to enable job control (monitor mode) and run the pipeline in the background, so that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash and ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
echo "pre"
sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).
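The reason is that test (i.e. [) treats any single non-empty argument as true:
[ 1 ]; echo $?     # 0 (true)
[ 0 ]; echo $?     # 0 (still true: "0" is just a non-empty string)
[ "" ]; echo $?    # 1 (false: empty string)
[ ]; echo $?       # 1 (false: no argument at all)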
Suppose I have test.sh as below. The intent is for this script to run some background task(s) that continuously update some file. If the background task is terminated for some reason, it should be started again.
#!/bin/sh
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done &
echo $! > pidfile
and I want to call it like ./test.sh | otherprogram, e.g. ./test.sh | cat.
The pipe is not being closed as the background process still exists and might produce some output. How can I tell the pipe to close at the end of test.sh? Is there a better way than checking for existence of pidfile before calling the pipe command?
As a variant I tried using #!/bin/bash and disown at the end of test.sh, but it is still waiting for the pipe to be closed.
What I'm actually trying to achieve: I have a "status" script which collects the output of various scripts (uptime, free, date, get-xy-from-dbus, etc.), similar to this test.sh here. The output of the script is passed to my window manager, which displays it. It's also used in my GNU screen bottom line.
Since some of the scripts that are used might take some time to create output, I want to detach them from output collection. So I put them in a while true; do script; sleep 1; done loop, which is started if it is not running yet.
The problem now is that I don't know how to tell the calling script to "really" detach the daemon process.
See if this serves your purpose:
(I am assuming that you are not interested in any stderr output from the commands in the while loop. Adjust the code if you are. :-) )
#!/bin/bash
if [ -f pidfile ] && kill -0 $(cat pidfile); then
    cat somewhere
    exit
fi
while true; do
    echo "something" >> somewhere
    sleep 1
done >/dev/null 2>&1 &
echo $! > pidfile
If you want to explicitly close a file descriptor, like for example 1 which is standard output, you can do it with:
exec 1<&-
This is valid for POSIX shells.
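In test.sh that could be applied to the background loop itself, along these lines (a sketch; the loop only appends to the file, so it doesn't need the inherited stdout):
while true; do
    echo "something" >> somewhere
    sleep 1
done 1<&- &    # close the loop's copy of stdout so the pipe to cat can end
echo $! > pidfile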
When you put the while loop in an explicit subshell and run the subshell in the background it will give the desired behaviour.
(while true; do
    echo "something" >> somewhere
    sleep 1
done) &