Execute a command in a script and kill it when pressing a key - bash

I want to write a bash script which records my voice until I press a specific key. I thought I could use this command:
arecord -D hw -q -f cd -r 16000 speech.wav
which records from my laptop microphone and stops when the process is killed, but I don't know how to write the bash code to start the process and then kill it when I press a specific key. Can you help me?

key="q"
arecord speech.wav &
pid=$!
while read -n1 char ; do
    if [ "$char" = "$key" ] ; then
        kill "$pid"
        break
    fi
done
The $! notation expands to the PID of the last background job. The read builtin has the -n switch; with it, read returns after the given number of characters instead of waiting for a full line.
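The two building blocks can be tried in isolation; a minimal sketch, with sleep standing in for the long-running arecord process and read fed from a string so the demo needs no keyboard:

```shell
#!/bin/bash
# $! expands to the PID of the most recent background job;
# 'sleep' stands in here for the long-running arecord process.
sleep 10 &
pid=$!
echo "background pid: $pid"

# read -n1 returns after a single character rather than a full line;
# here it is fed from a string instead of the keyboard.
read -n1 char <<< "qx"
echo "read char: $char"

kill "$pid"
wait "$pid" 2>/dev/null
echo "stopped"
```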

Related

Running commands in bash script in parallel with loop

I have a script where I start a packet capture with tshark and then check whether the user has submitted an input text file.
If there is a file present, I need to run a command for every item in the file in a loop (while tshark is running); otherwise, just keep tshark running.
I would also like some way to stop tshark with user input, such as a letter.
Code snippet:
echo "Starting tshark..."
sleep 2
tshark -i ${iface} &>/dev/null
tshark_pid=$!
# if devices aren't provided (such as in the case of new devices), start capturing directly
if [ -z "$targets" ]; then
    echo "No target list provided."
else
    for i in $targets; do
        echo "Attempting to deauthenticate $i..."
        sudo aireplay-ng -0 $number -a $ap -c $i $iface$mon
    done
fi
What happens here is that tshark starts, and only when I quit it using Ctrl+c does it move on to the if statement and subsequent loop.
Adding an & at the end of a command executes it in a new subprocess. Mind that you won't be able to kill it with Ctrl+C.
For example:
firefox
will block the shell, while
firefox &
will not block the shell.
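Applied to the snippet above, the fix can be sketched as follows (sleep stands in for the tshark capture and the target list is hard-coded; both are assumptions for illustration):

```shell
#!/bin/bash
# 'sleep 30' stands in for the long-running tshark capture; the
# trailing & runs it in a subprocess so the script can continue.
sleep 30 &
tshark_pid=$!

# The deauthentication loop can now run while the capture keeps going.
for i in target1 target2; do
    echo "Attempting to deauthenticate $i..."
done

# Stop the capture afterwards (the real script could gate this on a keypress).
kill "$tshark_pid"
wait "$tshark_pid" 2>/dev/null
echo "capture stopped"
```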

Corrupted mpeg-4 files when trying to put recording and pulling screenrecord videos in script

I want to write a script that starts adb shell screenrecord in the background of my looping script. The looping script will wait at a menu to read a 1-character user input.
When the user presses any key, I want to automatically stop the screen recording and pull the video file onto the user's computer.
On top of that, I will need a countdown timer displayed on screen once the screen recording commences, so they know how close to the 3-minute limit they are;
..using printf "\r%*s" $((0)) "$remainingTime" in a for loop, for example.
This is the general idea of what I'm trying to do:
fileName="rec.$$.mp4"
record(){
    adb shell screenrecord /sdcard/$fileName &
    read -n 1 -s -r -p ''
    # EOF represents where the screenrecord command expects the EOF signal
    EOF
}
extract(){
    adb pull sdcard/$fileName
}
record | wait && extract
exit
Using this method to interrupt the screenrecord (replacing EOF with either return or kill "$childPID") may be the problem, but I cannot find anywhere how to send the EOF interrupt so as to avoid corrupting the video file produced.
Right now, it does interrupt with EOF replaced, and it does also pull the file, but the pulled file is always corrupted.
Any ideas?
I tried adding waits and using various interrupt types in place of the EOF signal, but to no avail.
This produces identical results; a corrupted video file.
#!/bin/bash
set -x; clear
fileName="rec.$$.mp4"
record(){
    adb shell screenrecord /sdcard/$fileName & childPID=$!
    read -n 1 -s -r -p ''
    kill "$childPID"
    wait "$childPID"
}
extract(){
    adb pull sdcard/$fileName
}
record | wait && extract; exit
Replacing the kill command with exit produces identical results as well.
After reading the character, kill the background process and wait for it to finish:
fileName="rec.$$.mp4"
adb shell screenrecord /sdcard/"$fileName" &
childpid=$!
read -n 1 -s -r -p '' endRec
kill "$childpid"
wait "$childpid"
adb pull /sdcard/"$fileName"
Rather than using adb pull, just pipe the screenrecord output to your file and save it on the fly:
fileName="rec.$$.mp4"
adb shell screenrecord - > "$fileName" &
childpid=$!
read -n 1 -s -r -p '' endRec
kill "$childpid"
wait "$childpid"
I have verified that manually entering the screenrecord command and using the EOF signal to interrupt it works as expected.
Create a fifo and send the EOF through it:
tmpd=$(mktemp -d)
fifo="$tmpd/fifo"
mkfifo "$fifo"
adb shell screenrecord /sdcard/"$fileName" < "$fifo" &
childpid=$!
read -n 1 -s -r -p '' endRec
echo -e '\04' > "$fifo"
wait "$childpid"
rm -r "$tmpd" # cleanup
adb pull /sdcard/"$fileName"
Alternatively, a bash coprocess could work well here.
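A coprocess keeps the child's stdin open as a file descriptor the script can write to, which is exactly what sending \04 needs. A minimal sketch of the mechanics, with cat standing in for adb shell screenrecord (untested against a real device):

```shell
#!/bin/bash
# 'cat' stands in for 'adb shell screenrecord'; the coproc gives us
# REC[1] (its stdin) and REC[0] (its stdout) as open descriptors.
coproc REC { cat; }
rec_pid=$REC_PID

# Anything written to REC[1] reaches the child's stdin; with a real
# 'adb shell', a single \04 byte here would signal EOF to screenrecord.
printf 'hello\n' >&"${REC[1]}"
IFS= read -r line <&"${REC[0]}"
echo "coproc echoed: $line"

# Close the child's stdin so it sees EOF and exits cleanly.
fd=${REC[1]}
exec {fd}>&-
wait "$rec_pid" 2>/dev/null
echo "coproc finished"
```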
Success! After much trial and error, I figured out a method to send EOF to the screenrecord command running in the device's shell!
The solution was to trap the usual signals and have the script send adb shell echo \04 whenever the exitScript function is called (note that this is also the function that the most common signals execute when caught).
#!/bin/bash
set -x; clear
fileName="rec.$$.mp4"
# make sure SIGINT always works even in presence of infinite loops
exitScript() {
    trap - SIGINT SIGTERM # clear the trap (SIGSTOP cannot be trapped, so it is not listed)
    adb shell echo \04; extract; exit
}; trap exitScript SIGINT SIGTERM # set trap
record(){
    adb shell screenrecord /sdcard/$fileName
}
extract(){
    adb pull sdcard/$fileName
}
record && extract
exitScript
I also believe that not running screenrecord in a subshell might have helped to avoid corrupting the output file.
After making the screenrecord function loop until an interrupt occurs (then extracting in the background for continuous video sequences), and putting everything in a function to plug into my script, I think the issue is fully resolved. Thanks for all your help!
#!/bin/bash
screenDVR(){
    clear
    read -r -p 'Enter the file path (or just drag the folder itself) of where you want to save the video sequences.. ' savePath
    if [ ! "$savePath" = "" ]; then cd ~; cd "$savePath"; else printf "\nDefaulting to home directory\n"; cd ~; fi
    # remove all files on device matching '/sdcard/rec.*'
    adb -d shell rm -f "/sdcard/rec."*
    # make sure SIGINT always works even in presence of infinite loops
    exitScript() {
        trap - SIGINT SIGTERM # clear the trap
        tput cnorm
        adb -d shell echo \04; wait
        extract
        # remove all files on device matching '/sdcard/rec.*'
        adb -d shell rm -f "/sdcard/rec."*; wait
        exit
    }; trap exitScript SIGINT SIGTERM # set trap
    extract(){
        printf "\n%*s\n" $((0)) "Extracting.. $fileName .. to your computer!"
        wait && adb pull sdcard/$fileName || return
    }
    record(){
        printf "\n\n%*s\n\n" $((0)) "Use CTRL-C to stop the DVR.."
        while true; do
            tStamp="$(date +'%Hh%Mm%Ss')"
            fileName="rec.$tStamp.$$.mp4"
            printf "\n%*s\n\n" $((0)) "Starting new recording: $fileName"
            adb -d shell screenrecord /sdcard/$fileName || adb shell echo \04
            # running extract in a sub-process means the next video doesn't have any time-gap from the last
            wait; extract & continue
        done
    }
    record && wait && exitScript
}
(screenDVR) && printf "\ncontinue main loop\n"; exit

Wait for process to end OR user input to continue

I have a bash script that runs a command in the background. After executing it, it displays to the user: Press any key to continue (powered by read -n1 -r -p 'Press any key to continue' value).
I would like something to monitor the background command and, when it finishes, let the script continue anyway. On the other hand, if the process is still running, I would like the user to be able to press a key to kill it instead of waiting for it to complete.
I guess the easiest way to visualize it would be like this:
The user can either wait for the timer to go to 0 and it will shut down automatically, or if they click the shut down button it immediately shuts down.
If you want to wait on the pid until the user hits a key, you can do it as follows:
./long_command.sh &
waitpid=$!
echo "Hit any key to continue or wait until the command finishes"
while kill -0 ${waitpid} 2> /dev/null ; do
    if read -n 1 -t 1 KEY ; then
        kill ${waitpid}
        break
    fi
done
Just replace long_command.sh with your command. Here $! returns the PID of the last started subprocess, and kill -0 ${waitpid} checks whether the process still exists (it does not kill the process). ps -q ${waitpid} works on Linux as well, but not on Mac (thanks to @leetbacoon for mentioning this). read -n 1 -t 1 means "read one character, but wait at most 1 second" (you can also use fractions like 0.5 here). The exit status of this command depends on whether it could read a character within the specified time.
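The timeout behaviour of read can be checked without a terminal; a quick sketch of the two exit-status cases:

```shell
#!/bin/bash
# Case 1: a character is available within the timeout -> status 0.
if read -r -n 1 -t 1 key1 <<< "x"; then
    echo "got: $key1"
fi

# Case 2: nothing arrives within the timeout -> non-zero status
# (the process substitution produces no output before the timeout).
if ! read -r -n 1 -t 0.2 key2 < <(sleep 1); then
    echo "no input in time"
fi
```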
Something like this might work for you
#!/bin/bash
doStuff() {
local pidOfParent="$1"
for i in $(seq 10); do
echo "stuff ${i}" > /dev/null
sleep 1
done
kill $pidOfParent
}
doStuff $$ &
doStuffPid="$!"
read -n1 -rp 'Press any key to continue' && kill $doStuffPid
Breakdown
doStuff is our function containing what you want to run in the background, i.e. the music.
$$ is the PID of the running script, which we pass into our function to become the more descriptive pidOfParent, which we kill after we've finished doing stuff.
The & after calling the function puts it in the background.
$! gets the PID of the last executed command, so we now have the PID of the background process we just started.
You provided read -n1 -rp 'Press any key to continue', so I assume you already know what that does; && kill $doStuffPid will kill the background process when read has exited (this also works if you terminate the script with ^C).
If you are willing to use embedded expect, you can write:
expect <(cat <<'EOD'
spawn sleep 10
send_user "Press any key to continue\n"
stty raw -echo
expect {
    -i $user_spawn_id -re ".+" {}
    -i $spawn_id eof {}
}
EOD
)
where you would replace sleep 10 with your process.

Quit from pipe in bash

For following bash statement:
tail -Fn0 /tmp/report | while [ 1 ]; do echo "pre"; exit; echo "past"; done
I got "pre", but it didn't return to the bash prompt; only after I wrote something into /tmp/report did the script quit and return to the prompt.
I think that's reasonable: the exit makes the while statement quit, but the tail is still alive. When something is written to /tmp/report, tail writes it to the pipe, detects that the pipe is closed, and quits.
Am I right? If not, would anyone provide a correct interpretation?
Is it possible to add anything to the while statement to quit the whole pipeline immediately? I know I could save the PID of tail into a temporary file, read that file in the while, and then kill the tail. Is there a simpler way?
Let me enlarge my question. If I use this tail|while in a script file, is it possible to satisfy the following requirements simultaneously?
a. If Ctrl-C is pressed or the main shell process is signalled, the main shell and the various subshells and background processes spawned by it all quit.
b. I can quit the tail|while only on a trigger condition, keeping the other subprocesses running.
c. Preferably without a temporary file or named pipe.
You're correct. The while loop is executing in a subshell because its input is redirected, and exit just exits from that subshell.
If you're running bash 4.x, you may be able to achieve what you want with a coprocess.
coproc TAIL { tail -Fn0 /tmp/report.txt ;}
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <&${TAIL[0]}
kill $TAIL_PID
http://www.gnu.org/software/bash/manual/html_node/Coprocesses.html
With older versions, you can use a background process writing to a named pipe:
pipe=/tmp/tail.$$
mkfifo $pipe
tail -Fn0 /tmp/report.txt >$pipe &
TAIL_PID=$!
while [ 1 ]
do
    echo "pre"
    break
    echo "past"
done <$pipe
kill $TAIL_PID
rm $pipe
You can (unreliably) get away with killing the process group:
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done
This may send the signal to more processes than you want. If you run the above command in an interactive terminal you should be okay, but in a script it is entirely possible (indeed likely) that the process group will include the script running the command. To avoid signalling the script itself, it would be wise to enable monitoring and run the pipeline in the background, ensuring that a new process group is formed for the pipeline:
#!/bin/sh
# In POSIX shells that support the User Portability Utilities option
# (this includes bash & ksh), executing "set -m" turns on job control.
# Background processes run in a separate process group. If the shell
# is interactive, a line containing their exit status is printed to
# stderr upon their completion.
set -m
tail -Fn0 /tmp/report | while :
do
    echo "pre"
    sh -c 'PGID=$( ps -o pgid= $$ | tr -d \ ); kill -TERM -$PGID'
    echo "past"
done &
wait
Note that I've replaced the while [ 1 ] with while : because while [ 1 ] is poor style. (It behaves exactly the same as while [ 0 ]).
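The effect of set -m on process groups can be observed directly; a small sketch (assumes ps supports -o pgid=, as on Linux and macOS):

```shell
#!/bin/bash
set -m
# With job control enabled, a background job leads its own process group,
# so its PGID equals its PID.
sleep 1 &
bgpid=$!
pgid=$(ps -o pgid= -p "$bgpid" | tr -d ' ')
echo "pid=$bgpid pgid=$pgid"

# Therefore kill -TERM -- -$pgid signals only the background pipeline,
# not the script that launched it.
kill -TERM -- -"$pgid" 2>/dev/null
wait 2>/dev/null
echo "done"
```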

Cannot terminate a shell command with Ctrl+c

Would someone please tell me why the bash statement below cannot be terminated by Ctrl+C properly?
$( { ( tail -fn0 /tmp/a.txt & )| while read line; do echo $line; done } 3>&1 )
I run this statement, and two bash processes and one tail process are launched (seen via ps auxf). When I press Ctrl+C it does not return to the bash prompt: the two bash processes stop, while tail is still running. Only after I write something into /tmp/a.txt do I get back to the prompt.
What I want is: on Ctrl+C, just return to the bash prompt without any related process left behind.
It would be appreciated if someone could explain exactly what this statement does, e.g. how the pipe causes bash to fork, what is redirected where, etc.
Updated at Oct 9 2014:
Here provide some update in case it's useful to you.
My adopted solution has two ingredients:
Use a tmp PID file:
( tail -Fn0 ${monitor_file} & echo "$!" >${tail_pid} ) | \
while IFS= read -r line; do
    xxxx
done
Use a trap like trap "rm ${tail_pid} 2>/dev/null; kill 0 2>/dev/null; exit;" INT TERM to kill the relevant processes and remove leftover files.
Please note, this kill 0 is bash-specific, and 0 means all processes in the current process group.
This solution uses a tmp PID file; I would still welcome a solution that avoids one.
It works to trap the INT signal (sent by Ctrl-C) to kill the tail process.
$( r=$RANDOM
   trap '{ kill $(cat /tmp/pid$r.pid); rm /tmp/pid$r.pid; exit; }' SIGINT EXIT
   { ( tail -fn0 /tmp/a.txt & echo $! > /tmp/pid$r.pid ) | while read line; do echo $line; done } 3>&1 )
(I use a random value on the PID file name to at least mostly allow multiple instances to run)
