Corrupted MPEG-4 files when recording and pulling screenrecord videos in a script - bash

I want to write a script that starts adb shell screenrecord in the background while my looping script waits at a menu to read a one-character user input.
When the user presses any key, I want to automatically stop the screen recording and pull the video file onto the user's computer.
On top of that, I need a countdown timer that displays on screen once the screen recording starts, so the user knows how close they are to screenrecord's 3-minute limit;
for example, using printf "\r%*s" $((0)) "$remainingTime" in a loop.
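For illustration, a minimal sketch of such a countdown (assuming the full 180-second screenrecord limit; the variable name and message are mine, not from the script below):
remainingTime=180
while (( remainingTime > 0 )); do
    # \r returns to the start of the line so the countdown overwrites itself
    printf '\rTime remaining: %3ds' "$remainingTime"
    sleep 1
    (( remainingTime-- ))
done
printf '\n'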
This is the general idea of what I'm trying to do:
fileName="rec.$$.mp4"
record(){
adb shell screenrecord /sdcard/$fileName &
read -n 1 -s -r -p ''
#EOF represents where the screenrecord command expects the EOF signal
EOF
}
extract(){
adb pull sdcard/$fileName
}
record | wait && extract
exit
Using this method to interrupt the screenrecord (replacing EOF with either return or kill "$childPID") may be the problem, but I cannot find anywhere how to deliver the EOF so that the video file produced isn't corrupted.
Right now, with EOF replaced, it does interrupt, and it does also pull the file, but the pulled file is always corrupted.
Any ideas?
I tried adding waits and using various interrupt types in place of the EOF signal, but to no avail.
The following produces identical results: a corrupted video file.
#!/bin/bash
set -x; clear
fileName="rec.$$.mp4"
record(){
adb shell screenrecord /sdcard/$fileName & childPID=$!
read -n 1 -s -r -p ''
kill "$childPID"
wait "$childPID"
}
extract(){
adb pull sdcard/$fileName
}
record | wait && extract; exit
Replacing the kill command with exit produces identical results as well.

After reading the character, kill the background process and wait for it to finish
fileName="rec.$$.mp4"
adb shell screenrecord /sdcard/"$fileName" &
childpid=$!
read -n 1 -s -r -p '' endRec
kill "$childpid"
wait "$childpid"
adb pull /sdcard/"$fileName"
Rather than using adb pull, just pipe the screenrecord output to your file and save it on the fly.
fileName="rec.$$.mp4"
adb shell screenrecord - > "$fileName" &
childpid=$!
read -n 1 -s -r -p '' endRec
kill "$childpid"
wait "$childpid"
I have verified that manually entering the screenrecord command and sending the EOF signal to interrupt it stops the recording without corrupting the file.
Create a fifo and send the EOF through it.
tmpd=$(mktemp -d)
fifo="$tmpd/fifo"
mkfifo "$fifo"
adb shell screenrecord /sdcard/"$fileName" < "$fifo" &
childpid=$!
read -n 1 -s -r -p '' endRec
echo -e '\04' > "$fifo"
wait "$childpid"
rm -r "$tmpd" # cleanup
adb pull /sdcard/"$fileName"
Alternatively a bash coprocess could be good here.
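A hedged sketch of that coprocess idea (same variable names as above; treat it as an outline rather than a tested drop-in):
# start screenrecord as a coprocess so we keep a handle on its stdin
coproc SR { adb shell screenrecord /sdcard/"$fileName"; }
read -n 1 -s -r -p '' endRec
printf '\x04' >&"${SR[1]}"   # send EOT (Ctrl-D) to the remote shell's stdin
wait "$SR_PID"
adb pull /sdcard/"$fileName"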

Success! After much trial and error, I figured out a method to send EOF to the screenrecord command running in the device's shell!
The solution was to trap the traditional signals and have the script send adb shell echo \04 whenever the exitScript function is called (note that this is also the function that the most common signals trigger when caught).
#!/bin/bash
set -x; clear
fileName="rec.$$.mp4"
# make sure SIGINT always works even in presence of infinite loops
exitScript() {
trap - SIGINT SIGTERM # clear the trap (SIGSTOP cannot be trapped, so it is omitted)
adb shell echo \04; extract; exit
}; trap exitScript SIGINT SIGTERM # set trap
record(){
adb shell screenrecord /sdcard/$fileName
}
extract(){
adb pull sdcard/$fileName
}
record && extract
exitScript
I also believe that not running screenrecord in a subshell might have helped to avoid corrupting the output file.
After making the screenrecord function loop until an interrupt occurs (then extracting in the background for continuous video sequences), and putting everything in a function to plug into my script, I think the issue is fully resolved. Thanks for all your help!
#!/bin/bash
screenDVR(){
clear
read -r -p 'Enter the file path (or just drag the folder itself) of where you want to save the video sequences.. ' savePath
if [ ! "$savePath" = "" ]; then cd ~; cd "$savePath"; else printf "\nDefaulting to home directory\n"; cd ~; fi
# remove all files on device containing 'rec.'
adb -d shell 'rm -f /sdcard/rec.*'
# make sure SIGINT always works even in presence of infinite loops
exitScript() {
trap - SIGINT SIGTERM # clear the trap
tput cnorm
adb -d shell echo \04; wait
extract
# remove all files on device containing 'rec.'
adb -d shell 'rm -f /sdcard/rec.*'; wait
exit
}; trap exitScript SIGINT SIGTERM # set trap
extract(){
printf "\n%*s\n" $((0)) "Extracting.. $fileName .. to your computer!"
wait && adb pull sdcard/$fileName || return
}
record(){
printf "\n\n%*s\n\n" $((0)) "Use CTRL-C to stop the DVR.."
while true; do
tStamp="$(date +'%Hh%Mm%Ss')"
fileName="rec.$tStamp.$$.mp4"
printf "\n%*s\n\n" $((0)) "Starting new recording: $fileName"
adb -d shell screenrecord /sdcard/$fileName || adb shell echo \04
# running extract in a sub-process means the next video doesn't have any time-gap from the last
wait; extract & continue
done
}
record && wait && exitScript
}
(screenDVR) && printf "\ncontinue main loop\n"; exit

Related

Running commands in bash script in parallel with loop

I have a script where I start a packet capture with tshark and then check whether the user has submitted an input text file.
If there is a file present, I need to run a command for every item in the file through a loop (while tshark is running); else continue running tshark.
I would also like some way to stop tshark with user input such as a letter.
Code snippet:
echo "Starting tshark..."
sleep 2
tshark -i ${iface} &>/dev/null
tshark_pid=$!
# if devices aren't provided (such as in the case of new devices), start capturing directly
if [ -z "$targets" ]; then
echo "No target list provided."
else
for i in $targets; do
echo "Attempting to deauthenticate $i..."
sudo aireplay-ng -0 $number -a $ap -c $i $iface$mon
done
fi
What happens here is that tshark starts, and only when I quit it using Ctrl+c does it move on to the if statement and subsequent loop.
Adding an & at the end of a command executes it in a new subprocess. Mind that you won't be able to kill it with Ctrl+C.
For example:
firefox
will block the shell, while
firefox &
will not block the shell.
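Applied to the snippet in the question, that would look roughly like this (a sketch; it reuses the question's variables and adds the requested stop-on-keypress):
echo "Starting tshark..."
tshark -i "${iface}" &>/dev/null &   # & puts the capture in the background
tshark_pid=$!
if [ -z "$targets" ]; then
    echo "No target list provided."
else
    for i in $targets; do
        echo "Attempting to deauthenticate $i..."
        sudo aireplay-ng -0 "$number" -a "$ap" -c "$i" "$iface$mon"
    done
fi
read -n 1 -s -r -p 'Press any key to stop the capture... '
kill "$tshark_pid"
wait "$tshark_pid" 2>/dev/null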

Wait for process to end OR user input to continue

I have a bash script that runs a command in the background. After it executes it, it displays to the user: Press any key to continue (powered by read -n1 -r -p 'Press any key to continue' value)
I would like to have something monitor the command in the background to see when it is finished, and if it is, I want the script to continue anyway. On the other hand, the process could still be going, and I would like to enable the user to press a key to kill that process instead of waiting for it to complete.
I guess the easiest way to visualize it is a shutdown dialog:
the user can either wait for the timer to reach 0, at which point it shuts down automatically, or click the shut-down button to shut it down immediately.
If you want to wait on the pid until the user hits a key, you can do it as follows:
./long_command.sh &
waitpid=$!
echo "Hit any key to continue or wait until the command finishes"
while kill -0 ${waitpid} 2> /dev/null ; do
if read -n 1 -t 1 KEY ; then
kill ${waitpid}
break
fi
done
Just replace long_command.sh with your command. Here $! returns the PID of the last started subprocess, and kill -0 ${waitpid} checks whether the process still exists (it does not kill the process). ps -q ${waitpid} works on Linux as well, but not on Mac - thank you @leetbacoon for mentioning this. read -n 1 -t 1 means "read one character, but wait at most 1 second" (you could also use fractions like 0.5 here). The return status of this command depends on whether it could read a character within the specified time.
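If you also need to know afterwards whether the command finished on its own or was cut short, you can follow the loop with wait and inspect its status (a small extension of the answer; the messages are placeholders):
wait "${waitpid}"
status=$?
if (( status > 128 )); then
    echo "long_command.sh was killed (signal $(( status - 128 )))"
else
    echo "long_command.sh finished with exit status ${status}"
fi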
Something like this might work for you
#!/bin/bash
doStuff() {
local pidOfParent="$1"
for i in $(seq 10); do
echo "stuff ${i}" > /dev/null
sleep 1
done
kill $pidOfParent
}
doStuff $$ &
doStuffPid="$!"
read -n1 -rp 'Press any key to continue' && kill $doStuffPid
Break down
doStuff is our function that contains what you want to be running in the background i.e. the music.
$$ is the PID of the running script, which we pass into our function to become the more descriptive pidOfParent, which we kill after we've finished doing stuff.
The & after calling the function puts it in the background.
$! gets the PID of the last executed command, thus we now have the PID of the background process we just started.
You provided read -n1 -rp 'Press any key to continue' so I can assume you already know what that does, && kill $doStuffPid will kill the background process when read has exited (this also works if you terminate the script using ^C).
If you are willing to use embedded expect, you can write:
expect <(cat <<'EOD'
spawn sleep 10
send_user "Press any key to continue\n"
stty raw -echo
expect {
-i $user_spawn_id -re ".+" {}
-i $spawn_id eof {}
}
EOD
)
where you would replace sleep 10 with your process.

Bash: Using pkill to track progress of removing a directory

So I have a shell script that does some long operations, and while they run I want to just output a series of dots (.) until it's done, to show that it's running.
I'm using pkill to test that the process is running, and as long as it is it outputs another dot. This works very well for nearly every place I need it. However, one part of the process involves removing a directory, and that is where it breaks down.
Here is my code:
ERROR=$(rm -rf "$1" 2>&1 >/dev/null)
while pkill -0 rm; do
printf "."
sleep 1
done
printf "\n"
I'm using pkill to test the rm process, but when I do, this is the output I get:
pkill: signalling pid 192: Operation not permitted
pkill: signalling pid 326: Operation not permitted
.pkill: signalling pid 61: Operation not permitted
My script runs up until the dot-output code, including the folder deletion, but then it stops and just outputs those three lines over and over again until I forcibly kill the process.
Anyone have any ideas what's going on? I feel like it's not able to work with the rm operation, but I'm not sure.
Thanks in advance.
The problem is that pkill is sending kill(pid, 0) to every process matched by the regex pattern rm. For some of the matched processes (shown by the PIDs), you don't have sufficient permission to send signal 0 to probe the process status.
You can use -x (--exact) option (no Regex) to match only process(es) with exact name rm (given there is no rm by other users running):
pkill -0 -x rm
or use pgrep
pgrep -x rm
Better, also restrict the match to your username:
pkill -0 -x -u username rm
pgrep -x -u username rm
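Plugged into the asker's loop, the exact-match form would look something like this (a sketch that backgrounds the rm and drops the error capture for brevity; the $!-based approach in the next answer is still more robust):
rm -rf "$1" 2>/dev/null &
while pgrep -x -u "$(id -un)" rm > /dev/null; do
    printf "."
    sleep 1
done
printf "\n"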
Your script is not putting rm into the background, so when pkill is being run, presumably it's finding processes owned by other users and is not able to kill them because you cannot kill another user's process unless you are root.
Since you are spawning the process within the script, if you correctly background the rm then you can get the PID of the rm job from $! and use kill instead of pkill.
You should run the rm command and properly background it. The following untested code should do what you're trying to do:
rm -rf "$1" >/dev/null 2>&1 &
RMPID=$!
while kill -0 $RMPID 2>/dev/null; do
printf "."
sleep 1
done
printf "\n"
wait $RMPID
RESULT=$?
if (( $RESULT != 0 )); then
printf "Error when deleting $1\n"
exit 1
fi
You can read the bash documentation for more details on wait and $! and $?

How can I make an external program interruptible in this trap-captured bash script?

I am writing a script which will run an external program (arecord) and do some cleanup if it's interrupted by either a POSIX signal or input on a named pipe. Here's the draft in full
#!/bin/bash
X=`date '+%Y-%m-%d_%H.%M.%S'`
F=/tmp/$X.wav
P=/tmp/$X.$$.fifo
mkfifo $P
trap "echo interrupted && (rm $P || echo 'couldnt delete $P') && echo 'removed fifo' && exit" INT
# this forked process will wait for input on the fifo
(echo 'waiting for fifo' && cat $P >/dev/null && echo 'fifo hit' && kill -s SIGINT $$)&
while true
do
echo waiting...
sleep 1
done
#arecord $F
This works perfectly as it is: the script ends when a signal arrives and a signal is generated if the fifo is written-to.
But instead of the while true loop I want the now-commented-out arecord command; if I run that program in place of the loop, the SIGINT doesn't get caught by the trap and arecord keeps running.
What should I do?
It sounds like you really need this to work more like an init script. So, start arecord in the background and put the pid in a file. Then use the trap to kill the arecord process based on the pidfile.
#!/bin/bash
PIDFILE=/var/run/arecord-runner.pid #Just somewhere to store the pid
LOGFILE=/var/log/arecord-runner.log
#Just one option for how to format your trap call
#Note that this does not use &&, so one failed function will not
# prevent other items in the trap from running
trapFunc() {
echo interrupted
(rm $P || echo 'couldnt delete $P')
echo 'removed fifo'
kill $(cat $PIDFILE)
exit 0
}
X=`date '+%Y-%m-%d_%H.%M.%S'`
F=/tmp/$X.wav
P=/tmp/$X.$$.fifo
mkfifo $P
trap "trapFunc" INT
# this forked process will wait for input on the fifo
(echo 'waiting for fifo' && cat $P >/dev/null && echo 'fifo hit' && kill -s SIGINT $$)&
arecord $F 1>$LOGFILE 2>&1 & #Run in the background, sending logs to file
echo $! > $PIDFILE #Save pid of the last background process to file
while true
do
echo waiting...
sleep 1
done
Also... you may have your trap written with '&&' clauses for a reason, but as an alternative, you can give a function name as I did above, or a sort of anonymous function like this:
trap "{ command1; command2 args; command3; exit 0; }"
Just make sure that each command is followed by a semicolon and there are spaces between the braces and the commands. The risk of using && in the trap is that your script will continue to run past the interrupt if one of the commands before the exit fails to execute (but maybe you want that?).
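Applied to this script, that inline form could look like the following (a hedged rewrite of trapFunc above, using the same $P and $PIDFILE variables):
trap '{ echo interrupted; rm -f "$P" || echo "could not delete $P"; kill "$(cat "$PIDFILE")"; exit 0; }' INT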

How do I receive notification in a bash script when a specific child process terminates?

I wonder if anyone can help with this?
I have a bash script. It starts a sub-process which is another GUI-based application. The bash script then goes into an interactive mode getting input from the user. This interactive mode continues indefinitely. I would like it to terminate when the GUI application in the sub-process exits.
I have looked at SIGCHLD but this doesn't seem to be the answer. Here's what I've tried but I don't get a signal when the prog ends.
set -o monitor
"${prog}" &
prog_pid=$!
function check_pid {
kill -0 $1 2> /dev/null
}
function cleanup {
### does cleanup stuff here
exit
}
function sigchld {
check_pid $prog_pid
[[ $? == 1 ]] && cleanup
}
trap sigchld SIGCHLD
Updated following answers. I now have this working using the suggestion from 'nosid'. I have another, related, issue now which is that the interactive process that follows is a basic menu driven process that blocks waiting for key input from the user. If the child process ends the USR1 signal is not handled until after input is received. Is there any way to force the signal to be handled immediately?
The wait loop looks like this:
stty raw # set the tty driver to raw mode
max=$1 # maximum valid choice
choice=$(expr $max + 1) # invalid choice
while [[ $choice -gt $max ]]; do
choice=`dd if=/dev/tty bs=1 count=1 2>/dev/null`
done
stty sane # restore tty
Updated with solution. I have solved this. The trick was to use nonblocking I/O for the read. Now, with the answer from 'nosid' and my modifications, I have exactly what I want. For completeness, here is what works for me:
#!/bin/bash -bm
{
"${1}"
kill -USR1 $$
} &
function cleanup {
# cleanup stuff
exit
}
trap cleanup SIGUSR1
while true ; do
stty raw # set the tty driver to raw mode
max=9 # maximum valid choice
while [[ $choice -gt $max || -z $choice ]]; do
choice=`dd iflag=nonblock if=/dev/tty bs=1 count=1 2>/dev/null`
done
stty sane # restore tty
# process choice
done
Here is a different approach. Instead of using SIGCHLD, you can execute an arbitrary command as soon as the GUI application terminates.
{
some_command args...
kill -USR1 $$
} &
function sigusr1() { ... }
trap sigusr1 SIGUSR1
Ok. I think I understand what you need. Have a look at my .xinitrc:
xrdb ~/.Xdefaults
source ~/.xinitrc.hw.settings
xcompmgr &
xscreensaver &
# after starting some arbitrary crap we want to start the main gui.
startfluxbox & PIDOFAPP=$! ## THIS IS THE IMPORTANT PART
setxkbmap genja
wmclockmon -bl &
sleep 1
wmctrl -s 3 && aterms sone &
sleep 1
wmctrl -s 0
wait $PIDOFAPP ## THIS IS THE SECOND PART OF THE IMPORTANT PART
xeyes -geometry 400x400+500+400 &
sleep 2
echo im out!
What happens is that after you send a process to the background, you can use wait to wait until the process dies. Whatever comes after wait will not be executed as long as the application is running. You can use this to exit after the GUI has been shut down.
PS: I run bash.
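Stripped down to the part that matters for the question, the pattern is just (a minimal sketch; ${prog} and cleanup as in the question):
"${prog}" &
prog_pid=$!
wait "$prog_pid"   # blocks here until the GUI application exits
cleanup            # runs only after the GUI has gone away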
I think you need to do:
set -bm
or
set -o monitor -o notify
As per the bash manual:
-b
Cause the status of terminated background jobs to be reported immediately, rather than before printing the next primary prompt.
The shell's main job is executing child processes, and
it needs to catch SIGCHLD for its own purposes. This seems to restrict how it passes the signal on to the script itself.
Could you just check for the child PID and, based on that, send the alert? You can find the child PID as below:
bash_pid=$$
while true
do
children=`ps -eo ppid | grep -w $bash_pid`
if [ -z "$children" ]; then
cleanup
alert
exit
fi
sleep 1 # avoid a busy loop while polling
done
