How to prevent background command going to suspend immediately [duplicate] - bash

Imagine this script (ignore auth and other details; all the SSH commands run just fine without &):
(ssh foo.com "/bin/sleep 5 && echo 1") &
(ssh bar.com "/bin/sleep 5 && echo 1") &
wait
echo "My commands finished"
Now, I would expect all my SSH commands to run immediately as background jobs and then, when finished, I would get the final "My commands finished" message.
But that's not what happens...
What actually happens is this
[1] 16155
[1] + 16155 suspended (tty input) ssh foo.com "/bin/sleep 5 && echo 1"
[2] 16156
[2] + 16156 suspended (tty input) ssh bar.com "/bin/sleep 5 && echo 1"
My commands finished
So all the background commands go immediately into the suspended state, where they stay forever. Sure, I can bring them back with fg or kill -CONT PID, but that's all sequential. I need to run all my commands in parallel and just wait for all of them to finish.
Do you know why that is and how to avoid the suspended state?

Thanks to n.m. The solution is to redirect stdin from /dev/null. With that redirection the subprocess never tries to read from the terminal, so it is never suspended waiting for tty input as in the previous case.
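A minimal sketch of the fixed script from the question (ssh's -n option performs the same stdin redirection):
# Redirect stdin from /dev/null so ssh never reads from the terminal.
(ssh foo.com "/bin/sleep 5 && echo 1" < /dev/null) &
(ssh bar.com "/bin/sleep 5 && echo 1" < /dev/null) &
wait
echo "My commands finished"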


How can parallel Bash process be killed? [duplicate]

I need a way to kill all the processes used in the script below. I removed all the unnecessary code and made it as simple as I can, without ruining the structure.
I made the kill_other function for that, which, when called (after CTRL + C is pressed), is supposed to kill everything.
Right now if I kill the script, functionOne and adb logcat continue running.
How can I kill them?
Also, since I'm fairly new to trap, I have to ask: have I positioned it correctly in the code?
EDIT: I am running this script on Ubuntu 19.04 and my target is an Android phone, in case anyone needs the info.
#!/bin/sh
kill_other(){
    ## Insert code here to kill all other functions
    exit
}

trap 'kill_other' SIGINT

main(){
    functionOne &
    adb logcat > log_test.log &
    while true
    do
        echo "==================================================================="
        memPrint
        sleep 10
    done
}

functionOne(){
    while true
    do
        sleep 20
        echo "==================================================================="
        echo "Starting app 1"
        echo "==================================================================="
        functionTwo
        sleep 20
        echo "==================================================================="
        echo "Starting app 2"
        echo "==================================================================="
        functionThree
    done
}

functionTwo(){
    adb shell monkey -p com.google.android.youtube -c android.intent.category.LAUNCHER 1
}

functionThree(){
    adb shell monkey -p tv.twitch.android.app -c android.intent.category.LAUNCHER 1
}

memPrint(){
    adb shell dumpsys meminfo | grep -A 10 "Total PSS by process\|Foreground\|Perceptible\|Total RAM\|Home"
}

## Start
main
You might try running "ps -ef" and grepping the results for "adb", or, if the binary is actually adb, you can try "pkill adb".
Or, as I saw elsewhere (Android ADB stop application command like "force-stop" for non rooted device):
adb shell ps => will list all running processes on the device and their process ids
adb shell kill <PID> => use your application's process id in place of <PID>
These are the options to place in the kill function, just to clarify.
Regards!
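A minimal sketch of one way to fill in kill_other, assuming the background jobs remain direct children of the script: record each job's PID as it is launched and kill them in the trap handler (func_pid and logcat_pid are made-up variable names):
#!/bin/sh
kill_other(){
    # Kill the recorded background jobs; ignore errors if they already exited.
    kill "$func_pid" "$logcat_pid" 2>/dev/null
    exit
}
trap 'kill_other' INT

functionOne &
func_pid=$!
adb logcat > log_test.log &
logcat_pid=$!
Note that killing functionOne's shell does not necessarily take down whatever child it is running at that moment (a sleep or an adb call); where available, pkill -P "$func_pid" removes those too.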

Execute command on second terminal in bash script

I am writing a bash script that executes two commands at a time in two different terminals; the original terminal should wait for both terminals to finish and then continue with the rest of the script.
I am able to open a different terminal with the required command; however, the original terminal does not seem to wait for the second one to complete and simply continues with the rest of the script.
#!/bin/bash
read -p "Hello"
read -p "Press enter to start sql installation"
for i in 1
do
    xterm -hold -e mysql_secure_installation &
done
echo "completed installation"
Use the Bash wait builtin to make the calling script wait for background processes to complete. Your for loop implies that you may be launching multiple background processes in parallel, even though in your question there's only one. Without any arguments, wait waits for all of them.
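A minimal sketch of the question's script with wait added. Note that xterm -hold keeps each window open after its command exits, so wait only returns once every window has been closed manually:
#!/bin/bash
read -p "Press enter to start sql installation"
for i in 1
do
    xterm -hold -e mysql_secure_installation &
done
# Block here until every background xterm has exited.
wait
echo "completed installation"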
I wonder why you're launching the processes in xterm instead of directly.

Run / Close Programs over and over again

Is there a way I can write a simple script to run a program, close that program about 5 seconds later, and then repeat?
I just want to be able to run a program that I wrote over and over again, but to do so I'd have to close it about 5 seconds after running it.
Thanks!
If your command is non-interactive (requires no user interaction):
Launch your program in the background with control operator &, which gives you access to its PID (process ID) via $!, by which you can kill the running program instance after sleeping for 5 seconds:
#!/bin/bash
# Start an infinite loop.
# Use ^C to abort.
while :; do
    # Launch the program in the background.
    /path/to/your/program &
    # Wait 5 seconds, then kill the program (if still alive).
    sleep 5 && { kill $! && wait $!; } 2>/dev/null
done
If your command is interactive:
More work is needed if your command must run in the foreground to allow user interaction: then it is the command to kill the program after 5 seconds that must run in the background:
#!/bin/bash
# Turn on job control, so we can bring a background job back to the
# foreground with `fg`.
set -m
# Start an infinite loop.
# CAVEAT: The only way to exit this loop is to kill the current shell.
# Setting up an INT (^C) trap doesn't help.
while :; do
    # Launch program in background *initially*, so we can reliably
    # determine its PID.
    # Note: The command line being sent to the background is invariably
    # printed to stderr. I don't know how to suppress it (the usual tricks
    # involving subshells and group commands do not work).
    /path/to/your/program &
    pid=$! # Save the PID of the background job.
    # Launch the kill-after-5-seconds command in the background.
    # Note: A status message is invariably printed to stderr when the
    # command is killed. I don't know how to suppress it (the usual tricks
    # involving subshells and group commands do not work).
    { (sleep 5 && kill $pid &) } 2>/dev/null
    # Bring the program back to the foreground, where you can interact with it.
    # Execution blocks until the program terminates - whether by itself or
    # by the background kill command.
    fg
done
Check out the watch command. It will let you run a program repeatedly while monitoring its output. You might have to get a little fancy if you need to kill that program manually after 5 seconds.
https://linux.die.net/man/1/watch
A simple example:
watch -n 5 foo.sh
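If you also need the program killed when a run takes too long, one option (an assumption on my part; it requires GNU coreutils' timeout) is to combine the two:
# Re-run foo.sh every 5 seconds; timeout kills any run lasting longer than 5 seconds.
watch -n 5 'timeout 5 ./foo.sh'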
To literally answer your question:
Run 10 times with sleep 5:
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 10 ]; do
    # your script
    sleep 5
    let COUNTER=COUNTER+1
done
Run continuously:
#!/bin/bash
while [ 1 ]; do
    # your script
    sleep 5
done
If the program requires no input, you can simply do
#!/bin/bash
while [ 1 ]
do
    ./exec_name
    if [ $? == 0 ]
    then
        sleep 5
    fi
done
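And for a literal "run, kill 5 seconds later, repeat" with a non-interactive program, GNU coreutils' timeout (assuming it is available on your system) collapses the kill logic from the first answer into one word:
#!/bin/bash
while :; do
    # Run the program; timeout sends it SIGTERM after 5 seconds.
    timeout 5 /path/to/your/program
done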

How to create Multiple Threads in Bash Shell Script [duplicate]

I have an array of arguments that will be used in the command for my shell script. I want to be able to do this
./runtests.sh -b firefox,chrome,ie
where each command here will start a separate thread (currently we are multithreading by opening multiple terminals and starting the commands there)
I have pushed the entered commands into an array:
if [[ $browser == *","* ]]; then
    IFS=',' read -ra browserArray <<< "$browser"
fi
Now I will have to start a separate thread (or process) while looping through the array. Can someone guide me in the right direction? My guess, in pseudocode, is something like
for (( c=0; c<${#browserArray[@]}; c++ ))
do
    startTests &
done
Am I on the right track?
That's not a thread, but a background process. They are similar but:
So, effectively we can say that threads and light weight processes are same.
The main difference between a light weight process (LWP) and a normal process is that LWPs share same address space and other resources like open files etc. As some resources are shared so these processes are considered to be light weight as compared to other normal processes and hence the name light weight processes.
NB: Reordered for clarity
What are Linux Processes, Threads, Light Weight Processes, and Process State
You can see the running background process using the jobs command. E.g.:
nick@nick-lt:~/test/npm-test$ sleep 10000 &
[1] 23648
nick@nick-lt:~/test/npm-test$ jobs
[1]+  Running                 sleep 10000 &
You can bring them to the foreground using fg:
nick@nick-lt:~/test/npm-test$ fg 1
sleep 10000
where the cursor will wait until the sleep time has elapsed. You can pause the job when it's in the foreground (as in the scenario after fg 1) by pressing CTRL-Z (SIGTSTP), which gives something like this:
[1]+  Stopped                 sleep 10000
and resume it by typing:
bg 1 # Resumes in the background
fg 1 # Resumes in the foreground
and you can kill it by pressing CTRL-C (SIGINT) when it's in the foreground, which just ends the process, or by using the kill command with the % prefix on the job ID:
kill %1 # Or kill <PID>
Onto your implementation:
BROWSERS=
for i in "$@"; do
    case $i in
        -b)
            shift
            BROWSERS="$1"
            ;;
        *)
            ;;
    esac
done

IFS=',' read -r -a SPLITBROWSERS <<< "$BROWSERS"

for browser in "${SPLITBROWSERS[@]}"
do
    echo "Running ${browser}..."
    $browser &
done
Can be called as:
./runtests.sh -b firefox,chrome,ie
Tadaaa.
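If the script should also block until every background test finishes (as the multi-terminal workflow implies), add a wait after the loop, exactly as in the first question above. A sketch:
for browser in "${SPLITBROWSERS[@]}"
do
    echo "Running ${browser}..."
    $browser &
done
# Block until all background jobs have finished.
wait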

Bash script that will survive disconnection, but not user break

I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I can solve the first part of it like this:
#!/bin/bash
cmd='
#commands here, avoiding single quotes...
'
nohup bash -c "$cmd" &
tail -f nohup.out
But pressing Ctrl+C obviously just kills the tail process, not the main body. Can I have both? Maybe using Screen?
I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I think this is exactly the answer to the question you formulated, this one without screen:
#!/bin/bash
cmd=`cat <<EOF
# commands here
EOF
`
nohup bash -c "$cmd" &
# store the process id of the nohup process in a variable
CHPID=$!
# whenever ctrl-c is pressed, kill the nohup process before exiting
trap "kill -9 $CHPID" INT
tail -f nohup.out
Note however that nohup is not reliable. When the invoking user logs out, chances are that nohup also quits immediately. In that case disown works better.
bash -c "$cmd" &
CHPID=$!
disown
This is probably the simplest form using screen:
screen -S SOMENAME script.sh
Then, if you get disconnected, on reconnection simply run:
screen -r SOMENAME
Ctrl+C should continue to work as expected.
Fact 1: When a terminal (xterm for example) gets closed, the shell is supposed to send a SIGHUP ("hangup") to any processes running in it. This harkens back to the days of analog modems, when a program needed to clean up after itself if mom happened to pick up the phone while you were online. The signal could be trapped, so that a special function could do the cleanup (close files, remove temporary junk, etc). The concept of "losing your connection" still exists even though we use sockets and SSH tunnels instead of analog modems. (Concepts don't change; all that changes is the technology we use to implement them.)
Fact 2: The effect of Ctrl-C depends on your terminal settings. Normally, it will send a SIGINT, but you can check by running stty -a in your shell and looking for "intr".
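For example, a quick check (output varies by system; the intr entry shows which key sends SIGINT):
$ stty -a | grep intr
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>;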
You can use these facts to your advantage, using bash's trap command. For example try running this in a window, then press Ctrl-C and check the contents of /tmp/trapped. Then run it again, close the window, and again check the contents of /tmp/trapped:
#!/bin/bash
trap "echo 'one' > /tmp/trapped" 1
trap "echo 'two' > /tmp/trapped" 2
echo "Waiting..."
sleep 300000
For information on signals, you should be able to man signal (FreeBSD or OSX) or man 7 signal (Linux).
(For bonus points: See how I numbered my facts? Do you understand why?)
So ... to your question. To "survive" disconnection, you want to specify behaviour that will be run when your script traps SIGHUP.
(Bonus question #2: Now do you understand where nohup gets its name?)
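A minimal sketch putting both facts together: ignore SIGHUP so the script survives disconnection, while leaving SIGINT untouched so Ctrl+C still aborts (long_running_command is a placeholder):
#!/bin/bash
# Ignore the hangup signal: the script keeps running after a disconnect.
trap '' HUP
# SIGINT keeps its default behaviour, so Ctrl+C still aborts the script.
long_running_command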
