bg / fg inside a command line loop - bash

ctrl-z (^z) acts in ways I do not understand when done inside a loop executed from a terminal.
Say I type
for ii in {0..100}; do echo $ii; sleep 1; done
then I hit ^z. I'll get:
[1]+ Stopped sleep 1
I can resume the job using fg or bg, but the job refers only to the sleep command. The rest of the loop has apparently disappeared, and no more numbers appear on the terminal.
I could append & to the command to run it in the background immediately, or, as another solution, wrap the whole thing in a subshell:
( for ii in {0..100}; do echo $ii; sleep 1; done )
then ^z gives me
[1]+ Stopped ( for ii in {0..100};
do
echo $ii; sleep 1;
done )
This job can be resumed and everyone is happy. But I'm not generally in the habit of doing this when running a one-off task, and the question I am asking is why the first behavior happens in the first place. Is there a way to suspend a command-line loop that isn't subshell'd? And what happened to the rest of the loop in the first example?
Note that losing the rest of the job is specific to the loop. With a plain command list:
echo 1; sleep 5; echo 2
hitting ^z during the sleep causes the echo 2 to execute anyway:
1
^Z
[2]+ Stopped sleep 5
2
Or should I just get in the habit of using & and call it dark magic?

You cannot suspend the execution of the current shell. When you run your loop from the command line, it executes in your current interactive shell. When you press Ctrl+Z, you tell the shell to suspend the currently active foreground process. Your loop is simply a counter in the current shell; the process actually executing at that moment is sleep, so the suspend operates only on sleep.
When you background a process or execute it in a subshell (roughly equivalent), you can suspend that separate process as a whole.
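For illustration, a minimal sketch of the & approach: when the whole loop is started as one background job, job control refers to the entire loop rather than to sleep alone:
for ii in {0..100}; do echo $ii; sleep 1; done &
fg %1   # bring the whole loop back to the foreground
# pressing ^z now stops the entire loop as a single job:
# [1]+  Stopped                 for ii in {0..100}; do echo $ii; sleep 1; done
bg %1   # and it can be resumed in the background again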

Related

How to create Multiple Threads in Bash Shell Script [duplicate]

I have an array of arguments that will be passed to the command in my shell script. I want to be able to do this:
./runtests.sh -b firefox,chrome,ie
where each browser argument will start a separate thread (currently we multithread by opening multiple terminals and starting the commands there).
I have pushed the entered commands into an array:
if [[ $browser == *","* ]]; then
    IFS=',' read -ra browserArray <<< "$browser"
fi
Now I will have to start a separate thread (or process) while looping through the array. Can someone point me in the right direction? My guess, in pseudocode, is something like:
for (( c=0; c<${#browserArray[@]}; c++ ))
do
    startTests &
done
Am I on the right track?
That's not a thread but a background process. They are similar, though:
So, effectively we can say that threads and light-weight processes are the same.
The main difference between a light-weight process (LWP) and a normal process is that LWPs share the same address space and other resources, like open files. Since some resources are shared, these processes are considered lighter weight than other normal processes, hence the name.
NB: Reordered for clarity
From: What are Linux Processes, Threads, Light Weight Processes, and Process State
You can see the running background process using the jobs command. E.g.:
nick@nick-lt:~/test/npm-test$ sleep 10000 &
[1] 23648
nick@nick-lt:~/test/npm-test$ jobs
[1]+  Running                 sleep 10000 &
You can bring them to the foreground using fg:
nick@nick-lt:~/test/npm-test$ fg 1
sleep 10000
where the cursor will wait until the sleep time has elapsed. You can pause the job while it's in the foreground (as in the scenario after fg 1) by pressing CTRL-Z (SIGTSTP), which gives something like this:
[1]+  Stopped                 sleep 10000
and resume it by typing:
bg 1 # Resumes in the background
fg 1 # Resumes in the foreground
and you can kill it by pressing CTRL-C (SIGINT) when it's in the foreground, which simply ends the process, or by using the kill command with a % prefix on the job ID:
kill %1 # Or kill <PID>
Onto your implementation:
BROWSERS=
for i in "$@"; do
    case $i in
        -b)
            shift
            BROWSERS="$1"
            ;;
        *)
            ;;
    esac
done
IFS=',' read -r -a SPLITBROWSERS <<< "$BROWSERS"
for browser in "${SPLITBROWSERS[@]}"
do
    echo "Running ${browser}..."
    $browser &
done
Can be called as:
./runtests.sh -b firefox,chrome,ie
Tadaaa.
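One follow-up worth noting, under the assumption that the script should not exit until all the tests finish: a bare wait blocks until every background child has exited, turning the loop into "run in parallel, then join". Here startTests stands in for the asker's hypothetical test-runner function:
for browser in "${SPLITBROWSERS[@]}"
do
    echo "Running ${browser}..."
    startTests "$browser" &   # asker's hypothetical function, one process per browser
done
wait                          # join: block until all parallel test runs have exited
echo "All tests finished"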

run forked process continuously, kill after interval

I'm having a difficult time writing a bash script; hoping someone could help. Basically I'm trying to run a number of processes at the same time and then kill them all after an interval.
So, for example, if I want to run my_long_running_task 50 times and kill them all after 10 minutes, this is what I came up with:
#!/bin/bash
PIDS=()
( while :
do
    my_long_running_task
    sleep 1
done ) &
PIDS+=($!)
...(repeat the while loop 50 times, or stick it in a for loop)...
sleep 600; # 10 minutes * 60 seconds
for p in "${PIDS[@]}"
do
    kill $p
done
I'm not a bash expert, but that seems like it should work: fork all the processes, adding their PIDs to an array; then at the end just sleep for a certain amount of time before iterating over the array and killing all the PIDs. And indeed this worked for my very simple POC:
#!/bin/bash
PIDS=()
( while :
do
    echo '1'
    sleep 1
done ) &
PIDS+=($!)
( while :
do
    echo '2'
    sleep 1
done ) &
PIDS+=($!)
( sleep 10
  for p in "${PIDS[@]}"
  do
      kill $p
  done )
But when I do something more interesting than echo (like, in my case, running phantomjs), the processes don't get killed after the interval.
Any thoughts? What am I missing?
Your wish is my command (at least, when your wish aligns sufficiently with my desires):
When you run phantomjs, do you run it with exec or just as a normal process?
Does it make any difference if you do use exec?
The thought behind the questions is that killing the subshell kills the subshell itself (which, in the case of echo, is also the process doing the work), but that doesn't necessarily kill the children of that subshell. Maybe you need to use something like:
kill -TERM -- -$p
or
kill -- -$p
to send the signal to the process group, rather than just the one process.
Also, consider whether a 'time out' command would make your life easier (timeout on Linux).
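As a hedged sketch of the process-group idea: if the script enables job control with set -m, each background job becomes the leader of its own process group, so $! doubles as a process-group ID and a negative ID signals the whole group, phantomjs children included:
#!/bin/bash
set -m                        # job control on: each background job gets its own process group
PIDS=()
( while :; do my_long_running_task; sleep 1; done ) &
PIDS+=($!)                    # with set -m, $! is also the process-group ID
sleep 600                     # 10 minutes
for p in "${PIDS[@]}"; do
    kill -TERM -- "-$p"       # negative ID: signal the entire group
done
Alternatively, timeout 600 my_long_running_task puts a 10-minute cap on a single run without any PID bookkeeping.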

Referencing Bash jobs in other jobs

I want to reference a background Bash job in another background Bash job. Is that possible?
For example, say I start a background job:
$ long_running_process &
[1] 12345
Now I want something to happen when that job finishes, so I can use wait:
$ wait %1 && thing_to_happen_after_long_running_process_finishes
However, that will block, and I want my terminal back to do other stuff, but Ctrl+Z does nothing.
Attempting to start this in the background in the first place instead fails:
$ { wait %1 && thing_to_happen_after_long_running_process_finishes; } &
[2] 12346
-bash: line 3: wait: %1: no such job
$ jobs
[1]- Running long_running_process &
[2]+ Exit 127 { wait %1 && thing_to_happen_after_long_running_process_finishes; }
Is there some way to reference one job using wait in another background job?
I see this behaviour using GNU Bash 4.1.2(1)-release.
A shell can only wait on its own children. Since backgrounding a job creates a new shell, a wait in that shell can wait only on its own children, not on the children of its parent (i.e., the shell from which the backgrounded wait forked). For what you want, you need to plan ahead:
long_running_process && thing_to_happen_after &
There is one alternative:
long_running_process &
LRP_PID=$!
{ while kill -0 $LRP_PID 2> /dev/null; do sleep 1; done; thing_to_happen_after; } &
This sets up a loop that pings your background process once a second. When the process has finished, the kill will fail, and the subshell moves on to the post-process program. It carries the slight risk that your process could exit and another process be given the same process ID between checks, in which case the kill would be confused and think your process was still running when in fact it is the new one. But the risk is very slight, and it might even be acceptable for thing_to_happen_after to be delayed a little longer, until no process with ID $LRP_PID exists.
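On systems with GNU coreutils, a common alternative to the hand-rolled polling loop is GNU tail's --pid option (an assumption worth checking: BSD tail lacks it), which exits once the watched process dies:
long_running_process &
LRP_PID=$!
{ tail --pid="$LRP_PID" -f /dev/null; thing_to_happen_after; } &
This carries the same PID-reuse caveat as the kill -0 loop, but keeps the script shorter.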
Try something like this:
x2=$(long_running_process && thing_to_happen_after_long_running_process_finishes) &

Starting and monitoring a process inside a shell script for completion

I have a simple shell script, shown below:
#!/usr/bin/sh
echo "starting the process which is a c++ process which does some database action for around 30 minutes"
#this below process should be run in the background
<binary name> <arg1> <arg2>
exit
Now what I want is to monitor and display the status of the process. I don't want to go deep into its functionality. Since I know that the process will complete in about 30 minutes, I want to show the user that 3.3% is completed for every minute that passes, check whether the process is still running in the background, and finally, when the process is complete, display that it has completed.
Could anybody please help me?
The best thing you could do is put some kind of instrumentation in your application and let it report actual progress in terms of work items processed / total amount of work.
Failing that, you can indeed refer to the time the thing has been running.
Here's a sample of what I've used in the past. Works in ksh93 and bash.
#! /bin/ksh
set -u
prog_under_test="sleep"
args_for_prog=30
max=30 interval=1 n=0

main() {
    ($prog_under_test $args_for_prog) & pid=$! t0=$SECONDS
    while is_running $pid; do
        sleep $interval
        (( delta_t = SECONDS-t0 ))
        (( percent = 100*delta_t/max ))
        report_progress $percent
    done
    echo
}

is_running() { (kill -0 ${1:?is_running: missing process ID}) 2>&-; }

function report_progress { typeset percent=$1
    printf "\r%5.1f %% complete (est.) " $(( percent ))
}

main
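To adapt this to the question's 30-minute process, only the knobs at the top change (the values below are hypothetical; substitute the real binary and arguments):
prog_under_test="./mybinary"   # hypothetical stand-in for <binary name>
args_for_prog="arg1 arg2"      # hypothetical stand-in for <arg1> <arg2>
max=1800 interval=60           # 30 minutes total, one report per minute (about 3.3% per tick)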
If your process involves a pipe, then pv (http://www.ivarch.com/programs/quickref/pv.shtml) would be an excellent solution, and an alternative is clpbar (http://clpbar.sourceforge.net/). But these are essentially like cat with a progress bar and need something to pipe through them. There is also a small program that you could compile, execute as a background process, and then kill when things finish up (http://www.dreamincode.net/code/snippet3062.htm); that would probably work if you just want to display something for 30 minutes and then print "almost done" in the console if your process runs long, but you would have to modify it. It might be better just to create another shell script that displays a character every few seconds in a loop and checks whether the PID of the earlier process is still running: record the background process's PID from $!, then check whether it is still running via /proc/<pid>.
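A minimal sketch of that last suggestion, assuming a Linux /proc filesystem (elsewhere, kill -0 "$pid" 2>/dev/null can serve as the liveness test); long_task is a hypothetical stand-in for the 30-minute binary:
long_task &                     # hypothetical stand-in for the real binary
pid=$!                          # PID of the background process
while [ -d "/proc/$pid" ]; do   # the directory exists while the process is alive
    printf '.'                  # heartbeat character every few seconds
    sleep 5
done
echo " completed"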
You really should let the command output statistics, but for simplicity's sake, you can do something like this to increment a counter while your process runs:
#!/bin/sh
cmd & # execute a command
pid=$! # record the pid of the command
i=0
while sleep 60; do
    : $(( i += 1 ))
    e=$( echo $i 3.3 \* p | dc )   # compute percent completed
    printf "$e percent complete\r" # report completion
done & # reporter is running in the background
pid2=$! # record reporter's pid
# Wait for the original command to finish
if wait $pid; then
    echo cmd completed successfully
else
    echo cmd failed
fi
kill $pid2 # kill the status reporter

How to restart a BASH script from itself with a signal?

For example, I have a script with an infinite loop printing something to stdout. I need to trap a signal (for example SIGHUP) so that it restarts the script with a different PID and the loop starts itself again from 0. Killing and starting doesn't work as expected:
function traphup(){
    kill $0
    exec $0
}
trap traphup HUP
Maybe I should place something in the background or use nohup, but I am not familiar with that command.
In your function:
traphup(){
    $0 "$@" &
    exit 0
}
This starts a new process in the background with the original command name and arguments (vary the arguments to suit your requirements) and a new process ID; the original shell then exits. Don't forget to sort out the PID file if your daemon uses one to identify itself, though the restart may handle that anyway.
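For instance, if the daemon does record itself in a PID file, the handler can refresh the file before the old process exits (the path here is hypothetical):
traphup(){
    $0 "$@" &
    echo $! > /var/run/mydaemon.pid   # hypothetical PID file: record the replacement's PID
    exit 0
}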
Note that using nohup would be the wrong direction; the first time you launched the daemon, it would respond to the HUP signal, but the one launched with nohup would ignore the signal, not restarting again - unless you explicitly overrode the 'ignore' status, which is a bad idea for various reasons.
Answering comment
I'm not quite sure what the trouble is.
When I run the following script, I only see one copy of the script in ps output, regardless of whether I start it as ./xx.sh or as ./xx.sh &.
#!/bin/bash
traphup()
{
    $0 "$$" &
    exit 0
}
trap traphup HUP
echo
sleep 1
i=1
while [ $i -lt 1000 ]
do
    echo "${1:-<none>}: $$: $i"
    sleep 1
    : $(( i++ ))
done
The output contains lines such as:
<none>: 1155: 21
<none>: 1155: 22
<none>: 1155: 23
1155: 1649: 1
1155: 1649: 2
1155: 1649: 3
1155: 1649: 4
The ones with '<none>' are from the original process; the second set are from the child process (1649) reporting its parent (1155). This output made it easy to track which process to send HUP signals to. (The initial echo and sleep get the command-line prompt out of the way of the output.)
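For example, using the PIDs from the sample output above, the handoff is driven from another terminal like this:
kill -HUP 1155    # 1155 spawns its replacement (1649 here) and exits
kill -HUP 1649    # each later HUP goes to the newest PID shown in the output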
My suspicion is that what you are seeing depends on the content of your script - in my case, the body of the loop is simple. But if I had a pipeline or something in there, then I might see a second process with the same name. But I don't think that would change depending on whether the original script is run in foreground or background.
