Control operator as optional parameter in bash script

I am trying to run a program, optionally in the background. Is there a way to pass a control operator optionally? Something like:
if some_condition; then
    bg='&'
fi
myprog $bg
However, as far as I can see, bash is (rightly) treating $bg as an argument to myprog. I am trying to get myprog running in the background.
Is there a way to do this?

How about:
if some_condition
then
    myprog &
else
    myprog
fi
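If something later in the script needs to interact with the background copy, a small extension of this pattern captures its PID (a sketch; myprog_pid is an illustrative name):
if some_condition; then
    myprog &
    myprog_pid=$!    # remember the background PID
else
    myprog
fi
# ... later in the script ...
[ -n "${myprog_pid:-}" ] && wait "$myprog_pid"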

I'm in favour of James Brown's answer; however, if you only want to write myprog once, you can use a function to decide whether it should run in the background. This does mean writing a few more lines up front, but it can pay off if you're potentially running a lot of commands in the background...
run_in_background=false

run_cmd () {
    echo "Running $@"
    if $run_in_background; then
        "$@" &
    else
        "$@"
    fi
}

if [[ "$1" == '--bg' ]]; then
    run_in_background=true
    shift
fi

run_cmd "$@"
Then you can run it like so:
./script.bash --bg sleep 1
Or you could run it within the script itself:
... (Continuing from inside the script above)
run_cmd sleep 1
run_cmd echo hello
wait # Waits for background processes to finish
You can then determine whether the commands passed to run_cmd run in the foreground or the background by omitting or supplying the --bg flag.
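One detail worth noting (my addition, illustrative): the "$@" quoting inside run_cmd is what keeps arguments containing spaces intact.
run_cmd touch 'file with spaces'    # passed through as a single argument; one file is created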

Related

Why is the second bash script not printing its iteration?

I have two bash scripts:
a.sh:
echo "running"
doit=true
if [ "$doit" = true ]; then
    ./b.sh &
fi
some-long-operation-binary
echo "done"
b.sh:
for i in {0..50}; do
    echo "counting"
    sleep 1
done
I get this output:
> ./a.sh
running
counting
Why do I only see the first "counting" from b.sh and then nothing more? (For this example, some-long-operation-binary is just sleep 5.) I first thought that because b.sh was put in the background its stdout would be lost, but then why do I see the first line of output? More importantly: is b.sh still running and doing its thing (its iteration)?
For context:
b.sh is going to poll a service provided by some-long-operation-binary, which is only available after some time the latter has run, and when ready, would write its content to a file.
Apologies if this is just rubbish, it's a bit late...
You should add #!/bin/bash (or the like) to b.sh, since it uses a Bash-only brace expansion, to make sure Bash is actually running the script. Otherwise there may indeed be only one loop iteration, with i set to the literal string {0..50}.
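For example, under a POSIX shell such as dash, brace expansion is not performed and the loop body runs exactly once (illustrative demonstration):
dash -c 'for i in {0..3}; do echo "counting $i"; done'
# prints a single line: counting {0..3}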
When you start a background process, it is usually a good practice to kill it and wait for it, no matter which way the script exits.
#!/bin/bash
set -e -o pipefail

declare -i show_counter=1

counter() {
    local -i i
    for ((i = 0;; ++i)); do
        echo "counting $((i))"
        sleep 1
    done
}

echo starting

if ((show_counter)); then
    counter &
    declare -i counter_pid="${!}"
    # Kill and reap the counter, whichever way the script exits.
    trap 'kill "${counter_pid}"
          wait -n "${counter_pid}" || :
          echo terminating' EXIT
fi

sleep 10 # long-running process

Getopts in sourced Bash function works interactively, but not in test script?

I have a Bash function library and one function is proving problematic for testing. prunner is a function that is meant to provide some of the functionality of GNU Parallel, and avoid the scoping issues of trying to use other Bash functions in Perl. It supports setting a command to run against the list of arguments with -c, and setting the number of background jobs to run concurrently with -t.
In testing it, I have ended up with the following scenario:
prunner -c "gzip -fk" *.out - works as expected in test.bash and interactively.
find . -maxdepth 1 -name "*.out" | prunner -c echo -t 6 - does not work, seemingly ignoring -c echo.
Testing was performed on Ubuntu 16.04 with Bash 4.3 and on Mac OS X with Bash 4.4.
What appears to be happening with the latter in test.bash is that getopts is refusing to process -c, and thus prunner will try to directly execute the argument without the prefix command it was given. The strange part is that I am able to observe it accepting the -t option, so getopts is at least partially working. Bash debugging with set -x has not been able to shed any light on why this is happening for me.
Here is the function in question, lightly modified to use echo instead of log and quit so that it can be used separately from the rest of my library:
prunner () {
    local PQUEUE=()
    while getopts ":c:t:" OPT ; do
        case ${OPT} in
            c) local PCMD="${OPTARG}" ;;
            t) local THREADS="${OPTARG}" ;;
            :) echo "ERROR: Option '-${OPTARG}' requires an argument." ;;
            *) echo "ERROR: Option '-${OPTARG}' is not defined." ;;
        esac
    done
    shift $((OPTIND-1))
    for ARG in "$@" ; do
        PQUEUE+=("$ARG")
    done
    if [ ! -t 0 ] ; then
        while read -r LINE ; do
            PQUEUE+=("$LINE")
        done
    fi
    local QCOUNT="${#PQUEUE[@]}"
    local INDEX=0
    echo "Starting parallel execution of $QCOUNT jobs with ${THREADS:-8} threads using command prefix '$PCMD'."
    until [ ${#PQUEUE[@]} == 0 ] ; do
        if [ "$(jobs -rp | wc -l)" -lt "${THREADS:-8}" ] ; then
            echo "Starting command in parallel ($((INDEX+1))/$QCOUNT): ${PCMD} ${PQUEUE[$INDEX]}"
            eval "${PCMD} ${PQUEUE[$INDEX]}" || true &
            unset PQUEUE[$INDEX]
            ((INDEX++)) || true
        fi
    done
    wait
    echo "Parallel execution finished for $QCOUNT jobs."
}
Can anyone please help me to determine why -c options are not working correctly for prunner when lines are piped to stdin?
My guess is that you are executing the two commands in the same shell. In that case, in the second invocation, OPTIND will have the value 3 (which is where it got to on the first invocation) and that is where getopts will start scanning.
If you use getopts to parse arguments to a function (as opposed to a script), declare OPTIND as a local variable (local OPTIND=1) to keep invocations from interfering with each other.
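A minimal sketch of that fix (the function and option names are illustrative):
myfunc () {
    local OPTIND=1    # without this, a previous call's OPTIND leaks into this one
    local cmd=''
    while getopts ":c:" opt; do
        case ${opt} in
            c) cmd="${OPTARG}" ;;
        esac
    done
    shift $((OPTIND - 1))
    echo "cmd='${cmd}', remaining args: $*"
}
myfunc -c foo a b    # parses -c correctly even on repeated calls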
Perhaps you are already doing this, but make sure to pass the top-level shell parameters to your function. The function will receive the parameters via the call, for example:
xyz () {
echo "First arg: ${1}"
echo "Second arg: ${2}"
}
xyz "This is" "very simple"
In your example, you should always call the function with the standard args so that they can be processed inside the function via getopts.
prunner "$#"
Note that prunner will not modify the standard args outside of the function.
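A quick illustration of that point, reusing the xyz function above:
set -- one two three
xyz "$@"     # the function receives copies of the arguments
echo "$#"    # still prints 3; the caller's positional parameters are untouched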

Run bash script in background by default

I know I can run my bash script in the background by using bash script.sh & disown or alternatively, by using nohup. However, I want to run my script in the background by default, so when I run bash script.sh or after making it executable, by running ./script.sh it should run in the background by default. How can I achieve this?
Self-contained solution:
#!/bin/bash
# Re-spawn as a background process, if we haven't already.
if [[ "$1" != "-n" ]]; then
    nohup "$0" -n &
    exit $?
fi

# Rest of the script follows. This is just an example.
for i in {0..10}; do
    sleep 2
    echo $i
done
The if statement checks whether the -n flag has been passed. If not, the script calls itself with nohup (to detach from the calling terminal, so closing it doesn't close the script) and & (to put the process in the background and return to the prompt). The parent then exits, leaving the background version to run. The background version is explicitly called with the -n flag, so it won't cause an infinite loop (which is hell to debug!).
The for loop is just an example. Use tail -f nohup.out to see the script's progress.
Note that I pieced this answer together with this and this but neither were succinct or complete enough to be a duplicate.
Simply write a wrapper that calls your actual script with nohup actualScript.sh &.
Wrapper script wrapper.sh
#! /bin/bash
nohup ./actualScript.sh &
Actual script in actualScript.sh
#! /bin/bash
for i in {0..10}; do
    sleep 10 # script is running; test with ps -eaf | grep actualScript
    echo $i
done
tail -f nohup.out
0
1
2
3
4
...
Adding to Heath Raftery's answer, what worked for me is this variation of his suggestion:
if [[ "$1" != "-n" ]]; then
    "$0" -n & disown
    exit $?
fi

Input to proprietary command prompt in shell script

I want to know how to provide input to a command prompt that changes, using a shell script.
Example, where '#' is the usual shell prompt and '>' is a prompt specific to my program:
mypc:/home/usr1#
mypc:/home/usr1# myprogram
myprompt> command1
response1
myprompt> command2
response2
myprompt> exit
mypc:/home/usr1#
mypc:/home/usr1#
If I understood correctly, you want to send specific commands to your program myprogram sequentially.
To achieve that, you could use a simple expect script. I will assume the prompt for myprogram is myprompt>, and that the myprompt> symbol does not appear in response1:
#!/usr/bin/expect -f
# this is the process we monitor
spawn ./myprogram
# we wait until 'myprompt>' is displayed on screen
expect "myprompt>" {
    # when this appears, we send the following input (\r is the ENTER key press)
    send "command1\r"
}
# we wait until the 1st command is executed and 'myprompt>' is displayed again
expect "myprompt>" {
    # same steps as before
    send "command2\r"
}
# if we want to manually interact with our program, uncomment the following line.
# otherwise, the program will terminate once 'command2' is executed
#interact
To launch it, make the script executable (chmod +x myscript.expect) and invoke ./myscript.expect from the same folder as myprogram.
Given that myprogram is a script, it must be prompting for input with something like while read IT; do ...something with $IT...; done. It's hard to say exactly how to change that script without seeing it, but adding echo -n 'myprompt> ' before the read would be the simplest change.
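For instance, a hypothetical version of such a read loop with the prompt added (read -p prints the prompt before reading each line):
while IFS= read -r -p 'myprompt> ' IT; do
    echo "...something with $IT..."
done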
This can be done with PS3 and the select construct:
#!/bin/bash
PS3='myprompt> '
select cmd in command1 command2
do
    case $REPLY in
        command1)
            echo response1
            ;;
        command2)
            echo response2
            ;;
        exit)
            break
            ;;
    esac
done
Or with the echo and read builtins:
prompt='myprompt> '
while [[ $cmd != exit ]]; do
    echo -n "$prompt"
    read -r cmd
    echo "${cmd/#command/response}"
done

How to include nohup inside a bash script?

I have a large script called mandacalc which I want to always run with the nohup command. If I call it from the command line as:
nohup mandacalc &
everything runs swiftly. But if I try to include nohup inside my command, so I don't need to type it every time I execute it, I get an error message.
So far I tried these options:
nohup (
command1
....
commandn
exit 0
)
and also:
nohup bash -c "
command1
....
commandn
exit 0
" # and also with single quotes.
So far I only get error messages complaining about the implementation of the nohup command, or about other quotes used inside the script.
cheers.
Try putting this at the beginning of your script:
#!/bin/bash
case "$1" in
-d|--daemon)
$0 < /dev/null &> /dev/null & disown
exit 0
;;
*)
;;
esac
# do stuff here
If you now start your script with --daemon as an argument, it will restart itself detached from your current shell.
You can still run your script "in the foreground" by starting it without this option.
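For example (assuming the script is saved as script.sh):
./script.sh --daemon    # detaches and returns to the prompt immediately
./script.sh             # runs in the foreground as usual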
Just put trap '' HUP at the beginning of your script.
Also, if it creates child processes like someCommand &, you will have to change them to nohup someCommand & for this to work properly... I have been researching this for a long time, and only the combination of these two (the trap and nohup) works on my specific script, where xterm closes too fast.
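A minimal sketch of that combination (someCommand is a placeholder):
#!/bin/bash
trap '' HUP            # ignore SIGHUP so closing the terminal doesn't kill the script
nohup someCommand &    # children are started with nohup as well, as described above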
Create an alias of the same name in your bash (or preferred shell) startup file:
alias mandacalc="nohup mandacalc &"
Why don't you just make a script containing nohup ./original_script ?
There is a nice answer here: http://compgroups.net/comp.unix.shell/can-a-script-nohup-itself/498135
#!/bin/bash
### make sure that the script is called with `nohup nice ...`
if [ "$1" != "calling_myself" ]
then
    # this script has *not* been called recursively by itself
    datestamp=$(date +%F | tr -d -)
    nohup_out=nohup-$datestamp.out
    nohup nice "$0" "calling_myself" "$@" > "$nohup_out" &
    sleep 1
    tail -f "$nohup_out"
    exit
else
    # this script has been called recursively by itself
    shift # remove the termination condition flag in $1
fi
### the rest of the script goes here
. . . . .
Another way to handle this is to wrap all the commands in a single shell invocation:
nohup bash -c 'command1; command2; ...' &
nohup expects one command, and wrapping the commands this way lets you execute several of them under one nohup.
