How can I run this command line in the background? - bash

I have this
script -q -c "continuously_running_command" /dev/null > out
When I have the above command line running in the foreground, I can stop it by pressing CTRL+C.
However, I'd like to run it in the background so that I can stop it with kill -9 %1.
But when I try to run this
script -q -c "continuously_running_command" /dev/null > out &
I get
[2]+ Stopped (tty output) script -q -c "continuously_running_command" /dev/null 1>out
Question:
How can I run the above command line in the background?

In order to background a process with redirection to a file, you must also redirect stderr. With stdout and stderr redirected, you can then background the process:
script -q -c "continuously_running_command" /dev/null > out 2>&1 &
Fully working example:
#!/bin/bash
i=0
while test "$i" -lt 100; do
    ((i+=1))
    echo "$i"
    sleep 1
done
Running the script and tail of output file while backgrounded:
alchemy:~/scr/tmp/stack> ./back.sh > outfile 2>&1 &
[1] 31779
alchemy:~/scr/tmp/stack> tailf outfile
10
11
12
13
14
...
100
[1]+ Done ./back.sh > outfile 2>&1
In the case of:
script -q -c "continuously_running_command" /dev/null
The problem in this case is that script itself already redirects all dialog with the session to FILE, which here is /dev/null. So give out as the FILE argument instead of /dev/null, or just redirect stderr to out:
script -q -c "continuously_running_command" out 2>&1 &
or
script -q -c "continuously_running_command" /dev/null 2>out &
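Once the job is running in the background, stopping it with kill works as intended; a minimal sketch using the redirected form from above:
# Start in the background with both streams captured in "out"
script -q -c "continuously_running_command" /dev/null > out 2>&1 &

# Confirm it is running and note its job number
jobs

# Stop it by job number (plain kill first; -9 only if it refuses to die)
kill %1        # or: kill -9 %1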

Related

How do I use nohup and time together?

I want to be able to use nohup and time together in a bash shell while running mpiexec such that the output (stdout), errors (stderr) and time all end up in 1 file. Here is what I am using right now:
nohup bash -c "time mpiexec.hydra -np 120 -f nodefile ./executable -i 1000 > results.log 2>&1"
However, what is happening is that the output of time goes to a file called nohup.out, while stdout and stderr go to results.log.
Has anyone figured this out?
You could enclose the whole command in curly braces to redirect its entire output, { ... ; }, like this:
{ nohup bash -c "time mpiexec.hydra -np 120 -f nodefile ./executable -i 1000" ; } > results.log 2>&1
GNU Bash reference:
Bash provides two ways to group a list of commands to be executed as a unit. When commands are grouped, redirections may be applied to the entire command list.
(...) : creates a subshell
{...} : doesn't
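A quick way to see the difference described in the quote (throwaway file names, nothing to do with mpiexec):
# Brace group: runs in the current shell; one redirection covers both commands
{ echo "first"; echo "second"; } > braces.log 2>&1

# Parentheses: same redirection behaviour, but the commands run in a subshell,
# so variable assignments made inside it do not survive
( x=42; echo "inside: $x" ) > subshell.log 2>&1
echo "after subshell: ${x:-unset}"    # prints "after subshell: unset"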
EDIT:
Actually, the braces may not even be required here.
I think the issue is that your enclosing " " is too broad: the redirection sits inside the string passed to bash and applies only to mpiexec, so the report from time (which goes to the inner shell's stderr) is not captured.
You should redirect only the output of the nohup command itself, e.g.:
nohup foo-cmd -bar argFooBar > results.log 2>&1
So try moving the redirection outside the command string passed to bash:
nohup bash -c "time mpiexec.hydra -np 120 -f nodefile ./executable -i 1000" > results.log 2>&1
Man page for nohup
To save output to FILE, use 'nohup COMMAND > FILE'
Simple example:
$ nohup bash -c "time echo 'this is stdout' > results.log 2>&1" >> results.log
nohup: ignoring input and redirecting stderr to stdout
$ cat results.log
this is stdout
real 0m0.001s
user 0m0.000s
sys 0m0.000s
So, in your case, try this:
nohup bash -c "time mpiexec.hydra -np 120 -f nodefile ./executable -i 1000 > results.log 2>&1" >> results.log
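The reason the wrapping matters: time is a shell keyword and writes its report to that shell's stderr, so the redirection has to be done by (or around) the same shell that runs time. A small illustration, independent of mpiexec:
# The timing report goes to the shell's stderr, so redirect around the group
{ time sleep 1; } > timed.log 2>&1
cat timed.log    # shows the real/user/sys lines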

Run mplayer command in background: replace &1 with 1

The normal way to run mplayer in the background:
mplayer some.mkv </dev/null >/dev/null 2>&1 &
[3] 9536
[3] 9536 means the PID of the mplayer command is 9536.
If I replace &1 with 1 in the above command:
mplayer some.mkv </dev/null >/dev/null 2>1 &
[4] 9590
[3] Done mplayer some.mkv < /dev/null > /dev/null 2> 1
Why got the extra output here?
[3] Done mplayer some.mkv < /dev/null > /dev/null 2> 1
It's not related to your change. This happened because your previous mplayer instance exited in the meantime. Here's another example:
$ sleep 1 &
[1] 18155 # Sleep #1 started in background
$ sleep 1 &
[2] 18163 # Sleep #2 started in background
[1] Done sleep 1 # Sleep #1 finished because it's been a second
Replacing 2>&1 with 2>1 makes mplayer status and errors go to a file named 1 in the current directory. I don't know why you would do such a thing, but bash is happy to oblige.
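A harmless way to see this for yourself in an empty directory (no mplayer required):
# "2>1" sends stderr to a file literally named "1"
ls no_such_file 2>1      # the error message vanishes from the terminal...
cat 1                    # ...and reappears in the file called "1"

# "2>&1" sends stderr to wherever stdout currently points
ls no_such_file > /dev/null 2>&1    # both streams discarded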

How to get return status of a command in subshell into the main shell?

I want to retrieve the return status of a command which is being executed in a subshell.
I am running the script below from Unix box A, which has passwordless SSH access to box B, whose IP appears in the script as ip_addr.
I want to get the return status of the command that is run in the subshell back into my current environment.
That is, if the command below fails:
echo "cmd" | system_program 2>> /dev/null
then echo $? should print a non-zero value, and I should be able to use that value to decide further action.
Snippet of my script is:
sample.sh :
ip_addr="xxx.xxx.xx.xx"
status=$(ssh -q -T $ip_addr << EOF
rm /tmp/program.log; echo "cmd" | system_program 2>> /dev/null; echo $?
EOF
)
You don't need the here-doc, or the echo. Try:
ssh -q -T $ip_addr 'rm /tmp/program.log; echo "cmd" | system_program 2>> /dev/null'
Or, if you want to use the here-doc, set errexit:
status=$(ssh -q -T $ip_addr << EOF
set -o errexit
rm /tmp/program.log; echo "cmd" | system_program 2>> /dev/null
EOF
)
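If the point of capturing the status is to decide further action, note that ssh itself exits with the exit status of the remote command, so you can test that directly. A sketch based on the single-quoted form above (ip_addr and system_program are the placeholders from the question):
ip_addr="xxx.xxx.xx.xx"

ssh -q -T "$ip_addr" 'rm /tmp/program.log; echo "cmd" | system_program 2>> /dev/null'
status=$?    # exit status of the last remote command (255 if ssh itself failed)

if [ "$status" -ne 0 ]; then
    echo "remote command failed with status $status"
    # decide further action here
fi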

stop bash script from outputting in terminal

I believe I have everything set up correctly for my if/else statement, however the script keeps printing output to my terminal as if I had run the commands myself. Is there any way I can suppress this so that I can run these commands without them filling my terminal with result text?
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
    echo "Process is running." &
    echo $!
else
    echo "Process is not running... Starting..."
    python likebot.py &
    echo $!
fi
Here is what the output looks like a few minutes after running my bash script
[~]# sh check.sh
Process is not running... Starting...
12359
[~]# Your account has been rated. Sleeping on kranze for 1 minute(s). Liked 0 photo(s)...
Your account has been rated. Sleeping on kranze for 2 minute(s). Liked 0 photo(s)...
If you want to redirect output from within the shell script, you use exec:
exec 1>/dev/null 2>&1
This will redirect everything from now on. If you want to output to a log:
exec 1>/tmp/logfile 2>&1
To append a log:
exec 1>>/tmp/logfile 2>&1
To back up your descriptors so you can restore them later:
exec 3>&1 4>&2
exec 1>/dev/null 2>&1
# Do some stuff
# Restore descriptors
exec 1>&3 2>&4
# Close the descriptors.
exec 3>&- 4>&-
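For example, the save-and-restore pattern lets you silence a noisy stretch of a script while still printing a final message to the terminal; a small sketch (the log path is just an example):
#!/bin/bash
exec 3>&1 4>&2                  # save the terminal's stdout and stderr
exec 1>/tmp/noisy.log 2>&1      # everything below goes to the log

echo "this line ends up in /tmp/noisy.log"
ls /no/such/path                # so does this error

exec 1>&3 2>&4                  # restore the terminal
exec 3>&- 4>&-                  # close the saved descriptors
echo "done - this line is printed to the terminal again"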
If there is a particular section of a script you want to silence:
#!/bin/bash
echo Hey, check me out, I can make noise!
{
echo Thats not fair, I am being silenced!
mv -v /tmp/a /tmp/b
echo Me too.
} 1>/dev/null 2>&1
If you want to redirect the "normal" (stdout) output, use >/dev/null. If you also want to redirect the error output, add 2>&1 after the stdout redirection (the order matters):
e.g.
$ command >/dev/null 2>&1
I think you have to redirect STDOUT (and maybe STDERR) of the python interpreter:
...
echo "Process is not running... Starting..."
python likebot.py >/dev/null 2>&1 &
...
For further details, please have a look at Bash IO-Redirection.
Hope that helped a bit.
*Jost
You have two options:
You can redirect standard output to a log file using > /path/to/file
You can redirect standard output to /dev/null to get rid of it completely using > /dev/null
If you want error output redirected as well, use &>.
Also, not relevant to this particular example, but some bash commands support a 'quiet' or 'silent' flag.
Append >> /path/to/outputfile/outputfile.txt to the end of every echo statement
echo "Process is running." >> /path/to/outputfile/outputfile.txt
Alternatively, send the output to the file when you run the script from the shell
[~]# sh check.sh >> /path/to/outputfile/outputfile.txt
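Putting these suggestions together, a silenced version of check.sh could look like this (the log path is only an example; the grep check is kept as in the question):
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
    echo "Process is running."
else
    echo "Process is not running... Starting..."
    # redirect the bot's own output so it cannot write to the terminal
    python likebot.py >> /tmp/likebot.log 2>&1 &
    echo $!
fi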

nohup doesn't work when used with double-ampersand (&&) instead of semicolon (;)

I have a script that uses ssh to login to a remote machine, cd to a particular directory, and then start a daemon. The original script looks like this:
ssh server "cd /tmp/path ; nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
This script appears to work fine. However, it is not robust to the case when the user enters the wrong path so the cd fails. Because of the ;, this command will try to run the nohup command even if the cd fails.
The obvious fix doesn't work:
ssh server "cd /tmp/path && nohup java server 0</dev/null 1>server_stdout 2>server_stderr &"
That is, the SSH command does not return until the server is stopped. Putting nohup in front of the cd instead of in front of the java didn't work either.
Can anyone help me fix this? Can you explain why this solution doesn't work? Thanks!
Edit: cbuckley suggests using sh -c, from which I derived:
ssh server "nohup sh -c 'cd /tmp/path && java server 0</dev/null 1>master_stdout 2>master_stderr' 2>/dev/null 1>/dev/null &"
However, now the exit code is always 0 when the cd fails; whereas if I do ssh server cd /failed/path then I get a real exit code. Suggestions?
See Bash's Operator Precedence.
The & gets attached to the whole statement because it binds less tightly than && (it has lower precedence, so it backgrounds the entire list). You don't need ssh to verify this. Just run this in your shell:
$ sleep 100 && echo yay &
[1] 19934
If the & were only attached to the echo yay, then your shell would sleep for 100 seconds and then report the background job. However, the entire sleep 100 && echo yay is backgrounded and you're given the job notification immediately. Running jobs will show it hanging out:
$ sleep 100 && echo yay &
[1] 20124
$ jobs
[1]+ Running sleep 100 && echo yay &
You can use parentheses to create a subshell around echo yay &, giving you what you'd expect:
sleep 100 && ( echo yay & )
This would be similar to using bash -c to run echo yay &:
sleep 100 && bash -c "echo yay &"
Tossing these into an ssh, and we get:
# using parenthesis...
$ ssh localhost "cd / && (nohup sleep 100 >/dev/null </dev/null &)"
$ ps -ef | grep sleep
me 20136 1 0 16:48 ? 00:00:00 sleep 100
# and using `bash -c`
$ ssh localhost "cd / && bash -c 'nohup sleep 100 >/dev/null </dev/null &'"
$ ps -ef | grep sleep
me 20145 1 0 16:48 ? 00:00:00 sleep 100
Applying this to your command, we get:
ssh server "cd /tmp/path && (nohup java server 0</dev/null 1>server_stdout 2>server_stderr &)"
or:
ssh server "cd /tmp/path && bash -c 'nohup java server 0</dev/null 1>server_stdout 2>server_stderr &'"
Also, with regard to your comment on the post,
"Right, sh -c always returns 0. E.g., sh -c exit 1 has error code 0"
this is incorrect. Directly from the manpage:
Bash's exit status is the exit status of the last command executed in
the script. If no commands are executed, the exit status is 0.
Indeed:
$ bash -c "true ; exit 1"
$ echo $?
1
$ bash -c "false ; exit 22"
$ echo $?
22
ssh server "test -d /tmp/path" && ssh server "nohup ... &"
Answer roundup:
Bad: Using sh -c to wrap the entire nohup command doesn't work for my purposes because it doesn't return error codes. (#cbuckley)
Okay: ssh <server> <cmd1> && ssh <server> <cmd2> works but is much slower (#joachim-nilsson)
Good: Create a shell script on <server> that runs the commands in succession and returns the correct error code (see the sketch after this roundup).
The last is what I ended up using. I'd still be interested in learning why the original use-case doesn't work, if someone who understands shell internals can explain it to me!
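For completeness, a minimal sketch of that wrapper-script approach (file name and location are hypothetical): put something like this on the server and start it with a single ssh call.
#!/bin/bash
# start_server.sh -- lives on the remote host
cd /tmp/path || exit 1        # a wrong path now produces a real, non-zero exit code
nohup java server </dev/null >server_stdout 2>server_stderr &
exit 0
Running ssh server /path/to/start_server.sh then returns immediately, and its exit status reflects whether the cd succeeded.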
