How do you run a shell command in the background and suppress all output? - bash

I'm trying to write a shell script (for use in the Mac OS X Terminal) that runs a command to start a development server (gulp serve). It works, except the server keeps running in the foreground, so it doesn't allow me to enter subsequent commands in the same window without stopping the server (Ctrl+C). My question is: is there a way I can run the process in the background and/or suppress all of its output? My goal is to also write a 'stop server' command that kills the process (which I'm also unsure how to do). I've tried all combinations of ampersands and &>/dev/null and nothing quite works. Here's what I have so far:
if [ "$1" = "server" ]
then
if [ "$2" = "on" ]
then
cd / & gulp serve --gulpfile /server/example/gulpfile.js # the output is still shown
printf "\033[0;32mserver is online.\033[0m\n"
else
killall tail &>/dev/null 2>&1 # this doesn't kill the process
printf "\033[0;32mportals is offline.\033[0m\n"
fi
fi

You're doing the output redirection on killall, not gulp, so gulp will continue to merrily spit out text to your terminal. Try instead:
cd / && gulp serve --gulpfile /server/example/gulpfile.js >/dev/null 2>&1 &
Secondly, your kill command doesn't kill your process because you're not telling it to; you're asking it to kill all tail processes. You want instead:
killall gulp
These modifications should be the most direct path to your goal. However, there are a few additional things that may be useful to know.
Process management has a long history in the *nix world, so tools to make this easier have been around for a long time. You can go through re-inventing them yourself (the next step would be to store the PID of your gulp process so that you can ensure you only kill it and not anything else with "gulp" in the name; see the sketch below), or you can go all the way and write a service definition for the system's process manager. For Linux, this would be SysV init, Upstart, or systemd; the OS X equivalent is launchd.
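A minimal sketch of that PID-file approach (the path /tmp/gulp.pid is just an illustration):
gulp serve --gulpfile /server/example/gulpfile.js >/dev/null 2>&1 &
echo $! > /tmp/gulp.pid   # $! is the PID of the most recently backgrounded job
# ...later, in your "server off" branch:
kill "$(cat /tmp/gulp.pid)" && rm -f /tmp/gulp.pid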
However, since you're just doing this for development purposes, not a production website, you probably don't actually need that; your real goal is to be able to execute ad-hoc shell commands while gulp is running. You can use terminal tabs for this, or more elegantly use the splitting capabilities of iTerm, screen, or tmux. Tmux in particular is a useful tool whenever you find yourself working a lot in a terminal, and is a worthwhile thing to become familiar with.
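A quick illustration of the tmux route (the session name dev is arbitrary):
tmux new-session -d -s dev 'gulp serve --gulpfile /server/example/gulpfile.js'   # start the server in a detached session
tmux attach -t dev   # re-attach later to check on it; detach again with Ctrl-b d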

First, to run the process in the background
cd / && gulp serve --gulpfile /server/example/gulpfile.js > /tmp/gulp.log &
After cd you need && (run the next command only if cd succeeds), plus a single & at the end to put the whole thing in the background. The contrast with the original line is worth spelling out:
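cd / & gulp serve ...     # the & here backgrounds cd itself; gulp runs in the foreground, in the current directory
cd / && gulp serve ... &  # cd first, then gulp; the trailing & backgrounds the whole list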
To kill all gulp processes
killall gulp

Related

Automatically terminate all nodes after calling roslaunch

I am trying to run several roslaunch files, one after the other, from a bash script. However, when the nodes complete execution, they hang with the message:
[grem_node-1] process has finished cleanly
log file: /home/user/.ros/log/956b5e54-75f5-11e9-94f8-a08cfdc04927/grem_node-1*.log
Then I need to press Ctrl-C to get "killing on exit" for all of the nodes launched from the launch file. Is there some way of causing the nodes to automatically kill themselves on exit? At the moment I need to press Ctrl-C every time a node terminates.
My bash script looks like this, by the way:
python /home/user/git/segmentation_plots/scripts/generate_grem_launch.py /home/user/Data2/Coco 0 /home/user/git/Async_CNN/config.txt
source ~/setupgremsim.sh
roslaunch grem_ros grem.launch config:=/home/user/git/Async_CNN/config.txt
source /home/user/catkin_ws/devel/setup.bash
roslaunch rpg_async_cnn_generator conf_coco.launch
The script setupgremsim.sh sources another catkin workspace.
Many thanks!
Thanks all for your advice. What I ended up doing was this: I launched my ROS nodes from separate Python scripts, which I then called from the bash script. In Python you can terminate the child processes with launch.shutdown(). So, to provide an example for anyone else with this issue:
bash script:
#!/bin/bash
for i in {0..100}
do
    echo "========================================================"
    echo "This is the $i-th run"
    echo "========================================================"
    source /home/timo/catkin_ws/devel/setup.bash
    python planar_launch_generator.py
done
and then inside planar_launch_generator.py:
import roslaunch
import rospy

process_generate_running = True

class ProcessListener(roslaunch.pmon.ProcessListener):
    """Listener that clears the flag when a launched process dies."""

    def process_died(self, name, exit_code):
        global process_generate_running
        process_generate_running = False
        rospy.logwarn("%s died with code %s", name, exit_code)

def init_launch(launchfile, process_listener):
    """Build a ROSLaunchParent for the given launch file."""
    uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
    roslaunch.configure_logging(uuid)
    launch = roslaunch.parent.ROSLaunchParent(
        uuid,
        [launchfile],
        process_listeners=[process_listener],
    )
    return launch

rospy.init_node("async_cnn_generator")
launch_file = "/home/user/catkin_ws/src/async_cnn_generator/launch/conf_coco.launch"
launch = init_launch(launch_file, ProcessListener())
launch.start()

# Spin until the launched process dies, then shut the launch down cleanly.
while process_generate_running:
    rospy.sleep(0.05)
launch.shutdown()
Using this method you could source any number of different catkin workspaces and launch any number of launchfiles.
Try this:
(1) Put each launch in a separate shell script, so you have N scripts. In each script, call the launch file in xterm: xterm -e "roslaunch yourfancylauncher"
(2) Prepare a master script which calls all N child scripts in the sequence, and with the delays, you want; a sketch follows. Once a launch is done, its xterm should kill itself.
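A hypothetical master script along those lines (the child script names and sleep delays are placeholders to tune):
#!/bin/bash
# call each child script in order, waiting between launches
./launch_1.sh & sleep 10
./launch_2.sh & sleep 10
./launch_3.sh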
Edit: you can manually kill one if you know it's going to hang, e.g. below.
#!/bin/bash
source /opt/ros/kinetic/setup.bash
source ~/catkin_ws/devel/setup.bash
# Start roscore via systemd or rc.local, in lxterminal or another terminal,
# to avoid killing it by accident. Then run the part you think is going to
# hang or create a problem, echoing before each action if necessary.
xterm -geometry 80x36+0+0 -e "echo 'uav' | sudo -S dnsmasq -C /dev/null -kd -F 10.5.5.50,10.5.5.100 -i enp59s0 --bind-dynamic" & sleep 15
# The Ouster lidar can't auto-configure like the Velodyne and will hang here,
# blocking the rest of the script, so just kill its xterm and continue
# running the other launches.
killall xterm & sleep 1
xterm -e "roslaunch '/home/uav/catkin_ws/src/ouster_driver_1.12.0/ouster_ros/os1.launch' os1_hostname:=os1-991907000715.local os1_udp_dest:=10.5.5.1"

Terminal - Close all terminal windows/processes

I have a couple of CLI-based scripts that run for some time.
I'd like another script to 'restart' those other scripts.
I've checked SO for answers, but the scenarios were not applicable enough to mine, as I'm trying to end Terminal processes using Terminal.
Process:
Two CLI-based scripts are running (node, python, etc.).
A third script runs and decides whether or not to restart the other two.
It can't quit Terminal, but it has to end the current processes.
The third script then runs an executable that restarts everything.
Currently none of the terminal windows are named, and from reading the other posts I can see that it may be helpful to do so.
I can mostly set this up; I just could not find a command that would end all the other terminal processes and close them.
There are a couple of ways to do this. The most common is to use a pidfile. This file contains the process ID (PID) of the job you want to kill later on. A simple way to create the pidfile is:
$ node server &
$ echo $! > /tmp/node.pidfile
$! contains the pid of the process that was most recently backgrounded.
Then later on, you kill it like so:
$ kill "$(cat /tmp/node.pidfile)"
You would do the same for the Python script.
The other, less robust way is to do a killall for each process and assume you are not running any similar node or python jobs.
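To tie it back to the question, a hypothetical restart script built on those pidfiles (script names and paths are illustrative; worker.py stands in for your actual Python script):
#!/bin/bash
# kill whatever was recorded in the pidfiles, ignoring stale entries
for name in node python; do
    pidfile=/tmp/$name.pidfile
    [ -f "$pidfile" ] && kill "$(cat "$pidfile")" 2>/dev/null
done
# relaunch both jobs and record their new PIDs
node server & echo $! > /tmp/node.pidfile
python worker.py & echo $! > /tmp/python.pidfile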
Refer to "What is a .pid file and what does it contain?" if you're not familiar with this.
The question's headline is quite general, so my reply is too:
killall bash
or generically
killall processName
e.g. killall chrome
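When the long-running jobs are interpreter processes (node, python), killing by interpreter name is especially blunt. If pkill is available, matching the full command line is a safer bet:
pkill -f 'node server'   # kills only processes whose command line matches the pattern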

How can I tell if a script was run in the background and with nohup?

I've got a script that takes quite a long time to run, as it has to handle many thousands of files. I want to make this script as foolproof as possible. To this end, I want to check if the user ran the script using nohup and '&'. E.g.:
me#myHost:/home/me/bin $ nohup doAlotOfStuff.sh &. I want to make 100% sure the script was run with nohup and '&', because it's a very painful recovery process if the script dies in the middle for whatever reason.
How can I check those two key parameters inside the script itself? And if they are missing, how can I stop the script before it gets any farther and complain to the user that they ran the script wrong? Better yet, is there a way I can force the script to run with nohup and &?
Edit: the server environment is AIX 7.1.
The ps utility can get the process state. The process state code will contain the character + when the process is running in the foreground; absence of + means it is running in the background.
However, it will be hard to tell whether the background script was invoked using nohup. It's also almost impossible to rely on the presence of nohup.out, as output can be redirected by the user elsewhere at will.
There are two ways to accomplish what you want to do: either bail out and warn the user, or automatically restart the script in the background. The following does the latter:
#!/bin/bash
mypid=$$   # 'local' only works inside functions, so use a plain assignment
if [[ $(ps -o stat= -p $mypid) =~ "+" ]]; then
    echo "Running in foreground."
    exec nohup "$0" "$@" &
    exit
fi
# the rest of the script
...
In this code, if the process has a state code containing +, it will print a message and then restart itself in the background with nohup. If the process was started in the background, it just proceeds to the rest of the code.
If you prefer to bail out and just warn the user, you can remove the exec line. Note that because of the trailing &, the exec runs in a background subshell rather than replacing the current script, so the exit is still needed to stop the foreground copy.
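A minimal bail-out variant, if that is the behaviour you want (my sketch, using the same ps check as above):
#!/bin/bash
if [[ $(ps -o stat= -p $$) =~ "+" ]]; then
    echo "Please run me as: nohup $0 &" >&2
    exit 1
fi
# the rest of the script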
One good way to find out whether a script is logging to nohup.out is to first check that nohup.out exists, then echo a marker and make sure you can read it back from the file. For example:
echo "complextag"
if ! grep -q "complextag" nohup.out 2>/dev/null; then
    # various commands complaining to the user, then exiting
    exit 1
fi
This works because if the script's stdout is going to nohup.out, where it should be going (or whatever out file you specified), then when you echo that phrase it should be appended to nohup.out. If it doesn't appear there, then the script was not run using nohup, and you can scold the user, perhaps by using a wall command on a temporary broadcast file (if you want me to elaborate on that, I can).
As for being run in the background, you can combine this with the ps state check from the previous answer to be sure.

How to run shell script on VM indefinitely?

I have a VM with a script that I want to keep running indefinitely. The server is always running, but I want the script to keep running after I log out. How would I go about doing so? By creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.

BASH: Shell Script as Init Script

I have a shell script that calls a Java jar file and runs an application. There's no way around this, so I have to work with what I have.
When you execute this shell script, it outputs the application's status and just sits there (pretty much a console); when something happens to the program, it updates the screen. This is like any normal non-daemonized/backgrounded process. The only way to get out of it is Ctrl-C, which ends the process altogether. I do know that I could get around this by doing path_to_shell_script/script.sh &, which would background it for my session (I could use nohup if I wanted to log out).
My issue is, I just don't know how to put this script into an init script. I have most of the init script written, but when I try to daemonize it, it doesn't work. I've almost got it working; however, when I run the init script, it actually spawns the same "console" as the script and just sits there until I hit Ctrl-C. Here's the line in question:
daemon ${basedir}/$prog && success || failure
The problem is that I can't background just the daemon ${basedir}/$prog part, and I think that's where I'm running into the issue. Has anyone been successful at creating an init script for a shell script? Also, this shell script is not daemonizable (you can background it, but the underlying program does not support a daemonize option, or else I would have just let the application do all the work).
You need to open a subshell to execute it. It also helps to redirect its output to a file, or at least to /dev/null.
Something like:
#!/bin/bash
(
{ daemon ${basedir}/$prog && success || failure ; } &>/dev/null
) &
exit 0
It works as follows: ( list ) & runs list in a background subshell. { list; } is a group command; it's used here to capture all the output of your commands and send it to /dev/null.
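A toy illustration of the same pattern, with sleep standing in for the daemonized program:
#!/bin/bash
( { sleep 30 && echo success || echo failure ; } &>/dev/null ) &
echo "the init script continues immediately; the background subshell PID is $!"
exit 0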
I have had success with initially-detached screen sessions for running things like the Half-Life server and my custom "tail logfile" bash scripts.
To start something in the background:
screen -dmS arbitrarySessionName /path/to/script/launchService.sh
To look at the process:
screen -r arbitrarySessionName
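To detach again from inside the session, press Ctrl-a then d. And if you later want to stop the service for good, screen can kill the session for you:
screen -S arbitrarySessionName -X quit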
Hope you find this useful, gl!
