On our development server, we have a number of shell script wrappers for Java JARs. Using cron, we fire these scripts daily for different purposes.
For performance testing, we would like to renice a script's process to a priority of 1 at runtime.
Right now we do it from the command line or with top.
Is there a way to do that from within the shell script itself, without doing harm to the process or to other processes?
This should work:
renice -n 1 $$
Afterwards, the script itself will have a nice value of 1. This also applies to all children forked from that point on, though not to previously forked ones.
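For example, a wrapper script might begin like this (a minimal sketch; the JAR path is a placeholder, not your actual wrapper target):
#!/bin/sh
# lower this script's priority; children will inherit the nice value
renice -n 1 $$
# hypothetical JAR path - replace with the real one
exec java -jar /opt/app/app.jar "$@"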
I have a couple of CLI-based scripts that run for some time.
I'd like another script to 'restart' those other scripts.
I've checked SO for answers, but the scenarios weren't close enough to mine, as I'm trying to end Terminal processes using Terminal itself.
Process:
Two CLI-based scripts are running (node, python, etc.).
A third script runs and decides whether or not to restart the other two.
It can't quit Terminal, but has to end the current processes.
The third script then runs an executable that restarts everything.
Currently none of the terminal windows are named, and from reading the other posts, I can see that it may be helpful to do so.
I can mostly set this up; I just couldn't find a command that would end all the other terminal processes and close them.
There are a couple of ways to do this. The most common is having a pidfile.
This file contains the process ID (pid) of the job you want to kill
later on. A simple way to create the pidfile is:
$ node server &
$ echo $! > /tmp/node.pidfile
$! contains the pid of the process that was most recently backgrounded.
Then later on, you kill it like so:
$ kill `cat /tmp/node.pidfile`
You would do similar for the python script.
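Putting it together, a restart script might look like this (a minimal sketch; worker.py and the pidfile paths are assumptions, not your actual entry points):
#!/bin/bash
# end the old processes recorded in the pidfiles, if any
for pidfile in /tmp/node.pidfile /tmp/python.pidfile; do
    [ -f "$pidfile" ] && kill "$(cat "$pidfile")" && rm "$pidfile"
done
# relaunch both jobs and record the new pids
node server & echo $! > /tmp/node.pidfile
python worker.py & echo $! > /tmp/python.pidfile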
The other less robust way is to do a killall for each process and assume you are not running similar node or python jobs.
Refer to "What is a .pid file and what does it contain?" if you're not familiar with this.
The question headline is quite general, and so is my reply:
killall bash
or generically
killall processName
e.g. killall chrome
For example in the redis official image:
https://github.com/docker-library/redis/blob/master/2.8/docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'redis-server' ]; then
chown -R redis .
exec gosu redis "$@"
fi
exec "$#"
Why not just run the commands as usual without exec preceding them?
As @Peter Lyons says, using exec will replace the parent process, rather than leaving two processes running.
This is important in Docker for signals to be proxied correctly. For example, if Redis was started without exec, it will not receive a SIGTERM upon docker stop and will not get a chance to shutdown cleanly. In some cases, this can lead to data loss or zombie processes.
If you do start child processes (i.e. don't use exec), the parent process becomes responsible for handling and forwarding signals as appropriate. This is one of the reasons it's best to use supervisord or similar when running multiple processes in a container, as it will forward signals appropriately.
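For illustration, here is a minimal sketch of a non-exec entrypoint that forwards signals itself (not taken from the official image):
#!/bin/bash
# start the server in the background and remember its pid
redis-server "$@" &
child=$!
# forward SIGTERM/SIGINT (e.g. from docker stop) to the child
trap 'kill -TERM "$child"' TERM INT
# wait returns when the child exits; after a trapped signal,
# a production script would call wait again to reap the child
wait "$child"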
Without exec, the parent shell process survives and waits for the child to exit. With exec, the child process replaces the parent process entirely, so when there's nothing for the parent to do after forking the child, I would consider exec slightly more precise/correct/efficient. In the grand scheme of things, it's probably safe to classify it as a minor optimization.
without exec
parent shell starts
parent shell forks child
child runs
child exits
parent shell exits
with exec
parent shell starts
parent shell forks child, replaces itself with child
child program runs taking over the shell's process
child exits
Think of it as an optimization like tail recursion.
If running another program is the final act of the shell script, there's not much of a need to have the shell run the program in a new process and wait for it. Using exec, the shell process replaces itself with the program.
In either case, the exit value of the shell script will be identical (1). Whatever program originally called the shell script will see an exit value equal to the exit value of the exec'ed program (or 127 if the program cannot be found).
(1) Modulo corner cases such as a program doing something different depending on the name of its parent.
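You can see the replacement directly by comparing PIDs (a small illustrative script, not from the original post):
#!/bin/bash
# $$ is expanded before exec runs, and ps then reports that
# same pid, now running as 'ps' - the process was replaced
echo "shell pid: $$"
exec ps -o pid,comm -p $$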
I inherited a legacy Ant-based build system and I'm trying to get a sense of its scope. I observed multiple jvm and junit tasks with fork=yes. It calls subant and similar tasks liberally. Occasionally, it just execs other processes.
I really don't want to search through hundreds of scripts, and the reference documentation for every task, to find possible forking behavior. I'd like to capture the child-process list while the build runs.
I managed to create a clean Vagrant + Puppet environment for builds and I can run the full build like so
$ cd /vagrant && $ANT_HOME/bin/ant
If I had to brute force something... I'd have a script kick off the build and capture child processes until the build completes?
#!/bin/bash
# kick off the build in the background and remember its pid
$ANT_HOME/bin/ant &
ant_pid=$!
# poll until the build exits, recording direct children each second
while ps -p "$ant_pid" > /dev/null
do
    sleep 1
    ps --ppid "$ant_pid" >> build_processes
done
User Jayan recommended strace, specifically:
$ strace -f -e trace=fork ant
The -e trace=fork expression limits tracing to fork system calls, and -f makes strace follow child processes:
Trace child processes as they are created by currently traced processes as a result of the fork(2) system call. The new process is attached to as soon as its pid is known (through the return value of fork(2) in the parent process). This means that such children may run uncontrolled for a while (especially in the case of a vfork(2)), until the parent is scheduled again to complete its (v)fork(2) call. If the parent process decides to wait(2) for a child that is currently being traced, it is suspended until an appropriate child process either terminates or incurs a signal that would cause it to terminate (as determined from the child's current signal disposition).
I can't find the trace=fork expression, but trace=process seems useful.
-e trace=process
Trace all system calls which involve process management. This is useful for watching the fork, wait, and exec steps of a process.
http://linuxcommand.org/man_pages/strace1.html
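For example, you can write the trace to a file and filter it afterwards (the log file name is arbitrary):
$ strace -f -e trace=process -o ant-trace.log $ANT_HOME/bin/ant
$ grep execve ant-trace.log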
As Ant is a Java process, you can try to use Byteman. In a Byteman script you define rules which are triggered when the exec methods of java.lang.Runtime are invoked.
You attach Byteman to Ant using the ANT_OPTS environment variable.
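For example (a sketch assuming BYTEMAN_HOME points at your Byteman install and runtime-exec.btm is your rule file - both names are placeholders):
export ANT_OPTS="-javaagent:${BYTEMAN_HOME}/lib/byteman.jar=script:runtime-exec.btm"
$ANT_HOME/bin/ant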
Following problem:
3 programs:
one Java application which is started via an existing sh script
one node application
one grunt server
I want to write two shell scripts: the first should start all three programs, and the second should end them. For the first script I simply call the start commands. But the second, which should be a standalone script (as the first should be), has to know all the process IDs in order to kill them. And even if I know those IDs, what if the processes started subprocesses? I would just be killing the parent processes, wouldn't I?
What's the approach here?
Thanks in advance!
Try pkill -KILL -P [parentid]. This should kill the processes whose parent has the designated process ID.
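For example, to end a script's children gracefully before forcing them (the parent_pid variable is a placeholder for however you recorded the parent's pid):
pkill -TERM -P "$parent_pid"   # ask the children to exit cleanly
sleep 5
pkill -KILL -P "$parent_pid"   # force-kill anything still running
Note that this only reaches direct children; deeper descendants need their own pass.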
I'm seeking advice regarding the best practice for starting (Java) programs from shell scripts.
Currently, the practice within our firm seems to be to have a shell script which sets all the environment variables and launches the Java process (the specifics of which are not important in this case) in the background, similar to:
nohup $JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1 &
which is the last line of the script. We are launching a single process.
One doubt I have is that the return code of such a shell process is always 0, even when the program fails to start due to some Exception/Error. This makes things hard for monitoring tools - they can't rely on the shell exit code, for example.
I found out this can be fixed by waiting for the process to end like:
nohup $JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1 &
wait $!
But my understanding is that this makes the trailing & completely useless, since running:
nohup $JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1
will have the same effect.
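For monitoring purposes, the wait-based variant can also propagate the exit code explicitly (a small sketch extending the snippet above):
nohup $JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1 &
wait $!
exit $?   # the wrapper's exit code now matches the Java process's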
So my question is: what is the best practice of launching programs from shell? Does the running on background have some benefits I'm overlooking?
You should look into at and batch, and possibly cron. These are all tools for running commands, scripts, or job streams non-interactively. By default, at runs a job and then emails any stderr output back to the user.
at -k now <<!
$JAVA_CMD > $LOG_DIR/$LOG_FILE 2>&1
!
The batch command will let you write a series of commands to a file, then execute the file as if it were stdin; you can also do this interactively.
cron jobs (crontab) run at specified times and dates, like every Monday at 0200. This does not seem to fit your question.
Try this:
http://www.thegeekstuff.com/2010/06/at-atq-atrm-batch-command-examples/