How to kill a group of processes in Clozure CL? - shell

I want to run a shell command from within CCL, but the command may hang for some reason, so I want to kill all the subprocesses generated by it. How can I do this?
I have tried trivial-shell to run the shell command; when the command does not hang, it works well.
I also used the with-timeout macro from trivial-shell to check for the timeout, but it just gives me a timeout-error condition and the shell process is still hanging there. I just want to kill them all and return something.
Thank you all.

As far as I can tell, trivial-shell only provides a synchronous shell call so there's no simple way to terminate ongoing subprocesses.
I suggest calling Clozure Common Lisp's implementation-specific ccl:run-program function with :wait nil to run the jobs asynchronously. You can then call ccl:signal-external-process on the running process to kill it if you need to. See the CCL documentation for details.
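As a shell-side complement (my own sketch, not part of the answer above): if the command is launched through a shell anyway, GNU coreutils' timeout can enforce the deadline at that level. By default it runs the command as a new process-group leader and signals the whole group, so hung children are killed too. The script name and the 30-second limit below are placeholders.
# Hypothetical shell-side sketch: kill the whole job if it runs too long.
timeout -k 5 30 ./flaky-job.sh        # send TERM after 30s, KILL 5s later if needed
status=$?
[ "$status" -eq 124 ] && echo "job timed out and was killed"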

Related

What does "shell out" or "shelling out" mean?

As used in these examples, for instance:
shell out to bundle from inside a command invoked by bundle exec
or
shell out to a Ruby command that is not part of your current bundle,
http://bundler.io/man/bundle-exec.1.html
or
i'm shelling out to the heroku command in the rake task
https://github.com/sstephenson/rbenv/issues/400
It means executing a subprocess using backticks (as in `command`), the system() call, or other similar methods. These execute the process in a sub-shell, hence the name.
You can find a lot more details in this answer: https://stackoverflow.com/a/18623297/29470
Spawning a pipeline of connected programs via an intermediate shell —
a.k.a. “shelling out”
http://julialang.org/blog/2012/03/shelling-out-sucks/
And the related reddit comment thread: http://www.reddit.com/r/programming/comments/1bwbyf/shelling_out_sucks/
So, from what I can gather, I take it to mean, broadly, "going out from the context of the executing program to the surrounding program or execution environment". Usually you go out to the Unix shell, hence the term "shelling out".
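For what it's worth, the same mechanism is visible inside a shell script itself: backticks or $(...) run a command in a subshell and hand its output back. A tiny illustration (the date command is just an example):
now=$(date +%Y-%m-%d)        # modern command-substitution form
also_now=`date +%Y-%m-%d`    # older backtick form, same effect
echo "today is $now"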

Pausing and resuming a Bash script

Is there a way to pause a Bash script, then resume it another time, such as after the computer has been rebooted?
The only way to do that AFAIK:
Save any variables or other script context information in a temporary file to capture the state of the script just before the pause. It goes without saying that the script should include a mechanism to check this file, so it knows whether the previous execution was paused and, if it was, can fetch all the context and resume accordingly.
After the reboot, run the script again manually, or have it run automatically from your startup profile script.
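A minimal sketch of that state-file idea; the file path and the do_step_* names are placeholders, not something from the answer:
STATE_FILE=/var/tmp/myscript.state
step=$(cat "$STATE_FILE" 2>/dev/null || echo 1)   # where to resume; default: the beginning
if [ "$step" -le 1 ]; then
    do_step_one                     # placeholder for the real work
    echo 2 > "$STATE_FILE"          # checkpoint before moving on
fi
if [ "$step" -le 2 ]; then
    do_step_two
    rm -f "$STATE_FILE"             # finished: clear the saved state
fi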
Try Ctrl-Z to pause the command. I don't think you can pause it and then resume after reboot unless you're keeping state somehow.
You can't pause and resume the same script after a reboot, but a script could arrange to have another script run at some later time. For example, it could create an init script (or a cron job, or a login script, etc) which contained the tasks you want to defer, and then removed itself.
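One hedged sketch of that approach, assuming a cron implementation that supports @reboot; the paths and names are made up:
# Write the deferred work to a script of its own...
cat > /usr/local/bin/deferred-tasks.sh <<'EOF'
#!/bin/sh
# ...the tasks you want to run after the reboot go here...
# then clean up so this only ever runs once:
crontab -l | grep -v deferred-tasks.sh | crontab -
rm -- "$0"
EOF
chmod +x /usr/local/bin/deferred-tasks.sh
# ...and schedule it to run at the next boot.
( crontab -l 2>/dev/null; echo '@reboot /usr/local/bin/deferred-tasks.sh' ) | crontab -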
Intriguing...
You can suspend a job in BASH with a CTRL-Z, but you can't resume after a reboot. A reboot initializes the machine and the process that was suspended is terminated.
However, it might be possible to force the process into a core dump via `kill -QUIT $pid` and then use `gdb` to restart the script. I tried for a while, but was unable to do it. Maybe someone else can point out the way.
If this applies to your script and the job it does, add checkpoints to it - that means places where all the state of the process is saved to disk before continuing. Then have each individual part check if the output they have to produce is already there, and skip running if it is. That should make a rerun of the script almost as efficient as resuming from the exact same place in execution.
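A sketch of that checkpointing pattern, with placeholder commands and file names; writing each output to a temporary file and renaming it keeps a killed run from leaving half-written results behind:
[ -f raw.dat ]    || { fetch_data > raw.dat.tmp && mv raw.dat.tmp raw.dat; }
[ -f clean.dat ]  || { clean_data < raw.dat > clean.dat.tmp && mv clean.dat.tmp clean.dat; }
[ -f report.txt ] || { make_report < clean.dat > report.txt.tmp && mv report.txt.tmp report.txt; }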
Alternatively, run the script in a VM. Freeze the VM before shutting down the real system and resume it afterwards. It would probably take a really huge and complex shell script to make this worth it, though.

How do you make a bash script wait for second script to finish before continuing?

I have a script that does some processing and then will call another relevant script. This second script may not be the same each time.
How do I call the second script from bash and have my first script wait until it is finished before it continues? I also want to run the second script in its own window.
Currently I have:
gnome-terminal -x sh second.sh
But the first script continues whilst second is running.
Your problem here is not with bash (which processes commands in sequence unless you explicitly tell it not to using &), it's with gnome-terminal, which hands off your execution request to a background process and then terminates the one you called.
As far as I can tell, there is no way to get gnome-terminal to behave differently. An alternative might be to use xterm, which is synchronous by default.
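For example (xterm only exits when the command it runs has exited, so the calling script waits naturally):
xterm -e sh second.sh                  # opens a window, returns when second.sh finishes
echo "second.sh is done, continuing"   # runs only after that window has closed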

Shell script that can check if it was backgrounded at invocation

I have written a script that relies on other server responses (uses wget to pull data), and I want it to always be run in the background unquestionably. I know one solution is to just write a wrapper script that will call my script with an & appended, but I want to avoid that clutter.
Is there a way for a bash (or zsh) script to determine if it was called with say ./foo.sh &, and if not, exit and re-launch itself as such?
The definition of a background process (I think) is that it has a controlling terminal but it is not part of that terminal's foreground process group. I don't think any shell, even zsh, gives you any access to that information through a builtin.
On Linux (and perhaps other unices), the STAT column of ps includes a + when the process is part of its terminal's foreground process group. So a literal answer to your question is that you could put your script's content in a main function and invoke it with:
case $(ps -o stat= -p $$) in
*+*) main "$@" &;;
*) main "$@";;
esac
But you might as well run main "$@" & anyway. On Unix, fork is cheap.
However, I strongly advise against doing what you propose. This makes it impossible for someone to run your script and do something else afterwards — one would expect to be able to write your_script; my_postprocessing or your_script && my_postprocessing, but forking the script's main task makes this impossible. Considering that the gain is occasionally saving one character when the script is invoked, it's not worth making your script markedly less useful in this way.
If you really mean for the script to run in the background so that the user can close his terminal, you'll need to do more work — you'll need to daemonize the script, which includes not just backgrounding but also closing all file descriptors that have the terminal open, making the process a session leader and more. I think that will require splitting your script into a daemonizing wrapper script and a main script. But daemonizing is normally done for programs that never terminate unless explicitly stopped, which is not the behavior you describe.
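If you do go down that road, a rough sketch of just the backgrounding part might look like this; it is not full daemonization (no chdir, no umask reset), and the --daemonized marker flag is invented purely to stop the recursion:
if [ "$1" != "--daemonized" ]; then
    # re-launch ourselves in a new session, detached from the terminal
    setsid "$0" --daemonized "$@" </dev/null >/dev/null 2>&1 &
    exit 0
fi
shift    # drop the marker flag, then carry on with the real work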
I do not know how to detect this, but you can set a variable in the parent script and check for it in the child:
if [[ -z "$_BACKGROUNDED" ]] ; then
_BACKGROUNDED=1 exec "$0" "$@" & exit
fi
# Put code here
Works both in bash and zsh.
The "tty" command prints "not a tty" if you're in the background, or prints the controlling terminal's name (/dev/pts/1, for example) if you're in the foreground. A simple way to tell.
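A literal sketch of that check; note that tty reports on whatever standard input happens to be attached to when the script runs:
if [ "$(tty)" = "not a tty" ]; then
    echo "no terminal on stdin; presumably detached or backgrounded"
else
    echo "running on terminal $(tty)"
fi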
Remember that you can't (or at least shouldn't) edit a running script. This question and its answers give workarounds.
I haven't written shell scripts in a long time, but I can give you an idea (I hope): check the value of $$ (the PID of the current process) and compare it with the output of the "jobs -l" command. That command returns the PIDs of all backgrounded jobs; if the value of $$ appears in that output, the current script is running in the background.

need a script which will invoke other process/script and exit, but the invoked process should continue running

I'm trying to run a script that internally invokes another script,
but the main script should exit after invoking it, and the invoked script should keep running independently in the background.
How can I achieve this in shell scripting? Or is there another way to do it?
Regards,
senny
nohup otherscript &
nohup ensures that the process keeps running even if the current terminal goes away (for example, if you close the window).
(Just to make it clear: the "&" puts the other script in the background, which means the first will keep running, and the second script won't exit when the first one does.)
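Put together, a minimal main script might look like this (worker.sh and worker.log are placeholder names):
nohup ./worker.sh > worker.log 2>&1 &    # detach from the terminal, capture output
echo "started worker with PID $!"
exit 0                                    # main script ends; worker.sh keeps running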
If your script is in Perl, you can use exec() to start the second script.
exec() never returns: it replaces the calling script's process with the second script, so the first script effectively ends at that point while the second one keeps running.
http://perl.about.com/od/programmingperl/qt/perlexecsystem.htm
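The same trick exists in plain shell with the exec builtin, which replaces the current script's process outright:
exec ./second.sh    # second.sh takes over this process; nothing after this line runs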
