Terminating a shell function non-interactively - shell

Is there a way to terminate a shell function non-interactively without killing the shell that's running it?
I know that the shell can be told how to respond to a signal (e.g. USR1), but I can't figure out how the signal handler would terminate the function.
If necessary you may assume that the function to be terminated has been written in such a way that it is "terminable" (i.e. by declaring some suitable options).
(My immediate interest is in how to do this for zsh, but I'm also interested in knowing how to do it for bash and for /bin/sh.)
EDIT: In response to Rob Watt's suggestion:
% donothing () { echo $$; sleep 1000000 }
% donothing
47139
If at this point I hit Ctrl-C at the same terminal that is running the shell, then the function donothing does indeed terminate, and I get the command prompt back. But if instead, from a different shell session, I run
% kill -s INT 47139
...the donothing function does not terminate.

Maybe I don't fully understand what you want, but maybe something like this?
trap "stopme=1" 2
function longcycle() {
last=$1
for i in 1 2 3 4 5
do
[ ! -z "$stopme" ] && return
echo $i
sleep 1
done
}
stopme=""
echo "Start 1st cycle"
longcycle
echo "1st cycle end"
echo "2nd cycle"
stopme=""
longcycle
echo "2nd cycle end"
The above is for bash. Run it and try pressing CTRL-C.
Or, non-interactively: save the above as, for example, my_command, then try:
$ ./my_command & #into background
$ kill -2 $! #send CTRL-C to the bg process
EDIT:
Solution for your sleep example in the bash:
$ donothing() { trap '[[ $mypid ]] && trap - 2 && kill $mypid' 0 2; sleep 1000000 & mypid=$!;wait; }
$ donothing
Sending a signal from another terminal will now terminate it. Remember, "signal" 0 is the EXIT trap (normal end of the process). Semantic names: 0=EXIT, 2=INT, etc.
And remember too that signals are sent to processes, not to functions. In your example, the process is the current (interactive) shell, so you must use the wait trick to get something interruptible. Not a nice solution, but it's the only way to interrupt something running in the interactive shell (not a forked one) from another terminal.
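Spelled out as a standalone function, the same trick looks roughly like the sketch below (long_task and the sleep are placeholders for the real work; adjust the trapped signals to taste):
long_task() {
    sleep 1000000 &                     # the real work runs as a child process
    local pid=$!
    trap 'kill "$pid" 2>/dev/null' INT TERM
    wait "$pid"                         # returns early when a trapped signal arrives
    trap - INT TERM                     # restore default signal handling
}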

Related

How to cleanly print a message in a bash shell after a background process finishes?

I want to start a lengthy process in the background in my bash shell, get notified when that process is finished, and then be returned to the command line in elegant fashion. Here's what I have so far:
echo $(lengthy_process >/dev/null 2>&1 ; printf "consummatum est.\r" ) &
This almost works. The message "consummatum est" eventually shows, but it leaves my command prompt in an ugly/indeterminate state with the text interjected into what I happen to be typing.
Is there a way to get the background process to print to terminal without interrupting what I'm doing and without requiring a carriage return to get the command prompt into a fresh state?
a more modern take with notify-send
( lengthy_process &>/dev/null; notify-send "done" ) &
otherwise you're asking for the interruption. You may want to display exit status as well.
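For example, a minimal sketch that also reports the exit status in the notification (lengthy_process stands in for the real command):
( lengthy_process &>/dev/null; notify-send "lengthy_process finished" "exit status: $?" ) &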
You can create a script like this (show-msg) that saves the cursor position, prints the message and restores the cursor position after.
#! /bin/bash
tput sc; tput cup 0 0
printf '%s
================
PROCESS FINISHED
================
%s\n' "$(tput setab 13)" "$(tput sgr0)"
tput rc
and then
( lengthy_process &>/dev/null; ./show-msg ) &
which will show the message without interfering with your typing.
I realized the command prompt in the main shell was still active when the final echo prints its message, but it was printing over the prompt and entered text, so you couldn't tell what was happening. (Hitting enter would therefore execute whatever had been written at the command prompt at the moment the sub process completed).
After trying a LOT of different approaches, I eventually settled on one that suits my purposes. IMO, the key thing to do to avoid a confusing mess is to first issue a CTRL-C command from the subshell(s) to the main shell (in order to cancel anything that may have been written at the moment the lengthy process completes), and THEN print the notification to terminal.
Here's what it looks like (with sleep 3 instead of lengthy_process):
TOPSHELLPID=$$; ( (TEMP=$(sleep 3; echo -e "kill -INT $TOPSHELLPID; echo '\n\n===============\nConsummatum Est\n===============\n'; kill -INT $TOPSHELLPID" ); bash -c "$TEMP") & )

Bash - How to show variable value instead of name in background process descriptions?

I have a function running process like this:
hello () {
    year=2001
    longRunProcess $year
}
When I execute this function in the background, the description of the running process is:
[1] longRunProcess $year
where the value of $year is lost. This causes me some trouble when running my hello() function in a for loop, because I cannot tell which year's process is still running and which is not.
Is there a way to show $year in the job description as its value instead of its name?
If nothing else, you could use eval to expand the value before the command is invoked, rather than during:
$ arg=2001
$ sleep "$arg" &
[1] 4088610
$ eval "sleep ${arg#Q} &"
[2] 4088612
$ jobs
[1]- Running sleep "$arg" &
[2]+ Running sleep '2001' &
${arg@Q} shell-escapes the value of arg, so that the eval won't be fragile or insecure if the argument changes from a simple number to arbitrary text.
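To illustrate why that escaping matters, here is a small sketch with a deliberately nasty (hypothetical) value:
$ arg='10; echo pwned'
$ eval "sleep $arg &"        # unsafe: actually runs  sleep 10; echo pwned &
$ eval "sleep ${arg@Q} &"    # safe: runs  sleep '10; echo pwned' and nothing else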

Implement shortcut keys in a script

I have a program, and while it is running I'd like to give the user the ability to perform an action e.g. exit the program by pressing ctrl+x .
It would be great if anyone could help, can't seem to find the correct syntax online.
echo "Many thanks"
You can trap the keyboard interrupt SIGINT generated by Ctrl+C with something like this:
#!/bin/bash
tidy_close() {
    echo "Ending gracefully, stage $1"
    exit 0
}
trap 'tidy_close $stage' INT
while :
do
    (( stage++ ))
    echo "Pondering the answers"
    sleep 2
done
The function can have any code you need to shutdown. The stage variable is just to illustrate how to pass a value, if you need it.
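Since the question asks about Ctrl+X specifically, and that key is not tied to a signal, a rough sketch (assuming the script is free to poll the keyboard itself) is to read single characters and compare against the Ctrl+X byte (0x18):
#!/bin/bash
while :
do
    echo "Pondering the answers"
    # read one key silently, waiting at most 2 seconds (doubles as the sleep)
    if IFS= read -rsn1 -t 2 key && [[ $key == $'\x18' ]]; then
        echo "Ctrl+X pressed, exiting"
        exit 0
    fi
done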

How to get a stdout message once a background process finishes?

I realize that there are several other questions on SE about notifications upon completion of background tasks, and how to queue up jobs to start after others end, and questions like these, but I am looking for a simpler answer to a simpler question.
I want to start a very simple background job, and get a simple stdout text notification of its completion.
For example:
cp My_Huge_File.txt New_directory &
...and when it's done, my bash shell would display a message. This message could just be the completed job's PID, but if I could program unique messages per background process, that would be cool too, so I could have numerous background jobs running without confusion.
Thanks for any suggestions!
EDIT: user000001's answer separates commands with ;. I separated commands with && in my original example. The only difference I notice is that you don't have to surround your base command with braces if you use &&. Semicolons are a bit more flexible, so I've updated my examples.
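For reference, the two forms compared in that edit look roughly like this (sleep stands in for the real job):
sleep 2 && echo "Sleep done" &           # && needs no braces; the & backgrounds the whole list
{ sleep 2; echo "Sleep done"; } &        # ; needs the braces to group the commands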
The first thing that comes to mind is
{ sleep 2; echo "Sleep done"; } &
You can also suppress the accompanying stderr output from the above line:
{ { sleep 2; echo "Sleep done"; } & } 2>/dev/null
If you want to save your program output (stdout) to a log file for later viewing, you can use:
{ { sleep 2; echo "Sleep done"; } & } 2>/dev/null 1>myfile.log
Here's even a generic form you might use (You can even make an alias so that you can run it at any time without having to type so much!):
# don't hesitate to add semicolons for multiple commands
CMD="cp My_Huge_File.txt New_directory"
{ eval $CMD & } 2>/dev/null 1>myfile.log
You might also pipe stdout into another process using | in case you wish to process output in real time with other scripts or software. tee is also a helpful tool in case you wish to use multiple pipes. For reference, there are more examples of I/O redirection here.
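For instance, a small sketch that keeps a copy of the output in a log file with tee while still printing the completion message to the terminal:
{ sleep 2; echo "Sleep done"; } 2>/dev/null | tee myfile.log &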
You could use command grouping:
{ slow_program; echo ok; } &
or the wait command
slow_program &
wait
echo ok
The most reliable way is to simply have the output from the background process go to a temporary file and then consume the temporary file.
When you have a background process running, it can be difficult to capture the output into something useful because multiple jobs will overwrite each other.
For example, if you have two processes which each print out a string with a number "this is my string1" "this is my string2" then it is possible for you to end up with output that looks like this:
"this is mthis is my string2y string1"
instead of:
this is my string1
this is my string2
By using temporary files you guarantee that the output will be correct.
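A minimal sketch of that approach (lengthy_process is a placeholder, and the temporary file name comes from mktemp):
tmp=$(mktemp)
{ lengthy_process >"$tmp" 2>&1; echo "lengthy_process finished, output in $tmp"; } &
pid=$!
# ... carry on with other work, then collect the result ...
wait "$pid"
cat "$tmp"
rm -f "$tmp"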
As I mentioned in my comment above, bash already does this kind of notification by default, as far as I know. Here's an example I just made:
$ sleep 5 &
[1] 25301
$ sleep 10 &
[2] 25305
$ sleep 3 &
[3] 25309
$ jobs
[1] Done sleep 5
[2]- Running sleep 10 &
[3]+ Running sleep 3 &
$ :
[3]+ Done sleep 3
$ :
[2]+ Done sleep 10
$

Bash: How to lock also when perform an outside script

Here is my bash code:
(
    flock -n -e 200 || (echo "This script is currently being run" && exit 1)
    sleep 10
    ...Call some functions which is written in another script...
    sleep 5
) 200>/tmp/blah.lockfile
I'm running the script from two shells in succession, and as long as the first one is at "sleep 5" all goes well, meaning the other one doesn't start. But when the first one moves on to run the code from the other script (the other file), the second run starts executing.
So I have two questions here:
What should I do to prevent this script and all its "children" from running while the script OR one of its "children" is still running?
(I didn't find a more appropriate expression for running another script other than a "child", sorry for that :) ).
According to the man page, -n causes the process to exit when it fails to acquire the lock, but as far as I can see it just waits until it can run. What am I missing?
Your problem may be fairly mundane. Namely,
false || ( exit 1 )
does not cause the script to exit. Rather, the exit instructs the subshell to exit. So change your first line to:
flock -n -e 200 || { echo "This script is currently being run"; exit 1; } >&2
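Putting that fix back into the original block, the whole thing would look roughly like this (the lock file path is the one from the question):
(
    flock -n -e 200 || { echo "This script is currently being run"; exit 1; } >&2
    sleep 10
    # call the functions from the other script here; they inherit fd 200,
    # so the lock stays held until this subshell exits
    sleep 5
) 200>/tmp/blah.lockfile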
