Implement shortcut keys in a script - bash

I have a program, and while it is running I'd like to give the user the ability to perform an action, e.g. exit the program, by pressing Ctrl+X.
It would be great if anyone could help; I can't seem to find the correct syntax online.
echo "Many thanks"

You can trap the keyboard interrupt SIGINT generated by Ctrl+C with something like this:
#!/bin/bash
tidy_close() {
    echo "Ending gracefully, stage $1"
    exit 0
}
trap 'tidy_close $stage' INT
while :
do
    (( stage++ ))
    echo "Pondering the answers"
    sleep 2
done
The function can contain whatever code you need to shut down cleanly. The stage variable is just there to illustrate how to pass a value, if you need one.
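The answer above relies on Ctrl+C generating SIGINT. If you specifically want Ctrl+X, which does not generate a signal by default, one option is to read keystrokes yourself. A minimal sketch, assuming the script owns the terminal: read -rsn1 with a timeout checks for the Ctrl+X byte (0x18) between iterations of the work loop.
#!/bin/bash
# Sketch: poll for a single raw keystroke and exit when Ctrl+X (byte 0x18) is pressed.
while :
do
    echo "Pondering the answers"
    # -r raw, -s silent, -n1 one character, -t 1 is a one-second timeout so the loop keeps going
    if read -rsn1 -t 1 key && [[ $key == $'\x18' ]]; then
        echo "Ctrl+X pressed, ending gracefully"
        exit 0
    fi
done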

Related

How to cleanly print a message in a bash shell after a background process finishes?

I want to start a lengthy process in the background in my bash shell, get notified when that process is finished, and then be returned to the command line in elegant fashion. Here's what I have so far:
echo $(lengthy_process >/dev/null 2>&1 ; printf "consummatum est.\r" ) &
This almost works. The message "consummatum est" eventually shows, but it leaves my command prompt in an ugly/indeterminate state with the text interjected into what I happen to be typing.
Is there a way to get the background process to print to terminal without interrupting what I'm doing and without requiring a carriage return to get the command prompt into a fresh state?
A more modern take, with notify-send:
( lengthy_process &>/dev/null; notify-send "done" ) &
Otherwise you're asking for the interruption. You may want to display the exit status as well.
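For example, a small sketch building on the line above; $? expands inside the subshell after lengthy_process finishes, so it carries that command's exit status:
( lengthy_process &>/dev/null; notify-send "lengthy_process done (exit $?)" ) &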
You can create a script like this (show-msg) that saves the cursor position, prints the message at the top of the screen, and restores the cursor position afterwards.
#! /bin/bash
tput sc; tput cup 0 0
printf '%s
================
PROCESS FINISHED
================
%s\n' "$(tput setab 13)" "$(tput sgr0)"
tput rc
and then
( lengthy_process &>/dev/null; ./show-msg ) &
which will show the message without interfering with your typing.
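Assuming show-msg is saved in the current directory, as in the call above, it also needs to be executable:
chmod +x show-msg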
I realized the command prompt in the main shell was still active when the final echo printed its message, but the message was printing over the prompt and any text already entered, so you couldn't tell what was happening. (Hitting Enter would therefore execute whatever had been typed at the command prompt at the moment the subprocess completed.)
After trying a LOT of different approaches, I eventually settled on one that suits my purposes. IMO, the key thing to do to avoid a confusing mess is to first issue a Ctrl+C (SIGINT) from the subshell to the main shell (to cancel anything that may have been typed at the moment the lengthy process completes), and THEN print the notification to the terminal.
Here's what it looks like (with sleep 3 instead of lengthy_process):
TOPSHELLPID=$$; ( (TEMP=$(sleep 3; echo -e "kill -INT $TOPSHELLPID; echo '\n\n===============\nConsummatum Est\n===============\n'; kill -INT $TOPSHELLPID" ); bash -c "$TEMP") & )
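For readability, here is a hedged restatement of the same idea as a multi-line snippet (sleep 3 again stands in for lengthy_process; behavior may differ slightly from the original one-liner, which goes through an extra bash -c indirection):
TOPSHELLPID=$$
(
    sleep 3                    # the lengthy process
    kill -INT "$TOPSHELLPID"   # cancel whatever is currently typed at the main prompt
    echo -e '\n\n===============\nConsummatum Est\n===============\n'
    kill -INT "$TOPSHELLPID"   # nudge the main shell into printing a fresh prompt
) &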

Trap doesn't exit the loop

I'm trying to do a cleanup using the trap command. The safe_cancel function is called when I hit Ctrl+C, but the script doesn't exit. I have to use Ctrl+Z to suspend the script and then kill it.
foo is another script in my PATH that exits with status 1 if it receives an invalid argument.
What am I lacking or doing wrong in this script?
#!/bin/bash
safe_cancel () {
    echo "Cancelling..."
    # do some cleanup here
    exit 1
}
trap safe_cancel 1
while true; do
    read -p "Choose an option: " someOption < /dev/tty
    foo $someOption
    if [[ $? == 0 ]]
    then
        break
        exit 0
    fi
done
Additional details:
I'm writing this script for a Git hook. Apparently, Git hooks don't get the usual standard input/output attached, so I have to read from /dev/tty explicitly.
Edit:
When using this as part of a git hook, I'm receiving the error
read: read error: 0: Input/output error
and it's an infinite loop
Signal 1 is SIGHUP, which is raised if the terminal goes away, for example because you were connected from a remote machine and your session was cut off when the network disconnected. Pressing Ctrl+C sends SIGINT instead, so trap both:
trap safe_cancel HUP INT
This may or may not be related to the error you get with Git.
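Putting it together, a minimal sketch of the corrected loop (foo is still the asker's own helper script, and the rest follows the question's structure):
#!/bin/bash
safe_cancel () {
    echo "Cancelling..."
    # do some cleanup here
    exit 1
}
# Trap SIGHUP and SIGINT so Ctrl+C actually runs the cleanup and exits
trap safe_cancel HUP INT
while true; do
    read -p "Choose an option: " someOption < /dev/tty
    if foo "$someOption"; then
        break
    fi
done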

How to safely exit early from a bash script?

I know there are several SO questions on exit vs. return in bash scripts (e.g. here).
On this topic, but distinct (I believe) from the existing questions, I'd like to know whether there is a "best practice" for safely implementing an "early return" from a bash script, such that the user's current shell is not exited if they source the script.
Answers such as this seem based on "exit", but if the script is sourced, i.e. run with a "." (dot space) prefix, the script runs in the current shell's context, in which case exit statements have the effect of exiting the current shell. I assume this is an undesirable result because a script doesn't know if it is being sourced or being run in a subshell - if the former, the user may unexpectedly have his shell disappear. Is there a way/best practice for early-returns to not exit the current shell if the caller sources it?
E.g. This script...
#! /usr/bin/bash
# f.sh
func()
{
    return 42
}
func
retVal=$?
if [ "${retVal}" -ne 0 ]; then
    exit "${retVal}"
    # return ${retVal} # Can't do this; I get a "./f.sh: line 13: return: can only `return' from a function or sourced script"
fi
echo "don't wanna reach here"
echo "don't wanna reach here"
...runs without killing my current shell if it is run from a subshell...
> ./f.sh
>
...but kills my current shell if it is sourced:
> . ./f.sh
One idea that comes to mind is to nest code within conditionals so that there is no explicit exit statement, but my C/C++ bias makes me think of early returns as aesthetically preferable to nested code. Are there other solutions that are truly "early return"?
The most common way to bail out of a script without terminating the parent shell is to try return first; if that fails, exit.
Your code will look like this:
#! /usr/bin/bash
# f.sh
func()
{
    return 42
}
func
retVal=$?
if [ "${retVal}" -ne 0 ]; then
    return ${retVal} 2>/dev/null # this will attempt to return
    exit "${retVal}"             # this will get executed if the above failed
fi
echo "don't wanna reach here"
You can also use return ${retVal} 2>/dev/null || exit "${retVal}".
Hope this helps.
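As an aside (my own addition, not part of the answer above), another common idiom is to detect whether the script is being sourced and choose return or exit explicitly; a minimal sketch:
#! /usr/bin/bash
# BASH_SOURCE[0] differs from $0 when the script is sourced rather than executed.
if [[ "${BASH_SOURCE[0]}" != "$0" ]]; then
    return 42   # sourced: hand control back to the caller's shell
else
    exit 42     # executed: exit only the subshell running the script
fi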

Terminating a shell function non-interactively

Is there a way to terminate a shell function non-interactively without killing the shell that's running it?
I know that the shell can be told how to respond to a signal (e.g. USR1), but I can't figure out how the signal handler would terminate the function.
If necessary you may assume that the function to be terminated has been written in such a way that it is "terminable" (i.e. by declaring some suitable options).
(My immediate interest is in how to do this for zsh, but I'm also interested in knowing how to do it for bash and for /bin/sh.)
EDIT: In response to Rob Watt's suggestion:
% donothing () { echo $$; sleep 1000000 }
% donothing
47139
If at this point I hit Ctrl-C at the same terminal that is running the shell, then the function donothing does indeed terminate, and I get the command prompt back. But if instead, from a different shell session, I run
% kill -s INT 47139
...the donothing function does not terminate.
Maybe I don't fully understand what you want, but perhaps something like this?
trap "stopme=1" 2
function longcycle() {
last=$1
for i in 1 2 3 4 5
do
[ ! -z "$stopme" ] && return
echo $i
sleep 1
done
}
stopme=""
echo "Start 1st cycle"
longcycle
echo "1st cycle end"
echo "2nd cycle"
stopme=""
longcycle
echo "2nd cycle end"
The above is for bash. Run it and try pressing Ctrl+C.
Or, non-interactively: save the above as, for example, my_command, then try:
$ ./my_command & #into background
$ kill -2 $! #send CTRL-C to the bg process
EDIT:
Solution for your sleep example in bash:
$ donothing() { trap '[[ $mypid ]] && trap - 2 && kill $mypid' 0 2; sleep 1000000 & mypid=$!;wait; }
$ donothing
Sending a signal from another terminal will now terminate it. Remember, signal '0' stands for normal end of the process; the semantic names are 0=EXIT, 2=INT, etc.
And remember too that signals are sent to processes, not to functions. In your example, the process is the current (interactive) shell, so you must use the wait trick to get something interruptible... Not a nice solution, but it's the only way to interrupt something running in the interactive shell (not a forked one) from another terminal...
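For readability, the same one-liner spelled out (a reformatted restatement of the line above, not a new technique):
donothing () {
    # On EXIT (0) or INT (2): drop the trap and kill the background sleep, if any
    trap '[[ $mypid ]] && trap - 2 && kill $mypid' 0 2
    sleep 1000000 &    # run the long job as a forked child...
    mypid=$!
    wait               # ...and wait, so a trapped signal can interrupt the function
}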

How to re-prompt after a trap return in bash?

I have a script that is supposed to trap SIGTERM and SIGTSTP. This is what I have in the main block:
trap 'killHandling' TERM
And in the function:
killHandling () {
    echo received kill signal, ignoring
    return
}
... and similar for SIGINT. The problem is one of user interface. The script prompts the user for some input, and if the SIGTERM or SIGINT occurs when the script is waiting for input, it's confusing. Here is the output in that case:
Enter something: # SIGTERM received
received kill signal, ignoring
# shell waits at blank line for user input, user gets confused
# user hits "return", which then gets read as blank input from the user
# bad things happen because of the blank input
I have definitely seen scripts which handle this more elegantly, like so:
Enter something: # SIGTERM received
received kill signal, ignoring
Enter something: # re-prompts user for user input, user is not confused
What is the mechanism used to accomplish the latter? Unfortunately I can't simply change my trap code to do the re-prompt as the script prompts the user for several things and what the prompt says is context-dependent. And there has to be a better way than writing context-dependent trap functions.
I'd be very grateful for any pointers. Thanks!
These aren't terribly robust methods--there are some issues with the way it handles CTRL-C as a character after the first trap, for example--but they both handle the use case you defined.
Use BASH_COMMAND to re-run the last command (e.g. read).
prompt () {
    read -p 'Prompting: '
}
reprompt () {
    echo >&2
    eval "$BASH_COMMAND"
}
trap "reprompt" INT
prompt
In this case, BASH_COMMAND evaluates to read -p 'Prompting: '. The command then needs to be reprocessed with eval. If you don't eval it, you can end up with weird quoting problems. YMMV.
Use FUNCNAME to re-run previous function in call stack.
prompt () {
    read -p 'Prompting: '
}
reprompt () {
    echo >&2
    "${FUNCNAME[1]}"
}
trap "reprompt" INT
prompt
In this example, FUNCNAME[1] expands to prompt, which is the previous function in the stack. We just call it again recursively, as many times as needed.
The answer CodeGnome gave works, but as he points out, it is not robust; a second control-c causes undesirable behavior. I ultimately got around the problem by making better use of existing input validation in the code. So my interrupt handling code now looks like this:
killHandling () {
    echo received kill signal, ignoring
    echo "<<Enter>> to continue"
    return
}
Now the cursor still waits at a blank line for user input, but the user is not confused, and hits the "Enter" key as prompted. Then the script's input validation detects that a blank line has been entered, which is treated as invalid input, and the user is re-prompted to enter something.
I remain grateful to CodeGnome for his suggestions, from which I learned a couple of things. And I apologize for the delay in posting this answer.
