URxvt as a shell-to-run-things, or: how to exit bash on <Enter> but execute the string entered so far? - bash

What I am looking for is a replacement for all those things you’ve probably seen when hitting Mod+R in various tiling WMs or Alt+F2 in the GNOME DE: a small window to run things from. I feel quite lost without my bash aliases and other setup, because those runner shells (at least the ones I can use now) are non-interactive, and their interactiveness can’t be enabled as an option at run time.
This is why I decided to use a URxvt window as a ‘shell for one command’. In my WM I have a Mod+R shortcut bound to execute
/bin/bash -c 'export ONE_COMMAND_SHELL=t && urxvt -name one_command_shell'
And put
[ -v ONE_COMMAND_SHELL ] && bind -x '"\C-m":"exit"'
in my ~/.bashrc
This way I can distinguish a URxvt instance that should become the ‘shell for one command’ and bind the C-m combination (yes, it also fires for Enter) to exit the shell and, therefore, URxvt. The problem is: how do I execute the string entered so far before exiting?
I found two potential clues:
a) Making use of BASH_COMMAND
BASH_COMMAND
The command currently being executed or about to be executed, unless the
shell is executing a command as the result of a trap, in which case it is
the command executing at the time of the trap.
But I didn’t get it working, because
[ -v ONE_COMMAND_SHELL ] && bind -x '"\C-m":"exec $BASH_COMMAND exit"'
just recurses into exec exec exec.
b) A trap on SIGCHLD!
It may work for you if you place
[ -v ONE_COMMAND_SHELL ] && {
    PS1=
    eaoc() { exit; }
    trap eaoc SIGCHLD
}
at the end of your ~/.bashrc. PS1 may be non-empty, but it must not contain subshell calls, or the trap will fire and the shell will exit before the prompt is even generated. This holds URxvt open until the forked process exits, so it may be considered halfway to a solution.
c) A trap on DEBUG.
The DEBUG trap executes before every simple command and before the first command in each function. So what if we provide a counter via OC_SHELL and count the number of executed commands, like…
/bin/bash -c 'export OC_SHELL=0 && urxvt -name one_command_shell'
# note a zero ----------------^
and at the end of ~/.bashrc:
[ -v OC_SHELL ] && {
    export PS1=
    eaoc() { echo $OC_SHELL && [ $(( OC_SHELL++ )) -ge 1 ] && wait $! && exit; }
    trap eaoc DEBUG
}
We fail again, because a process forked like
$ gimp &
dies with its parent. How do we deal with that?
The interesting thing is that the DEBUG trap executes right after the command is entered; it does not wait for the process to return its status code (or, with &, the PID of the background process).
OK, so that is the task: create a process independent of bash. We need to somehow wrap the entered string into (nohup … &).

This works for me:
one_command_execute() {
    # $READLINE_LINE is the line typed so far; run it in the background
    eval $(echo "$READLINE_LINE") &
    exit
}
[ -v ONE_COMMAND_SHELL ] && bind -x '"\C-m":"one_command_execute"'
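A caveat that ties back to the (nohup … &) idea above: a plain background job can still be sent SIGHUP when the shell exits and the terminal disappears. A minimal hedged variant of the same handler (the nohup/disown wrapping is my addition, not part of the original answer):
one_command_execute() {
    # nohup detaches the command from the terminal; disown removes the job
    # from the shell's job table so it is not HUPed when the shell exits
    nohup bash -c "$READLINE_LINE" >/dev/null 2>&1 &
    disown
    exit
}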
Another version, without eval:
one_command_checkexit() {
    [ -v ONE_COMMAND_DONE ] && exit
    ONE_COMMAND_DONE=1
}
[ -v ONE_COMMAND_SHELL ] && PROMPT_COMMAND=one_command_checkexit
This doesn't close the window until the command exits. To execute everything in the background automatically, add:
[ -v ONE_COMMAND_SHELL ] && bind '"\C-m":" & \n"'
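Putting the last two snippets together, a minimal combined ~/.bashrc sketch (assembled from the pieces above; treat it as a starting point rather than a tested solution):
[ -v ONE_COMMAND_SHELL ] && {
    # close the window once the first prompt has been redrawn
    one_command_checkexit() {
        [ -v ONE_COMMAND_DONE ] && exit
        ONE_COMMAND_DONE=1
    }
    PROMPT_COMMAND=one_command_checkexit
    # make Enter append ' & ' so every entered command runs in the background
    bind '"\C-m":" & \n"'
}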

Related

How to exit gitlab job when script fails [duplicate]

I have a Bash shell script that invokes a number of commands.
I would like to have the shell script automatically exit with a return value of 1 if any of the commands return a non-zero value.
Is this possible without explicitly checking the result of each command?
For example,
dosomething1
if [[ $? -ne 0 ]]; then
    exit 1
fi
dosomething2
if [[ $? -ne 0 ]]; then
    exit 1
fi
Add this to the beginning of the script:
set -e
This will cause the shell to exit immediately if a simple command exits with a nonzero exit value. A simple command is any command not part of an if, while, or until test, or part of an && or || list.
See the bash manual on the "set" internal command for more details.
It's really annoying to have a script stubbornly continue when something fails in the middle and breaks assumptions for the rest of the script. I personally start almost all portable shell scripts with set -e.
If I'm working with bash specifically, I'll start with
set -Eeuo pipefail
This covers more error handling in a similar fashion. I consider these as sane defaults for new bash programs. Refer to the bash manual for more information on what these options do.
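For reference, a commented breakdown of what each of those options does (per the bash manual):
# set -E           the ERR trap is inherited by functions, command
#                  substitutions, and subshells
# set -e           exit immediately when a command fails
# set -u           treat expansion of unset variables as an error
# set -o pipefail  a pipeline fails if any command in it fails,
#                  not just the last one
set -Eeuo pipefail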
To add to the accepted answer:
Bear in mind that set -e sometimes is not enough, especially if you have pipes.
For example, suppose you have this script
#!/bin/bash
set -e
./configure > configure.log
make
... which works as expected: an error in configure aborts the execution.
Tomorrow you make a seemingly trivial change:
#!/bin/bash
set -e
./configure | tee configure.log
make
... and now it does not work. This is explained here, and a workaround (Bash only) is provided:
#!/bin/bash
set -e
set -o pipefail
./configure | tee configure.log
make
The if statements in your example are unnecessary. Just do it like this:
dosomething1 || exit 1
If you take Ville Laurikari's advice and use set -e then for some commands you may need to use this:
dosomething || true
The || true will make the command pipeline have a true return value even if the command fails, so the -e option will not kill the script.
If you have cleanup you need to do on exit, you can also use 'trap' with the pseudo-signal ERR. This works the same way as trapping INT or any other signal; bash throws ERR if any command exits with a nonzero value:
# Create the trap with
# trap COMMAND SIGNAME [SIGNAME2 SIGNAME3...]
trap "rm -f /tmp/$MYTMPFILE; exit 1" ERR INT TERM
command1
command2
command3
# Partially turn off the trap.
trap - ERR
# Now a control-C will still cause cleanup, but
# a nonzero exit code won't:
ps aux | grep blahblahblah
Or, especially if you're using "set -e", you could trap EXIT; your trap will then be executed when the script exits for any reason, including a normal end, interrupts, an exit caused by the -e option, etc.
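A minimal sketch of that EXIT-trap pattern (command1 and command2 are placeholders):
#!/bin/bash
set -e
tmpfile=$(mktemp)
# runs whenever the script exits: a normal end, a set -e abort, or an interrupt
trap 'rm -f "$tmpfile"' EXIT
command1 > "$tmpfile"
command2 < "$tmpfile"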
The $? variable is rarely needed. The pseudo-idiom command; if [ $? -eq 0 ]; then X; fi should always be written as if command; then X; fi.
The cases where $? is required is when it needs to be checked against multiple values:
command
case $? in
    (0) X;;
    (1) Y;;
    (2) Z;;
esac
or when $? needs to be reused or otherwise manipulated:
if command; then
    echo "command successful" >&2
else
    ret=$?
    echo "command failed with exit code $ret" >&2
    exit $ret
fi
Run it with -e or set -e at the top.
Also look at set -u.
On error, the snippet below prints a RED error message and exits.
Put this at the top of your bash script:
# BASH error handling:
# exit on command failure
set -e
# keep track of the last executed command
trap 'LAST_COMMAND=$CURRENT_COMMAND; CURRENT_COMMAND=$BASH_COMMAND' DEBUG
# on error: print the failed command
trap 'ERROR_CODE=$?; FAILED_COMMAND=$LAST_COMMAND; tput setaf 1; echo "ERROR: command \"$FAILED_COMMAND\" failed with exit code $ERROR_CODE"; tput sgr0;' ERR INT TERM
An expression like
dosomething1 && dosomething2 && dosomething3
will stop processing when one of the commands returns with a non-zero value. For example, the following command will never print "done":
cat nosuchfile && echo "done"
echo $?
1
#!/bin/bash -e
should suffice. (Keep in mind that options on the shebang line are bypassed when the script is run as bash script.sh, so set -e inside the script is more robust.)
I am just throwing in another one for reference, since there was an additional question to Mark Edgar's input; here is an additional example that touches on the topic overall:
[[ `cmd` ]] && echo success_else_silence
Note that this tests whether cmd produced any output, not its exit status, so it is only loosely related to the cmd || exit errcode form shown earlier.
For example, I want to make sure a partition is unmounted if mounted:
[[ `mount | grep /dev/sda1` ]] && umount /dev/sda1
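A more direct way to express that unmount example is to test grep's exit status instead of capturing its output; this is my rewording of the same check, not part of the original answer:
# -q silences grep; the if tests its exit status directly
if mount | grep -q '/dev/sda1'; then
    umount /dev/sda1
fi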

bash function run command in background

issue related to this: Cannot make bash script work from cloud-init
I tried all kinds of variants like this:
function ge() {
    if [ "$1" == ""]
    then
        geany &
    else
        eval "geany $1 &"
        # also tried:
        geany $1 &
        geany "$1" &
        etc
    fi
}
I tried with or without eval, with $1 quoted or not etc.
In all cases (if it works at all) I get bash: [: some.txt: unary operator expected
What I want is that the editor opens/creates the file in the background, so I can still use the terminal for foreground tasks.
Ultimately, I want a working function that does what I intended above, but with geany replaced by $EDITOR. So on different platforms I can use a different editor.
Why is the syntax in functions different than in scripts?
It's certainly possible to start commands in the background via a script:
#!/bin/bash
cmd=geany
function ge {
    if [[ $# -eq 0 ]]
    then
        ${cmd} &
    else
        ${cmd} "$@" &
    fi
}
ge "$@"
or simpler:
#!/bin/bash
geany "$#" &
but starting an interactive command in the background and terminating the script is likely to fail, since the background command's stdin will be closed as soon as the script dies.
You can, however, wait for the background command to finish before terminating the script to work around that.
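For the asker's ultimate goal (the same function with geany replaced by $EDITOR), a minimal sketch, assuming $EDITOR points at a GUI editor that tolerates being backgrounded (a terminal editor such as vi would fight over the tty):
ge() {
    # fall back to geany when $EDITOR is unset or empty
    "${EDITOR:-geany}" "$@" &
}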

How to exit tmux without leaving my terminal

I'm using tmux with bash, letting it start automatically from .bashrc. Sometimes I want it disabled, and currently I have to edit my .bashrc to do so. Editing a file every time I want to disable tmux is quite troublesome; the easiest way to do the same thing would be exiting tmux without leaving the terminal. Can I do that?
When I type exit, bash and the terminal close. I tried exec bash, but it just restarted bash inside tmux.
I start tmux from the code below, according to https://wiki.archlinux.org/index.php/tmux#Bash.
if [[ $DISPLAY ]]; then
    # If not running interactively, do not do anything
    [[ $- != *i* ]] && return
    [[ -z "$TMUX" ]] && exec tmux
fi
If I just run tmux instead of exec tmux in the code above, I can achieve my goal. But I don't like that, because I don't understand why the code uses exec tmux rather than tmux and don't want to change it rashly, and with plain tmux I have to type exit or C-d twice to exit the terminal.
(Note: this question should really be on unix.stackexchange.com). One simple solution is to replace the line
[[ -z "$TMUX" ]] && exec tmux
with
[[ -z "$TMUX" ]] && { tmux; [ ! -f ~/dontdie ] && exit || rm ~/dontdie; }
This runs tmux as before, but when it exits, goes on to test for the existence of a file, ~/dontdie. If the file does not exist, the && exit runs and the terminal closes as before. If, however, you create the file before leaving tmux, the || rm ... part runs instead, which removes the file and continues through the rest of the .bashrc file, leaving you in the bash shell.
So, to stay in the terminal, from the tmux window you type the commands:
touch ~/dontdie; exit
instead of just exit, and you will exit tmux and continue in bash.
To make it easier you can add a binding in ~/.tmux.conf to a key, such as X:
bind-key X send-keys 'touch ~/dontdie; exit' Enter
Then you simply type your prefix character, Control-B by default, then X to create the file and exit tmux.

Bash - Hiding a command but not its output

I have a bash script (this_script.sh) that invokes multiple instances of another TCL script.
set -m
for vars in $( cat vars.txt );
do
    exec tclsh8.5 the_script.tcl "$vars" &
done
while [ 1 ]; do fg 2> /dev/null; [ $? == 1 ] && break; done
The multi threading portion was taken from Aleksandr's answer on: Forking / Multi-Threaded Processes | Bash.
The script works perfectly (I'm still trying to figure out the last line). However, this line is always displayed: exec tclsh8.5 the_script.tcl "$vars"
How do I hide that line? I tried running the script as:
bash this_script.sh > /dev/null
But this hides the output of the invoked tcl scripts too (I need the output of the TCL scripts).
I tried adding /dev/null to the end of the statement within the for loop, but that did not work either. Basically, I am trying to hide the command but not the output.
You should use $! to get the PID of the background process just started, accumulate those in a variable, and then wait for each of those in turn in a second for loop.
set -m
pids=""
for vars in $( cat vars.txt ); do
    tclsh8.5 the_script.tcl "$vars" &
    pids="$pids $!"
done
for pid in $pids; do
    wait $pid
    # Ought to look at $? for failures, but there's no point in not reaping them all
done
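One caveat with for vars in $( cat vars.txt ): the shell word-splits on any whitespace, so a line containing spaces becomes several iterations. A hedged variant of the same loop that treats each line of vars.txt as a single argument (my adjustment, not part of the original answer):
pids=""
while IFS= read -r vars; do
    tclsh8.5 the_script.tcl "$vars" &
    pids="$pids $!"
done < vars.txt
for pid in $pids; do
    wait "$pid"
done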
