How to run some command before or after every Bash command entered from console?

I want to run a command, for example
echo "foobar";
after each command entered by the user.
Two scenarios:
1. When the user enters a command, my global command should be executed first, and then his command.
2. When the user enters a command, his command should be executed first, and then my global command.
How can I accomplish these two scenarios?
NB: I don't want to use the prompt for this purpose (leave the PS1 variable as is).

As l0b0 suggests, you can use PROMPT_COMMAND to do your second request and you won't have to touch PS1.
To do your first request, you can trap the DEBUG pseudo-signal, which Bash raises before each simple command:
trap 'echo "foobar"' DEBUG

For the second part you could use declare -r PROMPT_COMMAND="echo 'foobar'": it is executed just before the prompt is displayed. Beware that it will not run for each command in, for example, a pipe or command group.
Beware that any solution to this has the potential to mess things up for the user, so you should ideally only call commands which do not output anything (otherwise any output handling is virtually impossible) and which are not available to the user (to avoid them faking or corrupting the output).
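Putting both together, a minimal sketch for an interactive session (the echoed strings are placeholders):

# Scenario 1: the DEBUG trap fires before each command the user enters
trap 'echo "before: foobar"' DEBUG
# Scenario 2: PROMPT_COMMAND runs after the command finishes, just before the next prompt
PROMPT_COMMAND='echo "after: foobar"'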

Execute command that results from execution of a script whose name is in a variable

When posting this question originally, I totally misworded it, obtaining another, reasonable but different question, which was correctly answered here.
The following is the correct version of the question I originally wanted to ask.
In one of my Bash scripts, there's a point where I have a variable SCRIPT which contains the /path/to/an/exe which, when executed, outputs a line to be executed.
What my script ultimately needs to do is execute that line. Therefore the last line of the script is
$($SCRIPT)
so that $SCRIPT is expanded to /path/to/an/exe, and $(/path/to/an/exe) executes the executable and gives back the line to be executed, which is then executed.
However, running shellcheck on the script generates this error:
In setscreens.sh line 7:
$($SCRIPT)
^--------^ SC2091: Remove surrounding $() to avoid executing output.
For more information:
https://www.shellcheck.net/wiki/SC2091 -- Remove surrounding $() to avoid e...
Is there a way I can rewrite that $($SCRIPT) in a more appropriate way? eval does not seem to be of much help here.
If the script outputs a shell command line to execute, the correct way to do that is:
eval "$("$SCRIPT")"
$($SCRIPT) would only happen to work if the command can be completely evaluated using nothing but word splitting and pathname expansion, which is generally a rare situation. If the program instead outputs e.g. grep "Hello World" or cmd > file.txt then you will need eval or equivalent.
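A self-contained illustration (the generator script and file names are made up for this sketch):

cat > /tmp/gen-cmd <<'EOF'
#!/bin/sh
echo 'grep "Hello World" log.txt'
EOF
chmod +x /tmp/gen-cmd
SCRIPT=/tmp/gen-cmd
echo 'Hello World' > log.txt

$($SCRIPT)            # fails: grep receives the literal words '"Hello' and 'World"'
eval "$("$SCRIPT")"   # works: the line is re-parsed, so the quotes group the pattern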
You can also place the command to be executed in the shell's positional parameters. Capture the script's output and let the shell split it into words:
set -- $("$SCRIPT")
and now run the result of that expansion by executing the positional parameters:
"$@"
This works in case your output from SCRIPT contains multiple words, e.g. custom flags that need to be passed. Since this is run in your current interactive shell, ensure the command to be run is not vulnerable to code injection. You could take one step of caution and run your command within a sub-shell, so that your parent environment is not affected: ( "$@" )
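For example (assuming a hypothetical generator whose output is ls -l -a):

set -- $("$SCRIPT")   # word splitting yields $1=ls, $2=-l, $3=-a
"$@"                  # runs: ls -l -a

Note that, like plain $($SCRIPT), this relies on word splitting alone: quotes inside the output are not honored, so prefer eval for command lines that contain quoting or redirection.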
Or use a shellcheck disable=SCnnnn directive to silence the warning, and take the occasion to document the intent explicitly, rather than evade detection by hiding behind an intermediate variable or arguments array.
#!/usr/bin/env bash
# shellcheck disable=SC2091 # Intentional execution of the output
"$("$SCRIPT")"
Disabling shellcheck with a comment clarifies the intent and makes it explicit that the questionable code is not an error, but an informed design choice.
You can do it in two steps:
command_from_SCRIPT=$($SCRIPT)
$command_from_SCRIPT
and shellcheck no longer reports SC2091.

Copy output of commands to a file but make them believe they are writing in a terminal

To copy the output of my commands launched from a shell I use
exec > >(tee myfile)
and then the next commands will be logged into the file.
The problem is that the commands detect the output is not a terminal anymore, so they can change how they display. For instance, with the command ls, when the redirection is on the output is displayed in only one column.
I know I can use unbuffer when I use a pipe, but it is not what I want. I want to be able to log all the outputs I have from my shell.
You can use script, which copies all output to a file (usually typescript). It does not interfere with the program, allowing it to think it is writing to the terminal.
The program is available "everywhere", though some options differ:
script(1) Linux
script(1) OSX
The main difference that I encounter is how to specify the output filename and the command. With Linux you can give a command as an option, while in OSX the command consists of the argument(s) past the filename. When using the -c option on Linux, keep in mind that script runs this using the shell identified by the SHELL environment variable. That can actually be "any" program (I've used a text editor). Running a shell to execute a command means that it may use new environment variables (normally not a problem).
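For example, to log a single ls run (the output file name is arbitrary):

script -c 'ls' myfile    # Linux (util-linux): the command is passed with -c
script myfile ls         # OSX/BSD: the command follows the output file name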
If you do not use the -c option, script starts a new shell, writing everything to its output until you exit from that shell. To use it as you were doing for redirection, you could make an alias like
alias redir='script myfile'
to write to myfile, or
alias redir='script -a myfile'
to append to myfile. In either case, exiting the shell (press Control-D, or type exit) will end the "redirection".
Aside from ls (which ignores the terminal database), most programs use the TERM environment variable. It is possible that you do something unusual in initializing your shell, so that running script would reinitialize TERM to a different value than you are currently using. To see this, you could do something like
env >before.log
script -c "env >after.log"
diff before.log after.log

Passing values that the commands in a shell script are going to expect

I am writing a shell script which has a bunch of commands, out of which some commands expect user input. For instance, if I want to push my changes to git, I use the command git push, which then asks me for the username and password. A traditional shell script halts for user input when it expects username. What I want to do is, pass this username and password as a command line argument, and the command should pick up that value.
Please note:
1. The above question is not in context to git; I have just used it as an example. I understand that I can always store my credentials in the configuration file specific to git, and then it won't prompt me for the credentials. I only want to know the technique of how such a thing is done.
2. Once again, in regards to the example above, security is not a concern. Currently, I am not concerned about passing the password as clear text from the command line.
I tried to google it, but didn't get satisfactory results. Any ideas on this will be helpful.
You need to use the expect command to send the password.
Refer to this answer:
using expect in bash script
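A minimal sketch driving git push from Bash with a here-document (the patterns "Username" and "Password" are assumptions; match them to whatever the real command actually prints):

expect <<'EOF'
spawn git push
expect "Username"
send "myuser\r"
expect "Password"
send "mypassword\r"
expect eof
EOF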
Command-line parameters in shell are referenced by $n, where n is the number of the parameter, starting with $0, which is the name of the script. For instance, the following script:
#!/bin/bash
echo $0 #name of the script
echo $1 #the first parameter
echo $2 #the second parameter
will print the name of the script and the first two parameters.
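For example (the script name and arguments are illustrative):

$ bash params.sh myuser mypassword
params.sh
myuser
mypassword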

Run simple programme using unix shell

I'm new to Unix and to developing on it. In my new.sh script I wrote
$USERNAME=user
$PASSWORD=sekrit
echo $USERNAME
and ran new.sh using bash new.sh
But I get the following errors
new.sh: line 1: =user: command not found
new.sh: line 2: =sekrit: command not found
How do I run that command and print the username variable in terminal?
USERNAME is the name of the variable. $USERNAME is the replacement (aka contents, aka value). Since USERNAME is empty, you effectively try to run a command named =user, which is what the error message tells you.
Remove the $ from $USERNAME=... and it will work.
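For completeness, the corrected script:

USERNAME=user
PASSWORD=sekrit
echo "$USERNAME"

(Quoting $USERNAME in the echo is not required here, but it is a good habit that prevents word splitting.)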
As Jens notes in his answer, the problem is that an assignment to a variable is not prefixed with a $, so:
USERNAME=user
PASSWORD=sekrit
is the way to write what you wanted. You got an error because USERNAME was not set, so after expansion, the shell looked at the command as:
=user
=sekrit
and it could not find such commands on the system (not very surprisingly). However, be aware that if you have previously written:
USERNAME=archipelago
PASSWORD=anchovy
then the lines:
$USERNAME=user
$PASSWORD=sekrit
would expand to the words:
archipelago=user
anchovy=sekrit
Even then, Bash would not treat these expanded words as assignments: assignments are recognized before any expansion takes place, so the words are looked up as command names and you would get archipelago=user: command not found. You can confirm that nothing was assigned by running set with no arguments, which shows the values of all the variables set in the shell.
For all practical purposes, don't write a $ on the left-hand side of a variable assignment in shell.

Automatically run a program if another program returns an error

I have a program (grabface) that takes a picture of the face of a person using a webcam, and I also have a shell script wrapper that works like this:
On the command line the user gives the script the name of a program to run and its command line arguments. The script then executes the given command and checks the exit code. If there was an error the program grabface is run to capture the surprised face of the user.
This all works quite well. But the problem is that the wrapper script must always be used. Is there some way to automatically run this script whenever a command is entered in the shell? Or is there some other way to automatically run a given program after any program is run?
Preferably the solution should work in bash, but any other shell is also OK. I realize this could be accomplished by simply making some adjustments in the source code of the shell, but that's kind of a last resort.
Something that is probably even trickier would be to extend this to work with programs launched outside of the shell as well (e.g. from a desktop environment) but this may be too difficult.
Edit: Awesome! Since bash was so easy, what about other shells?
In Bash, you can use the trap command with an argument of ERR to execute a specified command whenever an executed command returns non-zero.
$ trap "echo 'there was an error'" ERR
$ touch ./can_touch
$ touch ./asfdsafds/fdsafsdaf/fdsafdsa/fdsafdasfdsa/fdsa
touch: cannot touch `./asfdsafds/fdsafsdaf/fdsafdsa/fdsafdasfdsa/fdsa': No such file or directory
there was an error
trap affects the whole session, so you'll need to make sure that trap is called at the beginning of the session by putting it in .bashrc or .profile.
Other special trap signals that Bash understands are: DEBUG, RETURN and EXIT as well as all the system signals (which can be listed using trap -l).
The Korn shell has a similar facility, while the Z shell has a more extensive trap capability.
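In Z shell, for example, the corresponding hook is a TRAPZERR function, called whenever a command exits with non-zero status (a sketch; put it in ~/.zshrc):

TRAPZERR() {
  echo 'there was an error'
}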
By the way, in some cases for the command line, it can be useful in Bash to set the PROMPT_COMMAND variable to a script or command that will be run each time the prompt is issued.
Just substitute your command where I have false.
false || echo "It failed"
If you want to do the opposite, i.e. run something when the command succeeds, just put your command instead of true:
true && echo "It succeeded"
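Applied to the question (assuming grabface is on your PATH; some_command stands for whatever the user runs):

some_command || grabface   # grabface runs only when some_command fails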
In the .profile of the user add:
trap grabface ERR
