Bash interactive and non-interactive shell behaviour

I have a hard time with interactive and non-interactive shells. I don't understand which is which.
For example, I have read that non-interactive shells usually check the BASH_ENV variable on startup and execute whatever it points to.
So what I did was set BASH_ENV to point to a script that only echoes OK. Then I typed bash in a terminal and the script echoed OK. But why? Didn't I start yet another INTERACTIVE shell by typing bash in the terminal, not the other way around? Why did it execute BASH_ENV? I'm on Linux Mint Maya.

The only thing you can be certain of is what's shown in the manpage for bash (see INVOCATION) - that lists in detail which startup files are run in each case.
However, there's nothing stopping (for example) one of those startup files from running other files which would normally not be run.
By way of example, if .bash_profile had the following line:
. ~/.profile
it would also run the .profile script.
In fact the manpage states:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
So, if you put that exact line in your startup scripts for an interactive shell like ~/.bash_profile, you'll also source the file pointed to by BASH_ENV.
Your best bet is to examine the INVOCATION section to find out which of the files will run, and then track through them (with something like set -x at the top of the script) to see what's getting called from where.
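If it helps, here is a quick check you can paste into any shell to see whether it is interactive (the standard $- variable contains i only in interactive shells), plus the kind of tracing line I would drop at the top of a suspect startup file:

# Is the current shell interactive? $- contains "i" only if it is.
case $- in
  *i*) echo "interactive shell" ;;
  *)   echo "non-interactive shell" ;;
esac

# At the top of a startup file, this makes the set -x trace show which
# file and line each command comes from:
PS4='+ ${BASH_SOURCE}:${LINENO}: '
set -x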

If memory serves, Bash is only interactive if you tell it to be, for example:
bash -i
So, by calling just bash, you invoked a non-interactive Bash.
More info
-i
If the -i option is present, the shell is interactive.
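A quick way to see the effect of the flag (a small sketch; the exact set of flag letters printed will vary) is to check $-, which contains i only in an interactive shell:

echo 'echo $-' | bash       # output has no "i": non-interactive
echo 'echo $-' | bash -i    # output contains "i": interactive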

AppleScript do shell script returns error for "which" command

I'm writing an AppleScript that will ask a user which remote cloud service and then which bucket they would like to mount in Mac OS using rclone. But in order to run the rclone command in an AppleScript, you need to include the entire path to the app. For me that is: /usr/local/bin/rclone
I want to include, as a variable, the location of rclone using the which command in a shell script like this:
set rcloneLOC to paragraphs of (do shell script "which rclone")
But I get a script error stating "The command exited with a non-zero status." This happens even if I just try to run do shell script "which rclone" by itself. If I type which rclone into terminal, I get the result I expect.
How do I get this to work?
As @GordonDavisson suggests, you can view your path using echo $PATH.
To change your applescript's path (and view the change) try this:
do shell script "export PATH=/usr/local/bin:$PATH ; echo $PATH"
The first part of the shell command (up to the semi-colon) will prepend /usr/local/bin to your default path. The second part will return your updated path. The semi-colon has the second part run after the first part is finished.
It's important to note that this change is temporary: it is only in effect for this shell script, and only while it is running. This is why you need the combined commands in order to see the effect.
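You can see the same temporariness from Terminal by running two separate shells (sh is used here because do shell script runs its command with sh):

sh -c 'export PATH=/usr/local/bin:$PATH; echo $PATH'   # modified PATH, this shell only
sh -c 'echo $PATH'                                     # a new shell: back to the default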
I'll use rsync as an example since I don't have rclone; substitute rclone to get its path. Combine the export command with which, like so:
do shell script "export PATH=/usr/local/bin:$PATH ; which rsync"
The result is /usr/local/bin/rsync.
To clarify a couple of things… the environment is a set of conditions that apply for each user. You can get a basic rundown of it by running man 7 environ in Terminal. There is an env command which lists your settings and can run a command with a modified environment; man env will provide info on it. At the bottom of these man pages, you should see references to related commands which you can also look up. Meanwhile, from within Script Editor, you could run a 1-line script with do shell script "env" and see the corresponding environment for AppleScript's shell.
Based on Apple's documentation (or my interpretation of it), they chose this setup because it is relatively secure and portable. You know what you get every time you run a shell script. You don't need to use a more modern shell to run the which command. You can modify the environment as needed, the same way you would while using the terminal.
Finally, Apple has provided Technical Note 2065, which covers using shell scripts with AppleScript. Also, you can likely get more info here or on the Unix & Linux Stack Exchange.
NB All of the above is just my understanding, which is limited.

How to run a shell script that includes a "cd" command in Ubuntu?

I am trying to execute a shell script to automate the process rather than manually running the Python script, but I am getting the error "Folder not found!".
cd /home/gaurav/AndroPyTool
export ANDROID_HOME=$HOME/android-sdk-linux/
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/platform-tools
source ~/.bashrc
source droidbox_env/bin/activate
alias mycd='cd /home/gaurav/AndroPyTool/test'
mycd
pwd
python androPyTool.py -all -csv EXPORTCSV.csv -s mycd
>>>> AndroPyTool -- STEP 1: Filtering apks
Folder not found!
This is the error I am getting because the script is not able to find the path that I have provided above.
The part after "-s" in the command is the folder path where the files are stored.
The issue here is that you are not passing the path to the Python program. The Python program is not aware of bash aliases, and bash only expands aliases when it is interpreting the token as a command.
When bash reads python androPyTool.py -all -csv EXPORTCSV.csv -s mycd, it interprets python as the command, and all other space-separated tokens are arguments that are passed to python. Python then invokes androPyTool.py and passes the subsequent arguments to that script, so the program receives the literal string mycd as the argument for -s.
Moreover, even if mycd were expanded, it wouldn't be the correct argument for -s: androPyTool.py expects just /path/to/apks, not cd /path/to/apks/.
I don't really think that using an alias in this script makes much sense; it actually makes the script harder to read and understand. If you want to wrap a command, I recommend defining a function, and occasionally you can use variable expansion (though that mixes code and data, which can lead to issues). EDIT: As has been pointed out in the comments, aliases are disabled in scripts anyway.
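For example, a sketch of how the relevant part of your script might look without the alias (paths copied from your question; adjust as needed):

#!/bin/bash
# Keep the target directory in a variable and pass it straight to -s.
apk_dir=/home/gaurav/AndroPyTool/test

cd /home/gaurav/AndroPyTool || exit 1
source droidbox_env/bin/activate
python androPyTool.py -all -csv EXPORTCSV.csv -s "$apk_dir"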
Finally, there are some other suspicious issues with your script. Mainly, why are you sourcing .bashrc? If this script is run in your user's environment, .bashrc will already have been sourced and there is no need to source it again. On the other hand, if the script is not intended to run in your environment and there is something in .bashrc that you need, I recommend pulling out just that and nothing else.
The most immediate issue I can see is that sourcing .bashrc after you modify PATH risks overwriting the changes you just made. Depending on the contents of .bashrc, sourcing it may not be idempotent, meaning that running it more than once could have side effects. Finally, anything could get thrown into that .bashrc file down the road, since that's what it's for, so your script may come to depend on something that is likely to change. This opens up the possibility that bugs will creep into your script unexpectedly.

Use of a pipe prevents the left-hand process from exporting variables. Why?

I have the following one-line bash file foo.sh:
export PATH=<new path>
In another script, I use:
echo $PATH # --> old path
. foo.sh | grep bar
echo $PATH # --> old path!!!!
Depending on the machine I execute this second script on, PATH is or is not updated in the main script. On the machines where it does not work, it still fails no matter which command is to the right of the pipe. Conversely, if I drop the pipe, it always works, whatever the machine.
My machines are supposed to have exactly the same configuration (even though, considering this issue, it looks as if they don't). The Bash version is 4.1.2.
Do you have any idea where/what to look to understand this behaviour?
In bash, all parts of a pipeline are executed in separate subshells, which is why sourcing the script doesn't change the path.
Some shells are able to run the last command in the current shell environment (ksh93, for example), but bash does not (unless job control is disabled and the lastpipe shell option is enabled, and the pipeline is not executed in the background).
The bash manual states, in the "Pipelines" section,
Each command in a pipeline is executed as a separate process (i.e., in a subshell).
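As a sketch (using the foo.sh from the question), two ways to keep the export in the current shell:

. ./foo.sh                  # no pipe: the new PATH survives
echo "$PATH"

# or filter the output with process substitution, so the source itself
# still runs in the current shell and only grep runs in another process:
. ./foo.sh > >(grep bar)
echo "$PATH"                # also updated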

How can I get the list of all files that are sourced by bash?

Is there a way to find out all the files that are sourced by bash?
Alternately, is there a single point of entry (or a first point of entry) where I can go to follow and find this information by adding a set -x at the top?
(By single point of entry, I do not mean ~/.bashrc or ~/.bash_profile because some other file higher in the source chain tells bash to load these above files in the first place).
Reviving this question because there is an automation for this:
Execute bash and carve the list of sourced files out of its trace output: -l and -i make it a login, interactive shell, -x prints out what bash is doing internally, and -c exit just tells bash to terminate immediately. sed then keeps only the lines that source a file (via source or its . synonym) and prints the file names.
/bin/bash -lixc exit 2>&1 | sed -n 's/^+* \(source\|\.\) //p'
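A variation I would expect to work for a non-login interactive shell (dropping -l, so bash reads /etc/bash.bashrc and ~/.bashrc instead of the profile files):

/bin/bash -ixc exit 2>&1 | sed -n 's/^+* \(source\|\.\) //p'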
There is no easy catch-all answer here; it depends on the combination of login/interactive attributes.
A login shell will source /etc/profile, and only the first one it finds among ~/.bash_profile, ~/.bash_login, and ~/.profile. You could call these independent points of entry: /etc/profile doesn't need to explicitly source one of the others, it's bash that does it.
For non-login interactive, you have /etc/bash.bashrc and ~/.bashrc, again independent.
For non-login non-interactive, the single point of entry is $BASH_ENV, if defined.
You can find the official description at the GNU Bash manual under Bash startup files.
There are several places where the process of loading the startup files starts.
This table summarizes them:
Interactive login       /etc/profile
Interactive non-login   /etc/bash.bashrc
Script                  $BASH_ENV
Here "login" means either an interactive login shell or a non-interactive shell called with the --login option. From man bash:
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists.
However, the most common way to get a login shell is su - $USER, which uses a - as the first character of the invoked command name (rather than passing --login).
That is the default behaviour.
Nothing prevents you from editing those files and adding to /etc/profile something like:
if [ -f /etc/bash.bashrc ]; then
source /etc/bash.bashrc
fi
which ensures that /etc/bash.bashrc is sourced for all interactive shells.
Care should be taken to avoid duplicating variables or actions (sourced in both files). Defining a variable and checking whether it has already been set before performing those actions makes this process more reliable.
The starting point for scripts, from the point of view of bash, is the variable $BASH_ENV which has to be set in the environment before bash is called. That expands the search to other shells or programs that may call bash. There is no single definitive solution in this case, only what is the usual practice.
The usual practice is to not use $BASH_ENV at all, so bash would start with all the compiled-in options only.
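A minimal demonstration of that mechanism, using a throwaway file under /tmp (the path is just for the example):

cat > /tmp/bashenv_demo.sh <<'EOF'
echo "sourced via BASH_ENV"
EOF

BASH_ENV=/tmp/bashenv_demo.sh bash -c 'echo "running the actual command"'
# prints "sourced via BASH_ENV" first, then "running the actual command"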

Copy the output of commands to a file but make them believe they are writing to a terminal

To copy the output of my commands launched from a shell I use
exec > >(tee myfile)
and then the next commands will be logged into the file.
The problem is that the commands know the output is no longer a terminal, so they can change how they display. For instance, with the redirection on, the output of ls is displayed in only one column.
I know I can use unbuffer when I use a pipe, but that is not what I want. I want to be able to log all the output from my shell.
You can use script, which copies all output to a file (usually typescript). It does not interfere with the program, allowing it to think it is writing to the terminal.
The program is available "everywhere", though some options differ:
script(1) Linux
script(1) OSX
The main difference that I encounter is how to specify the output filename and the command. With Linux you can give a command as an option, while on OSX the command consists of the argument(s) after the filename. When using the -c option on Linux, keep in mind that script runs the command using the shell identified by the SHELL environment variable. That can actually be "any" program (I've used a text editor). Running a shell to execute a command means that it may introduce new environment variables (normally not a problem).
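For example (the syntax as I read the two man pages; check your local one):

script -c 'ls' myfile     # Linux (util-linux): the command is given with -c
script myfile ls          # OSX/BSD: the command follows the output filename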
If you do not use the -c option, script starts a new shell, writing everything to its output until you exit from that shell. To use it as you were doing for redirection, you could make an alias like
alias redir='script myfile'
to write to myfile, or
alias redir='script -a myfile'
to append to myfile. In either case, exiting the shell (press Control-D, or type exit) will end the "redirection".
Aside from ls (which ignores the terminal database), most programs use the TERM environment variable. It is possible that you do something unusual when initializing your shell, so that running script reinitializes TERM to a different value than the one you are currently using. To see this, you could do something like
env >before.log
script -c "env >after.log"
diff before.log after.log
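On Linux you can also wrap a single command this way; it runs on a pseudo-terminal, so (for instance) ls keeps its multi-column layout (flags as I understand the util-linux version):

script -q -c "ls" myfile    # -q suppresses the start/done messages; output is copied to myfile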
