Dockerfile doesn't source .bashrc even in a single subshell - bash

I'm trying to source .bashrc in a Dockerfile, but with no luck:
USER user
SHELL ["/bin/bash", "-c"]
RUN echo "export TEST_VAR=test" >> /home/user/.bashrc && tail /home/user/.bashrc && source /home/user/.bashrc && echo "1 \"${TEST_VAR} 2\" var" && exit 1
I expect this RUN command to print 1 "test" 2, but what I get is this:
Step 13/40 : RUN echo "export TEST_VAR=test" >> /home/user/.bashrc && tail /home/user/.bashrc && source /home/user/.bashrc && echo "1 \"${TEST_VAR}\" 2" && exit 1
---> Running in b870d36e9dd0
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi
export TEST_VAR=test
1 "" 2
What's wrong with how Docker handles shells? I just wanted to source ~/.bashrc once and use all of the exported variables in the subsequent commands after the source call, but it doesn't even work within a single RUN command chained with &&.

Usually ~/.bashrc contains something similar to:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
That is very normal: .bashrc is meant for interactive sessions only. Because RUN is non-interactive, the return fires and the rest of the file (including your export line) is never reached.
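You can see the non-interactive case directly by checking $- (a quick check you can run yourself; the exact flag letters vary between bash builds):
bash -c 'echo $-'     # no "i" in the output, so the guard above returns immediately
bash -i -c 'echo $-'  # forcing -i adds "i", and the rest of .bashrc would run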
Aaaanyway, if you only want to add environment variables, I would recommend writing them to a file under /etc/profile.d and then sourcing /etc/profile.
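For example, a minimal sketch of that approach in a Dockerfile, assuming an image whose /etc/profile sources /etc/profile.d/*.sh (Debian and Ubuntu do) and using test_var.sh as an arbitrary file name. Run these while still root, before any USER instruction, and note that each RUN starts a fresh shell, so /etc/profile has to be sourced in every RUN that needs the variable:
SHELL ["/bin/bash", "-c"]
RUN echo 'export TEST_VAR=test' > /etc/profile.d/test_var.sh
RUN . /etc/profile && echo "1 \"${TEST_VAR}\" 2"   # prints 1 "test" 2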

Most paths in Docker don't read shell dotfiles at all. You need to use other approaches to provide configuration to your application; for example, Dockerfile ENV to set environment variables or an entrypoint wrapper script if you need things to be set up dynamically before starting the container.
Let's look specifically at a reduced form of your example:
SHELL ["/bin/bash", "-c"]
RUN echo "export TEST_VAR=test" >> $HOME/.bashrc
RUN echo "$TEST_VAR"
Bash Startup Files in the GNU Bash manual lists which dotfiles are read in which case. For the last line, Docker combines the SHELL and RUN lines to run the equivalent of
/bin/bash -c 'echo "$TEST_VAR"'
but the bash instance is neither an interactive nor a login shell, so the only dotfile that's automatically read is one named in a $BASH_ENV environment variable. (POSIX sh doesn't specify anything about any shell dotfiles at all.)
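If you do want a file to be read automatically by these non-interactive shells, $BASH_ENV is the hook. A sketch, assuming SHELL is bash and using /home/user/.docker_env as an arbitrary file name:
SHELL ["/bin/bash", "-c"]
ENV BASH_ENV=/home/user/.docker_env
RUN echo 'export TEST_VAR=test' >> /home/user/.docker_env
RUN echo "$TEST_VAR"   # bash sources $BASH_ENV before running the command, so this prints test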
This further applies to the image's default CMD, which also will get run with sh -c (or the alternate SHELL) and it won't read dotfiles. If the CMD (or ENTRYPOINT or RUN) uses JSON-array syntax, it won't invoke a shell at all, and again won't read dotfiles.
The only case where shell dotfiles will be read is if the main container command is an interactive shell, and this won't typically be the common case.
docker run --rm -it yourimage /bin/bash # reads .bashrc
docker run --rm -it yourimage /bin/bash --login # a login shell also reads /etc/profile and the first of ~/.bash_profile, ~/.bash_login, ~/.profile
This means you should almost never try to edit the .bashrc, /etc/profile, or any similar files. If you need to set environment variables as in the example, use Dockerfile ENV instead.
ENV TEST_VAR=test
RUN echo "$TEST_VAR"

Related

/etc/profile does not appear to run

I'm on OS X, and according to man bash, /etc/profile is the system-wide configuration file for the bash shell.
For testing purposes I opened this file up:
open_profile(){
sudo chmod 777 /etc/profile
}
and added an echo so that I can see if it is actually running.
It is not:
# System-wide .profile for sh(1)
echo "test_global"
if [ -x /usr/libexec/path_helper ]; then
eval `/usr/libexec/path_helper -s`
fi
if [ "${BASH-no}" != "no" ]; then
[ -r /etc/bashrc ] && . /etc/bashrc
fi
I do not see the echo when I open up a bash shell.
/etc/profile is only invoked for login shells. To force a login shell instead of a non-login shell, add --login:
bash --login
When Bash is invoked as a login shell, or as a non-login shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable.
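A quick way to confirm the difference, assuming the echo "test_global" line added above is still in /etc/profile:
bash           # non-login shell: "test_global" is not printed
bash --login   # login shell: reads /etc/profile, so "test_global" appears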

bash: parse_git_branch: command not found

This should be very simple.
I recently noticed that when I type 'bash' into Terminal on Mac it shows this:
Jays-MacBook-Pro: ~ $ bash
bash: parse_git_branch: command not found
It didn't do this before. Can someone explain why, and how to resolve it?
It is likely that you configured bash to run parse_git_branch and print the result as part of PS1 (or similar). You can check this with echo $PS1 and echo $PROMPT_COMMAND.
However, parse_git_branch is not a bash builtin. Below is how I configured my PS1; you may want to copy my git_branch_4_ps1 as your parse_git_branch.
PS1='\n' # begin with a newline
PS1=$PS1'\[\e[38;5;101m\]\! \t ' # time and command history number
PS1=$PS1'\[\e[38;5;106m\]\u#\h ' # user#host
PS1=$PS1'\[\e[7;35m\]${MY_WARN}\[\e[0m\] ' # warning message if there is any
PS1=$PS1'\[\e[38;5;10m\]${MY_EXTRA} ' # extra info if there is any
PS1=$PS1'\[\e[0;36m\]$(git_branch_4_ps1) ' # git_branch_4_ps1 defined below
PS1=$PS1'\[\e[38;5;33m\]\w' # working directory
PS1=$PS1'\n\[\e[32m\]\$ ' # "$"/"#" sign on a new line
PS1=$PS1'\[\e[0m\]' # restore to default color
function git_branch_4_ps1 { # get git branch of pwd
local branch="$(git branch 2>/dev/null | grep "\*" | colrm 1 2)"
if [ -n "$branch" ]; then
echo "(git: $branch)"
fi
}
If your parse_git_branch is defined in ~/.bash_profile, it will not be loaded when you open a non-login shell (e.g. by running bash).
The differences between login and non-login shells are described here: Difference between Login Shell and Non-Login Shell? For our purposes, the main difference is that login shells (e.g. that when you first open Terminal) automatically source ~/.bash_profile upon startup, whereas non-login shells (e.g. that when you run bash from within Terminal) do not.
To fix this error, simply source your ~/.bash_profile after running bash:
user@host:~ $ bash
bash: parse_git_branch: command not found
user@host:~ $ source .bash_profile
Alternatively, place the function in ~/.bashrc instead, which will be automatically sourced by non-login shells (as covered in the earlier link).
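For reference, a typical definition you could place in ~/.bashrc looks like this (a sketch; your original parse_git_branch may differ):
parse_git_branch() {
  # print the current branch name, or nothing when outside a git repository
  git branch 2>/dev/null | sed -n 's/^\* //p'
}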
Instead of having
parse_git_branch
alone in the PS1 definition, you can use
parse_git_branch 2>/dev/null
to send stderr to /dev/null. This silences the error you don't want to see.
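In the PS1 definition that would look something like this (a sketch with a simplified prompt):
PS1='\u@\h \w $(parse_git_branch 2>/dev/null)\$ '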
Have you exported your $PS1? You can check by running:
printenv
If PS1 shows up there, remove the export attribute by running:
export -n PS1
After that you will be able to run sudo or sudo su without problems.
The key to this is to NOT export PS1. If it's exported, then any non-login shell also takes PS1. Since .bash_profile is automatically source'd by the login shell, the PS1 variable only affects the login shell.

Running system command under interactive bash shell

I am trying to run a command that has been aliased in my ~/.bashrc from Perl, using the system command. It works well when running the command only once, but when I run it twice, the second invocation is run as a background job and then suspended (the same as pressing <CTRL-Z>), and I have to type fg to complete the command. For example:
use strict;
use warnings;
system ('bash -ic "my_cmd"');
system ('bash -ic "my_cmd"');
The second call never completes. The output is [1]+ Stopped a.pl.
Note:
The same result is obtained when replacing my_cmd with any other command, for example ls.
It seems not to depend on the contents of my ~/.bashrc file. I tried removing everything from it, and the problem still persisted.
I am using Ubuntu 14.04 and Perl version 5.18.2.
Update
For debugging I reduced my ~/.bashrc to
echo "Entering ~/.bashrc .."
alias my_cmd="ls"
alias
and my ~/.bash_profile to
if [ -f ~/.bashrc ]; then
echo "Entering ~/.bash_profile .."
. ~/.bashrc
fi
Now running:
system ('bash -lc "my_cmd"');
system ('bash -lc "my_cmd"');
gives
Entering ~/.bash_profile ..
Entering ~/.bashrc ..
alias my_cmd='ls'
bash: my_cmd: command not found
Entering ~/.bash_profile ..
Entering ~/.bashrc ..
alias my_cmd='ls'
bash: my_cmd: command not found
and running
system ('bash -ic "my_cmd"');
system ('bash -ic "my_cmd"');
gives
Entering ~/.bashrc ..
alias my_cmd='ls'
a.pl p.sh
[1]+ Stopped a.pl
Rather than using the -i switch for an interactive shell, I think you should use the -l (or --login) switch, which causes bash to act as if it had been invoked as a login shell.
Using the -l switch doesn't load ~/.bashrc by default. According to man bash, in a login shell, /etc/profile is loaded, followed by the first file found from ~/.bash_profile, ~/.bash_login, or ~/.profile. On my system, I have the following in ~/.bash_profile, so ~/.bashrc is loaded:
# Source .bashrc
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
Now that your ~/.bashrc is being loaded, you need to enable the expansion of aliases, which is off in a non-interactive shell. To do this, you can add the following line before setting your aliases:
shopt -s expand_aliases
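Putting the two pieces together, a sketch of what the shell side and the Perl side would look like, assuming the alias lives in ~/.bashrc, ~/.bash_profile sources it as shown above, and ~/.bashrc does not bail out early for non-interactive shells (see the later note about commenting out the interactive guard):
# ~/.bashrc
shopt -s expand_aliases    # alias expansion is off in non-interactive shells without this
alias my_cmd="ls"

# Perl script
system('bash -lc "my_cmd"');   # -l: login shell, so ~/.bash_profile (and thus ~/.bashrc) is read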
A process stopping unexpectedly (aside from Ctrl-Z) usually means it needs STDIN but doesn't have it attached.
Try it with, for example, passwd &. It will go to the background and straight into the 'stopped' state. This may well be what's happening with your bash command: -i explicitly requests an interactive shell, and you're trying to do something non-interactive with it.
That's almost certainly not the best approach, you probably want to do something different. bash --login might be closer to what you're after.
Tom Fenech's answer worked for me on Ubuntu 16.04.1 LTS with a small addition. At the top of my ~/.bashrc file, I commented out the following section so that ~/.bashrc still runs to completion when the shell is not interactive (e.g. a login shell started with bash -lc). On some other versions of Linux I don't see this section.
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac

How to invoke bash, run commands inside the new shell, and then give control back to user?

This must either be really simple or really complex, but I couldn't find anything about it... I am trying to open a new bash instance, then run a few commands inside it, and give the control back to the user inside that same instance.
I tried:
$ bash -lic "some_command"
but this executes some_command inside the new instance, then closes it. I want it to stay open.
One more detail which might affect answers: if I can get this to work I will use it in my .bashrc as alias(es), so bonus points for an alias implementation!
bash --rcfile <(echo '. ~/.bashrc; some_command')
dispenses with the creation of temporary files. Similar questions on other sites:
https://serverfault.com/questions/368054/run-an-interactive-bash-subshell-with-initial-commands-without-returning-to-the
https://unix.stackexchange.com/questions/123103/how-to-keep-bash-running-after-command-execution
This is a late answer, but I had the exact same problem and Google sent me to this page, so for completeness here is how I got around the problem.
As far as I can tell, bash does not have an option to do what the original poster wanted to do. The -c option will always return after the commands have been executed.
Broken solution: The simplest and obvious attempt around this is:
bash -c 'XXXX ; bash'
This partly works (albeit with an extra sub-shell layer). However, the problem is that while a sub-shell will inherit the exported environment variables, aliases and functions are not inherited. So this might work for some things but isn't a general solution.
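A quick way to see that limitation (typical output; the exact error wording varies between bash versions):
alias ll='ls -la'
bash -c 'll'     # bash: ll: command not found -- aliases are not inherited by the child shell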
Better: The way around this is to dynamically create a startup file and call bash with this new initialization file, making sure that your new init file calls your regular ~/.bashrc if necessary.
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
echo "source ~/.bashrc" > $TMPFILE
echo "<other commands>" >> $TMPFILE
echo "rm -f $TMPFILE" >> $TMPFILE
# Start the new bash shell
bash --rcfile $TMPFILE
The nice thing is that the temporary init file will delete itself as soon as it is used, reducing the risk that it is not cleaned up correctly.
Note: I'm not sure if /etc/bashrc is usually called as part of a normal non-login shell. If so you might want to source /etc/bashrc as well as your ~/.bashrc.
You can pass --rcfile to Bash to cause it to read a file of your choice. This file will be read instead of your .bashrc. (If that's a problem, source ~/.bashrc from the other script.)
Edit: So a function to start a new shell with the stuff from ~/.more.sh would look something like:
more() { bash --rcfile ~/.more.sh ; }
... and in .more.sh you would have the commands you want to execute when the shell starts. (I suppose it would be elegant to avoid a separate startup file -- you cannot use standard input because then the shell will not be interactive, but you could create a startup file from a here document in a temporary location, then read it.)
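For example, ~/.more.sh might contain something like this (a sketch; the commands are placeholders):
# ~/.more.sh
. ~/.bashrc        # keep the normal interactive setup
cd ~/projects      # then run whatever extra startup commands you want
echo "ready"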
bash -c '<some command> ; exec /bin/bash'
will avoid the additional shell layer, since exec replaces the outer bash -c process.
You can get the functionality you want by sourcing the script instead of running it, e.g.:
$ cat script
cmd1
cmd2
$ . script
$ at this point cmd1 and cmd2 have been run inside this shell
Append to ~/.bashrc a section like this:
if [ "$subshell" = 'true' ]
then
# commands to execute only on a subshell
date
fi
alias sub='subshell=true bash'
Then you can start the subshell with sub.
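Used like this (assuming the snippet above is already in ~/.bashrc):
$ sub    # starts a child bash with subshell=true, so the date command runs before the prompt appears
$ exit   # leave the subshell and return to the original shell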
The accepted answer is really helpful! Just to add that process substitution (i.e., <(COMMAND)) is not supported in some shells (e.g., dash).
In my case, I was trying to create a custom action (basically a one-line shell script) in Thunar file manager to start a shell and activate the selected Python virtual environment. My first attempt was:
urxvt -e bash --rcfile <(echo ". $HOME/.bashrc; . %f/bin/activate;")
where %f is the path to the virtual environment handled by Thunar.
I got an error (by running Thunar from command line):
/bin/sh: 1: Syntax error: "(" unexpected
Then I realized that my sh (essentially dash) does not support process substitution.
My solution was to invoke bash at the top level to interpret the process substitution, at the expense of an extra level of shell:
bash -c 'urxvt -e bash --rcfile <(echo "source $HOME/.bashrc; source %f/bin/activate;")'
Alternatively, I tried to use here-document for dash but with no success. Something like:
echo -e " <<EOF\n. $HOME/.bashrc; . %f/bin/activate;\nEOF\n" | xargs -0 urxvt -e bash --rcfile
In accordance with the answer by daveraja, here is a bash script which will serve the purpose.
Consider a situation where you are using the C shell and you want to execute a command
without leaving the C-shell context/window, as follows:
Command to be executed: search for the exact word 'Testing' recursively in the current directory, only in *.h and *.c files
grep -nrs --color -w --include="*.{h,c}" Testing ./
Solution 1: Enter into bash from C-shell and execute the command
bash
grep -nrs --color -w --include="*.{h,c}" Testing ./
exit
Solution 2: Write the intended command into a text file and execute it using bash
echo 'grep -nrs --color -w --include="*.{h,c}" Testing ./' > tmp_file.txt
bash tmp_file.txt
Solution 3: Run command on the same line using bash
bash -c 'grep -nrs --color -w --include="*.{h,c}" Testing ./'
Solution 4: Create a script (one time) and use it for all future commands
alias ebash './execute_command_on_bash.sh'
ebash grep -nrs --color -w --include="*.{h,c}" Testing ./
The script is as follows,
#!/bin/bash
# =========================================================================
# References:
# https://stackoverflow.com/a/13343457/5409274
# https://stackoverflow.com/a/26733366/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://www.linuxquestions.org/questions/other-%2Anix-55/how-can-i-run-a-command-on-another-shell-without-changing-the-current-shell-794580/
# https://www.tldp.org/LDP/abs/html/internalvariables.html
# https://stackoverflow.com/a/4277753/5409274
# =========================================================================
# Enable the following line to see the script commands
# printed as they execute. This helps with debugging.
#set -o verbose
E_BADARGS=85
if [ ! -n "$1" ]
then
echo "Usage: `basename $0` grep -nrs --color -w --include=\"*.{h,c}\" Testing ."
echo "Usage: `basename $0` find . -name \"*.txt\""
exit $E_BADARGS
fi
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
#echo "echo Hello World...." >> $TMPFILE
#initialize the variable that will contain the whole argument string
argList=""
#iterate on each argument
for arg in "$@"
do
#if an argument contains a white space, enclose it in double quotes and append to the list
#otherwise simply append the argument to the list
if echo $arg | grep -q " "; then
argList="$argList \"$arg\""
else
argList="$argList $arg"
fi
done
#remove a possible leading space from the beginning of the list
argList=$(echo $argList | sed 's/^ *//')
# Echoing the command to be executed to tmp file
echo "$argList" >> $TMPFILE
# Note: This should be your last command
# Important last command which deletes the tmp file
last_command="rm -f $TMPFILE"
echo "$last_command" >> $TMPFILE
#echo "---------------------------------------------"
#echo "TMPFILE is $TMPFILE as follows"
#cat $TMPFILE
#echo "---------------------------------------------"
check_for_last_line=$(tail -n 1 $TMPFILE | grep -o "$last_command")
#echo $check_for_last_line
#if tail -n 1 $TMPFILE | grep -o "$last_command"
if [ "$check_for_last_line" == "$last_command" ]
then
#echo "Okay..."
bash $TMPFILE
exit 0
else
echo "Something is wrong"
echo "Last command in your tmp file should be removing itself"
echo "Aborting the process"
exit 1
fi
$ bash --init-file <(echo 'some_command')
$ bash --rcfile <(echo 'some_command')
In case you can't or don't want to use process substitution:
$ cat script
some_command
$ bash --init-file script
Another way:
$ bash -c 'some_command; exec bash'
$ sh -c 'some_command; exec sh'
sh-only way (dash, busybox):
$ ENV=script sh
Here is yet another (working) variant:
This opens a new gnome terminal, then in the new terminal it runs bash. The user's rc file is read first, then a command ls -la is sent for execution to the new shell before it turns interactive.
The last echo adds an extra newline that is needed to finish execution.
gnome-terminal -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
I also find it useful sometimes to decorate the terminal, e.g. with color, for better orientation.
gnome-terminal --profile green -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'

Using bash shell inside Matlab

I'm trying to put a large set of bash commands into a MATLAB script and manage my variables (like file paths, parameters, etc.) from there. This is also needed because the workflow requires manual intervention at certain steps, and I would like to use the step debugger for this.
The problem is, I don't understand how MATLAB interfaces with the bash shell.
I can't do system('source .bash_profile') to define my bash variables. Similarly I can't define them by hand and read them either, e.g. system('export var=somepath') and then system('echo $var') returns nothing.
What is the correct way of defining variables in bash inside matlab's command window? How can I construct a workflow of commands which will use the variables I defined as well as those in my .bash_profile?
Each system() call spawns its own shell, so anything you export in one call is gone by the next. If all you need to do is set environment variables, do this in MATLAB instead; setenv modifies MATLAB's own environment, which every subsequent system() call inherits:
>> setenv('var','somepath')
>> system('echo $var')
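You can double-check from the MATLAB side too; getenv is the standard counterpart to setenv:
>> getenv('var')   % returns 'somepath', confirming MATLAB's environment was updated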
Invoke Bash as a login shell to get your ~/.bash_profile sourced and use the -c option to execute a group of shell commands in one go.
# in Terminal.app
man bash | less -p 'the --login option'
man bash | less -p '-c string'
echo 'export profilevar=myProfileVar' >> ~/.bash_profile
# test in Terminal.app
/bin/bash --login -c '
echo "$0"
echo "$3"
echo "$#"
export var=somepath
echo "$var"
echo "$profilevar"
ps
export | nl
' zero 1 2 3 4 5
# in Matlab
cmd=sprintf('/bin/bash --login -c ''echo "$profilevar"; ps''');
[r,s]=system(cmd);
disp(s);
