$PATH not updated when running docker exec sh -c - bash

I have the following script in a sh file running on the host:
printf '\n\n=== Installing asdf ...\n\n'
docker container exec "$CONTAINER_NAME" sh -c 'git clone https://github.com/asdf-vm/asdf.git /root/.asdf --branch v0.10.2'
docker container exec "$CONTAINER_NAME" sh -c 'echo ''. /root/.asdf/asdf.sh'' >> /root/.bashrc'
docker container exec "$CONTAINER_NAME" sh -c 'echo ''. /root/.asdf/completions/asdf.bash'' >> /root/.bashrc'
printf '\n\n=== Installing node/npm using asdf ...\n\n'
NODEJS_VERSION='17.9.0'
docker container exec "$CONTAINER_NAME" sh -c 'asdf plugin add nodejs'
docker container exec "$CONTAINER_NAME" sh -c "asdf install nodejs $NODEJS_VERSION"
docker container exec "$CONTAINER_NAME" sh -c "asdf global nodejs $NODEJS_VERSION"
When the asdf plugin add nodejs line is executed, I get the following error:
sh: 1: asdf: not found
The whole issue is happening because $PATH is not being updated after the installation of asdf. I tried:
to reload .bashrc/.profile after installing asdf
docker container exec "$CONTAINER_NAME" sh -c '. /root/.bashrc'
to restart the container:
docker "$CONTAINER_NAME" restart
The (not so) weird thing is that when I get into the container I can use asdf because, as expected, $PATH contains the path to the asdf folders.
Does anybody know what I am missing here?

Each exec runs a new process, which loses all its settings when it terminates. You need to start a new Bash shell with the correct options to read .bashrc ... or just give up on trying to use its interactive features for noninteractive scripts, and instead put these commands in a single script and simply run that.
docker container exec "$container_name" bash -c '
printf "%s\n" "=== Installing asdf ..."
git clone https://github.com/asdf-vm/asdf.git /root/.asdf --branch v0.10.2
. /root/.asdf/asdf.sh
# . /root/.asdf/completions/asdf.bash
printf "%s\n" "=== Installing node/npm using asdf ..."
nodejs_version="17.9.0"
asdf plugin add nodejs
asdf install nodejs "$nodejs_version"
asdf global nodejs "$nodejs_version"'
I could not bring myself to keep all the newlines in your diagnostic messages.
Tangentially, see also Correct Bash and shell script variable capitalization, which explains why I changed the variables with the container name and the Node version to lower case.
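If you would rather keep the container-side commands in a separate file, as suggested above, a rough sketch (the file name and paths are just examples) is to copy the script into the container and run it with a single exec:
cat > install_asdf.sh <<'EOF'
#!/bin/bash
set -e
git clone https://github.com/asdf-vm/asdf.git /root/.asdf --branch v0.10.2
. /root/.asdf/asdf.sh
asdf plugin add nodejs
asdf install nodejs 17.9.0
asdf global nodejs 17.9.0
EOF
docker container cp install_asdf.sh "$container_name":/tmp/install_asdf.sh
docker container exec "$container_name" bash /tmp/install_asdf.sh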

Related

Install nvm inside Docker using bash as source for Docker's commands

I am trying to install nvm inside a Docker container by running "docker exec" commands from a bash script. These are the commands inside the script:
Update:
I've added the whole script.
#!/bin/bash
ubuntu_version="ubuntu:$1"
node_version=$2
container_name="ubuntu_container"
green=`tput setaf 2`
reset=`tput sgr0`
red=`tput setaf 1`
echo "${green}Pulling Ubuntu version: $ubuntu_version ${reset}"
docker pull $ubuntu_version
echo "${green}Running Ubuntu in background...${reset}"
# Stop and remove an existing docker container
docker stop $container_name
docker rm $container_name
docker run -d --name $container_name --rm $ubuntu_version sleep inf
echo "${green}Updating Ubuntu...${reset}"
docker exec $container_name apt update
docker exec $container_name apt upgrade
echo "${green}Installing curl${reset}"
docker exec $container_name apt install -y curl
echo "${green}Installing nvm: ${red}$node_version${reset}"
docker exec -it $container_name bash -c "curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash"
docker exec $container_name bash -c ". ~/.bashrc; nvm"
This leads to the following output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15037 100 15037 0 0 54481 0 --:--:-- --:--:-- --:--:-- 54481
=> Downloading nvm as script to '/root/.nvm'
=> Appending nvm source string to /root/.bashrc
=> Appending bash_completion source string to /root/.bashrc
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "nvm": executable file not found in $PATH: unknown
Seems like nvm is either not in the path or not installed. What am I missing here?
The problem is with the third docker exec.
You have two options:
1) Run:
docker exec -it test_container bash
In that bash session, you can run nvm.
2) Run:
docker exec test_container bash -c '. ~/.bashrc; nvm'
Update
Regarding 1), I thought (wrongly) that you were running your docker exec manually.
Regarding 2), save the following in test.sh:
#!/bin/bash
docker run -d --name test_container2 --rm node sleep inf
docker exec test_container2 apt install -y curl
docker exec -it test_container2 bash -c "curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash"
docker exec test_container2 bash -c '. ~/.bashrc; nvm'
and run bash test.sh
When I run it, I don't get nvm: command not found.
Update 2
Your version of ~/.bashrc contains a test on $PS1, so as a workaround, change the last line to:
docker exec $container_name bash -c "PS1=x; . ~/.bashrc; nvm"

Executing docker from terminal directly works fine but not when executed from inside a .sh script?

I am on Ubuntu 20.04 and installed docker using sudo snap install docker. When I run the docker command directly from the terminal (the terminal installed with Ubuntu) it works fine, but when I execute a .sh script from the terminal using either bash ./script.sh or ./script.sh, I get the error docker: command not found.
This is the script:
#!/bin/bash
source $(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/env.sh
docker run -e "NODE_ENV=dev" -it --rm --name my-npm-2 -v $PROJECT_HOME/code:/var/www/html/code -w /var/www/html/code node:14 npm install
docker run -e "NODE_ENV=dev" -it --rm --name my-npm -v $PROJECT_HOME/code/web:/var/www/html/code/web -w /var/www/html/code/web node:14 npm install
$SCRIPT_HOME/buildjs_dev.sh
docker exec project_php sudo php -d memory_limit=-1 /usr/local/bin/composer install --working-dir=/var/www/html/code
docker exec project_php chown -R www-data:www-data /var/www/html/code/var/cache
docker exec project_php chown -R www-data:www-data /var/www/html/code/var/log
I am new to Linux in general and I don't know if the problem is with the script itself, or why it isn't recognizing docker.
You are sourcing a file at the start of your script, which might be changing the PATH variable. Try either commenting out the source line or calling the docker command with its full path.
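A quick way to confirm that env.sh is the culprit (a sketch, not specific to your setup) is to print PATH before and after sourcing it:
#!/bin/bash
echo "PATH before: $PATH"
source "$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/env.sh"
echo "PATH after:  $PATH"
command -v docker || echo "docker is no longer on PATH after sourcing env.sh"
For a snap install, the full path is usually /snap/bin/docker.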

Docker run bash --init-file

I'm trying to create an alias to help debug my docker containers.
I discovered bash accepts a --init-file option which ought to let us run some commands before passing over to interactive mode.
So I thought I could do
docker-bash() {
docker run --rm -it "$1" bash --init-file <(echo "ls; pwd")
}
But those commands don't appear to be running:
% docker-bash c7460dfcab50
root@9c6f64a9db8c:/#
Is it an escaping issue or.. what's going on?
bash --init-file <(echo "ls; pwd")
Run alone in a terminal on my host machine, it works as expected (it runs the commands, then starts a new bash instance).
In points:
The <(...) is a bash extension called process substitution.
From the manual linked above: Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.
The process substitution works like this:
bash creates a fifo in /tmp or creates a new file descriptor in /dev/fd.
The filename, either the file under /tmp or /dev/fd/<number>, is substituted for <(...) when the command is executed.
So for example echo <(echo 1) outputs /dev/fd/63.
Docker works by creating a new environment that is separated from the host. That means that:
Processes inside docker do not inherit file descriptors from the host process:
So /dev/fd/* files are not inherited.
Processes inside docker are accessing isolated filesystem tree.
So processes can't access /tmp/* files from the host.
So, summarizing: docker run -ti --rm alpine cat <(echo 1) will not work, because the filename substituted for <(...) is not available from inside the docker environment.
An easy workaround would be to just:
docker run -ti --rm alpine sh -c 'ls; pwd; exec sh'
Or use a temporary file:
echo "ls; pwd" > /tmp/tempfile
docker run --rm -it -v /tmp/tempfile:/tmp/tempfile bash bash --init-file /tmp/tempfile
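If you want to keep the docker-bash() interface from the question, the temporary-file workaround can be wrapped up roughly like this (a sketch; it assumes the image has bash):
docker-bash() {
    local initfile
    initfile=$(mktemp)                      # temp file on the host
    echo "ls; pwd" > "$initfile"
    docker run --rm -it -v "$initfile":/tmp/init.sh "$1" bash --init-file /tmp/init.sh
    rm -f "$initfile"
}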
For my use-case I wanted to set an alias, which would not persist if we re-exec the shell. However, aliases can be written to ~/.bashrc, which will be reloaded on the subsequent exec. Ergo,
docker-bash() {
docker run --rm -it "$1" bash -c $'set -o xtrace; echo "alias ll=\'ls -lAhtrF --color=always\'" >> ~/.bashrc; exec "$0"'
}
Works. --rm should clean up any files we create anyway if I understand properly how docker works.
Or perhaps this is a nicer way to write it:
docker-bash() {
read -r -d '' BASHRC << EOM
alias ll='ls -lAhtrF --color=always'
EOM
docker run --rm -it "$1" bash -c "echo \"$BASHRC\" >> ~/.bashrc; exec \"\$0\""
}
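Either version is invoked the same way; the image name here is only an example:
docker-bash ubuntu:22.04
# inside the container, the ll alias appended to ~/.bashrc is available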

Source script on interactive shell inside Docker container

I want to open an interactive shell which sources a script to use the bitbake environment on a repository that I bind mount:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
my_image /bin/bash -c "cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh"
The problem is that the -it argument does not seem to have any effect, since the shell exits right after executing cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh
I also tried this:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
my_image /bin/bash -c "cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh && bash"
This spawns an interactive shell, but none of the macros defined in set_bb_env.sh are available in it.
Would there be a way to provide a tty with the script properly sourced?
The -it flag conflicts with the command you're running: you're telling docker to create a pseudo-terminal (pty) and then running a command in that terminal (bash -c ...). When that command finishes, the run is done.
What some people have done to work around this is to only have export variables in their sourced environment, and the last command would be exec bash. But if you need aliases or other items that aren't inherited like that, then your options are a bit more limited.
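Concretely, if set_bb_env.sh only exported variables, something along these lines would keep them in the interactive shell (note the exec, which replaces the wrapper shell instead of nesting one):
docker run --rm -it \
    --mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
    my_image /bin/bash -c "cd /mnt/bb_repoistory/oe-core && . build/conf/set_bb_env.sh && exec bash"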
Instead of running the source in a parent shell, you could run it in the target shell. If you modified your .bash_profile to include the following line:
[ -n "$DOCKER_LOAD_EXTRA" -a -r "$DOCKER_LOAD_EXTRA" ] && source "$DOCKER_LOAD_EXTRA”
and then had your command be:
... /bin/bash -c "cd /mnt/bb_repository/oe-core && DOCKER_LOAD_EXTRA=build/conf/set_bb_env.sh exec bash"
that may work. This tells your .bash_profile to load this file when the env variable is already set, but not otherwise. (There can also be the -e flag on the docker command line, but I think that sets it globally for the entire container, which is probably not what you want.)

docker run -it bash -c 'Function from sourced file'

I have a file with a couple of functions inside a docker container. I want to run:
docker run -it 06b68ae1c601 bash -c 'function1'
but the output is '/bin/bash: function1: command not found'.
I have placed 'source /pathtofile' in /root/.bashrc, /root/.profile, and /etc/bash.bashrc, but the file is still not sourced.
I also tried running bash with the --login option; again, command not found.
The OS of the container is Ubuntu 14.04.
I don't want to run bash -c 'source /file; function1'.
Any idea where I should put the source command?

Resources