How to run a Bash function in a Dockerfile

I have a Bash function, nvm, defined in /root/.profile. docker build fails to find that function when I call it in a RUN step:
RUN apt-get install -y curl build-essential libssl-dev && \
curl https://raw.githubusercontent.com/creationix/nvm/v0.16.1/install.sh | sh
RUN nvm install 0.12 && \
nvm alias default 0.12 && \
nvm use 0.12
The error is
Step 5 : RUN nvm install 0.12
---> Running in b639c2bf60c0
/bin/sh: nvm: command not found
I managed to call nvm by wrapping it with bash -ic, which will load /root/.profile.
RUN bash -ic "nvm install 0.12" && \
bash -ic "nvm alias default 0.12" && \
bash -ic "nvm use 0.12"
The above method works, but it prints a warning:
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
Is there an easier and cleaner way to call the Bash function directly, as if it were a normal binary, without the bash -ic wrapping? Maybe something like:
RUN load_functions && \
nvm install 0.12 && \
nvm alias default 0.12 && \
nvm use 0.12

Docker's RUN (in shell form) executes the command with /bin/sh -c, which is neither interactive nor a login shell, so /root/.profile is never sourced and shell functions defined there (such as nvm) are not available. You need to call Bash explicitly and load the profile yourself:
RUN bash -c 'source /root/.profile && nvm install 0.12 && nvm alias default 0.12 && nvm use 0.12'
If you are afraid of that long command line, put those commands into a shell script and call the script with RUN:
script.sh
#!/bin/bash
# load the nvm shell function, which the installer defined in /root/.profile
source /root/.profile
nvm install 0.12 && \
nvm alias default 0.12 && \
nvm use 0.12
and make it executable:
chmod +x script.sh
In Dockerfile put:
RUN /path/to/script.sh
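If your Docker version supports the SHELL instruction (added in Docker 1.12), a cleaner variant of the same idea, sketched here as a suggestion rather than taken from the answer above, is to make every later RUN step use a login Bash, so /root/.profile (and the nvm function it defines) is loaded automatically:
# assumes a login shell on this image reads /root/.profile
SHELL ["/bin/bash", "-l", "-c"]
RUN nvm install 0.12 && \
    nvm alias default 0.12 && \
    nvm use 0.12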

How to pass docker build command arguments to bash files

I am passing two command line arguments to my docker file like this:
docker build . -t ros-container --build-arg UBUNTU_VERSION=bionic --build-arg ROS_VERSION=melodic
I'm able to access them in my Dockerfile, though I couldn't get them in my bash files. I have tried both the ENTRYPOINT and CMD techniques, but neither of them helped me.
Expectation
I want to access the two arguments, UBUNTU_VERSION & ROS_VERSION, from the script_init.bash file. See the project structure.
Project structure
- ros_tutorials-noetic-devel
  - Dockerfile
  - scripts
    - script_init.bash
Dockerfile
FROM ros:melodic-perception-bionic
# install packages
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -q && \
apt-get upgrade -yq && \
apt-get install -yq wget curl git build-essential vim sudo lsb-release locales bash-completion
# Adjust working directory
RUN locale-gen en_US.UTF-8
RUN useradd -m -d /home/ubuntu ubuntu -p `perl -e 'print crypt("ubuntu", "salt"),"\n"'` && \
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# declare ros version arg
ARG ROS_VERSION
#declare ubuntu version arg
ARG UBUNTU_VERSION
# setup environment
USER ubuntu
WORKDIR /home/ubuntu
ENV UBUNTU_V=$UBUNTU_VERSION \
ROS_V=$ROS_VERSION
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
CMD COPY ./scripts/script_init.bash /
ENTRYPOINT ["/scripts/script_init.bash /"]
script_init.bash
#!/bin/bash
set -e
export UBUNTU_CODENAME=$UBUNTU_V
export REPO_DIR=$(dirname "$SCRIPT_DIR")
export CATKIN_DIR="$HOME/catkin_ws"
export ROS_DISTRO=$ROS_V
You need to copy the script file into your docker image and execute it correctly.
You should be able to get it working by using this Dockerfile, note the lines at the bottom:
FROM ros:melodic-perception-bionic
# install packages
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -q && \
apt-get upgrade -yq && \
apt-get install -yq \
bash-completion \
build-essential \
curl \
git \
locales \
lsb-release \
sudo \
vim \
wget
# Adjust working directory
RUN locale-gen en_US.UTF-8
RUN useradd -m -d /home/ubuntu ubuntu -p `perl -e 'print crypt("ubuntu", "salt"),"\n"'` && \
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# declare ros version arg
ARG ROS_VERSION
#declare ubuntu version arg
ARG UBUNTU_VERSION
# setup environment
USER ubuntu
WORKDIR /home/ubuntu
ENV UBUNTU_V=$UBUNTU_VERSION \
ROS_V=$ROS_VERSION
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# Copy your scripts directory into the docker image
COPY --chown=ubuntu:ubuntu scripts scripts
# Make sure you have execute permissions on the script
RUN chmod +x "./scripts/script_init.bash"
# Set your entrypoint to execute the script
ENTRYPOINT ["./scripts/script_init.bash"]
As a note, you could export all of these environment variables in the Dockerfile during the build without needing to execute a script at runtime, e.g. in your dockerfile:
# Export environment variables in Dockerfile
ENV UBUNTU_CODENAME=$UBUNTU_VERSION
ENV REPO_DIR=/home/ubuntu/scripts
ENV CATKIN_DIR=/home/ubuntu/catkin_ws
ENV ROS_DISTRO=$ROS_VERSION
I have finally found a solution that works like a charm! Once you add the scripts folder, you can run any script in it with the bash command, and in this way you can pass whatever arguments you like to any bash file within that folder.
# setup base image
FROM ros:melodic-perception-bionic
# install packages
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -q && \
apt-get upgrade -yq && \
apt-get install -yq wget curl git build-essential vim sudo lsb-release locales bash-completion
# Adjust working directory
RUN locale-gen en_US.UTF-8
RUN useradd -m -d /home/ubuntu ubuntu -p `perl -e 'print crypt("ubuntu", "salt"),"\n"'` && \
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# declare ros version arg
ARG ROS_VERSION
#declare ubuntu version arg
ARG UBUNTU_VERSION
# setup environment
USER ubuntu
WORKDIR /home/ubuntu
ENV UBUNTU_V=$UBUNTU_VERSION \
ROS_V=$ROS_VERSION
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# call script files
ADD /scripts /scripts
RUN bash /scripts/script_init.bash
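Another option, sketched here as a suggestion rather than taken from the answers above, is to hand the build args to the script explicitly as positional parameters; ARG values behave like environment variables inside shell-form RUN steps:
# Dockerfile: UBUNTU_VERSION and ROS_VERSION are the ARGs declared earlier
RUN bash /scripts/script_init.bash "$UBUNTU_VERSION" "$ROS_VERSION"
and in script_init.bash:
#!/bin/bash
set -e
# values passed in from the RUN line above
export UBUNTU_CODENAME="$1"
export ROS_DISTRO="$2"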

How do I replace "\r" line endings when running Docker script on Windows?

I'm using Docker 19 on Windows 10 (using Cygwin to run Docker). I have this web/Dockerfile ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y dos2unix
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
WORKDIR /app/
COPY requirements.txt requirements.txt
COPY entrypoint.sh entrypoint.sh
RUN tr -d '\r' < /app/entrypoint.sh > /app/entrypoint2.sh
RUN python -m pip install -r requirements.txt
RUN grep '\r' /app/entrypoint.sh
RUN dos2unix /app/entrypoint.sh
RUN grep '\r' /app/entrypoint.sh
ENTRYPOINT ["bash", "/app/entrypoint.sh"]
and the entrypoint.sh file referenced looks like
#!/bin/bash
set -e
python manage.py migrate
python manage.py migrate directory
python manage.py docker_init_db_data
exec "$#"
But I guess there are some "\r" line endings that cause running "docker-compose up" on Windows to die. In my file above, I have
RUN dos2unix /app/entrypoint.sh
But I guess this doesn't do it, because running "docker-compose up" results in
web_1     | set: usage: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
web_1     | /app/entrypoint.sh: line 3: $'\r': command not found
web_1     | Unknown command: 'migrate\r'. Did you mean migrate?
web_1     | Type 'manage.py help' for usage.
How do I properly replace "\r" line endings in my shell script so that I can properly run my Dockerfile on Windows (and ideally all other) platforms?
I had the same issue; putting the commands on one line and separating them with && solved the problem.
In your case it would be:
python manage.py migrate --noinput && python manage.py migrate directory && python manage.py docker_init_db_data
Hope it solves your problem.
I just ran into the same problem and found out that it is caused by the file's line-ending style. On Windows most files are saved with CRLF line endings instead of LF. Changing the line endings from CRLF to LF solves the issue.
If you're on VS Code, you can easily change it at the bottom right.
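Two more things worth checking (both are educated guesses, not taken from the answers above): if docker-compose bind-mounts the project directory over /app, the mounted entrypoint.sh with CRLF endings will shadow the copy that dos2unix fixed during the build, which would explain why the RUN dos2unix appears to have no effect. And if the project is tracked in Git, you can stop CRLF from reappearing by forcing LF for shell scripts at checkout:
# .gitattributes at the repository root: always check out shell scripts with LF endings
*.sh text eol=lf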

Installing OSSEC agent on a container. The OSSEC install script (install.sh) fails and loops infinitely when passing arguments via script

Basically I am going to have a whole bunch of Ubuntu containers that will have the OSSEC agent installed and will communicate with a main server. I want to automate the installation, so using a RUN instruction in the Dockerfile I wrote a script that downloads the OSSEC tar file, unpacks it, cds into the directory and runs the install script while passing arguments to each question of the installation phase:
Dockerfile:
FROM ubuntu
RUN apt-get update && apt-get install -y \
build-essential \
libmysqlclient-dev \
postgresql-common \
wget \
tar
RUN wget -U ossec https://bintray.com/artifact/download/ossec/ossec-hids/ossec-hids-2.8.3.tar.gz
RUN tar -xvf ossec-hids-2.8.3.tar.gz && \
rm -f ossec-hids-2.8.3.tar.gz && \
cd ossec-hids-2.8.3 && \
echo "en agent \n 192.168.1.50 y y y" | ./install.sh
When it echoes the arguments into the script, install.sh fails and loops over the second question infinitely. Note that I have tried printf, an expect script, the yes command, and running the script inside the container, all with the same outcome.
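Two observations, offered as assumptions rather than taken from the post: the echoed string is a single line with literal spaces between the answers (and whether \n is even expanded depends on which echo /bin/sh provides), so install.sh never receives one answer per prompt and keeps re-asking. OSSEC also supports unattended installs via an etc/preloaded-vars.conf file inside the unpacked source tree, which sidesteps the interactive prompts entirely; a rough sketch, with variable names that should be double-checked against the preloaded-vars.conf template shipped in the tarball:
RUN cd ossec-hids-2.8.3 && \
    printf 'USER_LANGUAGE="en"\nUSER_NO_STOP="y"\nUSER_INSTALL_TYPE="agent"\nUSER_DIR="/var/ossec"\nUSER_AGENT_SERVER_IP="192.168.1.50"\n' > etc/preloaded-vars.conf && \
    ./install.sh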

How can I set Bash aliases for docker containers in Dockerfile?

I am new to Docker. I found that we can set environment variables using the ENV instruction in the Dockerfile. But how does one set Bash aliases for long commands in a Dockerfile?
Basically like you always do, by adding it to the user's .bashrc file:
FROM foo
RUN echo 'alias hi="echo hello"' >> ~/.bashrc
As usual this will only work for interactive shells:
docker build -t test .
docker run -it --rm --entrypoint /bin/bash test hi
/bin/bash: hi: No such file or directory
docker run -it --rm test bash
$ hi
hello
For non-interactive shells you should create a small script and put it in your path, i.e.:
RUN echo -e '#!/bin/bash\necho hello' > /usr/bin/hi && \
chmod +x /usr/bin/hi
If your alias uses parameters (i.e. hi Jim -> hello Jim), just add "$@":
RUN echo -e '#!/bin/bash\necho hello "$@"' > /usr/bin/hi && \
chmod +x /usr/bin/hi
To create an alias of an existing command, you might also use ln -s:
ln -s $(which <existing_command>) /usr/bin/<my_command>
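For example (the command names here are purely illustrative):
RUN ln -s "$(which python3)" /usr/local/bin/py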
If you want to use aliases just in the Dockerfile, but not inside a container, then the shortest way is the ENV declaration:
ENV update='apt-get update -qq'
ENV install='apt-get install -qq'
RUN $update && $install apt-utils \
curl \
gnupg \
python3.6
And for use in a container the way like already described:
RUN printf '#!/bin/bash \n $(which apt-get) install -qq "$@"' > /usr/bin/install
RUN chmod +x /usr/bin/install
Most of the time I use aliases just in the build stage and do not go inside containers, so the first example is quicker, clearer and simpler for everyday use.
I just added this to my app.dockerfile file:
# Set up aliases
ADD ./bashrc_alias.sh /usr/sbin/bashrc_alias.sh
ADD ./initbash_profile.sh /usr/sbin/initbash_profile
RUN chmod +x /usr/sbin/initbash_profile
RUN /bin/bash -c "/usr/sbin/initbash_profile"
And inside the initbash_profile.sh file, which just appends my custom aliases (there is no need to source the .bashrc file):
# Add the Bash aliases
cat /usr/sbin/bashrc_alias.sh >> ~/.bashrc
It worked a treat!
Another option is to just use docker exec -it <container-name> bash from outside the container and use your own .bashrc or .bash_profile file (whatever you prefer).
E.g.,
docker exec -it docker_app_1 bash
I think the easiest way would be to mount a file into your container containing your aliases, and then specify where Bash should find it:
docker run \
-it \
--rm \
-v ~/.bash_aliases:/tmp/.bash_aliases \
[image] \
/bin/bash --init-file /tmp/.bash_aliases
Sample usage:
echo 'alias what="echo it works"' > ~/my_aliases
docker run -it --rm -v ~/my_aliases:/tmp/my_aliases ubuntu:18.04 /bin/bash --init-file /tmp/my_aliases
alias
Output:
alias what='echo it works'
what
Output:
it works
You can use an ENTRYPOINT script; it will not work for aliases, but it does for exported functions. In your Dockerfile:
ADD dev/entrypoint.sh /opt/entrypoint.sh
ENTRYPOINT ["/opt/entrypoint.sh"]
Your entrypoint.sh file:
#!/bin/bash
set -e
function dev_run()
{
    # put the long command your "alias" should wrap here
    :
}
export -f dev_run
exec "$@"
Here is a Bash function to have your aliases in every container you use interactively.
ducker_it() {
docker cp ~/bin/alias.sh "$1":/tmp
docker exec -it "$1" /bin/bash -c "[[ ! -f /tmp/alias.sh.done ]] \
&& [[ -w /root/.bashrc ]] \
&& cat /tmp/alias.sh >> /root/.bashrc \
&& touch /tmp/alias.sh.done"
docker exec -it "$1" /bin/bash
}
Required step before:
grep ^alias ~/.zshrc > ~/bin/alias.sh
I used some of the previous solutions, but the aliases were still not recognised.
I'm trying to set aliases and use them both within later Dockerfile steps and in the container at runtime.
RUN echo "alias model-downloader='python3 ${MODEL_DL_PATH}/downloader.py'" >> ~/.bash_aliases && \
echo "alias model-converter='python3 ${MODEL_DL_PATH}/converter.py'" >> ~/.bash_aliases && \
source ~/.bash_aliases
# Download the model
RUN model-downloader --name $MODEL_NAME -o $MODEL_DIR --precisions $MODEL_PRECISION;
The solution for me was to use ENV variables that held the folder paths and then spell out the exact executable. I could have used ARG too, but for most of my scenarios I needed the values in both the build stage and later at runtime.
I used the ENV variables in conjunction with a Bash script that runs once its dependencies have responded, sources the Bash environment, sets some more env variables, and allows further commands to pipe through.
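As a rough sketch of that ENV-based approach (the path value below is a placeholder, not from the original post; MODEL_NAME, MODEL_DIR and MODEL_PRECISION are assumed to be declared as in the post), the "alias" simply becomes the full command spelled out via the ENV variable, which works in later RUN steps as well as at runtime:
# placeholder value; point this at the real downloader directory
ENV MODEL_DL_PATH=/path/to/model_downloader
# no alias needed: spell out the executable via the ENV path
RUN python3 "$MODEL_DL_PATH/downloader.py" --name "$MODEL_NAME" -o "$MODEL_DIR" --precisions "$MODEL_PRECISION"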
@ErikDannenberg's answer did the trick, but in my case some adjustments were needed.
It didn't work with aliases, because apparently there's an issue with interactive shells.
I reached for his second solution, but it still didn't really work. I checked the existing shell scripts in my project and noticed the shebang (first line = #!/usr/bin/env sh) differs a bit from #!/usr/bin/bash. After changing it accordingly it started working for my t and tc "aliases", but I had to use the addendum to his second solution to get tf working.
Here's the complete Dockerfile
FROM php:8.1.1-fpm-alpine AS build
RUN apk update && apk add git
RUN curl -sS https://getcomposer.org/installer | php && mv composer.phar /usr/local/bin/composer
RUN apk add --no-cache $PHPIZE_DEPS \
&& pecl install xdebug \
&& docker-php-ext-enable xdebug \
&& touch /usr/local/etc/php/conf.d/99-xdebug.ini \
&& echo "xdebug.mode=coverage" >> /usr/local/etc/php/conf.d/99-xdebug.ini \
&& echo -e '#!/usr/bin/env sh\nphp artisan test' > /usr/bin/t \
&& chmod +x /usr/bin/t \
&& echo -e '#!/usr/bin/env sh\nphp artisan test --coverage' > /usr/bin/tc \
&& chmod +x /usr/bin/tc \
&& echo -e '#!/usr/bin/env sh\nphp artisan test --filter "$@"' > /usr/bin/tf \
&& chmod +x /usr/bin/tf
WORKDIR /var/www

Installing rbenv on docker ubuntu/debian

I want to install rbenv in Docker, which seems to work, but I can't reload the shell.
FROM node:0.10.32-slim
RUN \
apt-get update \
&& apt-get install -y sudo
RUN \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& groupadd r \
&& useradd r -m -g r -g sudo
USER r
RUN \
git clone https://github.com/sstephenson/rbenv.git ~/.rbenv \
&& echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc \
&& echo 'eval "$(rbenv init -)"' >> ~/.bashrc
RUN rbenv # check if it works...
When I run this I get:
docker build .
..
Step 5 : RUN rbenv
/bin/sh: 1: rbenv: not found
From what I understand, I need to reload the current shell so I can install ruby versions. Not sure if I am on the right track.
Also see:
Using rbenv with Docker
The RUN command executes everything with /bin/sh, so your .bashrc is never evaluated.
Add this to that same RUN step:
&& export PATH="$HOME/.rbenv/bin:$PATH" \
which prepends rbenv's bin directory to the PATH of that shell (the export only lasts for the remainder of that RUN step).
Full Dockerfile
FROM node:0.10.32-slim
RUN \
apt-get update \
&& apt-get install -y sudo
RUN \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers \
&& groupadd r \
&& useradd r -m -g r -g sudo
USER r
RUN \
git clone https://github.com/sstephenson/rbenv.git ~/.rbenv \
&& echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc \
&& echo 'eval "$(rbenv init -)"' >> ~/.bashrc \
&& export PATH="$HOME/.rbenv/bin:$PATH" \
&& rbenv --version # check if it works...
I'm not sure how Docker works, but it seems like maybe you're missing a step where you source ~/.bashrc, which is preventing you from having the rbenv executable in your PATH. Try adding that right before your first attempt to run rbenv and see if it helps.
You can always solve PATH issues by using the absolute path, too. Instead of just rbenv, try running $HOME/.rbenv/bin/rbenv.
If that works, it indicates that rbenv has installed successfully, and that your PATH is not correctly set to include its bin directory.
It looks, from reading the other question you posted, like Docker allows you to set your PATH via an ENV PATH instruction, for example:
ENV PATH $HOME/.rbenv/bin:/usr/bin:/bin
but you should make sure that you include all of the various paths you will need.
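A sketch of that for this Dockerfile (assuming the r user created above, whose home is /home/r; $HOME is generally not available as a build-time variable inside ENV, so it is safer to spell the path out and keep the existing system paths):
ENV PATH /home/r/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# rbenv is now on PATH for every later RUN step and in the running container
RUN rbenv --version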
