Run interactive docker with entrypoint [duplicate] - bash

This question already has answers here: Activate conda environment in docker (14 answers)
Closed last year.
I am trying to build a docker image with a conda environment and start the environment when I start a container, but I cannot figure out how to. My Dockerfile is currently:
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt update \
&& apt install -y htop python3-dev wget git imagemagick
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir root/.conda \
&& sh Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh \
&& conda create -y -n .venv python=3.7
RUN /bin/bash -c "source activate .venv \
&& pip install -r requirements.txt"
# More omitted installs
CMD ["/bin/bash", "source activate .venv"]
RUN /bin/bash -c "source activate .venv"
And then I build and run with:
docker build -f Dockerfile -t adlr .
docker run -it adlr /bin/bash
The conda environment is not activated when the container starts, but I would like it to be.

You can activate the environment upon starting the container by replacing the last line of the Dockerfile with:
RUN echo "source activate .venv" >> ~/.bashrc
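For reference, the tail of the Dockerfile would then look something like this (a sketch, assuming the environment is still named .venv; note that ~/.bashrc is only read by interactive bash shells, which is what docker run -it ... /bin/bash gives you):

```dockerfile
# Activate the conda environment for every interactive bash session
RUN echo "source activate .venv" >> ~/.bashrc
# An interactive shell will source ~/.bashrc and land inside .venv
CMD ["/bin/bash"]
```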

Related

How to Start ssh-agent in Dockerfile

When entering my container, I want to log in as user ryan in the directory /home/ryan/cas, with the command eval "$(ssh-agent -c)" run. My Dockerfile:
FROM ubuntu:latest
ENV TZ=Australia/Sydney
RUN set -ex; \
# NOTE(Ryan): Prevent docker build hanging on timezone confirmation
ln -sf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone; \
apt update; \
apt install -y --no-install-recommends \
sudo ca-certificates git gnupg openssh-client vim; \
useradd -m ryan -g sudo; \
printf "ryan ALL=(ALL:ALL) NOPASSWD:ALL" | sudo EDITOR="tee -a" visudo; \
# NOTE(Ryan): Prevent sudo usage prompt appearing on startup
touch /home/ryan/.sudo_as_admin_successful; \
git clone https://github.com/ryan-mcclue/cas.git /home/ryan/cas; \
chmod 777 -R /home/ryan/cas;
ENTRYPOINT ["/bin/bash", "-l", "-c"]
USER ryan
WORKDIR /home/ryan/cas
CMD eval "$(ssh-agent -s)"
However, running ssh-add I still get Could not open a connection to your authentication agent, which indicates that the ssh-agent is not running. Manually typing eval "$(ssh-agent -c)" works.
I think you want to remove your ENTRYPOINT statement, and then you want:
USER ryan
WORKDIR /home/ryan/cas
CMD ["ssh-agent", "bash", "-l"]
This will get you a login shell, run under the control of ssh-agent (so you'll have the necessary SSH_* environment variables and an active socket available).
To understand what's happening with your container, try running from the command line:
bash -l -c 'eval $(ssh-agent -s)'
What happens? The shell exits immediately, because running ssh-agent -s causes the agent to background itself, which looks pretty much the same as "exiting". Since you passed the -c flag, and the command given to -c has exited, the parent bash shell exits as well.
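The same effect can be reproduced without ssh-agent at all; this sketch uses sleep as a stand-in for a process that backgrounds a child and returns:

```shell
# Stand-in for ssh-agent: fork a background child, then return immediately.
# The outer bash exits as soon as echo finishes, even though the background
# child is still alive -- which is exactly what ssh-agent -s does.
bash -c 'sleep 30 >/dev/null 2>&1 & echo "agent backgrounded, shell exiting"'
```

The container stops at that point for the same reason: its main process (the `-c` shell) has exited, and Docker does not care that a backgrounded child remains.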

How to pass docker build command arguments to bash files

I am passing two command line arguments to my docker file like this:
docker build . -t ros-container --build-arg UBUNTU_VERSION=bionic --build-arg ROS_VERSION=melodic
I'm able to access them in my Dockerfile, though I couldn't get them in my bash files. I have tried both entrypoint and cmd techniques, but none of them helped me.
Expectation
I want to access the two arguments, UBUNTU_VERSION & ROS_VERSION, from the 'script_init.bash' file. See the project structure.
Project structure
- ros_tutorials-noetic-devel
  - Dockerfile
  - scripts
    - script_init.bash
Dockerfile
FROM ros:melodic-perception-bionic
# install packages
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -q && \
apt-get upgrade -yq && \
apt-get install -yq wget curl git build-essential vim sudo lsb-release locales bash-completion
# Adjust working directory
RUN locale-gen en_US.UTF-8
RUN useradd -m -d /home/ubuntu ubuntu -p `perl -e 'print crypt("ubuntu", "salt"),"\n"'` && \
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# declare ros version arg
ARG ROS_VERSION
#declare ubuntu version arg
ARG UBUNTU_VERSION
# setup environment
USER ubuntu
WORKDIR /home/ubuntu
ENV UBUNTU_V=$UBUNTU_VERSION \
ROS_V=$ROS_VERSION
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
CMD COPY ./scripts/script_init.bash /
ENTRYPOINT ["/scripts/script_init.bash /"]
script_init.bash
#!/bin/bash
set -e
export UBUNTU_CODENAME=$UBUNTU_V
export REPO_DIR=$(dirname "$SCRIPT_DIR")
export CATKIN_DIR="$HOME/catkin_ws"
export ROS_DISTRO=$ROS_V
You need to copy the script file into your docker image and execute it correctly.
You should be able to get it working by using this Dockerfile, note the lines at the bottom:
FROM ros:melodic-perception-bionic
# install packages
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -q && \
apt-get upgrade -yq && \
apt-get install -yq \
bash-completion \
build-essential \
curl \
git \
locales \
lsb-release \
sudo \
vim \
wget
# Adjust working directory
RUN locale-gen en_US.UTF-8
RUN useradd -m -d /home/ubuntu ubuntu -p `perl -e 'print crypt("ubuntu", "salt"),"\n"'` && \
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# declare ros version arg
ARG ROS_VERSION
#declare ubuntu version arg
ARG UBUNTU_VERSION
# setup environment
USER ubuntu
WORKDIR /home/ubuntu
ENV UBUNTU_V=$UBUNTU_VERSION \
ROS_V=$ROS_VERSION
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# Copy your scripts directory into the docker image
COPY --chown=ubuntu:ubuntu scripts scripts
# Make sure you have execute permissions on the script
RUN chmod +x "./scripts/script_init.bash"
# Set your entrypoint to execute the script
ENTRYPOINT ["./scripts/script_init.bash"]
As a note, you could export all of these environment variables in the Dockerfile during the build without needing to execute a script at runtime, e.g. in your dockerfile:
# Export environment variables in Dockerfile
ENV UBUNTU_CODENAME=$UBUNTU_VERSION
ENV REPO_DIR=/home/ubuntu/scripts
ENV CATKIN_DIR=/home/ubuntu/catkin_ws
ENV ROS_DISTRO=$ROS_VERSION
I have finally found a solution that works like a charm! Once you add the scripts folder, you can run it with the bash command. This way, you can pass whatever arguments you like to any bash file within the scripts folder.
# setup base image
FROM ros:melodic-perception-bionic
# install packages
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -q && \
apt-get upgrade -yq && \
apt-get install -yq wget curl git build-essential vim sudo lsb-release locales bash-completion
# Adjust working directory
RUN locale-gen en_US.UTF-8
RUN useradd -m -d /home/ubuntu ubuntu -p `perl -e 'print crypt("ubuntu", "salt"),"\n"'` && \
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# declare ros version arg
ARG ROS_VERSION
#declare ubuntu version arg
ARG UBUNTU_VERSION
# setup environment
USER ubuntu
WORKDIR /home/ubuntu
ENV UBUNTU_V=$UBUNTU_VERSION \
ROS_V=$ROS_VERSION
ENV LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=en_US.UTF-8
# call script files
ADD /scripts /scripts
RUN bash /scripts/script_init.bash
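One caveat worth noting: variables exported by a script in a RUN step last only for that single build step and are not visible when the container later runs, so anything needed at runtime still has to go through ENV (or an entrypoint script). The underlying shell behavior can be seen locally:

```shell
# A child shell (analogous to a single RUN step) exports a variable...
bash -c 'export UBUNTU_CODENAME=bionic'
# ...but the parent environment (the next step, or `docker run`) never sees it.
echo "UBUNTU_CODENAME is ${UBUNTU_CODENAME:-unset}"   # prints: UBUNTU_CODENAME is unset
```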

Docker centos7 systemctl does not work: Failed to connect to D-Bus

I am trying to run elasticsearch on docker.
My setup is as follows:
host system : OSX 10.12.5
docker : 17.05.0-ce
docker operating image : centos:latest
I was following this article, but it got stuck at systemctl daemon-reload.
I found the official CentOS response about this D-Bus bug, but when I ran the docker run command it showed the message below.
[!!!!!!] Failed to mount API filesystems, freezing.
How could I solve this problem?
FYI, here is the Dockerfile I used to build the image:
FROM centos
MAINTAINER juneyoung <juneyoung@hanmail.net>
ARG u=elastic
ARG uid=1000
ARG g=elastic
ARG gid=1000
ARG p=elastic
# add USER
RUN groupadd -g ${gid} ${g}
RUN useradd -d /home/${u} -u ${uid} -g ${g} -s /bin/bash ${u}
# systemctl settings from official Centos github
# https://github.com/docker-library/docs/tree/master/centos#systemd-integration
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
# yum settings
RUN yum -y update
RUN yum -y install java-1.8.0-openjdk.x86_64
ENV JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-3.b12.el7_3.x86_64/jre/
# install wget
RUN yum install -y wget
# install net-tools : netstat, ifconfig
RUN yum install -y net-tools
# Elasticsearch install
ENV ELASTIC_VERSION=5.4.0
RUN rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
RUN wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ELASTIC_VERSION}.rpm
RUN rpm -ivh elasticsearch-${ELASTIC_VERSION}.rpm
CMD ["/usr/sbin/init"]
and I ran it with the command:
docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro --name=elastic2 elastic2
First, thanks to @Robert.
I did not think of it that way.
All I had to do was edit my CMD command.
Change it to:
CMD ["elasticsearch"]
However, you still have to do some chores to access it from the browser; refer to this elasticsearch forum post.
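Those chores are mostly networking: Elasticsearch serves HTTP on port 9200, and 5.x binds to localhost inside the container by default. A hedged sketch of the run command (settings are illustrative):

```shell
# Publish the HTTP port so the host browser can reach the container
docker run -ti -p 9200:9200 --name=elastic2 elastic2
# Inside the container, /etc/elasticsearch/elasticsearch.yml may also need
#   network.host: 0.0.0.0
# so Elasticsearch listens beyond the container's loopback interface.
```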
You could follow the commands for a systemd-enabled OS if you replace the normal systemctl command. That's how I install elasticsearch in a CentOS docker container.
See "docker-systemctl-replacement" for the details.

Docker network does not work with bash entrypoint

First, we have a Docker network like so:
docker network create cdt-net
Then I have this bash script which will start a selenium server:
cd $(dirname "$0")
./node_modules/.bin/webdriver-manager update
./node_modules/.bin/webdriver-manager start
The above bash script is called by this Dockerfile:
FROM openjdk:latest
RUN apt-get update && \
apt-get -y install sudo
RUN mkdir -p /root/cdt-webdriver
WORKDIR /root/cdt-webdriver
COPY start-selenium-server.sh .
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
I would build it like so:
docker build -t cdt-selenium .
and then run it like so:
docker run --network=cdt-net --name cdt-selenium -d cdt-selenium
the problem that I am having, is that even though everything is clean with no errors, other processes in the same Docker network cannot talk to the selenium server.
On the other hand, if I create a selenium server using a pre-existing image, like so:
docker run -d --network=cdt-net --name cdt-selenium selenium/standalone-firefox:3.4.0-chromium
then things are working as expected, and I can connect to the selenium server from other processes in the Docker network.
Anyone know what might be wrong with my bash script or Dockerfile? Perhaps my manually created Selenium server is not listening on the right host?
Here is the complete Dockerfile for reference:
FROM openjdk:latest
RUN apt-get update && \
apt-get -y install sudo
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y apt-utils
RUN sudo apt-get -y update
RUN sudo apt-get -y upgrade
RUN sudo apt-get purge nodejs npm
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -y nodejs
RUN echo "before nodejs => $(which nodejs)"
RUN echo "before npm => $(which npm)"
RUN sudo ln -s `which nodejs` /usr/bin/node || echo "ignore error"
RUN mkdir -p /root/cdt-webdriver
WORKDIR /root/cdt-webdriver
COPY start-selenium-server.sh .
RUN rm -rf node_modules > /dev/null 2>&1
RUN npm init -f || echo "ignore non-zero exit code" > /dev/null 2>&1
RUN npm install webdriver-manager > /dev/null 2>&1
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
You should use -d only when your docker image runs fine. Before that, use -it.
Change you webdriver-manager to a global install
RUN npm install -g webdriver-manager > /dev/null 2>&1
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
Also change your start-selenium-server.sh to
webdriver-manager update
webdriver-manager start
And use below to run and check if there are any issues
docker run --network=cdt-net --name cdt-selenium -it cdt-selenium

Why does "docker run" error with "no such file or directory"?

I am trying to run a container which runs an automated build. Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER pmandayam
# update dpkg repositories
RUN apt-get update
# install wget
RUN apt-get install -y wget
# get maven 3.2.2
RUN wget --no-verbose -O /tmp/apache-maven-3.2.2.tar.gz http://archive.apache.org/dist/maven/maven-3/3.2.2/binaries/apache-maven-3.2.2-bin.tar.gz
# verify checksum
RUN echo "87e5cc81bc4ab9b83986b3e77e6b3095 /tmp/apache-maven-3.2.2.tar.gz" | md5sum -c
# install maven
RUN tar xzf /tmp/apache-maven-3.2.2.tar.gz -C /opt/
RUN ln -s /opt/apache-maven-3.2.2 /opt/maven
RUN ln -s /opt/maven/bin/mvn /usr/local/bin
RUN rm -f /tmp/apache-maven-3.2.2.tar.gz
ENV MAVEN_HOME /opt/maven
# remove download archive files
RUN apt-get clean
# set shell variables for java installation
ENV java_version 1.8.0_11
ENV filename jdk-8u11-linux-x64.tar.gz
ENV downloadlink http://download.oracle.com/otn-pub/java/jdk/8u11-b12/$filename
# download java, accepting the license agreement
RUN wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" -O /tmp/$filename $downloadlink
# unpack java
RUN mkdir /opt/java-oracle && tar -zxf /tmp/$filename -C /opt/java-oracle/
ENV JAVA_HOME /opt/java-oracle/jdk$java_version
ENV PATH $JAVA_HOME/bin:$PATH
# configure symbolic links for the java and javac executables
RUN update-alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 20000 && update-alternatives --install /usr/bin/javac javac $JAVA_HOME/bin/javac 20000
# install mongodb
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list && \
    apt-get update && \
    apt-get --allow-unauthenticated install -y mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-mongos mongodb-org-tools && \
    echo "mongodb-org hold" | dpkg --set-selections && \
    echo "mongodb-org-server hold" | dpkg --set-selections && \
    echo "mongodb-org-shell hold" | dpkg --set-selections && \
    echo "mongodb-org-mongos hold" | dpkg --set-selections && \
    echo "mongodb-org-tools hold" | dpkg --set-selections
RUN mkdir -p /data/db
VOLUME /data/db
EXPOSE 27017
COPY build-script /build-script
CMD ["/build-script"]
I can build the image successfully but when I try to run the container I get this error:
$ docker run mybuild
no such file or directory
Error response from daemon: Cannot start container 3e8aa828909afcd8fb82b5a5ac89497a537bef2b930b71a5d20a1b98d6cc1dd6: [8] System error: no such file or directory
What does 'no such file or directory' mean here?
Here is my simple script:
#!/bin/bash
sudo service mongod start
mvn clean verify
sudo service mongod stop
I copy it like this: COPY build-script /build-script
and run it like this: CMD ["/build-script"]. Not sure why it's not working.
Using service isn't going to fly - the Docker base images are minimal and don't support this. If you want to run multiple processes, you can use supervisor or runit etc.
In this case, it would be simplest just to start mongo manually in the script e.g. /usr/bin/mongod & or whatever the correct incantation is.
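A minimal sketch of that build-script under this approach, assuming mongod is on the PATH and the standard /data/db data directory (paths and flags are illustrative):

```shell
#!/bin/bash
set -e
# Start mongod directly in the background instead of via `service`
mongod --dbpath /data/db --fork --logpath /var/log/mongod.log
mvn clean verify
# Shut the database down cleanly once the build finishes
mongod --dbpath /data/db --shutdown
```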
BTW the lines where you try to clean up don't have much effect:
RUN rm -f /tmp/apache-maven-3.2.2.tar.gz
...
# remove download archive files
RUN apt-get clean
These files have already been committed to a previous image layer, so doing this doesn't save any disk-space. Instead you have to delete the files in the same Dockerfile instruction in which they're added.
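Concretely, downloading, unpacking, and deleting in a single RUN instruction keeps the archive out of every layer. A sketch using the Maven steps above:

```dockerfile
# Fetch, install, and remove the archive in one layer, so the tarball
# is never committed to any image layer.
RUN wget --no-verbose -O /tmp/apache-maven-3.2.2.tar.gz \
        http://archive.apache.org/dist/maven/maven-3/3.2.2/binaries/apache-maven-3.2.2-bin.tar.gz && \
    tar xzf /tmp/apache-maven-3.2.2.tar.gz -C /opt/ && \
    ln -s /opt/apache-maven-3.2.2 /opt/maven && \
    ln -s /opt/maven/bin/mvn /usr/local/bin && \
    rm -f /tmp/apache-maven-3.2.2.tar.gz
```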
Also, I would consider changing the base image to a Java one, which would save a lot of work. However, you may have trouble finding one which bundles the official Oracle JDK rather than OpenJDK if that's a problem.
