Docker refusing to run bash

I have the following docker setup:
python27.Dockerfile
FROM python:2.7
COPY ./entrypoint.sh /entrypoint.sh
RUN mkdir /src
RUN apt-get update && apt-get install -y bash libmysqlclient-dev python-pip build-essential && pip install virtualenv
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8000
WORKDIR /src
CMD source /src/env/bin/activate && python /src/manage.py runserver
entrypoint.sh
#!/bin/bash
# some code here...
# some code here...
# some code here...
exec "$#"
Whenever I try to run my docker container I get python27 | /bin/sh: 1: source: not found.
I understand that the error comes from the command being run with sh instead of bash, but I can't understand why that is happening, given that I have the correct shebang at the top of my entrypoint script.
Any ideas why this happens and how I can fix it?

The problem is that for CMD you're using the shell form, which runs the command with /bin/sh -c, and source is a bash builtin that isn't available in POSIX /bin/sh (the equivalent POSIX builtin is just .).
One fix is to use the exec form of CMD, with brackets:
CMD ["/bin/bash", "-c", "source /src/env/bin/activate && python /src/manage.py runserver"]
More details in:
https://docs.docker.com/engine/reference/builder/#run
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint

Related

How to properly run entrypoint bash script on docker?

I would like to build a docker image for dumping large SQL Server tables into S3 using the bcp tool by combining this docker and this script. Ideally I could pass table, database, user, password and s3 path as arguments for the docker run command.
The script looks like
#!/bin/bash
TABLE_NAME=$1
DATABASE=$2
USER=$3
PASSWORD=$4
S3_PATH=$5
# read sqlserver...
# write to s3...
# .....
And the Dockerfile is:
# SQL Server Command Line Tools
FROM ubuntu:16.04
LABEL maintainer="SQL Server Engineering Team"
# apt-get and system utilities
RUN apt-get update && apt-get install -y \
    curl apt-transport-https debconf-utils \
    && rm -rf /var/lib/apt/lists/*
# adding custom MS repository
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
# install SQL Server drivers and tools
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql mssql-tools awscli
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
RUN /bin/bash -c "source ~/.bashrc"
ADD ./sql2sss.sh /opt/mssql-tools/bin/sql2sss.sh
RUN chmod +x /opt/mssql-tools/bin/sql2sss.sh
RUN apt-get -y install locales
RUN locale-gen en_US.UTF-8
RUN update-locale LANG=en_US.UTF-8
ENTRYPOINT ["/opt/mssql-tools/bin/sql2sss.sh", "DB.dbo.TABLE", "SQLSERVERDB", "USER", "PASSWORD", "S3PATH"]
If I replace the ENTRYPOINT with CMD /bin/bash and run the image with -it, I can manually run sql2sss.sh and it works properly, reading from SQL Server and writing to S3. However, if I use the ENTRYPOINT as shown, it yields bcp: command not found.
I also noticed that if I use CMD /bin/sh in interactive mode, it produces the same error. Am I missing some configuration for the ENTRYPOINT to run the script properly?
Have you tried
ENV PATH="/opt/mssql-tools/bin:${PATH}"
instead of exporting it in .bashrc?
As David Maze pointed out, Docker doesn't read dot files such as ~/.bashrc, so the RUN /bin/bash -c "source ~/.bashrc" step has no lasting effect. Put your environment definitions in the ENV instruction instead.
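A minimal sketch of the change in the Dockerfile above (the two RUN lines only configure interactive bash shells, which later build steps and the ENTRYPOINT never read):
# Remove these two lines:
# RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
# RUN /bin/bash -c "source ~/.bashrc"
# Replace them with an image-level PATH entry:
ENV PATH="/opt/mssql-tools/bin:${PATH}"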

Dockerfile: Bash Script executing flawlessly with ENTRYPOINT, but not with RUN

Project
I am building a docker-compose file for a simple devops stack. One of the tools is Helix Core by Perforce. I am trying to build an Ubuntu Dockerfile that will install Helix Core and then run it. I have already written a bash script install.sh that, when used like this
FROM ubuntu:20.04
COPY ./install.sh /install.sh
ENTRYPOINT["/bin/bash", "/install.sh"]
will work flawlessly.
Breaking Change
The problem is that I need the script to run as a setup step and not every time the container is started. So I tried the following:
FROM ubuntu:20.04
COPY ./install.sh /install.sh
RUN chmod +x /install.sh
SHELL ["/bin/bash", "-c"]
RUN /install.sh
ENTRYPOINT [ "p4d" ]
Problem
Now firstly I do not get any descriptive output in the console. The only thing I get is the default building output.
...
=> CACHED [2/4] COPY ./install.sh /install.sh 0.0s
=> CACHED [3/4] RUN chmod +x /install.sh 0.0s
=> CACHED [4/4] RUN /install.sh 0.0s
=> exporting to image
...
Secondly the script does not seem to execute or fails immediately. (It should take much longer than it does.) Here is the script, it does work in the first Dockerfile, just not in the second.
#!/bin/bash
service_name="${service_name:="master"}"
p4root="${p4root:="/opt/perforce/servers/$service_name"}"
unicode_mode="${unicode_mode:=0}"
case_sensitive="${case_sensitive:=0}"
p4port="${p4port:="1666"}"
super_user_login="${super_user_login:="super"}"
if [ -z "$super_user_password" ]
then
echo "Install aborted!"
echo "Please set 'super_user_password' via environment variable!"
exit
fi
echo "Installing Helix Core..."
echo "Updating Ubuntu..."
apt-get update -y
echo "Installing utilities..."
apt-get install ca-certificates wget gpg curl -y
echo "Downloading public key..."
curl https://package.perforce.com/perforce.pubkey > perforce.pubkey
echo "Adding public key..."
gpg init
gpg -n --import --import-options import-show perforce.pubkey
rm perforce.pubkey
echo "Adding perforce packaging key to keyring..."
wget -qO - https://package.perforce.com/perforce.pubkey | apt-key add -
echo "Adding perforce repository to APT configuration..."
echo "deb http://package.perforce.com/apt/ubuntu focal release" > /etc/apt/sources.list.d/perforce.list
echo "Updating Ubuntu..."
apt-get update -y
echo "Installing..."
apt-get install helix-p4d -y
echo "Install complete! Writing config file..."
/opt/perforce/sbin/configure-helix-p4d.sh $service_name -n -p $p4port -r $p4root -u $super_user_login -P $super_user_password
p4 admin stop
Extra Information
The Dockerfile is being built with docker-compose, but I have already tried out docker build with no success.
My Thoughts
It is my understanding that the only relevant difference between RUN and ENTRYPOINT is that RUN executes only once in the lifecycle of a container (during the build phase), while ENTRYPOINT defines the executable that is started together with the container every time it starts. So I assumed the environment the script is called in is the same.
Any ideas to why this behavior occurs and how to fix it are appreciated.
So thanks to @DavidMaze I took another look at something I overlooked before.
You can get output from the build command using --progress (see: Why is docker build not showing any output from commands?)
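For example, with BuildKit:
docker build --progress=plain .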
Problem 1 solved!
From there I found:
#6 [4/4] RUN /install.sh
#6 sha256:26be9fa6818fd9e3a0fb64e95a83b816de790902b1f54da1c123ff19888873ff
#6 0.344 Install aborted!
#6 0.344 Please set 'super_user_password' via environment variable!
#6 DONE 0.4s
Problem 2 identified!
Now... the actual problem is that:
Environment variables and build arguments are two different things.
You want environment variables (ENV) for the execution environment and build arguments (ARG) for the build environment. This webpage explained it to me.
My mistake was trying to use environment variables for the build environment.
Modified Dockerfile (note that the exec form of RUN does not expand variables, so the password is declared as a build argument and passed with a shell-form RUN):
FROM ubuntu:20.04
COPY ./install.sh /install.sh
SHELL ["/bin/bash", "-c"]
RUN chmod +x /install.sh
ARG super_user_password
RUN /install.sh "$super_user_password"
ENTRYPOINT [ "p4d" ]
and I had to add this line to my script:
super_user_password="$1"
And finally... it works!
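For reference, the image would then be built with something like this (the tag and password value are placeholders):
docker build --build-arg super_user_password=changeme -t helix-core .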

Docker gives `no such file or directory` on `docker run <image_id>`

So I've just created my very first docker image (woohoo) and was able to run it on the original host system where it was created (Ubuntu 20.04 Desktop PC). The image was executed using docker run -it <image_id>. The expected command (defined in CMD which is just a bash script) was run, and the expected output was seen. I assumed this meant I successfully created my very first docker image and so I pushed this to Docker Hub.
Docker Hub
GitHub repo with original docker-compose.yml and Dockerfile
Here's the Dockerfile:
FROM ubuntu:20.04
# Required for Debian interaction
# (https://stackoverflow.com/questions/62299928/r-installation-in-docker-gets-stuck-in-geographic-area)
ENV DEBIAN_FRONTEND noninteractive
WORKDIR /home/benchmarking-programming-languages
# Install pre-requisites
# Versions at time of writing:
# gcc -- version (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
# make -- GNU Make 4.2.1
# curl -- 7.68.0
RUN apt update && apt install make build-essential curl wget tar -y
# Install `column`
RUN wget https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/v2.35/util-linux-2.35-rc1.tar.gz
RUN tar xfz util-linux-2.35-rc1.tar.gz
WORKDIR /home/benchmarking-programming-languages/util-linux-2.35-rc1
RUN ./configure
RUN make column
RUN cp .libs/column /bin/
WORKDIR /home/benchmarking-programming-languages
RUN rm -rf util-linux-2.35-rc1*
RUN apt install python3 pip -y
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN apt install default-jdk-headless -y
RUN apt install rustc -y
# Install GoLang
RUN wget https://go.dev/dl/go1.17.8.linux-amd64.tar.gz
RUN rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.8.linux-amd64.tar.gz
ENV PATH="/usr/local/go/bin:${PATH}"
# Install Haxe and Haxelib
RUN wget https://github.com/HaxeFoundation/haxe/releases/download/4.2.5/haxe-4.2.5-linux64.tar.gz
RUN tar xfz haxe-4.2.5-linux64.tar.gz
RUN ln -s /home/benchmarking-programming-languages/haxe_20220306074705_e5eec31/haxe /usr/bin/haxe
RUN ln -s /home/benchmarking-programming-languages/haxe_20220306074705_e5eec31/haxelib /usr/bin/haxelib
# # Install Neko (Haxe VM)
# RUN add-apt-repository ppa:haxe/snapshots -y
# RUN apt update
# RUN apt install neko -y
RUN if ! test -d /home/benchmarking-programming-languages; then mkdir /home/benchmarking-programming-languages && echo "Created directory /home/benchmarking-programming-languages."; fi
COPY . /home/benchmarking-programming-languages
RUN pip install -r /home/benchmarking-programming-languages/requirements_dev.txt
CMD [ "/home/benchmarking-programming-languages/benchmark.sh -v" ]
However, upon pulling the same image on my Windows 10 machine (the same machine as above, just dual booted) and on a Windows 11 laptop, using both the Docker Desktop application and the command line (docker pull mariosyian/benchmarking-programming-languages followed by docker run -it <image_id>), I get the following error both ways:
Error invoking remote method 'docker-run-container': Error: (HTTP code 400) unexpected - failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/home/benchmarking-programming-languages/benchmark.sh -v": stat /home/benchmarking-programming-languages/benchmark.sh -v: no such file or directory: unknown
Despite this, when running the image as a container with a shell (docker run -it <image_id> sh), I am not only able to see the file, but can execute it with no errors! Can someone suggest a reason why the error happens in the first place, and how to fix it?
In your Dockerfile you have specified the CMD as
CMD [ "/home/benchmarking-programming-languages/benchmark.sh -v" ]
This uses the JSON (exec) syntax of the CMD instruction, i.e. an array of strings where the first string is the executable and each following string is a parameter to that executable.
Since you only specified a single string, Docker tries to invoke the executable /home/benchmarking-programming-languages/benchmark.sh -v, i.e. a file whose name contains a space and ends with -v. What you actually intended was to invoke the benchmark.sh script with the -v parameter.
You can do this by correctly specifying the parameter(s) as separate strings:
CMD ["/home/benchmarking-programming-languages/benchmark.sh", "-v"]
or by using the shell syntax:
CMD /home/benchmarking-programming-languages/benchmark.sh -v
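The shell syntax works here because Docker implicitly wraps it in a shell; it is equivalent to:
CMD ["/bin/sh", "-c", "/home/benchmarking-programming-languages/benchmark.sh -v"]
The shell then performs the word splitting, so benchmark.sh receives -v as its first argument.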

Running shell script while building docker image

I have a custom package with an install.sh script, which I want to run while building a Docker image (meaning: invoke ./install.sh inside the Dockerfile). I could have run it along with the container, but I want an image that already contains the required packages (which are installed by the install script).
What I tried:
RUN /bin/sh/ -c "./install.sh"
RUN ./install.sh
It errors out saying -
/bin/sh install.sh not found
or
/bin/sh ./install.sh not found
This might be a repeated question, but I haven't found an answer to this anywhere. Any help would be appreciated.
You must copy your install.sh into the Docker image with this command in your Dockerfile:
COPY install.sh /tmp
Then use your RUN command to run it:
RUN /bin/sh -c "/tmp/install.sh"
or
RUN sh /tmp/install.sh
Don't forget to make install.sh executable before running it:
RUN chmod +x /tmp/install.sh
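Putting the pieces together, a minimal sketch (the base image is a placeholder; install.sh is assumed to sit next to the Dockerfile and have a valid shebang line):
FROM ubuntu:20.04
COPY install.sh /tmp/install.sh
RUN chmod +x /tmp/install.sh && /tmp/install.sh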

command not found even when "which" shows its path with sudo

I'm on Fedora release 25 with zsh 5.2
I am trying to use a command with sudo. (In this example, docker-compose)
Problem:
The which command shows where it is:
$ sudo PATH=$PATH which docker-compose
/usr/local/bin/docker-compose
In spite of that, the command is not found:
$ sudo PATH=$PATH docker-compose
sudo: docker-compose: command not found
I could make it work by sudo `which docker-compose` but I want to know why this occurs.
What I tried:
I double-quoted PATH=$PATH but got the same result.
$ sudo "PATH=$PATH" docker-compose
sudo: docker-compose: command not found
/usr/local/bin/ is not on root's path. Check with:
sudo bash -c 'echo "$PATH"'
/usr/sbin:/usr/bin:/sbin:/bin
The PATH=$PATH assignment doesn't help because sudo locates the command using its own path, typically the secure_path setting from /etc/sudoers, before the environment assignment takes effect; the assignment only alters the environment of the command once it has been found. That is why which (itself in /usr/bin) runs and can search the PATH you passed it, while sudo still can't locate docker-compose.
Use an absolute path to the command. Adding /usr/local/bin to root's path seems to be a security risk.
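For example (the up -d subcommand is just an illustration):
# invoke the command by its absolute path
sudo /usr/local/bin/docker-compose up -d
# or resolve the path first, as in the question's workaround
sudo "$(which docker-compose)" up -d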
