Hi, I'm trying to run the following Dockerfile
FROM ubuntu:20.04
ADD install /
RUN chmod u+x /install
RUN /install
ENV PATH /root/miniconda3/bin:$PATH
CMD ["ipython"]
combined with this bash script
#!/bin/bash
apt-get update
apt-get upgrade -y
apt-get install -y bzip2 gcc git htop screen vim wget
apt-get upgrade -y bash
apt-get clean
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O Miniconda.sh
bash Miniconda.sh -b
rm -rf Miniconda.sh
export PATH="/root/miniconda3/bin:$PATH"
conda update -y conda python
conda install -y pandas
conda install -y ipython
The Dockerfile and the bash script are in the same folder; I'm really not sure what I'm doing wrong here. This is the error I'm getting:
$ docker build -t py4fi:basic .
[+] Building 0.5s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 31B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubu 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 434B 0.0s
=> [1/4] FROM docker.io/library/ubuntu:20.04 0.0s
=> CACHED [2/4] ADD install / 0.0s
=> CACHED [3/4] RUN chmod u+x /install 0.0s
=> ERROR [4/4] RUN /install 0.4s
------
> [4/4] RUN /install:
#8 0.416 /bin/sh: 1: /install: not found
------
executor failed running [/bin/sh -c /install]: exit code: 127
Any ideas what I'm doing wrong here?
TL;DR
On all platforms, even containerized, Bash scripts must have Unix-style line endings (LF), except in some cases on Windows, such as with sufficiently recent versions of Git Bash.
Details
I just reproduced your exact error message by saving the install file with CRLF line endings on my computer.
On Windows, Git Bash is patched to tolerate Windows-style CRLF line endings, but on all other platforms Bash only accepts Unix-style LF line endings. When you run install in the docker build, it uses the Bash distributed with your base image, which ends up looking for (and not finding) the interpreter /bin/bash followed by a stray carriage return (0x0D) instead of /bin/bash.
The solution is simple: convert the line endings of the install file to LF. (The Dockerfile itself can have either line ending.) In VSCode, for example, the status bar shows CRLF at its right-hand end; clicking it lets you switch to LF and save the file again with Unix line endings.
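If you prefer the command line, the stray carriage returns can be stripped with sed (dos2unix also works, where installed). The snippet below reproduces the failure with a hypothetical demo.sh and then fixes it:

```shell
# A script saved with CRLF endings: the kernel looks for the
# interpreter "/bin/bash<CR>", which does not exist
printf '#!/bin/bash\r\necho hello\r\n' > demo.sh
chmod +x demo.sh
./demo.sh || true          # fails: "not found" or "bad interpreter"

# Strip the trailing carriage returns and try again
sed -i 's/\r$//' demo.sh
./demo.sh                  # prints "hello"
```

(On macOS, BSD sed wants `sed -i ''` rather than `sed -i`.)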
Related
I have this Dockerfile (steps based on the installation guide from AWS)
FROM amazon/aws-cli:latest
RUN yum install python37 -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py --user
RUN pip3 install awsebcli --upgrade --user
RUN echo 'export PATH=~/.local/bin:$PATH' >> ~/.bashrc
RUN source ~/.bashrc
ENTRYPOINT ["/bin/bash"]
When I build the image with docker build -t eb-cli . and then run eb --version inside the container (docker run -it eb-cli), everything works:
bash-4.2# eb --version
EB CLI 3.20.3 (Python 3.7.1)
But, when I run the command directly as docker run -it eb-cli eb --version, it gives me this error
/bin/bash: eb: No such file or directory
I think the problem is with bash profiles, but I can't figure it out.
Your sourced .bashrc only takes effect in the layer where it was sourced; it won't apply to the resulting container. This is explained more thoroughly in this answer:
Each command runs a separate sub-shell, so the environment variables are not preserved and .bashrc is not sourced
Source: https://stackoverflow.com/a/55213158/2123530
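The effect is easy to reproduce outside Docker. In the sketch below, each sh -c stands in for a separate RUN instruction; state set in one shell does not survive into the next:

```shell
# Each RUN executes in a fresh shell (and a fresh layer), so exported
# variables and sourced files do not carry over to later instructions
sh -c 'export FOO=from_previous_run'   # "RUN" number one sets a variable...
sh -c 'echo "FOO is: ${FOO:-unset}"'   # ...but "RUN" number two prints "FOO is: unset"
```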
A solution for you would be to set the PATH in an environment variable of the container instead, and to blank the ENTRYPOINT set by your base image.
So you could end with an image as simple as:
FROM amazon/aws-cli:latest
ENV PATH="/root/.local/bin:${PATH}"
RUN yum install python37 -y \
&& pip3 install awsebcli
ENTRYPOINT []
With this Dockerfile, here is the resulting build and run:
$ docker build . -t eb-cli -q
sha256:49c376d98fc2b35cf121b43dbaa96caf9e775b0cd236c1b76932e25c60b231bc
$ docker run eb-cli eb --version
EB CLI 3.20.3 (Python 3.7.1)
Notes:
you can install the very latest version of pip, as you did, but it is not needed: pip is already bundled with the python37 package
installing packages for the user with the --user flag is indeed good practice, but since you are running this command as root, there is no real point in doing so here
the --upgrade flag does not make much sense either, as the package won't have been installed beforehand; upgrading it would be as simple as rebuilding the image
reducing the number of layers in an image by reducing the number of RUN instructions in your Dockerfile is an advisable practice that you can find in the Dockerfile best practices
Trying to build a docker image that runs a prerequisites installation script from inside the Dockerfile fails when fetching packages via apt-get from archive.ubuntu.com.
Using the apt-get command directly in the Dockerfile works flawlessly, despite being behind a corporate proxy, which is set up via ENV commands in the Dockerfile.
Anyway, executing the apt-get command from a bash script in a terminal inside the resulting docker container, or as a "postCreateCommand" in a devcontainer.json of Visual Studio Code, also works as expected. But in my case it won't work when the same bash script is invoked from inside the Dockerfile.
It simply reports:
Starting installation of package iproute2
Reading package lists...
Building dependency tree...
The following additional packages will be installed:
libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
Suggested packages:
iproute2-doc
The following NEW packages will be installed:
iproute2 libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 971 kB of archives.
After this operation, 3,287 kB of additional disk space will be used.
Err:1 http://archive.ubuntu.com/ubuntu focal/main amd64 libcap2 amd64 1:2.32-1
Could not resolve 'archive.ubuntu.com'
... more output ...
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/libc/libcap2/libcap2_2.32-1_amd64.deb Could not resolve 'archive.ubuntu.com'
... more output ...
Just for example a snippet of the Dockerfile looks like this:
FROM ubuntu:20.04 as builderImage
USER root
ARG HTTP_PROXY_HOST_IP='http://172.17.0.1'
ARG HTTP_PROXY_HOST_PORT='3128'
ARG HTTP_PROXY_HOST_ADDR=$HTTP_PROXY_HOST_IP':'$HTTP_PROXY_HOST_PORT
ENV http_proxy=$HTTP_PROXY_HOST_ADDR
ENV https_proxy=$http_proxy
ENV HTTP_PROXY=$http_proxy
ENV HTTPS_PROXY=$http_proxy
ENV ftp_proxy=$http_proxy
ENV FTP_PROXY=$http_proxy
# it is always helpful sorting packages alpha-numerically to keep the overview ;)
RUN apt-get update && \
apt-get -y upgrade && \
apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
&& \
apt-get -y install \
default-jdk \
git \
python3 python3-pip
SHELL ["/bin/bash", "-c"]
ADD ./env-setup.sh .
RUN chmod +x env-setup.sh && ./env-setup.sh
CMD ["bash"]
The minimal version of the environment script env-setup.sh, which is supposed to be invoked by the Dockerfile, would look like this:
#!/bin/bash
packageCommand="apt-get";
sudo $packageCommand update;
packageInstallCommand="$packageCommand install";
package="iproute2"
packageInstallCommand+=" -y";
sudo $packageInstallCommand $package;
Of course, the variables are used so that the packages to install can be kept in a list, among other things.
Hopefully that covers everything essential to the question:
Why does apt-get work in a RUN instruction, and also when running the bash script inside the container after it is created, but not from the very same bash script while the image is being built from the Dockerfile?
I was hoping to find the answer with the help of an extensive web search, but unfortunately I was able to find everything but an answer to this case.
As pointed out in the comment section underneath the question:
using sudo to launch the command wipes out all the vars set in the current environment, more specifically your proxy settings
So that is the case.
The solution is either to remove sudo from the bash script and invoke the script as root inside the Dockerfile,
or to keep sudo and preserve the ENV variables by applying sudo -E.
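The stripping behaviour can be illustrated without root: env -i starts a command with a scrubbed environment, much like a default sudo configuration does, while a plain invocation (the sudo -E case) inherits it:

```shell
# Proxy value taken from the question's Dockerfile
export http_proxy='http://172.17.0.1:3128'

# Scrubbed environment (what a default sudo does): the proxy is gone
env -i sh -c 'echo "proxy: ${http_proxy:-unset}"'   # prints "proxy: unset"

# Inherited environment (the sudo -E behaviour): the proxy survives
sh -c 'echo "proxy: ${http_proxy:-unset}"'          # prints "proxy: http://172.17.0.1:3128"
```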
I'm trying to create a docker image on macOS using the command below. Note that docker build cannot run any command; I tried it with npm --version, and it couldn't find npm either.
% docker build -<<EOF
# syntax=docker/dockerfile:1
FROM ubuntu:20.04
COPY . /app
RUN make /app
CMD node /app/index.js
EOF
Here's the error message I'm getting:
[output clipped...]
=> ERROR [3/3] RUN make /app
------
> [3/3] RUN make /app:
#11 0.200 /bin/sh: 1: make: not found
------
executor failed running [/bin/sh -c make /app]: exit code: 127
However, make is already installed on the host, via xcode-select:
% /bin/sh -c make /app
make: *** No targets specified and no makefile found. Stop.
Can anyone help me here please? Thanks!
Docker can't use software installed on the host. You need to install make in your Dockerfile, like this:
% docker build -<<EOF
# syntax=docker/dockerfile:1
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y build-essential
COPY . /app
RUN make /app
CMD node /app/index.js
EOF
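As an aside, the exit code 127 in the build log is the shell's generic "command not found" status, which you can reproduce in any POSIX shell:

```shell
# 127 is returned whenever the shell cannot find the command at all
code=0
sh -c 'some-command-that-does-not-exist' 2>/dev/null || code=$?
echo "exit code: ${code}"   # prints "exit code: 127"
```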
My Dockerfile won't run my entrypoint automatically.
Dockerfile:
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
RUN apt update
RUN apt --yes --force-yes install libssl1.1
RUN apt --yes --force-yes install libpulse0
RUN apt --yes --force-yes install libasound2
RUN apt --yes --force-yes install libicu63
RUN apt --yes --force-yes install libpcre2-16-0
RUN apt --yes --force-yes install libdouble-conversion1
RUN apt --yes --force-yes install libglib2.0-0
RUN apt --yes --force-yes install telnet
RUN apt --yes --force-yes install pulseaudio
RUN apt --yes --force-yes install libasound2-dev
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["SDKStandalone/SDKStandalone.csproj", "SDKStandalone/"]
RUN dotnet restore "SDKStandalone/SDKStandalone.csproj"
COPY . .
WORKDIR "/src/SDKStandalone"
RUN dotnet build "SDKStandalone.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "SDKStandalone.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
RUN chmod +x /app/SDKContainer/startup.sh
ENTRYPOINT ["/app/SDKContainer/startup.sh"]
What also doesn't work is if I change the last line to:
ENTRYPOINT ["/bin/bash", "/app/SDKContainer/mySDK"]
My Startup file contains:
#!/bin/bash
/app/SDKContainer/mySDK &
What does work, is if I open bash from the running container, and do either:
chmod +x /app/SDKContainer/startup.sh
/app/SDKContainer/startup.sh
Or simply
/app/SDKContainer/mySDK
Both of those work fine, but I need my SDK to run automatically on container start; I do not want to start it manually. I don't know if it matters, but for completeness: I am debugging in Visual Studio 2019, the containers run through a Docker Compose YML, and I have selected 'do not debug'.
Docker compose
version: '3.4'
services:
myproject.server:
image: ${DOCKER_REGISTRY-}myserver
build:
context: .
dockerfile: Server/Dockerfile
sdkstandalone:
image: ${DOCKER_REGISTRY-}sdkstandalone
container_name: sdk1
build:
context: .
dockerfile: SDKStandalone/Dockerfile
sdkstandalone2:
image: ${DOCKER_REGISTRY-}sdkstandalone
container_name: sdk2
build:
context: .
dockerfile: SDKStandalone/Dockerfile
launchSettings.json
{
"profiles": {
"Docker Compose": {
"commandName": "DockerCompose",
"serviceActions": {
"sdkstandalone": "StartWithoutDebugging",
"myproject.server": "StartDebugging",
"sdkstandalone2": "StartWithoutDebugging"
},
"commandVersion": "1.0"
}
}
}
The container exits when the entry point process terminates. You have ensured that it terminates immediately. Take out the & to run the process in the foreground instead; this will keep your Docker image alive until the job finishes. This is a very common Docker FAQ.
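The difference is easy to see with sleep standing in for the mySDK process: backgrounded, the wrapper script (and with it the container's PID 1) exits immediately; in the foreground, it lives as long as the job does.

```shell
# Backgrounded, like the original startup.sh: the wrapper returns at once,
# so a container using it as the entry point would exit immediately
sh -c 'sleep 1 &'

# Foreground, with the & removed: the wrapper's lifetime matches the job's
start=$(date +%s)
sh -c 'sleep 1'
echo "foreground wrapper ran for about $(( $(date +%s) - start ))s"
```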
Unless your parent image was specifically designed this way, you should probably use CMD, not ENTRYPOINT.
As a further aside, apt can install multiple packages in one go. Your long list of RUN commands near the beginning of your Dockerfile can be reduced to just two commands, and run significantly quicker.
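For instance (a sketch, with the package list copied from the question), the first stage could shrink to:

```dockerfile
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
RUN apt update && apt install --yes \
    libssl1.1 libpulse0 libasound2 libasound2-dev libicu63 \
    libpcre2-16-0 libdouble-conversion1 libglib2.0-0 telnet pulseaudio
WORKDIR /app
```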
The issue was in Visual Studio debugging itself. When it runs without debugging it doesn't work, but running the docker-compose directly from my command line without Visual Studio works absolutely fine. I will mark this as the correct answer since it solved my issue, but upvoted #triplee for the good advice and best practices.
This problem was killing me. After spending some hours looking for a resolution, I found this forum thread: Debugging docker compose. VS can't attach to containers
In my case, I had updated my VS, and there is a problem with Docker Compose v2. They're going to release a fix soon.
For now, disable version 2, then restart Docker and VS. It worked for me.
Command to check current version: docker-compose --version
Command to come back to the previous version: docker-compose disable-v2
Hope it helps anyone with a similar issue.
My dockerfile is as below:
FROM bash:4.4
COPY prerequisites_ubuntu.sh /temp/prerequisites_ubuntu.sh
RUN /temp/prerequisites_ubuntu.sh
prerequisites_ubuntu.sh :
FROM ubuntu:latest
apt-get update
apt-get install -y coreutils git-core ssh scons build-essential g++ libglib2.0-dev unzip uuid-dev python-dev autotools-dev gcc libjansson-dev cmake
When I do docker build "docker build --rm --no-cache -t my_image ."
It gives error as
/temp/prerequisites_ubuntu.sh: line 1: FROM: not found
/temp/prerequisites_ubuntu.sh: line 3: apt-get: not found
/temp/prerequisites_ubuntu.sh: line 4: apt-get: not found
The prerequisites_ubuntu.sh file will change for RaspberryPI or other platform
There are a couple of issues with the prerequisites_ubuntu.sh file. First of all, it is not a valid shell script: it is missing a shebang (which specifies which shell should execute the script), and the FROM statement is part of the Dockerfile spec, not of shell scripts (which is why you get FROM: not found as an error). On top of that, the bash image is based on Alpine Linux, which does not use apt-get; it uses apk add. Once you add a shebang, remove the FROM statement, and change the script to use apk add, it should work.
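Since the script is meant to vary per platform (Ubuntu, Raspberry Pi, the Alpine-based bash image), one option is to detect the available package manager at run time. This is only a sketch; the package names themselves also differ between Alpine and Ubuntu and would still need mapping:

```shell
#!/bin/sh
# Pick the package manager present on this platform
if command -v apk >/dev/null 2>&1; then
    install="apk add --no-cache"     # Alpine (the base of the bash image)
elif command -v apt-get >/dev/null 2>&1; then
    install="apt-get install -y"     # Ubuntu / Debian / Raspberry Pi OS
else
    echo "no supported package manager found" >&2
    exit 1
fi
echo "would install with: $install"
```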