Dockerfile not working - running the command manually works, but the same command through RUN or ENTRYPOINT doesn't work

My Dockerfile won't run my entrypoint automatically.
Dockerfile:
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
RUN apt update
RUN apt --yes --force-yes install libssl1.1
RUN apt --yes --force-yes install libpulse0
RUN apt --yes --force-yes install libasound2
RUN apt --yes --force-yes install libicu63
RUN apt --yes --force-yes install libpcre2-16-0
RUN apt --yes --force-yes install libdouble-conversion1
RUN apt --yes --force-yes install libglib2.0-0
RUN apt --yes --force-yes install telnet
RUN apt --yes --force-yes install pulseaudio
RUN apt --yes --force-yes install libasound2-dev
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["SDKStandalone/SDKStandalone.csproj", "SDKStandalone/"]
RUN dotnet restore "SDKStandalone/SDKStandalone.csproj"
COPY . .
WORKDIR "/src/SDKStandalone"
RUN dotnet build "SDKStandalone.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "SDKStandalone.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
RUN chmod +x /app/SDKContainer/startup.sh
ENTRYPOINT ["/app/SDKContainer/startup.sh"]
What also doesn't work is if I change the last line to:
ENTRYPOINT ["/bin/bash", "/app/SDKContainer/mySDK"]
My Startup file contains:
#!/bin/bash
/app/SDKContainer/mySDK &
What does work, is if I open bash from the running container, and do either:
chmod +x /app/SDKContainer/startup.sh
/app/SDKContainer/startup.sh
Or simply
/app/SDKContainer/mySDK
Both of those work fine, but I need my SDK to run automatically on container start; I don't want to start it manually. I don't know if it matters, but for completeness: I am debugging in Visual Studio 2019, the containers run through a Docker Compose YML, and I have selected 'do not debug'.
Docker compose
version: '3.4'
services:
myproject.server:
image: ${DOCKER_REGISTRY-}myserver
build:
context: .
dockerfile: Server/Dockerfile
sdkstandalone:
image: ${DOCKER_REGISTRY-}sdkstandalone
container_name: sdk1
build:
context: .
dockerfile: SDKStandalone/Dockerfile
sdkstandalone2:
image: ${DOCKER_REGISTRY-}sdkstandalone
container_name: sdk2
build:
context: .
dockerfile: SDKStandalone/Dockerfile
launchSettings.json
{
"profiles": {
"Docker Compose": {
"commandName": "DockerCompose",
"serviceActions": {
"sdkstandalone": "StartWithoutDebugging",
"myproject.server": "StartDebugging",
"sdkstandalone2": "StartWithoutDebugging"
},
"commandVersion": "1.0"
}
}
}

The container exits when the entrypoint process terminates, and you have ensured that it terminates immediately. Take out the & to run the process in the foreground instead; this will keep your container alive until the job finishes. This is a very common Docker FAQ.
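The effect is easy to reproduce outside Docker. This is a minimal sketch (plain sh, hypothetical /tmp paths): a script whose only job is backgrounded returns immediately, while exec keeps the job in the foreground:

```shell
# A script that backgrounds its only job exits at once; as PID 1,
# that would stop the container immediately.
printf '#!/bin/sh\nsleep 2 &\n' > /tmp/startup_bg.sh
time sh /tmp/startup_bg.sh        # returns almost instantly

# Replacing the shell with the job keeps it in the foreground:
printf '#!/bin/sh\nexec sleep 2\n' > /tmp/startup_fg.sh
time sh /tmp/startup_fg.sh        # returns only after ~2 seconds
```

The same applies to startup.sh: dropping the & (or writing exec /app/SDKContainer/mySDK) keeps the container running as long as the SDK does.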
Unless your parent image was specifically designed this way, you should probably use CMD, not ENTRYPOINT.
As a further aside, apt can install multiple packages in one go. The long list of RUN commands near the beginning of your Dockerfile can be reduced to just two, and will run significantly quicker.
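For example, the base stage could be collapsed like this (a sketch; note that --force-yes is deprecated and -y is sufficient on current apt):

```dockerfile
FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
RUN apt-get update && apt-get install -y \
    libssl1.1 libpulse0 libasound2 libasound2-dev libicu63 \
    libpcre2-16-0 libdouble-conversion1 libglib2.0-0 \
    telnet pulseaudio \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
```

Combining update and install in one RUN also avoids a stale package index being cached in its own layer.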

The issue was in Visual Studio debugging itself. When it runs without debugging through Visual Studio it doesn't work, but running the docker-compose directly from my command line without Visual Studio works absolutely fine. I will mark this as the correct answer since it solved my issue, but I upvoted #triplee for good advice and best practices.

This problem was killing me. After spending some hours looking for a resolution, I found this forum thread: Debugging docker compose. VS can't attach to containers
In my case, I had updated my VS, and there is a known problem with Docker Compose v2. They're going to release a fix soon.
For now, disable version 2, then restart Docker and VS. It worked for me.
Command to check the current version: docker-compose --version
Command to go back to the previous version: docker-compose disable-v2
Hope this helps anyone with a similar issue.

Related

No such file or directory when executing command via docker run -it

I have this Dockerfile (steps based on installation guide from AWS)
FROM amazon/aws-cli:latest
RUN yum install python37 -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py --user
RUN pip3 install awsebcli --upgrade --user
RUN echo 'export PATH=~/.local/bin:$PATH' >> ~/.bashrc
RUN source ~/.bashrc
ENTRYPOINT ["/bin/bash"]
When I build the image with docker build -t eb-cli . and then run eb --version inside the container (docker run -it eb-cli), everything works:
bash-4.2# eb --version
EB CLI 3.20.3 (Python 3.7.1)
But, when I run the command directly as docker run -it eb-cli eb --version, it gives me this error
/bin/bash: eb: No such file or directory
I think the problem is with bash profiles, but I can't figure it out.
Your sourced .bashrc stays in the layer it was sourced in, but won't apply to the resulting container. This is actually more thoroughly explained in this answer:
Each command runs a separate sub-shell, so the environment variables are not preserved and .bashrc is not sourced
Source: https://stackoverflow.com/a/55213158/2123530
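The effect is easy to reproduce outside Docker; each RUN line is analogous to a fresh sh -c invocation (a sketch of the mechanism, not the actual builder):

```shell
# Each RUN behaves like its own sub-shell: changes die with it.
sh -c 'export PATH="$HOME/.local/bin:$PATH"'   # sub-shell exits, change lost
sh -c 'echo "$PATH"'                           # original PATH, unchanged
```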
A solution for you would be to set the PATH in an environment variable of the container instead, and blank the ENTRYPOINT set by your base image.
So you could end with an image as simple as:
FROM amazon/aws-cli:latest
ENV PATH="/root/.local/bin:${PATH}"
RUN yum install python37 -y \
&& pip3 install awsebcli
ENTRYPOINT []
With this Dockerfile, here is the resulting build and run:
$ docker build . -t eb-cli -q
sha256:49c376d98fc2b35cf121b43dbaa96caf9e775b0cd236c1b76932e25c60b231bc
$ docker run eb-cli eb --version
EB CLI 3.20.3 (Python 3.7.1)
Notes:
you can install the very latest version of pip, as you did, but it is not needed, as pip is already bundled with the python37 package
installing packages for the user with the --user flag is indeed good practice, but since you are running this command as root, there is no real point in doing so here
the --upgrade flag does not make much sense here either, as the package won't be installed beforehand; upgrading it would be as simple as rebuilding the image
reducing the number of layers in an image by reducing the number of RUN instructions in your Dockerfile is an advisable practice that you can find in the Dockerfile best practices

Docker build fails to fetch packages from archive.ubuntu.com inside bash script used in Dockerfile

Building a Docker image that executes a prerequisites installation script from inside the Dockerfile fails when fetching packages via apt-get from archive.ubuntu.com.
Using the apt-get command directly in the Dockerfile works flawlessly, despite being behind a corporate proxy, which is set up via ENV commands in the Dockerfile.
Likewise, executing apt-get from a bash script in a terminal inside the resulting container, or as a "postCreateCommand" in a devcontainer.json for Visual Studio Code, works as expected. But it does not work when the bash script is invoked from inside the Dockerfile.
It simply reports:
Starting installation of package iproute2
Reading package lists...
Building dependency tree...
The following additional packages will be installed:
libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
Suggested packages:
iproute2-doc
The following NEW packages will be installed:
iproute2 libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 971 kB of archives.
After this operation, 3,287 kB of additional disk space will be used.
Err:1 http://archive.ubuntu.com/ubuntu focal/main amd64 libcap2 amd64 1:2.32-1
Could not resolve 'archive.ubuntu.com'
... more output ...
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/libc/libcap2/libcap2_2.32-1_amd64.deb Could not resolve 'archive.ubuntu.com'
... more output ...
Just as an example, a snippet of the Dockerfile looks like this:
FROM ubuntu:20.04 as builderImage
USER root
ARG HTTP_PROXY_HOST_IP='http://172.17.0.1'
ARG HTTP_PROXY_HOST_PORT='3128'
ARG HTTP_PROXY_HOST_ADDR=$HTTP_PROXY_HOST_IP':'$HTTP_PROXY_HOST_PORT
ENV http_proxy=$HTTP_PROXY_HOST_ADDR
ENV https_proxy=$http_proxy
ENV HTTP_PROXY=$http_proxy
ENV HTTPS_PROXY=$http_proxy
ENV ftp_proxy=$http_proxy
ENV FTP_PROXY=$http_proxy
# it is always helpful sorting packages alpha-numerically to keep the overview ;)
RUN apt-get update && \
apt-get -y upgrade && \
apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
&& \
apt-get -y install \
default-jdk \
git \
python3 python3-pip
SHELL ["/bin/bash", "-c"]
ADD ./env-setup.sh .
RUN chmod +x env-setup.sh && ./env-setup.sh
CMD ["bash"]
The minimal version of the environment script env-setup.sh, which is supposed to be invoked by the Dockerfile, would look like this:
#!/bin/bash
packageCommand="apt-get";
sudo $packageCommand update;
packageInstallCommand="$packageCommand install";
package="iproute2"
packageInstallCommand+=" -y";
sudo $packageInstallCommand $package;
Of course, the variables are there so that a list of packages to install, among other things, can be used.
Hopefully that covers everything essential to the question:
Why does apt-get work in a RUN instruction, and when the bash script is run inside the container after it is created, but not from the very same bash script while building the image from the Dockerfile?
I was hoping to find the answer with an extensive web search, but unfortunately I found everything but an answer to this case.
As pointed out in the comment section underneath the question:
using sudo to launch the command wipes out all the vars set in the current environment, more specifically your proxy settings
So that was the case.
The solution is either to remove sudo from the bash script and invoke the script as root inside the Dockerfile, or to keep sudo and preserve the ENV variables by applying sudo -E.
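sudo's default environment scrubbing can be simulated with env -i (a sketch using a hypothetical proxy address: env -i clears the environment much like a default sudo policy does, while sudo -E corresponds to leaving it intact):

```shell
export http_proxy=http://172.17.0.1:3128
# Like plain sudo: the environment is reset, the proxy setting is gone
env -i sh -c 'echo "proxy=${http_proxy:-unset}"'   # prints proxy=unset
# Like sudo -E: the environment is preserved
sh -c 'echo "proxy=$http_proxy"'                   # prints proxy=http://172.17.0.1:3128
```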

Docker hangs on the RUN command OSX

I have been learning Docker using the Docker docs, and first encountered a problem in the quick-start guide for Django. The build would proceed normally up until the second-to-last command. Here is my Dockerfile and the output:
FROM python:3.5.2
ENV PYTHONUNBUFFERED 1
WORKDIR /code
ADD requirements.txt /code
RUN pip3 install -r requirements.txt
ADD . /code
Then when I run:
docker-compose run web django-admin startproject src .
I get the whole thing built and then it hangs:
Installing collected packages: Django, psycopg2
Running setup.py install for psycopg2: started
Running setup.py install for psycopg2: finished with status 'done'
Successfully installed Django-1.10.5 psycopg2-2.6.2
So, since I don't have experience with Compose, I tried the most basic docker build that included a Dockerfile. This one also got hung up.
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
And this is the last terminal line before the hang.
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
According to the tutorial, this occurs in the same spot: the second-to-last command, which also happens to be RUN.
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
---> dfaf993d4a2e
Removing intermediate container 05d4eda04526
[END RUN COMMAND]
So that's why I think it's in the RUN command, but I'm not sure why or how to fix it. Does anyone know why this might be occurring? I am using a MacBook Pro 8,1 running OS X El Capitan version 10.11.6 and Docker version 1.13.1, build 092cba3.

`npm start` in docker ends with: Please install a supported C++11 compiler and reinstall the module

I got an issue while running a Docker container, getting this error:
error: uncaughtException: Compilation of µWebSockets has failed and there is no pre-compiled binary available for your system. Please install a supported C++11 compiler and reinstall the module 'uws'.
Here is a full stack trace: http://pastebin.com/qV0hzRxL
This is my Dockerfile:
FROM node:6.7-slim
# ----- I added this, but it didn't help
RUN apt-get update && apt-get install -y gcc g++
RUN gcc --version
RUN g++ --version
# ------------------------------------
WORKDIR /usr/src/app
ENV NODE_ENV docker
RUN npm install
CMD ["npm", "start"]
Then I build it successfully with: sudo docker-compose build --no-cache chat-ws (chat-ws is the name of the image)
And sudo docker-compose up chat-ws ends with the error.
Note: the Docker image is part of a composition in docker-compose.
EDIT: Parts of docker-compose.yml
chat-ws:
build: ./dockerfiles/chat-ws
links:
- redis
- chat-api
ports:
- 3000:3000
volumes_from:
- data_chat-ws
And:
data_chat-ws:
image: node:6.7-slim
volumes:
- ${PATH_CHAT_WS}:/usr/src/app
command: "true"
Any ideas? Please?
Thanks, Peter
For me, the problem was that my darwin version was 57 (OSX 10.12.6):
npm install uws # for me this was uws#8.14.1, which has my darwin version
Now copy the compiled version into place:
cp node_modules/uws/uws_darwin_57.node node_modules/socketcluster-server/node_modules/uws/
It's late, and I only really learned about docker-compose and Stack Overflow today, so forgive me if I am making some really obvious mistakes here, but:
Have you tried running the image you made (docker run yourimage, including whichever command)? Does that run fine?
In your docker-compose, why does your data_chat-ws service have as an image node:6.7-slim? Shouldn't that be the image you built, i.e. image: chat-ws?
This is where I would start debugging: first make sure you are actually able to run the image, leaving aside all the docker-compose stuff, and only when it runs fine, add it to your docker-compose.yml. To check whether you can actually run your container, see the example in the official documentation (scroll down to the 'How to use this image' part of the page).

Cannot (apt-get) install packages inside docker

I installed an Ubuntu 14.04 virtual machine and am running Docker (1.11.2) on it. I am trying to build the sample image (here).
Dockerfile:
FROM java:8
# Install maven
RUN apt-get update
RUN apt-get install -y maven
....
I get following error:
Step 3: RUN apt-get update
--> Using cache
--->64345sdd332
Step 4: RUN apt-get install -y maven
---> Running in a6c1d5d54b7a
Reading package lists...
Reading dependency tree...
Reading state information...
E: Unable to locate package maven
INFO[0029] The command [/bin/sh -c apt-get install -y maven] returned a non-zero code:100
I have tried the following solutions, but with no success:
restarted docker here
ran it as apt-get -qq -y install curl here: same error :(
How can I view a detailed error message?
Any way to fix the issue?
you may need to update the OS inside Docker first
try running apt-get update first, then apt-get install xxx
The cached result of the apt-get update may be very stale. Redesign the package pull according to the Docker best practices:
FROM java:8
# Install maven
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install -y maven \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
Based on similar issues I have had, you want to look both at possible network issues and at possible image-related issues.
Network issues: you are already looking at proxy-related stuff. Also make sure the iptables setup done automatically by Docker has not been messed up unintentionally by yourself or another application. Typically, if another Docker container runs with a net=host option, this can cause trouble.
Image issues: the distro you are running in your container is not Ubuntu 14.04 but the one that java:8 was built from. If you took the java image from the official library on Docker Hub, you have a hierarchy of images coming initially from Debian jessie. You might want to look at the different Dockerfiles in this hierarchy to find out where the repo setup is not the one you are expecting.
In both situations, to debug this, I recommend you run a shell inside the latest image to look at the actual network and repo situation. In your case:
docker run -ti --rm 64345sdd332 /bin/bash
gives you a shell just before your install maven command runs.
I am currently working behind a proxy. It failed to download some dependencies; for that, you have to add the proxy configuration to the Dockerfile. ref
But now I am facing difficulty running "mvn", "dependency:resolve" due to the proxy; maven itself is blocked from downloading some dependencies and the build fails.
Thanks, buddies, for your great support!
Execute 'apt-get update' and 'apt-get install' in a single RUN instruction. This ensures that the latest packages will be installed. If 'apt-get install' were in a separate RUN instruction, it would reuse the layer added by 'apt-get update', which could have been created a long time ago.
RUN apt-get update && \
apt-get install -y <tool..eg: maven>
Note: RUN instructions build your image by adding layers on top of the initial image.