Docker hangs on the RUN command on macOS (OS X)

I have been learning Docker using the Docker docs, and first encountered a problem in the quick start guide for Django. The build runs normally up until the second-to-last command. Here is my Dockerfile and the output:
FROM python:3.5.2
ENV PYTHONUNBUFFERED 1
WORKDIR /code
ADD requirements.txt /code
RUN pip3 install -r requirements.txt
ADD . /code
Then when I run:
docker-compose run web django-admin startproject src .
I get the whole thing built and then it hangs:
Installing collected packages: Django, psycopg2
Running setup.py install for psycopg2: started
Running setup.py install for psycopg2: finished with status 'done'
Successfully installed Django-1.10.5 psycopg2-2.6.2
Since I don't have experience with Compose, I tried the most basic docker build that includes a Dockerfile. This one also hung.
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
And this is the last terminal line before the hang.
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
According to the tutorial this occurs at the same spot: the second-to-last command, which also happens to be RUN.
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
---> dfaf993d4a2e
Removing intermediate container 05d4eda04526
[END RUN COMMAND]
So that's why I think it's the RUN command, but I'm not sure why or how to fix it. Does anyone know why this might be occurring? I am using a MacBook Pro 8.1 running OS X El Capitan 10.11.6 and Docker version 1.13.1, build 092cba3.

Related

No such file or directory when executing command via docker run -it

I have this Dockerfile (steps based on the installation guide from AWS)
FROM amazon/aws-cli:latest
RUN yum install python37 -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py --user
RUN pip3 install awsebcli --upgrade --user
RUN echo 'export PATH=~/.local/bin:$PATH' >> ~/.bashrc
RUN source ~/.bashrc
ENTRYPOINT ["/bin/bash"]
When I build the image with docker build -t eb-cli . and then run eb --version inside the container (docker run -it eb-cli), everything works:
bash-4.2# eb --version
EB CLI 3.20.3 (Python 3.7.1)
But, when I run the command directly as docker run -it eb-cli eb --version, it gives me this error
/bin/bash: eb: No such file or directory
I think it is a problem with bash profiles, but I can't figure it out.
Your sourced .bashrc only applies within the layer where it was sourced; it won't apply to the resulting container. This is explained more thoroughly in this answer:
Each command runs a separate sub-shell, so the environment variables are not preserved and .bashrc is not sourced
Source: https://stackoverflow.com/a/55213158/2123530
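As a small illustration of that point (a sketch only, not from the original answer; the base image is just an example), an export inside one RUN is gone by the time the next RUN starts, while ENV survives into every later instruction and into the final container:
FROM amazon/aws-cli:latest
# This export lives only inside the shell that runs this single RUN instruction.
RUN export FOO=from_run
# Prints an empty value: the previous export did not survive into this new shell.
RUN echo "FOO is '${FOO}'"
# ENV is recorded in the image configuration instead...
ENV FOO=from_env
# ...so it is visible in every later instruction and in the resulting container.
RUN echo "FOO is '${FOO}'"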
A solution would be to set the PATH in an environment variable of the container instead, and to blank the ENTRYPOINT set by your base image.
So you could end with an image as simple as:
FROM amazon/aws-cli:latest
ENV PATH="/root/.local/bin:${PATH}"
RUN yum install python37 -y \
&& pip3 install awsebcli
ENTRYPOINT []
With this Dockerfile, here is the resulting build and run:
$ docker build . -t eb-cli -q
sha256:49c376d98fc2b35cf121b43dbaa96caf9e775b0cd236c1b76932e25c60b231bc
$ docker run eb-cli eb --version
EB CLI 3.20.3 (Python 3.7.1)
Notes:
you can install the very latest version of pip, as you did, but it is not needed since pip is already bundled with the python37 package
installing packages for the user, with the --user flag, is indeed good practice, but since you are running this command as root there is no real point in doing so here
the --upgrade flag does not make much sense here either, as the package won't be installed beforehand; upgrading the package is as simple as rebuilding the image
reducing the number of layers of an image by reducing the number of RUN instructions in your Dockerfile is an advisable practice that you can find in the best practices guide
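If you want to see the effect on layers for yourself, docker history lists one line per layer of the image built above (the exact output depends on your Docker version and build):
$ docker history eb-cli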

Cannot build Dockerfile with SDKMAN

I am entirely new to the concept of Docker. I am creating the following Dockerfile as an exercise.
FROM ubuntu:latest
MAINTAINER kesarling
RUN apt update && apt upgrade -y
RUN apt install nginx curl zip unzip -y
RUN apt install openjdk-14-jdk python3 python3-doc clang golang-go gcc g++ -y
RUN curl -s "https://get.sdkman.io" | bash
RUN bash /root/.sdkman/bin/sdkman-init.sh
RUN sdk version
RUN yes | bash -c 'sdk install kotlin'
CMD [ "echo","The development environment has now been fully setup with C, C++, JAVA, Python3, Go and Kotlin" ]
I am using SDKMAN! to install Kotlin. The problem initially was that instead of using RUN bash /root/.sdkman/bin/sdkman-init.sh, I was using RUN source /root/.sdkman/bin/sdkman-init.sh. However, it gave an error saying source not found. So I tried using RUN . /root/.sdkman/bin/sdkman-init.sh, and it did not work. RUN bash /root/.sdkman/bin/sdkman-init.sh seems to work, as in it does not give any error and moves on to the next command. However, Docker then gives an error saying sdk: not found.
Where am I going wrong?
It should be noted that these steps worked like a charm on my host distribution (the one on which I'm running Docker), which is Pop!_OS 20.04.
Actually the script /root/.sdkman/bin/sdkman-init.sh sets up the sdk command in the shell that sources it.
source is a built-in of bash rather than a binary somewhere on the filesystem.
The source command executes the file in the current shell.
Each RUN instruction will execute any commands in a new layer on top of the current image and commit the results.
The resulting committed image will be used for the next step in the Dockerfile.
Try this:
FROM ubuntu:latest
MAINTAINER kesarling
RUN apt update && apt upgrade -y
RUN apt install nginx curl zip unzip -y
RUN apt install openjdk-14-jdk python3 python3-doc clang golang-go gcc g++ -y
RUN curl -s "https://get.sdkman.io" | bash
RUN /bin/bash -c "source /root/.sdkman/bin/sdkman-init.sh; sdk version; sdk install kotlin"
CMD [ "echo","The development environment has now been fully setup with C, C++, JAVA, Python3, Go and Kotlin" ]
SDKMAN in Ubuntu Dockerfile
tl;dr
the sdk command is not a binary but a bash script loaded into memory
Shell sessions are a "process", which means environment variables and declared shell functions only exist for as long as that shell session exists; and that lasts only as long as the RUN command.
Manually tweak your PATH
RUN apt-get update && apt-get install curl bash unzip zip -y
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh" \
&& sdk install java 8.0.275-amzn \
&& sdk install sbt 1.4.2 \
&& sdk install scala 2.12.12
ENV PATH=/root/.sdkman/candidates/java/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/scala/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/sbt/current/bin:$PATH
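With those ENV lines in place, the tools resolve without sourcing anything. Assuming the snippet above is completed into a full Dockerfile (FROM line and so on), and with an illustrative tag:
$ docker build -t sdkman-jvm .
$ docker run --rm sdkman-jvm java -version
$ docker run --rm sdkman-jvm scala -version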
Full Version
Oh wow, this was a journey to figure out. Below, each line is commented as to why certain commands are run.
I learnt a lot about how Unix works, how SDKMAN works, how Docker works, and why the intersection of the three gives very unusual behaviour.
# I am using a multi-stage build so I am just copying the built artifacts
# from this stage to keep final image small.
FROM ubuntu:latest as ScalaBuild
# Switch from `sh -c` to `bash -c` as the shell behind a `RUN` command.
SHELL ["/bin/bash", "-c"]
# Usual updates
RUN apt-get update && apt-get upgrade -y
# Dependencies for sdkman installation
RUN apt-get install curl bash unzip zip -y
#Install sdkman
RUN curl -s "https://get.sdkman.io" | bash
# FUN FACTS:
# 1) the `sdk` command is not a binary but a bash script loaded into memory
# 2) Shell sessions are a "process", which means environment variables
# and declared shell function only exist for
# the duration that shell session exists
RUN source "$HOME/.sdkman/bin/sdkman-init.sh" \
&& sdk install java 8.0.275-amzn \
&& sdk install sbt 1.4.2 \
&& sdk install scala 2.12.12
# Once the real binaries exist these are
# the symlinked paths that need to exist on PATH
ENV PATH=/root/.sdkman/candidates/java/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/scala/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/sbt/current/bin:$PATH
# This is specific to running a minimal empty Scala project and packaging it
RUN touch build.sbt
RUN sbt compile
RUN sbt package
FROM alpine AS production
# setup production environment image here
COPY --from=ScalaBuild /root/target/scala-2.12/ $INSTALL_PATH
ENTRYPOINT ["java", "-cp", "$INSTALL_PATH", "your.main.classfile"]
Generally you want to avoid using "version manager" type tools in Docker; it's better to install a specific version of the compiler or runtime you need.
In the case of Kotlin, it's a JVM application distributed as a zip file so it should be fairly easy to install:
FROM openjdk:15-slim
ARG KOTLIN_VERSION=1.3.72
# Get OS-level updates:
RUN apt-get update \
&& apt-get install --no-install-recommends --assume-yes \
curl \
unzip
# and if you need C/Python dependencies, those too
# Download and unpack Kotlin
RUN cd /opt \
&& curl -LO https://github.com/JetBrains/kotlin/releases/download/v${KOTLIN_VERSION}/kotlin-compiler-${KOTLIN_VERSION}.zip \
&& unzip kotlin-compiler-${KOTLIN_VERSION}.zip \
&& rm kotlin-compiler-${KOTLIN_VERSION}.zip
# Add its directory to $PATH
ENV PATH=/opt/kotlinc/bin:$PATH
The real problem with version managers is that they heavily depend on the tool setting environment variables. As @JeevanRao notes in their answer, each Dockerfile RUN command runs in a separate shell in a separate container, and any environment variable settings within that command are lost for the next command.
# Does absolutely nothing: environment variables do not stay set
RUN . /root/.sdkman/bin/sdkman-init.sh
Since an image generally contains only one application and its runtime, you don't need the ability to change which version of the runtime or compiler you're using. My Dockerfile example passes it as an ARG, so you can change it in the Dockerfile or pass a docker build --build-arg KOTLIN_VERSION=... option to use a different version.
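For example (the tag is illustrative, and the version must be one whose kotlin-compiler zip exists on the GitHub releases page):
# Build with the default Kotlin version baked into the Dockerfile
$ docker build -t kotlin-image .
# Or override it at build time without editing the Dockerfile
$ docker build --build-arg KOTLIN_VERSION=1.4.32 -t kotlin-image:1.4.32 .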

How do I get LaraDock to use yum instead of apt-get?

I am trying to set up a container using laradock with the following command:
docker-compose up -d nginx mysql
The problem is I am getting the following error:
E: There were unauthenticated packages and -y was used without --allow-unauthenticated
ERROR: Service 'workspace' failed to build: The command '/bin/sh -c apt-get update -yqq && apt-get -yqq install nasm' returned a non-zero code: 100
Is there a way to get it to use yum instead of apt-get?
(I'm a server noob, thought docker would be easy and it seems that it is. Just can't figure out why it's trying to use apt-get instead of yum. Thanks.)
I suggest reading about the problems with different package systems: Getting apt-get on an alpine container
Most official Docker images are available with different versions of Linux (Alpine, Debian, CentOS). I would rather create my own Dockerfile and change the FROM x:y line than mix package systems.
But read the linked answer first.
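As a sketch of that idea (nasm is just the package from the error above), you install with the package manager that matches the base image rather than forcing yum onto a Debian-based image:
# Option A: keep a Debian/Ubuntu base and use apt-get
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nasm
# Option B: switch the FROM line to a yum-based distribution instead
# FROM centos:7
# RUN yum install -y nasm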

Pip stuck on "Running command python setup.py egg_info" - no errors.

I run Vagrant on Windows 10 with VirtualBox and a Xenial64 Ubuntu box to load TaigaIO via the manual setup.
At the pip install -vvv -r requirements-devel.txt step, pip hangs forever when it tries to install django-sampledatahelper.
When I try to install just this package, it shows the same effect: no errors, no return to bash, just hanging on:
Downloading from URL https://pypi.python.org/packages/2b/fe/e8ef20ee17dcd5d4df96c36dcbcaca7a79d6a2f8dc319f4e25107e000859/django-sampledatahelper-0.4.1.tar.gz#md5=a750d769af76d3f6e5791cfeb78832b0 (from https://pypi.python.org/simple/django-sampledatahelper/)
Running setup.py (path:/tmp/pip-build-pZcRoU/django-sampledatahelper/setup.py) egg_info for package django-sampledatahelper
Running command python setup.py egg_info
I tried a fresh VM install, with and without virtualenv, pip mirrors, removing the cache and the --no-cache option, the xenial64 and bento/ubuntu-16.04 distros, with vagrant ssh and with PuTTY. The effect is the same.
I had the same issue and ran the -vvv command. It seemed that pip had stopped, but I waited for a couple of minutes and the package installed successfully.
It seems that there is something wrong with the Ubuntu Xenial64 distribution AND the manual setup instructions. When I use bento/ubuntu-16.04 and setup-server.sh from taiga-scripts, the installation finishes correctly.
It was still downloading the package, just slowly. The inner pip invocation that runs the egg_info step uses neither the '-i' nor the '--proxy' option you passed to the outer pip, so it does not benefit from your mirror or proxy settings.
You can use a global proxy (tun/tap or VPN), or modify the pip invocation so that the inner setup downloads the package the faster way.
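A minimal sketch of the global-proxy route (the proxy address is a placeholder): exported proxy variables are inherited by the child processes pip spawns, including the inner setup.py step, so they may help where --proxy alone does not:
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
pip install -vvv -r requirements-devel.txt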

Cannot (apt-get) install packages inside docker

I installed an Ubuntu 14.04 virtual machine and am running Docker (1.11.2). I am trying to build the sample image (here).
Dockerfile:
FROM java:8
# Install maven
RUN apt-get update
RUN apt-get install -y maven
....
I get following error:
Step 3: RUN apt-get update
--> Using cache
--->64345sdd332
Step 4: RUN apt-get install -y maven
---> Running in a6c1d5d54b7a
Reading package lists...
Reading dependency tree...
Reading state information...
E: Unable to locate package maven
INFO[0029] The command [/bin/sh -c apt-get install -y maven] returned a non-zero code:100
I have tried the following solutions, but with no success:
restarted Docker here
ran it as apt-get -qq -y install curl here: same error :(
How can I view a detailed error message?
Any way to fix the issue?
You may need to update the OS inside Docker first:
try running apt-get update first, then apt-get install xxx
The cached result of the apt-get update may be very stale. Redesign the package pull according to the Docker best practices:
FROM java:8
# Install maven
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install -y maven \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
Based on similar issues I had, you want to both look at possible network issues and possible image related issues.
Network issues: you are already looking at proxy-related stuff. Also make sure the iptables setup done automatically by Docker has not been messed up unintentionally by yourself or another application. Typically, if another Docker container runs with the net=host option, this can cause trouble.
Image issues: the distro you are running in your container is not Ubuntu 14.04 but the one that java:8 was built from. If you took the java image from the official library on Docker Hub, you have a hierarchy of images coming initially from Debian jessie. You might want to look at the different Dockerfiles in this hierarchy to find out where the repo setup is not the one you are expecting.
For both situations, to debug this, I recommend you run a shell inside the latest image to check the actual network and repo situation in your image. In your case
docker run -ti --rm 64345sdd332 /bin/bash
gives you a shell just before running your install maven command.
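Inside that shell, a few generic things worth checking (these commands are illustrative, not from the original answer):
# Which distribution the java:8 image is actually based on
cat /etc/os-release
# Which repositories apt is configured to use
cat /etc/apt/sources.list
# Whether the index is reachable and whether maven is known at all
apt-get update && apt-cache policy maven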
I am currently working behind a proxy. It failed to download some dependencies; for that you have to add the proxy configuration to the Dockerfile (ref), as sketched below.
But now I am facing difficulty running "mvn", "dependency:resolve" because of the proxy: Maven itself is blocked from downloading some dependencies and the build fails.
Thanks buddies for your great support!
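A common way to pass the proxy into the build (the address is a placeholder) is via Docker's predefined proxy build arguments:
$ docker build \
    --build-arg http_proxy=http://proxy.example.com:3128 \
    --build-arg https_proxy=http://proxy.example.com:3128 \
    -t my-maven-image .
Note that Maven itself does not read http_proxy; it usually needs a <proxies> entry in a settings.xml that you copy into the image.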
Execute 'apt-get update' and 'apt-get install' in a single RUN instruction. This is done to make sure that the latest packages will be installed. If 'apt-get install' were in a separate RUN instruction, then it would reuse a layer added by 'apt-get update', which could have been created a long time ago.
RUN apt-get update && \
apt-get install -y <tool..eg: maven>
Note: RUN instructions build your image by adding layers on top of the initial image.
