getting "cannot find package" trying to build my application in a docker container - go

Here is my Dockerfile.
FROM ubuntu
MAINTAINER me <my#email.com>
RUN apt-get update && apt-get install -y \
build-essential \
curl \
git \
make
# Get and compile go
RUN curl -s https://go.googlecode.com/files/go1.2.1.src.tar.gz | tar -v -C /usr/local -xz
RUN cd /usr/local/go/src && ./make.bash --no-clean 2>&1
ENV PATH /usr/local/go/bin:/go/bin:$PATH
ENV GOPATH /go
RUN go get github.com/gorilla/feeds
WORKDIR /go
CMD go version && go install feed && feed
It builds just fine:
sudo docker build -t ubuntu-go .
but when I run it I get a package error:
sudo docker run -v /home/rbucker/go:/go --name go ubuntu-go
The error looks like:
src/feed/feed.go:7:2: cannot find package "github.com/gorilla/feeds" in any of:
/usr/local/go/src/pkg/github.com/gorilla/feeds (from $GOROOT)
/go/src/github.com/gorilla/feeds (from $GOPATH)
It's odd because "go install" cannot find the dependencies even though the previous "go get github.com/gorilla/feeds" completed without errors. So presumably I have a path or environment problem, but all of the examples look just like this one.
PS: my code is located in /go/src/feed (feed.go)
package main
import (
"net/http"
"time"
"github.com/gorilla/feeds"
)
. . .
UPDATE: when I performed the "go get" manually and then launched the "run", it seemed to work. So it appears that the "RUN go get" is storing the files in the ether instead of on my host's volume.
sudo docker run -v /home/rbucker/go:/go --name go ubuntu-go /bin/bash
then
sudo docker run -v /home/rbucker/go:/go --name go ubuntu-go
(the files were located in my ~/go/src/github.com and ~/go/pkg folders.)
UPDATE: It occurs to me that during the BUILD step the /go volume has not been attached to the docker image. So it's essentially assigned to nil. But then during the run the "go install" should have retrieved its deps.
FINALLY: this works but is clearly not the preferred method:
CMD go get github.com/gorilla/feeds && go version && go install feed && feed
notice that I performed the "go get" in the CMD rather than a RUN.

I solved something similar to this by using
go get ./...
Source:
https://coderwall.com/p/arxtja/install-all-go-project-dependencies-in-one-command
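A minimal sketch of how that could be folded into the Dockerfile from the question, keeping the Go and ENV setup above and replacing the final WORKDIR and CMD lines (the paths and the package name feed come from the question; the COPY line assumes docker build is run from the directory containing feed.go, which is an assumption, not part of the original Dockerfile):
# Bake the source and its dependencies into the image at build time
COPY . /go/src/feed
WORKDIR /go/src/feed
# Fetches everything feed.go imports, e.g. github.com/gorilla/feeds, into /go inside the image
RUN go get ./...
RUN go install feed
CMD feed
Note that this only helps if the container is then run without mounting a volume over /go, since the mount would shadow whatever was baked into the image there.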

Related

No such file or directory when executing command via docker run -it

I have this Dockerfile (steps based on installation guide from AWS)
FROM amazon/aws-cli:latest
RUN yum install python37 -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py --user
RUN pip3 install awsebcli --upgrade --user
RUN echo 'export PATH=~/.local/bin:$PATH' >> ~/.bashrc
RUN source ~/.bashrc
ENTRYPOINT ["/bin/bash"]
When I build the image with docker build -t eb-cli . and then run eb --version inside the container (docker run -it eb-cli), everything works:
bash-4.2# eb --version
EB CLI 3.20.3 (Python 3.7.1)
But, when I run the command directly as docker run -it eb-cli eb --version, it gives me this error
/bin/bash: eb: No such file or directory
I think that is problem with bash profiles, but I can't figure it out.
Your sourced .bashrc only applies within the layer in which it was sourced; it won't apply to the resulting container. This is actually more thoroughly explained in this answer:
Each command runs a separate sub-shell, so the environment variables are not preserved and .bashrc is not sourced
Source: https://stackoverflow.com/a/55213158/2123530
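To make that concrete, here is a contrived two-line sketch (not from the original Dockerfile) showing the effect:
# each RUN starts a fresh shell in a fresh layer
RUN export FOO=bar && echo "FOO is '$FOO'"   # prints: FOO is 'bar'
RUN echo "FOO is now '$FOO'"                 # prints: FOO is now '' -- the variable is gone
The same thing happens to the PATH change appended to .bashrc: it is sourced in one RUN, and forgotten by the next instruction and by the time the container starts.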
A solution for you would be instead to set the PATH as an environment variable of the container, and to blank the ENTRYPOINT set by your base image.
So you could end with an image as simple as:
FROM amazon/aws-cli:latest
ENV PATH="/root/.local/bin:${PATH}"
RUN yum install python37 -y \
&& pip3 install awsebcli
ENTRYPOINT []
With this Dockerfile, here is the resulting build and run:
$ docker build . -t eb-cli -q
sha256:49c376d98fc2b35cf121b43dbaa96caf9e775b0cd236c1b76932e25c60b231bc
$ docker run eb-cli eb --version
EB CLI 3.20.3 (Python 3.7.1)
Notes:
you can install the very latest version of pip, as you did, but it is not needed since pip is already bundled with the python37 package
installing packages with the --user flag is indeed good practice, but since you are running this command as root, there is no real point in doing so here
the --upgrade flag does not make much sense here either, as the package won't have been installed beforehand; upgrading it is as simple as rebuilding the image
reducing the number of layers in an image by reducing the number of RUN instructions in your Dockerfile is an advisable practice that you can find in the Dockerfile best practices

Docker build fails to fetch packages from archive.ubuntu.com inside bash script used in Dockerfile

Trying to build a docker image with the execution of a pre-requisites installation script inside the Dockerfile fails for fetching packages via apt-get from archive.ubuntu.com.
Using the apt-get command inside the Dockerfile works flawless, despite being behind a corporate proxy, which is setup via the ENV command in the Dockerfile.
Anyway, executing the apt-get command from a bash script in a terminal inside the resulting docker container, or as a "postCreateCommand" in a devcontainer.json of Visual Studio Code, does work as expected too. But in my case it does not work when the bash script is invoked from inside the Dockerfile.
It simply reports:
Starting installation of package iproute2
Reading package lists...
Building dependency tree...
The following additional packages will be installed:
libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
Suggested packages:
iproute2-doc
The following NEW packages will be installed:
iproute2 libatm1 libcap2 libcap2-bin libmnl0 libpam-cap libxtables12
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 971 kB of archives.
After this operation, 3,287 kB of additional disk space will be used.
Err:1 http://archive.ubuntu.com/ubuntu focal/main amd64 libcap2 amd64 1:2.32-1
Could not resolve 'archive.ubuntu.com'
... more output ...
E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/libc/libcap2/libcap2_2.32-1_amd64.deb Could not resolve 'archive.ubuntu.com'
... more output ...
Just for example a snippet of the Dockerfile looks like this:
FROM ubuntu:20.04 as builderImage
USER root
ARG HTTP_PROXY_HOST_IP='http://172.17.0.1'
ARG HTTP_PROXY_HOST_PORT='3128'
ARG HTTP_PROXY_HOST_ADDR=$HTTP_PROXY_HOST_IP':'$HTTP_PROXY_HOST_PORT
ENV http_proxy=$HTTP_PROXY_HOST_ADDR
ENV https_proxy=$http_proxy
ENV HTTP_PROXY=$http_proxy
ENV HTTPS_PROXY=$http_proxy
ENV ftp_proxy=$http_proxy
ENV FTP_PROXY=$http_proxy
# it is always helpful sorting packages alpha-numerically to keep the overview ;)
RUN apt-get update && \
apt-get -y upgrade && \
apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
&& \
apt-get -y install \
default-jdk \
git \
python3 python3-pip
SHELL ["/bin/bash", "-c"]
ADD ./env-setup.sh .
RUN chmod +x env-setup.sh && ./env-setup.sh
CMD ["bash"]
The minimal version of the environment script env-setup.sh, which is supposed to be invoked by the Dockerfile, would look like this:
#!/bin/bash
packageCommand="apt-get";
sudo $packageCommand update;
packageInstallCommand="$packageCommand install";
package="iproute2"
packageInstallCommand+=" -y";
sudo $packageInstallCommand $package;
Of course the usage of variables is down to making use of a list for the packages to be installed and other aspects.
Hopefully that has covered everything essential to the question:
Why does apt-get work from a RUN instruction, and also when the bash script is run inside the container after it has been created, but not when the very same bash script is invoked while building the image from the Dockerfile?
I was hoping to find the answer through an extensive web search, but unfortunately I was unable to find an answer to this case.
As pointed out in the comment section underneath the question:
using sudo to launch the command, wiping out all the current vars set in the current environment, more specifically your proxy settings
So that was indeed the cause.
The solution is either to remove sudo from the bash script and invoke the script as root inside the Dockerfile, or to keep sudo but preserve the environment variables by running it as sudo -E.
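For illustration, the second option applied to the script from the question could look like this (a sketch that simply reuses the variables already defined above):
#!/bin/bash
packageCommand="apt-get";
# -E tells sudo to preserve the caller's environment, so http_proxy and friends survive
sudo -E $packageCommand update;
packageInstallCommand="$packageCommand install -y";
package="iproute2"
sudo -E $packageInstallCommand $package;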

Cannot build dockerfile with sdkman

I am entirely new to the concept of dockers. I am creating the following Dockerfile as an exercise.
FROM ubuntu:latest
MAINTAINER kesarling
RUN apt update && apt upgrade -y
RUN apt install nginx curl zip unzip -y
RUN apt install openjdk-14-jdk python3 python3-doc clang golang-go gcc g++ -y
RUN curl -s "https://get.sdkman.io" | bash
RUN bash /root/.sdkman/bin/sdkman-init.sh
RUN sdk version
RUN yes | bash -c 'sdk install kotlin'
CMD [ "echo","The development environment has now been fully setup with C, C++, JAVA, Python3, Go and Kotlin" ]
I am using SDKMAN! to install Kotlin. The problem initially was that instead of using RUN bash /root/.sdkman/bin/sdkman-init.sh, I was using RUN source /root/.sdkman/bin/sdkman-init.sh. However, it gave an error saying source not found. So I tried RUN . /root/.sdkman/bin/sdkman-init.sh, and that did not work either. RUN bash /root/.sdkman/bin/sdkman-init.sh seems to work, in that it does not give any error and moves on to the next command. However, docker then gives an error saying sdk: not found.
Where am I going wrong?
It should be noted that these steps worked like a charm on my host distribution (the one on which I'm running docker), which is Pop!_OS 20.04.
Actually the script /root/.sdkman/bin/sdkman-init.sh sets up the sdk command in the current shell.
source is a built-in of bash rather than a binary somewhere on the filesystem.
The source command executes the file in the current shell.
Each RUN instruction will execute any commands in a new layer on top of the current image and commit the results.
The resulting committed image will be used for the next step in the Dockerfile.
Try this:
FROM ubuntu:latest
MAINTAINER kesarling
RUN apt update && apt upgrade -y
RUN apt install nginx curl zip unzip -y
RUN apt install openjdk-14-jdk python3 python3-doc clang golang-go gcc g++ -y
RUN curl -s "https://get.sdkman.io" | bash
RUN /bin/bash -c "source /root/.sdkman/bin/sdkman-init.sh; sdk version; sdk install kotlin"
CMD [ "echo","The development environment has now been fully setup with C, C++, JAVA, Python3, Go and Kotlin" ]
SDKMAN in Ubuntu Dockerfile
tl;dr
the sdk command is not a binary but a bash script loaded into memory
Shell sessions are a "process", which means environment variables and declared shell functions only exist for the duration of that shell session, which lasts only as long as the RUN command.
Manually tweak your PATH
RUN apt-get update && apt-get install curl bash unzip zip -y
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh" \
&& sdk install java 8.0.275-amzn \
&& sdk install sbt 1.4.2 \
&& sdk install scala 2.12.12
ENV PATH=/root/.sdkman/candidates/java/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/scala/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/sbt/current/bin:$PATH
Full Version
Oh wow this was a journey to figure out. Below each line is commented as to why certain commands are run.
I learnt a lot about how unix works, how sdkman works, how docker works, and why the intersection of the three gives very unusual behaviour.
# I am using a multi-stage build so I am just copying the built artifacts
# from this stage to keep final image small.
FROM ubuntu:latest as ScalaBuild
# Switch from `sh -c` to `bash -c` as the shell behind a `RUN` command.
SHELL ["/bin/bash", "-c"]
# Usual updates
RUN apt-get update && apt-get upgrade -y
# Dependencies for sdkman installation
RUN apt-get install curl bash unzip zip -y
#Install sdkman
RUN curl -s "https://get.sdkman.io" | bash
# FUN FACTS:
# 1) the `sdk` command is not a binary but a bash script loaded into memory
# 2) Shell sessions are a "process", which means environment variables
# and declared shell function only exist for
# the duration that shell session exists
RUN source "$HOME/.sdkman/bin/sdkman-init.sh" \
&& sdk install java 8.0.275-amzn \
&& sdk install sbt 1.4.2 \
&& sdk install scala 2.12.12
# Once the real binaries exist these are
# the symlinked paths that need to exist on PATH
ENV PATH=/root/.sdkman/candidates/java/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/scala/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/sbt/current/bin:$PATH
# This is specific to running a minimal empty Scala project and packaging it
RUN touch build.sbt
RUN sbt compile
RUN sbt package
FROM alpine AS production
# setup production environment image here
COPY --from=ScalaBuild /root/target/scala-2.12/ $INSTALL_PATH
ENTRYPOINT ["java", "-cp", "$INSTALL_PATH", "your.main.classfile"]
Generally you want to avoid using "version manager" type tools in Docker; it's better to install a specific version of the compiler or runtime you need.
In the case of Kotlin, it's a JVM application distributed as a zip file so it should be fairly easy to install:
FROM openjdk:15-slim
ARG KOTLIN_VERSION=1.3.72
# Get OS-level updates:
RUN apt-get update \
&& apt-get install --no-install-recommends --assume-yes \
curl \
unzip
# and if you need C/Python dependencies, those too
# Download and unpack Kotlin
RUN cd /opt \
&& curl -LO https://github.com/JetBrains/kotlin/releases/download/v${KOTLIN_VERSION}/kotlin-compiler-${KOTLIN_VERSION}.zip \
&& unzip kotlin-compiler-${KOTLIN_VERSION}.zip \
&& rm kotlin-compiler-${KOTLIN_VERSION}.zip
# Add its directory to $PATH
ENV PATH=/opt/kotlinc/bin:$PATH
The real problem with version managers is that they heavily depend on the tool setting environment variables. As @JeevanRao notes in their answer, each Dockerfile RUN command runs in a separate shell in a separate container, and any environment variable settings within that command get lost for the next command.
# Does absolutely nothing: environment variables do not stay set
RUN . /root/.sdkman/bin/sdkman-init.sh
Since an image generally contains only one application and its runtime, you don't need the ability to change which version of the runtime or compiler you're using. My Dockerfile example passes it as an ARG, so you can change it in the Dockerfile or pass a docker build --build-arg KOTLIN_VERSION=... option to use a different version.
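For example, building against a different Kotlin release would then look something like this (the version number and the image tag are only illustrative):
docker build --build-arg KOTLIN_VERSION=1.4.32 -t kotlin-build .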

Unable to install ipfs

I'm trying to download ipfs on ubuntu so I can use it with golang.
I'm using the following command:
go get -d github.com/ipfs/go-ipfs
But that gives me the following error message:
package github.com/ipfs/go-ipfs
imports runtime: cannot find package "runtime" in any of:
/home/userone/go/src/runtime (from $GOROOT)
/home/userone/gostuff/src/runtime (from $GOPATH)
I have added the following lines at the end of the file ~/.bashrc
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
I am using Ubuntu 16.04 and I installed golang using the following command
sudo aptitude install golang-go git
Why am I getting that error message?
Install instructions can be found at the README.md https://github.com/ipfs/go-ipfs#install
To compile from source, do:
$ go get -u -d github.com/ipfs/go-ipfs
$ cd $GOPATH/src/github.com/ipfs/go-ipfs
$ make install
Just to make everything clear, the following are the only things you need to do (don't set GOROOT):
Set environment
export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin
Install IPFS from source
$ go get -u -d github.com/ipfs/go-ipfs
$ cd $GOPATH/src/github.com/ipfs/go-ipfs
$ make install
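Before retrying go get, it can help to confirm what the toolchain actually sees (a quick check; the output shown is only an example):
$ go env GOROOT GOPATH
/usr/lib/go-1.6
/home/userone/gostuff
GOROOT should point at the Go installation itself, not at a workspace. If it comes back as /home/userone/go, as the error message in the question suggests, remove the GOROOT export and let Go fall back to its built-in default.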

How to import an unpopular package to Docker using the GOLang official image?

I've posted this question already as an issue on the imagick git repository, but it has a very small user-base, so I'm hoping to get some help from here. I've been trying for a few days now to import https://github.com/gographics/imagick to Docker using the official goLang dockerfile for a project I'm working on, but have been unsuccessful. Since this package isn't popular, running apt-get won't work. I've (hesitantly) tried to just add the files to the container, but that didn't work. Here's the DockerFile I've built and the error it produces:
===DOCKERFILE===
# 1) Use the official go docker image built on debian.
FROM golang:latest
# 2) ENV VARS
ENV GOPATH $HOME/<PROJECT>
ENV PATH $HOME/<PROJECT>/bin:$PATH
# 3) Grab the source code and add it to the workspace.
ADD . /<GO>/src/<PROJECT>
ADD . /<GO>/gopkg.in
# Trying to add the files manually... Doesn't help.
ADD . /opt/local/share/doc/ImageMagick-6
# 4) Install revel and the revel CLI.
#(The commented out code is from previous attempts)
#RUN pkg-config --cflags --libs MagickWand
#RUN go get gopkg.in/gographics/imagick.v2/imagick
RUN go get github.com/revel/revel
RUN go get github.com/revel/cmd/revel
# 5) Does not work... Can't find the package.
#RUN apt-get install libmagickwand-dev
# 6) Get godeps from main repo
RUN go get github.com/tools/godep
# 7) Restore godep dependencies
WORKDIR /<GO>/src/<PROJECT>
RUN godep restore
# 8) Install Imagick
#RUN go build -tags no_pkgconfig gopkg.in/gographics/imagick.v2/imagick
# 9) Use the revel CLI to start up our application.
ENTRYPOINT revel run <PROJECT> dev 9000
# 10) Open up the port where the app is running.
EXPOSE 9000
===END DOCKERFILE===
This allows me to build the docker container, but when I try to run it, I get the following error in the logs of kinematic:
===DOCKER ERROR===
ERROR 2016/08/20 21:15:10 build.go:108: # pkg-config --cflags MagickWand MagickCore MagickWand MagickCore
pkg-config: exec: "pkg-config": executable file not found in $PATH
2016-08-20T21:15:10.081426584Z
ERROR 2016/08/20 21:15:10 build.go:308: Failed to parse build errors:
#pkg-config --cflags MagickWand MagickCore MagickWand MagickCore
pkg-config: exec: "pkg-config": executable file not found in $PATH
2016-08-20T21:15:10.082140143Z
===END DOCKER ERROR===
Most base images have the package lists removed to reduce image size. Thus, in order to install something with apt-get, you first need to update the package lists and then install whatever package you wish. Then, after installing the package, remove all side effects of running apt to avoid polluting the image with unneeded files (all of that necessarily as a single RUN command).
The following Dockerfile should do the trick:
FROM golang:latest
# update the package lists, install the package, clean the apt cache,
# and remove everything else that apt left behind
RUN apt-get update \
&& apt-get install -y libmagickwand-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN go get gopkg.in/gographics/imagick.v2/imagick
Remember to add -y to apt-get install, because docker build is non-interactive.
