How to import an unpopular package into Docker using the official Golang image?

I've already posted this question as an issue on the imagick Git repository, but it has a very small user base, so I'm hoping to get some help here. I've been trying for a few days now to import https://github.com/gographics/imagick into Docker using the official Golang Dockerfile for a project I'm working on, but have been unsuccessful. Since this package isn't popular, running apt-get won't work. I've (hesitantly) tried to just add the files to the container, but that didn't work. Here's the Dockerfile I've built and the error it produces:
===DOCKERFILE===
# 1) Use the official go docker image built on debian.
FROM golang:latest
# 2) ENV VARS
ENV GOPATH $HOME/<PROJECT>
ENV PATH $HOME/<PROJECT>/bin:$PATH
# 3) Grab the source code and add it to the workspace.
ADD . /<GO>/src/<PROJECT>
ADD . /<GO>/gopkg.in
# Trying to add the files manually... Doesn't help.
ADD . /opt/local/share/doc/ImageMagick-6
# 4) Install revel and the revel CLI.
#(The commented out code is from previous attempts)
#RUN pkg-config --cflags --libs MagickWand
#RUN go get gopkg.in/gographics/imagick.v2/imagick
RUN go get github.com/revel/revel
RUN go get github.com/revel/cmd/revel
# 5) Does not work... Can't find the package.
#RUN apt-get install libmagickwand-dev
# 6) Get godeps from main repo
RUN go get github.com/tools/godep
# 7) Restore godep dependencies
WORKDIR /<GO>/src/<PROJECT>
RUN godep restore
# 8) Install Imagick
#RUN go build -tags no_pkgconfig gopkg.in/gographics/imagick.v2/imagick
# 9) Use the revel CLI to start up our application.
ENTRYPOINT revel run <PROJECT> dev 9000
# 10) Open up the port where the app is running.
EXPOSE 9000
===END DOCKERFILE===
This allows me to build the Docker container, but when I try to run it, I get the following error in the Kitematic logs:
===DOCKER ERROR===
ERROR 2016/08/20 21:15:10 build.go:108: # pkg-config --cflags MagickWand MagickCore MagickWand MagickCore
pkg-config: exec: "pkg-config": executable file not found in $PATH
2016-08-20T21:15:10.081426584Z
ERROR 2016/08/20 21:15:10 build.go:308: Failed to parse build errors:
#pkg-config --cflags MagickWand MagickCore MagickWand MagickCore
pkg-config: exec: "pkg-config": executable file not found in $PATH
2016-08-20T21:15:10.082140143Z
===END DOCKER ERROR===

Most base images have their package lists removed to reduce image size. Thus, in order to install something with apt-get, you first need to update the package lists and then install whatever package you want. Then, after installing the package, remove all side effects of running apt so you don't pollute the image with unneeded files (all of that, necessarily, as a single RUN command).
The following Dockerfile should do the trick:
FROM golang:latest
# update the package lists, install the package, clean the package cache,
# and remove everything else apt left behind -- all in a single RUN
RUN apt-get update \
 && apt-get install -y libmagickwand-dev \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN go get gopkg.in/gographics/imagick.v2/imagick
Remember to add -y to apt-get install, because docker build is non-interactive.
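If the build still complains that the pkg-config executable itself is missing (as in the error log above), install it explicitly in the same RUN; a minimal sketch, assuming the same golang:latest base:
RUN apt-get update \
 && apt-get install -y pkg-config libmagickwand-dev \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*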

Related

Dockerfile configuration for GSSAPI with SASL_SSL support for alpine based Go image

I have a Confluent Kafka consumer written in Golang. I am trying to deploy it in a PKS cluster.
The Kafka config looks like this,
kafka.bootstrap.servers=server.myserver.com
kafka.security.protocol=SASL_SSL
kafka.sasl.mechanisms=GSSAPI
kafka.group.id=kafka-go-getting-started
kafka.auto.offset.reset=latest
kafka.topic=topic.consumer-topic
acks=all
I need to configure my Dockerfile for the GSSAPI mechanism with the SASL_SSL protocol. I have managed to resolve the GSSAPI part; however, it currently shows:
Failed to create consumer: Unsupported value "SASL_SSL" for configuration property "security.protocol": OpenSSL not available at build time
Here is what my Dockerfile looks like:
FROM golang:1.19-alpine3.16 as c-bindings
RUN apk update && apk upgrade && apk add pkgconf git bash build-base sudo
RUN git clone https://github.com/edenhill/librdkafka.git
RUN cd librdkafka && ./configure && make && sudo make install
FROM c-bindings as app-builder
WORKDIR /go/app
COPY . .
RUN go mod download
RUN go mod verify
RUN go build -race -tags musl --ldflags "-extldflags -static -s -w" -o main ./main.go
FROM scratch AS app-runner
WORKDIR /go/app/
COPY --from=app-builder /go/app/main ./main
CMD ["/go/app/main"]`
I tried a few things in the Dockerfile to make OpenSSL available, but things are stuck at the same point. I am not sure whether both the GSSAPI mechanism and the SASL_SSL protocol can be resolved with a common solution.
[Dec 05, 2022] Latest try:
Dockerfile,
FROM golang:1.19-alpine as c-bindings
RUN apk update && apk upgrade && apk add pkgconf git bash build-base sudo
FROM c-bindings as app-builder
WORKDIR /go/app
COPY . .
RUN go mod download
RUN go mod verify
RUN apk add zstd-dev
RUN apk add krb5
RUN apk add cyrus-sasl-gssapiv2
RUN apk add cyrus-sasl-dev
RUN apk add openssl-dev
RUN git clone https://github.com/edenhill/librdkafka.git
RUN cd librdkafka && ./configure --install-deps && make && sudo make install
COPY krb5.conf /etc/krb5.conf
COPY jaas.conf /etc/jaas.conf
RUN go build -race -tags dynamic -o main ./main.go
CMD ["/go/app/main"]
Kafka config -
kafka.bootstrap.servers=server.myserver.com
kafka.security.protocol=SASL_SSL
kafka.sasl.mechanism=GSSAPI
kafka.group.id=kafka-go-getting-started
kafka.auto.offset.reset=latest
kafka.topic=topic.consumer-topic
kafka.ssl.ca.location=/etc/ssl/certs/my-cert.pem
kafka.sasl.kerberos.service.name=kafka
kafka.sasl.kerberos.keytab=/etc/security/keytab/consumer.keytab
kafka.sasl.kerberos.principal=principal#myprincipal.COM
acks=all
Now the container is technically running. However, it is not able to run the Kafka consumer application, failing with the error below:
GSSAPI Error: A token had an invalid MIC (unknown mech-code 0 for mech unknown)
That is because you are missing the SSL or SASL dependencies. You need to make sure libssl-dev is present; it could also need libsasl2-dev and libsasl2-modules, but libssl-dev should be enough.
Adding the following to the Dockerfile should help resolve it (libressl-dev is the Alpine equivalent of libssl-dev):
RUN apk add libressl-dev
See the official libssl documentation and the Alpine package listing for details.
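One thing to check, as a sketch under the assumption that the consumer links against the librdkafka built in the first stage: the SSL and SASL headers must be installed before librdkafka is configured, otherwise its configure step detects no OpenSSL and you get the "OpenSSL not available at build time" error at runtime. Using the packages already present in the Dec 05 attempt (pick either openssl-dev or libressl-dev, not both):
FROM golang:1.19-alpine AS c-bindings
RUN apk update && apk upgrade \
 && apk add pkgconf git bash build-base sudo \
    openssl-dev cyrus-sasl-dev cyrus-sasl-gssapiv2 krb5-dev zstd-dev
# Build librdkafka only after the -dev packages above are in place,
# so ./configure picks up OpenSSL and Cyrus SASL.
RUN git clone https://github.com/edenhill/librdkafka.git \
 && cd librdkafka && ./configure && make && make install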

unrecognized option --gc-keep-exported on arm ld

Right now, I'm trying to integrate a GitHub Action that checks if some code on a pull request compiles properly (VEX Robotics for anyone interested). However, when it gets to running the make command, I get this error:
Building Project
make: Entering directory '/github/workspace/V5'
Adding timestamp [OK]
Creating cold package with libpros,okapilib [ERRORS]
/usr/lib/gcc/arm-none-eabi/6.3.1/../../../arm-none-eabi/bin/ld: unrecognized option '--gc-keep-exported'
/usr/lib/gcc/arm-none-eabi/6.3.1/../../../arm-none-eabi/bin/ld: use the --help option for usage information
collect2: error: ld returned 1 exit status
make: *** [bin/cold.package.elf] Error 1
common.mk:200: recipe for target 'bin/cold.package.elf' failed
I'm extremely confused as to why this is occurring: --gc-keep-exported is a real option, and this code compiles perfectly on my local machine. I've tried changing the Ubuntu version and updating the VEX SDK to see if it helps, but I keep getting the same error. What should I do?
Code:
Dockerfile:
FROM ubuntu:18.04
RUN apt-get update
# Install GCC & Clang
RUN apt-get install build-essential -y
RUN apt-get install clang -y
# Install needed ARM deps
RUN apt-get install gcc-arm-none-eabi -y
RUN apt-get install binutils-arm-none-eabi -y
# Install 7z & cURL
RUN apt-get install p7zip-full -y
RUN apt-get install curl -y
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
echo "Downloading VEX SDK"
# Get VEX SDK and put it in ~/sdk
curl -L https://content.vexrobotics.com/vexcode/v5code/VEXcodeProV5_2_0_1.dmg -o _vexcode_.dmg
7z x _vexcode_.dmg || :
7z x Payload~ ./VEXcode\ Pro\ V5.app/Contents/Resources/sdk -osdk_temp || :
mkdir ~/sdk
mv sdk_temp/VEXcode\ Pro\ V5.app/Contents/Resources/sdk/* ~/sdk
rm -fR _vex*_ _vex*_.dmg sdk_temp/ Payload~
ls ~/sdk # ls just for testing
echo "Building Project"
# Now make the makefile in the set path
make --directory=$1
The reason this was happening is that I used an old version of gcc-arm-none-eabi. The version on apt is badly outdated (v6.3.1 vs v10.2.1).
I was able to use the new version by downloading the tarball available on Arm's site and using the direct paths to compile my code.
There is a newer GCC version available for a newer Ubuntu version; you could browse for another one, or be lazy and enjoy some malpractice with me:
FROM ubuntu:latest
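A minimal sketch of that "newer Ubuntu" route, keeping the packages from the original Dockerfile (the exact toolchain version the distro ships is an assumption; verify with apt-cache policy gcc-arm-none-eabi before relying on it):
FROM ubuntu:22.04
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    build-essential clang \
    gcc-arm-none-eabi binutils-arm-none-eabi \
    p7zip-full curl \
 && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]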

AWS Lambda layer for psycopg2

I'm trying to create a new Lambda layer to import the zip file with psycopg2, because the library pushed my deployment package over 3 MB and I can no longer see the inline code in my Lambda function.
I created a Lambda layer for the following two cases with Python 3.7:
psycopg2_lib.zip (contains psycopg2, psycopg2_binary.libs and psycopg2_binary-2.8.5.dist-info folders)
psycopg2_only.zip which contains only the psycopg2 folder.
I added the newly created layer to my Lambda function.
But, in both cases, my lambda_function throws an error as follows:
{
"errorMessage": "Unable to import module 'lambda_function': No module named 'psycopg2'",
"errorType": "Runtime.ImportModuleError"
}
The error makes it seem as if something is wrong with my zip files and they are not recognized, even though the same code works fine in my deployment package.
Any help or reason would be much appreciated. Thanks!
Not sure if the OP found a solution to this, but in case others land here: I resolved this using the following steps (a consolidated sketch follows the list):
download the code/clone the git from:
https://github.com/jkehler/awslambda-psycopg2
create the following directory tree if building for Python 3.7; otherwise replace 'python3.7' with your version of choice:
mkdir -p python/lib/python3.7/site-packages/psycopg2
choose the Python version of interest and copy the files from the folder downloaded in step 1 into the directory tree from step 2, e.g. if building a layer for Python 3.7:
cp psycopg2-3.7/* python/lib/python3.7/site-packages/psycopg2
create the zip file for the layer. e.g.: zip -r9 psycopg2-py37.zip python
create a layer in the console or cli and upload the zip
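Putting the steps together as a shell sketch (the psycopg2-3.7 folder name comes from the awslambda-psycopg2 repository; the layer name in the last command is just an example):
git clone https://github.com/jkehler/awslambda-psycopg2
mkdir -p python/lib/python3.7/site-packages/psycopg2
cp -r awslambda-psycopg2/psycopg2-3.7/* python/lib/python3.7/site-packages/psycopg2/
zip -r9 psycopg2-py37.zip python
# upload via the console, or with the CLI:
aws lambda publish-layer-version --layer-name psycopg2-py37 \
    --zip-file fileb://psycopg2-py37.zip --compatible-runtimes python3.7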
If you end up on this page in 2022 or later, use the official psycopg2-binary: https://pypi.org/project/psycopg2-binary/
Works well for me. Just
pip install --target ./python psycopg2-binary
zip -r python.zip python
Maintainers of psycopg2 do not recommend using psycopg2-binary, because it bundles linked copies of libpq, libssl, and other libraries that may cause issues in production under certain circumstances.
I can imagine this being an issue when upgrading the PostgreSQL server while the bundled libpq is incompatible. I also had issues with psycopg2-binary on AWS Lambda running in an arm64 environment.
I resorted to building PostgreSQL and psycopg2 in Docker on the linux/arm64 platform, using public.ecr.aws/lambda/python:3.9 as the base image.
FROM public.ecr.aws/lambda/python:3.9
RUN yum -y update && \
yum -y upgrade && \
yum -y install libffi-devel postgresql-devel postgresql-libs zip rsync wget openssl openssl-devel && \
yum -y groupinstall "development tools" && \
pip install pipenv
ENTRYPOINT ["/bin/bash"]
The build script is below and is valid for the aarch64 platform; just change the path to the x86_64 version in the "Prepare psycopg2" step.
#!/usr/bin/env bash
set -e
PG_VERSION="14.5"
cd "$TERRAFORM_ROOT"
if [ ! -f "postgresql-$PG_VERSION.tar.bz2" ]; then
wget "https://ftp.postgresql.org/pub/source/v$PG_VERSION/postgresql-$PG_VERSION.tar.bz2"
tar -xf "$(pwd)/postgresql-$PG_VERSION.tar.bz2"
fi
if [ ! -d "psycopg2" ]; then
git clone https://github.com/psycopg/psycopg2.git
fi
# Build postgres
cd "$TERRAFORM_ROOT/postgresql-$PG_VERSION"
./configure --without-readline --without-zlib
make
make install
# Build psycopg2
cd "$TERRAFORM_ROOT/psycopg2"
make clean
python setup.py build_ext \
--pg-config "$TERRAFORM_ROOT/postgresql-14.5/src/bin/pg_config/pg_config"
# Prepare psycopg2
cd build/lib.linux-aarch64-3.9
mkdir -p python/
cp -r psycopg2 python/
zip -9 -r "$BUNDLE" ./python
# Prepare libpq
cd "$TERRAFORM_ROOT/postgresql-$PG_VERSION/src/interfaces/libpq/"
mkdir -p lib/
cp libpq.so.5 lib/
zip -9 -r "$BUNDLE" ./lib
where $BUNDLE is the path to an already existing .zip file.
I also tried to statically build the psycopg2 binary and link libpq.a; however, I had quite a lot of issues with missing symbols.
From AWS post How do I add Python packages with compiled binaries to my deployment package and make the package compatible with Lambda?:
To create a Lambda deployment package or layer that's compatible with Lambda Python runtimes when using pip outside of a Linux operating system, run the pip install command with manylinux2014 as the value for the --platform parameter.
pip install \
--platform manylinux2014_x86_64 \
--target=my-lambda-function \
--implementation cp \
--python 3.9 \
--only-binary=:all: --upgrade \
psycopg2-binary
You can then zip the contents of the my-lambda-function directory.
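For example, a minimal sketch of that last step (the zip file name is arbitrary; for a function package your handler file also needs to sit at the zip root alongside the installed packages):
cd my-lambda-function
zip -r ../my-deployment-package.zip .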

Cannot build dockerfile with sdkman

I am entirely new to the concept of Docker. I am creating the following Dockerfile as an exercise.
FROM ubuntu:latest
MAINTAINER kesarling
RUN apt update && apt upgrade -y
RUN apt install nginx curl zip unzip -y
RUN apt install openjdk-14-jdk python3 python3-doc clang golang-go gcc g++ -y
RUN curl -s "https://get.sdkman.io" | bash
RUN bash /root/.sdkman/bin/sdkman-init.sh
RUN sdk version
RUN yes | bash -c 'sdk install kotlin'
CMD [ "echo","The development environment has now been fully setup with C, C++, JAVA, Python3, Go and Kotlin" ]
I am using SDKMAN! to install Kotlin. The problem initially was that instead of using RUN bash /root/.sdkman/bin/sdkman-init.sh, I was using RUN source /root/.sdkman/bin/sdkman-init.sh, but that gave an error saying source not found. So I tried RUN . /root/.sdkman/bin/sdkman-init.sh, and it did not work either. RUN bash /root/.sdkman/bin/sdkman-init.sh seems to work, in that it does not give any error and moves on to the next command. However, Docker then gives an error saying sdk: not found.
Where am I going wrong?
It should be noted that these steps worked like a charm on my host distribution (the one on which I'm running Docker), which is Pop!_OS 20.04.
The script /root/.sdkman/bin/sdkman-init.sh defines the sdk function in the shell that sources it.
source is a built-in of bash rather than a binary somewhere on the filesystem.
The source command executes the file in the current shell.
Each RUN instruction executes its commands in a new shell, in a new layer on top of the current image, and commits the result.
The resulting committed image is used for the next step in the Dockerfile, so anything sourced or exported in one RUN does not carry over to the next.
Try this:
FROM ubuntu:latest
MAINTAINER kesarling
RUN apt update && apt upgrade -y
RUN apt install nginx curl zip unzip -y
RUN apt install openjdk-14-jdk python3 python3-doc clang golang-go gcc g++ -y
RUN curl -s "https://get.sdkman.io" | bash
RUN /bin/bash -c "source /root/.sdkman/bin/sdkman-init.sh; sdk version; sdk install kotlin"
CMD [ "echo","The development environment has now been fully setup with C, C++, JAVA, Python3, Go and Kotlin" ]
SDKMAN in Ubuntu Dockerfile
tl;dr
the sdk command is not a binary but a bash script loaded into memory
Shell sessions are a "process", which means environment variables and declared shell functions only exist for the duration of that shell session, which lasts only as long as the RUN command.
Manually tweak your PATH
RUN apt-get update && apt-get install curl bash unzip zip -y
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh" \
&& sdk install java 8.0.275-amzn \
&& sdk install sbt 1.4.2 \
&& sdk install scala 2.12.12
ENV PATH=/root/.sdkman/candidates/java/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/scala/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/sbt/current/bin:$PATH
Full Version
Oh wow, this was a journey to figure out. Below, each line is commented as to why certain commands are run.
I learnt a lot about how Unix works, how SDKMAN works, how Docker works, and why the intersection of the three gives very unusual behaviour.
# I am using a multi-stage build so I am just copying the built artifacts
# from this stage to keep final image small.
FROM ubuntu:latest as ScalaBuild
# Switch from `sh -c` to `bash -c` as the shell behind a `RUN` command.
SHELL ["/bin/bash", "-c"]
# Usual updates
RUN apt-get update && apt-get upgrade -y
# Dependencies for sdkman installation
RUN apt-get install curl bash unzip zip -y
#Install sdkman
RUN curl -s "https://get.sdkman.io" | bash
# FUN FACTS:
# 1) the `sdk` command is not a binary but a bash script loaded into memory
# 2) Shell sessions are a "process", which means environment variables
# and declared shell functions only exist for
# the duration that shell session exists
RUN source "$HOME/.sdkman/bin/sdkman-init.sh" \
&& sdk install java 8.0.275-amzn \
&& sdk install sbt 1.4.2 \
&& sdk install scala 2.12.12
# Once the real binaries exist these are
# the symlinked paths that need to exist on PATH
ENV PATH=/root/.sdkman/candidates/java/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/scala/current/bin:$PATH
ENV PATH=/root/.sdkman/candidates/sbt/current/bin:$PATH
# This is specific to running a minimal empty Scala project and packaging it
RUN touch build.sbt
RUN sbt compile
RUN sbt package
FROM alpine AS production
# setup production environment image here
COPY --from=ScalaBuild /root/target/scala-2.12/ $INSTALL_PATH
ENTRYPOINT ["java", "-cp", "$INSTALL_PATH", "your.main.classfile"]
Generally you want to avoid using "version manager" type tools in Docker; it's better to install a specific version of the compiler or runtime you need.
In the case of Kotlin, it's a JVM application distributed as a zip file so it should be fairly easy to install:
FROM openjdk:15-slim
ARG KOTLIN_VERSION=1.3.72
# Get OS-level updates:
RUN apt-get update \
&& apt-get install --no-install-recommends --assume-yes \
curl \
unzip
# and if you need C/Python dependencies, those too
# Download and unpack Kotlin
RUN cd /opt \
&& curl -LO https://github.com/JetBrains/kotlin/releases/download/v${KOTLIN_VERSION}/kotlin-compiler-${KOTLIN_VERSION}.zip \
&& unzip kotlin-compiler-${KOTLIN_VERSION}.zip \
&& rm kotlin-compiler-${KOTLIN_VERSION}.zip
# Add its directory to $PATH
ENV PATH=/opt/kotlinc/bin:$PATH
The real problem with version managers is that they heavily depend on the tool setting environment variables. As @JeevanRao notes in their answer, each Dockerfile RUN command runs in a separate shell in a separate container, and any environment variable settings within that command get lost for the next command.
# Does absolutely nothing: environment variables do not stay set
RUN . /root/.sdkman/bin/sdkman-init.sh
Since an image generally contains only one application and its runtime, you don't need the ability to change which version of the runtime or compiler you're using. My Dockerfile example passes it as an ARG, so you can change it in the Dockerfile or pass a docker build --build-arg KOTLIN_VERSION=... option to use a different version.
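For instance, a quick usage sketch of that build argument (1.4.32 is just an example release that ships a kotlin-compiler zip; the image tag is arbitrary):
docker build --build-arg KOTLIN_VERSION=1.4.32 -t my-kotlin-image .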

getting "cannot find package" trying to build my application in a docker container

Here is my Dockerfile.
FROM ubuntu
MAINTAINER me <my#email.com>
RUN apt-get update && apt-get install -y \
build-essential \
curl \
git \
make
# Get and compile go
RUN curl -s https://go.googlecode.com/files/go1.2.1.src.tar.gz | tar -v -C /usr/local -xz
RUN cd /usr/local/go/src && ./make.bash --no-clean 2>&1
ENV PATH /usr/local/go/bin:/go/bin:$PATH
ENV GOPATH /go
RUN go get github.com/gorilla/feeds
WORKDIR /go
CMD go version && go install feed && feed
It builds just fine:
sudo docker build -t ubuntu-go .
but when I run it I get a package error:
sudo docker run -v /home/rbucker/go:/go --name go ubuntu-go
The error looks like:
src/feed/feed.go:7:2: cannot find package "github.com/gorilla/feeds" in any of:
/usr/local/go/src/pkg/github.com/gorilla/feeds (from $GOROOT)
/go/src/github.com/gorilla/feeds (from $GOPATH)
It's odd, because "go install" is not installing the dependencies, while the previous "go get github.com/gorilla/feeds" completes without errors. So presumably I have a path or environment problem, but all of the examples look just like this one.
PS: my code is located in /go/src/feed (feed.go)
package main
import (
"net/http"
"time"
"github.com/gorilla/feeds"
)
. . .
UPDATE: when I performed the "go get" manually and then launched the "run", it seemed to work. So it appears that the "RUN go get" is storing my files in the ether instead of in my host's volume.
sudo docker run -v /home/rbucker/go:/go --name go ubuntu-go /bin/bash
then
sudo docker run -v /home/rbucker/go:/go --name go ubuntu-go
(the files were located in my ~/go/src/github.com and ~/go/pkg folders.)
UPDATE: It occurs to me that during the BUILD step the /go volume has not been attached to the Docker image, so it's essentially assigned to nil. But then, during the run, the "go install" should have retrieved its deps.
FINALLY: this works but is clearly not the preferred method:
CMD go get github.com/gorilla/feeds && go version && go install feed && feed
Notice that I performed the "go get" in the CMD rather than in a RUN.
I solved something similar to this by using
go get ./...
Source:
https://coderwall.com/p/arxtja/install-all-go-project-dependencies-in-one-command
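To tie that together, a minimal sketch of baking the source and its dependencies into the image at build time, so the container does not depend on a host volume being mounted at /go (GOPATH-era layout; the /go/src/feed path and the feed package name come from the question, and the golang:1.4 tag is just an example of a GOPATH-era image):
FROM golang:1.4
# copy the project into the image's GOPATH
COPY . /go/src/feed
WORKDIR /go/src/feed
# fetch all remote dependencies of the package at build time
RUN go get ./...
RUN go install feed
CMD ["feed"]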
