Building a small container for running compiled Go code

From
https://docs.docker.com/articles/baseimages/
I am trying to build a base image to run compiled go code, from:
https://github.com/tianon/dockerfiles/tree/master/true
I have tried to copy the true.go into Docker; then: exec: "/true": permission denied
I also tried to bash into it; then: "bash": executable file not found in $PATH
I also tried to use debootstrap raring raring > /dev/null; then: "bash": executable file not found in $PATH
How do you do this?
Thanks

I'm not sure I entirely follow.
The Dockerfile from the linked project builds an image with nothing in it except an executable - there will be no shell or compiler, so running bash will be impossible. It does this by using the special scratch base image, which is simply a completely empty filesystem.
If you clone the repository and build the image using the Dockerfile (docker build -t go-image .), it will simply copy the executable directly into the image (note the Dockerfile copies the executable true-asm, not the source code true.go). If you then use docker run to start the image, it will run it (docker run go-image).
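For reference, that Dockerfile is essentially of this shape (a sketch from memory; the exact instructions in the repository may differ):
FROM scratch
COPY true-asm /true
CMD ["/true"]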
Does that make sense? The code is compiled locally (or by another container) and the compiled, stand-alone executable is placed by itself into the image.
Generally, you don't want to do this and definitely not when you're beginning - it will be easier for you to use a golang or debian image which will include basic tools such as a shell.
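As a sketch of that simpler route (the golang tag and file names here are only illustrative, not from the linked project):
FROM golang:1.21
WORKDIR /app
COPY true.go .
RUN go build -o /usr/local/bin/true-app true.go
CMD ["true-app"]
This trades a much larger image for having the Go toolchain and a shell available inside the container.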

Related

Cannot sh into rust image (made via multistage build)

My Rust app Dockerfile is below, and it is working fine:
# Generate a recipe file for dependencies
FROM rust as planner
WORKDIR /app
RUN cargo install cargo-chef
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
# Building our dependencies
FROM rust as cacher
WORKDIR /app
RUN cargo install cargo-chef
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
# Builder Image
FROM rust as builder
COPY . /app
WORKDIR /app
COPY --from=cacher /app/target target
COPY --from=cacher /usr/local/cargo /usr/local/cargo
RUN cargo build --release
# Final stage
FROM gcr.io/distroless/cc-debian11
COPY --from=builder /app/target/release/melt-agent-host /app/melt-agent-host
WORKDIR /app
But I also want to be able to bash into this image.
Is there any way to install bash in this distroless image?
I also tried some other base images for the last stage that provide bash functionality, e.g. alpine and busybox, but with those images I am facing other errors about a missing libgcc.so.
To sum up, I need a small base image for the last stage that is compatible with the Rust binary and also provides bash functionality.
Your final image is being built from a "distroless" base image. This doesn't include any userspace tools – there probably isn't even a /bin/sh binary – and that's okay. You won't be able to docker exec into this image and that's probably not a problem.
Your image just contains a set of shared libraries, some control files in /etc, and the one compiled binary (consider copying it into /usr/bin or /usr/local/bin so it's easier to run). It's not clear to me what you'd do with a docker exec shell in this case.
If for some reason you do need a shell and the various tools that normally go with it (ls, grep, etc.) you need some sort of "normal" base image, not a "distroless" image or the special scratch image. If you've built a static binary then it's possible the busybox image will work; if you have a dynamic binary, it's possible alpine will work as well (it will have a POSIX shell at /bin/sh but not GNU bash) but you might need to build your final image stage FROM debian or FROM ubuntu.
# ... all of your build stages ...
FROM debian:bullseye
COPY --from=builder /app/target/release/melt-agent-host /usr/local/bin
CMD ["melt-agent-host"]
Well, I guess you can just have an instruction in your Dockerfile that COPYs /bin/bash in from somewhere, and then execute that. I don't know if bash depends on any shared libraries which you are missing, but that would be a starting point. If shared libraries are needed too, you'll have to copy those as well.
I would also suggest that whatever you think you're gaining by having a distroless image, you aren't really gaining. Find a minimalist distro instead... and most of the distro images built for Docker are probably minimal anyway.
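A concrete variation of that suggestion (only a sketch; the busybox:musl tag and the paths are assumptions): copy a statically linked busybox instead of bash, which sidesteps the shared-library question entirely:
FROM gcr.io/distroless/cc-debian11
# statically linked shell and basic tools, no shared libraries required
COPY --from=busybox:musl /bin/busybox /busybox
COPY --from=builder /app/target/release/melt-agent-host /app/melt-agent-host
WORKDIR /app
A shell is then available with docker exec -it <container> /busybox sh.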

Installing without package manager, why does executable binary fail with "command not found" unless I make the commands start with "./"?

I'm learning to use GNU/Linux and I want to know how to install programs that cannot be installed with the package manager.
I downloaded the tarball with the Linux 64-bit Binaries (including one called "haxelib"), extracted it, changed directory in the terminal to their location (~/Downloads/things/haxe_20201231082044_5e33a78aa/), and used chmod to make them executable.
If I try a command such as haxelib list, then the terminal returns
haxelib: command not found
If I try ./haxelib list (the same command but with ./ at the start) instead, then the command works as expected.
Why can't I use it without the ./? Programs installed with the package manager can be used without the ./.
Edit: I should probably also ask: where should I put the files from the tarball? Should they all go together in the same place? I have a feeling that a folder named "things" in my Downloads folder is not the best place for them.
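For context, the shell only searches the directories listed in $PATH for bare command names; ./haxelib works because it is an explicit path. A sketch of making the command available by name, using the directory from the question (whether that is a sensible permanent home is a separate question):
export PATH="$HOME/Downloads/things/haxe_20201231082044_5e33a78aa:$PATH"
haxelib list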

Go doesn't find /usr/share/zoneinfo in docker container

In a Go program I call time.LoadLocation("Europe/Berlin") and it returns an error saying open /usr/local/go/lib/time/zoneinfo.zip: no such file or directory, even though in the container (running alpine:3.9 with tzdata installed) /usr/share/zoneinfo/Europe/Berlin exists and, according to the docs, should take precedence over the zip file. The same program finds the file on my machine (Arch Linux). The executable got statically linked on my machine and then copied into the container. I tried Go 1.11.5 and 1.10.3.
I built the executable with:
CGO_ENABLED=0 go build -a -ldflags "-s" -o gocake_static
I'm looking for any ideas that help me identify the problem.
If you only need one fixed zone, maybe time.FixedZone can solve your problem.
It does not require zoneinfo.zip, so there is no need to download it or set the ZONEINFO env var in the Dockerfile.
For example:
loc := time.FixedZone("Europe/Berlin", 1*60*60)
fmt.Println(time.Now().In(loc).Format("2006-01-02 15:04:05"))
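If the program needs real timezone rules rather than a fixed offset (time.FixedZone will not follow daylight saving time), newer Go releases (1.15+) can embed the IANA database into the binary itself, which removes the dependency on the container's filesystem. A minimal sketch:
package main

import (
	"fmt"
	"time"

	_ "time/tzdata" // embeds the timezone database; used if none is found on disk
)

func main() {
	loc, err := time.LoadLocation("Europe/Berlin")
	if err != nil {
		panic(err)
	}
	fmt.Println(time.Now().In(loc).Format("2006-01-02 15:04:05"))
}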

View docs from GitHub repository

I have cloned the repository https://github.com/hyperledger/sawtooth-supply-chain. There is a docs folder in this repository that has a folder named 'source' and files named 'Makefile' and 'supply-chain-build-docs'.
I want to know whether, if I build the contents of this directory, I would be able to view additional documentation beyond what is in the ReadMe.md file.
If so, how should I build and view the files? I have installed sphinx.
In which port will I be able to see the html documentation after the build?
If you open the supply-chain-build-docs file you will notice that the instructions to build the docs are included inside:
Description:
Builds the environment needed to build the Sawtooth Supply Chain docs
Running the image will put the docs in
sawtooth-supply-chain/docs/build on your local machine.
Build:
$ cd sawtooth-supply-chain
$ docker build . -f docs/supply-chain-build-docs -t supply-chain-build-docs
Run:
$ cd sawtooth-supply-chain
$ docker run -v $(pwd):/project/sawtooth-supply-chain supply-chain-build-docs
This does assume, though, that you already have Docker installed. The guide to install it on Ubuntu can be found here.
In which port will I be able to see the html documentation after the build?
Once you run both of the steps above you will find a neat PDF at sawtooth-supply-chain/docs/build/latex, named sawtooth.pdf, ready for you :)

Spawn a new process in a Docker container whose image is built from scratch

I'm trying to build a minimal Docker image (FROM scratch) that contains 2 executable binaries, both built with Go. The entrypoint is set to the first one. It takes some data from the image, transforms it using environment variables, starts a new process executing the second binary, and pipes the data as input to the spawned process.
FROM scratch
COPY bin /opt/my-app
ENTRYPOINT ["/opt/my-app/first", "--run", "/opt/my-app/second"]
When I build this image on my Mac, everything works fine. But when it's created on our build server running Linux, the first process cannot start the second one. It fails with the error "fork/exec /opt/my-app/second: no such file or directory". However, the "second" binary does exist. In both cases docker engine 1.13.1 is used.
It also works if parent image is changed from scratch to debian:jessie.
Are there any limitations of the scratch image that I'm not aware of?
With a scratch image there will not be a libc (or any shared libs). If it works fine on debian, then I suspect the binary is not statically linked, which is the normal default when cgo is in use. Try CGO_ENABLED=0 go build -a -installsuffix cgo as seen here: http://www.blang.io/posts/2015-04_golang-alpine-build-golang-binaries-for-alpine-linux/
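A sketch of how that fits the Dockerfile above, reusing its bin/ directory (the package paths ./first and ./second are assumptions, not from the question):
CGO_ENABLED=0 go build -a -installsuffix cgo -o bin/first ./first
CGO_ENABLED=0 go build -a -installsuffix cgo -o bin/second ./second
ldd bin/first   # "not a dynamic executable" confirms scratch is missing nothing
docker build -t my-app .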
