Build, deploy, push docker images for golang static binaries - go

I am looking for a solution to a simple configuration problem; it has been nagging me for quite some time now. :)
I have a golang project on github which gives me a static binary, and uses godeps.
Now I want to ensure that the godep go install ... command can be run after a git clone, and that a Docker container can then be built locally from the newly built binary.
As an option, the user should be able to push it to docker hub or a private repo as applicable.
I am thinking of using Makefiles, but that seems too complicated (set the GOPATH, then godep build, modify the Dockerfile dynamically to point to the location of the binary, then docker build).
Is there an easier way to do it?

So far, when I've been in your situation, I've always come up with a Makefile that does all the work, which, like you said, has never been simple. I've done it using at least two different approaches, depending on how tightly you want to couple the build process to the development environment.
The simplest way is, as you say, to put the steps you would run by hand into a Makefile. You can use Dockerfile ARG instructions to parameterize the binary name so you don't have to modify the Dockerfile during the build.
Here you have a quick and dirty (working) Makefile I've just made up for getting you started:
APP_IMAGE=group/example
APP_TAG=1.0
APP_BINARY=example

.PHONY: clean image binary

all: image

clean:
	if [ -r $(APP_BINARY) ]; then rm $(APP_BINARY); fi
	if [ -n "$$(docker images -q $(APP_IMAGE):$(APP_TAG))" ]; then docker rmi $(APP_IMAGE):$(APP_TAG); fi

image: binary
	docker build --build-arg APP_BINARY=$(APP_BINARY) -t $(APP_IMAGE):$(APP_TAG) $(dir $(abspath $(lastword $(MAKEFILE_LIST))))

binary: $(APP_BINARY)

$(APP_BINARY): main.go
	go build -o $@ $^
That Makefile expects to be in the same directory as the Dockerfile.
Here is a minimal one (which works):
FROM alpine:3.5
ARG APP_BINARY
ADD ${APP_BINARY} /app
ENTRYPOINT /app
I've tested this with a main.go in the same top-level project directory as both the Makefile and the Dockerfile; if your main.go is nested inside a subdirectory ("ROOT/cmd/bla" is commonplace), you should change the "go build" line to account for that, as sketched below.
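For example, assuming a hypothetical cmd/bla/main.go layout, the last rule of the Makefile could become something like:
$(APP_BINARY): cmd/bla/main.go
	go build -o $@ ./cmd/bla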
Although I've been doing things like that for a while, now that I see (and think about) your question, it strikes me that a dedicated tool which knows specifically how to do this would be great to have: a tool that mimics "go build/get/install" but can build Docker images. So, just as you can run the following command to get a binary:
go install github.com/my/program
You could also run the following command to get a simple docker image:
goi install github.com/my/program
How does that sound? Is there something like that in existence? Should I get started?

Provide a base image (with GOPATH, godep, ... pre-configured).
Create a Dockerfile based on that base image; then the user only needs to alter the COPY path.
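A rough sketch of what that could look like, assuming a hypothetical mycompany/godep-builder base image that already has GOPATH and godep configured (the image name and paths are placeholders):
FROM mycompany/godep-builder
# only this path needs to change per project
COPY . /go/src/github.com/my/program
WORKDIR /go/src/github.com/my/program
RUN godep go install ./...
ENTRYPOINT ["/go/bin/program"]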

Related

Modify the Docker image and save it so the content is not lost when the container is built again. How?

Hi, I need to modify a Docker image from the Autoware_AI repository after building it. The problem is:
A) I build the image by running a .sh file:
cd $WORKING_DIRECTORY/docker/generic
./run.sh -t 1.14.0
It is specifically from Autoware: https://www.svlsimulator.com/docs/system-under-test/autoware-instructions/
B) I modify the scripts inside the packages contained in the Autoware folder
C) When I exit the container and later enter it again, the modifications are not there anymore, of course, because the image is built from the Dockerfile from scratch...
To find a solution I have tried two different approaches:
To modify the container and save it as described here: https://www.scalyr.com/blog/create-docker-image/
Issue: when I use another terminal and try to add a .txt file to the running Autoware_AI container in order to modify it, the Autoware_AI container does not appear as active (but it is). Only other containers are available when I try to copy a file to Autoware_AI.
Commit Changes To a Docker image: https://phoenixnap.com/kb/how-to-commit-changes-to-docker-image
Issue: problems connecting to the Autoware_AI server and running the ROS packages. This problem does not happen when building the original Dockerfile with the .sh script.
The complete description of my problem, as well as the terminal output from my attempts, can be found here:
https://answers.ros.org/question/376512/fork-autoware_ai-repository-and-create-docker-image/?answer=376583#post-id-376583
https://get-help.robotigniteacademy.com/t/fork-autoware-ai-repository-and-create-docker-image/9533/4
I am fairly new to forking and changing Docker images. I do not understand how to fix this or how to create my custom Docker image and make it functional.
Thanks very much in advance!
As David Maze suggested, one feasible way would be to change the Dockerfile and then build my image from it. It is a good idea. In my case, however, additional steps were required, because I had a build.sh script that called different Dockerfiles to build the image, and this build.sh file also installed some ROS packages and other dependencies.
Even though build.sh also needed to change, the modifications to the Dockerfile solved 90% of the issue. (My original Dockerfile and the modified version were posted as screenshots in the original question and are not reproduced here.)
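As a purely hypothetical sketch of the kind of change described in the steps below (the paths and directory layout are my assumptions, not the actual Autoware Dockerfile), the modified part might look like:
# Use the locally edited .repos file instead of downloading the upstream one,
# then fetch the sources it lists with vcstool.
COPY autoware.ai.repos /home/autoware/autoware.ai/autoware.ai.repos
RUN mkdir -p /home/autoware/autoware.ai/src && \
    vcs import /home/autoware/autoware.ai/src < /home/autoware/autoware.ai/autoware.ai.repos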
To download my modified ROS code into the image, instead of the ROS packages from the Autoware repo, I needed to:
1 - Copy the autoware.ai.repos file from here:
https://raw.githubusercontent.com/Autoware-AI/autoware.ai/1.14.0/autoware.ai.repos
to my local Docker folder (docker/generic), and unpack the repositories with the vcs import command, as the modified Dockerfile does (see the sketch above)...
2 - Edit autoware.ai.repos to change the address of some of the repositories it contains to my personal GitHub:
I removed the lines:
  autoware/visualization:
    type: git
    url: https://github.com/Autoware-AI/visualization.git
    version: 1.14.0
And replaced them with:
  autoware/visualization:
    type: git
    url: https://github.com/marcusvinicius178/visualization
Afterwards I followed the build instructions in case 3 here: https://github.com/Autoware-AI/autoware.ai/wiki/Generic-x86-Docker#run-an-autoware-docker-container
$ ./build.sh
$ ./run.sh -t local
I know there may be a more professional way to work with Docker images and to build a new Dockerfile based on the original one, but I am not that expert in Docker, and this way my problem was solved.

Docker Build/Deploy using Bash Script

I have a deploy script that I am trying to use on my server for CD, but I am running into issues writing the bash script to complete some of my required steps, such as running npm and the migration commands.
How would I go about getting into the container's bash from this script, running the commands below, and then exiting so the script can finish bringing up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json ./
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete those volumes: blocks and use the code that's built into the image.)
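For illustration, the setup being warned against often looks roughly like this in docker-compose.yml (the service and path names are made up); the volumes: block is what you would delete:
services:
  web:
    build: .
    volumes:
      - .:/app              # bind-mounts host code over the image's code
      - /app/node_modules   # anonymous volume that keeps the old node_modules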
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$@"
Make this script executable and make it the image's ENTRYPOINT, leaving CMD as the command that actually starts the application. On every container startup it will run the migrations and then run the main container command, whatever it may be.
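Wired into the Dockerfile, that could look something like this (the script name and the final CMD are just examples for this kind of app):
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# whatever normally starts the application
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]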
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to manually run migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up --force-recreate

Heroku with container stack and ENTRYPOINT instruction continues to run with "/bin/sh -c"

I'll first preface with relevant sections in Heroku's documentation on their container registry and runtime for Docker deploys.
The Docker commands and runtime section states:
ENTRYPOINT is optional. If not set, /bin/sh -c will be used
CMD will always be executed by a shell so that config vars are made
available to your process; to execute single binaries or use images
without a shell please use ENTRYPOINT
I want to highlight that last sentence (emphasis added):
to execute single binaries or use images without a shell please use ENTRYPOINT
Furthermore, in the Unsupported Dockerfile commands section.
SHELL - The default shell for Docker images is /bin/sh, you can override
with ENTRYPOINT if necessary.
My understanding from this is that if the ENTRYPOINT instruction exists in my Dockerfile, then Heroku's container stack will not execute the command with /bin/sh. But apparently my understanding is wrong, because that seems to be still happening.
Here is an illustration of what my Dockerfile looks like:
FROM golang:1.14 as build
# ... builds the binary
FROM scratch
# ... other irrelevant stuff
COPY --from=build /myprogram /myprogram
ENTRYPOINT ["/myprogram"]
The final image contains the binary at /myprogram in a scratch base image. Since scratch doesn't have /bin/sh, it is necessary to override this. According to their documentation, this is done with ENTRYPOINT.
However, when I deploy it to Heroku, it appears to still execute with a shell. To illustrate:
% heroku ps -a my-app-name
== web (Free): /bin/sh -c --myflag1 --myflag2 (1)
Which means it's ultimately executing:
/myprogram /bin/sh -c --myflag1 --myflag2
Which is obviously not what I want. But what happened to this part of the documentation (which I highlighted earlier)?:
to execute single binaries or use images without a shell please use ENTRYPOINT
...
The heroku.yml file looks like the following:
---
build:
  docker:
    web: Dockerfile
run:
  web:
    command:
      - --myflag1
      - --myflag2
I still have the same problem with the shell form and the exec form. Meaning, I also tried with the heroku.yml file looking like this:
---
build:
  docker:
    web: Dockerfile
run:
  web: --myflag1 --myflag2
Now, I know that I can get everything to "just work" by basing the final image on an image that has /bin/sh, removing the ENTRYPOINT instruction, and specifying the command with /myprogram. I don't need to use scratch, but I want to, and I should be able to use scratch, right? It's what I've been using for many years to run my statically linked binaries in containers, and I have never come across problems like this when deploying on other platforms.
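For reference, the workaround just described would look roughly like this (the busybox base image and the exact heroku.yml wording are my assumptions, not something verified on Heroku); the final stage becomes:
FROM busybox
COPY --from=build /myprogram /myprogram
# no ENTRYPOINT; the full command comes from heroku.yml
and the run section specifies the binary explicitly:
run:
  web: /myprogram --myflag1 --myflag2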
So am I misunderstanding their documentation? What do I need to do to get rid of this /bin/sh shenanigan?

How do I debug a failing additional Dockerfile?

I have created .ddev/web-build/Dockerfile. It builds, but it doesn't do what I want. How do I figure out why? DRUD_DEBUG=1 ddev start doesn't show any additional output. I don't know the proper docker-compose command to try building it manually either. This does not work:
cd .ddev
docker-compose build web
because it needs specific environment variables that DDEV injects.
On current versions of ddev (v1.18+) you can
cd .ddev
~/.ddev/bin/docker-compose -f .ddev-docker-compose-full.yaml build web --no-cache
and you'll see the full build go by. You can use additional build arguments as well to change the output format, etc. For example, ~/.ddev/bin/docker-compose -f .ddev-docker-compose-full.yaml build web --no-cache --progress=plain
You can review the entire Dockerfile that's being built at .ddev/.webimageBuild/Dockerfile.
See docs.

docker build hangs in directory with many files

Windows 10. In the folder I have just:
app (directory with many files)
Dockerfile (the simplest possible Dockerfile)
I run "docker build ." and it just hangs.
If I remove the "app" directory, the build runs OK.
The Dockerfile has just one line:
FROM node
I didn't find any issues like that. It feels like it tries to scan the directory or something.
Any advice?
UPD: It seems that I should use .dockerignore https://docs.docker.com/engine/reference/builder/#/dockerignore-file
When you run docker build ... the Docker client sends the build context (the recursive contents of the directory) via REST to the Docker daemon for building. If that context is large, this can take some time (depending on a variety of factors: whether your daemon is local or remote, the platform, etc.).
How long are you letting it hang before giving up? It could be that it's still just working, or that the context was so large the client or daemon ran into an issue. Checking the (client / daemon) logs would help debug that.
And yes, a .dockerignore file (basically a .gitignore but for Docker context) is probably what you're looking for, unless you need the contents of the app directory during your build.
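A minimal .dockerignore along those lines might be (assuming you don't actually need these paths during the build):
app/
node_modules/
.git/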
Your Dockerfile should be put in a directory that includes only its build context. For example, if you are building a Spring Boot app, you can put the Dockerfile right under /app, as shown in this official Docker sample.
Docker's documentation:
In most cases, it’s best to start with an empty directory as context and keep your Dockerfile in that directory. Add only the files needed for building the Dockerfile.
Warning: Do not use your root directory, /, as the PATH as it causes the build to transfer the entire contents of your hard drive to the Docker daemon.
I've seen simple Docker examples put the Dockerfile in the root directory, but for more complicated examples like the one linked above, the Dockerfile is put only in its relevant directory. You can dig through the dockersamples repository and find your case.
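As a rough illustration of that kind of layout (directory names are only an example), building from the app directory keeps the context small:
project/
├── docs/              # large, not needed for the image, stays out of the context
└── app/
    ├── Dockerfile     # build with: docker build ./app
    └── src/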
