Does it make sense to install the runtime on docker? - go

I'm considering deploying some apps on Docker (AWS Beanstalk being the provider). Going through various resources I've found it's recommended to use a base image, in my case the official golang image, but I'm wondering why you would need the runtime (i.e. Go) installed on the container. Isn't the binary all you should deploy on the Docker container?

I'm not a Docker aficionado, but in general the Go runtime is compiled into your binary, and you don't need anything but that. The Go image includes the SDK, not just the runtime; it's only useful if you want to build your app in the container. Otherwise you don't need it.
From that image's doc: The most straightforward way to use this image is to use a Go container as both the build and runtime environment.
So maybe it's a Docker pattern to just build your source on the image, or it's a habit some people carry over from interpreted languages. Personally, when I'm deploying Go applications (not via Docker), I build an artifact on a CI machine and deploy that, not the source.

I prefer to statically compile and then build a minimal container with only the user space you need; here is an example.
I personally like to build inside the official container and then copy the binary to my deployment container. I inject Docker into my build container with something like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker)
That way I build my Docker container within my build container and just add the binary with a Dockerfile ADD.
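A simpler way to get the same build-then-copy result today is a multi-stage Dockerfile, which needs no socket mounting (a minimal sketch; the Go version tag and app name are illustrative):

```dockerfile
# build stage: the official golang image provides the SDK
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a static binary that runs in a minimal base image
RUN CGO_ENABLED=0 go build -o /app .

# runtime stage: ships only the binary, no Go toolchain
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains nothing but the statically linked binary, which is exactly the point made above: the Go runtime is already inside it.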

Related

Is there any alternative to Jib for a Golang project, which creates the Docker image without using a Dockerfile?

I want to create the Docker image of a Golang project using cloudbuild.yaml but without using a Dockerfile.
Is there any tool available for Golang which is an alternative to Jib and creates the Docker image without using a Dockerfile?
You can check the CNCF project: https://buildpacks.io/
Buildpacks are broadly similar to Jib.
Here is the GitHub link for the Go buildpack: https://github.com/paketo-buildpacks/go
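As a sketch of how that looks in practice (assuming the pack CLI is installed and a Docker daemon is running; the app name is illustrative, and the builder image name should be checked against the current Paketo docs):

```shell
# detects a Go module in the current directory and builds an OCI image,
# no Dockerfile required
pack build my-go-app --builder paketobuildpacks/builder-jammy-base
```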

Building multi arch docker image with tags

Currently I have a Spring application which was made to run on Docker, for which we followed the documentation below:
https://spring.io/guides/gs/spring-boot-docker/
I believe the command docker build -t springio/gs-spring-boot-docker . builds an image for the x86 platform only.
As I'm using an x86 machine for development, how does one build a Docker image for ARM and x86 with the Docker CLI? I want an image that can run on a server (x86) and a Raspberry Pi (ARM), with appropriate tags like:
org/app:x86
org/app:arm
You can do the builds by adding a build argument. For example:
docker build -t springio/gs-spring-boot-docker:arm32v7 --build-arg ARCH=arm32v7/ .
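For the --build-arg approach to work, the Dockerfile has to prefix the base image with that argument; a sketch (the base image and jar name are illustrative, following the Spring guide's layout):

```dockerfile
# ARCH is empty for x86 builds and e.g. "arm32v7/" for ARM builds
ARG ARCH=
FROM ${ARCH}openjdk:8-jre
COPY target/gs-spring-boot-docker.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```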
An easier way to build the image for all required platforms with one CLI invocation is to use the experimental build engine buildx. See also the instructions on the Docker blog.
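A sketch with buildx (the image name is illustrative; --push requires being logged in to a registry):

```shell
# one-time: create and select a builder that supports multi-platform builds
docker buildx create --use
# build for both platforms and push a single multi-arch image
docker buildx build --platform linux/amd64,linux/arm/v7 -t org/app:latest --push .
```

Note that buildx pushes one tag containing a manifest list, so separate :x86 and :arm tags are not needed; each client pulls the variant matching its own architecture automatically.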

How to deploy image to kubernetes with jib and maven

I have an environment where I can simply push images (created with Jib) to a local repository. I now want to be able to deploy these on Kubernetes, but from the "safety" of Maven.
I know I can spin up some Skaffold magic, but I don't want to have it installed separately. Is there some Jib-Skaffold workflow I can use to continuously force Skaffold to redeploy on source changes (without running it on the command line)?
Is there some Skaffold plugin? I really like what they have here, but the proposed kubernetes-dev-maven-plugin is probably internal only.
Skaffold can monitor your local code and detect changes that trigger a build and deployment in your cluster. This is built into Skaffold via dev mode, so it solves the redeploy-on-source-change part.
As for the workflow, Jib is a supported builder for Skaffold, so the same dynamic applies.
Although these features automate the tasks, you still need to run skaffold dev once and let it run in the "background".
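A sketch of a skaffold.yaml wiring the two together (the image name and manifest path are illustrative; check the Skaffold docs for the current apiVersion):

```yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: registry.local/myapp
      jib: {}          # delegate the image build to the Jib Maven/Gradle plugin
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
```

With this config, skaffold dev rebuilds through Jib and redeploys to the cluster on every source change.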

Using Maven artifact in Google Cloud Docker build

I have a Google Cloud container build with the following steps:
gcr.io/cloud-builders/mvn to run an mvn clean package command
gcr.io/cloud-builders/docker to create a Docker image
My docker image includes and will run tomcat.
Both these steps work fine independently.
How can I copy the artifacts built by step 1 into the correct folder of my docker container? I need to move either the built wars or specific lib files from step 1 to the tomcat dir in my docker container.
Echoing out the /workspace and /root dir in my Dockerfile doesn't show the artifacts. I think I'm misunderstanding this relationship.
Thanks!
Edit:
I ended up changing the Dockerfile to set the WORKDIR to /workspace
and
COPY /{files built by maven} {target}
The working directory is a persistent volume mounted in the builder containers, by default at /workspace. You can find more details in the documentation here: https://cloud.google.com/container-builder/docs/build-config#dir
I am not sure what is happening in your case, but there is an example with a Maven step and a Docker build step in the documentation of the gcr.io/cloud-builders/mvn builder: https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/mvn/examples/spring_boot. I suggest you compare it with your files.
If that does not help, could you share your Dockerfile and cloudbuild.yaml? Please make sure you remove any sensitive information.
You can also inspect the working directory by running the build locally: https://cloud.google.com/container-builder/docs/build-debug-locally#preserve_intermediary_artifacts
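A sketch of the two-step setup (artifact and image names are illustrative). Because both steps run against the same /workspace volume, the Docker step can COPY whatever Maven produced:

```yaml
# cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/mvn
    args: ['clean', 'package']          # writes the war under /workspace/target/
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp', '.']
images:
  - gcr.io/$PROJECT_ID/myapp
```

and in the Dockerfile, paths are relative to /workspace:

```dockerfile
FROM tomcat:9
COPY target/myapp.war /usr/local/tomcat/webapps/
```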

Modify a service inside GitLab CI

I'm attempting to set up GitLab CI, and I have some integration tests that run against Elasticsearch. I'd like to install Elasticsearch using the official Docker image, so:
services:
- elasticsearch:2.2.2
But I want the mapper-attachments plugin. I have had no luck adding a command in the before_script section to install the mapper-attachments plugin, because the Elasticsearch files don't seem to be in the environment that the before_script section runs in. How can I modify the Elasticsearch image that has been installed into the runner?
You should create your own custom Elasticsearch container.
You could adapt the following Dockerfile:
FROM elasticsearch:2.3
MAINTAINER Your Name <you@example.com>
RUN /usr/share/elasticsearch/bin/plugin install analysis-phonetic
You can find this image on Docker Hub.
Here are detailed steps:
Register at https://hub.docker.com and link your GitHub account with it
Create a new repo at Github, e.g. "elasticsearch-docker"
Create a Dockerfile which inherits FROM elasticsearch and installs your plugins (see this example)
Create an Automated Build at Docker Hub from this GitHub repo (in my case: https://hub.docker.com/r/tmaier/elasticsearch/)
Configure the Build Settings at Docker Hub
I added two tags. One "latest" and one matching the elasticsearch release I am using.
I also linked the elasticsearch repository, so that mine gets rebuilt when there is a new elasticsearch release
See that the Container gets built successfully by Docker Hub
Over at GitLab CI, change the service to point to your new Docker image. In my example, I would use tmaier/elasticsearch:latest
See your integration tests passing
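Applied to the original question, the Dockerfile would install mapper-attachments instead (plugin name as used in Elasticsearch 2.x; the Docker Hub repo name is illustrative):

```dockerfile
FROM elasticsearch:2.2.2
RUN /usr/share/elasticsearch/bin/plugin install mapper-attachments
```

and the .gitlab-ci.yml service would then point at the published image:

```yaml
services:
  - yourname/elasticsearch-attachments:latest
```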
