gitlab-runner - docker in docker (dind) and push to GitLab registry - laravel

I wish to use the GitLab Container Registry to temporarily store my newly built Docker image; in order to have Docker commands work (i.e. docker login, docker build, docker push), I applied the docker-in-docker executor; then, from the GitLab Pipelines error messages, I realized I need to place a Dockerfile at the project root:
$ docker build --pull -t $CONTAINER_TEST_IMAGE .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /builds/xxxxx.com/laravel/Dockerfile: no such file or directory
My Dockerfile includes centos:7, php, nodejs, composer and sass installations. I observe that after each commit, the GitLab runner goes through the Dockerfile once and installs all of them from the beginning, which makes the whole build stage very slow and strange - why should I have to install so many things again for deployment when I only want to amend one word in my project?
In my imagination, it would be nice if I could pre-build a Docker image from a Dockerfile that contains the installations mentioned above plus Docker itself (so that docker login, docker build and docker push can work), store it on the GitLab-runner server, and after each commit reuse this image to build the new image to be pushed to the GitLab Container Registry.
However, I faced 2 problems:
1) Even if I include the Docker installation in the pre-built Docker image, I cannot run systemctl start docker, due to a D-Bus problem:
Failed to get D-Bus connection: Operation not permitted
moreover, some articles mention that a Docker container should not run background services;
2) when I use dind, it requires a Dockerfile at the project root; with a pre-built Docker image, I actually have nothing to do with this Dockerfile at the project root; hence, is dind the wrong option?
Actually, what is the proper way to push a Laravel project image to the GitLab Container Registry? (And where should those npm install and composer install commands be placed?)
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - test
  - deploy

variables:
  CONTAINER_TEST_IMAGE: xxxx
  CONTAINER_RELEASE_IMAGE: yyyy

before_script:
  - docker login -u xxx -p yyy registry.gitlab.com

build:
  stage: build
  script:
    - npm install here?
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

There are many questions in your post. I would like to address them as follows:
You can pre-build a Docker image and then use it in your .gitlab-ci.yml file. This can be used to add your specific dependencies.
image: my custom image

services:
  - docker:dind
It is important to add the service to the configuration.
Regarding your problem about trying to run the Docker service inside .gitlab-ci.yml: you actually don't need to do that. GitLab exposes the Docker engine to the executor (either via unix:///var/run/docker.sock or tcp://localhost:2375/). Note that if the runners are executed in a Kubernetes environment, you need to specify DOCKER_HOST as follows:
variables:
  DOCKER_HOST: tcp://localhost:2375/
Your question about where to place npm install is really a more fundamental question about how Docker images are built. In short, npm install should be placed in the Dockerfile. For a longer description, see for example the Node.js Docker guide referenced below.
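Putting these pieces together, a minimal sketch of such a .gitlab-ci.yml could look as follows. It assumes a hypothetical pre-built CI image (registry.gitlab.com/xxxxx.com/laravel/ci-base) that already contains the PHP/Node/Composer toolchain plus the docker client; the npm install and composer install steps live in the project's Dockerfile, not in the job script, so Docker layer caching can skip them when nothing changed:

# Hypothetical pre-built image containing php, nodejs, composer, sass and the docker CLI
image: registry.gitlab.com/xxxxx.com/laravel/ci-base:latest

services:
  - docker:dind

variables:
  CONTAINER_TEST_IMAGE: registry.gitlab.com/xxxxx.com/laravel:$CI_COMMIT_REF_SLUG
  # Only needed when the runner itself runs on Kubernetes (see above):
  # DOCKER_HOST: tcp://localhost:2375/

stages:
  - build

build:
  stage: build
  script:
    # $CI_JOB_TOKEN is provided by GitLab and grants access to the project's registry
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    # npm install / composer install are intentionally NOT here; they belong in the Dockerfile
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE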
Some references:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/

Related

Gitlab CI/CD fail with "bash: line 132: go: command not found"

We have installed GitLab on our custom server. We are looking to use the GitLab CI/CD pipeline to build and release our software; for that, I'm working on a POC. I have created a project with the following .gitlab-ci.yml:
variables:
  GOOS: linux
  GOARCH: amd64

stages:
  - test
  - build
  - deb-build

run_tests:
  stage: test
  image: golang:latest
  before_script:
    - go mod tidy
  script:
    - go test ./...

build_binary:
  stage: build
  image: golang:latest
  artifacts:
    untracked: true
  script:
    - GOOS=$GOOS GOARCH=$GOARCH go build -o newer .

build deb:
  stage: deb-build
  image: ubuntu:latest
  before_script:
    - mkdir -p deb-build/usr/local/bin/
    - chmod -R 0755 deb-build/*
    - mkdir build
  script:
    - cp newer deb-build/usr/local/bin/
    - dpkg-deb --build deb-build release-1.1.1.deb
    - mv release-1.1.1.deb build
  artifacts:
    paths:
      - build/*
TL;DR: I have updated the .gitlab-ci.yml and attached a screenshot of the error.
What I have noticed is that the error persists when I use the shared runner (GJ7z2Aym); if I register my own runner (i.e. a specific runner) with
gitlab-runner register --non-interactive --url "https://gitlab.sboxdc.com/" --registration-token "<register_token>" --description "" --executor "docker" --docker-image "docker:latest"
then I see the build passing without any problem.
Failed case.
https://gist.github.com/meetme2meat/0676c2ee8b78b3683c236d06247a8a4d
One that Passed
https://gist.github.com/meetme2meat/058e2656595a428a28fcd91ba68874e8
The failing job is using a runner with the shell executor, which was probably set up when you configured your GitLab instance. This can be seen in the logs from these lines:
Preparing the "shell" executor
Using Shell executor...
The shell executor will ignore your job's image: config. It will run the job script directly on the machine hosting the runner, and will try to find the go binary on that machine (failing in your case). It's a bit like running go commands on some Ubuntu machine without having Go installed.
Your successful job is using a runner with the docker executor, running your job's script in a golang:latest image as you requested. It's like running docker run golang:latest sh -c '[your script]'. This can be seen in the job's logs:
Preparing the "docker" executor
Using Docker executor with image golang:latest ...
Pulling docker image golang:latest ...
Using docker image sha256:05e[...] for golang:latest with digest golang@sha256:04f[...]
What you can do:
Make sure you configure a runner with a docker executor. Your config.toml would then look like:
[[runners]]
  # ...
  executor = "docker"
  [runners.docker]
    # ...
It seems you already did this by registering your own runner.
Configure your jobs to use this runner with job tags. You can set the tag docker_executor on your Docker runner (when registering, or via the GitLab UI) and set up something like:
build_binary:
  stage: build
  # Tags a runner must have to run this job
  tags:
    - docker_executor
  image: golang:latest
  artifacts:
    untracked: true
  script:
    - GOOS=$GOOS GOARCH=$GOARCH go build -o newer .
See Runner registration and Docker executor for details.
Since you have used image: golang:latest, go should be in the $PATH.
You need to check at which stage it is failing: run_tests or build_binary.
Add echo $PATH to your script steps to check which $PATH is actually in effect (see the sketch below).
Also check whether the error comes from the lack of git, which Go uses to access modules' remote repositories. See this answer as an example.
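As an illustration only (this is a hypothetical diagnostic variant, not part of the original pipeline), the test job could print its environment before running the tests, to confirm which executor and $PATH are really in use:

run_tests:
  stage: test
  image: golang:latest
  before_script:
    # Diagnostics: these should all succeed inside golang:latest, but fail on a bare shell executor
    - echo $PATH
    - go version || echo "go not found on this executor"
    - git --version || echo "git not found (Go needs it for remote modules)"
    - go mod tidy
  script:
    - go test ./...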
From your gists, the default GitLab runner uses a shell executor (which knows nothing about Go).
Instead, the second one uses a Docker executor, based on the Go image.
Registering the (Docker) runner is therefore the right way to ensure the expected executor.

Cache Cloud Native Buildpacks/Paketo.io pack CLI builds on GitHub Actions (e.g. with Spring Boot/Java/Maven buildpacks)?

I'm working on a Spring Boot application that should be packaged into an OCI container using Cloud Native Buildpacks / Paketo.io. I build it with GitHub Actions, where my workflow build.yml looks like this:
name: build
on: [push]
jobs:
  build-with-paketo-push-2-dockerhub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to DockerHub Container Registry
        run: echo $DOCKER_HUB_TOKEN | docker login -u jonashackt --password-stdin
        env:
          DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}
      - name: Install pack CLI via apt. See https://buildpacks.io/docs/tools/pack/#pack-cli
        run: |
          sudo add-apt-repository ppa:cncf-buildpacks/pack-cli
          sudo apt-get update
          sudo apt-get install pack-cli
      - name: Build app with pack CLI
        run: pack build spring-boot-buildpack --path . --builder paketobuildpacks/builder:base
      - name: Tag & publish to Docker Hub
        run: |
          docker tag spring-boot-buildpack jonashackt/spring-boot-buildpack:latest
          docker push jonashackt/spring-boot-buildpack:latest
Now the step Build app with pack CLI takes relatively long, since it always downloads the Paketo builder Docker image and then does a full fresh build. This means downloading the JDK and every single Maven dependency. Is there a way to cache a Paketo build on GitHub Actions?
Caching the Docker images on GitHub Actions might be an option - which doesn't seem to be that easy. Another option would be to leverage the Docker official docker/build-push-action Action, which is able to cache the buildx-cache. But I didn't get the combination of pack CLI and buildx-caching to work (see this build for example).
Finally I stumbled upon the general Cloud Native Buildpacks approach on how to cache in the docs:
Cache Images are a way to preserve build optimizing layers across different host machines. These images can improve performance when using pack in ephemeral environments such as CI/CD pipelines.
I found this concept quite nice, since it uses a second cache image, which gets published to a container registry of your choice. This image is simply used for all Paketo pack CLI builds on every machine where you append the --cache-image parameter - be it your local desktop or any CI/CD platform (like GitHub Actions).
In order to use the --cache-image parameter, we also have to use the --publish flag (since the cache image needs to be published to your container registry!). This also means we need to log in to the container registry before we're able to run our pack CLI command. Using Docker Hub, this is something like:
echo $DOCKER_HUB_TOKEN | docker login -u YourUserNameHere --password-stdin
Also the Paketo builder image must be a trusted one. As the docs state:
By default, any builder suggested by pack builder suggest is considered trusted.
Since I use a suggested builder, I don't have to do anything here. If you want to use another builder that isn't trusted by default, you need to run a pack config trusted-builders add your/builder-to-trust:bionic command before the final pack CLI command.
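As an illustration only (using the placeholder builder name from the docs), such a trust step could be added to the workflow right before the pack build step:

      # Hypothetical step: only needed for builders that are not trusted by default
      - name: Trust a custom builder
        run: pack config trusted-builders add your/builder-to-trust:bionic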
Here's the cache-enabled pack CLI command, in case you want to build a Spring Boot app using Docker Hub as the container registry:
pack build index.docker.io/yourApplicationImageNameHere:latest \
    --builder paketobuildpacks/builder:base \
    --path . \
    --cache-image index.docker.io/yourCacheImageNameHere:latest \
    --publish
Finally the GitHub Action workflow to build and publish the example Spring Boot app https://github.com/jonashackt/spring-boot-buildpack looks like this:
name: build
on: [push]
jobs:
  build-with-paketo-push-2-dockerhub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to DockerHub Container Registry
        run: echo $DOCKER_HUB_TOKEN | docker login -u jonashackt --password-stdin
        env:
          DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}
      - name: Install pack CLI via the official buildpack Action https://github.com/buildpacks/github-actions#setup-pack-cli-action
        uses: buildpacks/github-actions/setup-pack@v4.1.0
      - name: Build app with pack CLI using Buildpack Cache image (see https://buildpacks.io/docs/app-developer-guide/using-cache-image/) & publish to Docker Hub
        run: |
          pack build index.docker.io/jonashackt/spring-boot-buildpack:latest \
            --builder paketobuildpacks/builder:base \
            --path . \
            --cache-image index.docker.io/jonashackt/spring-boot-buildpack-paketo-cache-image:latest \
            --publish
Note that when using the pack CLI's --publish flag, we also don't need the extra "Tag & publish to Docker Hub" step anymore, since this is done for us by the pack CLI already.

Unable to find docker image locally

I was following this post - the reference code is on GitHub. I have cloned the repository to my local machine.
The project has a React app inside it. I'm trying to run it locally, following step 7 of the same post:
docker run -p 8080:80 shakyshane/cra-docker
This returns:
Unable to find image 'shakyshane/cra-docker:latest' locally
docker: Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'.
See 'docker run --help'.
I tried logging in to Docker again, but it looks like since the image belongs to @shakyShane I cannot access it.
I idiotically tried npm start too, but it's not a simple React app running on Node - it's in the container, and containers are not controlled by npm.
Looks like docker pull shakyshane/cra-docker:latest throws this:
Error response from daemon: pull access denied for shakyshane/cra-docker, repository does not exist or may require 'docker login'
So the question is: how do I run this Docker image on my local Mac machine?
Well, this is illogical, but I'm still sharing it so future people like me don't get stuck.
The problem was that I was trying to run a docker image which doesn't exist.
I needed to build the image:
docker build . -t xameeramir/cra-docker
And then run it:
docker run -p 8080:80 xameeramir/cra-docker
In my case, my image had a TAG specified and I was not using it.
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
testimage    testtag   189b7354c60a   13 hours ago   88.3MB
I got Unable to find image 'testimage:latest' locally for the command docker run testimage.
So specifying the tag, as in docker run testimage:testtag, worked for me.
Posting my solution since none of the above worked.
I'm working on a MacBook M1 Pro.
The issue I had is that the image was built as arm64, and I was running the command:
docker run --platform=linux/amd64 ...
So I had to build the image for the amd64 platform in order to run it.
Command below:
docker buildx build --platform=linux/amd64 ...
In conclusion, your Docker image platform and your docker run platform need to be the same, from what I experienced.
In my case, the Docker image did exist on the system and I still couldn't run the container locally, so I used the exact image ID instead of the image name and tag, like this:
docker run --name myContainer c29150c8588e
I received this error message when I typed the name/character wrong. That is, "name1\name2" instead of "name1/name2" (wrong slash).
In my case, I saw this error when I had logged in to Docker Hub in my Docker Desktop. The repo I was pulling was internal to my enterprise. Once I logged out of Docker Hub, the pull worked.
This just happened to me because my local Docker VM on macOS ran out of disk space.
I just deleted some old images using docker image prune and it started working correctly again.
shakyshane/cra-docker does not exist in that user's repositories: https://hub.docker.com/u/shakyshane/
The problem is that you are trying to run an image that does not exist. If you are building from a Dockerfile, the image is not created until the Dockerfile build passes with no errors; so when you then try to run the image, Docker can't find it. Make sure there are no errors in the execution of your scripts.
The simplest answer can be the correct one! Make sure you have permission to execute the command; use:
sudo docker run -p 8080:80 shakyshane/cra-docker
In my case, I didn't realise there was a difference between docker run and docker start, and I kept using the run command when I should've been using the start command.
FYI, run creates a new container from an image and starts it, while start just starts an existing, stopped container.
Use -d
sudo docker run -d -p 8000:8000 rasa/duckling
Learn about -d with:
sudo docker run --help
At first, I built the image on a Mac M1 Pro with the command docker build -t hello_k8s_world:0.0.1 .; when I ran this image, the issue appeared.
After reading Master Yi's answer, I realized the crux of the matter and rebuilt my image like this: docker build --platform=arm64 -t hello_k8s_world:0.0.1 .
Finally, it worked.

How to update the application running in the docker container? (for example, spring boot)

I'm trying to figure out Docker for the hundredth time. Now I have created a simple Spring Boot application and a Docker image for it:
FROM openjdk:8
EXPOSE 8080
ADD /build/libs/hello-docker-0.0.1-SNAPSHOT.jar hello-docker.jar
ENTRYPOINT ["java", "-jar", "hello-docker.jar"]
What I did, step by step:
1) Built my app (as a result, hello-docker-0.0.1-SNAPSHOT.jar appeared in the build folder)
2) Moved to the directory where the Dockerfile is located and executed the command: docker build -f Dockerfile -t springdocker .
3) Ran this image as a container: docker run -p 8080:8080 springdocker
As a result, my application started successfully.
But now I decided to change something in my application. I did it and tried to repeat all the steps: 1 -> 2 -> error.
On step 2, a new image was created, but the old container was still active. It turns out that I need to stop the old one and start a new one.
This is not convenient. I intuitively understand that this is not how it should work, but I do not understand how to properly update the container. I constantly change my application, and if I need to stop the container, then create an image, and then launch it, that is not a convenience but a complication. With the same success I could stop my application and launch a new one without Docker. I understand that I do not understand how this works, and I want someone to explain to me how to do it.
You can use docker-compose; this will save you a lot of time.
Just write a docker-compose.yml file in the same folder as Dockerfile:
version: '3.5'
services:
  yourapplication:
    build: .    # the location of your Dockerfile
    image: yourapplication_imagename
    container_name: yourapplication_containername
    ports:
      - "8080:8080"
    tty: true
And, to update the application you only have to run:
docker-compose build --no-cache yourapplication
docker-compose up -d
This will recreate the image and replace the container with your updated app.
Without docker-compose: you already have a good thread about this here
You should automate your build process. If the desired result of this process is a Docker image, your automated build process should not stop after creating the .jar.
Fortunately, there is a good tutorial for building a Docker image for running a Spring Boot application using different build tools (Maven, Gradle):
https://spring.io/guides/gs/spring-boot-docker/
In the end, you will be able to build your application into a Docker image using just
./mvnw install dockerfile:build

How can I Dockerize my web API on Windows?

Docker is a full development platform for creating containerized apps, and Docker for Windows is the best way to get started with Docker on Windows systems.
Start your favorite shell (cmd.exe, PowerShell, or other) to check your versions of docker and docker-compose, and verify the installation.
PS C:\Users\Docker> docker --version
Docker version 17.03.0-ce, build 60ccb22
PS C:\Users\Docker> docker-compose --version
docker-compose version 1.11.2, build dfed245
Your question is not very specific, but it appears that you are trying to containerize an ASP.NET web app. Here is a basic outline of what you want to accomplish using Docker.
Docker is a Linux container system, meaning it's based on the Linux kernel; by installing Docker on Windows you are installing a Linux guest machine to build your containers in, and you will configure your containers to forward ports so that the app running inside the container is served to your host machine. So how is this going to happen? After installing Docker, Docker first needs a base image (a Linux image) to run your containers from; a great place to find Docker images is Docker Hub. For a basic scenario you need to:
1) Pull an image.
2) Run a container based on this image.
To accomplish number 1, we will use the official Microsoft .NET Docker Hub page as an example.
docker pull microsoft/aspnetcore
docker pull will pull the dotnet:latest image from Docker Hub; :latest is a tag specifying the latest stable release of dotnet. This means that if you want another runtime version, you would use docker pull dotnet:runtime; on the official Microsoft .NET Docker Hub page linked above you will find the available tags under Supported tags.
To accomplish number 2, we need to run a container using this image.
docker run -d -p 8000:80 --name firstwebapptest microsoft/aspnetcore
docker run will create a container named firstwebapptest based on microsoft/aspnetcore, forwarding container port 80 to host port 8000, and all of that will run in detached mode (-d).
Now check your browser at localhost:8000.
This is a very basic scenario using the docker command line tools.
Another way to accomplish this scenario is by using a Dockerfile. You will find "How to use this image" on the official Microsoft .NET Docker Hub page; it assumes that you are already in your app directory, which contains your compiled myapp.dll. What you will do is create a file called Dockerfile in this directory and write this inside:
FROM microsoft/aspnetcore
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "myapp.dll"]
FROM: the base image that we already pulled
WORKDIR: the working directory inside the Linux container
COPY . .: the first . copies your host directory content into the container; the second . is the guest directory, which in this case is /app (the WORKDIR)
ENTRYPOINT: the Linux command that will run once this container is up; in this case dotnet myapp.dll, which means you are running the dotnet command from inside the container in the WORKDIR /app, with all your host app directory structure (containing your compiled myapp.dll) that we already copied with COPY . .
So now we have the Dockerfile; all we need to do is build and run it.
docker build -t secondwebapptest .
docker run -d -p 8001:80 secondwebapptest
docker build will build an image named (-t) secondwebapptest from .; the dot refers to the directory containing the Dockerfile you just wrote, assuming you are already in the working directory; otherwise you would have to specify a path to the Dockerfile using -f, but that is not our case.
docker run will run a container based on the image named secondwebapptest, forwarding container port 80 to host port 8001, and all of that will run in detached mode (-d).
Now check your browser at localhost:8001.
