Kaniko and Proxy Setup

The server where the runners for my GitLab pipeline are registered is behind a proxy. I followed the documentation to set up the Kaniko image build behind the proxy, but I still get an error which, according to the documentation, indicates a missing proxy configuration. Can someone tell me if there is a step I may have left out, or if something else needs to be adjusted that I don't know about? Once the image builds successfully, I want to push it to a JFrog registry with the latest tag.
Thank you for your help!
Below is how the settings for that stage look in the pipeline:
build docker image:
  stage: Build Docker Image
  variables:
    ftp_proxy: "${PROXY_ADDRESS}"
    FTP_PROXY: "${PROXY_ADDRESS}"
    http_proxy: "${PROXY_ADDRESS}"
    HTTP_PROXY: "${PROXY_ADDRESS}"
    https_proxy: "${PROXY_ADDRESS}"
    HTTPS_PROXY: "${PROXY_ADDRESS}"
    no_proxy: "localhost,127.0.0.0/8,X.0.0.0/8,X.X.0.0/12,X.X.0.0/16"
    NO_PROXY: "localhost,127.0.0.0/8,X.0.0.0/8,X.X.0.0/12,X.X.0.0/16"
  image:
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  allow_failure: true
  script:
    - echo "{\"auths\":{\"$JFROG_URL\":{\"username\":\"$JFROG_USER\",\"password\":\"$JFROG_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --build-arg "ftp_proxy=${ftp_proxy}"
      --build-arg "FTP_PROXY=${ftp_proxy}"
      --build-arg "http_proxy=${http_proxy}"
      --build-arg "HTTP_PROXY=${http_proxy}"
      --build-arg "https_proxy=${https_proxy}"
      --build-arg "HTTPS_PROXY=${https_proxy}"
      --build-arg "no_proxy=${no_proxy}"
      --build-arg "NO_PROXY=${no_proxy}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${JFROG_REPO}:latest"
      --verbosity info
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  tags:
    - dev
    - docker
The error I am getting:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: repository can only contain the characters `abcdefghijklmnopqrstuvwxyz0123456789_-./`: /artifactory.XXXXX:XXX/XXXX/XXXX/
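Judging by the error text, the failing check is on the destination string itself rather than the proxy: the rendered repository /artifactory.XXXXX:XXX/XXXX/XXXX/ begins and ends with a slash, and Kaniko only accepts repositories made of the characters listed in the message. A sketch of the shape a valid destination takes, with purely placeholder host and path values (not the real ones):

# placeholders only -- host[:port]/repo/image, no scheme (https://),
# no leading slash and no trailing slash
JFROG_REPO: "artifactory.example.com:5000/docker-local/my-app"

# then this expands to artifactory.example.com:5000/docker-local/my-app:latest
--destination "${JFROG_REPO}:latest"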

Related

docker-compose build context dockerfile envar image

I would like to use docker-compose to build/run Dockerfiles that have envvars in their FROM keyword. The problem I am having is that I seem unable to pass envvars from my environment through docker-compose into the Dockerfile.
docker-compose.yml
version: "3.2"
services:
  api:
    build: 'api/'
    restart: on-failure
    depends_on:
      - mysql
    networks:
      - frontend
      - backend
    volumes:
      - ./api/php/:/var/www/html/
Dockerfile in 'api/'
FROM ${DOCKER_IMAGE_API}
RUN apk update
RUN apk upgrade
RUN docker-php-ext-install mysqli
Why?
I want to do this so that I can run docker-compose from a bash script that detects the host architecture and changes the base image of the underlying dockerfiles in the host application.
FROM instructions support variables that are declared by any ARG instructions that occur before the first FROM. So what you can do is this:
ARG IMAGE
FROM $IMAGE
When you run the build command, you then pass --build-arg as follows:
docker build -t test --build-arg IMAGE=alpine .
You can also choose to have a default value for the IMAGE variable, to be used when the --build-arg flag isn't passed.
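For example, the default can be declared directly in the Dockerfile (alpine here is just an illustrative default):

ARG IMAGE=alpine
FROM $IMAGE

With that in place, a plain docker build -t test . uses alpine unless --build-arg IMAGE=... overrides it.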
Alternatively, if you use docker compose build rather than docker build (and I think this is your case), you can pass the variable with docker compose build --build-arg:
version: "3.9"
services:
  api:
    build: .
and then
docker compose build --build-arg IMAGE=alpine
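Applied to the setup in the question, the same argument can also be declared in the compose file itself, so a wrapper script only needs to export the variable before building. A sketch, reusing the asker's DOCKER_IMAGE_API envvar:

version: "3.9"
services:
  api:
    build:
      context: 'api/'
      args:
        # IMAGE is filled from the DOCKER_IMAGE_API envvar on the host
        IMAGE: ${DOCKER_IMAGE_API}

and in api/Dockerfile:

ARG IMAGE
FROM $IMAGE

Then export DOCKER_IMAGE_API=alpine (or whatever the architecture detection picks) and run docker compose build.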

Get Git History in Google Cloud Build Step

I'm using Google Cloud Build to run CI for my Nx workspace. Here's the cloudbuild.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    id: Test_Affected_Projects
    entrypoint: 'sh'
    args: [
      '-c',
      'docker build --build-arg NPM_TOKEN=$$NPM_TOKEN --file ./test/Dockerfile.test-runner -t mha-test-runner .']
    secretEnv: ['NPM_TOKEN']
  # Remove the docker image
secrets:
  - kmsKeyName: /path/to/key
    secretEnv:
      NPM_TOKEN: some_key_value
(There are currently two steps, but I removed the second for brevity. The second step just removes the created docker image.)
Now the command inside the Docker image here runs all the tests for the Nx workspace. The thing is, Nx has a great command where only the affected libraries will be tested. But for the command to run, the git history of the project needs to be available.
I've tried to get the git history in the cloud build context, but I haven't been able to get it working. This is the step I added to try and get everything working:
steps:
  - name: 'gcr.io/cloud-builders/git'
    args: ['fetch', '--unshallow']
  - name: 'gcr.io/cloud-builders/docker'
    id: Test_Affected_Projects
    entrypoint: 'sh'
    args: [
      '-c',
      'docker build --build-arg NPM_TOKEN=$$NPM_TOKEN --file ./test/Dockerfile.test-runner -t mha-test-runner .']
    secretEnv: ['NPM_TOKEN']
  # Remove the docker image
secrets:
  - kmsKeyName: /path/to/key
    secretEnv:
      NPM_TOKEN: some_key_value
That new first step, which should fetch the git history, fails: the error message says that it's not a git repo.
My question is: how can I get the git history in the cloud build context so that I can use it with different commands in the build/testing process?
I think the reason this isn't working is that you need to store the GitHub credentials in the Cloud Build environment.
I believe this guide can help: it will allow you to do so, and then you will be able to call git fetch --unshallow as you already do.
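A sketch of that approach, assuming the credentials are in place: triggered builds receive only a source archive without a .git directory, so one workaround is to recreate the repository in the workspace before the docker step (the URL is a placeholder; $COMMIT_SHA is a standard Cloud Build substitution):

steps:
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        # placeholder URL -- rebuild the repo around the extracted source
        git init
        git remote add origin https://github.com/ORG/REPO.git
        git fetch origin
        git checkout -f $COMMIT_SHA
  # ... the existing docker step follows unchanged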

Using dind on drone.io

I'm trying to move from GitLab CI to drone.io, but I can't make dind work as well as it does on GitLab. Below is what I have on GitLab.
variables:
  NODE_ENV: 'test'
  DOCKER_DRIVER: overlay
image: gitlab/dind
services:
  - docker:dind
cache:
  untracked: true
stages:
  - test
test:
  stage: test
  before_script:
    - docker info
    - docker-compose --version
    - docker-compose pull
    - docker-compose build
  after_script:
    - docker-compose down
  script:
    - docker-compose run --rm api yarn install
How can I create an equivalent drone file?
You can use the services section to start the docker daemon.
pipeline:
  ping:
    image: docker
    environment:
      - DOCKER_HOST=unix:///drone/docker.sock
    commands:
      - sleep 10 # give docker enough time to initialize
      - docker ps -a

services:
  docker:
    image: docker:dind
    privileged: true
    command: [ '-H', 'unix:///drone/docker.sock' ]
Note that we change the default location of the docker socket and write to the drone volume which is shared among all containers in the pipeline:
command: [ '-H', 'unix:///drone/docker.sock' ]
Also note that we need to run the dind container in privileged mode. The privileged flag can only be used by trusted repositories. You will therefore need a user administrator to set the trusted flag to true for your repository in the drone user interface.
privileged: true
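Putting those pieces together, a sketch of a step equivalent to the GitLab job above (the docker/compose image tag is an assumption; any image that ships both a shell and docker-compose would do):

pipeline:
  test:
    image: docker/compose:1.29.2      # assumed tag
    environment:
      - DOCKER_HOST=unix:///drone/docker.sock
      - NODE_ENV=test
    commands:
      - sleep 10                      # give the daemon time to initialize
      - docker-compose pull
      - docker-compose build
      - docker-compose run --rm api yarn install
      - docker-compose down

services:
  docker:
    image: docker:dind
    privileged: true
    command: [ '-H', 'unix:///drone/docker.sock' ]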

unable to link gitlab services to own container in .gitlab-ci.yml

I have a simple .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind
  - postgres:9.5
stages:
  - build
  - test
variables:
  STAGING_REGISTRY: "dhub.example.com"
  CONTAINER_TEST_IMAGE: ${STAGING_REGISTRY}/${CI_PROJECT_NAME}:latest
before_script:
  - docker login -u gitlab-ci -p $DHUB_PASSWORD $STAGING_REGISTRY
build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE -f Dockerfile-dev .
    - docker push $CONTAINER_TEST_IMAGE
test:
  stage: test
  script:
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests
Everything works fine until the actual test stage, where I'm unable to access my postgres service.
docker: Error response from daemon: Could not get container for postgres.
I tried to write the test like this:
test1:
  stage: test
  image: $CONTAINER_TEST_IMAGE
  services:
    - postgres:9.5
  script:
    - python manage.py test
But in this case, I'm unable to pull this image because of authentication:
ERROR: Preparation failed: unauthorized: authentication required
Am I missing something?
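One thing worth checking for the second variant: the job image is pulled by the runner before before_script runs, so the docker login there can never apply to that pull. GitLab Runner reads pull credentials for private job images from a DOCKER_AUTH_CONFIG CI/CD variable instead; a sketch, where the auth value is a placeholder for base64("gitlab-ci:$DHUB_PASSWORD"):

# set as a CI/CD variable in the project settings
DOCKER_AUTH_CONFIG: '{"auths":{"dhub.example.com":{"auth":"<base64 user:password>"}}}'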

docker-compose build and http_proxy

I want to test ELK. It works fine, but when I do a docker-compose up behind a proxy:
docker-compose up --no-recreate
Building kibana
Step 1 : FROM kibana:latest
---> 544887fbfa30
Step 2 : RUN apt-get update && apt-get install -y netcat
---> Running in 794342b9d807
It fails with:
W: Some index files failed to download. They have been ignored, or old ones used instead.
It's OK with:
docker build --build-arg http_proxy=http://proxy:3128 --build-arg https_proxy=http://proxy:3128 kibana
But when I redo a docker-compose up, it tries to rebuild and fails to pass through the proxy.
Any help?
You will need docker-compose 1.6.0-rc1 in order to pass the proxy to your build through docker-compose.
See commit 47e53b4 from PR 2653 for issue 2163.
Move all build-related configuration into a build: section in the service.
Example:
web:
  build:
    context: .
    dockerfile: Dockerfile.name
    args:
      key: value
As mkjeldsen points out in the comments, if key should assume the value of an environment variable of the same name, value can be omitted (docker-compose args).
This is especially useful for https_proxy: if the envvar is unset or empty, the builder will not apply a proxy; otherwise it will.
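A sketch of that value-less form, using the proxy variables from this question:

web:
  build:
    context: .
    args:
      # each value is taken from the envvar of the same name at build time
      - http_proxy
      - https_proxy
      - no_proxy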
I ran into the same problem. What helped me was using the explicit version 2.2 and then the args and network options under build, as described in the documentation.
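A sketch of that combination (the proxy address is a placeholder; network under build is the documented option this answer refers to):

version: "2.2"
services:
  web:
    build:
      context: .
      network: host                       # build-time network, per the docs
      args:
        http_proxy: "http://proxy:3128"   # placeholder address
        https_proxy: "http://proxy:3128"  # placeholder address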
VonC is right; it works for me after adding an args section under the build lines in the docker-compose file.
Original:
ssh:
  build: ssh/.
  container_name: ssh
  ports:
    - "3000:22"
  networks:
    vault_net:
      ipv4_address: 172.16.238.20
Modified:
ssh:
  build:
    context: "ssh/."
    args:
      HTTP_PROXY: http://X.X.X.X:XXXX
      HTTPS_PROXY: http://X.X.X.X:XXXX
      NO_PROXY: .domain.ltd,127.0.0.1
  container_name: ssh
  ports:
    - "3000:22"
  networks:
    vault_net:
      ipv4_address: 172.16.238.20
Note that I had to add quotes for context since it needs to be formatted as a string.
Thanks a lot.
Did you try it on a clean machine?
docker-machine stop default
docker-machine create -d virtualbox test
docker-machine start test
eval $(docker-machine env test)
docker-compose up
