I want to achieve the following build process:
decide the value of an environment variable depending on the build branch
persist this value through the different build steps
pass this value as a build-arg to docker build
Here is some of the cloudbuild config I've got:
- id: 'Get env from branch'
  name: bash
  args:
    - '-c'
    - |-
      environment="dev"
      if [[ "${BRANCH_NAME}" == "staging" ]]; then
        environment="stg"
      elif [[ "${BRANCH_NAME}" == "master" ]]; then
        environment="prd"
      fi
      echo $environment > /workspace/environment.txt
- id: 'Build Docker image'
  name: bash
  dir: $_SERVICE_DIR
  args:
    - '-c'
    - |-
      environment=$(cat /workspace/environment.txt)
      echo "===== ENV: $environment"
      docker build --build-arg ENVIRONMENT=$environment -t gcr.io/${_GCR_PROJECT_ID}/${_SERVICE_NAME}/${COMMIT_SHA} .
The problem lies in the second step. If I use the bash step image, I have no docker executable with which to build my custom image.
And if I use the gcr.io/cloud-builders/docker step image, I can't execute bash scripts: in the args field I can only pass arguments for the docker executable, so I cannot read back the environment value that I persisted in the earlier build step.
The way I managed to accomplish both is to use my own custom, pre-built image, which contains both the bash and docker executables. I keep that image in the container registry and use it as the build step image, but this requires custom work on my side. I was wondering if there is a better, more standardized way with built-in tools from Cloud Build.
Sources:
how to run inline bash scripts
how to persist values through build steps
You can change the default entrypoint by adding the entrypoint: parameter:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - -c
    - |
      echo $PROJECT_ID
      environment=$(cat /workspace/environment.txt)
      echo "===== ENV: $environment"
      docker build --build-arg ENVIRONMENT=$environment -t gcr.io/${_GCR_PROJECT_ID}/${_SERVICE_NAME}/${COMMIT_SHA} .
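For completeness, a minimal sketch of how the two steps could fit together using only publicly available builder images; the substitutions _SERVICE_DIR, _GCR_PROJECT_ID and _SERVICE_NAME are assumed to be defined on the trigger, and the branch logic is abbreviated:

steps:
  - id: 'Get env from branch'
    name: bash
    args:
      - '-c'
      - |-
        # branch-to-environment logic from the question goes here
        echo "dev" > /workspace/environment.txt
  - id: 'Build Docker image'
    name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    dir: $_SERVICE_DIR
    args:
      - '-c'
      - |-
        environment=$(cat /workspace/environment.txt)
        docker build --build-arg ENVIRONMENT=$environment -t gcr.io/${_GCR_PROJECT_ID}/${_SERVICE_NAME}/${COMMIT_SHA} .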
I am trying to deploy a branch other than the default (master) branch. For some reason the predefined variables are not visible inside the roll-out.sh script. But the echo statements before calling the script do print the variables correctly.
I have another script that rolls out the master branch. In this script it is able to run docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY with no problems.
I have tried with the branch both protected and not protected. The variables are still undefined for some reason.
What am I doing wrong here?
build:
  stage: build
  image: docker:20.10.12
  services:
    - docker:20.10-dind
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
      allow_failure: true
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/client/base:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE/client/base:latest --cache-from $CI_REGISTRY_IMAGE/client:latest -t $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH .
    - docker push $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH
deploy:
  variables:
    branch: $CI_COMMIT_BRANCH
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - echo $CI_COMMIT_REF_NAME
    - echo $CI_COMMIT_BRANCH
    - echo $branch
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' < ./script/roll_out.sh
If you are SSHing to another machine within your CI script, I don't think it would have access to the variables you are echoing in the lines before, because it's a new session on a different machine.
You could try a few different things, though, to achieve what you want:
Send the variables as arguments (though sending information to a machine through ssh this way is not great).
Install a GitLab Runner on the host you are trying to ssh to, and tag this runner so it only runs the specific deployment job; then you'll have the variables available on the host (see the sketch below).
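A minimal sketch of that second option, assuming a runner registered on the target host with a hypothetical tag deploy-host:

deploy:
  stage: deploy
  tags:
    - deploy-host  # hypothetical tag of the runner installed on the target machine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
  script:
    - echo $CI_COMMIT_BRANCH  # CI variables are available directly, no ssh session needed
    - bash ./script/roll_out.sh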
The problem was solved by explicitly passing the environment variables to the script:
- ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH bash -s" < ./script/roll_out_tagged.sh
I had another developer with lots of experience take a look at it.
He had no idea why docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY worked. That is what led me to believe that all of the environment variables would be available on the remote, but it seems those three variables are some sort of exception.
I have the following script in my gitlab-ci and would like to run the loop iterations at the same time. Does anyone know a good way to do this, so that they both run simultaneously?
Note: the job is a manual job, and I am looking for a single button click to loop through all the packages in the bash script, as shown below.
when: manual
script:
  - |-
    for PACKAGE in name1 name2; do
      export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$PACKAGE:${BUILD_TAG}"
      docker build -t ${IMAGE} -f $PACKAGE/Dockerfile .
      docker push ${IMAGE}
    done
Currently it runs first for name1 and, once that has finished, runs for name2. I would like to run both at exactly the same time since there is no dependency between them.
Here is what I tried, based on this answer (https://unix.stackexchange.com/a/216475/138406):
when: manual
script:
  - |-
    task(){
      export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$1:${BUILD_TAG}"
      docker build -t ${IMAGE} -f $1/Dockerfile .
      docker push ${IMAGE}
    }
    for PACKAGE in name1 name2; do
      task "$PACKAGE" &
    done
This works in a regular bash script, but when I use it with gitlab-ci it doesn't run as expected: it does not even run any of the commands, and the job just succeeds instantly.
Can anyone help with where the issue is and how to solve it?
To achieve your use case, I'd suggest you "parallelize" the build using several dedicated GitLab-CI jobs rather than several background bash processes in a single GitLab-CI job.
Proof-of-concept:
stages:
  - push

.push-template:
  stage: push
  image: docker:latest
  services:
    - docker:dind
  variables:
    IMAGE: "${CI_REGISTRY}/${GITLAB_REPO}/${PACKAGE}:${BUILD_TAG}"
    # where ${PACKAGE} should be provided by the child jobs...
  before_script: |
    if [ -z "${PACKAGE}" ]; then
      echo 'Error: variable PACKAGE is undefined' >&2
      false
    fi
    # just for completeness, this may be required:
    echo "${CI_JOB_TOKEN}" | docker login -u "${CI_REGISTRY_USER}" --password-stdin "${CI_REGISTRY}"
  script:
    - docker build -t "${IMAGE}" -f "${PACKAGE}/Dockerfile" .
    - docker push "${IMAGE}"
    - docker logout "${CI_REGISTRY}" || true

push-name1:
  extends: .push-template
  variables:
    PACKAGE: name1

push-name2:
  extends: .push-template
  variables:
    PACKAGE: name2
See the .gitlab-ci.yml reference manual for details on the extends keyword.
"just succeeds the job instantly"
That is what running in the background means - it means that the main process will continue instantly. You have to wait for background processes to finish.
- |-
  task(){
    export IMAGE="$CI_REGISTRY/$GITLAB_REPO/$1:${BUILD_TAG}"
    docker build -t ${IMAGE} -f $1/Dockerfile .
    docker push ${IMAGE}
  }
  for PACKAGE in name1 name2; do
    task "$PACKAGE" &
  done
  wait
But that will not catch any errors, which will result in problems going undetected. You would have to collect the pids and wait on them individually:
...
childs=""
for package in name1 name2; do
  task "$package" &
  childs="$childs $!"
done
for pid in $childs; do
  if ! wait "$pid"; then
    echo "Process with pid=$pid failed"
    kill $childs
    wait
    exit 1
  fi
done
But anyway, this is cumbersome and it's reinventing the wheel. Install GNU xargs (or, even better, parallel) and make sure your docker container has a bash shell. Then just export the function and run it in subprocesses with xargs:
...
export -f task
printf "%s\n" name1 name2 | xargs -d '\n' -n1 -P0 bash -xeuo pipefail -c 'task "$@"' --
You may want to research https://man7.org/linux/man-pages/man1/xargs.1.html or even https://www.gnu.org/software/bash/manual/html_node/Job-Control-Basics.html .
Definitely, instead of writing long scripts in the .gitlab-ci.yml file, move it all to a dedicated script file so that you can test the run locally; see the sketch below. Check your scripts with shellcheck. And anyway, using docker-compose might also be simpler.
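For example, a minimal sketch of that layout, with a hypothetical script file ci/build_push.sh committed next to the .gitlab-ci.yml (this assumes GNU xargs and a bash shell are available in the job image, as noted above):

# .gitlab-ci.yml (job excerpt)
when: manual
script:
  - bash ci/build_push.sh name1 name2

# ci/build_push.sh
#!/bin/bash
set -euo pipefail
task() {
  local image="$CI_REGISTRY/$GITLAB_REPO/$1:${BUILD_TAG}"
  docker build -t "$image" -f "$1/Dockerfile" .
  docker push "$image"
}
export -f task
# one bash process per package name passed on the command line, all running in parallel
printf '%s\n' "$@" | xargs -d '\n' -n1 -P0 bash -c 'task "$@"' --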
Anyway, this is all odd and I for sure wouldn't do it that way. GitLab Runner is already the tool that gives you parallelization: it runs multiple jobs in the same stage in parallel. Just run two jobs.
.todo:
  script:
    - export IMAGE="$CI_REGISTRY/$GITLAB_REPO/${CI_JOB_NAME}:${BUILD_TAG}"
    - docker build -t ${IMAGE} -f ${CI_JOB_NAME}/Dockerfile .
    - docker push ${IMAGE}

name1:
  extends: .todo

name2:
  extends: .todo
Such an approach gives your pipeline a visible indication of which specific task failed, so you don't need to scroll through the mangled, unreadable logs of two processes running in parallel. Just one job with one task.
From my gitlab-ci I need to pass an environment variable with the Spring profiles to docker-compose. This variable is defined for each server environment where we deploy.
So, in my gitlab-ci I have this:
.deploy_template: &deploy_template
  script:
    - echo $ENV_SPRING_PROFILES
    # start containers
    - $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && SPRING_ACTIVE_PROFILES=$ENV_SPRING_PROFILES && DOCKER_HOST=tcp://localhost:2375 && docker-compose up -d"
deploy_811AC:
  <<: *deploy_template
  stage: deploy
  when: manual
  only:
    - /^feature.*$/
    - /^fix.*$/
  environment:
    name: ccvli-ecp626
    url: 10.135.XXX.XXX
  variables:
    ENV_SPRING_PROFILES: "mock"
When the runner runs, I can see the value of the variable with echo $ENV_SPRING_PROFILES. However, it does not seem to be replaced in the SSH command, as docker-compose says the variable SPRING_ACTIVE_PROFILES is empty.
It is becoming a kind of nightmare, so any clue is welcome.
Thanks in advance
I do not have much experience with GitLab CI, but I think the way variables are "declared" is not with "=" but like this:
variables:
  SPRING_ACTIVE_PROFILES: $SPRING_ACTIVE_PROFILES
script:
  ...
  - $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && $ENV_SPRING_PROFILES && DOCKER_HOST=tcp://localhost:2375 && docker-compose up -d"
Once it is declared in the first block, you can start using it in the code :)
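As a general shell note, and only a sketch rather than part of the answer above: assignments chained with && only set variables in the remote shell session and are not passed into docker-compose's environment; writing them as a prefix of the docker-compose command is one common way to pass them, for example:

- $SSH_COMMAND user@$CI_ENVIRONMENT_URL "cd $REMOTE_DEPLOY_DIR/docker && SPRING_ACTIVE_PROFILES=$ENV_SPRING_PROFILES DOCKER_HOST=tcp://localhost:2375 docker-compose up -d"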
I'm using the bash shell provided by Git for Windows with Docker Toolbox for Windows. I want to export a string representing a Unix path to an environment variable and then use it in a docker container. Something like:
export MY_VAR=/my/path; docker-compose up
The problem is that in my container the variable will be something like:
echo $MY_VAR # prints c:/Program Files/Git/my/path
So it seems the shell (my guess) recognizes the string as a path and converts it to Windows format. Is there a way to stop this?
I've attempted to use MSYS_NO_PATHCONV=1:
MSYS_NO_PATHCONV=1; export LOG_PATH=/my/path; docker-compose up
But it did not have any effect.
I don't think it's an issue with my docker-compose.yml or Dockerfile, but I'll attach them in case someone is interested.
My Dockerfile:
FROM node:8-slim
RUN mkdir /test \
&& chown node:node /test
USER node
ENTRYPOINT [ "/bin/bash" ]
My docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
    image: test
    environment:
      - MY_VAR
    volumes:
      - ${MY_VAR}:/test
    command: -c 'sleep 100000'
The final goal here is to make a directory on the host machine accessible from the docker container (for logs and such). The directory should be set by an environment variable. Hard-coding the directory in the docker-compose.yml does work, just not for my use case.
If you want your command docker-compose up to be run with MSYS_NO_PATHCONV=1, you have two options:
export LOG_PATH=/c/Windows; export MSYS_NO_PATHCONV=1; docker-compose up. This will affect your whole bash session, as the variable is exported.
export LOG_PATH=/c/Windows; MSYS_NO_PATHCONV=1 docker-compose up (note I removed one semicolon intentionally). This will set MSYS_NO_PATHCONV only in the context of the command being run.
Test it with:
$ export LOG_PATH=/c/Windows ; cmd "/c echo %LOG_PATH%";
C:/Windows --> Fails
$ export LOG_PATH=/c/Windows ; MSYS_NO_PATHCONV=1 cmd "/c echo %LOG_PATH%"
/c/Windows --> Success
$ export LOG_PATH=/c/Windows ; export MSYS_NO_PATHCONV=1; cmd "/c echo %LOG_PATH%";
/c/Windows --> Success but MSYS_NO_PATHCONV is now "permanently" set
It seems a workaround is to remove the first / from the string and add it in the docker-compose.yml instead.
New docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
    image: test
    environment:
      - MY_VAR
    volumes:
      - /${MY_VAR}:/test # added '/' to the beginning of the line
    command: -c 'sleep 100000'
and then starting the container with:
export MY_VAR=my/path; docker-compose up # removed the '/' from the beginning of the path.
This seems more like a "lucky" workaround than a proper solution, because when I build this on other systems I'll have to remember to remove the /. Doable, but a bit annoying. Maybe someone has a better idea.
My problem is that the bash script I created gets the error "/bin/sh: eval: line 88: ./deploy.sh: not found" on GitLab. Below is my sample .gitlab-ci.yml.
I suspect that GitLab CI does not support bash scripts.
image: docker:latest

variables:
  IMAGE_NAME: registry.gitlab.com/$PROJECT_OWNER/$PROJECT_NAME
  DOCKER_DRIVER: overlay

services:
  - docker:dind

stages:
  - deploy

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true

production-deploy:
  stage: deploy
  only:
    - master@$PROJECT_OWNER/$PROJECT_NAME
  script:
    - echo "$PRODUCTION_DOCKER_FILE" > Dockerfile
    - docker build --cache-from $IMAGE_NAME:$CI_BUILD_REF_NAME -t $IMAGE_NAME:$CI_BUILD_REF_NAME .
    - docker push $IMAGE_NAME:$CI_BUILD_REF_NAME
    - echo "$PEM_FILE" > deploy.pem
    - echo "$PRODUCTION_DEPLOY" > deploy.sh
    - chmod 600 deploy.pem
    - chmod 700 deploy.sh
    - ./deploy.sh
  environment:
    name: production
    url: https://www.example.com
And this is my deploy.sh:
#!/bin/bash
ssh -o StrictHostKeyChecking=no -i deploy.pem ec2-user@targetIPAddress << 'ENDSSH'
# commands go here
ENDSSH
All I want is to execute deploy.sh after docker push, but unfortunately I get this error about /bin/bash.
I really need your help; I will be thankful if you can solve my problem with the "/bin/sh: eval: line 88: ./deploy.sh: not found" error in GitLab CI.
This is probably related to the fact that you are using Docker-in-Docker (docker:dind). Your deploy.sh requests /bin/bash as the script interpreter, which is NOT present in that image.
You can test this locally on your computer with Docker:
docker run --rm -it docker:dind bash
It will report an error. So rewrite the first line of deploy.sh to:
#!/bin/sh
After fixing that you will run into the problem that the previous answer is addressing: ssh is not installed either. You will need to fix that too!
docker:latest is based on Alpine Linux, which is very minimalistic and does not have a lot installed by default. For example, ssh is not available out of the box, so if you want to use ssh commands you need to install it first. In your before_script, add:
- apk update && apk add openssh
Thanks. This worked for me after also adding bash:
before_script:
  - apk update && apk add bash
Let me know if that still doesn't work for you.
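Putting the two fixes together, a minimal sketch of the before_script for the job above (assuming the rest of the config stays the same; if you keep the #!/bin/sh shebang in deploy.sh you can skip installing bash):

before_script:
  - apk update && apk add bash openssh
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true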