Multiple commands in the same build step in Google Cloud Builder - google-cloud-build

I want to run our automated backend test suite in the Google Cloud Build environment. However, naturally, I ran into the need to install various dependencies and prerequisites within Cloud Build so that our final test runner (php tests/run) can run.
Here's my current cloudbuild.yaml:
steps:
- name: 'ubuntu'
  args: ['bash', './scripts/install-prerequisites.sh', '&&', 'composer install -n -q --prefer-dist', '&&', 'php init --overwrite=y', '&&', 'php tests/run']
At the moment, the chaining of multiple commands doesn't work; the only thing that gets executed is the bash ./scripts/install-prerequisites.sh part. How do I get all of these commands to execute in order?

A more readable way to run the script is to use the breakout syntax (source: mastering cloud build syntax):
steps:
- name: 'ubuntu'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    ./scripts/install-prerequisites.sh \
    && composer install -n -q --prefer-dist \
    && php init --overwrite=y \
    && php tests/run
However, this only works if your build step image has the appropriate dependencies installed (php, composer).
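If you don't want to maintain a custom builder image, a minimal sketch of the same step pointed at the official composer image from Docker Hub (assuming its bundled PHP is enough for your project):
steps:
- name: 'composer:2'  # public Docker Hub image that ships PHP and composer
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    ./scripts/install-prerequisites.sh \
    && composer install -n -q --prefer-dist \
    && php init --overwrite=y \
    && php tests/run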

You have two options to achieve this at the moment, I believe:
Create a script with the sequence of commands you'd like, and call the script directly:
# cloudbuild.yaml
steps:
- name: 'ubuntu'
  args: ['./my-awesome-script.sh']

# my-awesome-script.sh
#!/usr/bin/env bash
set -eo pipefail  # abort on the first failing command
./scripts/install-prerequisites.sh
composer install -n -q --prefer-dist
php init --overwrite=y
php tests/run
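Note that the ubuntu step executes the script directly, so the file needs its executable bit committed to the repo. One way (assuming the script lives at the repo root):
chmod +x my-awesome-script.sh
git add my-awesome-script.sh   # git records the +x mode bit
git commit -m "Make build script executable"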
Call bash -c with all the commands you'd like to run:
steps:
- name: 'ubuntu'
  args: ['bash', '-c', './scripts/install-prerequisites.sh && composer install -n -q --prefer-dist && php init --overwrite=y && php tests/run']

See:
https://cloud.google.com/cloud-build/docs/configuring-builds/configure-build-step-order
https://cloud.google.com/cloud-build/docs/configuring-builds/store-images-artifacts
https://github.com/GoogleCloudPlatform/cloud-builders-community
https://github.com/GoogleCloudPlatform/cloud-builders
By default, build steps run sequentially, but you can configure them to run concurrently.
The order of the build steps in the steps field relates to the order in which the steps are executed. Steps will run serially or concurrently based on the dependencies defined in their waitFor fields.
A step is dependent on every id in its waitFor and will not launch until each dependency has completed successfully.
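For illustration, a minimal sketch of waitFor (the ids and echo commands are placeholders): the first two steps run concurrently, and the last one waits for both:
steps:
- name: 'ubuntu'
  args: ['echo', 'A']
  id: 'step-a'
- name: 'ubuntu'
  args: ['echo', 'B']
  id: 'step-b'
  waitFor: ['-']  # '-' means start immediately, in parallel with step-a
- name: 'ubuntu'
  args: ['echo', 'done']
  waitFor: ['step-a', 'step-b']  # starts only after both ids succeed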
So you can simply separate each command into its own step, like this:
steps:
- name: 'ubuntu'
  args: ['bash', './scripts/install-prerequisites.sh']
  id: 'bash ./scripts/install-prerequisites.sh'
- name: 'ubuntu'
  args: ['composer', 'install', '-n', '-q', '--prefer-dist']
  id: 'composer install -n -q --prefer-dist'
- name: 'ubuntu'
  args: ['php', 'init', '--overwrite=y']
  id: 'php init --overwrite=y'
- name: 'ubuntu'
  args: ['php', 'tests/run']
  id: 'php tests/run'
By the way, can the ubuntu image even run the php and composer commands? I think you should use or build a Docker image that can run them. The composer docker image is used like this:
steps:
- name: 'gcr.io/$PROJECT_ID/composer'
  args: ['install']
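Note that gcr.io/$PROJECT_ID/composer is not a public image; assuming the usual cloud-builders-community workflow, you build and push it into your own project first:
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git
cd cloud-builders-community/composer
gcloud builds submit --config cloudbuild.yaml .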

Related

See Gitlab CI/CD pipeline on my local machine [duplicate]

If a GitLab project is configured on GitLab CI, is there a way to run the build locally?
I don't want to turn my laptop into a build "runner", I just want to take advantage of Docker and .gitlab-ci.yml to run tests locally (i.e. it's all pre-configured). Another advantage of that is that I'm sure that I'm using the same environment locally and on CI.
Here is an example of how to run Travis builds locally using Docker, I'm looking for something similar with GitLab.
As of a few months ago, this is possible using gitlab-runner:
gitlab-runner exec docker my-job-name
Note that you need both docker and gitlab-runner installed on your computer to get this working.
You also need the image key defined in your .gitlab-ci.yml file; otherwise it won't work.
Here's the line I currently use for testing locally using gitlab-runner:
gitlab-runner exec docker test --docker-volumes "/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"
Note: You can avoid adding --docker-volumes with your key by setting it as a default in /etc/gitlab-runner/config.toml. See the official documentation for more details. Also, use gitlab-runner exec docker --help to see all docker-based runner options (like variables, volumes, networks, etc.).
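For reference, a rough sketch of that config.toml default (the runner name, image and paths are illustrative; the rest of the [[runners]] section comes from your registration):
[[runners]]
  name = "local"
  executor = "docker"
  [runners.docker]
    image = "alpine"
    volumes = ["/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro", "/cache"]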
Due to the confusion in the comments, I'm pasting the gitlab-runner --help result here, so you can see that gitlab-runner can run builds locally:
gitlab-runner --help
NAME:
   gitlab-runner - a GitLab Runner
USAGE:
   gitlab-runner [global options] command [command options] [arguments...]
VERSION:
   1.1.0~beta.135.g24365ee (24365ee)
AUTHOR(S):
   Kamil Trzciński <ayufan@ayufan.eu>
COMMANDS:
   exec     execute a build locally
   [...]
GLOBAL OPTIONS:
   --debug  debug mode [$DEBUG]
   [...]
As you can see, the exec command is to execute a build locally.
Even though there was an issue to deprecate the current gitlab-runner exec behavior, it ended up being reconsidered and a new version with greater features will replace the current exec functionality.
Note that this process is to use your own machine to run the tests using docker containers. This is not how you define custom runners. To do that, just go to your repo's CI/CD settings and read the documentation there. If you want to ensure your runner is executed instead of one from gitlab.com, add a custom and unique tag to your runner, ensure it only runs tagged jobs, and tag all the jobs you want your runner to be responsible for.
I use this docker-based approach:
Edit: 2022-10
docker run --entrypoint bash --rm -w $PWD -v $PWD:$PWD -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest -c 'git config --global --add safe.directory "*";gitlab-runner exec docker test'
For all git versions > 2.35.2 you must add safe.directory within the container to avoid fatal: detected dubious ownership in repository at.... This is also true for patched git versions < 2.35.2. The old command will not work anymore.
Details
0. Create a git repo to test this answer
mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."
1. Go to your git directory
cd my-git-project
2. Create a .gitlab-ci.yml
Example .gitlab-ci.yml
image: alpine

test:
  script:
    - echo "Hello Gitlab-Runner"
3. Create a docker container with your project dir mounted
docker run -d \
--name gitlab-runner \
--restart always \
-v $PWD:$PWD \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
(-d) run container in background and print container ID
(--restart always) optional: automatically restarts the runner container (e.g. after a reboot)
(-v $PWD:$PWD) Mount the current directory into the same path inside the container. Note: On Windows you could bind your dir to a fixed location, e.g. -v ${PWD}:/opt/myapp. Also, $PWD only works in PowerShell, not in cmd.
(-v /var/run/docker.sock:/var/run/docker.sock) This gives the container access to the docker socket of the host so it can start "sibling containers" (e.g. Alpine).
(gitlab/gitlab-runner:latest) Just the latest available image from dockerhub.
4. Execute with
Avoid fatal: detected dubious ownership in repository at... More info
docker exec -it -w $PWD gitlab-runner git config --global --add safe.directory "*"
Actual execution
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test
#               ^       ^             ^             ^    ^      ^
#               |       |             |             |    |      |
#              (a)     (b)           (c)           (d)  (e)    (f)
(a) Working dir within the container. Note: On Windows you could use a fixed location, e.g. /opt/myapp.
(b) Name of the docker container
(c) Execute the command "gitlab-runner" within the docker container
(d)(e)(f) run gitlab-runner with the "docker executor" and run a job named "test"
5. Prints
...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...
Note: The runner will only work on the committed state of your code base. Uncommitted changes will be ignored. Exception: the .gitlab-ci.yml itself does not have to be committed to be taken into account.
Note: There are some limitations running locally. Have a look at limitations of gitlab runner locally.
I'm currently working on making a gitlab runner that works locally.
It's still in the early phases, but eventually it will become very relevant.
It doesn't seem like GitLab wants to, or has time to, make this, so here you go:
https://github.com/firecow/gitlab-runner-local
If you are running GitLab using the docker image from https://hub.docker.com/r/gitlab/gitlab-ce, it's possible to run pipelines by exposing the local docker.sock with a volume option: -v /var/run/docker.sock:/var/run/docker.sock. Adding this option to the GitLab container will allow your workers to access the docker instance on the host.
The GitLab runner appears to not work on Windows yet and there is an open issue to resolve this.
So, in the meantime I am moving my script code out to a bash script, which I can easily map to a docker container running locally and execute.
In this case I want to build a docker container in my job, so I create a script 'build':
#!/bin/bash
docker build --pull -t myimage:myversion .
in my .gitlab-ci.yaml I execute the script:
image: docker:latest

services:
  - docker:dind

before_script:
  - apk add bash

build:
  stage: build
  script:
    - chmod 755 build
    - ./build
To run the script locally using powershell I can start the required image and map the volume with the source files:
$containerId = docker run --privileged -d -v ${PWD}:/src docker:dind
install bash if not present:
docker exec $containerId apk add bash
Set permissions on the bash script:
docker exec -it $containerId chmod 755 /src/build
Execute the script:
docker exec -it --workdir /src $containerId bash -c './build'
Then stop the container:
docker stop $containerId
And finally clean up the container:
docker container rm $containerId
Another approach is to have a local build tool installed on both your PC and your server.
Your .gitlab-ci.yml then simply calls your preferred build tool.
Here is an example .gitlab-ci.yml that I use with nuke.build:
stages:
  - build
  - test
  - pack

variables:
  TERM: "xterm" # Use Unix ASCII color codes on Nuke

before_script:
  - CHCP 65001 # Set correct code page to avoid charset issues

.job_template: &job_definition
  except:
    - tags

build:
  <<: *job_definition
  stage: build
  script:
    - "./build.ps1"

test:
  <<: *job_definition
  stage: test
  script:
    - "./build.ps1 test"
  variables:
    GIT_CHECKOUT: "false"

pack:
  <<: *job_definition
  stage: pack
  script:
    - "./build.ps1 pack"
  variables:
    GIT_CHECKOUT: "false"
  only:
    - master
  artifacts:
    paths:
      - output/
And in nuke.build I've defined three targets named after the three stages (build, test, pack).
This way you have a reproducible setup (everything else is configured with your build tool) and you can directly test the different targets of your build tool.
(I can call .\build.ps1, .\build.ps1 test and .\build.ps1 pack whenever I want.)
I am on Windows, using VSCode with WSL.
I didn't want to register my work PC as a runner, so instead I'm running my yaml stages locally to test them out before I push them:
$ sudo apt-get install gitlab-runner
$ gitlab-runner exec shell build
My .gitlab-ci.yml:
image: node:10.19.0 # https://hub.docker.com/_/node/
# image: node:latest

cache:
  # untracked: true
  key: project-name
  # key: ${CI_COMMIT_REF_SLUG} # per branch
  # key:
  #   files:
  #     - package-lock.json # only update cache when this file changes (not working) #jkr
  paths:
    - .npm/
    - node_modules
    - build

stages:
  - prepare # prepares builds, makes build needed for testing
  - test # uses test:build specifically #jkr
  - build
  - deploy

# before_install:
before_script:
  - npm ci --cache .npm --prefer-offline

prepare:
  stage: prepare
  needs: []
  script:
    - npm install

test:
  stage: test
  needs: [prepare]
  except:
    - schedules
  tags:
    - linux
  script:
    - npm run build:dev
    - npm run test:cicd-deps
    - npm run test:cicd # runs puppeteer tests #jkr
  artifacts:
    reports:
      junit: junit.xml
    paths:
      - coverage/

build-staging:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build:stage
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-dev:
  stage: deploy
  needs: [build-staging]
  tags: [linux]
  only:
    - schedules
    # # - branches@gitlab-org/gitlab
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    # temporarily using 'verify-certificate no'
    # for more on verify-certificate #jkr: https://www.versatilewebsolutions.com/blog/2014/04/lftp-ftps-and-certificate-verification.html
    # variables do not work with 'single quotes' unless they are "'surrounded by doubles'"
    - lftp -e "set ssl:verify-certificate no; open mediajackagency.com; user $LFTP_USERNAME $LFTP_PASSWORD; mirror --reverse --verbose build/ /var/www/domains/dev/clients/client/project/build/; bye"
  # environment:
  #   name: staging
  #   url: http://dev.mediajackagency.com/clients/client/build
  #   # url: https://stg2.client.co
  when: manual
  allow_failure: true

build-production:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-client:
  stage: deploy
  needs: [build-production]
  tags: [linux]
  only:
    - schedules
    # - master
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    - sh deploy-prod
  environment:
    name: production
    url: http://www.client.co
  when: manual
  allow_failure: true
The idea is to keep the check commands outside of .gitlab-ci.yml. I use a Makefile to run something like make check, and my .gitlab-ci.yml runs the same make commands that I use locally to check various things before committing.
This way you'll have one place with all/most of your commands (the Makefile), and .gitlab-ci.yml will contain only CI-related stuff.
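A minimal sketch of that layout (the target names and npm commands are placeholders for whatever you check; recipe lines must be indented with a tab):

# Makefile
.PHONY: check lint test
check: lint test

lint:
	npm run lint

test:
	npm test

# .gitlab-ci.yml
check:
  image: node:lts
  script:
    - make check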
I have written a tool to run all GitLab CI jobs locally without having to commit or push, simply with the command ci-toolbox my_job_name.
The URL of the project : https://gitlab.com/mbedsys/citbx4gitlab
Years ago I built this simple solution with Makefile and docker-compose to run the gitlab runner in docker. You can use it to execute jobs locally as well, and it should work on all systems where docker works:
https://gitlab.com/1oglop1/gitlab-runner-docker
There are a few things to change in the docker-compose.override.yaml:
version: "3"
services:
runner:
working_dir: <your project dir>
environment:
- REGISTRATION_TOKEN=<token if you want to register>
volumes:
- "<your project dir>:<your project dir>"
Then inside your project you can execute it the same way as mentioned in other answers:
docker exec -it -w $PWD runner gitlab-runner exec <commands>..
I recommend using gitlab-ci-local
https://github.com/firecow/gitlab-ci-local
It's able to run specific jobs as well.
It's a very cool project and I have used it to run simple pipelines on my laptop.
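Assuming the npm install documented in that repo, usage looks roughly like this (the job name is illustrative):
npm install -g gitlab-ci-local
gitlab-ci-local --list   # show jobs parsed from .gitlab-ci.yml
gitlab-ci-local test     # run only the job named "test"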

Can someone look at my yaml file for code deployment using Bitbucket Pipelines?

This is my first attempt at setting up pipelines or even using any CI/CD tool. So, following the documentation at Bitbucket, I added a bitbucket-pipelines.yml file to the root of my Laravel application for a build. Here is the file:
image: php:7.4-fpm

pipelines:
  default:
    - step:
        name: Build and test
        caches:
          - composer
        script:
          - apt-get update && apt-get install -qy git curl libmcrypt-dev mariadb-client ghostscript
          - yes | pecl install mcrypt-1.0.3
          - docker-php-ext-install pdo_mysql bcmath exif
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - ln -f -s .env.pipelines .env
          - php artisan migrate
          - ./vendor/bin/phpunit
        services:
          - mysql
          - redis

definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: "laravel-pipeline"
        MYSQL_RANDOM_ROOT_PASSWORD: "yes"
        MYSQL_USER: "homestead"
        MYSQL_PASSWORD: "secret"
    redis:
      image: redis
The above works fine for building the application, running tests, etc. But when I add the step below to deploy using the scp pipe, I get a notice saying either that I need to include an image or, at times, that there is a bad indentation of a mapping entry.
- step:
    name: Deploy to test
    deployment: test
  # trigger: manual # Uncomment to make this a manual deployment.
  script:
    - pipe: atlassian/scp-deploy:0.3.13
      variables:
        USER: '${remoteUser}'
        SERVER: '${server}'
        REMOTE_PATH: '${remote}'
        LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
I don't really know yaml, and this is my first time working with a CI/CD tool, so I am lost. Can someone show me what I am doing wrong?
Your indentation for name and deployment is not the same as for the script. Try putting them all at the same indentation level, like this:
- step:
    name: Deploy to test
    deployment: test
    script:
      - pipe: atlassian/scp-deploy:0.3.13
        variables:
          USER: '${remoteUser}'
          SERVER: '${server}'
          REMOTE_PATH: '${remote}'
          LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'

Is it possible to cache images for docker-compose in CircleCI?

I'm aware of how to cache dependencies in CircleCI:
- restore_cache:
    keys:
      - my-project-v1-{{ checksum "project.clj" }}
      # fallback to using the latest cache if no exact match is found
      - my-project-v1
- run: lein with-profile test deps
- save_cache:
    paths:
      - ~/.m2
    key: my-project-v1-{{ checksum "project.clj" }}
I'm also aware of how to use docker-compose:
- run:
    name: Install Docker Compose
    command: |
      curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
      chmod +x ~/docker-compose
      sudo mv ~/docker-compose /usr/local/bin/docker-compose
- run: docker-compose up -d
However, every time the job runs docker-compose up -d, it downloads the images specified in the docker-compose.yml file. Is there a way to make CircleCI download them once and then reuse them (until docker-compose.yml is modified)?
You can use docker layer caching to achieve this by adding the following as one of the first steps in your job:
- setup_remote_docker:
    docker_layer_caching: true
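In context, a minimal sketch of a job using it (the convenience image is an assumption; any docker-capable executor works):
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true
      - run: docker-compose up -d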

docker-compose deployment configuration for Circle CI

I am using Circle CI to deploy a microservice to a Digital Ocean droplet and had a few questions about whether my approach is the right one.
My microservice is built using docker-compose, and therefore requires a docker-compose.yml file to pull and start the images that constitute it.
In a nutshell, my deployment approach would be:
1. Merging a branch to master kicks off a CircleCI build
2. CircleCI runs unit tests
3. Upon all tests passing, docker-compose build and docker-compose push to Docker Hub
4. Stop all running images of that service on the remote server
5. Remove dangling images and local networks
6. Download the relevant docker-compose.yml, Dockerfile and docker-compose.env files
7. Pull using docker-compose pull
8. Start images using docker-compose up
I am using this configuration in CircleCI:
version: 2.1
jobs:
  build:
    docker:
      - image: "circleci/node:10.16.0"
    steps:
      - checkout
      - run:
          name: Update to latest npm version
          command: "sudo npm install -g npm@latest"
      - restore_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - run:
          name: Install `docker-compose`
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build using `docker-compose`
          command: |
            docker-compose build
      - run:
          name: Login for Docker Hub
          command: |
            echo "$DOCKER_PASSWORD" | docker login --username $DOCKER_USERNAME --password-stdin
      - run:
          name: Push to Docker Hub
          command: |
            docker-compose push
      - run: ssh-keyscan $DIGITALOCEAN_HOST >> ~/.ssh/known_hosts
      - add_ssh_keys:
          fingerprints:
            - fo:of:fe:ef:af
      - run:
          name: Remove currently running containers
          command: |
            ssh root@$DIGITALOCEAN_HOST ./deploy_image.sh
I am planning on creating a bash script to handle steps 4 to 8 from my list above.
Is it a good idea to have a script take care of the Docker steps?
Or is there a better way to have a more "native" CircleCI configuration?
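For what it's worth, a hypothetical deploy_image.sh covering steps 4 to 8 might look like this (the service path and the source of the compose files are placeholders):

#!/usr/bin/env bash
set -euo pipefail
cd /opt/myservice

docker-compose down        # 4. stop the running containers of this service
docker image prune -f      # 5. remove dangling images...
docker network prune -f    #    ...and unused local networks

# 6. fetch the current docker-compose.yml / Dockerfile / docker-compose.env
#    (e.g. scp or curl from wherever your pipeline publishes them)

docker-compose pull        # 7. pull the images pushed by CI
docker-compose up -d       # 8. start the service detached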

Execute bash script when running docker-compose in gitlab CI

I've got the following .gitlab-ci.yml file:
image:
  name: docker/compose:1.24.1
  entrypoint: ["/bin/sh", "-l", "-c"]

services:
  - docker:dind

stages:
  - test
  - deploy

pytest:
  stage: test
  script:
    - cd tests/
    - docker-compose -f docker-compose.test.yml up --abort-on-container-exit --exit-code-from pytest
which runs the docker-compose.test.yml file, which looks like this:
version: '3'
services:
  pytest:
    build: ..
    command: bash -c "/wait-for-it.sh dynamodb:8000 -- && cd /tests/ && pytest -s"
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - '8000:8000'
    command: -jar DynamoDBLocal.jar -inMemory -port 8000
In this case, pytest waits for dynamodb to be up and running and then runs the python tests. This works well on my machine.
However, when Gitlab CI actually runs it, I get the following error:
bash: /wait-for-it.sh: Permission denied
How can I avoid this issue? Using chmod -x, the file is not found.
You need to make wait-for-it.sh executable; in your Dockerfile, add:
RUN chmod +x /path/to/wait-for-it.sh
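Alternatively, you can record the executable bit in git itself so it survives the runner's checkout (the path is illustrative):
git update-index --chmod=+x wait-for-it.sh
git commit -m "Make wait-for-it.sh executable"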
