dynamic image name in CloudBuilder - bash

I am building Docker images with dynamic tags in Cloud Build. I would like to then run that same image, but I'm having trouble getting it to work.
Here's what I've got:
steps:
- id: "Store value for docker image tag"
  name: ubuntu
  entrypoint: bash
  args:
  - -c
  - date +%Y%m%d%H%M%S > /workspace/image_tag.txt
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: bash
  args: [ '-c', 'docker build -t gcr.io/blah/my_image:$(cat /workspace/image_tag.txt) -f src/Dockerfile ./src' ]
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: bash
  args: [ '-c', 'docker push gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)' ]
...
- name: 'gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)'
  entrypoint: /bin/sh
  args:
  - -c
  # - execute some commands and script within the image...
(gcr.io/blah/my_image is a custom builder)
Obviously, the name 'gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)' does not work; I get an error:
Your build failed to run: generic::invalid_argument: invalid build: invalid build step name "gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)": could not parse reference: gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)
The image gets pushed fine, but I want a step that runs the image that was pushed earlier. Did I just mess up the syntax? If I can't do it this easily, is there some other way to do this?

If you want the content to be interpreted, use " instead of ', like this:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: bash
  args: [ '-c', "docker build -t gcr.io/blah/my_image:$(cat /workspace/image_tag.txt) -f src/Dockerfile ./src" ]
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: bash
  args: [ '-c', "docker push gcr.io/blah/my_image:$(cat /workspace/image_tag.txt)" ]

Related

How to run bash command using cloudbuild

Trying to run a bash command that creates a directory in GCP but getting an error
"create_directory": Error response from daemon: manifest for gcr.io/cloud-builders/bash:latest not found: manifest unknown: Failed to fetch "latest" from request "/v2/cloud-builders/bash/manifests/latest".
- name: 'gcr.io/cloud-builders/bash:latest'
  entrypoint: 'bash'
  args:
    - '-c'
    - 'mkdir my_dir'
  id: "create_directory"
I'm able to create it in cloud shell by running
mkdir my_dir
How do I resolve this?
Usually I do something like this (pay attention to the builder name, and note the command doesn't need quote signs):
- id: "some identifier"
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      echo "something"

Cloudbuild - build docker image with custom variable from a different step

I want to achieve the following build process:
- decide the value of an environment variable depending on the build branch
- persist this value through different build steps
- pass this variable as a build-arg to docker build
Here is some of the cloudbuild config I've got:
- id: 'Get env from branch'
  name: bash
  args:
    - '-c'
    - |-
      environment="dev"
      if [[ "${BRANCH_NAME}" == "staging" ]]; then
        environment="stg"
      elif [[ "${BRANCH_NAME}" == "master" ]]; then
        environment="prd"
      fi
      echo $environment > /workspace/environment.txt
- id: 'Build Docker image'
  name: bash
  dir: $_SERVICE_DIR
  args:
    - '-c'
    - |-
      environment=$(cat /workspace/environment.txt)
      echo "===== ENV: $environment"
      docker build --build-arg ENVIRONMENT=$environment -t gcr.io/${_GCR_PROJECT_ID}/${_SERVICE_NAME}/${COMMIT_SHA} .
The problem lies in the 2nd step. If I use the bash step image, I have no docker executable with which to build my custom image.
And if I use the gcr.io/cloud-builders/docker step image, I can't execute bash scripts: in the args field I can only pass arguments to the docker executable, so I cannot extract the value of environment that I've persisted through the steps of the build.
The way I managed to accomplish both is to use my own custom, pre-built image, which contains both the bash and docker executables. I keep that image in the container registry and use it as the build step image. But this requires some custom work on my side. I was wondering if there is a better, more standardized way with built-in tools from Cloud Build.
Sources:
how to run inline bash scripts
how to persist values through build steps
You can change the default entrypoint by adding the entrypoint: parameter:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - -c
    - |
      echo $PROJECT_ID
      environment=$(cat /workspace/environment.txt)
      echo "===== ENV: $environment"
      docker build --build-arg ENVIRONMENT=$environment -t gcr.io/${_GCR_PROJECT_ID}/${_SERVICE_NAME}/${COMMIT_SHA} .
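Note that --build-arg ENVIRONMENT=... only has an effect if the Dockerfile declares a matching ARG. A minimal sketch of the consuming side (this Dockerfile is not shown in the question, so the base image and usage here are assumptions):

# Hypothetical Dockerfile fragment: consume the build-arg and keep it as an env var
FROM alpine:3.19
ARG ENVIRONMENT=dev
ENV ENVIRONMENT=${ENVIRONMENT}
RUN echo "building for ${ENVIRONMENT}"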

Send variables into docker container to use in a script

I am running a script in the CI/CD pipeline. The goal is to get a string to work with.
When I get that result, I save it into a variable and record the result in the YAML file for the Dockerfile.
I want to pass that variable from the CI environment into the docker-compose container. So I am trying to export it the way other things are exported, but it doesn't work:
ci/pdf/jenkins-changes.sh
LOG="$(cat ".log")"
export LOG
I have added a variables.env file that looks like this:
LOG=LOG
And then modified the docker-compose.yaml to read the var:
pdf:
  image: thisimage/this
  build:
    context: ../
    dockerfile: ./docker/Dockerfile.name
    args:
      git_branch: ${GIT_BRANCH}
  env_file:
    - variables.env
  environment:
    - LOG=${LOG}
  volumes:
    - do-build:/src/do-build
And in the Dockerfile that finally builds the container, I have also declared it:
FROM ubuntu:16.04 as pdf-builder
ARG log
ENV log=${log}
RUN LOG=${log}
RUN export $LOG
And right after, I run the script.sh that requires the variable; however, it returns "unbound variable" and breaks.
LOG=${log}
echo ${LOG}
The answer to this question was:
ci/pdf/jenkins-changes.sh
LOG="$(cat ".log")"
export LOG
Then pass it as an argument, instead of a variable:
pdf:
  image: thisimage/this
  build:
    context: ../
    dockerfile: ./docker/Dockerfile.name
    args:
      git_branch: ${GIT_BRANCH}
  env_file:
    - variables.env
  environment:
    - LOG=${LOG}
  volumes:
    - do-build:/src/do-build
And then, in the Dockerfile, call it and define it:
ARG log
This should leave it global for any script to use.
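A minimal sketch of what "pass it as an argument" could look like in the compose file (the log build-arg is an assumption; the original answer only states the idea):

pdf:
  image: thisimage/this
  build:
    context: ../
    dockerfile: ./docker/Dockerfile.name
    args:
      git_branch: ${GIT_BRANCH}
      log: ${LOG}    # assumed: LOG was exported by ci/pdf/jenkins-changes.sh

The Dockerfile then picks the value up with ARG log and, for example, ENV LOG=${log}, so later RUN steps and scripts can see it.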

CircleCI test failing due to the "hstore" Postgres extension not being installed

I'm trying to get a certain commit to pass the tests on CircleCI, but there are some differences from my local environment that I'm still trying to resolve. Here is the .circleci/config.yml file:
version: 2
jobs:
  build:
    working_directory: ~/lucy/lucy_web/
    docker:
      - image: python:3.6.0
        environment:
          DATABASE_URL: postgresql://my_app:my_password@localhost/my_db?sslmode=disable
      - image: jannkleen/docker-postgres-gis-hstore
        environment:
          POSTGRES_USER: my_app
          POSTGRES_DB: my_db
          POSTGRES_PASSWORD: my_password
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "lucy-web/requirements.txt" }}
      - run:
          name: Install Python deps in a venv
          command: |
            cd lucy-web
            python3 -m venv venv
            . venv/bin/activate
            pip3 install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "lucy-web/requirements.txt" }}
          paths:
            - "venv"
      - run:
          command: |
            cd lucy-web
            source venv/bin/activate
            python manage.py compilescss --verbosity 0
            python manage.py collectstatic --clear --no-input --verbosity 0
            python manage.py makemigrations --no-input --verbosity 0
            python manage.py migrate --no-input --verbosity 0
            python manage.py test
      - store_artifacts:
          path: test-reports/
          destination: tr1
      - store_test_results:
          path: test-reports/
The problem is that the tests error out due to the hstore type not existing:
django.db.utils.ProgrammingError: type "hstore" does not exist
LINE 1: ..., "options" varchar(255)[] NOT NULL, "conditions" hstore NOT...
^
Exited with code 1
On my local machine, I solved this by running psql my_db followed by create extension hstore;. Looking at the PostgreSQL image's source code (https://github.com/JannKleen/docker-postgres-gis-hstore), I believe it runs the following bash script:
#!/bin/sh
POSTGRES="gosu postgres"
echo "******CREATING EXTENSIONS******"
${POSTGRES} psql -d postgres -c "UPDATE pg_database SET datallowconn = TRUE WHERE datname = 'template0';"
${POSTGRES} psql -d postgres -c "UPDATE pg_database SET datallowconn = TRUE WHERE datname = 'template1';"
${POSTGRES} psql -d template1 -c "CREATE EXTENSION IF NOT EXISTS hstore;"
${POSTGRES} psql -d template1 -c "CREATE EXTENSION IF NOT EXISTS postgis;"
${POSTGRES} psql -d template1 -c "CREATE EXTENSION IF NOT EXISTS postgis_topology;"
echo ""
echo "******DATABASE EXTENSIONS******"
As I understand it, if the extension is created in the template1 database, it should apply to the my_db database too, right? (Cf. https://www.postgresql.org/docs/9.5/static/manage-ag-templatedbs.html)
How can I fix this error?
I ran into a similar issue and here's how I solved it. My app is a node app but the basic idea should be the same. I've trimmed out anything specific to my project apart from the Postgres setup.
version: 2
jobs:
  build:
    docker:
      - image: circleci/postgres:10.3-alpine
        environment:
          - POSTGRES_USER: root
          - POSTGRES_PASS: test
          - POSTGRES_DB: circle-test
    steps:
      - checkout
      - run:
          name: Postgres Client
          command: sudo apt install postgresql-client
      - run:
          name: Stash the PG Password
          command: echo "test" > .pgpass
      - run:
          name: Waiting for PostgreSQL to start
          command: |
            for i in `seq 1 10`;
            do
              nc -z localhost 5432 && echo Success && exit 0
              echo -n .
              sleep 2
            done
            echo Failed waiting for Postgres && exit 1
      - run:
          name: Enable hstore in Postgres
          command: psql -U root -d circle-test -h localhost -p 5432 -c "CREATE EXTENSION IF NOT EXISTS hstore;"
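If you prefer to keep the images from the question's config, the fix boils down to that last run step; a sketch using the question's credentials (assuming a postgresql-client install step like the one above):

- run:
    name: Enable hstore in Postgres
    command: psql postgresql://my_app:my_password@localhost/my_db -c "CREATE EXTENSION IF NOT EXISTS hstore;"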

How can I run a small command in a concourse pipeline?

I basically want to run the npm install and grunt build commands within the newly added repo.
inputs:
- name: repo
- path:
run:
  path: repo/
  args:
  - npm install
  - grunt build
path: refers to the path in the container to the binary / script to execute.
Check out this example in the Tasks documentation: https://concourse-ci.org/tasks.html#task-environment
run:
  path: sh
  args:
    - -exc
    - |
      whoami
      env
sh is the program to execute, and args are passed to the sh program.
A slight variation of Topher Bullock's answer:
run:
  path: sh
  args:
    - -exc
    - whoami && env
which will run env only if whoami doesn't return an error.
This will run env even if whoami fails:
run:
  path: sh
  args:
    - -exc
    - whoami || env
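Applied to the original question, a sketch of the run section that performs both commands inside the repo input (this assumes node, npm and grunt are available in the task's image):

run:
  path: sh
  args:
    - -exc
    - |
      cd repo
      npm install
      grunt build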
