Any practical difference between using `docker build` & `docker push` together vs. `gcloud builds submit` in Cloud Build config files? - google-cloud-build

Google Cloud docs use the first code block but I'm wondering why they don't use the second one. As far as I can tell they achieve the same result. Is there any practical difference?
# config 1
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/project-id/project-name', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/project-id/project-name']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
images: ['gcr.io/project-id/project-name']
# config 2
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['builds', 'submit', '--region', 'us-central1', '--tag', 'gcr.io/project-id/project-name', '.']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
I run this with gcloud builds submit --config cloudbuild.yaml

In the second config you call a Cloud Build from inside a Cloud Build, which means you pay for two builds: the outer build is billed the whole time it waits for the inner one to do the docker build/push.
The timeline actually looks like this:
Cloud Build 1
  Cloud Build 2
    Docker build
    Docker push
  Deploy on Cloud Run
In addition, the number of concurrent builds is limited, and with config 2 you consume twice as much of that quota.
The result is the same either way (config 1 should even be slightly faster, because there is no second Cloud Build to spin up).

Related

Drone CI/CD only stuck in exec pipeline

When I use the docker pipeline, the build succeeds.
But when I use the exec pipeline, it always gets stuck in pending, and I don't know what's going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
- name: backend image build
  commands:
  - echo start build images...
  # - export MAJOR_VERSION=1.0.rtm.
  # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
  # - export WORKSPACE=`pwd`
  # - bash ./jenkins_build.sh
  when:
    branch:
    - master
The Docker pipeline works fine:
kind: pipeline
type: docker
name: deployment

steps:
- name: push image to repo
  image: plugins/docker
  settings:
    dockerfile: src/ZR.DataHunter.Api/Dockerfile
    tags: latest
    insecure: true
    registry: "xxx"
    repo: "xxx"
    username:
      from_secret: username
    password:
      from_secret: userpassword
First of all, it's important to note that the exec pipeline can only be used when Drone is self-hosted. The official docs state:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, make sure that:
- the exec runner is installed,
- it is configured properly in its config file so it can connect to the Drone server (see the sketch below),
- and the drone-runner-exec service is running.
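For reference, here is a minimal sketch of the exec runner's config file (on Linux it typically lives at /etc/drone-runner-exec/config); the host, secret, and log path below are placeholders you must replace with your own values:

DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.example.com
DRONE_RPC_SECRET=your-shared-runner-secret
DRONE_LOG_FILE=/var/log/drone-runner-exec/log.txt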
After the service is started, check its log file; you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (Dashboard) of the running service if you enable it.
So if you see it can poll your server, then your exec pipeline should run as expected.

Is it possible to start PubSub Emulator from Cloud Build step

As the title mentions, I would like to know if, from a Cloud Build step, I can start and use the pubsub emulator?
options:
  env:
  - GO111MODULE=on
  - GOPROXY=https://proxy.golang.org
  - PUBSUB_EMULATOR_HOST=localhost:8085
  volumes:
  - name: "go-modules"
    path: "/go"

steps:
- name: "golang:1.14"
  args: ["go", "build", "."]

# Starts the cloud pubsub emulator
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: [
    '-c',
    'gcloud beta emulators pubsub start --host-port 0.0.0.0:8085 &'
  ]

- name: "golang:1.14"
  args: ["go", "test", "./..."]
I need it for a test; it works locally, and instead of using a dedicated Pub/Sub topic from Cloud Build I want to use the emulator.
Thanks
As I found a workaround and an interesting git repository, I wanted to share the solution with you.
As required, you need a cloud-build.yaml, and you add a step where the emulator gets launched:
options:
  env:
  - GO111MODULE=on
  - GOPROXY=https://proxy.golang.org
  - PUBSUB_EMULATOR_HOST=localhost:8085
  volumes:
  - name: "go-modules"
    path: "/go"

steps:
- name: "golang:1.14"
  args: ["go", "build", "."]

- name: 'docker/compose'
  args: [
    '-f',
    'docker-compose.cloud-build.yml',
    'up',
    '--build',
    '-d'
  ]
  id: 'pubsub-emulator-docker-compose'

- name: "golang:1.14"
  args: ["go", "test", "./..."]
As you can see, I run a docker-compose command which actually starts the emulator. Here is docker-compose.cloud-build.yml:
version: "3.7"

services:
  pubsub:
    # Required for cloudbuild network access (when external access is required)
    container_name: pubsub
    image: google/cloud-sdk
    ports:
    - '8085:8085'
    command: ["gcloud", "beta", "emulators", "pubsub", "start", "--host-port", "0.0.0.0:8085"]
    network_mode: cloudbuild

networks:
  default:
    external:
      name: cloudbuild
It is important to set the container name as well as the network, otherwise you won't be able to access the pubsub emulator from another cloud build step.
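Building on that point, if a later step cannot reach the emulator via localhost, one variation (an assumption on my part, not something from the original setup) is to address it by its container name on the cloudbuild network, for example by overriding the environment variable on the test step:

- name: "golang:1.14"
  env:
  - "PUBSUB_EMULATOR_HOST=pubsub:8085"  # "pubsub" is the container_name defined above
  args: ["go", "test", "./..."]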
It is possible, since every step on Cloud Build is executed in a Docker container. However, the image gcr.io/cloud-builders/gcloud only has a minimal installation of the gcloud components, so before starting the emulator you need to install the Pub/Sub emulator via the gcloud command:
gcloud components install pubsub-emulator
It is also necessary to install OpenJDK 7, since most of the gcloud emulators need Java to operate.
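Putting that advice together, a build step along these lines could do the setup. This is only a sketch (not verified end-to-end) and assumes the builder image allows apt-get and gcloud component installation:

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # install a Java runtime for the emulator, then the emulator component itself
    apt-get update && apt-get install -y default-jre
    gcloud components install beta pubsub-emulator --quiet
    gcloud beta emulators pubsub start --host-port=0.0.0.0:8085 &
    sleep 5  # give the emulator a moment to start before anything uses it

Keep in mind that, as far as I understand it, a process backgrounded inside a step does not survive into later steps, which is why the docker-compose container approach above is used when other steps need the emulator.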

Google cloud build - provide assets to docker build step

I'm trying to combine two examples from the Google Cloud Build documentation:
This example, where a container is built using a YAML file.
And this example, where a persistent volume is used.
My scenario is simple, the first step in my build produces some assets I'd like to bundle into a Docker image built in a subsequent step. However, since the COPY command is relative to the build directory of the image, I'm unable to determine how to reference the assets from the first step inside the Dockerfile used for the second step.
There are potentially multiple ways to solve this problem, but the easiest way I've found is to use the docker cloud builder and run an sh script (since Bash is not included in the docker image).
In this example, we build on the examples from the question, producing an asset file in the first build step, then using it inside the Dockerfile of the second step.
cloudbuild.yaml
steps:
- name: 'ubuntu'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "Hello, world!" > /persistent_volume/file
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'sh'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  args: ['docker-build.sh']
  env:
  - 'PROJECT=$PROJECT_ID'
images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
docker-build.sh
#!/bin/sh
cp -R /persistent_volume .
docker build -t gcr.io/$PROJECT/quickstart-image .
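The Dockerfile for the second step can then copy the assets out of the directory that docker-build.sh staged into the build context. A minimal sketch (the base image and destination path here are illustrative, not from the original answer):

FROM alpine
# assets copied into the build context by docker-build.sh
COPY persistent_volume/ /assets/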

Get Git History in Google Cloud Build Step

I'm using Google Cloud Build to run CI for my Nx workspace. Here's the cloudbuild.yaml file:
steps:
- name: 'gcr.io/cloud-builders/docker'
id: Test_Affected_Projects
entrypoint: 'sh'
args: [
'-c',
'docker build --build-arg NPM_TOKEN=$$NPM_TOKEN --file ./test/Dockerfile.test-runner -t mha-test-runner .']
secretEnv: ['NPM_TOKEN']
# Remove the docker image
secrets:
- kmsKeyName: /path/to/key
secretEnv:
NPM_TOKEN: some_key_value
(There are currently two steps, but I removed the second for brevity. The second step just removes the created docker image.)
Now the command inside the Docker image here runs all the tests for the Nx workspace. The thing is, Nx has a great command where only the affected libraries will be tested. But for the command to run, the git history of the project needs to be available.
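For context, the Nx command in question looks something like this (the exact syntax varies by Nx version, so treat the flags as illustrative):

# run tests only for projects affected by the diff between base and head;
# computing that diff requires the repository's git history
npx nx affected:test --base=origin/master --head=HEAD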
I've tried to get the git history in the cloud build context, but I haven't been able to get it working. This is the step I added to try and get everything working:
steps:
- name: 'gcr.io/cloud-builders/git'
  args: ['fetch', '--unshallow']
- name: 'gcr.io/cloud-builders/docker'
  id: Test_Affected_Projects
  entrypoint: 'sh'
  args: [
    '-c',
    'docker build --build-arg NPM_TOKEN=$$NPM_TOKEN --file ./test/Dockerfile.test-runner -t mha-test-runner .'
  ]
  secretEnv: ['NPM_TOKEN']
# Remove the docker image
secrets:
- kmsKeyName: /path/to/key
  secretEnv:
    NPM_TOKEN: some_key_value
That new first command, which should get the git history, fails. The error message says that it's not a git repo, so the command fails.
My question is: how can I get the git history in the cloud build context so that I can use it with different commands in the build/testing process?
I think the reason this isn't working is that you need to store the GitHub credentials in the Cloud Build environment.
I believe this guide can help: it will allow you to do so, and then you will be able to call git fetch --unshallow as you already have.
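Once credentials are available, the fetch step might look roughly like the sketch below. This is only an assumption about how it could be wired up: GITHUB_TOKEN is a hypothetical secret configured the same way as NPM_TOKEN, and OWNER/REPO is a placeholder for the real repository:

- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  secretEnv: ['GITHUB_TOKEN']
  args:
  - '-c'
  - |
    # hypothetical: authenticate the origin remote with a token, then deepen the shallow clone
    git remote set-url origin https://$$GITHUB_TOKEN@github.com/OWNER/REPO.git
    git fetch --unshallow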

Unable to deploy container images on Google Compute Engine

I'm trying to deploy my container image on Compute Engine using cloudbuild.yaml but getting error. Below is my cloudbuild.yaml file content:
# gis-account-manager -> Project ID on GCP
steps:
# Build the Docker image.
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/gis-account-manager/ams', '-f', 'Dockerfile', '.']
# Push it to GCR.
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/gis-account-manager/ams']
# Deploy to Prod env (THIS STEP IS FAILING)
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', 'instance-2-production', '--container-image', 'gcr.io/gis-account-manager/ams:latest']

# Set the Docker image in Cloud Build
images: ['gcr.io/gis-account-manager/ams']

# Build timeout
timeout: '3600s'
Error:
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #2: ERROR: (gcloud.compute.instances.update-container) Underspecified resource [instance-2-production]. Specify the [--zone] flag.
If I run the same command from the Cloud SDK Shell, it works as expected.
PS: I've also tried providing the --zone flag.
You need to specify your zone in the gcloud compute command:
# Deploy to Prod env (THIS STEP IS FAILING)
- name: gcr.io/cloud-builders/gcloud
  args: ['config', 'set', 'compute/zone', 'us-central1-a']
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', 'instance-2-production', '--container-image', 'gcr.io/gis-account-manager/ams:latest']
Change the zone to the one your instance actually runs in (pick it from this list); since you are updating an existing container, that zone is already determined by the instance.
You can run gcloud compute zones list to list all available zones.
Cloud Build does not have enough permissions to execute the operation, hence you receive an error when operating through Cloud Build but not when executing the same operation in the gcloud command-line tool, which runs with different credentials.
I granted the Cloud Build Service Account and the Cloud Build Service Agent the Compute Admin role:
[REDACTED]@cloudbuild.gserviceaccount.com
service-[REDACTED]@gcp-sa-cloudbuild.iam.gserviceaccount.com
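If it helps, granting that role can be done with a command along these lines (the project ID and service account address are placeholders to replace with your own):

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/compute.admin"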
My cloudbuild.yaml looks identical to what you should have now:
steps:
- name: gcr.io/cloud-builders/gcloud
  args: ['config', 'set', 'compute/zone', '[YOUR_ZONE]']
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', '[YOUR_INSTANCE_NAME]', '--container-image', 'gcr.io/gis-account-manager/ams:latest']
where [YOUR_ZONE] is your configured zone and [YOUR_INSTANCE_NAME] is the name of your instance.
I would recommend that you read on this Documentation for more information about Cloud Build service accounts permissions.
I solved this issue by authenticating via a service account (first you need to generate a key for the Compute Engine service account).
Updated cloudbuild.yaml file:
# Deploy to GOOGLE COMPUTE ENGINE Prod env
- name: gcr.io/cloud-builders/gcloud
  args: ['auth', 'activate-service-account', '123456789-compute@developer.gserviceaccount.com', '--key-file=PATH_TO_FILE', '--project=${_PROJECT_ID}']
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', '${_VM_INSTANCE}', '--container-image=gcr.io/${_PROJECT_ID}/ams:latest', '--zone=us-central1-a']
