I'm trying to deploy my container image to Compute Engine using cloudbuild.yaml, but I'm getting an error. Below is the content of my cloudbuild.yaml file:
# gis-account-manager -> Project ID on GCP
steps:
# Build the Docker image.
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/gis-account-manager/ams', '-f', 'Dockerfile', '.']
# Push it to GCR.
- name: gcr.io/cloud-builders/docker
  args: ['push', 'gcr.io/gis-account-manager/ams']
# Deploy to Prod env (THIS STEP IS FAILING)
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', 'instance-2-production', '--container-image', 'gcr.io/gis-account-manager/ams:latest']
# Set the Docker image in Cloud Build
images: ['gcr.io/gis-account-manager/ams']
# Build timeout
timeout: '3600s'
Error:
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #2: ERROR: (gcloud.compute.instances.update-container) Underspecified resource [instance-2-production]. Specify the [--zone] flag.
If I run the same command from the Cloud SDK Shell it works as expected.
PS: I've also tried providing the --zone flag.
You need to specify your zone in the gcloud compute command:
# Deploy to Prod env (THIS STEP IS FAILING)
- name: gcr.io/cloud-builders/gcloud
  args: ['config', 'set', 'compute/zone', 'us-central1-a']
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', 'instance-2-production', '--container-image', 'gcr.io/gis-account-manager/ams:latest']
You need to change us-central1-a above to your own zone from the list of available zones. And since you are updating an existing container, the zone is already determined by where the instance lives.
You can run the command gcloud compute zones list to list all available zones.
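Alternatively, instead of persisting configuration with config set, you can pass the zone directly on the failing step. A minimal sketch reusing the instance and image names from the question (the zone value is an assumption):

- name: gcr.io/cloud-builders/gcloud
  # --zone is passed inline, so no prior 'gcloud config set' step is needed
  args: ['compute', 'instances', 'update-container', 'instance-2-production', '--container-image', 'gcr.io/gis-account-manager/ams:latest', '--zone', 'us-central1-a']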
Cloud Build does not have enough permissions to execute the operation. That is why you receive an error when running the command from Cloud Build but not when executing the same operation in the gcloud command-line tool, which runs under your own credentials.
I granted the Cloud Build service account and the Cloud Build service agent the Compute Admin role:
[REDACTED]@cloudbuild.gserviceaccount.com
service-[REDACTED]@gcp-sa-cloudbuild.iam.gserviceaccount.com
After granting those roles, my working cloudbuild.yaml looks like this, which should match what you have now:
steps:
- name: gcr.io/cloud-builders/gcloud
  args: ['config', 'set', 'compute/zone', '[YOUR_ZONE]']
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', '[YOUR_INSTANCE_NAME]', '--container-image', 'gcr.io/gis-account-manager/ams:latest']
where [YOUR_ZONE] is your configured zone and [YOUR_INSTANCE_NAME] is the name of your instance.
I would recommend reading the documentation on Cloud Build service account permissions for more information.
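For reference, a sketch of how such a grant can be made with gcloud; the project ID is taken from the question, and PROJECT_NUMBER is a placeholder for your numeric project number:

# Grant the Compute Admin role to the Cloud Build service account
gcloud projects add-iam-policy-binding gis-account-manager \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/compute.admin"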
I've solved this issue by authenticating via a service account (first you need to generate keys for the Compute Engine service account).
Updated cloudbuild.yaml file:
# Deploy to GOOGLE COMPUTE ENGINE Prod env
- name: gcr.io/cloud-builders/gcloud
  args: ['auth', 'activate-service-account', '123456789-compute@developer.gserviceaccount.com', '--key-file=PATH_TO_FILE', '--project=${_PROJECT_ID}']
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', '${_VM_INSTANCE}', '--container-image=gcr.io/${_PROJECT_ID}/ams:latest', '--zone=us-central1-a']
Google Cloud docs use the first code block but I'm wondering why they don't use the second one. As far as I can tell they achieve the same result. Is there any practical difference?
# config 1
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/project-id/project-name', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/project-id/project-name']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
images: ['gcr.io/project-id/project-name']
# config 2
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['builds', 'submit', '--region', 'us-central1', '--tag', 'gcr.io/project-id/project-name', '.']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
I run this with gcloud builds submit --config cloudbuild.yaml
In the second config, you call a Cloud Build from inside a Cloud Build, which means that during the docker build/push you are paying for two builds at once.
The timeline in fact looks like this:
Cloud Build 1
  Cloud Build 2
    Docker Build
    Docker Push
  Deploy on Cloud Run
In addition, the number of concurrent builds is limited, and with config 2 you use twice as much of that quota.
And the result is the same (it should be slightly faster with config 1 because you don't have a second Cloud Build to spin up).
I am building a concourse pipeline for a spring-boot service. The pipeline has two jobs (test, package-build-and-push).
The first job (test) has a single task (mvn-test-task) that is responsible for running the Maven tests.
The second job has two tasks (mvn-package-task, build-image-task) and only runs after the first job (test) has finished. The first task (mvn-package-task) does a Maven package. Once it is finished, it copies the complete project folder, including the generated target folder, to the directory concourse-demo-repo-out. This folder then becomes the input for the next task (build-image-task), which uses it to generate a Docker image. The output is a Docker image placed in the directory image.
I now need to push this image to an Amazon ECR repository. For this I am using a resource of the type docker-image (https://github.com/concourse/docker-image-resource). The problem that I am facing now is:
How do I tag the image that was created and placed inside the image directory?
How do I tell this docker-image resource that I already have a built image, so that it uses it, tags it, and then pushes it to ECR?
My pipeline looks like this:
resources:
- name: concourse-demo-repo
  type: git
  icon: github
  source:
    branch: main
    uri: https://github.xyz.com/gzt/concourse-demo.git
    username: <my-username>
    password: <my-token>
- name: ecr-docker-reg
  type: docker-image
  icon: docker
  source:
    aws_access_key_id: <my-aws-access-key>
    aws_secret_access_key: <my-aws-secret-key>
    repository: <my-aws-ecr-repo>

jobs:
- name: test
  public: true
  plan:
  - get: concourse-demo-repo
    trigger: true
  - task: mvn-test-task
    file: concourse-demo-repo/ci/tasks/maven-test.yml
- name: package-build-and-push
  public: true
  serial: true
  plan:
  - get: concourse-demo-repo
    trigger: true
    passed: [test]
  - task: mvn-package-task
    file: concourse-demo-repo/ci/tasks/maven-package.yml
  - task: build-image-task
    privileged: true # oci-build-task must run in a privileged container
    file: concourse-demo-repo/ci/tasks/build-image.yml
  - put: ecr-docker-reg
    params:
      load: image
The Maven package task and script:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: maven
inputs:
- name: concourse-demo-repo
run:
  path: /bin/sh
  args: ["./concourse-demo-repo/ci/scripts/maven-package.sh"]
outputs:
- name: concourse-demo-repo-out
#!/bin/bash
set -e
cd concourse-demo-repo
mvn clean package
#cp -R ./target ../concourse-demo-repo-out
cp -a * ../concourse-demo-repo-out
The build image task:
---
platform: linux
image_resource:
  type: registry-image
  source:
    repository: concourse/oci-build-task
inputs:
- name: concourse-demo-repo-out
outputs:
- name: image
params:
  CONTEXT: concourse-demo-repo-out
run:
  path: build
So finally, when I run the pipeline, everything works fine and I am able to build a Docker image that is placed in the image directory, but I am not able to push that image as part of put: ecr-docker-reg.
The error that I get while running the pipeline is:
selected worker: ed5d4164f835
waiting for docker to come up...
open image/image: no such file or directory
Consider using the registry-image resource instead. Use this example for the image resource definition, where you specify the desired tag.
Your build-image-task is analogous to the build task from the example.
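Adapted to the pipeline in the question, the resource definition might look like the following sketch (the credential placeholders are copied from the question; the latest tag is an assumption):

- name: ecr-docker-reg
  type: registry-image
  icon: docker
  source:
    aws_access_key_id: <my-aws-access-key>
    aws_secret_access_key: <my-aws-secret-key>
    repository: <my-aws-ecr-repo>
    tag: latest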
Finally, the put will look something like:
- put: ecr-docker-reg
  params:
    image: image/image   <--- but make sure this file exists, read on...
Given that Concourse complained about open image/image: no such file or directory, verify that the filename is what you expect it to be. Hijack the failed task and look inside the image output directory:
$ fly -t my-concourse i -j mypipeline/package-build-and-push
(select the build-image-task from the list)
# ls -al image/
(actual filename here)
When I use a docker pipeline, it builds successfully.
But when I use an exec pipeline, it always gets stuck in pending.
And I don't know what's going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
- name: backend image build
  commands:
  - echo start build images...
  # - export MAJOR_VERSION=1.0.rtm.
  # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
  # - export WORKSPACE=`pwd`
  # - bash ./jenkins_build.sh
  when:
    branch:
    - master
The docker pipeline is fine:
kind: pipeline
type: docker
name: deployment

steps:
- name: push image to repo
  image: plugins/docker
  settings:
    dockerfile: src/ZR.DataHunter.Api/Dockerfile
    tags: latest
    insecure: true
    registry: "xxx"
    repo: "xxx"
    username:
      from_secret: username
    password:
      from_secret: userpassword
First of all, it's important to note that exec pipelines can only be used when Drone is self-hosted, as the official docs state:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, make sure that:
The exec runner is installed
It is configured properly in its config file, so it can connect to the Drone server (see the sketch after this list)
The drone-runner-exec service is running
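A minimal sketch of such a config file, assuming a Linux install at /etc/drone-runner-exec/config (the hostname, secret, and paths are placeholders):

# RPC connection to your Drone server
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.example.com
DRONE_RPC_SECRET=your-shared-rpc-secret
# Optional: log file and web dashboard
DRONE_LOG_FILE=/var/log/drone-runner-exec/log.txt
DRONE_UI_USERNAME=root
DRONE_UI_PASSWORD=root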
After the service is started, look at its log file; you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (Dashboard) of the running service if you enable it.
So if you see it can poll your server, then your exec pipeline should run as expected.
As the title mentions, I would like to know whether, from a Cloud Build step, I can start and use the Pub/Sub emulator.
options:
  env:
  - GO111MODULE=on
  - GOPROXY=https://proxy.golang.org
  - PUBSUB_EMULATOR_HOST=localhost:8085
  volumes:
  - name: "go-modules"
    path: "/go"

steps:
- name: "golang:1.14"
  args: ["go", "build", "."]
# Starts the cloud pubsub emulator
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: [
    '-c',
    'gcloud beta emulators pubsub start --host-port 0.0.0.0:8085 &'
  ]
- name: "golang:1.14"
  args: ["go", "test", "./..."]
I need it for a test. It works locally, and instead of using a dedicated Pub/Sub topic from Cloud Build, I want to use an emulator.
As I found a workaround and an interesting Git repository, I wanted to share the solution with you.
As required, you need a cloud-build.yaml, and you add a step where the emulator gets launched:
options:
  env:
  - GO111MODULE=on
  - GOPROXY=https://proxy.golang.org
  - PUBSUB_EMULATOR_HOST=localhost:8085
  volumes:
  - name: "go-modules"
    path: "/go"

steps:
- name: "golang:1.14"
  args: ["go", "build", "."]
- name: 'docker/compose'
  args: [
    '-f',
    'docker-compose.cloud-build.yml',
    'up',
    '--build',
    '-d'
  ]
  id: 'pubsub-emulator-docker-compose'
- name: "golang:1.14"
  args: ["go", "test", "./..."]
As you can see, I run a docker-compose command which actually starts the emulator. Here is docker-compose.cloud-build.yml:
version: "3.7"
services:
  pubsub:
    # Required for cloudbuild network access (when external access is required)
    container_name: pubsub
    image: google/cloud-sdk
    ports:
    - '8085:8085'
    command: ["gcloud", "beta", "emulators", "pubsub", "start", "--host-port", "0.0.0.0:8085"]
    network_mode: cloudbuild
networks:
  default:
    external:
      name: cloudbuild
It is important to set the container name as well as the network; otherwise you won't be able to access the Pub/Sub emulator from another Cloud Build step.
It is possible, since every step on Cloud Build is executed in a Docker container, but the image gcr.io/cloud-builders/gcloud only has a minimal installation of the gcloud components. Before starting the emulator, you need to install the Pub/Sub emulator via the gcloud command:
gcloud components install pubsub-emulator
It is also necessary to install OpenJDK 7, since most of the gcloud emulators need Java to operate.
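Putting that together, a hedged sketch of what such a step could look like, assuming the builder image allows gcloud components install and has apt-get available for installing a JRE:

# Install a JRE plus the emulator components, then start the emulator in the background
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    apt-get update && apt-get install -y default-jre
    gcloud components install pubsub-emulator beta --quiet
    gcloud beta emulators pubsub start --host-port=0.0.0.0:8085 &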
I'm trying to combine two examples from the Google Cloud Build documentation:
This example, where a container is built using a yaml file
And this example, where a persistent volume is used
My scenario is simple: the first step in my build produces some assets I'd like to bundle into a Docker image built in a subsequent step. However, since the COPY command is relative to the build context of the image, I'm unable to determine how to reference the assets from the first step inside the Dockerfile used for the second step.
There are potentially multiple ways to solve this problem, but the easiest way I've found is to use the docker cloud builder and run an sh script (since Bash is not included in the docker image).
In this example, we build on the examples from the question, producing an asset file in the first build step, then using it inside the Dockerfile of the second step.
cloudbuild.yaml
steps:
- name: 'ubuntu'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "Hello, world!" > /persistent_volume/file
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'sh'
  volumes:
  - name: 'vol1'
    path: '/persistent_volume'
  args: ['docker-build.sh']
  env:
  - 'PROJECT=$PROJECT_ID'

images:
- 'gcr.io/$PROJECT_ID/quickstart-image'
docker-build.sh
#!/bin/sh
cp -R /persistent_volume .
docker build -t gcr.io/$PROJECT/quickstart-image .
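For completeness, a minimal Dockerfile sketch that consumes the copied assets (the Dockerfile is not shown in the original; the base image and destination path are assumptions):

FROM alpine
# docker-build.sh copies /persistent_volume into the build context,
# so its contents are available under ./persistent_volume
COPY persistent_volume/file /app/file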