As the title says, I would like to know whether I can start and use the Pub/Sub emulator from a Cloud Build step.
options:
  env:
    - GO111MODULE=on
    - GOPROXY=https://proxy.golang.org
    - PUBSUB_EMULATOR_HOST=localhost:8085
  volumes:
    - name: "go-modules"
      path: "/go"
steps:
  - name: "golang:1.14"
    args: ["go", "build", "."]
  # Starts the cloud pubsub emulator
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args: [
      '-c',
      'gcloud beta emulators pubsub start --host-port 0.0.0.0:8085 &'
    ]
  - name: "golang:1.14"
    args: ["go", "test", "./..."]
I need it for a test. It works locally, and instead of relying on a dedicated Pub/Sub instance from Cloud Build, I want to use the emulator.
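For reference, the test reaches the emulator through the PUBSUB_EMULATOR_HOST variable, which the Go client library picks up automatically; a trimmed-down sketch of the kind of test I mean (project and topic IDs are illustrative):

package main

import (
	"context"
	"testing"

	"cloud.google.com/go/pubsub"
)

// TestPublish talks to the emulator addressed by PUBSUB_EMULATOR_HOST;
// when that variable is set, the client needs no real credentials.
func TestPublish(t *testing.T) {
	ctx := context.Background()

	// Any project ID works against the emulator.
	client, err := pubsub.NewClient(ctx, "test-project")
	if err != nil {
		t.Fatalf("pubsub.NewClient: %v", err)
	}
	defer client.Close()

	topic, err := client.CreateTopic(ctx, "test-topic")
	if err != nil {
		t.Fatalf("CreateTopic: %v", err)
	}

	// Publish one message and wait for the server-assigned ID.
	res := topic.Publish(ctx, &pubsub.Message{Data: []byte("hello")})
	if _, err := res.Get(ctx); err != nil {
		t.Fatalf("Publish: %v", err)
	}
}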
Thanks
I found a workaround and an interesting Git repository, so I wanted to share the solution.
As before, you need a cloud-build.yaml, and you add a step that launches the emulator:
options:
  env:
    - GO111MODULE=on
    - GOPROXY=https://proxy.golang.org
    - PUBSUB_EMULATOR_HOST=localhost:8085
  volumes:
    - name: "go-modules"
      path: "/go"
steps:
  - name: "golang:1.14"
    args: ["go", "build", "."]
  - name: 'docker/compose'
    args: [
      '-f',
      'docker-compose.cloud-build.yml',
      'up',
      '--build',
      '-d'
    ]
    id: 'pubsub-emulator-docker-compose'
  - name: "golang:1.14"
    args: ["go", "test", "./..."]
As you can see, I run a docker-compose command which actually starts the emulator. Here is docker-compose.cloud-build.yml:
version: "3.7"
services:
pubsub:
# Required for cloudbuild network access (when external access is required)
container_name: pubsub
image: google/cloud-sdk
ports:
- '8085:8085'
command: ["gcloud", "beta", "emulators", "pubsub", "start", "--host-port", "0.0.0.0:8085"]
network_mode: cloudbuild
networks:
default:
external:
name: cloudbuild
It is important to set the container name as well as the network, otherwise you won't be able to access the pubsub emulator from another cloud build step.
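As a quick sanity check, a later build step can reach the emulator through that container name; a hypothetical verification step, assuming the emulator answers plain HTTP on its root path (curlimages/curl is an ordinary Docker Hub image whose entrypoint is curl):

- name: 'curlimages/curl'
  args: ['-s', 'http://pubsub:8085/']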
It is possible, since every step on Cloud Build is executed in a Docker container. However, the image gcr.io/cloud-builders/gcloud only ships with a minimal installation of the gcloud components, so before starting the emulator you need to install the Pub/Sub emulator component:
gcloud components install pubsub-emulator
It is also necessary to install OpenJDK 7, since most of the gcloud emulators need Java to operate.
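Putting that together, a single step could install the dependencies and launch the emulator; a rough sketch, assuming the builder image lets you install packages with apt-get and components with gcloud, and keeping in mind that a process backgrounded with & does not outlive its build step:

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      apt-get update && apt-get install -y default-jre
      gcloud components install beta pubsub-emulator --quiet
      gcloud beta emulators pubsub start --host-port 0.0.0.0:8085 &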
For some reason, the build step for my NestJS project in my GitHub Action has been failing for a few days now. I use Turborepo with pnpm in a monorepo and run the build with turbo run build. This works flawlessly on my local machine, but in GitHub it fails with sh: 1: nest: Permission denied. ELIFECYCLE Command failed with exit code 126. I'm not sure how this is possible, since I couldn't find any meaningful change I made to the code in the meantime; it just stopped working unexpectedly. I suspect it is an issue with GH Actions, since the build also works in my local Docker build.
Has anyone else encountered this issue with NestJS in GH Actions?
This is my action yml:
name: Test, lint and build
on:
  push:
jobs:
  test-lint-build:
    runs-on: ubuntu-latest
    services:
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_HOST: localhost
          POSTGRES_USER: test
          POSTGRES_PASSWORD: docker
          POSTGRES_DB: financing-database
        ports:
          # Maps tcp port 5432 on service container to the host
          - 2345:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install pnpm
        uses: pnpm/action-setup@v2.2.2
        with:
          version: latest
      - name: Install
        run: pnpm i
      - name: Lint
        run: pnpm run lint
      - name: Test
        run: pnpm run test
      - name: Build
        run: pnpm run build
        env:
          VITE_SERVER_ENDPOINT: http://localhost:8000/api
      - name: Test financing-server (e2e)
        run: pnpm --filter @project/financing-server run test:e2e
I found out what was causing the problem. I was using node-linker = hoisted to mitigate some issues that pnpm's way of linking modules was causing with my jest tests. Removing this from my project suddenly made the action work again.
I still don't know why this only broke the build recently, since I've had this option enabled for quite some time.
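For anyone hitting the same error: this is the pnpm setting in question. Removing this single line from the project's .npmrc is what fixed the action:

# .npmrc
node-linker=hoisted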
Google Cloud docs use the first code block but I'm wondering why they don't use the second one. As far as I can tell they achieve the same result. Is there any practical difference?
# config 1
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/project-id/project-name', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/project-id/project-name']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
images: ['gcr.io/project-id/project-name']
# config 2
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['builds', 'submit', '--region', 'us-central1', '--tag', 'gcr.io/project-id/project-name', '.']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
I run this with gcloud builds submit --config cloudbuild.yaml
In the second config, you start a Cloud Build from inside a Cloud Build, which means you pay for the Docker build/push time twice: once in the nested build that performs it, and once in the outer build that sits waiting for it.
The timeline actually looks like this:
Cloud Build 1
  Cloud Build 2
    Docker Build
    Docker Push
  Deploy on Cloud Run
In addition, the number of concurrent builds is limited, and with config 2 you use twice as much of that quota.
And the result is the same (it should be slightly faster with config 1, because you don't have a second Cloud Build to spin up).
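If you want to see this for yourself, list the in-flight builds while a config 2 build is running; you should see two entries instead of one:

gcloud builds list --ongoing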
name: Rspec
on: [push]
jobs:
  build:
    runs-on: [self-hosted, linux]
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
      redis:
        image: redis
        options: --entrypoint redis-server
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://elasticsearch:9200/
I am running the tests on a self-hosted runner, and with docker ps on the host I can see the service containers (redis and elasticsearch) come up for the test.
If I exec into the redis container, install curl and run curl -X GET http://elasticsearch:9200/, I get an OK response well within the 60 seconds we wait for the services to come up.
But in the "running tests" step I get the error message "Could not resolve host: elasticsearch".
So inside the redis service container the hostname elasticsearch resolves, but in the "running tests" step it does not. What can I do?
You have to map the ports of your service containers and use localhost:<host-port> as the address in steps that run directly on the GitHub Actions runner.
If you configure the job to run directly on the runner machine and your step doesn't use a container action, you must map any required Docker service container ports to the Docker host (the runner machine). You can access the service container using localhost and the mapped port.
https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions#jobsjob_idservices
name: Rspec
on: [push]
jobs:
  build:
    runs-on: [self-hosted, linux]
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
        ports:
          # <port on host>:<port on container>
          - 9200:9200
      redis:
        image: redis
        options: --entrypoint redis-server
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://localhost:9200/
Alternative:
Run your job in a container as well. The job then accesses the service containers by hostname.
name: Rspec
on: [push]
jobs:
  build:
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
      redis:
        image: redis
        options: --entrypoint redis-server
    # Containers must run in Linux based operating systems
    runs-on: [self-hosted, linux]
    # Docker Hub image that this job executes in, pick any image that works for you
    container: node:10.18-jessie
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://elasticsearch:9200/
I'm trying to deploy my container image on Compute Engine using cloudbuild.yaml, but I'm getting an error. Below is the content of my cloudbuild.yaml file:
# gis-account-manager -> Project ID on GCP
steps:
  # Build the Docker image.
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/gis-account-manager/ams', '-f', 'Dockerfile', '.']
  # Push it to GCR.
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/gis-account-manager/ams']
  # Deploy to Prod env (THIS STEP IS FAILING)
  - name: gcr.io/cloud-builders/gcloud
    args: ['compute', 'instances', 'update-container', 'instance-2-production', '--container-image', 'gcr.io/gis-account-manager/ams:latest']
# Set the Docker image in Cloud Build
images: ['gcr.io/gis-account-manager/ams']
# Build timeout
timeout: '3600s'
Error:
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #2: ERROR: (gcloud.compute.instances.update-container) Underspecified resource [instance-2-production]. Specify the [--zone] flag.
If I run the same command from the Cloud SDK Shell, it works as expected.
PS: I've also tried providing the --zone flag.
You need to specify your zone in the gcloud compute command:
# Deploy to Prod env (THIS STEP IS FAILING)
- name: gcr.io/cloud-builders/gcloud
  args: ['config', 'set', 'compute/zone', 'us-central1-a']
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', 'instance-2-production', '--container-image', 'gcr.io/gis-account-manager/ams:latest']
You need to change us-central1-a to a zone from the list of available zones; since you are updating an existing container, the zone is already determined by where the instance lives.
You can run gcloud compute zones list to list all available zones.
Cloud Build does not have enough permissions to execute the operation. That is why you receive the error when operating through Cloud Build but not when executing the same operation with the gcloud command-line tool, which authenticates differently.
I granted the Cloud Build service account and the Cloud Build service agent the Compute Admin role:
[REDACTED]@cloudbuild.gserviceaccount.com
service-[REDACTED]@gcp-sa-cloudbuild.iam.gserviceaccount.com
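For reference, a binding like that can be added from the command line along these lines, where PROJECT_ID and PROJECT_NUMBER stand in for your own values:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
  --role=roles/compute.admin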
My cloudbuild.yaml looks identical to what you should have now:
steps:
  - name: gcr.io/cloud-builders/gcloud
    args: ['config', 'set', 'compute/zone', '[YOUR_ZONE]']
  - name: gcr.io/cloud-builders/gcloud
    args: ['compute', 'instances', 'update-container', '[YOUR_INSTANCE_NAME]', '--container-image', 'gcr.io/gis-account-manager/ams:latest']
where [YOUR_ZONE] is your configured zone and [YOUR_INSTANCE_NAME] is the name of your instance.
I would recommend reading this documentation for more information about Cloud Build service account permissions.
I've solved this issue by authenticating with a service account (first you need to generate keys for the Compute Engine service account).
Updated cloudbuild.yaml file:
# Deploy to GOOGLE COMPUTE ENGINE Prod env
- name: gcr.io/cloud-builders/gcloud
  args: ['auth', 'activate-service-account', '123456789-compute@developer.gserviceaccount.com', '--key-file=PATH_TO_FILE', '--project=${_PROJECT_ID}']
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'update-container', '${_VM_INSTANCE}', '--container-image=gcr.io/${_PROJECT_ID}/ams:latest', '--zone=us-central1-a']
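For completeness, the key file referenced by --key-file can be generated beforehand; a sketch, where the account email matches the one above and key.json is an illustrative output path:

gcloud iam service-accounts keys create key.json \
  --iam-account=123456789-compute@developer.gserviceaccount.com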
I'm trying to move from GitLab CI to drone.io, but I can't get DIND to work as well as it does on GitLab. Below is what I did on GitLab.
variables:
  NODE_ENV: 'test'
  DOCKER_DRIVER: overlay
image: gitlab/dind
services:
  - docker:dind
cache:
  untracked: true
stages:
  - test
test:
  stage: test
  before_script:
    - docker info
    - docker-compose --version
    - docker-compose pull
    - docker-compose build
  after_script:
    - docker-compose down
  script:
    - docker-compose run --rm api yarn install
How can I create an equivalent .drone.yml file?
You can use the services section to start the docker daemon.
pipeline:
  ping:
    image: docker
    environment:
      - DOCKER_HOST=unix:///drone/docker.sock
    commands:
      - sleep 10 # give docker enough time to initialize
      - docker ps -a
services:
  docker:
    image: docker:dind
    privileged: true
    command: [ '-H', 'unix:///drone/docker.sock' ]
Note that we change the default location of the docker socket and write to the drone volume, which is shared among all containers in the pipeline:
command: [ '-H', 'unix:///drone/docker.sock' ]
Also note that we need to run the dind container in privileged mode. The privileged flag can only be used by trusted repositories, so you will need a Drone administrator to set the trusted flag to true for your repository in the Drone user interface.
privileged: true
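If you prefer the command line over the user interface, an administrator can also mark the repository as trusted with the Drone CLI; a sketch, assuming the CLI is installed and pointed at your Drone server, with org/repo standing in for your repository slug:

drone repo update --trusted=true org/repo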