Google Cloud Build - How to Cache Bazel?

I recently started using Cloud Build with Bazel.
So I have a basic cloudbuild.yaml:
steps:
- id: 'run unit tests'
  name: gcr.io/cloud-builders/bazel
  args: ['test', '//...']
which runs all tests of my Bazel project.
But every build takes around 4 minutes, even though I haven't touched any code that would affect my tests.
Running the tests locally for the first time takes about 1 minute, but running them a second time, with the help of Bazel's cache, takes only a few seconds.
So my goal is to use the Bazel cache with Google Cloud Build.
Update
As suggested by Thierry Falvo, I've looked into those recommendations, and thus tried to add the following to my cloudbuild.yaml:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://cents-ideas-build-cache/bazel-bin', 'bazel-bin']
- id: 'run unit tests'
  name: gcr.io/cloud-builders/bazel
  args: ['test', '//...']
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'bazel-bin', 'gs://cents-ideas-build-cache/bazel-bin']
Although I created the bucket and folder, I get this error:
CommandException: No URLs matched

I think that rather than caching discrete results (artifacts), you want to use GCS (Cloud Storage) as a Bazel remote cache:
- name: gcr.io/cloud-builders/bazel
  args: ['test', '--remote_cache=https://storage.googleapis.com/<bucketname>', '--google_default_credentials', '--test_output=errors', '//...']
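For completeness, here is a hedged sketch of the full cloudbuild.yaml with the remote cache wired in (the bucket name is a placeholder; the Cloud Build service account also needs read/write access to the bucket). With this approach Bazel uploads and fetches cached action results itself, so the gsutil cp steps are unnecessary; the earlier "No URLs matched" error most likely occurred because nothing existed yet at the source URL on the first run (and copying a directory with gsutil cp would need -r anyway):
steps:
- id: 'run unit tests'
  name: gcr.io/cloud-builders/bazel
  args:
  - test
  - '--remote_cache=https://storage.googleapis.com/my-bazel-cache-bucket'
  - '--google_default_credentials'
  - '--test_output=errors'
  - '//...'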

Related

Percy not running in CircleCI orbs (w/ Cypress)

I'm trying to get Percy.io to take snapshots of a simple test written in Cypress, building in CircleCI. However, the 'builds' are showing up as failed in the Percy dashboard despite the test/build passing in CircleCI. In the Cypress test runner it is showing 'Percy not running' where my snapshots are placed.
I've followed the tutorials on the Percy and Cypress sites. I can get Percy to work locally by running percy exec -- cypress run, but the CircleCI config doesn't run Cypress via the command cypress run; it runs it via the cypress orb.
It seems like the two orbs, Cypress and Percy, don't know that the other exists.
Here's my CircleCI config file:
version: 2.1
orbs:
  node: circleci/node@4.5.1
  cypress: cypress-io/cypress@1.28.0
  slack: circleci/slack@4.4.2
  percy: percy/agent@0.1.3
workflows:
  version: 2
  commit-workflow:
    jobs:
      - cypress/run:
          name: Smoke Tests
          record: true
          store_artifacts: true
          spec: cypress/integration/E2E/*
          post-steps:
            - store_test_results:
                path: test-results
            - slack/notify:
                channel: general
                event: fail
                template: basic_fail_1
                mentions: '@Jac'
            - slack/notify:
                channel: general
                event: pass
                template: basic_success_1
                mentions: '@Jac'
      - percy/finalize_all:
          requires:
            - Smoke Tests
The Run Cypress Tests step doesn't make any mention of Percy, so I'm assuming it simply isn't running - that despite using the Percy orb, there's some sort of config I'm missing?
Apologies, I keep finding answers to my questions after posting to Stack Overflow! I obviously don't know the properties of cypress/run well enough. But essentially, there's a command-prefix property that can be added to amend the command used to run Cypress. In fact, Percy is the example used in the Cypress docs.
Config now looks like:
version: 2.1
orbs:
  node: circleci/node@4.5.1
  cypress: cypress-io/cypress@1.28.0
  slack: circleci/slack@4.4.2
  percy: percy/agent@0.1.3
workflows:
  version: 2
  commit-workflow:
    jobs:
      - cypress/run:
          name: Smoke Tests
          record: true
          store_artifacts: true
          spec: cypress/integration/E2E/*
          command-prefix: npx percy exec --
          post-steps:
            - store_test_results:
                path: test-results
            - slack/notify:
                channel: general
                event: fail
                template: basic_fail_1
                mentions: '@Jac'
            - slack/notify:
                channel: general
                event: pass
                template: basic_success_1
                mentions: '@Jac'
      - percy/finalize_all:
          requires:
            - Smoke Tests
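One more thing worth checking if builds still show as failed: Percy needs a PERCY_TOKEN environment variable at runtime, which in CircleCI is usually set as a project environment variable rather than in the YAML. A quick local sanity check (the token value is a placeholder):
export PERCY_TOKEN=<your-percy-project-token>
npx percy exec -- cypress run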

How can I create several executors for a job in Circle CI orb?

NOTE: The actual problem I am trying to solve is running Testcontainers in CircleCI.
To make it reusable, I decided to extend the existing orb in my organisation.
The question: how can I create several executors for a job? I was able to create the executor itself.
Executor ubuntu.yml:
description: >
  The executor to run testcontainers without extra setup in Circle CI builds.
parameters:
  # https://circleci.com/docs/2.0/configuration-reference/#resource_class
  resource-class:
    type: enum
    default: medium
    enum: [medium, large, xlarge, 2xlarge]
  tag:
    type: string
    default: ubuntu-2004:202010-01
resource_class: <<parameters.resource-class>>
machine:
  image: <<parameters.tag>>
One of the jobs itself:
parameters:
  executor:
    type: executor
    default: openjdk
  resource-class:
    type: enum
    default: medium
    enum: [small, medium, medium+, large, xlarge]
executor: << parameters.executor >>
resource_class: << parameters.resource-class >>
environment:
  # Customize the JVM maximum heap limit
  MAVEN_OPTS: -Xmx3200m
steps:
  # Instead of checking out code, just grab it the way it is
  - attach_workspace:
      at: .
  # Guessing this is still necessary (we only attach the project folder)
  - configure-maven-settings
  - cloudwheel/fetch-and-update-maven-cache
  - run:
      name: "Deploy to Nexus without running tests"
      command: mvn clean deploy -DskipTests
I couldn't find a good example of adding several executors, and I assume that I will need to add ubuntu and openjdk for every job. Am I right?
I continue looking into other orbs and documentation but cannot find a similar case to mine.
As stated in the CircleCI documentation, executors can be defined like this:
executors:
  my-executor:
    machine: true
  my-openjdk:
    docker:
      - image: openjdk:11
Side note: there can be many executors of any type, such as docker, machine (Linux), macos, or win.
See the Stack Overflow question on how to invoke executors from CircleCI orbs.
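To make that concrete, here is a minimal sketch (all names are illustrative, not taken from your orb) of one job that accepts an executor parameter and is invoked once per executor from the workflow, which avoids duplicating the job for ubuntu and openjdk:
version: 2.1
executors:
  ubuntu:
    machine:
      image: ubuntu-2004:202010-01
  openjdk:
    docker:
      - image: openjdk:11
jobs:
  build:
    parameters:
      e:
        type: executor
    executor: << parameters.e >>
    steps:
      - checkout
      - run: mvn -v
workflows:
  build-everywhere:
    jobs:
      - build:
          name: build-on-ubuntu
          e: ubuntu
      - build:
          name: build-on-openjdk
          e: openjdk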

Unable to deploy pre-built image in App Engine standard environment (GCP)

My Spring Boot application was working fine in Cloud Build and deployed without any issue until September.
Now my trigger fails at the gcloud app deploy step.
Step #4: ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: Deployment cannot use a pre-built image. Pre-built images are only allowed in the App Engine Flexible Environment.
app.yaml
runtime: java11
env: standard
service: service
handlers:
- url: /.*
  script: this field is required, but ignored
cloudbuild.yaml
steps:
# backend deployment
# Step 1:
- name: maven:3-jdk-14
  entrypoint: mvn
  dir: 'service'
  args: ["test"]
# Step 2:
- name: maven:3-jdk-14
  entrypoint: mvn
  dir: 'service'
  args: ["clean", "install", "-Dmaven.test.skip=true"]
# Step 3:
- name: docker
  dir: 'service'
  args: ["build", "-t", "gcr.io/service-base/base", "."]
# Step 4:
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/service-base/base"]
# Step 5:
- name: 'gcr.io/cloud-builders/gcloud'
  dir: 'service/src/main/appengine'
  args: ['app', 'deploy', "--image-url=gcr.io/service-base/base"]
  timeout: "30m0s"
# Step 6:
# dispatch.yaml deployment
- name: "gcr.io/cloud-builders/gcloud"
  dir: 'service/src/main/appengine'
  args: ["app", "deploy", "dispatch.yaml"]
  timeout: "30m0s"
timeout: "100m0s"
images: ["gcr.io/service-base/base"]
Thanks in advance. I'm confused about how my build was working fine before and what I am doing wrong now.
You can't deploy a custom container on App Engine standard. You have to provide your code and the runtime environment; a standard container is then built on Google's side with Buildpacks (for information, a new Cloud Build job is run for this) and deployed to App Engine.
I recommend you have a look at Cloud Run for your custom container. It's very close to App Engine (and even better on many points!) and very customizable.
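If you go that route, the deploy step in your existing cloudbuild.yaml could be swapped for something like this sketch (the service name and region are assumptions, not taken from your setup):
# Step 5 (alternative): deploy the pushed image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'service', '--image=gcr.io/service-base/base', '--region=us-central1', '--platform=managed']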
What your cloudbuild.yaml comments refer to as Step 5 corresponds to Step #4 in the error because the system numbers steps from 0.
The error message is accurate: App Engine standard (!) differs from App Engine flexible in that only the latter permits container image deployments. App Engine standard deploys from sources.
See Google's example.
It's possible that something has changed on Google's side that's causing the issue, but the env: standard in your app.yaml suggests the build file has changed.
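For reference, a hedged sketch of what Step 5 could look like when deploying from sources on the java11 runtime, assuming the Maven build produces a runnable JAR (the JAR path and the --appyaml usage are assumptions to check against your project layout):
# Step 5 (revised): deploy the built artifact instead of a container image
- name: 'gcr.io/cloud-builders/gcloud'
  dir: 'service'
  args: ['app', 'deploy', 'target/service.jar', '--appyaml=src/main/appengine/app.yaml']
  timeout: "30m0s"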

Azure DevOps agent must finish current build before taking another one

Situation
I'm working on an Azure DevOps Server 2020 with only one agent.
I have 2 build pipelines:
Build pipeline (yaml)
Merge pipeline (yaml)
Each pipeline contains multiple stages, each containing only one job (because every task of a stage must run on the same agent).
Current behavior
If I run the two pipelines at the same time, the agent runs both in a "fake" parallel manner that makes both builds very slow.
Example of agent process order:
build-stage1, build-stage2, merge-stage1, merge-stage2, merge-stage3, merge-stage4, build-stage3...
Wanted behavior
This would not be unexpected if there were more agents than build executions, but that will never be my case.
So I would prefer to lock the agent to the current build (like the built-in behavior in Jenkins).
Example of wanted agent process order:
build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest), merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
Is it possible to set the agent work attribution policy?
This occurs because a job is not added to the agent queue if it depends on something (and by default each stage depends on the previous one).
Using dependsOn: [] lets Azure DevOps know that a stage depends on nothing, so all jobs are added to the queue immediately and executed in FIFO order.
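A minimal sketch of that idea (note the trade-off: stages marked dependsOn: [] no longer wait for one another, so this only fits where stage order does not matter):
stages:
- stage: Build_Stage1
  dependsOn: []
  ...
- stage: Build_Stage2
  dependsOn: []
  ...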
Agree with Dom.
Based on my test, I could reproduce this behavior, but I am afraid there is no method to lock the agent so that it completes one pipeline before taking another.
As a workaround, you could use a pipeline resource trigger to start the Merge Pipeline.
Build Pipeline:
pool:
  name: Default
stages:
- stage: Build_Stage1
  displayName: Stage1
  ....
- stage: Build_Stage2
  displayName: Stage2
  dependsOn: Build_Stage1
  ....
- stage: Build_Stage3
  displayName: Stage3
  dependsOn: Build_Stage2
  ....
Merge Pipeline:
resources:
  pipelines:
  - pipeline: TestTrigger
    source: ABC
    trigger:
      branches:
      - '*'
pool:
  name: Default
stages:
- stage: Merge_Stage1
  displayName: Merge1
  ...
- stage: Merge_Stage2
  displayName: Merge2
  dependsOn: Merge_Stage1
  ...
- stage: Merge_Stage3
  displayName: Merge3
  dependsOn: Merge_Stage2
  ...
In this case, you could queue the Build Pipeline alone; the Merge Pipeline will then be triggered after the Build Pipeline completes.
The process: build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest) -> trigger -> merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
On the other hand, this requirement is valuable.
To get this feature, you could add a request on our UserVoice site, which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.

Google Cloud Build timing out

I have a Google Cloud Build build that times out after 10 min, 3 sec. Is there a way to extend that timeout?
The build status is set to "Build failed (timeout)" and I'm okay with it taking longer than 10 minutes.
In cloudbuild.yaml you have to add something like timeout: 660s.
E.g.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/[PROJECT_ID]/[CONTAINER_IMAGE]', '.' ]
images:
- 'gcr.io/[PROJECT_ID]/[CONTAINER_IMAGE]'
timeout: 660s
If you defined your build using a cloudbuild.yaml, you can just set the timeout field; see the full definition of a Build Resource in the documentation.
If you are using the gcloud CLI, it takes a --timeout flag; try gcloud builds submit --help for details.
Example: gcloud builds submit --timeout=900s ...
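A side note worth verifying against the current docs: a timeout can also be set on individual steps, while the top-level timeout caps the whole build. A small sketch with illustrative values:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/[PROJECT_ID]/[CONTAINER_IMAGE]', '.']
  timeout: 600s # this single step may run up to 10 minutes
timeout: 900s # the whole build may run up to 15 minutes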
