Build times out, can't increase timeout - google-cloud-build

I'm deploying to Kubernetes via Cloud Build. Every now and then the build times out because it exceeds the built-in timeout of ten minutes. I can't figure out how to increase this timeout. I'm using an in-line build config in my trigger. It looks like this:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '-t'
      - '$_IMAGE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - $_DOCKERFILE_NAME
    dir: $_DOCKERFILE_DIR
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_IMAGE_NAME:$COMMIT_SHA'
    id: Push
  - name: gcr.io/cloud-builders/gke-deploy
    args:
      - prepare
      - '--filename=$_K8S_YAML_PATH'
      - '--image=$_IMAGE_NAME:$COMMIT_SHA'
      - '--app=$_K8S_APP_NAME'
      - '--version=$COMMIT_SHA'
      - '--namespace=$_K8S_NAMESPACE'
      - '--label=$_K8S_LABELS'
      - '--annotation=$_K8S_ANNOTATIONS,gcb-build-id=$BUILD_ID'
      - '--create-application-cr'
      - >-
        --links="Build
        details=https://console.cloud.google.com/cloud-build/builds/$BUILD_ID?project=$PROJECT_ID"
      - '--output=output'
    id: Prepare deploy
  - name: gcr.io/cloud-builders/gsutil
    args:
      - '-c'
      - |-
        if [ "$_OUTPUT_BUCKET_PATH" != "" ]
        then
          gsutil cp -r output/suggested gs://$_OUTPUT_BUCKET_PATH/config/$_K8S_APP_NAME/$BUILD_ID/suggested
          gsutil cp -r output/expanded gs://$_OUTPUT_BUCKET_PATH/config/$_K8S_APP_NAME/$BUILD_ID/expanded
        fi
    id: Save configs
    entrypoint: sh
  - name: gcr.io/cloud-builders/gke-deploy
    args:
      - apply
      - '--filename=output/expanded'
      - '--cluster=$_GKE_CLUSTER'
      - '--location=$_GKE_LOCATION'
      - '--namespace=$_K8S_NAMESPACE'
    id: Apply deploy
    timeout: 900s
images:
  - '$_IMAGE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
  _K8S_NAMESPACE: default
  _OUTPUT_BUCKET_PATH: xxxxx-xxxxx-xxxxx_cloudbuild/deploy
  _K8S_YAML_PATH: kubernetes/
  _DOCKERFILE_DIR: ''
  _IMAGE_NAME: xxxxxxxxxxx
  _K8S_ANNOTATIONS: gcb-trigger-id=xxxxxxxx-xxxxxxx
  _GKE_CLUSTER: xxxxx
  _K8S_APP_NAME: xxxxx
  _DOCKERFILE_NAME: Dockerfile
  _K8S_LABELS: ''
  _GKE_LOCATION: xxxxxxxx
tags:
  - gcp-cloud-build-deploy
  - $_K8S_APP_NAME
I've tried putting a timeout: 900s argument in various places, with no luck.

The 10-minute timeout is the default for the whole build, so if you add the timeout: 900s option inside a step, it only applies to that step. A step may have a larger timeout than the overall build timeout, but the whole build still fails once its total running time exceeds the build timeout. This example shows the behavior:
steps:
- name: 'ubuntu'
  args: ['sleep', '600']
  timeout: 800s # Step timeout: the step may run for up to 800s, but since the overall timeout is 600s the build fails once 600s have passed, so the effective limit is 600s.
timeout: 600s # Overall build timeout
That said, the solution is to raise the overall build timeout by setting it outside of any step; a build can then take up to 24 hours before it fails with a timeout error.
Something like the following example should work out for you:
steps:
- name: 'ubuntu'
  args: ['sleep', '600']
timeout: 3600s
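Applied to the inline trigger config in the question, that means adding the timeout key at the top level, as a sibling of steps (a minimal sketch of the skeleton; 3600s is just an example value):
steps:
  - name: gcr.io/cloud-builders/docker
    # ... Build, Push, Prepare deploy, Save configs and Apply deploy steps as above ...
timeout: 3600s # overall build timeout, outside of any step
images:
  - '$_IMAGE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE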

Another way to avoid the timeout is to use a more powerful machine type, so that the build itself takes less time.
You can specify it like this:
options:
  machineType: N1_HIGHCPU_8
Note: these performance benefits come at a cost. Check the pricing section to pick the machine type that fits your requirements and budget.
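Both remedies can be combined in a single config if needed (a sketch; the values are illustrative):
timeout: 3600s # longer overall build timeout
options:
  machineType: N1_HIGHCPU_8 # more powerful build machine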

Related

Container 'docker' exceeded memory limit

Hi, I have the following pipeline config:
image: maven:3.3.9
definitions:
  steps:
    - step: &build-step
        name: SonarQube analysis
        script:
          - pipe: sonarsource/sonarcloud-scan:1.4.0
            variables:
              SONAR_HOST_URL: ${SONAR_HOST_URL} # Get the value from the repository/workspace variable.
              SONAR_SCANNER_OPTS: -Xmx1024m
              SONAR_TOKEN: ${SONAR_TOKEN} # Get the value from the repository/workspace variable. You shouldn't set the secret in clear text here.
  caches:
    sonar: ~/.sonar
clone:
  depth: full
pipelines:
  branches:
    '{master}': # or the name of your main branch
      - step: *build-step
I want to increase the size of the Docker service to 2x. Could you please help with the proper YAML config for this?
I tried changing it to
- step:
    size: 2x
but it doesn't work; I get errors saying the step is null or missing.
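A minimal sketch of where size: 2x is expected to go, assuming Bitbucket Pipelines step options: it belongs inside the step mapping, alongside name and script. The "null or missing" error typically means the key landed at the wrong indentation level, leaving the step body empty.
definitions:
  steps:
    - step: &build-step
        name: SonarQube analysis
        size: 2x # doubles the memory available to this step
        script:
          - pipe: sonarsource/sonarcloud-scan:1.4.0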

How to debug GitHub Action failure

Just yesterday a stable GitHub Action (CI) started failing rather cryptically and I've run out of tools to debug it.
All I can think of is our BUNDLE_ACCESS_TOKEN went bad somehow, but I didn't set that up. It's an Actions secret under Repository Secrets, so its value isn't visible in the GitHub UI. How can I test whether it's still valid?
Or maybe it's something else?!? "Bad credentials" is vague...
Here's the meat of the action we're trying to run:
# my_tests.yml
jobs:
  my-test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13.4
        env:
          POSTGRES_USERNAME: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: myapp_test
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      RAILS_ENV: test
      POSTGRES_HOST: localhost
      POSTGRES_USERNAME: pg
      POSTGRES_PASSWORD: pg
      GITHUB_TOKEN: ${{ secrets.BUNDLE_ACCESS_TOKEN }}
      BUNDLE_GITHUB__COM: x-access-token:${{ secrets.BUNDLE_ACCESS_TOKEN }}
      CUCUMBER_FORMAT: progress
    steps:
      - uses: actions/checkout@v2
      - uses: FedericoCarboni/setup-ffmpeg@v1
      ...
And with debug turned on, here's the failure (line 20) from GitHub Actions:
Run FedericoCarboni/setup-ffmpeg@v1
 1 ##[debug]Evaluating condition for step: 'Run FedericoCarboni/setup-ffmpeg@v1'
 2 ##[debug]Evaluating: success()
 3 ##[debug]Evaluating success:
 4 ##[debug]=> true
 5 ##[debug]Result: true
 6 ##[debug]Starting: Run FedericoCarboni/setup-ffmpeg@v1
 7 ##[debug]Loading inputs
 8 ##[debug]Loading env
 9 Run FedericoCarboni/setup-ffmpeg@v1
10 with:
11 env:
12   RAILS_ENV: test
13   POSTGRES_HOST: localhost
14   POSTGRES_USERNAME: pg
15   POSTGRES_PASSWORD: pg
16   GITHUB_TOKEN: ***
17   BUNDLE_GITHUB__COM: x-access-token:***
19   CUCUMBER_FORMAT: progress
20 Error: Bad credentials
21 ##[debug]Node Action run completed with exit code 1
22 ##[debug]Finishing: Run FedericoCarboni/setup-ffmpeg@v1
Thanks for any help.
For your particular case, try scoping GITHUB_TOKEN and BUNDLE_GITHUB__COM to only the steps that actually use them, instead of the whole job.
Also consider switching to FedericoCarboni/setup-ffmpeg@v2, which has built-in support for github.token.
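A minimal sketch of step-level scoping (the "Install gems" step is hypothetical; adapt it to whichever steps actually need the token):
steps:
  - uses: actions/checkout@v2
  - name: Install gems # hypothetical step that needs the token
    run: bundle install
    env:
      BUNDLE_GITHUB__COM: x-access-token:${{ secrets.BUNDLE_ACCESS_TOKEN }}
  - uses: FedericoCarboni/setup-ffmpeg@v2 # runs without the credentials in its environment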
Generic GitHub Actions debugging:
https://github.com/nektos/act
Runs actions locally; mostly gives you faster feedback for experiments.
https://github.com/mxschmitt/action-tmate
Lets you create an interactive remote session where you can poke around.
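For example (a sketch; the job name comes from the workflow above, and the tmate version is an assumption):
# Run the workflow's job locally with act for fast iteration
act -j my-test
# Or add a step like this to the workflow to pause it and SSH into the runner:
#   - uses: mxschmitt/action-tmate@v3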

Any practical difference between using `docker build` & `docker push` together vs. `cloud build submit` in cloud build config files?

Google Cloud docs use the first code block but I'm wondering why they don't use the second one. As far as I can tell they achieve the same result. Is there any practical difference?
# config 1
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/project-id/project-name', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/project-id/project-name']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
images: ['gcr.io/project-id/project-name']
# config 2
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['builds', 'submit', '--region', 'us-central1', '--tag', 'gcr.io/project-id/project-name', '.']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'project-name', '--image', 'gcr.io/project-id/project-name', '--region', 'us-central1']
I run this with gcloud builds submit --config cloudbuild.yaml
In the second config you launch a Cloud Build from inside a Cloud Build, which means you pay for the docker build/push time twice: the outer build sits idle, still billed, while the inner build runs. The timeline looks like this:
Cloud Build 1
  Cloud Build 2
    Docker Build
    Docker Push
  Deploy on Cloud Run
In addition, the number of concurrent builds is limited, and with config 2 you consume twice as much of that quota.
The result is the same either way (config 1 should be slightly faster, because there is no second Cloud Build to spin up).

CircleCI getting an error on running a make command

I want a config.yml that has "build" and "test" jobs.
We have a Makefile in our Docker image, but it seems the job isn't finding it, since I always get this error:
#!/bin/bash -eo pipefail
make test
make: *** No rule to make target 'test'. Stop.
Exited with code exit status 2
CircleCI received exit code 2
Is anyone familiar with this? I'm not sure what I have missed.
Thank you!
Here is the yml file:
version: 2.1
workflows:
  my-workflow:
    jobs:
      - build:
          context: circleci-aws-credentials
      - test
jobs:
  build:
    docker:
      - image: circleci/openjdk:11-jdk
    working_directory: ~/my-project
    environment:
      # Customize the JVM maximum heap limit
      JVM_OPTS: -Xmx3200m
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          key: my-project-service-{{ checksum "./build.gradle" }}
  test:
    docker:
      - image: circleci/openjdk:11-jdk
    working_directory: ~/my-project
    environment:
      # Customize the JVM maximum heap limit
      JVM_OPTS: -Xmx3200m
    steps:
      - run:
          name: Gradle test
          command: make test
      - store_test_results:
          path: ./build/test-results
Thank you for your help in advance!
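One likely cause, judging only from the config above: the test job has no checkout step, so the repository (and its Makefile) is never present in the working directory when make test runs. A minimal sketch of that fix:
  test:
    docker:
      - image: circleci/openjdk:11-jdk
    working_directory: ~/my-project
    steps:
      - checkout # fetch the repo so the Makefile is present
      - run:
          name: Gradle test
          command: make test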

Google cloud build - provide assets to docker build step

I'm trying to combine two examples from the Google Cloud Build documentation:
This example, where a container is built using a YAML file.
And this example, where a persistent volume is used.
My scenario is simple: the first step in my build produces some assets I'd like to bundle into a Docker image built in a subsequent step. However, since the COPY command is relative to the build context of the image, I can't work out how to reference the assets from the first step inside the Dockerfile used for the second step.
There are potentially multiple ways to solve this problem, but the easiest way I've found is to use the docker cloud builder and run an sh script (since Bash is not included in the docker image).
In this example, we build on the examples from the question, producing an asset file in the first build step, then using it inside the Dockerfile of the second step.
cloudbuild.yaml
steps:
  - name: 'ubuntu'
    volumes:
      - name: 'vol1'
        path: '/persistent_volume'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        echo "Hello, world!" > /persistent_volume/file
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'sh'
    volumes:
      - name: 'vol1'
        path: '/persistent_volume'
    args: ['docker-build.sh']
    env:
      - 'PROJECT=$PROJECT_ID'
images:
  - 'gcr.io/$PROJECT_ID/quickstart-image'
docker-build.sh
#!/bin/sh
cp -R /persistent_volume .
docker build -t gcr.io/$PROJECT/quickstart-image .
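Since cp -R /persistent_volume . places the assets inside the Docker build context, the Dockerfile of the second step can reference them with an ordinary COPY (a minimal sketch; the base image and target path are illustrative):
FROM alpine:3
# Assets produced by the first build step and staged by docker-build.sh
COPY persistent_volume/ /assets/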
