Google Cloud Build with pack and Secret Manager not accessing environment variables - google-cloud-build

I'm using a standard gcr.io/k8s-skaffold/pack build step to build my app for Google Cloud Run using Google Cloud Build.
In my cloudbuild.yaml I load 2 secrets from Google Secret Manager and pass them to the build step. Cloud Build has access to those secrets; otherwise I would get an error message (I did get that kind of error at the beginning when setting up the build, but now it seems to have access).
However, it seems like the environment variables don't get set.
I think it might be a syntax problem in how I try to pass the variables.
This is the stripped-down cloudbuild.yaml:
steps:
  - name: gcr.io/k8s-skaffold/pack
    args:
      - build
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - '--builder=gcr.io/buildpacks/builder:v1'
      - '--network=cloudbuild'
      - '--path=.'
      - '--env=SEC_A=$$SEC_A'
      - '--env=SEC_B=$$SEC_B'
    secretEnv: ['SEC_A', 'SEC_B']
    id: Buildpack
    entrypoint: pack
availableSecrets:
  secretManager:
    - versionName: projects/<pid>/secrets/SEC_A/versions/latest
      env: SEC_A
    - versionName: projects/<pid>/secrets/SEC_B/versions/latest
      env: SEC_B
An error message that I hacked into the build for debugging shows me that the env var is empty during this build step.
I tried using $, $$ (as seen above), &&, and ${...} for substitution, but maybe the problem lies somewhere else.

Yes, it's a common issue and a trap on Cloud Build: your secrets can't be read if you use the args[] array to pass arguments. You have to use script mode instead, like this:
steps:
  - name: gcr.io/k8s-skaffold/pack
    entrypoint: bash
    args:
      - -c
      - |
        pack build $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA --builder=gcr.io/buildpacks/builder:v1 --network=cloudbuild --path=. --env=SEC_A=$$SEC_A --env=SEC_B=$$SEC_B
    secretEnv: ['SEC_A', 'SEC_B']
    id: Buildpack
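Note that the availableSecrets block from the question still needs to stay at the top level of the cloudbuild.yaml so that SEC_A and SEC_B are actually resolved; a sketch reusing the question's secret names (<pid> is your project id):
availableSecrets:
  secretManager:
    - versionName: projects/<pid>/secrets/SEC_A/versions/latest
      env: SEC_A
    - versionName: projects/<pid>/secrets/SEC_B/versions/latest
      env: SEC_B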

Related

Getting an error while running sonar scanner using Cloud Build with Secret Manager

Can you please help with my issue below?
I am running sonar scanner using Cloud Build with Secret Manager but facing an issue.
I followed the same steps as https://cloud.google.com/cloud-build/docs/securing-builds/use-secrets
Here is my code:
steps:
  - name: 'gcr.io/$_PROJECT_ID/sonar-scanner:latest'
    entrypoint: 'bash'
    args:
      - '-c'
      - '-Dsonar.host.url=http://sonar:9000/'
      - '-Dsonar.login=$$USERNAME'
      - '-Dsonar.password=$$PASSWORD'
      - '-Dsonar.projectKey=$_BRANCH-analytics'
      - '-Dsonar.sources=.'
    secretEnv: ['USERNAME', 'PASSWORD']
    dir: 'analytics'
availableSecrets:
  secretManager:
    - versionName: projects/project-id/secrets/sonar_pass/versions/1
      env: 'PASSWORD'
    - versionName: projects/project-id/secrets/sonar_user/versions/2
      env: 'USERNAME'
tags: ['cloud-builders-community']
and the issue I am facing is:
bash: line 0: bash: -Dsonar.login=$USERNAME: invalid option name
ERROR
ERROR: build step 0 "gcr.io/project-id/sonar-scanner:latest" failed: step exited with non-zero status: 2
I tried different things but can't find a solution.
I would be grateful if you could help me with this.
Thank you
I actually had the same problem as you. It is indeed quite important that you use entrypoint: 'bash' and '-c', otherwise Cloud Build doesn't recognise the variables from Secret Manager.
My cloudbuild.yaml step looks like this:
steps:
  - id: 'sonarQube'
    name: 'gcr.io/$PROJECT_ID/sonar-scanner:latest'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        sonar-scanner -Dsonar.host.url=<url> -Dsonar.login=$$SONARQUBE_TOKEN -Dsonar.projectKey=<project-key> -Dsonar.sources=.
    secretEnv: ['SONARQUBE_TOKEN']
availableSecrets:
  secretManager:
    - versionName: projects/<project-id>/secrets/sonarqube-token/versions/latest
      env: 'SONARQUBE_TOKEN'
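Adapting that pattern to the configuration in the question, the step might look roughly like this (a sketch; it keeps the original host URL, project key substitution, secret names, and dir, and assumes the image's sonar-scanner binary is on the PATH):
steps:
  - name: 'gcr.io/$_PROJECT_ID/sonar-scanner:latest'
    entrypoint: 'bash'
    dir: 'analytics'
    args:
      - '-c'
      - |
        sonar-scanner -Dsonar.host.url=http://sonar:9000/ -Dsonar.login=$$USERNAME -Dsonar.password=$$PASSWORD -Dsonar.projectKey=$_BRANCH-analytics -Dsonar.sources=.
    secretEnv: ['USERNAME', 'PASSWORD']
availableSecrets:
  secretManager:
    - versionName: projects/project-id/secrets/sonar_pass/versions/1
      env: 'PASSWORD'
    - versionName: projects/project-id/secrets/sonar_user/versions/2
      env: 'USERNAME'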
I had some problems with the latest sonar-scanner image because it used Alpine; I got the error jre-bin-java-not-found even though the image has Java. Because of this, I created my own Docker image based on Ubuntu instead of Alpine. You can find the image in a pull request.
I found this example of using sonar-scanner in Cloud Build. It seems that sonar-scanner should be used without bash.
I think that you should remove entrypoint: 'bash' and '-c'.
A similar approach is in this SO question. It should solve this error.

Unable to deploy pre built image in app engine standard environment (GCP)

My Spring Boot application was building and deploying fine in Cloud Build without any issue until September.
Now my trigger fails at gcloud app deploy.
Step #4: ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: Deployment cannot use a pre-built image. Pre-built images are only allowed in the App Engine Flexible Environment.
app.yaml
runtime: java11
env: standard
service: service
handlers:
  - url: /.*
    script: this field is required, but ignored
cloudbuild.yaml
steps:
  # backend deployment
  # Step 1:
  - name: maven:3-jdk-14
    entrypoint: mvn
    dir: 'service'
    args: ["test"]
  # Step 2:
  - name: maven:3-jdk-14
    entrypoint: mvn
    dir: 'service'
    args: ["clean", "install", "-Dmaven.test.skip=true"]
  # Step 3:
  - name: docker
    dir: 'service'
    args: ["build", "-t", "gcr.io/service-base/base", "."]
  # Step 4:
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/service-base/base"]
  # Step 5:
  - name: 'gcr.io/cloud-builders/gcloud'
    dir: 'service/src/main/appengine'
    args: ['app', 'deploy', "--image-url=gcr.io/service-base/base"]
    timeout: "30m0s"
  # Step 6:
  # dispatch.yaml deployment
  - name: "gcr.io/cloud-builders/gcloud"
    dir: 'service/src/main/appengine'
    args: ["app", "deploy", "dispatch.yaml"]
    timeout: "30m0s"
timeout: "100m0s"
images: ["gcr.io/service-base/base"]
Cloud build error
Thanks in advance. I'm confused about how my build was working fine before and what I am doing wrong now.
You can't deploy a custom container on App Engine standard. You have to provide your code and the runtime environment; Buildpacks are then used to create a standard container on Google's side (for information, a new Cloud Build job is run for this) and it is deployed on App Engine.
I recommend you have a look at Cloud Run to use your custom container. It's very close to App Engine (and even better on many points!) and very customizable.
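If you do switch to Cloud Run, the deployment step could reuse the image pushed in Step 4; a minimal sketch (the service name and region here are assumptions, not from the original build file):
  # Sketch: deploy the already-pushed image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'service-base', '--image=gcr.io/service-base/base', '--region=us-central1', '--platform=managed']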
What your cloudbuild.yaml comments refer to as Step 5 corresponds to Step #4 in the error, because the system numbers steps from 0.
The error message is accurate: App Engine standard differs from App Engine flexible in that the latter (flexible) permits container image deployments, while App Engine standard deploys from source.
See Google's example.
It's possible that something has changed on Google's side that's causing the issue, but the env: standard in your app.yaml suggests the build file has changed.
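If you stay on App Engine standard, Step 5 would drop --image-url and deploy from source instead; a minimal sketch (it assumes app.yaml and the built artifact are reachable from the step's working directory):
  # Step 5 (sketch): deploy from source instead of a pre-built image
  - name: 'gcr.io/cloud-builders/gcloud'
    dir: 'service/src/main/appengine'
    args: ['app', 'deploy']
    timeout: "30m0s"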

pre-commit on windows for terraform

I'm trying to get pre-commit up and running on Windows with a simple terraform fmt command, but there aren't many examples of how to run an exe. Piecing it together, I have the below.
My .pre-commit-config.yaml:
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.1.0 # Use the ref you want to point at
    hooks:
      - id: detect-aws-credentials
      - id: detect-private-key
  - repo: local
    hooks:
      - id: terraform-fmt
        name: terraform fmt
        description: runs terraform fmt
        entry: terraform fmt
        args: [-recursive]
        language: system
but I'm getting the error below from pre-commit run -a:
Detect AWS Credentials...................................................Passed
Detect Private Key.......................................................Passed
terraform fmt............................................................Failed
- hook id: terraform-fmt
- exit code: 1
The fmt command expects at most one argument.
Usage: terraform fmt [options] [DIR]
It then looks like it's running terraform fmt multiple times, as I keep getting the error in a loop. Any idea what I'm missing?
you may have better luck with https://github.com/antonbabenko/pre-commit-terraform
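A minimal sketch of what using that repository might look like (the rev pin is an assumption; point it at whatever release you want):
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.64.0  # assumed tag, pin to a real release
    hooks:
      - id: terraform_fmt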
that said, you can get your example working by using the following I believe:
- repo: local
  hooks:
    - id: terraform-fmt
      name: terraform fmt
      description: runs terraform fmt
      entry: terraform fmt -recursive
      language: system
      pass_filenames: false
note that I've done several things:
- pass_filenames: false -- pre-commit normally works by passing filenames to hooks, which is why your hook is being invoked multiple times
- I removed args (it's unnecessary and only really helpful for remote repositories) and combined it with entry
- note that using this as a local hook will generally be worse than using the repository above, because it will always run against all files instead of just the files you changed (usually making it much, much slower!)
disclaimer: I'm the author of pre-commit

CircleCI version 2.1 - "Cannot find a definition for command named 'restore-cache'"

I'm currently attempting to use the commands feature available in CircleCI version 2.1, so that I can reuse some common commands. I'm testing using the CLI command:
circleci config process ./.circleci/config.latest.yaml > ./.circleci/config.yml
But I receive the following error:
Error: Error calling workflow: 'main'
Error calling job: 'build'
Error calling command: 'build_source'
Cannot find a definition for command named restore-cache
It seems that restore-cache works just fine in a straight-up version 2 config file, but when I try to process a 2.1 file using process it kicks up a fuss.
Below is an edited version of my config.yaml file which should hopefully be of some use. Please let me know if there is any additional information that would be useful.
version: 2.1
defaults: &defaults
  /**
   * Unimportant stuff
   */
aliases:
  - &restore-root-cache
    keys:
      - v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
      - v1-deps-{{ .Branch }}
      - v1-deps
commands:
  build_source:
    description: 'Installs dependencies, then builds src, builds documentation, and runs tests'
    steps:
      - restore-cache: *restore-root-cache
      - other-commands...
jobs:
  build:
    <<: *defaults
    steps:
      - checkout
      - build_source
workflows:
  version: 2.1
  main:
    jobs:
      - build:
          filters:
            branches:
              ignore: develop
The command is restore_cache (with an underscore), not restore-cache (with a dash): https://circleci.com/docs/2.0/configuration-reference/#restore_cache
It should work in commands.
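With the underscore, the command definition from the question would look like this (a sketch; only the step name changes):
commands:
  build_source:
    description: 'Installs dependencies, then builds src, builds documentation, and runs tests'
    steps:
      - restore_cache: *restore-root-cache
      - other-commands...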
restore_cache is a special step that needs to be under a job, not another command.

drone.io 0.5 slack no longer working

We had Slack notifications working in drone.io 0.4 just fine, but since we updated to 0.5 I can't get them working despite following the documentation.
Before, it was like this:
build:
  build and deploy stuff...
notify:
  slack:
    webhook_url: $$SLACK_WEBHOOK_URL
    channel: continuous_integratio
    username: drone
You can see here that I used $$ to reference the special drone config file of old.
Now my latest attempt looks like this:
pipeline:
  build and deploy stuff...
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/...
    channel: continuous_integratio
    username: drone
According to the documentation, slack is now indented within the pipeline (previously build) level.
I tried changing slack out for notify like it was before, used the SLACK_WEBHOOK secret only via the drone CLI, and there were other things I attempted as well.
Does anyone know what I might be doing wrong?
This is an (almost exact) yaml I am using with Slack notification enabled, with the exception that I've masked the credentials:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/XXXXXXXXX/YYYYYYYYY/ZZZZZZZZZZZZZZZZZZZZZZZZ
    when:
      status: [ success, failure ]
There is unfortunately nothing in your example that jumps out, perhaps with the exception of the channel name having a typo (although I'm not sure whether that represents your real yaml configuration or not).
If you are attempting to use secrets (via the CLI) you need to make sure you sign your yaml file and commit the signature file to your repository. You can then reference your secret in the yaml similar to 0.4, but with a slightly different syntax:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  slack:
    image: plugins/slack
    webhook: ${SLACK_WEBHOOK}
    when:
      status: [ success, failure ]
You can read more about secrets at http://readme.drone.io/usage/secret-guide/
You can also invoke the plugin directly from the command line to help test different input values. This can help with debugging. See https://github.com/drone-plugins/drone-slack#usage
The issue was that in 0.4 the notify plugin was located outside the scope of the pipeline (then called build), and since 0.5 it's located inside the pipeline. Combined with the fact that when a pipeline fails it exits the scope immediately, this means the slack (then notify) step never gets reached anymore.
The solution is to explicitly tell it to execute the step on failure as well, with the when condition:
when:
  status: [ success, failure ]
This is actually mentioned in the getting-started guide, but I didn't read through to the end as I was aiming to get it up and running quickly and didn't worry about what I considered to be edge cases.
