Patching CloudBuild Triggers - google-cloud-build

Is there a CLI option for the Cloud Build triggers patch command?
The GCB REST API has a 'patch' method:
https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/patch
yet the gcloud CLI doesn't list 'patch': https://cloud.google.com/sdk/gcloud/reference/beta/builds/triggers/
I tried gcloud beta builds triggers patch $trigger_name, but it didn't work.
My use case: I've got a folder with all my GCB triggers, and I'm using GCB itself to create new / update existing triggers from the trigger.yaml files in that folder. Currently it deletes all the triggers and recreates them from those files. Patching might be a better option, so that I don't keep generating new trigger IDs.
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo 'Starting bash'
    for gcb_trigger in folder/triggers/*.yaml; do
      gcloud beta builds triggers delete "$(basename $gcb_trigger .yaml)" --quiet
    done
    for gcb_trigger in folder/triggers/*.yaml; do
      gcloud beta builds triggers create cloud-source-repositories --trigger-config="$gcb_trigger"
    done
    echo 'Finishing bash'

Patching triggers is not yet available via a gcloud command, but it is possible to patch triggers by calling the REST API directly with curl:
curl -X PATCH \
  'https://cloudbuild.googleapis.com/v1/projects/[PROJECT_ID]/triggers/[TRIGGER_ID]' \
  -H 'Content-Type: application/json; charset=utf-8' \
  -H 'Authorization: Bearer [ACCESS_TOKEN]' \
  -d '{
    "name": "foobar"
  }'
This request will update the name of your trigger to foobar.
Note that before doing this, you need to supply an access token in the Authorization header.
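If you're already authenticated with gcloud, here is a minimal sketch of a scripted call. It assumes gcloud auth print-access-token for the token; [PROJECT_ID] and [TRIGGER_ID] still need to be filled in, and the JSON body carries only the field you want to change, as in the example above:
# sketch: patch a trigger's name via the REST API, using the caller's gcloud credentials
ACCESS_TOKEN="$(gcloud auth print-access-token)"
curl -X PATCH \
  "https://cloudbuild.googleapis.com/v1/projects/[PROJECT_ID]/triggers/[TRIGGER_ID]" \
  -H 'Content-Type: application/json; charset=utf-8' \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -d '{"name": "foobar"}'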

Related

Trying to curl on rancher pod start up but unable to curl

Currently I'm trying to make an init container on Rancher which will send a curl request to one of my services. I am having repeated issues trying to get this to work and I cannot pinpoint why. I am certain my YAML format is correct, and I am installing BusyBox, so curl should be available for use.
You are missing the -c option for the shell, which tells it to read commands from the command-line string instead of from a file:
sh -c 'curl -X POST ...'
So you have to put -c as the first container arg. Note that sh -c takes the whole command as a single string (anything after that string becomes positional parameters, not part of the command), so pass the curl command as one arg:
...
- args:
  - -c
  - curl -X POST ...
...
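Put together, a minimal sketch of what the initContainer spec could look like. The container name, image, and URL below are placeholders, not taken from the question; note that stock BusyBox ships wget but not curl, so an image that actually contains curl (e.g. curlimages/curl) is assumed here:
initContainers:
- name: notify-service          # placeholder name
  image: curlimages/curl        # assumed: an image that ships both sh and curl
  command: ['sh']
  args:
  - -c
  - curl -X POST http://my-service:8080/notify   # placeholder URL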

how to add heroku ssh keys to known_hosts (used to work before)

I have a CI pipeline on GitLab where I deploy to Heroku. I don't want to use my personal API key, since this is a shared repository. I had this CI config working until a few weeks ago:
deploy-heroku:
  variables:
    GIT_DEPTH: 200
  stage: deploy
  only:
    - master
  except:
    - schedules
  script:
    - apk update && apk upgrade && apk add curl bash git openssh-client
    - curl https://cli-assets.heroku.com/install.sh | sh
    - heroku git:remote -a $HEROKU_APP_NAME --ssh-git
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_ed25519
    - chmod 700 ~/.ssh/id_ed25519
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_ed25519
    - ssh-keyscan -H 'heroku.com' >> ~/.ssh/known_hosts
    - git push -f heroku HEAD:master --no-verify
This worked flawlessly, and in the logs:
$ ssh-keyscan -H 'heroku.com' >> ~/.ssh/known_hosts
# heroku.com:22 SSH-2.0-endosome
# heroku.com:22 SSH-2.0-endosome
# heroku.com:22 SSH-2.0-endosome
# heroku.com:22 SSH-2.0-endosome
# heroku.com:22 SSH-2.0-endosome
However, for a few weeks now, this has been failing on the ssh-keyscan:
$ ssh-keyscan -H 'heroku.com' >> ~/.ssh/known_hosts
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
So it seems ssh-keyscan doesn't work anymore. Running ssh-keyscan -H 'heroku.com' no longer returns any results (it used to).
How can I make the keyscan work (or make sure the right keys are in known_hosts)?
Or, more generally: how can I make the Heroku deployment work without using a personal API key?
Git over SSH was deprecated and removed from Heroku.
This script does work:
- apk update && apk upgrade && apk add curl bash git openssh-client
- curl https://cli-assets.heroku.com/install.sh | sh
- git push --no-verify https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP_NAME.git HEAD:master
In this case, the --no-verify is necessary, because git looks for git-lfs in one of the hooks. With the --no-verify flag, this hook is skipped.
The HEROKU_API_KEY can be generated locally, when you log in to Heroku and create a long-lived authorization:
$ heroku login
heroku: Press any key to open up the browser to login or q to exit:
Opening browser to https://cli-auth.heroku.com/auth/cli/browser/89f5...?requestor=SFMyN...
Logging in... done
Logged in as ...
$ heroku authorizations:create
Creating OAuth Authorization... done
Client: <none>
ID: ...
Description: Long-lived user authorization
Scope: global
Token: <HEROKU_API_KEY>
Updated at: Tue Apr 12 2022 17:34:15 GMT+0200 (Central European Summer Time) (less than a minute ago)
Get the API key from the Token field. (You can inspect all tokens/keys by ID with heroku authorizations.)
Add both HEROKU_API_KEY and HEROKU_APP_NAME as protected variables in your repository.
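Putting it together, the deploy job from the question reduces to the API-key approach; only the script section changes, and all the SSH setup lines go away. This is a sketch assembled from the snippets above, with HEROKU_API_KEY and HEROKU_APP_NAME set as protected CI variables:
deploy-heroku:
  variables:
    GIT_DEPTH: 200
  stage: deploy
  only:
    - master
  except:
    - schedules
  script:
    - apk update && apk upgrade && apk add curl bash git openssh-client
    - curl https://cli-assets.heroku.com/install.sh | sh
    - git push --no-verify https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP_NAME.git HEAD:master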

Assigning output from az artifacts universal download to variable

I have a question: is it possible to assign the output from az artifacts universal download to a variable?
I have a Jenkins job with a shell script like this:
az artifacts universal download \
  --organization "sampleorganization" \
  --project "sampleproject" \
  --scope project \
  --feed "sample-artifacts" \
  --name $PACKAGE \
  --version $VERSION \
  --debug \
  --path .
Then I would like to upload the file to Artifactory with this:
curl -v -u $ARTIFACTORY_USER:$ARTIFACTORY_PASS -X PUT https://artifactory.com/my/test/repo/my_test_file_{VERSION}
I ran the job, but noticed that I pushed an empty file to Artifactory: it created my_test_file_{VERSION}, but it was 0 MB. As far as I understand, I just created an empty file with curl. I would like to pass the output of the az download to the Artifactory repo instead. Is that possible? How can I do this?
I understand that I need to assign the file output to a variable and pass it to curl, something like:
$MyVariableToPass = az artifacts universal download output
And then pass this variable to curl.
Is it possible? How can I pass files from a Jenkins shell job to Artifactory?
Also, I am not using any plugin right now.
Please help.
One possible solution is to use a VM as the Jenkins agent, install the Azure CLI on that VM, and run the task on that node. To set a variable from the value of a CLI command: for example, if the command's output looks like this:
{
  "name": "test_name",
  ....
}
then you can set the variable like this:
name=$(az ...... --query name -o tsv)
That's on Linux. On Windows (PowerShell), you can set it like this:
$name = $(az ...... --query name -o tsv)
As far as I know, the download command does not print the content of the file, so this approach is not suitable for capturing a file's content in a variable.
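That said, you don't need the file content in a variable at all: az artifacts universal download writes the package to disk (--path .), and curl can upload a file from disk as the request body with -T/--upload-file. A sketch reusing the names from the question; the downloaded file name my_test_file is an assumption, substitute whatever the package actually contains:
# download the universal package into the current directory
az artifacts universal download \
  --organization "sampleorganization" \
  --project "sampleproject" \
  --scope project \
  --feed "sample-artifacts" \
  --name $PACKAGE \
  --version $VERSION \
  --path .

# upload the file itself as the request body (-T) instead of an empty PUT;
# note ${VERSION}, not {VERSION}, so the shell actually expands the variable
curl -v -u $ARTIFACTORY_USER:$ARTIFACTORY_PASS \
  -T my_test_file \
  "https://artifactory.com/my/test/repo/my_test_file_${VERSION}"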

Codecov bash uploader `eval error` on `alpine:edge` docker image

I'm trying to upload coverage reports to codecov.io using the codecov-bash script provided by Codecov. The script fails to run on GitLab CI with the alpine:edge Docker image.
Below is the error:
$ /bin/bash <(curl -s https://codecov.io/bash)
/bin/sh: eval: line 107: syntax error: unexpected "("
And here is the relevant part of my .gitlab-ci.yml file:
after_script:
  - apk -U add git curl bash findutils
  - /bin/bash <(curl -s https://codecov.io/bash)
Line 107 of the script is inside the show_help() function, just under the line This is non-exclusive, use -s "*.foo" to match specific paths.:
show_help() {
cat << EOF

  Codecov Bash $VERSION

  Global report uploading tool for Codecov
  Documentation at https://docs.codecov.io/docs
  Contribute at https://github.com/codecov/codecov-bash

    -h               Display this help and exit
    -f FILE          Target file(s) to upload

                     -f "path/to/file"     only upload this file
                                           skips searching unless provided patterns below

                     -f '!*.bar'           ignore all files at pattern *.bar
                     -f '*.foo'            include all files at pattern *.foo
                     Must use single quotes.
                     This is non-exclusive, use -s "*.foo" to match specific paths.

    -s DIR           Directory to search for coverage reports.
                     Already searches project root and artifact folders.

    -t TOKEN         Set the private repository token
                     (option) set environment variable CODECOV_TOKEN=:uuid

                     -t @/path/to/token_file
                     -t uuid

    -n NAME          Custom defined name of the upload. Visible in Codecov UI

    -e ENV           Specify environment variables to be included with this build
                     Also accepting environment variables: CODECOV_ENV=VAR,VAR2

                     -e VAR,VAR2

    -X feature       Toggle functionalities

                     -X gcov          Disable gcov
                     -X coveragepy    Disable python coverage
                     -X fix           Disable report fixing
                     -X search        Disable searching for reports
                     -X xcode         Disable xcode processing
                     -X network       Disable uploading the file network
                     -X gcovout       Disable gcov output
                     -X html          Enable coverage for HTML files
                     -X recursesubs   Enable recurse submodules in git projects when searching for source files

    -N               The commit SHA of the parent for which you are uploading coverage. If not present,
                     the parent will be determined using the API of your repository provider.
                     When using the repository provider's API, the parent is determined via finding
                     the closest ancestor to the commit.

    -R root dir      Used when not in git/hg project to identify project root directory

    -F flag          Flag the upload to group coverage metrics

                     -F unittests        This upload is only unittests
                     -F integration      This upload is only integration tests
                     -F ui,chrome        This upload is Chrome - UI tests

    -c               Move discovered coverage reports to the trash

    -Z               Exit with 1 if not successful. Default will Exit with 0

    -- xcode --
    -D               Custom Derived Data Path for Coverage.profdata and gcov processing
                     Default '~/Library/Developer/Xcode/DerivedData'
    -J               Specify packages to build coverage. Uploader will only build these packages.
                     This can significantly reduce time to build coverage reports.

                     -J 'MyAppName'      Will match "MyAppName" and "MyAppNameTests"
                     -J '^ExampleApp$'   Will match only "ExampleApp" not "ExampleAppTests"

    -- gcov --
    -g GLOB          Paths to ignore during gcov gathering
    -G GLOB          Paths to include during gcov gathering
    -p dir           Project root directory
                     Also used when preparing gcov

    -k prefix        Prefix filepaths to help resolve path fixing: https://github.com/codecov/support/issues/472
    -x gcovexe       gcov executable to run. Defaults to 'gcov'
    -a gcovargs      extra arguments to pass to gcov

    -- Override CI Environment Variables --
       These variables are automatically detected by popular CI providers

    -B branch        Specify the branch name
    -C sha           Specify the commit sha
    -P pr            Specify the pull request number
    -b build         Specify the build number
    -T tag           Specify the git tag

    -- Enterprise --
    -u URL           Set the target url for Enterprise customers
                     Not required when retrieving the bash uploader from your CCE
                     (option) Set environment variable CODECOV_URL=https://my-hosted-codecov.com
    -r SLUG          owner/repo slug used instead of the private repo token in Enterprise
                     (option) set environment variable CODECOV_SLUG=:owner/:repo
                     (option) set in your codecov.yml "codecov.slug"
    -S PATH          File path to your cacert.pem file used to verify ssl with Codecov Enterprise (optional)
                     (option) Set environment variable: CODECOV_CA_BUNDLE="/path/to/ca.pem"
    -U curlargs      Extra curl arguments to communicate with Codecov. e.g., -U "--proxy http://http-proxy"
    -A curlargs      Extra curl arguments to communicate with AWS.

    -- Debugging --
    -d               Don't upload, but dump upload file to stdout
    -q PATH          Write upload file to path
    -K               Remove color from the output
    -v               Verbose mode

EOF
}
I've tried many things to solve the issue, but I can't find a solution. On their GitHub repo, there is an issue that seems related, but the proposed solution has not worked for me: Failing on busybox 1.26, incorrect flags passed to find.
You can find the full log of the job here, line 434: https://gitlab.com/gaspacchio/back-to-the-future/-/jobs/788303704
Based on KamilCuk's comment, below is the full line needed to properly upload code coverage reports to codecov:
bash -c '/bin/bash <(curl -s https://codecov.io/bash)'
As pointed out by KamilCuk, notice the closing '.
The -c flag is documented as such in the man pages for bash:
-c string
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
A likely explanation for why this works: GitLab CI runs each script line with the image's default /bin/sh, which on Alpine is BusyBox ash. That shell does not support Bash's process substitution <(...), hence the syntax error: unexpected "(". Wrapping the whole command in bash -c '...' hands the parsing of the process substitution to Bash.
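In context, the after_script section from the question then becomes:
after_script:
  - apk -U add git curl bash findutils
  - bash -c '/bin/bash <(curl -s https://codecov.io/bash)'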

Why AWS cli being seen by one bash script but not the other?

I've got 2 bash scripts in one directory. The first is run and executes:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
export PATH=~/bin:$PATH
eval $(aws ecr get-login --region eu-west-2 --no-include-email) # executes OK
The second script is run and executes:
configure_aws_cli() {
aws --version
aws configure set default.region eu-west-2
aws configure set default.output json
echo "AWS configured."
}
configure_aws_cli
How come I get aws: command not found?
I get this error even when violating DRY like so:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -b ~/bin/aws
export PATH=~/bin:$PATH
configure_aws_cli() {
aws --version
aws configure set default.region eu-west-2
aws configure set default.output json
echo "AWS configured."
}
configure_aws_cli
If you just execute a script, it runs in a child process, and your export PATH dies with that child process.
Run the first script with . or source instead, so that it executes in the current shell:
. ./first.sh
./second.sh
This happens when you, for whatever reason, run the two scripts with different interpreters.
Your script works in Bash because it tilde-expands ~ in the export argument, both after = and after each :. It fails in e.g. dash, which does neither.
You can make it work in all shells by using $HOME instead of ~:
export PATH="$HOME/bin:$PATH"
If you additionally want changes to PATH to apply to future scripts, the regular source rules apply.
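A quick way to see the difference is to run the same export in both shells (an illustrative transcript; the exact paths depend on the machine, and dash must be installed):
$ bash -c 'export PATH=~/bin:$PATH; echo "$PATH"'
/home/user/bin:/usr/local/bin:/usr/bin:/bin
$ dash -c 'export PATH=~/bin:$PATH; echo "$PATH"'
~/bin:/usr/local/bin:/usr/bin:/bin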
Problem solved by installing the AWS command line tool with pip instead of pulling the bundled installer from AWS. No messing around with PATH was necessary.
