Not able to set environment variable in cloudbuild.yaml file

I am trying to set an env variable in the cloudbuild.yaml file, but it's not getting set. Am I missing something? Below is the yaml file:
cloudbuild.yaml
steps:
# Install npm
- name: "node:10.16.3"
  id: installing_npm
  args: ["npm", "install"]
  dir: "/workspace/API/ground_truth_trigger"
# Test Cloud Function
- name: 'node:10.16.3'
  id: run_test_coverage
  dir: '/workspace/API/ground_truth_trigger'
  entrypoint: bash
  env: ['BUCKET_NAME = dummycblbucket', 'AUDIT_BUCKET_NAME = dummyAuditbucket']
  args:
    - '-c'
    - |
      if [[ $BRANCH_NAME =~ ^ground_truth_trigger-[0-9]+-api$ ]]
      then
        npm run test
      fi
  # env:
  #   - 'BUCKET_NAME = dummycblbucket'
  #   - 'AUDIT_BUCKET_NAME = dummyAuditbucket'
Below are the logs:
Step #1 - "run_test_coverage": Already have image: node:10.16.3
Step #1 - "run_test_coverage":
Step #1 - "run_test_coverage": > ground_truth_trigger@1.0.0 test /workspace/API/ground_truth_trigger
Step #1 - "run_test_coverage": > nyc --reporter=lcov --reporter=text mocha test/unit/*
Step #1 - "run_test_coverage":
Step #1 - "run_test_coverage": envs { npm_config_cache_lock_stale: '60000',
Step #1 - "run_test_coverage": npm_config_ham_it_up: '',
Step #1 - "run_test_coverage": npm_config_legacy_bundling: '',
Step #1 - "run_test_coverage": npm_config_sign_git_tag: '',
Step #1 - "run_test_coverage": npm_config_user_agent: 'npm/6.9.0 node/v10.16.3 linux x64',
Step #1 - "run_test_coverage": '{"_":["mocha"],"reporter":["lcov","text"],"r":["lcov","text"],"cwd":"/workspace/API/ground_truth_trigger","temp-dir":"./.nyc_output","t":"./.nyc_output","tempDir":"./.nyc_output","exclude":["coverage/**","packages/*/test{,s}/**","**/*.d.ts","test{,s}/**","test{,-*}.{js,cjs,mjs,ts}","**/*{.,-}test.{js,cjs,mjs,ts}","**/__tests__/**","**/{ava,nyc}.config.{js,cjs,mjs}","**/jest.config.{js,cjs,mjs,ts}","**/{karma,rollup,webpack}.config.js","**/{babel.config,.eslintrc,.mocharc}.{js,cjs}"],"x":["coverage/**","packages/*/test{,s}/**","**/*.d.ts","test{,s}/**","test{,-*}.{js,cjs,mjs,ts}","**/*{.,-}test.{js,cjs,mjs,ts}","**/__tests__/**","**/{ava,nyc}.config.{js,cjs,mjs}","**/jest.config.{js,cjs,mjs,ts}","**/{karma,rollup,webpack}.config.js","**/{babel.config,.eslintrc,.mocharc}.{js,cjs}"],"exclude-node-modules":true,"excludeNodeModules":true,"include":[],"n":[],"extension":[".js",".cjs",".mjs",".ts",".tsx",".jsx"],"e":[".js",".cjs",".mjs",".ts",".tsx",".jsx"],"ignore-class-methods":[],"ignoreClassMethods":[],"auto-wrap":true,"autoWrap":true,"es-modules":true,"esModules":true,"parser-plugins":["asyncGenerators","bigInt","classProperties","classPrivateProperties","dynamicImport","importMeta","objectRestSpread","optionalCatchBinding"],"parserPlugins":["asyncGenerators","bigInt","classProperties","classPrivateProperties","dynamicImport","importMeta","objectRestSpread","optionalCatchBinding"],"compact":true,"preserve-comments":true,"preserveComments":true,"produce-source-map":true,"produceSourceMap":true,"source-map":true,"sourceMap":true,"require":[],"i":[],"instrument":true,"exclude-after-remap":true,"excludeAfterRemap":true,"branches":0,"functions":0,"lines":90,"statements":0,"per-file":false,"perFile":false,"check-coverage":false,"checkCoverage":false,"report-dir":"coverage","reportDir":"coverage","show-process-tree":false,"showProcessTree":false,"skip-empty":false,"skipEmpty":false,"skip-full":false,"skipFull":false,"silent":false,"s":false,"all":false,"a":false,"eager":false,"cache":true,"c":true,"babel-cache":false,"babelCache":false,"use-spawn-wrap":false,"useSpawnWrap":false,"hook-require":true,"hookRequire":true,"hook-run-in-context":false,"hookRunInContext":false,"hook-run-in-this-context":false,"hookRunInThisContext":false,"clean":true,"in-place":false,"inPlace":false,"exit-on-error":false,"exitOnError":false,"delete":false,"complete-copy":false,"completeCopy":false,"$0":"node_modules/.bin/nyc","instrumenter":"./lib/instrumenters/istanbul"}',
Step #1 - "run_test_coverage": NYC_CWD: '/workspace/API/ground_truth_trigger',
Step #1 - "run_test_coverage": NODE_OPTIONS:
Step #1 - "run_test_coverage": ' --require /workspace/API/ground_truth_trigger/node_modules/node-preload/preload-path/node-preload.js',
Step #1 - "run_test_coverage": NODE_PRELOAD_904597faf3dd793b123e0cc47c7e6f55e1b18fb4:
Step #1 - "run_test_coverage": '/workspace/API/ground_truth_trigger/node_modules/nyc/lib/register-env.js:/workspace/API/ground_truth_trigger/node_modules/nyc/lib/wrap.js',
Step #1 - "run_test_coverage": NYC_PROCESS_ID: '2403b1ad-d5b2-4715-b9de-abbb54f424cf' }
Step #1 - "run_test_coverage":
Step #1 - "run_test_coverage": Error: A bucket name is needed to use Cloud Storage.
Step #1 - "run_test_coverage": at Storage.bucket (/workspace/API/ground_truth_trigger/node_modules/@google-cloud/storage/build/src/storage.js:151:19)
Step #1 - "run_test_coverage": at /workspace/API/ground_truth_trigger/src/index.js:4:48
Could you please help!

You need to remove the spaces before and after the =:
env: ['BUCKET_NAME=dummycblbucket', 'AUDIT_BUCKET_NAME=dummyAuditbucket']
You can check the value in Cloud Build by:
- echoing the env var with echo $$BUCKET_NAME (the double $ is required to tell Cloud Build not to replace it with a substitution variable), or
- using the printenv command.
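For example, a minimal debug step along these lines (a sketch; the bucket names are just the example values from the question):

steps:
- name: 'node:10.16.3'
  entrypoint: bash
  env: ['BUCKET_NAME=dummycblbucket', 'AUDIT_BUCKET_NAME=dummyAuditbucket']
  args:
    - '-c'
    - |
      echo $$BUCKET_NAME
      printenv | grep BUCKET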

Related

Pre-commit Pylint "exit code: 32" upon committing, no issues on `run --all-files`

When I run pre-commit run --all-files, all goes well; when I try to commit, pylint throws an error with exit code 32, followed by the list of usage options. The only files changed are .py files:
git status
On branch include-gitlab-arg
Your branch is up to date with 'origin/include-gitlab-arg'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        renamed:  code/project1/src/Main.py -> code/project1/src/GitLab/GitLab_runner_token_getter.py
        renamed:  code/project1/src/get_gitlab_runner_token.py -> code/project1/src/GitLab/get_gitlab_runner_token.py
        modified: code/project1/src/__main__.py
        modified: code/project1/src/control_website.py
        deleted:  code/project1/src/get_website_controller.py
        modified: code/project1/src/helper.py
Error Output:
The git commit -m "some change." command yields the following pre-commit error:
pylint...................................................................Failed
- hook id: pylint
- exit code: 32

usage: pylint [options]

optional arguments:
  -h, --help            show this help message and exit

Commands:
  Options which are actually commands. Options in this group are mutually exclusive.

  --rcfile RCFILE
whereas pre-commit run --all-files passes.
And the .pre-commit-config.yaml contains:
# This file specifies which checks are performed by the pre-commit service.
# The pre-commit service prevents people from pushing code to git that is not
# up to standards. # The reason mirrors are used instead of the actual
# repositories for e.g. black and flake8, is because those repositories also
# need to contain a pre-commit hook file, which they often don't by default.
# So to resolve that, a mirror is created that includes such a file.

default_language_version:
  python: python3.8. # or python3

repos:
  # Test if the python code is formatted according to the Black standard.
  - repo: https://github.com/Quantco/pre-commit-mirrors-black
    rev: 22.6.0
    hooks:
      - id: black-conda
        args:
          - --safe
          - --target-version=py36

  # Test if the python code is formatted according to the flake8 standard.
  - repo: https://github.com/Quantco/pre-commit-mirrors-flake8
    rev: 5.0.4
    hooks:
      - id: flake8-conda
        args: ["--ignore=E501,W503,W504,E722,E203"]

  # Test if the import statements are sorted correctly.
  - repo: https://github.com/PyCQA/isort
    rev: 5.10.1
    hooks:
      - id: isort
        args: ["--profile", "black", --line-length=79]

  ## Test if the variable typing is correct. (Variable typing is when you say:
  ## def is_larger(nr: int) -> bool: instead of def is_larger(nr). It makes
  ## it explicit what type of input and output a function has.
  ## - repo: https://github.com/python/mypy
  # - repo: https://github.com/pre-commit/mirrors-mypy
  #### - repo: https://github.com/a-t-0/mypy
  #   rev: v0.982
  #   hooks:
  #     - id: mypy

  ## Tests if there are spelling errors in the code.
  # - repo: https://github.com/codespell-project/codespell
  #   rev: v2.2.1
  #   hooks:
  #     - id: codespell

  # Performs static code analysis to check for programming errors.
  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: pylint
        language: system
        types: [python]
        args:
          [
            "-rn", # Only display messages
            "-sn", # Don't display the score
            "--ignore-long-lines", # Ignores long lines.
          ]

  # Runs additional tests that are created by the pre-commit software itself.
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      # Check user did not add large files.
      - id: check-added-large-files
      # Check if `.py` files are written in valid Python syntax.
      - id: check-ast
      # Require literal syntax when initializing empty or zero Python builtin types.
      - id: check-builtin-literals
      # Checks if there are filenames that would conflict if case is changed.
      - id: check-case-conflict
      # Checks if the Python functions have docstrings.
      - id: check-docstring-first
      # Checks if any `.sh` files have a shebang like #!/bin/bash
      - id: check-executables-have-shebangs
      # Verifies json format of any `.json` files in repo.
      - id: check-json
      # Checks if there are any existing merge conflicts caused by the commit.
      - id: check-merge-conflict
      # Checks for symlinks which do not point to anything.
      - id: check-symlinks
      # Checks if xml files are formatted correctly.
      - id: check-xml
      # Checks if .yml files are valid.
      - id: check-yaml
      # Checks if debugger imports are performed.
      - id: debug-statements
      # Detects symlinks changed to regular files with content path symlink was pointing to.
      - id: destroyed-symlinks
      # Checks if you don't accidentally push a private key.
      - id: detect-private-key
      # Replaces double quoted strings with single quoted strings.
      # This is not compatible with Python Black.
      #- id: double-quote-string-fixer
      # Makes sure files end in a newline and only a newline.
      - id: end-of-file-fixer
      # Removes UTF-8 byte order marker.
      - id: fix-byte-order-marker
      # Add <# -*- coding: utf-8 -*-> to the top of python files.
      #- id: fix-encoding-pragma
      # Checks if there are different line endings, like \n and crlf.
      - id: mixed-line-ending
      # Asserts `.py` files in folder `/test/` (by default:) end in `_test.py`.
      - id: name-tests-test
        # Override default to check if `.py` files in `/test/` START with `test_`.
        args: ['--django']
      # Ensures JSON files are properly formatted.
      - id: pretty-format-json
        args: ['--autofix']
      # Sorts entries in requirements.txt and removes incorrect pkg-resources entries.
      - id: requirements-txt-fixer
      # Sorts simple YAML files which consist only of top-level keys.
      - id: sort-simple-yaml
      # Removes trailing whitespaces at end of lines of .. files.
      - id: trailing-whitespace

  - repo: https://github.com/PyCQA/autoflake
    rev: v1.7.0
    hooks:
      - id: autoflake
        args: ["--in-place", "--remove-unused-variables", "--remove-all-unused-imports", "--recursive"]
        name: AutoFlake
        description: "Format with AutoFlake"
        stages: [commit]

  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.4
    hooks:
      - id: bandit
        name: Bandit
        stages: [commit]

  # Enforces formatting style in Markdown (.md) files.
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.16
    hooks:
      - id: mdformat
        additional_dependencies:
          - mdformat-toc
          - mdformat-gfm
          - mdformat-black

  - repo: https://github.com/MarcoGorelli/absolufy-imports
    rev: v0.3.1
    hooks:
      - id: absolufy-imports
        files: '^src/.+\.py$'
        args: ['--never', '--application-directories', 'src']

  - repo: https://github.com/myint/docformatter
    rev: v1.5.0
    hooks:
      - id: docformatter

  - repo: https://github.com/pre-commit/pygrep-hooks
    rev: v1.9.0
    hooks:
      - id: python-use-type-annotations
      - id: python-check-blanket-noqa
      - id: python-check-blanket-type-ignore

  # Updates the syntax of `.py` files to the specified python version.
  # It is not compatible with: pre-commit hook: fix-encoding-pragma
  - repo: https://github.com/asottile/pyupgrade
    rev: v3.0.0
    hooks:
      - id: pyupgrade
        args: [--py38-plus]

  - repo: https://github.com/markdownlint/markdownlint
    rev: v0.11.0
    hooks:
      - id: markdownlint
With pyproject.toml:
# This is used to configure the black, isort and mypy such that the packages don't conflict.
# This file is read by the pre-commit program.
[tool.black]
line-length = 79
include = '\.pyi?$'
exclude = '''
/(
\.git
| \.mypy_cache
| build
| dist
)/
'''
[tool.coverage.run]
# Due to a strange bug with xml output of coverage.py not writing the full-path
# of the sources, the full root directory is presented as a source alongside
# the main package. As a result any importable Python file/package needs to be
# included in the omit
source = [
"foo",
".",
]
# Excludes the following directories from the coverage report
omit = [
"tests/*",
"setup.py",
]
[tool.isort]
profile = "black"
[tool.mypy]
ignore_missing_imports = true
[tool.pylint.basic]
bad-names=[]
[tool.pylint.messages_control]
# Example: Disable error on needing a module-level docstring
disable=[
"import-error",
"invalid-name",
"fixme",
]
[tool.pytest.ini_options]
# Runs coverage.py through use of the pytest-cov plugin
# An xml report is generated and results are output to the terminal
# TODO: Disable this line to disable CLI coverage reports when running tests.
#addopts = "--cov --cov-report xml:cov.xml --cov-report term"
# Sets the minimum allowed pytest version
minversion = 5.0
# Sets the path where test files are located (Speeds up Test Discovery)
testpaths = ["tests"]
And setup.py:
"""This file is to allow this repository to be published as a pip module, such
that people can install it with: `pip install networkx-to-lava-nc`.

You can ignore it.
"""
import setuptools

with open("README.md", encoding="utf-8") as fh:
    long_description = fh.read()

setuptools.setup(
    name="networkx-to-lava-nc-snn",
    version="0.0.1",
    author="a-t-0",
    author_email="author@example.com",
    description="Converts networkx graphs representing spiking neural networks"
    + " (SNN)s of LIF neurons, into runnable Lava SNNs.",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/a-t-0/networkx-to-lava-nc",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: AGPL3",
        "Operating System :: OS Independent",
    ],
)
Question
How can I resolve the pylint usage error to ensure the commit passes pre-commit?
The issue was caused by the "--ignore-long-lines" argument in the pylint hook of the .pre-commit-config.yaml: pylint does not recognize that option, so it exits with the usage message and exit code 32. I assume it was meant to match the line-length settings for black and in pyproject.toml, which are both set to 79.
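For reference, a sketch of the corrected local hook, identical to the one above minus the unrecognized argument:

- repo: local
  hooks:
    - id: pylint
      name: pylint
      entry: pylint
      language: system
      types: [python]
      args:
        [
          "-rn", # Only display messages
          "-sn", # Don't display the score
        ]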

GitLab CI rules not working with extends and individual rules

Below are two jobs in the build stage.
By default, both share some common conditions, which are factored into the .ifawsdeploy template and pulled in with the extends keyword.
Only one of them should run: if the variable $ADMIN_SERVER_IP is provided, then connect-admin-server should run, and that part works.
If no value is provided for $ADMIN_SERVER_IP, then create-admin-server should run, but it is not running.
.ifawsdeploy:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

variables:
  TEST_CREATE_ADMIN:
    #value: aws
    description: "Platform, currently aws only"
  SUB_PLATFORM:
    value: aws
    description: "Platform, currently aws only"
  REGION:
    value: "us-west-2"
    description: "region where to deploy company"
  PACKAGEURL:
    value: "http://somerpmurl.x86_64.rpm"
    description: "company rpm file url"
  ACCOUNT_NAME:
    value: "testsubaccount"
    description: "Account name of sub account to refer in the deployment, no need to match in AWS"
  ROLE_ARN:
    value: "arn:aws:iam::491483064167:role/uat"
    description: "ROLE ARN of the user account assuming: aws sts get-caller-identity"
  tfenv_version: "1.1.9"
  DEV_PUB_KEY:
    description: "Optional public key file to add access to admin server"
  ADMIN_SERVER_IP:
    description: "Existing Admin Server IP Address"
  ADMIN_SERVER_SSH_KEY:
    description: "Existing Admin Server SSH_KEY PEM content"

# export variables below will cause the terraform to use the root account instead of the one specified in tfvars file
.configure_aws_cli: &configure_aws_cli
  - aws configure set region $REGION
  - aws configure set aws_access_key_id $AWS_FULL_STS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_FULL_STS_ACCESS_KEY_SECRET
  - aws sts get-caller-identity
  - aws configure set source_profile default --profile $ACCOUNT_NAME
  - aws configure set role_arn $ROLE_ARN --profile $ACCOUNT_NAME
  - aws sts get-caller-identity --profile $ACCOUNT_NAME
  - aws configure set region $REGION --profile $ACCOUNT_NAME

.copy_remote_log: &copy_remote_log
  - if [ -e outfile ]; then rm outfile; fi
  - copy_command="$(cat $CI_PROJECT_DIR/scp_command.txt)"
  - new_copy_command=${copy_command/"%s"/"outfile"}
  - new_copy_command=${new_copy_command/"~"/"/home/ec2-user/outfile"}
  - echo $new_copy_command
  - new_copy_command=$(echo "$new_copy_command" | sed s'/\([^.]*\.[^ ]*\) \([^ ]*\) \(.*\)/\1 \3 \2/')
  - echo $new_copy_command
  - sleep 10
  - eval $new_copy_command

.check_remote_log: &check_remote_log
  - sleep 10
  - grep Error outfile || true
  - sleep 10
  - returnCode=$(grep -c Error outfile) || true
  - echo "Return code received $returnCode"
  - if [ $returnCode -ge 1 ]; then exit 1; fi
  - echo "No errors"

.prepare_ssh_key: &prepare_ssh_key
  - echo $ADMIN_SERVER_SSH_KEY > $CI_PROJECT_DIR/ssh_key.pem
  - cat ssh_key.pem
  - sed -i -e 's/-----BEGIN RSA PRIVATE KEY-----/-bk-/g' ssh_key.pem
  - sed -i -e 's/-----END RSA PRIVATE KEY-----/-ek-/g' ssh_key.pem
  - perl -p -i -e 's/\s/\n/g' ssh_key.pem
  - sed -i -e 's/-bk-/-----BEGIN RSA PRIVATE KEY-----/g' ssh_key.pem
  - sed -i -e 's/-ek-/-----END RSA PRIVATE KEY-----/g' ssh_key.pem
  - cat ssh_key.pem
  - chmod 400 ssh_key.pem

connect-admin-server:
  stage: build
  allow_failure: true
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != "" && $ADMIN_SERVER_SSH_KEY && $ADMIN_SERVER_SSH_KEY != ""'
  extends:
    - .ifawsdeploy
  script:
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - *prepare_ssh_key
    - echo "ssh -i ssh_key.pem ec2-user@${ADMIN_SERVER_IP}" > $CI_PROJECT_DIR/ssh_command.txt
    - echo "scp -q -i ssh_key.pem %s ec2-user@${ADMIN_SERVER_IP}:~" > $CI_PROJECT_DIR/scp_command.txt
    - test_pre_command="$(cat "$CI_PROJECT_DIR/ssh_command.txt") -o StrictHostKeyChecking=no"
    - echo $test_pre_command
    - test_command="$(echo $test_pre_command | sed -r 's/(ssh )(.*)/\1-tt \2/')"
    - echo $test_command
    - echo "sudo yum install -yq $PACKAGEURL 2>&1 | tee outfile ; exit 0" | $test_command
    - *copy_remote_log
    - echo "Now checking log file for returnCode"
    - *check_remote_log
  artifacts:
    untracked: true
    when: always
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
  after_script:
    - cat $CI_PROJECT_DIR/ssh_key.pem
    - cat $CI_PROJECT_DIR/ssh_command.txt
    - cat $CI_PROJECT_DIR/scp_command.txt

create-admin-server:
  stage: build
  allow_failure: false
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
  extends:
    - .ifawsdeploy
  script:
    - echo "admin server $ADMIN_SERVER_IP"
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - *configure_aws_cli
    - aws sts get-caller-identity --profile $ACCOUNT_NAME # to check whether updated correctly or not
    - git clone "https://project-n-setup:$(echo $PERSONAL_GITLAB_TOKEN)@gitlab.com/company-oss/project-n-setup.git"
    # Install tfenv
    - git clone https://github.com/tfutils/tfenv.git ~/.tfenv
    - ln -s ~/.tfenv /root/.tfenv
    - ln -s ~/.tfenv/bin/* /usr/local/bin
    # Install terraform 1.1.9 through tfenv
    - tfenv install $tfenv_version
    - tfenv use $tfenv_version
    # Copy the tfvars temp file to the terraform setup directory
    - cp .gitlab/admin_server.temp_tfvars project-n-setup/$SUB_PLATFORM/
    - cd project-n-setup/$SUB_PLATFORM/
    - envsubst < admin_server.temp_tfvars > admin_server.tfvars
    - rm -rf .terraform || exit 0
    - cat ~/.aws/config
    - terraform init -input=false
    - terraform apply -var-file=admin_server.tfvars -input=false -auto-approve
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - terraform output -raw ssh_key > $CI_PROJECT_DIR/ssh_key.pem
    - terraform output -raw ssh_command > $CI_PROJECT_DIR/ssh_command.txt
    - terraform output -raw scp_command > $CI_PROJECT_DIR/scp_command.txt
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/terraform.tfstate $CI_PROJECT_DIR
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/admin_server.tfvars $CI_PROJECT_DIR
  artifacts:
    untracked: true
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
      - "$CI_PROJECT_DIR/terraform.tfstate"
      - "$CI_PROJECT_DIR/admin_server.tfvars"
How to fix that?
I tried the step below, from suggestions in the comments section.
.generalgrabclustertrigger:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: 'CI_PIPELINE_SOURCE == "triggered"'
      when: never
And included the above in extends of a job.
destroy-admin-server:
  stage: cleanup
  extends:
    - .ifteardownordestroy
  allow_failure: true
  interruptible: false
But I am getting a syntax error in the .ifteardownordestroy part:
jobs:destroy-admin-server:rules:rule if invalid expression syntax
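A side note on that specific error: the rule expression references CI_PIPELINE_SOURCE without the $ prefix, which GitLab rejects as invalid expression syntax. A sketch of the corrected template, assuming everything else stays the same:

.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: '$CI_PIPELINE_SOURCE == "triggered"'
      when: never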
You are overriding rules: in your job that extends .ifawsdeploy. rules: are not combined in this case -- the definition of rules: in the job takes complete precedence.
Take for example the following configuration:
.template:
  rules:
    - one
    - two

myjob:
  extends: .template
  rules:
    - a
    - b
In the above example, the myjob job only has rules a and b in effect. Rules one and two are completely ignored because they are overridden in the job configuration.
Instead of using extends:, you can use !reference to preserve and combine rules. You can also use YAML anchors if you want.
create-admin-server:
  rules:
    - !reference [.ifawsdeploy, rules]
    - ... # your additional rules
"If no value provided to $ADMIN_SERVER_IP then create_admin_server should run"
Lastly, pay special attention to your rules:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
In this case, there is no rule that ever allows the job to run. You need either a condition that evaluates to true, or a default case (an item with no if: condition), for the job to run.
To get the behavior you expect, you probably want your default case to be on_success:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: on_success
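Putting the two points together, a sketch of how the job could keep the template's conditions while skipping when an IP is provided (here the !reference supplies the positive case, so no separate default is needed):

create-admin-server:
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
    - !reference [.ifawsdeploy, rules]
  # ... rest of the job as before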
You can change your rules to:

rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: always

or:

rules:
  - if: '$ADMIN_SERVER_IP == ""'
    when: always
I have a sample here: try-rules-stackoverflow-72545625 - GitLab, and the pipeline records: Pipeline no value - GitLab, Pipeline has value - GitLab.

Bash: How to execute paths

I have a job in my gitlab-cicd.yml file:
unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - *tests_variables_export
    - mvn ${MAVEN_CLI_OPTS} clean test
    - cat ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
    - cat ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
  artifacts:
    expose_as: 'code coverage'
    paths:
      - ${CI_PROJECT_DIR}/soap-service/target/surefire-reports/
      - ${CI_PROJECT_DIR}/rest-service/target/surefire-reports/
      - ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
      - ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
And I want to change it to this one:
unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - *tests_variables_export
    - mvn ${MAVEN_CLI_OPTS} clean test
    - cat ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
    - cat ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
  artifacts:
    expose_as: 'code coverage'
    paths:
      - *resolve_paths
I tried to use this bash script:
.resolve_paths: &resolve_paths |-
  if [ "${MODULE_FIRST}" != "UNKNOWN" ]; then
    echo "- ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/surefire-reports/"
    echo "- ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/site/jacoco/index.html"
  fi
  if [ "${MODULE_SECOND}" != "UNKNOWN" ]; then
    echo "- ${CI_PROJECT_DIR}/${MODULE_SECOND}/target/surefire-reports/"
    echo "- ${CI_PROJECT_DIR}/${MODULE_SECOND}/target/site/jacoco/index.html"
  fi
And right now I'm getting this error in the pipeline:
WARNING: if [ "rest-service" != "UNKNOWN" ]; then\n echo "- /builds/minv/common/testcommons/taf-api-support/rest-service/target/surefire-reports/"\n echo "- /builds/minv/common/testcommons/taf-api-support/rest-service/target/site/jacoco/index.html"\nfi\nif [ "soap-service" != "UNKNOWN" ]; then\n echo "- /builds/minv/common/testcommons/taf-api-support/soap-service/target/surefire-reports/"\n echo "- /builds/minv/common/testcommons/taf-api-support/soap-service/target/site/jacoco/index.html"\nfi: no matching files ERROR: No files to upload
Can I execute [sic] paths using bash script like this?
No, scripts cannot alter the current YAML, particularly not if you specify the script (which is just a string) in a place where it is interpreted as a path.
You could trigger a dynamically generated YAML:
generate:
stage: build
script:
- |
exec > generated.yml
echo ".resolve_paths: &resolve_paths"
for module in "${MODULE_FIRST}" "${MODULE_SECOND}"; do
[[ "$module" = UNKNOWN ]] && continue
echo "- ${CI_PROJECT_DIR}/${module}/target/surefire-reports/"
echo "- ${CI_PROJECT_DIR}/${module}/target/site/jacoco/index.html"
done
sed '1,/^\.\.\. *$/d' "${CI_CONFIG_PATH}"
artifacts:
paths:
- generated.yml
run:
stage: deploy
trigger:
include:
- artifact: generated.yml
job: generate
...
# Start of actual CI. When this runs, there will be an
# auto-generated job `.resolve_paths: &resolve_paths`.
# Put the rest of your CI (e.g. `unit_test:`) here.
But there are so many extensions in GitLab's YAML that you will likely find a much better solution, depending on what you plan to do with .resolve_paths. Maybe have a look at:
- artifacts:exclude
- additional jobs with rules: (see the sketch below)
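For instance, a rough sketch of the second idea: split the artifact collection into one job per module and enable each with a rule, so no dynamic paths are needed. The template and job names here are made up for illustration:

.collect_coverage:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - mvn ${MAVEN_CLI_OPTS} clean test
  artifacts:
    expose_as: 'code coverage'
    paths:
      - ${CI_PROJECT_DIR}/${MODULE}/target/surefire-reports/
      - ${CI_PROJECT_DIR}/${MODULE}/target/site/jacoco/index.html

unit_test_first_module:
  extends: .collect_coverage
  variables:
    MODULE: ${MODULE_FIRST}
  rules:
    - if: '$MODULE_FIRST != "UNKNOWN"'

unit_test_second_module:
  extends: .collect_coverage
  variables:
    MODULE: ${MODULE_SECOND}
  rules:
    - if: '$MODULE_SECOND != "UNKNOWN"'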

CloudBuild jobs not executed in parallel

I am learning Cloud Build and understand that I can use waitFor to influence the order in which my build steps run. So I created a test build: job1 includes some sleep time to simulate a long-running job, job2 just echoes something, and done waits for job1 & job2. I have this package.json:
{
  "scripts": {
    "job1": "echo \"[job1] Starting\" && sleep 5 && echo \"[job1] ...\" && sleep 2 && echo \"[job1] Done\" && exit 0",
    "job2": "echo \"[job2] Hello from NPM\" && exit 0",
    "done": "echo \"DONE DONE DONE!\" && exit 0"
  }
}
Job 1 simulates a long-running job, and I was hoping job 2 would execute in parallel, but the output shows it does not. Does Cloud Build run only one step at a time?
cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'job1']
  id: 'job1'
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'job2']
  id: 'job2'
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'done']
  waitFor: ['job1', 'job2']
Output
Operation completed over 1 objects/634.0 B.
BUILD
Starting Step #0 - "job1"
Step #0 - "job1": Already have image (with digest): gcr.io/cloud-builders/npm
Step #0 - "job1":
Step #0 - "job1": > learn-gcp@1.0.0 job1 /workspace
Step #0 - "job1": > echo "[job1] Starting" && sleep 5 && echo "[job1] ..." && sleep 2 && echo "[job1] Done" && exit 0
Step #0 - "job1":
Step #0 - "job1": [job1] Starting
Step #0 - "job1": [job1] ...
Step #0 - "job1": [job1] Done
Finished Step #0 - "job1"
Starting Step #1 - "job2"
Step #1 - "job2": Already have image (with digest): gcr.io/cloud-builders/npm
Step #1 - "job2":
Step #1 - "job2": > learn-gcp@1.0.0 job2 /workspace
Step #1 - "job2": > echo "[job2] Hello from NPM" && exit 0
Step #1 - "job2":
Step #1 - "job2": [job2] Hello from NPM
Finished Step #1 - "job2"
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/npm
Step #2:
Step #2: > learn-gcp@1.0.0 done /workspace
Step #2: > echo "DONE DONE DONE!" && exit 0
Step #2:
Step #2: DONE DONE DONE!
Finished Step #2
PUSH
DONE
If you wanted job2 to be executed concurrently with job1, you should have added waitFor: ['-'] to the job2 step in your cloudbuild.yaml. As stated in the official documentation:

"If no values are provided for waitFor, the build step waits for all prior build steps in the build request to complete successfully before running. To run a build step immediately at build time, use - in the waitFor field. The order of the build steps in the steps field relates to the order in which the steps are executed. Steps will run serially or concurrently based on the dependencies defined in their waitFor fields."

The documentation also includes an example of how to run two build steps in parallel.
In case you want job2 to run alongside job1, you should have something like this:
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'job1']
  id: 'job1'
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'job2']
  id: 'job2'
  waitFor: ['-'] # The '-' indicates that this step begins immediately.
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'done']
  waitFor: ['job1', 'job2']
Also note that the maximum number of concurrent builds you can use is ten.

How to generate a file with env vars in a job in Circleci config.yml

I'm trying to generate an env.ts file in my CircleCI config file with my CircleCI environment variables. I tried the code below:
steps:
  - run:
      name: Setup Environment Variables
      command:
        echo "export const env = {
        jwtSecret: ${jwtSecret},
        gqlUrl: ${gqlUrl},
        engineAPIToken: ${engineAPIToken},
        mongodb: ${mongodb},
        mandrill: ${mandrill},
        gcpKeyFilename: ${gcpKeyFilename},
        demo: ${demo},
        nats: ${NATS},
        usernats: ${usernats},
        passnats: ${passnats} };" | base64 --wrap=0 > dist/env.ts
but it outputs this:
#!/bin/sh -eo pipefail
# Unable to parse YAML
# mapping values are not allowed here
# in 'string', line 34, column 24:
# jwtSecret: ${jwtSecret},
# ^
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
I had forgotten to write the pipe (|) after command:. The fixed version:
steps:
  - run:
      name: Setup Environment Variables
      command: |
        echo "export const env = {
        jwtSecret: '${jwtSecret}',
        gqlUrl: '${gqlUrl}',
        engineAPIToken: '${engineAPIToken}',
        mongodb: '${mongodb}',
        mandrill: '${mandrill}',
        gcpKeyFilename: '${gcpKeyFilename}',
        demo: ${demo}, // it's a boolean
        nats: '${NATS}',
        usernats: '${usernats}',
        passnats: '${passnats}'
        };" > dist/env.ts
Note: I had also forgotten to add quotes ('') around my variables.
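As a follow-up sanity check, one could add a step like this (hypothetical; it assumes typescript is installed as a devDependency so npx can resolve tsc):

- run:
    name: Check generated env.ts
    command: |
      cat dist/env.ts
      npx tsc --noEmit dist/env.ts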
