I created a YAML pipeline that uses Terraform.
It uses a script task and returns the web app name as an output variable:
steps:
- script: |
    [......]
    terraform apply -input=false -auto-approve
    # Get the App Service name for the dev environment.
    WebAppNameDev=$(terraform output appservice_name_dev)
    # Write the WebAppNameDev variable to the pipeline.
    echo "##vso[task.setvariable variable=WebAppNameDev;isOutput=true]$WebAppNameDev"
  name: 'RunTerraform'
The task works fine, but when I deploy the web app it fails, because the variable WebAppNameDev seems to contain double quotes.
- task: AzureWebApp@1
  displayName: 'Azure App Service Deploy: website'
  inputs:
    azureSubscription: 'MySubscription'
    appName: $(WebAppNameDev)
    package: '$(Pipeline.Workspace)/drop/*.zip'
The error looks like:
Got service connection details for Azure App Service:'"spikeapp-dev-6128"'
##[error]Error: Resource '"spikeapp-dev-6128"' doesn't exist. Resource should exist before deployment.
How can I remove the double quotes or fix the Terraform output?
I solved this by adding the --raw parameter to terraform output.
WebAppNameDev=$(terraform output --raw appservice_name_dev)
ref. https://www.terraform.io/docs/cli/commands/output.html
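For reference, here is a trimmed sketch of the step above with only that flag changed; --raw prints the bare string instead of the quoted form that the default output produces, so the value written to WebAppNameDev no longer carries the extra double quotes:

steps:
- script: |
    terraform apply -input=false -auto-approve
    # --raw prints the plain string value, with no surrounding quotes
    WebAppNameDev=$(terraform output --raw appservice_name_dev)
    echo "##vso[task.setvariable variable=WebAppNameDev;isOutput=true]$WebAppNameDev"
  name: 'RunTerraform'

(An alternative is terraform output -json piped through jq -r, but --raw is the simpler fix for a single string value.)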
We have a GitLab pipeline which I am trying to configure to use a different Azure subscription per environment, without much luck.
Basically, what I need to be able to do is set the environment variables ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID and ARM_TENANT_ID to different values depending on the environment being built.
In the CI/CD settings I have variables set for development_ARM_SUBSCRIPTION_ID, test_ARM_SUBSCRIPTION_ID, etc., the idea being that I assign the values from these variables to the ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID and ARM_TENANT_ID variables in the pipeline.
This is what my pipeline looks like:
stages:
  - infrastructure-validate
  - infrastructure-deploy
  - infrastructure-destroy

variables:
  DESTROY_INFRA: "false"
  development_ARM_SUBSCRIPTION_ID: $development_ARM_SUBSCRIPTION_ID
  development_ARM_TENANT_ID: $development_ARM_TENANT_ID
  development_ARM_CLIENT_ID: $development_ARM_CLIENT_ID
  development_ARM_CLIENT_SECRET: $development_ARM_CLIENT_SECRET

image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - rm -rf .terraform
  - terraform --version
  - terraform init
.terraform-validate:
  script:
    - export ARM_SUB_ID=${CI_ENVIRONMENT_NAME}_ARM_SUBSCRIPTION_ID
    - export ARM_SUBSCRIPTION_ID=${!ARM_SUB_ID}
    - export ARM_CLI_ID=${CI_ENVIRONMENT_NAME}_ARM_CLIENT_ID
    - export ARM_CLIENT_ID=${!ARM_CLI_ID}
    - export ARM_TEN_ID=${CI_ENVIRONMENT_NAME}_ARM_TENANT_ID
    - export ARM_TENANT_ID=${!ARM_TEN_ID}
    - export ARM_CLI_SECRET=${CI_ENVIRONMENT_NAME}_ARM_CLIENT_SECRET
    - export ARM_CLIENT_SECRET=${!ARM_CLI_SECRET}
    - echo $development_ARM_SUBSCRIPTION_ID
    - echo ${ARM_SUBSCRIPTION_ID}
    - terraform workspace select ${CI_ENVIRONMENT_NAME}
    - terraform validate
    - terraform plan -out "terraform-plan-file"
  only:
    variables:
      - $DESTROY_INFRA != "true"
development-validate-and-plan-terraform:
  stage: infrastructure-validate
  environment: development
  extends: .terraform-validate
  only:
    refs:
      - main
      - develop
  artifacts:
    paths:
      - terraform-plan-file
The variable substitution works fine when I test locally, but in the pipeline it fails with:
/bin/sh: eval: $ export ARM_SUBSCRIPTION_ID=${!ARM_SUB_ID}
line 139: syntax error: bad substitution
I think the problem is that the terraform image does not have bash available, only sh, but I can't for the life of me work out how to do the same substitution in sh. If anyone has any suggestions, or knows a better way of using different Azure subscriptions for different environments in the pipeline, I would really appreciate it.
I would define different jobs for each environment that extend your main .terraform-validate job template, and define the environment variables on that job. This way you don't have to do the indirect substitution that seems to be giving you trouble. That would look something like this:
.terraform-validate:
  stage: infrastructure-validate
  script:
    - echo ${ARM_SUBSCRIPTION_ID}
    - terraform workspace select ${CI_ENVIRONMENT_NAME}
    - terraform validate
    - terraform plan -out "terraform-plan-file"
  only:
    variables:
      - $DESTROY_INFRA != "true"
  artifacts:
    paths:
      - terraform-plan-file

development-validate-and-plan-terraform:
  extends: .terraform-validate
  environment: development
  only:
    refs:
      - main
      - develop
  variables:
    ARM_SUBSCRIPTION_ID: $development_ARM_SUBSCRIPTION_ID
    ARM_TENANT_ID: $development_ARM_TENANT_ID
    ARM_CLIENT_ID: $development_ARM_CLIENT_ID
    ARM_CLIENT_SECRET: $development_ARM_CLIENT_SECRET

production-validate-and-plan-terraform:
  extends: .terraform-validate
  environment: production
  only:
    refs:
      - main
  variables:
    ARM_SUBSCRIPTION_ID: $production_ARM_SUBSCRIPTION_ID
    ARM_TENANT_ID: $production_ARM_TENANT_ID
    ARM_CLIENT_ID: $production_ARM_CLIENT_ID
    ARM_CLIENT_SECRET: $production_ARM_CLIENT_SECRET
Then you define all the development_* and production_* vars in the GitLab CI/CD settings.
Note that I also moved the stage: infrastructure-validate and artifacts: ... directives to the template since I'd imagine they're the same for all environments.
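As an aside, the original ${!var} lookup is bash-only, and the hashicorp/terraform image is Alpine-based, so /bin/sh is BusyBox ash, which is why the pipeline reports "bad substitution". If you did want to keep the indirect lookup instead of per-environment jobs, the usual sh-compatible workaround is eval; a rough sketch, assuming the same <environment>_ARM_* naming scheme:

.terraform-validate:
  script:
    # POSIX sh has no ${!var}; build the variable name and let eval expand it
    - eval "export ARM_SUBSCRIPTION_ID=\$${CI_ENVIRONMENT_NAME}_ARM_SUBSCRIPTION_ID"
    - eval "export ARM_TENANT_ID=\$${CI_ENVIRONMENT_NAME}_ARM_TENANT_ID"
    - eval "export ARM_CLIENT_ID=\$${CI_ENVIRONMENT_NAME}_ARM_CLIENT_ID"
    - eval "export ARM_CLIENT_SECRET=\$${CI_ENVIRONMENT_NAME}_ARM_CLIENT_SECRET"
    - terraform workspace select ${CI_ENVIRONMENT_NAME}
    - terraform validate
    - terraform plan -out "terraform-plan-file"

That said, the per-environment jobs above keep the wiring explicit and avoid eval entirely, which is usually easier to maintain.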
I have a project where I'm using GitHub Actions for CI/CD. I have a shell script into which I would like to inject environment variables from my Actions YAML. Here is what I have so far:
- name: docker-push
  env:
    USER: joesan
    SOME_GITHUB_REPO_NAME: github-repo-name
    GH_REPO: github.com/$USER/$SOME_GITHUB_REPO_NAME
  run: |
    echo "Running sbt assembly"
    echo $GITHUB_REF
    echo "Pushing tag into Docker Registry"
    sh ./scripts/tag_deployment.sh
In the tag_deployment.sh, I'm trying to use these variables:
....
....
git clone https://${GH_REPO}
cd "${SOME_GITHUB_REPO_NAME}"
....
....
But when the action runs, I see that the variables are not substituted:
Cloning into '$SOME_GITHUB_REPO_NAME'...
fatal: unable to access 'https://github.com/$USER/$SOME_GITHUB_REPO_NAME/': The requested URL returned error: 400
How can I inject these environment variables properly into my shell script?
It was not so intuitive to understand from the GitHub documentation. I managed to fix this by moving the environment variables to the global (workflow-level) env block like this:
env:
  USER: joesan
  DEPLOYMENT_REPO_NAME: some-repo-name
And then referencing them in the step definition like this:
- name: docker-push
  env:
    GH_REPO: github.com/${{ env.USER }}/${{ env.DEPLOYMENT_REPO_NAME }}
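Put together, a minimal workflow sketch might look like the following (the trigger, job name, checkout step and script path are illustrative; USER and DEPLOYMENT_REPO_NAME come from the answer above). The key point is that ${{ env.* }} expressions are resolved by Actions itself, while inside run: (and inside the called script) the same values are available as ordinary environment variables:

name: deploy
on: push

env:
  USER: joesan
  DEPLOYMENT_REPO_NAME: some-repo-name

jobs:
  docker-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: docker-push
        env:
          GH_REPO: github.com/${{ env.USER }}/${{ env.DEPLOYMENT_REPO_NAME }}
        run: |
          # GH_REPO and DEPLOYMENT_REPO_NAME reach the script as normal
          # environment variables, so plain shell expansion works there.
          sh ./scripts/tag_deployment.sh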
I would like to use Secret Manager to store a credential for our Artifactory within a Cloud Build step. I have it working using a build similar to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']

availableSecrets:
  secretManager:
  - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
All great, no problems. I then try to improve it slightly to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']

availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a trigger-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, because it is a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID of a Secret Manager secret from within Cloud Build without using a trigger param?
For now, it's not possible to set a dynamic value in the secret field. I have already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more information to share, especially about availability.
EDIT 1 (January 22): Thanks to Seza443's comment, I tested again, and it now works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER) as well as with custom substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
    env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema. In the documentation they refer to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions
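For example, the same reference written with the default $PROJECT_NUMBER substitution (using the TEST-SECRET name from the question) would look like this; the secret resource path accepts either the project ID or the project number:

availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_NUMBER/secrets/TEST-SECRET/versions/1
    env: 'SECRET_VALUE'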
So I'm trying to run a bash script in my pipeline on Azure DevOps. Here is my code for it:
- task: Bash@3
  inputs:
    filePath: '../marvel-lcg-companion/hooks/az-emulator'
But as you can see, I receive this error when I run the pipeline:
##[error]ENOENT: no such file or directory, stat '/Users/runner/work/1/marvel-lcg-companion/hooks/az-emulator'
So it is not clear to me how to format the file path in my YAML file. Can you point me in the right direction? I also tried the glob version without any success:
**/hooks/az-emulator
UPDATE: my root folder is marvel-lcg-companion
First of all, please check what you have in your working directory by adding this:
- script: ls '$(System.DefaultWorkingDirectory)'
But if marvel-lcg-companion is a folder in the root of your repo (and you use a single repo), you should try:
- task: Bash@3
  inputs:
    filePath: '$(System.DefaultWorkingDirectory)/marvel-lcg-companion/hooks/az-emulator'
However, if marvel-lcg-companion is the name of your repo, then rather this:
- task: Bash@3
  inputs:
    filePath: '$(System.DefaultWorkingDirectory)/hooks/az-emulator'
I have a GitLab CI/CD pipeline, and it generally works OK.
My problem is that my testing takes more than 10 minutes and is not stable (yet...), so occasionally it randomly fails on a minor test that I don't care about.
Generally it works after a retry, but if I need an urgent deploy I have to wait another 10 minutes.
When we have an urgent bug, another 10 minutes is way too much time, so I am looking for a way to force the deploy even when the tests fail.
I have the following pseudo CI YAML scenario that I have failed to find a way to accomplish:
stages:
  - build
  - test
  - deploy

setup_and_build:
  stage: build
  script:
    - build.sh

test_branch:
  stage: test
  script:
    - test.sh

deploy:
  stage: deploy
  script:
    - deploy.sh
  only:
    - master
I'm looking for a way to deploy manually if the test phase failed,
but if I add when: manual to the deploy job, then the deploy never happens automatically.
So a flag like when: auto_or_manual_on_previous_fail would be great,
but currently there is no such flag in GitLab CI.
Do you have any idea for a workaround or a way to implement it?
Another approach would be to skip the tests in case of an emergency release.
For that, follow "Skipping Tests in GitLab CI" from Andi Scharfstein, and:
- add [skip tests] to the commit message triggering that emergency release
- check a variable on the test stage
That is:
.test-template: &test-template
  stage: tests
  except:
    variables:
      - $CI_COMMIT_MESSAGE =~ /\[skip[ _-]tests?\]/i
      - $SKIP_TESTS
As you can see above, we also included the variable $SKIP_TESTS in the except block of the template.
This is helpful when triggering pipelines manually from GitLab's web interface: set SKIP_TESTS to any value in the "Run pipeline" form and the test jobs are skipped.
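A test job would then just extend the template through the YAML anchor; a minimal sketch (job name and test command are illustrative):

unit-tests:
  <<: *test-template
  script:
    - ./run-tests.sh

With [skip tests] in the commit message, or SKIP_TESTS set when running the pipeline manually, this job is excluded and the pipeline moves straight on to the later stages.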
It's possible to control the when attribute of your deploy job by leveraging parent-child pipelines (GitLab 12.7 and above). This lets you decide whether the job in the child pipeline runs as manual or always.
Essentially, you will need to have a .gitlab-ci.yml with:
stages:
  - build
  - test
  - child-deploy
The child-deploy stage will be used to run the child pipeline, in which the deploy job will run with the desired attribute.
Your test job can generate the configuration for the deploy job as an artifact. For example, in the after_script section of your test, you can check the value of the built-in CI_JOB_STATUS variable to decide whether to write the child job as manual or always:
my_test:
  stage: test
  script:
    - echo "testing, exit 0 on success, exit 1 on failure"
  after_script:
    - if [ "$CI_JOB_STATUS" == "success" ]; then WHEN=always; else WHEN=manual; fi
    - |
      cat << EOF > deploy.yml
      stages:
        - deploy

      my_deploy:
        stage: deploy
        rules:
          - when: $WHEN
        script:
          - echo "deploying"
      EOF
  artifacts:
    when: always
    paths:
      - deploy.yml
Note that with an unquoted EOF delimiter the shell expands variables inside the heredoc, which is what substitutes the computed value of $WHEN into deploy.yml. If you want a $ to stay literal in the generated file, escape it (or single-quote 'EOF' to disable expansion entirely).
Finally, you can trigger the child pipeline with deploy.yml
gen_deploy:
  stage: child-deploy
  when: always
  trigger:
    include:
      - artifact: deploy.yml
        job: my_test
    strategy: depend
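For illustration, after a successful test run the generated deploy.yml (with the unquoted EOF above, so $WHEN has already been expanded by the shell) would contain something like:

stages:
  - deploy

my_deploy:
  stage: deploy
  rules:
    - when: always
  script:
    - echo "deploying"

On a failed run it would say when: manual instead, so the child deploy job waits to be triggered by hand from the UI.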