I have written a terraform configuration with variable definition like:
variable "GOOGLE_CLOUD_REGION" {
type = string
}
When I run terraform plan, I am asked to fill in this variable even though it is already set in my environment.
Is there a way to tell Terraform to pick up the current environment variables, or do I have to export them and pass them in manually one by one?
You can define the environment variable TF_VAR_GOOGLE_CLOUD_REGION to set that variable.
If you are using bash, it might look like this:
export TF_VAR_GOOGLE_CLOUD_REGION="$GOOGLE_CLOUD_REGION"
terraform apply ...
From Environment Variables under Configuration Language: Input Variables.
As a fallback for the other ways of defining variables, Terraform searches the environment of its own process for environment variables named TF_VAR_ followed by the name of a declared variable.
This can be useful when running Terraform in automation, or when running a sequence of Terraform commands in succession with the same variables. For example, at a bash prompt on a Unix system:
$ export TF_VAR_image_id=ami-abc123
$ terraform plan
...
You can create a file whose name ends in .tfvars or .tfvars.json and specify it when you run a plan or apply:
terraform apply -var-file="example.tfvars"
If you name the file terraform.tfvars or terraform.tfvars.json, or give it a name ending in .auto.tfvars or .auto.tfvars.json,
then Terraform loads the variable definitions file automatically and you don't have to specify it manually when you run a plan.
An example of what the terraform.tfvars file will look like:
first_env_var = "environment_variable_one"
second_env_var = "environment_variable_two"
An example of what the terraform.tfvars.json file will look like:
{
  "image_id": "ami-abc123",
  "availability_zone_names": ["us-west-1a", "us-west-1c"]
}
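If your values already live in environment variables, you can also generate such a file from the shell before running Terraform. A minimal bash sketch, using the GOOGLE_CLOUD_REGION variable from the question (the file name example.tfvars is just illustrative):

# Write the current environment value into a variable definitions file
cat > example.tfvars <<EOF
GOOGLE_CLOUD_REGION = "${GOOGLE_CLOUD_REGION}"
EOF
terraform plan -var-file="example.tfvars"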
I would approach this by creating a variables.tf file within the project directory. With the required variable block, you can specify a default:
variable "GOOGLE_CLOUD_REGION" {
type = string
default = "us-west1"
}
This default will then be used during each run, and you will not be prompted.
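Note that a default combines well with the TF_VAR_ approach above: the default applies only when nothing else supplies a value, and an environment variable still overrides it. A quick bash sketch (europe-west1 is just an illustrative region):

terraform plan                                     # uses the default "us-west1"
export TF_VAR_GOOGLE_CLOUD_REGION="europe-west1"
terraform plan                                     # uses "europe-west1" instead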
I am using a Jenkins plugin to upload test run results to Jira. Using this plugin I can send two JSON blobs of data for the import, but the variables in those JSON blobs can only be environment variables (not variables generally available in the Jenkinsfile).
When I run it, it recognizes environment variables that come from the parameters block (this is a parameterized build), but it does not recognize any environment variables I set myself, whether in an environment {} block in the pipeline or by nesting the build step in a withEnv() {} block.
As a sanity check, right before the step in question, I echo two environment variables (one from the parameters block and one from the environment block), and both print to the console as expected. But as consumed by the plugin, only the variables coming from the parameters block are read as variables; the rest are left as literal strings.
So is there some difference in how these environment variables are stored/managed behind the scenes that might play into this?
So, for example, here are the parameters and environment blocks:
parameters {
    choice(name: 'ENVIRONMENT', choices: ['dev', 'test', 'staging', 'prod'], description: 'Select the environment to run against.')
    choice(name: 'TESTS', choices: ['All', 'API', 'Web'], description: 'Select the tests to run.')
}
environment {
    PROJECT_KEY = "$jiraProjectKey"
    TEST_PLAN_KEY = "$testPlanKeys[$env.ENVIRONMENT]"
    PRODUCT_NAME = "$productName"
    TEAM_NAME = "$teamName"
}
When I use these environment variables in the JSON blobs, for example to set the Summary field of a Test Execution in Jira with a line that looks like this:
...
"summary": "${ENVIRONMENT} - ${PRODUCT_NAME} - ${TESTS} Tests",
...
The resulting issue summary is:
dev - ${PRODUCT_NAME} - API Tests
So it will properly interpret the environment variables set by the parameters block, but not ones I set explicitly in the environment block.
In the JSON blobs that you are sending inline, make sure that multiline strings are delimited with """ and not '''. In Groovy, triple-single-quoted strings are plain strings that do not interpolate ${...} expressions, while triple-double-quoted strings are GStrings that do.
Replace:
... importInfo: '''{...'''
by:
...importInfo: """{..."""
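For intuition, bash makes the same literal-versus-interpolating distinction between single and double quotes, which you can check at any prompt:

name=dev
echo '${name}'   # single quotes: prints ${name} literally
echo "${name}"   # double quotes: prints dev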
I have declared some variables in Gitlab -> Settings -> CI/CD -> Variables.
I want to access these variables in ruby (.rb) files for chef cookbook recipes.
I have declared a variable named "TEST_VARIABLE" in settings as:
Gitlab -> Settings -> CI/CD -> Variables :- Key = TEST_VARIABLE, Value = TEST_VALUE
I have tried accessing them in the formats below in the Ruby (.rb) recipes:
$TEST_VARIABLE, ${TEST_VARIABLE}, TEST_VARIABLE, #{TEST_VARIABLE} and ENV['TEST_VARIABLE']
But nothing works; they all return a blank or nil value.
Please let me know how to access these variables in the .rb file.
I don't think you can use pipeline variables in normal source code. Maybe you can set a shell environment variable in your script section and access it in your Ruby code.
script:
  - export TEST_VARIABLE=$TEST_VARIABLE
You can pass the variables to the Ruby script when you call it:
script:
  - ./rubyscript $TEST_VARIABLE
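If the Ruby code runs inside the CI job itself, GitLab also exports CI/CD variables into the job's environment, so ENV should see them there. A quick check you can drop into the script section, assuming the job image has ruby available:

ruby -e 'puts ENV["TEST_VARIABLE"]'   # prints TEST_VALUE when run inside the job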
I am using GitLab to deploy a project and have some environment variables set up in the GitLab console, which I use in my GitLab deployment script below:
- export S3_BUCKET="$(eval \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})"
- aws s3 rm s3://$S3_BUCKET --recursive
My environment variables are declared like so:
Key: s3_bucket_development
Value: https://dev.my-bucket.com
Key: s3_bucket_production
Value: https://prod.my-bucket.com
The plan is that it grabs the bucket URL from the environment variables depending on which branch is being deployed (CI_COMMIT_REF_NAME).
The problem is that the S3_BUCKET variable does not seem to get set properly and I get the following error:
> export S3_BUCKET=$(eval \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})
> /scripts-30283952-2040310190/step_script: line 150: https://dev.my-bucket.com: No such file or directory
It looks like it picks up the environmental variable value fine but does not set it properly - any ideas why?
It seems like you are trying to get the value of the variable S3_BUCKET_DEVELOPMENT or S3_BUCKET_PRODUCTION based on the value of CI_COMMIT_REF_NAME. You can do this with parameter indirection:
$ a=b
$ b=c
$ echo "${!a}"   # c
In your case you would need a temporary variable as well; something like this should work:
- s3_bucket_variable=S3_BUCKET_${CI_COMMIT_REF_NAME^^}
- s3_bucket=${!s3_bucket_variable}
- aws s3 rm "s3://$s3_bucket" --recursive
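Putting it together as a self-contained demo you can paste into a bash shell (the values are placeholders standing in for the GitLab settings):

S3_BUCKET_DEVELOPMENT="https://dev.my-bucket.com"
CI_COMMIT_REF_NAME="development"
s3_bucket_variable="S3_BUCKET_${CI_COMMIT_REF_NAME^^}"
echo "${!s3_bucket_variable}"   # prints https://dev.my-bucket.com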
You are basically telling bash to execute a command named https://dev.my-bucket.com, which obviously doesn't exist.
Since you want to assign the output of a command when using VAR=$(command), you should use echo:
export S3_BUCKET=$(eval echo \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})
Simple test:
VAR=HELL; OUTPUT="$(eval echo "\$S${VAR^^}")"; echo $OUTPUT
/bin/bash
It dynamically constructs the variable name SHELL and then successfully prints its value.
I am working on a terraform project that has a variable as such:
variable "datalake_layers" {
type = list
default = ["raw", "bronze", "silver", "gold"]
}
Now I would like to pass the list through the environment (an OS variable). The way I've been passing other OS variables is by running a config.sh file before the terraform commands are executed. The contents of the shell script look like this:
export TF_VAR_tfinfra_storage_akey="some_storage_key"
export TF_VAR_rg_name="some_resourcegroup_name"
How can I achieve a similar setup with a list instead of a string? Could I set the env variable like this and have Terraform convert it to a list somehow? I couldn't find a way to do this. Or is there a better way?
export TF_VAR_datalake_layers="["raw", "bronze", "silver", "gold"]"
Yes, but you have to use single quotes so the shell doesn't strip the inner double quotes; Terraform then parses the value as a list:
export TF_VAR_datalake_layers='["raw", "bronze", "silver", "gold"]'
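You can confirm that Terraform interprets the value as a list (environment variable values are parsed as HCL expressions when the variable has a complex type) with terraform console:

export TF_VAR_datalake_layers='["raw", "bronze", "silver", "gold"]'
echo 'var.datalake_layers' | terraform console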
I want to control the verbosity of ansible playbooks using an environment variable or a global configuration item. This is because ansible is called from multiple places in multiple ways and I want to change the logging level for all further executions from the same shell session.
I observed that if I configure ANSIBLE_DEBUG=true, ansible will run in debug mode, but debug mode is extremely verbose and I am only looking for something similar to the -vvv option (DEBUG is more verbose than even the -vvvv option).
I tried to look for a variable inside https://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py but I wasn't able to find one that fits the bill.
I see two ways to do this:
alias
alias ansible-playbook="echo 'This is -vv alias'; ansible-playbook -vv"
This way your shell will call ansible-playbook -vv when you type ansible-playbook (and print friendly reminder about alias).
callback plugin
Drop this code as a verbosity_env.py file into the callback_plugins directory:
from ansible.plugins.callback import CallbackBase
import os

try:
    from __main__ import display
except ImportError:
    display = None

class CallbackModule(CallbackBase):
    def v2_playbook_on_start(self, playbook):
        # At playbook start, read the desired verbosity from the
        # environment and apply it to Ansible's global display object.
        v = os.environ.get('ANSIBLE_VERBOSITY')
        if v and display:
            display.display('Verbosity is set to {} with environment variable.'.format(v), color='blue')
            display.verbosity = int(v)
It is not production-quality code, but it does the job: it checks the ANSIBLE_VERBOSITY environment variable and sets the display verbosity to its value.
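To use it, the plugin just needs to sit on Ansible's callback path next to the playbook; the environment variable then controls verbosity for the whole shell session, which is what the question asked for. A sketch (site.yml is a placeholder playbook name):

mkdir -p callback_plugins            # scanned by default when adjacent to the playbook
cp verbosity_env.py callback_plugins/
export ANSIBLE_VERBOSITY=3
ansible-playbook site.yml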
Not sure how I missed answering this earlier: for a long time now, ansible has fully supported ANSIBLE_VERBOSITY=[0|1|2|3|4].
For reference, see the ansible documentation.
You can create or edit an ansible.cfg file in the local folder and add, in the [defaults] section:
[defaults]
verbosity=4
Alternatively you can add the same option in /etc/ansible/ansible.cfg file.
I can't find it documented anywhere other than sorin's answer, but if you set the ANSIBLE_VERBOSITY=[0|1|2|3|4] environment variable, ansible-playbook will pick it up and use it, unless you specify the verbosity on the command line.
E.g. in Unix-type shells:
export ANSIBLE_VERBOSITY=2
ansible-playbook my-playbook.yml
I only stumbled upon it because I tried setting ANSIBLE_VERBOSITY='-vv' in a pipeline and Ansible started moaning about it not being an integer!