Taking output of Terraform into a bash script as an input variable - bash

I am writing a script that takes care of running a Terraform configuration and creating the infrastructure. I have a requirement where I need to take output from Terraform back into the same script to create a schema for the DB. I need to take the endpoint, username, password and DB name as input into the script in order to log in to the DB and create the schema. In other words, I need to take the output from the aws_db_instance resource that Terraform has already created and push it as input into the bash script.
Any help as to how we can achieve this would be really appreciated; thanks in advance. Below is the schema code that I would be using in the script, and it needs those inputs from Terraform.
RDS_MYSQL_USER="Username"
RDS_MYSQL_PASS="password"
RDS_MYSQL_BASE="DB-Name"
mysql -h "$RDS_MYSQL_ENDPOINT" -P "$PORT" -u "$RDS_MYSQL_USER" -p"$RDS_MYSQL_PASS" -D "$RDS_MYSQL_BASE" -e 'quit'

The usual way to export particular values from a Terraform configuration is to declare Output Values.
In your case it seems like you want to export several of the result attributes from aws_db_instance, which you could do with declarations like the following in your root module:
output "mysql_host" {
value = aws_db_instance.example.address
}
output "mysql_port" {
value = aws_db_instance.example.port
}
output "mysql_username" {
value = aws_db_instance.example.username
}
output "mysql_password" {
value = aws_db_instance.example.password
sensitive = true
}
output "mysql_database_name" {
value = aws_db_instance.example.name
}
After you run terraform apply you should see Terraform report the final values for each of these, with the password hidden behind (sensitive value) because I declared it with sensitive = true.
Once that's worked, you can use the terraform output command with its -raw option to retrieve these values in a way that's more convenient to use in a shell script. For example, if you are using a Bash-like shell:
MYSQL_HOST="$(terraform output -raw mysql_host)"
MYSQL_PORT="$(terraform output -raw mysql_port)"
MYSQL_USERNAME="$(terraform output -raw mysql_username)"
MYSQL_PASSWORD="$(terraform output -raw mysql_password)"
MYSQL_DB_NAME="$(terraform output -raw mysql_database_name)"
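With those values in hand, you can feed them straight into the mysql command from the question. A minimal sketch, where my_schema is a hypothetical placeholder for the schema you actually want to create:
mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u "$MYSQL_USERNAME" -p"$MYSQL_PASSWORD" \
  -D "$MYSQL_DB_NAME" -e 'CREATE SCHEMA IF NOT EXISTS my_schema;'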
Each run of terraform output will need to retrieve the latest state snapshot from your configured backend, so running it five times might be slow if your chosen backend has a long round-trip time. You could potentially optimize this by installing separate software like jq to parse the terraform output -json result to retrieve all of the values from a single command. There are some further examples in terraform output: Use in Automation.
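For example, a rough sketch of that single-call approach, assuming jq is installed and the output names match the declarations above:
# Read the state once and parse each value out of the same JSON document.
OUTPUTS="$(terraform output -json)"
MYSQL_HOST="$(echo "$OUTPUTS" | jq -r '.mysql_host.value')"
MYSQL_PORT="$(echo "$OUTPUTS" | jq -r '.mysql_port.value')"
MYSQL_USERNAME="$(echo "$OUTPUTS" | jq -r '.mysql_username.value')"
MYSQL_PASSWORD="$(echo "$OUTPUTS" | jq -r '.mysql_password.value')"
MYSQL_DB_NAME="$(echo "$OUTPUTS" | jq -r '.mysql_database_name.value')"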

Related

Jenkins Pipeline throws "syntax error: bad substitution" when Passing in Parameter

I have a Terraform project where I am trying to use Jenkins' Custom Checkbox plugin (Custom Checkbox Parameter) so that I can build separate applications dynamically using the same IaC; however, I'm getting the following error when passing the name parameter from that plugin into the Terraform plan and apply commands.
syntax error: bad substitution
The idea for all this is just to click on "select all" or each individual app and run the build, and this will create the IaC for the given application(s).
I have a terraform plan that I am running as a smoke test to verify the parameters above are being passed in correctly before running the apply. This looks like the following:
sh 'terraform plan -var-file="terraform-dev.tfvars" -var "app_name=[${params[${please-work}]}]" -input=false'
The documentation for the plugin states that you can reference the checked items by using this format: "${params['please-work']}", which is what I've done above. That said, one caveat is that I'm having to set the values in quotes for this to work, since the variables are defined in Terraform as list(string).
NOTE: I have tested that all of this works if I just hardcode the app names with the escapes, as follows:
sh 'terraform plan -var-file="terraform-dev.tfvars" -var "app_name=[\\"app-1\\",\\"app-2\\"]" -input=false'
Again, what I need is for this to work with the -var "app_name=[${params[${please-work}]}]" without throwing that error.
If needed, here is the setup for the JSON that the plugin is using:
Additionally, I can see the values are being set the way I need them when I run echo "${params['please-work']}" in the initial build step, so they come back as "app-1", "app-2".
Again, all but that one bit is working, and I've tried various ways to escape the needed strings to get this to work. I need insight on a path forward; it would be greatly appreciated.
You are casting the script argument of your sh step method as a literal (single-quoted) string, so the Groovy pipeline interpreter will not interpolate the pipeline variable params (an object) inside it. You are also passing the variable value for app_name with [] syntax (an attempted list constructor?), which is not syntactically valid for shell, Terraform, or JSON; it is valid for Jenkins Pipeline and Groovy, but with undesired behavior (it is unclear what is desired here). Finally, please-work is a literal string and not a Jenkins Pipeline or Groovy variable, and since params is technically an object and not a Map, you must use the . syntax and not the [] syntax for accessors. You would need to update to:
sh(label: 'Execute Terraform Plan', script: "terraform plan -var-file='terraform-dev.tfvars' -var 'app_name=${params.please-work}' -input=false")
If another issue arises after fixing all of this, then it would be recommended to convert the plugin usage to a parameters directive in the pipeline, and probably also to remove unusual characters such as - from the parameter name.
Thanks for helping me think through this, Matt. I was able to resolve the issue with the following shell script in the declarative pipeline:
sh "terraform plan -var-file='terraform-dev.tfvars' -var 'app_name=[${params['please-work']}]' -input=false"
This is working now.

Bash - Gitlab CI not converting variable to a string

I am using GitLab to deploy a project and have some environment variables set up in the GitLab console, which I use in my GitLab deployment script below:
- export S3_BUCKET="$(eval \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})"
- aws s3 rm s3://$S3_BUCKET --recursive
My environment variables are declared like so:
Key: s3_bucket_development
Value: https://dev.my-bucket.com
Key: s3_bucket_production
Value: https://prod.my-bucket.com
The plan is that it grabs the bucket URL from the environment variables depending on which branch is trying to deploy (CI_COMMIT_REF_NAME).
The problem is that the S3_BUCKET variable does not seem to get set properly and I get the following error:
> export S3_BUCKET=$(eval \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})
> /scripts-30283952-2040310190/step_script: line 150: https://dev.my-bucket.com: No such file or directory
It looks like it picks up the environmental variable value fine but does not set it properly - any ideas why?
It seems like you are trying to get the value of the variables S3_BUCKET_DEVELOPMENT and S3_BUCKET_PRODUCTION based on the value of CI_COMMIT_REF_NAME. You can do this by using parameter indirection:
$ a=b
$ b=c
$ echo "${!a}"    # c
and in your case you would need a temporary variable as well; something like this might work:
- s3_bucket_variable=S3_BUCKET_${CI_COMMIT_REF_NAME^^}
- s3_bucket=${!s3_bucket_variable}
- aws s3 rm "s3://$s3_bucket" --recursive
You are basically telling bash to execute a command named https://dev.my-bucket.com, which obviously doesn't exist.
Since you want to assign the output of a command when using VAR=$(command), you should use echo:
export S3_BUCKET=$(eval echo \$S3_BUCKET_${CI_COMMIT_REF_NAME^^})
Simple test:
VAR=HELL; OUTPUT="$(eval echo "\$S${VAR^^}")"; echo $OUTPUT
/bin/bash
It dynamically builds the variable name SHELL and then successfully prints its value.
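To make the difference concrete, here is a hypothetical local test of both approaches, assuming bash 4+ and a development branch:
CI_COMMIT_REF_NAME=development
S3_BUCKET_DEVELOPMENT="https://dev.my-bucket.com"
# eval + echo, as in this answer
export S3_BUCKET="$(eval echo "\$S3_BUCKET_${CI_COMMIT_REF_NAME^^}")"
echo "$S3_BUCKET"        # https://dev.my-bucket.com
# parameter indirection, as in the previous answer, avoids eval entirely
bucket_var="S3_BUCKET_${CI_COMMIT_REF_NAME^^}"
echo "${!bucket_var}"    # https://dev.my-bucket.com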

Terraform to read variables from environment

I have written a Terraform configuration with a variable definition like:
variable "GOOGLE_CLOUD_REGION" {
type = string
}
When I run terraform plan I am asked to fill in this variable even though this variable is set within my environment.
Is there a way to tell terraform to work with current env vars? Or do I have to export them and pass them somehow manually one-by-one?
You can define the environment variable TF_VAR_GOOGLE_CLOUD_REGION to set that variable.
If you are using bash, it might look like this:
export TF_VAR_GOOGLE_CLOUD_REGION="$GOOGLE_CLOUD_REGION"
terraform apply ...
From Environment Variables under Configuration Language: Input Variables.
As a fallback for the other ways of defining variables, Terraform searches the environment of its own process for environment variables named TF_VAR_ followed by the name of a declared variable.
This can be useful when running Terraform in automation, or when running a sequence of Terraform commands in succession with the same variables. For example, at a bash prompt on a Unix system:
$ export TF_VAR_image_id=ami-abc123
$ terraform plan
...
You can create a file that ends with .tfvars or .tfvars.json, and then when you run a plan or apply you specify that file:
terraform apply -var-file="example.tfvars"
If you name the file terraform.tfvars or terraform.tfvars.json or have a file with names ending in .auto.tfvars or .auto.tfvars.json
then Terraform automatically loads the variable definition file and you don't have to manually specify it when you run a plan.
An example of what the terraform.tfvars file will look like:
first_env_var = "environment_variable_one"
second_env_var = "environment_variable_two"
An example of what the terraform.tfvars.json file will look like:
{
  "image_id": "ami-abc123",
  "availability_zone_names": ["us-west-1a", "us-west-1c"]
}
I would approach this by creating a variables.tf file within the project directory. With the required variable block you can specify a default:
variable "GOOGLE_CLOUD_REGION" {
type = string
default = "us-west1"
}
This will then be used as the default value during each run, and you will not be prompted.
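Note that a default does not lock the value in; an environment variable (or any other variable source) still overrides it. For example, hypothetically:
export TF_VAR_GOOGLE_CLOUD_REGION="europe-west1"
terraform plan    # uses europe-west1 instead of the us-west1 default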

Airflow parameter passing

I have a simple job that I'd like to move under an Airflow process, if possible. As it stands, I have a string of bash scripts that access a server and download the latest version of a file and then perform a variety of downstream manipulations to that file.
exec ./somescript.sh somefileurl
What I'd like to know is: how can I pass in the URL to this file every time I need to run this process?
It seems that if I try to run the bash script as a bash command like so:
download = BashOperator(
    task_id='download_release',
    bash_command='somescript.sh',
    # params={'URL': 'somefileurl'},
    dag=dag)
I have no way of passing in the one parameter that the bash script requires. Otherwise, if I try to send the bash script in as a bash command like so:
download = BashOperator(
    task_id='download_release',
    bash_command='./somescript.sh {{ URL }}',
    params={'URL': 'somefileurl'},
    dag=dag)
I receive an execution error as the program tries to execute the script in the context of a temporary directory. This breaks the script as it requires access to some credentials files that sit in the same directory and I'd like to keep the relative file locations intact...
Thoughts?
Update: What worked for me
download = BashOperator(
    task_id='download_release',
    bash_command='cd {{ params.dir }} && ./somescript.sh {{ params.url }}',
    params={'url': 'somefileurl',
            'dir': 'somedir'},
    dag=dag)
I did not implement any parameter passing yet, though.
Here is an example of passing a parameter to your BashOperator:
templated_command = """
cd /working_directory
somescript.sh {{ dag_run.conf['URL'] }}
"""
download = BashOperator(
    task_id='download_release',
    bash_command=templated_command,
    dag=dag)
For a discussion about this, see passing parameters to an externally triggered DAG. Airflow has two example DAGs that demonstrate this: example_trigger_controller_dag and example_trigger_target_dag. Also, see the Airflow API reference on macros.
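The URL then comes from the conf payload supplied when the DAG is triggered. For example, a hedged sketch using the CLI, where download_release_dag is a hypothetical DAG id and the exact command depends on your Airflow version:
# Airflow 2.x style
airflow dags trigger download_release_dag --conf '{"URL": "somefileurl"}'
# Airflow 1.x style
airflow trigger_dag download_release_dag --conf '{"URL": "somefileurl"}'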

The result of the same script is displayed in two formats when calling it using powershell.invoke and pipeline.invoke

I am calling the following script to display the local user accounts on a machine:
$adsi = [ADSI]'WinNT://localhost';
$adsi.Children | where {$_.SchemaClassName -eq 'user'} | Select-Object @{n='UserName';e={$_.Name}};
When the above script is executed using powershell.invoke, the result is:
@{UserName=account17}
When the same script is executed using pipeline.invoke, the result is:
UserName
--------
account17
Why is there a difference in the output for the same script when invoked using powershell and pipeline?
Not sure, but the powershell.invoke output looks like object output via Write-Host, and the pipeline.invoke output looks like Write-Output output, even though both should return a PSObject.
More code would be helpful.
