How to pass an IP address from Terraform to Ansible [duplicate]

I am trying to create an Ansible inventory file using the local_file resource in Terraform (I am open to suggestions on doing it a different way).
module "vm" config:
resource "azurerm_linux_virtual_machine" "vm" {
  for_each = { for edit in local.vm : edit.name => edit }

  name                            = each.value.name
  resource_group_name             = var.vm_rg
  location                        = var.vm_location
  size                            = each.value.size
  admin_username                  = var.vm_username
  admin_password                  = var.vm_password
  disable_password_authentication = false
  network_interface_ids           = [azurerm_network_interface.edit_seat_nic[each.key].id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }
}

output "vm_ips" {
  value = toset([
    for vm_ips in azurerm_linux_virtual_machine.vm : vm_ips.private_ip_address
  ])
}
When I run terraform plan with the above configuration I get:
Changes to Outputs:
+ test = [
+ "10.1.0.4",
]
Now, in my main TF I have the configuration for local_file as follows:
resource "local_file" "ansible_inventory" {
  filename = "./ansible_inventory/ansible_inventory.ini"
  content  = <<EOF
[vm]
${module.vm.vm_ips}
EOF
}
This returns the error below:
Error: Invalid template interpolation value
on main.tf line 92, in resource "local_file" "ansible_inventory":
90: content = <<EOF
91: [vm]
92: ${module.vm.vm_ips}
93: EOF
module.vm.vm_ips is set of string with 1 element
Cannot include the given value in a string template: string required.
Any suggestion on how to inject the list of IPs from the output into the local file, while still being able to format the rest of the text in the file?

If you want the Ansible inventory to be statically sourced from a file in INI format, then you need to render a template in Terraform to produce the desired output.
module/templates/inventory.tmpl:
[vm]
%{ for ip in ips ~}
${ip}
%{ endfor ~}
Alternative suggestion from @mdaniel:
[vm]
${join("\n", ips)}
module/config.tf:
resource "local_file" "ansible_inventory" {
  content = templatefile("${path.module}/templates/inventory.tmpl",
    { ips = module.vm.vm_ips }
  )
  filename        = "${path.module}/ansible_inventory/ansible_inventory.ini"
  file_permission = "0644"
}
A couple of additional notes though:
You can modify your output to be the entire map of objects of exported attributes like:
output "vms" {
  value = azurerm_linux_virtual_machine.vm
}
and then you can access more information about the instances to populate in your inventory. Your templatefile argument would still be the module output, but the for expression(s) in the template would look considerably different depending upon what you want to add.
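For instance, a sketch of such a template (an illustration, not from the original answer; it assumes the full "vms" output above and that you want a hostname plus ansible_host variable per line in the INI inventory):

```
# module/templates/inventory.tmpl (hypothetical)
[vm]
%{ for name, vm in vms ~}
${name} ansible_host=${vm.private_ip_address}
%{ endfor ~}
```

rendered with templatefile("${path.module}/templates/inventory.tmpl", { vms = module.vm.vms }).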
You can also utilize the YAML or JSON inventory formats for Ansible static inventory. With those, you can leverage the yamlencode or jsonencode Terraform functions to serialize the HCL2 data structure directly, which makes the transformation much easier. The template file becomes a good bit cleaner (or unnecessary) in that situation for more complex inventories.
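A sketch of the YAML-inventory variant (my illustration, not from the thread; it reuses the module.vm.vm_ips output and assumes Ansible's group -> hosts YAML inventory shape):

```hcl
resource "local_file" "ansible_inventory" {
  filename        = "${path.module}/ansible_inventory/inventory.yaml"
  file_permission = "0644"

  # yamlencode serializes the HCL structure directly; no template file needed
  content = yamlencode({
    vm = {
      hosts = { for ip in module.vm.vm_ips : ip => {} }
    }
  })
}
```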

Related

aws_imagebuilder_component issue with template_file

I need to deploy a bash script using the aws_imagebuilder_component resource; I am using a template_file to populate an array inside my script. I am trying to figure out the correct way to render the template inside the imagebuilder_component resource.
I am pretty sure that the formatting of my script is the main issue :(
This is the error that I keep getting; it seems like an issue with the way I am formatting the script inside the yml. Can you please assist? I have not worked with Image Builder previously, or with yamlencode.
Error: error creating Image Builder Component: InvalidParameterValueException: The value supplied for parameter 'data' is not valid. Failed to parse document. Error: line 1: cannot unmarshal string phases:... into Document.
# Image builder component
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  data     = yamlencode(data.template_file.init.rendered)
  version  = "1.0.0"
}
# template_file
data "template_file" "init" {
  template = "${file("${path.module}/myscript.yml")}"

  vars = {
    devnames = join(" ", local.devnames)
  }
}
# myscript.yml
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: pre-requirements
        action: ExecuteBash
        inputs:
          commands: |
            #!/bin/bash
            for i in `seq 0 6`; do
              nvdev="/dev/nvm${i}n1"
              if [ -e $nvdev ]; then
                mapdev="${devnames[i]}"
                if [[ -z "$mapdev" ]]; then
                  mapdev="${devnames[i]}"
                fi
              else
                ln -s $nvdev $mapdev
                echo "symlink created: ${nvdev} to ${mapdev}"
              fi
            done
# tfvars
vols = {
  data01 = {
    devname = "/dev/xvde"
    size    = "200"
  }
  data02 = {
    devname = "/dev/xvdf"
    size    = "300"
  }
}
variables.tf: populating the list "devnames" from the map object "vols" shown above:
locals {
  devnames = [for key, value in var.vols : value.devname]
}
main.tf: template_file uses the list "devnames" to assign its values to the devnames variable, which is used inside myscript.yml:
  devnames = join(" ", local.devnames)
At this point, everything is working without issues.
But when this is executed, it fails and complains about the formatting of the template that was rendered from myscript.yml.
I am doing something wrong here that I cannot figure out.
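A likely cause, not confirmed in the thread: data.template_file.init.rendered is already a YAML string, so wrapping it in yamlencode() re-encodes it as a single quoted scalar, which Image Builder then cannot unmarshal as a document. A sketch of passing the rendered template through directly:

```hcl
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  # rendered is already YAML text; yamlencode() here would produce
  # one quoted string instead of a YAML document
  data     = data.template_file.init.rendered
  version  = "1.0.0"
}
```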

Terraform EC2 NIC private_ips build list from custom module outputs

I have a custom child module that is building various AWS resources for unique EC2 instances, which are loaded from JSON definition files. In the root module, I need to concatenate an output property from each of the child modules to apply secondary private IPv4 addresses to a network interface resource. The network interface will have 2 static IPv4 addresses, plus an IPv4 address from EACH of the child modules that are built.
Here is my folder structure:
root/
|_ main.tf
|_ instances/
   |_ instance1.json
   |_ instance2.json
   |_ ...
|_ modules/instance/
   |_ main.tf
   |_ outputs.tf
The root main.tf file will load all of the JSON files into custom child modules using the for_each argument like so:
locals {
  json_files = fileset("./instances/", "*.json")
  json_data  = [for f in local.json_files : jsondecode(file("./instances/${f}"))]
}

module "instance" {
  for_each = { for k, v in local.json_data : k => v }
  source   = "../modules/instance"

  server_name  = each.value.server_name
  firewall_vip = each.value.firewall_vip
  ...
}
There is a string output attribute I'm trying to grab from the child modules to then apply as a list to an aws_network_interface resource private_ips property.
The string output attribute is a virtual IP used for special routing through a firewall to the backend instances.
Example of the output attribute in the child module outputs.tf file:
output "firewall_vip" {
  description = "The virtual IP to pass through firewall"
  value       = "10.0.0.10"
}
Side note: The "firewall_vip" output property is ALSO defined within the JSON files for an input variable to the child module... So is there an easier way to pull the property straight from the JSON files instead of relying on the child module outputs?
Within the root module main.tf file, I am trying to concatenate a list of all secondary IPs to apply to the NIC with a splat expression (not sure if this is the right approach):
resource "aws_network_interface" "firewall" {
  subnet_id   = <subnet ID>
  private_ips = concat(["10.0.0.4", "10.0.0.5"], module.instance[*].firewall_vip)
}
I receive an error saying:
Error: Incorrect attribute value type
module.instance is a map of object, known only after apply
Inappropriate value for attribute "private_ips": element 2: string required.
I have also tried to use the For expression to achieve this like so:
resource "aws_network_interface" "firewall" {
  private_ips = concat(["10.0.0.4", "10.0.0.5"], [for k, v in module.instance[*] : v if k == "firewall_vip"])
  ...
}
I do not receive any errors with this method, but it also will not recognize any of the "firewall_vip" outputs from the child modules for appending to the list.
Am I going about this the wrong way? Any suggestions would be very helpful, as I'm still a Terraform newb.
I realize I was over-complicating this, and I could just use the locals{} block to pull the JSON attributes without having to rely on the child module outputs...
In the root main.tf file:
locals {
  json_data   = [for f in fileset("./instances/", "*.json") : jsondecode(file("./instances/${f}"))]
  server_vips = local.json_data[*].server_vip
}

resource "aws_network_interface" "firewall" {
  private_ips = concat(["10.0.0.4", "10.0.0.5"], local.server_vips)
  ...
}
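As a footnote to the self-answer: the module-output approach can also work. Because for_each makes module.instance a map of objects, the [*] splat does not apply to it, but values() does. A sketch (assumes the firewall_vip output shown earlier; the subnet reference is a placeholder):

```hcl
resource "aws_network_interface" "firewall" {
  subnet_id = var.subnet_id # placeholder

  # values() turns the map of module instances into a list,
  # so a for expression can collect one output per instance
  private_ips = concat(
    ["10.0.0.4", "10.0.0.5"],
    [for m in values(module.instance) : m.firewall_vip],
  )
}
```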

Using terraform yamldecode to access multi level element

I have a YAML file (also used in an Azure DevOps pipeline, so it needs to stay in this format) which contains some settings I'd like to access directly from my Terraform module.
The file looks something like:
variables:
  - name: tenantsList
    value: tenanta,tenantb
  - name: unitName
    value: canary
I'd like to have a module like this to access the settings but I can't see how to get to the bottom level:
locals {
  settings = yamldecode(file("../settings.yml"))
}

module "infra" {
  source   = "../../../infra/terraform/"
  unitname = local.settings.variables.unitName
}
But the terraform plan errors with this:
Error: Unsupported attribute
on canary.tf line 16, in module "infra":
16: unitname = local.settings.variables.unitName
|----------------
| local.settings.variables is tuple with 2 elements
This value does not have any attributes.
It seems like the main reason this is difficult is because this YAML file is representing what is logically a single map but is physically represented as a YAML list of maps.
When reading data from a separate file like this, I like to write an explicit expression to normalize it and optionally transform it for more convenient use in the rest of the Terraform module. In this case, it seems like having variables as a map would be the most useful representation as a Terraform value, so we can write a transformation expression like this:
locals {
  raw_settings = yamldecode(file("${path.module}/../settings.yml"))

  settings = {
    variables = tomap({
      for v in local.raw_settings.variables : v.name => v.value
    })
  }
}
The above uses a for expression to project the list of maps into a single map using the name values as the keys.
With the list of maps converted to a single map, you can then access it the way you originally tried:
module "infra" {
  source   = "../../../infra/terraform/"
  unitname = local.settings.variables.unitName
}
If you were to output the transformed value of local.settings as YAML, it would look something like this, which is why accessing the map elements directly is now possible:
variables:
  tenantsList: tenanta,tenantb
  unitName: canary
This will work only if all of the name strings in your input are unique, because otherwise there would not be a unique map key for each element.
(Writing a normalization expression like this also doubles as some implicit validation for the shape of that YAML file: if variables were not a list or if the values were not all of the same type then Terraform would raise a type error evaluating that expression. Even if no transformation is required, I like to write out this sort of expression anyway because it serves as some documentation for what shape the YAML file is expected to have, rather than having to study all of the references to it throughout the rest of the configuration.)
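One further convenience (my addition, not part of the answer): tenantsList is a single comma-separated string, so once the normalization above is in place it can be split into a proper list:

```hcl
locals {
  # split() on the comma turns "tenanta,tenantb" into a list of strings
  tenants = split(",", local.settings.variables.tenantsList)
}
```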
With my multidecoder for YAML and JSON, you are able to access multiple YAML and/or JSON files via their relative paths in one step.
Documentation can be found here:
Terraform Registry -
https://registry.terraform.io/modules/levmel/yaml_json/multidecoder/latest?tab=inputs
GitHub:
https://github.com/levmel/terraform-multidecoder-yaml_json
Usage
Place this module in the location where you need to access multiple different YAML and/or JSON files (different paths are possible) and pass your path(s) in the parameter filepaths, which takes a set of strings of the relative paths of YAML and/or JSON files as an argument. You can change the module name if you want.
module "yaml_json_decoder" {
  source    = "levmel/yaml_json/multidecoder"
  version   = "0.2.1"
  filepaths = ["routes/nsg_rules.yml", "failover/cosmosdb.json", "network/private_endpoints/*.yaml", "network/private_links/config_file.yml", "network/private_endpoints/*.yml", "pipeline/config/*.json"]
}
Patterns to access YAML and/or JSON files from relative paths:
To access all YAML and/or JSON files in a folder, enter your path as follows: "folder/rest_of_folders/*.yaml", "folder/rest_of_folders/*.yml" or "folder/rest_of_folders/*.json".
To access a specific YAML or JSON file in a folder structure, use "folder/rest_of_folders/name_of_yaml.yaml", "folder/rest_of_folders/name_of_yaml.yml" or "folder/rest_of_folders/name_of_yaml.json".
If you would like to select all YAML and/or JSON files within a folder, use the "*.yml", "*.yaml", "*.json" notation (see the usage section above).
YAML delimiter support is available from version 0.1.0.
WARNING: Only the relative path must be specified. path.root (it is included in the module by default) should not be passed; pass everything after it.
Access YAML and JSON entries
Now you can access all entries within all the YAML and/or JSON files you've selected like this: module.yaml_json_decoder.files.<name of your YAML or JSON file>.entry. If the name of your YAML or JSON file is name_of_your_config_file, then access it as follows: module.yaml_json_decoder.files.name_of_your_config_file.entry.
Example of multi YAML and JSON file accesses from different paths (directories)
first YAML file:
routes/nsg_rules.yml
rdp:
  name: rdp
  priority: 80
  direction: Inbound
  access: Allow
  protocol: Tcp
  source_port_range: "*"
  destination_port_range: 3399
  source_address_prefix: VirtualNetwork
  destination_address_prefix: "*"
---
ssh:
  name: ssh
  priority: 70
  direction: Inbound
  access: Allow
  protocol: Tcp
  source_port_range: "*"
  destination_port_range: 24
  source_address_prefix: VirtualNetwork
  destination_address_prefix: "*"
second YAML file:
services/logging/monitoring.yml
application_insights:
  application_type: other
  retention_in_days: 30
  daily_data_cap_in_gb: 20
  daily_data_cap_notifications_disabled: true
logs:
  # Optional fields
  - "AppMetrics"
  - "AppAvailabilityResults"
  - "AppEvents"
  - "AppDependencies"
  - "AppBrowserTimings"
  - "AppExceptions"
  - "AppPerformanceCounters"
  - "AppRequests"
  - "AppSystemEvents"
  - "AppTraces"
first JSON file:
test/config/json_history.json
{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {
          "ID": "SGML",
          "SortAs": "SGML",
          "GlossTerm": "Standard Generalized Markup Language",
          "Acronym": "SGML",
          "Abbrev": "ISO 8879:1986",
          "GlossDef": {
            "para": "A meta-markup language, used to create markup languages such as DocBook.",
            "GlossSeeAlso": ["GML", "XML"]
          },
          "GlossSee": "markup"
        }
      }
    }
  }
}
main.tf
module "yaml_json_multidecoder" {
  source    = "levmel/yaml_json/multidecoder"
  version   = "0.2.1"
  filepaths = ["routes/nsg_rules.yml", "services/logging/monitoring.yml", "test/config/*.json"]
}

output "nsg_rules_entry" {
  value = module.yaml_json_multidecoder.files.nsg_rules.aks.ssh.source_address_prefix
}

output "application_insights_entry" {
  value = module.yaml_json_multidecoder.files.monitoring.application_insights.daily_data_cap_in_gb
}

output "json_history" {
  value = module.yaml_json_multidecoder.files.json_history.glossary.title
}
Changes to Outputs:
  nsg_rules_entry            = "VirtualNetwork"
  application_insights_entry = 20
  json_history               = "example glossary"

How to access JSON from external data source in Terraform?

I am receiving JSON from an http Terraform data source:
data "http" "example" {
  url = "${var.cloudwatch_endpoint}/api/v0/components"

  # Optional request headers
  request_headers {
    "Accept"    = "application/json"
    "X-Api-Key" = "${var.api_key}"
  }
}
It outputs the following.
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON, I pass it to an external data source, which is a simple Ruby function. Here is the Terraform to pass it:
data "external" "component_ids" {
  program = ["ruby", "./fetchComponent.rb"]

  query = {
    data = "${data.http.example.body}"
  }
}
Here is the ruby function
#!/usr/bin/env ruby
require 'json'

data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access data inside of the external data source. I am unable.
Ultimately what I want to do is create a list of the componentID variables which are located within the external data source.
Some things I have tried:
* output.external: key "0" does not exist in map data.external.component_ids.result in:
    ${data.external.component_ids.result[0]}
* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
    ${element(data.external.component_ids.result["componentID"], 0)}
* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
    ${data.external.component_ids.result["componentID"]}
* output.external: lookup: lookup failed to find 'componentID' in:
    ${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think about the solution.
Terraform couldn't decode JSON directly before 0.12 (jsondecode was introduced in Terraform 0.12), but there is a workaround to work on nested lists.
Your Ruby needs to be adjusted to make its output match the variable http below; then you should be fine to get what you need.
$ cat main.tf

variable "http" {
  type    = "list"
  default = [{ componentID = "k8QEbeuHdDnU", name = "Jenkins" }]
}

output "http" {
  value = "${lookup(var.http[0], "componentID")}"
}
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
http = k8QEbeuHdDnU
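(A present-day note beyond the original answer: since Terraform 0.12, the jsondecode function parses the response directly, so the external data source and Ruby shim are unnecessary. A sketch assuming the same data.http.example source; note that newer versions of the http provider name the attribute response_body instead of body:)

```hcl
locals {
  # decode the JSON array of component objects, then collect the IDs
  components    = jsondecode(data.http.example.body)
  component_ids = local.components[*].componentID
}
```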

Variable substitution in terraform built-in functions

In Terraform, I do not manage to assign a private IP to my instances using the cidrhost built-in function.
I'm pretty sure I do not have the correct syntax but cannot figure out what is wrong.
# Get the list of subnet ids of my VPC
# Get the list of subnet ids of my VPC
data "aws_subnet_ids" "org_subnet_ids" {
  vpc_id = "${data.aws_vpc.org_vpc.id}"
}

# Get the list of subnets from the ids
data "aws_subnet" "org_subnets" {
  count = "${length(data.aws_subnet_ids.org_subnet_ids.ids)}"
  id    = "${data.aws_subnet_ids.org_subnet_ids.ids[count.index]}"
}

# Create instances and assign each of them to a different subnet
resource "aws_instance" "manager" {
  count         = 3
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.small"
  subnet_id     = "${data.aws_subnet_ids.org_subnet_ids.ids[count.index]}"
  private_ip    = "${cidrhost(${data.aws_subnet.org_subnets.cidr_block[count.index]}, count.index + 1)}"

  tags {
    Name = "manager-${count.index + 1}"
  }
}
"$ terraform plan" fails with the following error:
Error: Failed to load root config module: Error loading ./provisionning/aws/ec2.tf
Error reading config for aws_instance[manager]: parse error at 1:12: expected expression but found invalid sequence "$"
Any idea what I'm missing here?
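(No answer appears in the thread, but the parse error points at the nested interpolation: inside ${ ... }, references are written bare, without a second ${ }. A sketch of the corrected attribute, also using the splat form that a counted data source requires:)

```hcl
# inside the aws_instance "manager" resource:
private_ip = "${cidrhost(data.aws_subnet.org_subnets.*.cidr_block[count.index], count.index + 1)}"
```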
