Terraform template_file issue with a bash script

I am using template_file to pass a list from tfvars to my shell script. I do escape "$" in my tpl.sh script, but then my script ends up ignoring the values of my variables. How do I get around this?
In the snippet below, the script ignores the values of $${i} and $${device_names[i]} because of the doubled dollar signs:
for i in `seq 0 6`; do
  block_device="/dev/nvme$${i}n1"
  if [ -e $block_device ]; then
    mapping_device="$${device_names[i]}"
  fi
done
I did attempt a workaround with eval and got an error in Terraform:
for i in `seq 0 6`; do
  block_device="/dev/nvme" eval "${i}n1"
  if [ -e $block_device ]; then
    mapping_device=eval "${device_names[i]}"
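(A note for context: template_file and templatefile emit every $${...} sequence as a literal ${...}, so the rendered script still contains ordinary shell expansions and no eval is needed. A minimal sketch of the rendering side, where the tpl.sh contents and the sample device list are assumptions of mine, not from the thread:)
# Hypothetical sketch: tpl.sh seeds a bash array from the Terraform
# variable and indexes it with escaped expansions, e.g.
#   device_names=(${device_names})   <- filled in by Terraform
#   echo "$${device_names[0]}"       <- rendered as ${device_names[0]} for bash
output "rendered_script" {
  value = templatefile("${path.module}/tpl.sh", {
    device_names = join(" ", ["xvde", "xvdf", "xvdg"])
  })
}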
It turns out that the above issue was that I was not calling the template correctly.
But now I am trying to call the template in user_data and am getting the error below. Can someone please advise?
locals {
  cloud_config_config = <<-END
    #cloud-config
    ${jsonencode({
      write_files = [
        {
          path        = "/usr/tmp/myscript.sh.tpl"
          permissions = "0744"
          owner       = "root:root"
          encoding    = "b64"
          #content    = "${data.template_file.init.*.rendered}"
        },
      ]
    })}
  END

  myvars = ["var1", "var2", "var3"]
}
# Template_file
data "template_file" "init" {
  template = "${file("${path.module}/script.sh.tpl")}"
  vars = {
    myvariables = join(" ", local.myvars)
  }
}
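(An aside of my own: the template_file data source belongs to the archived template provider; on Terraform 0.12 and later the built-in templatefile() function does the same job without a provider. A hedged equivalent of the data source above:)
locals {
  # same rendering as data.template_file.init, as a plain function call
  init_rendered = templatefile("${path.module}/script.sh.tpl", {
    myvariables = join(" ", local.myvars)
  })
}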
# cloud-init to copy myscript to the instance "/usr/tmp/myscript.sh.tpl"
# I am not really sure how to move my rendered template into my instance
data "cloudinit_config" "userdata" {
gzip = false
base64_encode = false
part {
content_type = "text/cloud-config"
filename = "cloud-config.yaml"
content = local.cloud_config_config
}
depends_on = [ data.template_file.init ]
}
# User Data
user_data = data.cloudinit_config.userdata.rendered
This configuration throws the error below and I am not really sure why:
│ Error: Incorrect attribute value type
│
│ on main.tf line 31, in data "cloudinit_config" "userdata":
│ 31: content = data.template_file.init.*.rendered
│ ├────────────────
│ │ data.template_file.init is object with 5 attributes
│
│ Inappropriate value for attribute "content": string required.
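The error itself gives a hint: the part's content must be a single string, but data.template_file.init without an index is the whole data-source object, and data.template_file.init.*.rendered is a list. With a single (non-count) data source, .rendered is already a string. A minimal sketch of wiring it into the commented-out write_files entry, assuming one rendered script should land on the instance; because the entry declares encoding = "b64", the content has to be base64-encoded to match:
locals {
  cloud_config_config = <<-END
    #cloud-config
    ${jsonencode({
      write_files = [
        {
          path        = "/usr/tmp/myscript.sh"
          permissions = "0744"
          owner       = "root:root"
          encoding    = "b64"
          # .rendered is a plain string on a single data source;
          # base64encode matches the declared "b64" encoding above
          content     = base64encode(data.template_file.init.rendered)
        },
      ]
    })}
  END
}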

Related

How to pass an IP address from Terraform to Ansible [duplicate]

I am trying to create an Ansible inventory file using the local_file resource in Terraform (I am open to suggestions for doing it a different way).
module "vm" config:
resource "azurerm_linux_virtual_machine" "vm" {
for_each = { for edit in local.vm : edit.name => edit }
name = each.value.name
resource_group_name = var.vm_rg
location = var.vm_location
size = each.value.size
admin_username = var.vm_username
admin_password = var.vm_password
disable_password_authentication = false
network_interface_ids = [azurerm_network_interface.edit_seat_nic[each.key].id]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
output "vm_ips" {
value = toset([
for vm_ips in azurerm_linux_virtual_machine.vm : vm_ips.private_ip_address
])
}
When I run terraform plan with the above configuration I get:
Changes to Outputs:
  + test = [
      + "10.1.0.4",
    ]
Now, in my main TF I have the configuration for local_file as follows:
resource "local_file" "ansible_inventory" {
filename = "./ansible_inventory/ansible_inventory.ini"
content = <<EOF
[vm]
${module.vm.vm_ips}
EOF
}
This returns the error below:
Error: Invalid template interpolation value
on main.tf line 92, in resource "local_file" "ansible_inventory":
90: content = <<EOF
91: [vm]
92: ${module.vm.vm_ips}
93: EOF
module.vm.vm_ips is set of string with 1 element
Cannot include the given value in a string template: string required.
Any suggestions on how to inject the list of IPs from the output into the local file while still being able to format the rest of the text in the file?
If you want the Ansible inventory to be statically sourced from a file in INI format, then you basically need to render a template in Terraform to produce the desired output.
module/templates/inventory.tmpl:
[vm]
%{ for ip in ips ~}
${ip}
%{ endfor ~}
An alternative suggestion from @mdaniel:
[vm]
${join("\n", ips)}
module/config.tf:
resource "local_file" "ansible_inventory" {
  content = templatefile("${path.module}/templates/inventory.tmpl",
    { ips = module.vm.vm_ips }
  )
  filename        = "${path.module}/ansible_inventory/ansible_inventory.ini"
  file_permission = "0644"
}
A couple of additional notes though:
You can modify your output to be the entire map of objects of exported attributes like:
output "vms" {
value = azurerm_linux_virtual_machine.vm
}
and then you can access more information about the instances to populate in your inventory. Your templatefile argument would still be the module output, but the for expression(s) in the template would look considerably different depending upon what you want to add.
You can also utilize the YAML or JSON inventory formats for Ansible static inventory. With those, you can then leverage the yamldecode or jsondecode Terraform functions to make the HCL2 data structure transformation much easier. The template file would become a good bit cleaner in that situation for more complex inventories.
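(A hedged sketch of that YAML route; the group name and file path are illustrative choices of mine, not from the question:)
resource "local_file" "ansible_inventory_yaml" {
  # yamlencode builds the YAML inventory directly from HCL values,
  # so no %{ for } template loops are required
  content = yamlencode({
    vm = {
      hosts = { for ip in module.vm.vm_ips : ip => {} }
    }
  })
  filename        = "${path.module}/ansible_inventory/inventory.yaml"
  file_permission = "0644"
}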

aws_imagebuilder_component issue with template_file

I need to deploy a bash script using the aws_imagebuilder_component resource; I am using a template_file to populate an array inside my script. I am trying to figure out the correct way to render the template inside the imagebuilder_component resource.
I am pretty sure that the formatting of my script is the main issue :(
This is the error that I keep getting; it seems like an issue with the way I am formatting the script inside the YAML. Can you please assist? I have not worked with Image Builder or with yamlencode before.
Error: error creating Image Builder Component: InvalidParameterValueException: The value supplied for parameter 'data' is not valid. Failed to parse document. Error: line 1: cannot unmarshal string phases:... into Document.
# Image builder component
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  data     = yamlencode(data.template_file.init.rendered)
  version  = "1.0.0"
}
# template_file
data "template_file" "init" {
  template = "${file("${path.module}/myscript.yml")}"
  vars = {
    devnames = join(" ", local.devnames)
  }
}
# myscript.yml
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: pre-requirements
        action: ExecuteBash
        inputs:
          commands:
            - |
              #!/bin/bash
              for i in `seq 0 6`; do
                nvdev="/dev/nvm${i}n1"
                if [ -e $nvdev ]; then
                  mapdev="${devnames[i]}"
                  if [[ -z "$mapdev" ]]; then
                    mapdev="${devnames[i]}"
                  fi
                else
                  ln -s $nvdev $mapdev
                  echo "symlink created: ${nvdev} to ${mapdev}"
                fi
              done
# tfvars
vols = {
  data01 = {
    devname = "/dev/xvde"
    size    = "200"
  }
  data02 = {
    devname = "/dev/xvdf"
    size    = "300"
  }
}
Variables.tf: populating the list "devnames" from the map object "vols" shown above:
locals {
  devnames = [for key, value in var.vols : value.devname]
}
main: template_file uses the list "devnames" to assign its values to the devnames variable, which is used inside myscript.yml:
devnames = join(" ", local.devnames)
At this point, everything is working without issues.
But when the following is executed, it fails and complains about the formatting of the template that was rendered from myscript.yml.
I am doing something wrong here that I cannot figure out:
## Image builder component
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  data     = yamlencode(data.template_file.init.rendered)
  version  = "1.0.0"
}
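The error message points at the likely cause: data.template_file.init.rendered is already a YAML document, and wrapping it in yamlencode() re-encodes the whole document as a single YAML string scalar, which matches the "cannot unmarshal string phases:... into Document" complaint. A hedged sketch of the fix (my reading of the error, not a confirmed answer from this thread) is to pass the rendered template through untouched:
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  # .rendered is already valid YAML; yamlencode() would turn it into
  # one quoted string scalar and break Image Builder's parser
  data     = data.template_file.init.rendered
  version  = "1.0.0"
}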

Terraform: get values for variables from YAML list

How can I read the values from the YAML file? I have run out of ideas on how to do it. I know this is a super simple example, but I still don't get what the issue is. I need to do this with count, not with for_each. The YAML decoding is done by Terragrunt.
aws:
  - accounts: "dev"
    id: "523134851043"
    private_subnets:
      eu-central-1a: "10.44.4.96/27"
      eu-central-1b: "10.44.5.128/27"
      eu-central-1c: "10.44.6.160/27"
  - accounts: "prod"
    id: "098453041227"
    private_subnets:
      eu-central-1a: "10.44.7.0/27"
      eu-central-1b: "10.44.8.32/27"
      eu-central-1c: "10.44.9.64/27"
variable "aws" {
type = list(object({
accounts: string
id: string
private_subnets: list(object({
cidr: string
}))
}))
}
resource "aws_subnet" "private" {
count = length(var.aws.accounts[*].private_subnets)
availability_zone = element(keys(var.aws.accounts[*].private_subnets), count.index)
cidr_block = element(values(var.aws.accounts[*].private_subnets), count.index)
map_public_ip_on_launch = false
vpc_id = aws_vpc.this.id
It produces this error.
│ Error: Unsupported attribute
│
│ on test.tf line 40, in resource "aws_subnet" "private":
│ 40: count = length(var.aws.accounts[*].private_subnets)
│ ├────────────────
│ │ var.aws is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access
│ attribute "accounts" for a specific element of the list, or across all
│ elements of the list?
╵
╷
│ Error: Unsupported attribute
│
│ on test.tf line 42, in resource "aws_subnet" "private":
│ 42: availability_zone = element(keys(var.aws.accounts[*].private_subnets), count.index)
│ ├────────────────
│ │ var.aws is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access
│ attribute "accounts" for a specific element of the list, or across all
│ elements of the list?
╵
╷
│ Error: Unsupported attribute
│
│ on test.tf line 43, in resource "aws_subnet" "private":
│ 43: cidr_block = element(values(var.aws.accounts[*].private_subnets), count.index)
│ ├────────────────
│ │ var.aws is a list of object, known only after apply
│
│ Can't access attributes on a list of objects. Did you mean to access
│ attribute "accounts" for a specific element of the list, or across all
│ elements of the list?
╵
ERRO[0008] 1 error occurred:
* exit status 1
First, the variable type doesn't match your data and the code you are trying to use. It should be:
variable "aws" {
type = list(object({
accounts: string
id: string
private_subnets: map(string)
}))
}
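(Side note, an assumption on my part since the question says Terragrunt performs the YAML decoding: the plain-Terraform equivalent would look roughly like this, with accounts.yaml being a hypothetical file name.)
locals {
  # yamldecode turns the YAML document into an HCL2 value;
  # .aws selects the top-level list of account objects
  decoded = yamldecode(file("${path.module}/accounts.yaml"))
}
# local.decoded.aws can then be passed wherever var.aws is expected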
Then, I don't understand why you couldn't use for_each. It normally leads to code that is easier to understand and maintain, especially when refactoring; count will cause resource recreations if the order of the data changes.
With for_each you only need to make sure to use a unique index string, e.g. <az>/<cidr> in this case.
For example:
resource "aws_subnet" "private" {
# Build a map with unique index.
# It might be cleaner to extract this to a local variable.
for_each = merge([
for account in var.aws : {
for az, cidr in account.private_subnets :
"${az}/${cidr}" => { az = az, cidr = cidr }
}
]...)
availability_zone = each.value.az
cidr_block = each.value.cidr
# ...
}
If you really want to use count:
locals {
  subnets = flatten([
    for account in var.aws : [
      for az, cidr in account.private_subnets : { az = az, cidr = cidr }
    ]
  ])
}

resource "aws_subnet" "private" {
  count             = length(local.subnets)
  availability_zone = local.subnets[count.index].az
  cidr_block        = local.subnets[count.index].cidr
  # ...
}

In a setup with two Nix Flakes, where one provides a plugin for the other's application, "path <…> is not valid". How to fix that?

I have two Nix Flakes: One contains an application, and the other contains a plugin for that application. When I build the application with the plugin, I get the error
error: path '/nix/store/3b7djb5pr87zbscggsr7vnkriw3yp21x-mainapp-go-modules' is not valid
I have no idea what this error means and how to fix it, but I can reproduce it on both macOS and Linux. The path in question is the vendor directory generated by the first step of buildGoModule.
The minimal setup to reproduce the error requires a bunch of files, so I provide a commented bash script that you can execute in an empty folder to recreate my setup:
#!/bin/bash
# I have two flakes: the main application and a plugin.
# the mainapp needs to be inside the plugin directory
# so that nix doesn't complain about the path:mainapp
# reference being outside the parent's root.
mkdir -p plugin/mainapp

# each is a go module with minimal setup
tee plugin/mainapp/go.mod <<EOF >/dev/null
module example.com/mainapp

go 1.16
EOF
tee plugin/go.mod <<EOF >/dev/null
module example.com/plugin

go 1.16
EOF

# each contains minimal Go code
tee plugin/mainapp/main.go <<EOF >/dev/null
package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}
EOF
tee plugin/main.go <<EOF >/dev/null
package plugin

import "fmt"

func init() {
    fmt.Println("initializing plugin")
}
EOF
# the mainapp is a flake that provides a function for building
# the app, as well as a default package that is the app
# without any plugins.
tee plugin/mainapp/flake.nix <<'EOF' >/dev/null
{
  description = "main application";
  inputs = {
    nixpkgs.url = github:NixOS/nixpkgs/nixos-21.11;
    flake-utils.url = github:numtide/flake-utils;
  };
  outputs = { self, nixpkgs, flake-utils }:
    let
      # buildApp builds the application from a list of plugins.
      # plugins cause the vendorSha256 to change, hence it is
      # given as additional parameter.
      buildApp = { pkgs, vendorSha256, plugins ? [] }:
        let
          # this is appended to the mainapp's go.mod so that it
          # knows about the plugin and where to find it.
          requirePlugin = plugin: ''
            require ${plugin.goPlugin.goModName} v0.0.0
            replace ${plugin.goPlugin.goModName} => ${plugin.outPath}/src
          '';
          # since buildGoModule consumes the source two times –
          # first for vendoring, and then for building –
          # we do the necessary modifications to the sources in an
          # own derivation and then hand that to buildGoModule.
          sources = pkgs.stdenvNoCC.mkDerivation {
            name = "mainapp-with-plugins-source";
            src = self;
            phases = [ "unpackPhase" "buildPhase" "installPhase" ];
            # write a plugins.go file that references the plugin's package via
            # _ = "<module path>"
            PLUGINS_GO = ''
              package main
              // Code generated by Nix. DO NOT EDIT.
              import (
                ${builtins.foldl' (a: b: a + "\n\t_ = \"${b.goPlugin.goModName}\"") "" plugins}
              )
            '';
            GO_MOD_APPEND = builtins.foldl' (a: b: a + "${requirePlugin b}\n") "" plugins;
            buildPhase = ''
              printenv PLUGINS_GO >plugins.go
              printenv GO_MOD_APPEND >>go.mod
            '';
            installPhase = ''
              mkdir -p $out
              cp -r -t $out *
            '';
          };
        in pkgs.buildGoModule {
          name = "mainapp";
          src = builtins.trace "sources at ${sources}" sources;
          inherit vendorSha256;
          nativeBuildInputs = plugins;
        };
    in (flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in rec {
        defaultPackage = buildApp {
          inherit pkgs;
          # this may be different depending on your nixpkgs; if it is, just change it.
          vendorSha256 = "sha256-pQpattmS9VmO3ZIQUFn66az8GSmB4IvYhTTCFn6SUmo=";
        };
      }
    )) // {
      lib = {
        inherit buildApp;
        # helper that parses a go.mod file for the module's name
        pluginMetadata = goModFile: {
          goModName = with builtins; head
            (match "module ([^[:space:]]+).*" (readFile goModFile));
        };
      };
    };
}
EOF
# the plugin is a flake depending on the mainapp that outputs a plugin package,
# and also a package that is the mainapp compiled with this plugin.
tee plugin/flake.nix <<'EOF' >/dev/null
{
  description = "mainapp plugin";
  inputs = {
    nixpkgs.url = github:NixOS/nixpkgs/nixos-21.11;
    flake-utils.url = github:numtide/flake-utils;
    nix-filter.url = github:numtide/nix-filter;
    mainapp.url = path:mainapp;
    mainapp.inputs = {
      nixpkgs.follows = "nixpkgs";
      flake-utils.follows = "flake-utils";
    };
  };
  outputs = { self, nixpkgs, flake-utils, nix-filter, mainapp }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in rec {
        packages = rec {
          plugin = pkgs.stdenvNoCC.mkDerivation {
            pname = "mainapp-plugin";
            version = "0.1.0";
            src = nix-filter.lib.filter {
              root = ./.;
              exclude = [ ./mainapp ./flake.nix ./flake.lock ];
            };
            # needed for mainapp to recognize this as plugin
            passthru.goPlugin = mainapp.lib.pluginMetadata ./go.mod;
            phases = [ "unpackPhase" "installPhase" ];
            installPhase = ''
              mkdir -p $out/src
              cp -r -t $out/src *
            '';
          };
          app = mainapp.lib.buildApp {
            inherit pkgs;
            # this may be different depending on your nixpkgs; if it is, just change it.
            vendorSha256 = "sha256-a6HFGFs1Bu9EkXwI+DxH5QY2KBcdPzgP7WX6byai4hw=";
            plugins = [ plugin ];
          };
        };
        defaultPackage = packages.app;
      }
    );
}
EOF
You need Nix with Flake support installed to reproduce the error.
In the plugin folder created by this script, execute
$ nix build
trace: sources at /nix/store/d5arinbiaspyjjc4ypk4h5dsjx22pcsf-mainapp-with-plugins-source
error: path '/nix/store/3b7djb5pr87zbscggsr7vnkriw3yp21x-mainapp-go-modules' is not valid
(If you get hash mismatches, just update the flakes with the correct hash; I am not quite sure whether hashing when spreading flakes outside of a repository is reproducible.)
The sources directory (shown by trace) does exist and looks okay. The path given in the error message also exists and contains modules.txt with expected content.
In the folder mainapp, nix build does run successfully, which builds the app without plugins. So what is it that I do with the plugin that makes the path invalid?
The reason is that the modules.txt file generated as part of vendoring will, in this scenario, contain the Nix store path of the plugin in its replace directive. The vendor directory is a fixed-output derivation and thus must not depend on any other derivation; the reference in modules.txt violates this.
This can only be fixed by copying the plugin's sources into the sources derivation – that way, the replace path can be relative and thus references no other Nix store path.

How to test for existence of a file on the Puppet Master

In a customized module on Puppet I have
g_iptables
├── files
│   └── fqdn-of-server
├── lib
│   └── puppet
│       └── parser
│           └── functions
│               └── file_exists.rb
└── manifests
    └── init.pp
and I want the module to do something depending on whether or not the file "fqdn-of-server" exists on the Puppet Master. Googling got me a file_exists.rb function:
#!/usr/bin/ruby
require 'puppet'

module Puppet::Parser::Functions
  newfunction(:file_exists, :type => :rvalue) do |args|
    if File.exists?(args[0])
      return 1
    else
      return 0
    end
  end
end
and this does work when put in something like:
$does_fqdn_file_exists = file_exists("/tmp/$fqdn")
if $does_fqdn_file_exists == 1 {
  ...
}
in my manifest init.pp (of course $fqdn is a facter fact). The problem is that it works only on the client: $does_fqdn_file_exists is 1 if /tmp/$fqdn exists on the client $fqdn; it does not work on the Puppet Master.
Also, I want to use puppet:/// URI structures in this construct, but so far my function doesn't understand this URI.
Can somebody help me? The Ruby function stems from someone on the web who claims that it checks file existence on the master, which is not the case (at least not as far as I can see).
The solution Alex G provided, with some corrections, works for me and may be useful for someone else:
if file('/does/it/exist', '/dev/null') != '' {
  # /does/it/exist must exist, do things here
}
On the Puppet Master you could test it like this:
# create a temporary file that may look like this:
]$ cat /tmp/checkfile.pp
$does_fqdn_file_exists = file_exists("/tmp/$fqdn")
notify{ "Check File = $does_fqdn_file_exists" : }

# Then run it:
]$ puppet apply /tmp/checkfile.pp
Here's a way to do it with the standard function library. Make a file in a known location with the content "not found", and use this check:
if file('/does/it/exist', '/file/containing/not_found') != 'not found' {
  # /does/it/exist must exist, do things here
}
This is because the file() function in Puppet will read the contents of the first file provided that actually exists. Kind of a kludge to use it this way, but it works and doesn't require modifying the default set of functions.
