I need to deploy a bash script using the aws_imagebuilder_component resource; I am using a template_file data source to populate an array inside my script. I am trying to figure out the correct way to render the template inside the aws_imagebuilder_component resource.
I am pretty sure that the formatting of my script is the main issue :(
This is the error that I keep getting; it seems like an issue with the way I am formatting the script inside the YAML. Can you please assist? I have not worked with Image Builder or with yamlencode before.
Error: error creating Image Builder Component: InvalidParameterValueException: The value supplied for parameter 'data' is not valid. Failed to parse document. Error: line 1: cannot unmarshal string phases:... into Document.
# Image builder component
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  data     = yamlencode(data.template_file.init.rendered)
  version  = "1.0.0"
}
# template_file
data "template_file" "init" {
  template = "${file("${path.module}/myscript.yml")}"

  vars = {
    devnames = join(" ", local.devnames)
  }
}
# myscript.yml
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: pre-requirements
        action: ExecuteBash
        inputs:
          commands:
            - |
              #!/bin/bash
              for i in `seq 0 6`; do
                nvdev="/dev/nvme${i}n1"
                if [ -e "$nvdev" ]; then
                  mapdev="${devnames[i]}"
                  if [[ -n "$mapdev" ]]; then
                    ln -s "$nvdev" "$mapdev"
                    echo "symlink created: ${nvdev} to ${mapdev}"
                  fi
                fi
              done
# tfvars
vols = {
  data01 = {
    devname = "/dev/xvde"
    size    = "200"
  }
  data02 = {
    devname = "/dev/xvdf"
    size    = "300"
  }
}
Variables.tf: populating the list "devnames" from the map object "vols" shown above:
locals {
  devnames = [for key, value in var.vols : value.devname]
}
main: template_file uses the list "devnames" to assign its values to the devnames template variable; the devnames variable is used inside myscript.yml:
devnames = join(" ", local.devnames)
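With the vols map above, local.devnames evaluates to ["/dev/xvde", "/dev/xvdf"], so the devnames variable inside the template renders as the single string "/dev/xvde /dev/xvdf".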
At this point, everything is working without issues. But when the aws_imagebuilder_component resource shown above is applied, it fails and complains about the formatting of the template rendered from myscript.yml.
I am doing something wrong here that I cannot figure out.
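For what it's worth, a likely culprit is the yamlencode() call itself: the rendered template is already YAML text, and yamlencode() serializes that entire string as a single YAML scalar, which matches the "cannot unmarshal string phases:... into Document" part of the error. A minimal sketch of the fix, assuming myscript.yml renders to valid YAML on its own, is to pass the rendered string through unmodified:

# Image builder component
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  version  = "1.0.0"

  # the rendered template is already YAML, so it must not be wrapped in yamlencode()
  data = data.template_file.init.rendered
}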
I am trying to create an Ansible inventory file using the local_file resource in Terraform (I am open to suggestions on doing it in a different way).
module "vm" config:
resource "azurerm_linux_virtual_machine" "vm" {
for_each = { for edit in local.vm : edit.name => edit }
name = each.value.name
resource_group_name = var.vm_rg
location = var.vm_location
size = each.value.size
admin_username = var.vm_username
admin_password = var.vm_password
disable_password_authentication = false
network_interface_ids = [azurerm_network_interface.edit_seat_nic[each.key].id]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
output "vm_ips" {
value = toset([
for vm_ips in azurerm_linux_virtual_machine.vm : vm_ips.private_ip_address
])
}
When I run terraform plan with the above configuration I get:
Changes to Outputs:
+ test = [
+ "10.1.0.4",
]
Now, in my main TF I have the configuration for local_file as follows:
resource "local_file" "ansible_inventory" {
filename = "./ansible_inventory/ansible_inventory.ini"
content = <<EOF
[vm]
${module.vm.vm_ips}
EOF
}
This returns the error below:
Error: Invalid template interpolation value
on main.tf line 92, in resource "local_file" "ansible_inventory":
90: content = <<EOF
91: [vm]
92: ${module.vm.vm_ips}
93: EOF
module.vm.vm_ips is set of string with 1 element
Cannot include the given value in a string template: string required.
Any suggestions on how to inject the list of IPs from the output into the local file while also being able to format the rest of the text in the file?
If you want the Ansible inventory to be statically sourced from a file in INI format, then you basically need to render a template in Terraform to produce the desired output.
module/templates/inventory.tmpl:
[vm]
%{ for ip in ips ~}
${ip}
%{ endfor ~}
an alternative suggestion from @mdaniel:
[vm]
${join("\n", ips)}
module/config.tf:
resource "local_file" "ansible_inventory" {
content = templatefile("${path.module}/templates/inventory.tmpl",
{ ips = module.vm.vm_ips }
)
filename = "${path.module}/ansible_inventory/ansible_inventory.ini"
file_permission = "0644"
}
A couple of additional notes though:
You can modify your output to be the entire map of objects of exported attributes like:
output "vms" {
value = azurerm_linux_virtual_machine.vm
}
and then you can access more information about the instances to populate in your inventory. Your templatefile argument would still be the module output, but the for expression(s) in the template would look considerably different depending upon what you want to add.
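For instance, a hypothetical inventory.tmpl consuming the full map of objects (the variable name vms and the attribute paths here are just illustrative) could emit one host line per VM:

[vm]
%{ for name, vm in vms ~}
${name} ansible_host=${vm.private_ip_address}
%{ endfor ~}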
You can also utilize the YAML or JSON inventory formats for Ansible static inventory. With those, you can then leverage the yamlencode or jsonencode Terraform functions to make the HCL2 data structure transformation much easier. The template file would become a good bit cleaner in that situation for more complex inventories.
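A minimal sketch of that approach, assuming the vms output above is exposed as module.vm.vms (the group and attribute names are illustrative):

resource "local_file" "ansible_inventory" {
  filename        = "${path.module}/ansible_inventory/inventory.yaml"
  file_permission = "0644"

  # yamlencode turns the HCL2 object straight into a YAML static inventory,
  # so no template file is needed
  content = yamlencode({
    vm = {
      hosts = {
        for name, vm in module.vm.vms : name => {
          ansible_host = vm.private_ip_address
        }
      }
    }
  })
}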
I am using template_file to pass a list from tfvars to my shell script. I escape "$" as "$$" in my tpl.sh script, but then my script ends up ignoring the values of my variables. How do I get around this?
In the snippet below, the script ignores the values of $${i} and $${device_names[i]} because of the two dollar signs:
for i in `seq 0 6`; do
  block_device="/dev/nvme$${i}n1"
  if [ -e $block_device ]; then
    mapping_device="$${device_names[i]}"
I attempted it with eval and got an error in Terraform:
for i in `seq 0 6`; do
  block_device="/dev/nvme" eval "${i}n1"
  if [ -e $block_device ]; then
    mapping_device=eval "${device_names[i]}"
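As an aside, a quick sketch of how the escaping behaves, assuming the file is rendered through template_file or templatefile: the doubled dollar sign is consumed by Terraform's template engine, not by the shell.

# in tpl.sh (template source):  block_device="/dev/nvme$${i}n1"
# after Terraform renders it:   block_device="/dev/nvme${i}n1"

So $${i} is not ignored; it becomes ${i} in the rendered script, and bash expands it at run time. No eval is needed.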
It turns out that the above issue was that I was not calling the template correctly.
But now I am trying to call the template in user_data and getting the below error; can someone please advise?
locals {
  cloud_config_config = <<-END
    #cloud-config
    ${jsonencode({
      write_files = [
        {
          path        = "/usr/tmp/myscript.sh.tpl"
          permissions = "0744"
          owner       = "root:root"
          encoding    = "b64"
          #content    = "${data.template_file.init.*.rendered}"
        },
      ]
    })}
  END

  myvars = ["var1", "var2", "var3"]
}
# Template_file
data "template_file" "init" {
  template = "${file("${path.module}/script.sh.tpl")}"

  vars = {
    myvariables = join(" ", local.myvars)
  }
}
# cloud-init to copy myscript to the instance "/usr/tmp/myscript.sh.tpl"
# I am not really sure how to move my rendered template into my instance
data "cloudinit_config" "userdata" {
  gzip          = false
  base64_encode = false

  part {
    content_type = "text/cloud-config"
    filename     = "cloud-config.yaml"
    content      = local.cloud_config_config
  }

  depends_on = [data.template_file.init]
}
# User Data
user_data = data.cloudinit_config.userdata.rendered
This configuration throws the error below, and I am not really sure why:
│ Error: Incorrect attribute value type
│
│ on main.tf line 31, in data "cloudinit_config" "userdata":
│ 31: content = data.template_file.init.*.rendered
│ ├────────────────
│ │ data.template_file.init is object with 5 attributes
│
│ Inappropriate value for attribute "content": string required.
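A sketch of a likely fix, assuming a single template_file instance: data.template_file.init.*.rendered is a splat expression yielding a collection, while the part content must be a single string; and since the write_files entry declares encoding = "b64", the content should be base64-encoded to match.

locals {
  cloud_config_config = <<-END
    #cloud-config
    ${jsonencode({
      write_files = [
        {
          path        = "/usr/tmp/myscript.sh.tpl"
          permissions = "0744"
          owner       = "root:root"
          encoding    = "b64"
          # a single rendered string, base64-encoded to match encoding = "b64"
          content     = base64encode(data.template_file.init.rendered)
        },
      ]
    })}
  END
}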
I have two Nix Flakes: One contains an application, and the other contains a plugin for that application. When I build the application with the plugin, I get the error
error: path '/nix/store/3b7djb5pr87zbscggsr7vnkriw3yp21x-mainapp-go-modules' is not valid
I have no idea what this error means or how to fix it, but I can reproduce it on both macOS and Linux. The path in question is the vendor directory generated by the first step of buildGoModule.
The minimal setup to reproduce the error requires a bunch of files, so I provide a commented bash script that you can execute in an empty folder to recreate my setup:
#!/bin/bash
# I have two flakes: the main application and a plugin.
# the mainapp needs to be inside the plugin directory
# so that nix doesn't complain about the path:mainapp
# reference being outside the parent's root.
mkdir -p plugin/mainapp
# each is a go module with minimal setup
tee plugin/mainapp/go.mod <<EOF >/dev/null
module example.com/mainapp
go 1.16
EOF
tee plugin/go.mod <<EOF >/dev/null
module example.com/plugin
go 1.16
EOF
# each contain minimal Go code
tee plugin/mainapp/main.go <<EOF >/dev/null
package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}
EOF
tee plugin/main.go <<EOF >/dev/null
package plugin

import "fmt"

func init() {
    fmt.Println("initializing plugin")
}
EOF
# the mainapp is a flake that provides a function for building
# the app, as well as a default package that is the app
# without any plugins.
tee plugin/mainapp/flake.nix <<'EOF' >/dev/null
{
  description = "main application";

  inputs = {
    nixpkgs.url = github:NixOS/nixpkgs/nixos-21.11;
    flake-utils.url = github:numtide/flake-utils;
  };

  outputs = {self, nixpkgs, flake-utils}:
    let
      # buildApp builds the application from a list of plugins.
      # plugins cause the vendorSha256 to change, hence it is
      # given as an additional parameter.
      buildApp = { pkgs, vendorSha256, plugins ? [] }:
        let
          # this is appended to the mainapp's go.mod so that it
          # knows about the plugin and where to find it.
          requirePlugin = plugin: ''
            require ${plugin.goPlugin.goModName} v0.0.0
            replace ${plugin.goPlugin.goModName} => ${plugin.outPath}/src
          '';
          # since buildGoModule consumes the source two times –
          # first for vendoring, and then for building –
          # we do the necessary modifications to the sources in a
          # separate derivation and then hand that to buildGoModule.
          sources = pkgs.stdenvNoCC.mkDerivation {
            name = "mainapp-with-plugins-source";
            src = self;
            phases = [ "unpackPhase" "buildPhase" "installPhase" ];
            # write a plugins.go file that blank-imports the plugin's package via
            # _ "<module path>"
            PLUGINS_GO = ''
              package main
              // Code generated by Nix. DO NOT EDIT.
              import (
              ${builtins.foldl' (a: b: a + "\n\t_ \"${b.goPlugin.goModName}\"") "" plugins}
              )
            '';
            GO_MOD_APPEND = builtins.foldl' (a: b: a + "${requirePlugin b}\n") "" plugins;
            buildPhase = ''
              printenv PLUGINS_GO >plugins.go
              printenv GO_MOD_APPEND >>go.mod
            '';
            installPhase = ''
              mkdir -p $out
              cp -r -t $out *
            '';
          };
        in pkgs.buildGoModule {
          name = "mainapp";
          src = builtins.trace "sources at ${sources}" sources;
          inherit vendorSha256;
          nativeBuildInputs = plugins;
        };
    in (flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in rec {
        defaultPackage = buildApp {
          inherit pkgs;
          # this may be different depending on your nixpkgs; if it is, just change it.
          vendorSha256 = "sha256-pQpattmS9VmO3ZIQUFn66az8GSmB4IvYhTTCFn6SUmo=";
        };
      }
    )) // {
      lib = {
        inherit buildApp;
        # helper that parses a go.mod file for the module's name
        pluginMetadata = goModFile: {
          goModName = with builtins; head
            (match "module ([^[:space:]]+).*" (readFile goModFile));
        };
      };
    };
}
EOF
# the plugin is a flake depending on the mainapp that outputs a plugin package,
# and also a package that is the mainapp compiled with this plugin.
tee plugin/flake.nix <<'EOF' >/dev/null
{
  description = "mainapp plugin";

  inputs = {
    nixpkgs.url = github:NixOS/nixpkgs/nixos-21.11;
    flake-utils.url = github:numtide/flake-utils;
    nix-filter.url = github:numtide/nix-filter;
    mainapp.url = path:mainapp;
    mainapp.inputs = {
      nixpkgs.follows = "nixpkgs";
      flake-utils.follows = "flake-utils";
    };
  };

  outputs = {self, nixpkgs, flake-utils, nix-filter, mainapp}:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in rec {
        packages = rec {
          plugin = pkgs.stdenvNoCC.mkDerivation {
            pname = "mainapp-plugin";
            version = "0.1.0";
            src = nix-filter.lib.filter {
              root = ./.;
              exclude = [ ./mainapp ./flake.nix ./flake.lock ];
            };
            # needed for mainapp to recognize this as a plugin
            passthru.goPlugin = mainapp.lib.pluginMetadata ./go.mod;
            phases = [ "unpackPhase" "installPhase" ];
            installPhase = ''
              mkdir -p $out/src
              cp -r -t $out/src *
            '';
          };
          app = mainapp.lib.buildApp {
            inherit pkgs;
            # this may be different depending on your nixpkgs; if it is, just change it.
            vendorSha256 = "sha256-a6HFGFs1Bu9EkXwI+DxH5QY2KBcdPzgP7WX6byai4hw=";
            plugins = [ plugin ];
          };
        };
        defaultPackage = packages.app;
      }
    );
}
EOF
EOF
You need Nix with Flake support installed to reproduce the error.
In the plugin folder created by this script, execute
$ nix build
trace: sources at /nix/store/d5arinbiaspyjjc4ypk4h5dsjx22pcsf-mainapp-with-plugins-source
error: path '/nix/store/3b7djb5pr87zbscggsr7vnkriw3yp21x-mainapp-go-modules' is not valid
(If you get hash mismatches, just update the flakes with the correct hash; I am not quite sure whether hashing is reproducible when flakes are spread outside of a repository.)
The sources directory (shown by trace) does exist and looks okay. The path given in the error message also exists and contains modules.txt with expected content.
In the folder mainapp, nix build does run successfully, which builds the app without plugins. So what is it that I do with the plugin that makes the path invalid?
The reason is that the modules.txt file generated as part of vendoring will, in this scenario, contain the Nix store path used in the replace directive. The vendor directory is a fixed-output derivation and thus must not depend on any other derivations; the reference in modules.txt violates this.
This can only be fixed by copying the plugin's sources into the sources derivation; that way, the replace path can be relative and thus references no other Nix store path.
I am receiving JSON from an http Terraform data source:
data "http" "example" {
url = "${var.cloudwatch_endpoint}/api/v0/components"
# Optional request headers
request_headers {
"Accept" = "application/json"
"X-Api-Key" = "${var.api_key}"
}
}
It outputs the following.
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON, I pass it to an external data source, which is a simple Ruby function. Here is the Terraform to pass it:
data "external" "component_ids" {
program = ["ruby", "./fetchComponent.rb",]
query = {
data = "${data.http.example.body}"
}
}
Here is the Ruby function:
#!/usr/bin/env ruby
require 'json'

# read the JSON query object Terraform sends on stdin and echo it back as JSON
data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access the data inside of the external data source, but I am unable to.
Ultimately, what I want to do is create a list of the componentID values located within the external data source.
Some things I have tried:
* output.external: key "0" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result[0]}
* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
${element(data.external.component_ids.result["componentID"],0)}
* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result["componentID"]}
* output.external: lookup: lookup failed to find 'componentID' in:
${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think about the solution.
Terraform can't decode JSON directly before 0.12. But there is a workaround for working with nested lists.
Your Ruby needs to be adjusted to produce output shaped like the variable http below; then you should be fine to get what you need.
$ cat main.tf
variable "http" {
  type    = "list"
  default = [{ componentID = "k8QEbeuHdDnU", name = "Jenkins" }]
}

output "http" {
  value = "${lookup(var.http[0], "componentID")}"
}
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
http = k8QEbeuHdDnU
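On Terraform 0.12 and later, the Ruby/external detour is unnecessary: jsondecode can parse the response body directly. A minimal sketch against the same http data source:

locals {
  # parse the JSON array returned by the endpoint into a list of objects
  components = jsondecode(data.http.example.body)

  # collect just the componentID values
  component_ids = [for c in local.components : c.componentID]
}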
I'm trying to send data to Elasticsearch using Logagent, but while there doesn't seem to be any error sending the data, the index isn't being created in ELK. I'm trying to find the index by creating a new index pattern via the Kibana GUI, but the index does not seem to exist. This is my logagent.conf right now:
input:
  # bro-start:
  #   module: command
  #   # store BRO logs in /tmp/bro in JSON format
  #   command: mkdir /tmp/bro; cd /tmp/bro; /usr/local/bro/bin/bro -i eth0 -e 'redef LogAscii::use_json=T;'
  #   sourceName: bro
  #   restart: 1
  # read the BRO logs from the file system ...
  files:
    - '/usr/local/bro/logs/current/*.log'

parser:
  json:
    enabled: true
    transform: !!js/function >
      function (sourceName, parsed, config) {
        var src = sourceName.split('/')
        // generate Elasticsearch _type out of the log file sourceName
        // e.g. "dns" from /tmp/bro/dns.log
        if (src && src[src.length-1]) {
          parsed._type = src[src.length-1].replace(/\.log/g,'')
        }
        // store log file path in each doc
        parsed.logSource = sourceName
        // convert Bro timestamps to JavaScript timestamps
        if (parsed.ts) {
          parsed['@timestamp'] = new Date(parsed.ts * 1000)
        }
      }

output:
  stdout: false
  elasticsearch:
    module: elasticsearch
    url: http://10.10.10.10:9200
    index: bro_logs
Maybe I have to create the index mappings manually? I don't know.
Thank you for any advice or insight!
I found out that there actually was an error. I was trying to send authentication via a field called "auth", but that doesn't exist. I can do url: https://USERNAME:PASSWORD@10.10.10.10:9200 though.