Terraform template_cloudinit_config multiple part execution order is wrong - amazon-ec2

I am using Terraform to build my EC2 instances, and as part of the instance bootstrap I added a cloud-init config to run multiple user data scripts. However, the part with content_type = "text/x-shellscript" is always executed first. I checked /var/log/cloud-init-output.log, and it shows the shell script is invoked first. How do I configure the shell script to run last?
data "template_cloudinit_config" "myapp_cloudinit_config" {
gzip = false
base64_encode = false
# Main cloud-config configuration file.
part {
content_type = "text/cloud-config"
content = "${data.template_file.base_bootstrap_file.rendered}"
merge_type = "list(append)+dict(recurse_array)+str()"
}
part {
content_type = "text/cloud-config"
content = "${module.template_file_appsec_init.appsec_user_data_rendered}"
merge_type = "list(append)+dict(recurse_array)+str()"
}
part {
content_type = "text/x-shellscript"
content = "${module.template_file_beat_init.beat_user_data_rendered}"
}
}
The shell script looks like below:
module "template_file_beat_init" {
  source = "url" # the source URL contains the zip file which includes the shell script below
}

#!/bin/sh
deploy_the_app() {
  # invoke ansible playbook execution
}
deploy_the_app
Cloud provider: AWS
OS: RHEL 8.3
cloud-init --version: /usr/bin/cloud-init 19.4
Terraform v0.11.8
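For reference, the multipart user data that Terraform assembles can be dumped with an output to confirm the part order before launching an instance. This is a minimal sketch only; the output name is arbitrary, and it simply exposes the data source's rendered attribute:

output "myapp_user_data_rendered" {
  # Shows the assembled MIME multipart document, including the order of the parts
  value = "${data.template_cloudinit_config.myapp_cloudinit_config.rendered}"
}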

Related

Is there a way, inside a terraform script, to retrieve the latest version of a layer?

I have lambdas that reference a layer. This layer is maintained by someone else, and when a new version is released I have to update my Terraform code to put the latest version in the ARN (here 19).
Is there a way, in the Terraform script, to get the latest version and use it?
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
layers = [
"arn:aws:lambda:eu-central-1:587522145896:layer:my-layer-name:19"
]
}
Thanks.
PS: this means the layer's Terraform script is not in mine; it is another script that I don't have access to.
You can use the aws_lambda_layer_version data source to discover the latest version.
For example:
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
layers = [
data.aws_lambda_layer_version.layer_version.arn
]
}
data "aws_lambda_layer_version" "layer_version" {
layer_name = "my-layer-name"
}

How can I run a shell script on multiple VMWare vm's created by terraform module?

I am using this module to spin up multiple VMs on my VMware cluster, https://registry.terraform.io/modules/Terraform-VMWare-Modules/vm/vsphere/1.6.0, and I want to run a shell script on all of the VMs afterwards using a null resource. With what I currently have, it complains that the host was not given a string, which makes sense. Here is my null resource:
# main.tf
module "jenkins-linuxvm-centos7" {
  source = "Terraform-VMWare-Modules/vm/vsphere"
  ...
}

resource "null_resource" "vm" {
  triggers = {
    vm_ips = join(",", module.jenkins-linuxvm-centos7.Linux-ip)
  }

  # export TF_VAR_root_password=<pass>
  connection {
    type     = "ssh"
    host     = module.jenkins-linuxvm-centos7.Linux-ip
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}
Do I need to use a dynamic block somehow? Or how can I modify host = module.jenkins-linuxvm-centos7.Linux-ip to include all the hosts I want to run it on?
You have to run it in a for_each loop. Below is example code where I am looping over the sql_var map variable; you will have to do the same against the output of IPs, module.jenkins-linuxvm-centos7.Linux-ip, and you should then be able to reference the IP of each machine as each.value (see the adapted sketch after the example). I don't know what your output looks like, so I am guessing. If you are new to loops, here is a nice tutorial:
https://blog.boltops.com/2020/10/04/terraform-hcl-loops-with-count-and-for-each
resource "null_resource" "instance" {
for_each = var.sql_var
provisioner "local-exec" {
command = "echo ${each.key} >> hello.txt"
}
}
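To make this concrete against the VMware module from the question, here is a minimal, untested sketch. It assumes module.jenkins-linuxvm-centos7.Linux-ip is a list of IP address strings (the output name, password variable, and script paths are carried over from the question, not verified against the module):

resource "null_resource" "vm" {
  # toset() turns the assumed list of IPs into a set so for_each can
  # create one null_resource instance per VM address.
  for_each = toset(module.jenkins-linuxvm-centos7.Linux-ip)

  connection {
    type     = "ssh"
    host     = each.value # one VM per resource instance
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}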

groovy command curl on windows Jenkins

I have a Groovy script that works on a Linux Jenkins:
import groovy.json.JsonSlurper

try {
    List<String> artifacts = new ArrayList<String>()
    // jira: get summary for a list filtered by issue type Story, label demo and project 11411
    def artifactsUrl = 'https://companyname.atlassian.net/rest/api/2/search?jql=project=11411%20and%20issuetype%20in%20(Story)%20and%20labels%20in%20(demo)+&fields=summary';
    def artifactsObjectRaw = ["curl", "-u", "someusername#xxxx.com:tokenkey", "-X", "GET", "-H", "Content-Type: application/json", "-H", "accept: application/json", "-K", "--url", "${artifactsUrl}"].execute().text;
    def parser = new JsonSlurper();
    def json = parser.parseText(artifactsObjectRaw);

    // insert all results into the list
    for (item in json.issues) {
        artifacts.add(item.fields.summary);
    }

    // return the list to the extended result
    return artifacts;
} catch (Exception e) {
    println "There was a problem fetching the artifacts " + e.message;
}
This script returns all the summaries from Jira via the API.
But when I tried to run this Groovy script on a Windows Jenkins, it did not work because Windows does not have the curl command:
def artifactsObjectRaw = ["curl", "-u","someusername#xxxx.com:tokenkey" ,"-X" ,"GET", "-H", "Content-Type: application/json", "-H", "accept: application/json","-K","--url","${artifactsUrl}"].execute().text;
How should I perform this command?
The following code:
import groovy.json.JsonSlurper

try {
    def baseUrl = 'https://companyname.atlassian.net'
    def artifactsUrl = "${baseUrl}/rest/api/2/search?jql=project=MYPROJECT&fields=summary"
    def auth = "someusername#somewhere.com:tokenkey".bytes.encodeBase64()
    def headers = ['Content-Type' : "application/json",
                   'Authorization': "Basic ${auth}"]
    def response = artifactsUrl.toURL().getText(requestProperties: headers)
    def json = new JsonSlurper().parseText(response)

    // the below will implicitly return a list of summaries, no
    // need to define an 'artifacts' list beforehand
    def artifacts = json.issues.collect { issue -> issue.fields.summary }
} catch (Exception e) {
    e.printStackTrace()
}
is pure Groovy, i.e. no need for curl. It gets the items from the Jira instance and returns a List<String> of summaries. Since we don't want any external dependencies like HttpBuilder (as you are doing this from Jenkins), we have to do the basic auth encoding manually.
Script tested (the connecting and JSON-fetching part; I did not test the extraction of the summary fields) with:
Groovy Version: 2.4.15 JVM: 1.8.0_201 Vendor: Oracle Corporation OS: Linux
against an Atlassian on-demand cloud instance.
I removed your JQL query as it didn't work for me, but you should be able to add it back as needed.
Install curl and add its path to the Windows environment variables.
Please follow the link to download curl for Windows.
I would consider using the HTTP Request plugin when making HTTP requests.
Since you are using a plugin, it does not matter whether you are running Windows or Linux as your Jenkins host.

Ansible variable from packer script

I have one variable in an Ansible script like:
- host: {{host}}
I want to send the {{host}} variable value from the Packer script, using packer build or using a Packer variable. Is there any way to do it?
Using an Ansible provisioner in Packer allows you to use both ansible_env_vars and extra_arguments.
See the documentation: https://www.packer.io/plugins/provisioners/ansible/ansible#configuration-reference
We generally use extra_arguments to pass Ansible variables over the command line:
{
  "type": "ansible",
  "playbook_file": "./my_playbook",
  "extra_arguments": ["-vvv", "--extra-vars", "host={{user `host`}}"]
}
Below is a simple example:
...
variable "gitlab_version" {
  type    = string
  default = "15.1.6"
}
...

build {
  name = local.build_name

  provisioner "ansible" {
    ...
    playbook_file   = "./ansible/playbook.yml"
    extra_arguments = ["--extra-vars", "gitlab_version=${var.gitlab_version}"]
    ...
  }
}
It works because it is a simple interpolation.

Terraform execute script before lambda creation

I have a Terraform configuration that correctly creates a Lambda function on AWS from a provided zip file.
My problem is that I always have to package the Lambda first (I use the serverless package method for this), so I would like to execute a script that packages my function and moves the zip to the right directory before Terraform creates the Lambda function.
Is that possible? Maybe using a combination of null_resource and local-exec?
You already proposed the best answer :)
When you add depends_on = ["null_resource.serverless_execution"] to your Lambda resource, you can ensure that packaging is done before the zip file is uploaded.
Example:
resource "null_resource" "serverless_execution" {
provisioner "local-exec" {
command = "serverless package ..."
}
}
resource "aws_lambda_function" "update_lambda" {
depends_on = ["null_resource.serverless_execution"]
filename = "${path.module}/path/to/package.zip"
[...]
}
https://www.terraform.io/docs/provisioners/local-exec.html
The answer is already given, but I was looking for a way to install NPM modules on the fly, zip, and then deploy the Lambda function, along with a timeout in case your Lambda function is large. So here is my finding; it may help someone else.
# Install NPM modules before creating the ZIP
resource "null_resource" "npm" {
  provisioner "local-exec" {
    command = "cd ../lambda-functions/loadbalancer-to-es/ && npm install --prod=only"
  }
}

# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
  depends_on  = ["null_resource.npm"]
}

# Create the AWS Lambda function: memory size, NodeJS version, handler, endpoint, doctype and environment settings
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  filename      = "${data.archive_file.source.output_path}"
  function_name = "someprefix-alb-logs-to-elk"
  description   = "elb-logs-to-elasticsearch"
  memory_size   = 1024
  timeout       = 900

  timeouts {
    create = "30m"
  }

  runtime          = "nodejs8.10"
  role             = "${aws_iam_role.role.arn}"
  source_code_hash = "${base64sha256(data.archive_file.source.output_path)}"
  handler          = "index.handler"
  # source_code_hash = "${base64sha256(file("/elb-logs-to-elasticsearch.zip"))}"

  environment {
    variables = {
      ELK_ENDPOINT = "someprefix-elk.dns.co"
      ELK_INDEX    = "test-web-server-"
      ELK_REGION   = "us-west-2"
      ELK_DOCKTYPE = "elb-access-logs"
    }
  }
}
