How can I run a shell script on multiple VMware VMs created by a Terraform module? - bash

I am using this module to spin up multiple VMs on my VMware cluster, https://registry.terraform.io/modules/Terraform-VMWare-Modules/vm/vsphere/1.6.0, and I want to run a shell script on all of the VMs afterwards using a null_resource. With what I currently have, it complains that the host was not given a string, which makes sense. Here is my null resource:
# main.tf
module "jenkins-linuxvm-centos7" {
  source = "Terraform-VMWare-Modules/vm/vsphere"
  ...
}

resource "null_resource" "vm" {
  triggers = {
    vm_ips = join(",", module.jenkins-linuxvm-centos7.Linux-ip)
  }

  # export TF_VAR_root_password=<pass>
  connection {
    type     = "ssh"
    host     = module.jenkins-linuxvm-centos7.Linux-ip
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}
Do I need to use a dynamic block somehow? Or how can I modify host = module.jenkins-linuxvm-centos7.Linux-ip to include all the hosts I want to run it on?

You have to run it in a for_each loop. Below is example code where I am looping over the sql_var map variable; you will have to do it against the output of IPs, module.jenkins-linuxvm-centos7.Linux-ip, and should then be able to reference the IP of each machine as something like each.value (I don't know what your output looks like, so I'm guessing; see the adapted sketch after the example below). If you are new to loops, here is one nice tutorial:
https://blog.boltops.com/2020/10/04/terraform-hcl-loops-with-count-and-for-each
resource "null_resource" "instance" {
for_each = var.sql_var
provisioner "local-exec" {
command = "echo ${each.key} >> hello.txt"
}
}
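Adapted to the module output in the question, it might look roughly like the sketch below. This assumes module.jenkins-linuxvm-centos7.Linux-ip is a plain list of IP strings; if the output has a different shape (or the IPs are only known after apply), the for_each expression needs adjusting.

resource "null_resource" "vm" {
  # One instance of this resource per VM IP. Assumes the module output is a
  # list of IP address strings; toset() converts it into the set for_each needs.
  # If the IPs are not known until apply time, for_each may refuse to plan;
  # keying by VM name instead of IP is one workaround.
  for_each = toset(module.jenkins-linuxvm-centos7.Linux-ip)

  connection {
    type     = "ssh"
    host     = each.value # a single IP per resource instance
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}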

Related

Provisioning Windows VM including File Provisioner for AWS using Terraform results in Timeout

I'm aware that several posts similar to this one already exist - I've gone through them and adapted my Terraform configuration file, but it makes no difference.
Therefore, I'd like to publish my configuration file and my use case: I'd like to provision a (Windows) virtual machine on AWS using Terraform. It works without the file provisioner part - when I include it, provisioning results in a timeout.
This includes adaptations from previous posts:
SSH connection restriction
SSH isnt working in Windows with Terraform provisioner connection type
Usage of a Security group
Terraform File provisioner can't connect ec2 over ssh. timeout - last error: dial tcp 92.242.xxx.xx:22: i/o timeout
I also get a timeout when using "winrm" instead of "ssh".
I'd be happy if you could provide any hints on the following config file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  access_key = "<my access key>"
  secret_key = "<my secret key>"
  region     = "eu-central-1"
}

resource "aws_instance" "webserver" {
  ami             = "ami-07dfec7a6d529b77a"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.sgwebserver.name]
  key_name        = aws_key_pair.pubkey.key_name
  tags = {
    "Name" = "WebServer-Win"
  }
}

resource "null_resource" "deployBundle" {
  connection {
    type        = "ssh"
    user        = "Administrator"
    private_key = "${file("C:/Users/<my user name>/aws_keypair/aws_instance.pem")}"
    host        = aws_instance.webserver.public_ip
  }
  provisioner "file" {
    source      = "files/test.txt"
    destination = "C:/test.txt"
  }
  depends_on = [aws_instance.webserver]
}

resource "aws_security_group" "sgwebserver" {
  name        = "sgwebserver"
  description = "Allow ssh inbound traffic"
  ingress {
    from_port   = 0
    to_port     = 6556
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "sgwebserver"
  }
}

resource "aws_key_pair" "pubkey" {
  key_name   = "aws-cloud"
  public_key = file("key/aws_instance.pub")
}

resource "aws_eip" "elasticip" {
  instance = aws_instance.webserver.id
}

output "eip" {
  value = aws_eip.elasticip.public_ip
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}
Thanks a lot in advance!
Windows EC2 instances don't support SSH; they support RDP. You would have to install SSH server software on the instance before you could SSH into it.
I suggest doing something like placing the file in S3, and using a user data script to trigger the Windows EC2 instance to download the file on startup.
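A rough sketch of that approach, assuming the file has been uploaded to an S3 bucket and the instance gets an IAM instance profile allowing s3:GetObject (the bucket name, key and profile name below are placeholders, not from the question):

resource "aws_instance" "webserver" {
  ami                  = "ami-07dfec7a6d529b77a"
  instance_type        = "t2.micro"
  key_name             = aws_key_pair.pubkey.key_name
  iam_instance_profile = "webserver-s3-read" # hypothetical profile with s3:GetObject

  # On Amazon Windows AMIs the <powershell> user data block is executed at
  # first boot; Read-S3Object comes from the preinstalled AWS Tools for PowerShell.
  user_data = <<-EOF
    <powershell>
    Read-S3Object -BucketName "my-deploy-bucket" -Key "test.txt" -File "C:\test.txt"
    </powershell>
  EOF

  tags = {
    "Name" = "WebServer-Win"
  }
}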

Terraform template_cloudinit_config multiple part execution order is wrong

I am using Terraform to build my EC2 instances. As part of the instance bootstrap, I added a cloud-init config to run multiple user data scripts, but the content_type = "text/x-shellscript" part is always executed first. I verified this in /var/log/cloud-init-output.log; it shows the shell script is invoked first. How do I configure the shell script to run last?
data "template_cloudinit_config" "myapp_cloudinit_config" {
gzip = false
base64_encode = false
# Main cloud-config configuration file.
part {
content_type = "text/cloud-config"
content = "${data.template_file.base_bootstrap_file.rendered}"
merge_type = "list(append)+dict(recurse_array)+str()"
}
part {
content_type = "text/cloud-config"
content = "${module.template_file_appsec_init.appsec_user_data_rendered}"
merge_type = "list(append)+dict(recurse_array)+str()"
}
part {
content_type = "text/x-shellscript"
content = "${module.template_file_beat_init.beat_user_data_rendered}"
}
}
The shell script looks like this:
module " template_file_beat_init" {
source = "url" #the source url contains the zip file which includes the below shell script
}
#!/bin/sh
deploy_the_app() {
//invoke ansible playbook execution
}
deploy_the_app
Cloud provider: AWS
OS : RHEL 8.3
cloud-init --version: /usr/bin/cloud-init 19.4
Terraform v0.11.8

Decrypting Windows Password in terraform

I'm trying to set up a Terraform script to deploy a Windows server. When running terraform apply I get the error message below:
Error: Invalid reference
on main.tf line 44, in resource "aws_instance" "server":
44: password = "${rsadecrypt(aws_instance.server[0].password_data, file(KEY_PATH))}"
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
AFAIK the resource is "aws_instance", the name is "server[0]", and the attribute is "password_data". I know I'm missing something but don't know what; any assistance would be appreciated.
The full resource block is below in case there is something I'm missing in there.
Thanks
resource "aws_instance" "server" {
ami = var.AMIS[var.AWS_REGION]
instance_type = var.AWS_INSTANCE
vpc_security_group_ids = [module.networking.security_group_id_out]
subnet_id = module.networking.subnet_id_out
## Use this count key to determine how many servers you want to create.
count = 1
key_name = var.KEY_NAME
tags = {
# Name = "Server-Cloud"
Name = "Server-${count.index}"
}
root_block_device {
volume_size = var.VOLUME_SIZE
volume_type = var.VOLUME_TYPE
delete_on_termination = true
}
get_password_data = true
provisioner "remote-exec" {
connection {
host = coalesce(self.public_ip, self.private_ip)
type = "winrm"
## Need to provide your own .pem key that can be created in AWS or on your machine for each provisioned EC2.
password = ${rsadecrypt(aws_instance.server[0].password_data, file(KEY_PATH))}
}
inline = [
"powershell -ExecutionPolicy Unrestricted C:\\Users\\Administrator\\Desktop\\installserver.ps1 -Schedule",
]
}
provisioner "local-exec" {
command = "echo ${self.public_ip} >> ../public_ips.txt"
}
}
Use password = "${rsadecrypt(self.password_data, file("/root/.ssh/id_rsa"))}" without user = "admin", as below:
resource "aws_instance" "windows_server" {
get_password_data = "true"
connection {
host = "${self.public_ip}"
type = "winrm"
https = false
password = "${rsadecrypt(self.password_data, file("/root/.ssh/id_rsa"))}"
agent = false
insecure = "true"
}
}
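Applied to the resource in the question, the connection block would then look roughly like this. The surrounding quotes are no longer required in Terraform 0.12+, and var.KEY_PATH assumes KEY_PATH is declared as an input variable (the question only shows it as a bare name):

provisioner "remote-exec" {
  connection {
    host = coalesce(self.public_ip, self.private_ip)
    type = "winrm"
    # self refers to the enclosing aws_instance, so no aws_instance.server[0]
    # reference is needed; password_data is populated because
    # get_password_data = true is set on the resource.
    password = rsadecrypt(self.password_data, file(var.KEY_PATH))
  }

  inline = [
    "powershell -ExecutionPolicy Unrestricted C:\\Users\\Administrator\\Desktop\\installserver.ps1 -Schedule",
  ]
}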

Ansible variable from packer script

I have one variable in an Ansible script like
- host:{{host}}
I want to send the {{host}} variable value from the Packer script, using packer build or using a Packer variable. Is there any way to do it?
Using an ansible provisioner in packer allows you to use both ansible_env_vars and extra_arguments.
See the docs: https://www.packer.io/plugins/provisioners/ansible/ansible#configuration-reference
So we generally use extra_arguments to pass Ansible variables over the command line:
{
  "type": "ansible",
  "playbook_file": "./my_playbook",
  "extra_arguments": ["-vvv", "--extra-vars", "host={{user `host`}}"]
}
Below is a simple example:
...
variable "gitlab_version" {
  type    = string
  default = "15.1.6"
}
...

build {
  name = local.build_name

  provisioner "ansible" {
    ...
    playbook_file   = "./ansible/playbook.yml"
    extra_arguments = ["--extra-vars", "gitlab_version=${var.gitlab_version}"]
    ...
  }
}
It works because it's a simple interpolation.

Terraform execute script before lambda creation

I have a Terraform configuration that correctly creates a Lambda function on AWS from a provided zip file.
My problem is that I always have to package the Lambda first (I use the serverless package method for this), so I would like to execute a script that packages my function and moves the zip to the right directory before Terraform creates the Lambda function.
Is that possible? Maybe using a combination of null_resource and local-exec?
You already proposed the best answer :)
If you add depends_on = ["null_resource.serverless_execution"] to your lambda resource, you can ensure that packaging is done before the zip file is uploaded.
Example:
resource "null_resource" "serverless_execution" {
provisioner "local-exec" {
command = "serverless package ..."
}
}
resource "aws_lambda_function" "update_lambda" {
depends_on = ["null_resource.serverless_execution"]
filename = "${path.module}/path/to/package.zip"
[...]
}
https://www.terraform.io/docs/provisioners/local-exec.html
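A side note on this pattern: provisioners only run when the null_resource is first created, so on later applies the packaging command is skipped unless something forces the resource to be recreated. A minimal sketch using a trigger, assuming it is acceptable to re-package on every apply (Terraform 0.12+ syntax):

resource "null_resource" "serverless_execution" {
  # timestamp() changes on every run, so the resource is replaced and the
  # packaging command re-executes on each apply; a hash of the source files
  # could be used instead for finer-grained re-packaging.
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "serverless package ..."
  }
}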
The answer is already given, but I was looking for a way to install NPM modules on the fly, zip the code and then deploy the Lambda function, along with a longer timeout in case your Lambda package is large. So here is my finding; it may help someone else.
# Install NPM modules before creating the ZIP
resource "null_resource" "npm" {
  provisioner "local-exec" {
    command = "cd ../lambda-functions/loadbalancer-to-es/ && npm install --prod=only"
  }
}

# Zip the Lambda function on the fly
data "archive_file" "source" {
  type        = "zip"
  source_dir  = "../lambda-functions/loadbalancer-to-es"
  output_path = "../lambda-functions/loadbalancer-to-es.zip"
  depends_on  = ["null_resource.npm"]
}

# Create the AWS Lambda function: memory size, NodeJS version, handler, endpoint, doctype and environment settings
resource "aws_lambda_function" "elb_logs_to_elasticsearch" {
  filename      = "${data.archive_file.source.output_path}"
  function_name = "someprefix-alb-logs-to-elk"
  description   = "elb-logs-to-elasticsearch"
  memory_size   = 1024
  timeout       = 900

  timeouts {
    create = "30m"
  }

  runtime = "nodejs8.10"
  role    = "${aws_iam_role.role.arn}"

  # Use the archive's own hash attribute so the hash reflects the zip contents,
  # not just the path string.
  source_code_hash = "${data.archive_file.source.output_base64sha256}"
  handler          = "index.handler"
  # source_code_hash = "${base64sha256(file("/elb-logs-to-elasticsearch.zip"))}"

  environment {
    variables = {
      ELK_ENDPOINT = "someprefix-elk.dns.co"
      ELK_INDEX    = "test-web-server-"
      ELK_REGION   = "us-west-2"
      ELK_DOCKTYPE = "elb-access-logs"
    }
  }
}
