Issue using Terraform EC2 Userdata - bash

I am deploying a bunch of EC2 instances that require a mount called /data; this is a separate disk that I attach using a volume attachment in AWS.
When I run the following manually it works fine, so the script itself works; however, when I add it via user data I see issues and the mkfs command does not happen.
Here is my Terraform config:
resource "aws_instance" "riak" {
count = 5
ami = "${var.aws_ami}"
vpc_security_group_ids = ["${aws_security_group.bastion01_sg.id}","${aws_security_group.riak_sg.id}","${aws_security_group.outbound_access_sg.id}"]
subnet_id = "${element(module.vpc.database_subnets, 0)}"
instance_type = "m4.xlarge"
tags {
Name = "x_riak_${count.index}"
Role = "riak"
}
root_block_device {
volume_size = 20
}
user_data = "${file("datapartition.sh")}"
}
resource "aws_volume_attachment" "riak_data" {
count = 5
device_name = "/dev/sdh"
volume_id = "${element(aws_ebs_volume.riak_data.*.id, count.index)}"
instance_id = "${element(aws_instance.riak.*.id, count.index)}"
}
And then the partition script is as follows:
#!/bin/bash
if [ ! -d /data ]; then
  mkdir /data
fi
/sbin/mkfs -t ext4 /dev/xvdh;
while [ -e /dev/xvdh ] ; do sleep 1 ; done
mount /dev/xvdh /data
echo "/dev/xvdh /data ext4 defaults 0 2" >> /etc/fstab
Now, when I do this via Terraform, the mkfs doesn't appear to happen and I see no obvious errors in syslog. If I copy the script over manually and just run bash script.sh, the mount is created and works as expected.
Has anyone got any suggestions here?
Edit: It's worth noting that adding this in the AWS GUI under user data also works fine.

You could try remote-exec instead of user_data.
user_data relies on cloud-init, which can behave differently depending on your cloud provider's images.
Also, I'm not sure it's a good idea to run a script that waits for some time in the cloud-init phase: this may lead to the VM being considered a failed launch because of a timeout (depending on your cloud provider).
remote-exec may be better here because you will be able to wait until your /dev/xvdh is attached.
See here
resource "aws_instance" "web" {
# ...
provisioner "file" {
source = "script.sh"
destination = "/tmp/script.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/script.sh",
"/tmp/script.sh args",
]
}
}
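If you go the remote-exec route, a minimal sketch of what script.sh could look like, waiting for the device to appear before formatting it (this assumes the volume still shows up as /dev/xvdh on the instance):
#!/bin/bash
# Wait until the attached EBS volume shows up as a block device.
while [ ! -e /dev/xvdh ]; do sleep 1; done
# Only create a filesystem if the device does not already have one.
if ! blkid /dev/xvdh; then
  /sbin/mkfs -t ext4 /dev/xvdh
fi
mkdir -p /data
mount /dev/xvdh /data
echo "/dev/xvdh /data ext4 defaults 0 2" >> /etc/fstab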

Related

Templatefile and Bash script

I need to be able to run a bash script as user data for a launch template, and this is how I am trying to do it:
resource "aws_launch_template" "ec2_launch_template" {
name = "ec2_launch_template"
image_id = data.aws_ami.latest_airbus_ami.id
instance_type = var.instance_type[terraform.workspace]
iam_instance_profile {
name = aws_iam_instance_profile.ec2_profile.name
}
vpc_security_group_ids = [data.aws_security_group.default-sg.id, aws_security_group.allow-local.id] # the second parameter should be according to the user
monitoring {
enabled = true
}
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 30
encrypted = true
volume_type = "standard"
}
}
tags = {
Name = "${var.app_name}-${terraform.workspace}-ec2-launch-template"
}
#user_data = base64encode(file("${path.module}/${terraform.workspace}-script.sh")) # change the base encoder as well
user_data = base64encode(templatefile("${path.module}/script.sh", {app_name = var.app_name, env = terraform.workspace, high_threshold = var.high_threshold, low_threshold = var.low_threshold})) # change the base encoder as well
}
As you can see, I pass the parameters as a map to the templatefile function, and I retrieve them like this:
#!/bin/bash -xe
# Activate logs for everything
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
# Retrieve variables from Terraform
app_name = ${app_name}
environment = ${env}
max_memory_perc= ${high_threshold}
min_memory_perc= ${low_threshold}
instance_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
ami_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/ami-id)
instance_type=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-type)
scale_up_name=$${app_name}"-"$${environment}"-scale-up"
scale_down_name=$${app_name}"-"$${environment}"-scale-down"
Then, when I look at the launch template in the AWS console, I can see that the parameter values are filled in:
app_name = test-app
environment = prod
max_memory_perc= 80
min_memory_perc= 40
The problem is that when I run it, I get this error:
+ app_name = test-app
/var/lib/cloud/instances/scripts/part-001: line 7: app_name: command not found
I assume there is a problem with interpretation or something like that, but I cannot put my finger on it.
Any ideas?
Thanks
Edit: As they said, it was a problem with spaces; it's fixed now. Thanks.
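For reference, the fix is removing the spaces around = in the bash assignments, for example:
#!/bin/bash -xe
# Bash assignments must not have spaces around "=".
app_name=${app_name}
environment=${env}
max_memory_perc=${high_threshold}
min_memory_perc=${low_threshold}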

Nomad job using exec fails when running any bash command

I have tried everything and I just can’t get an exec type job to run. I tried it on 3 different clusters and it fails on all.
The job prunes docker containers and just runs docker system prune -a.
This is the config section:
driver = "exec"
config {
command = "bash"
args = ["-c",
" docker system prune -a "]
}
There are no logs, and the containers are not pruned. The full job is:
job "docker-cleanup" {
type = "system"
constraint {
attribute = "${attr.kernel.name}"
operator = "="
value = "linux"
}
datacenters = ["dc1"]
group "docker-cleanup" {
restart {
interval = "24h"
attempts = 0
mode = "fail"
}
task "docker-system-prune" {
driver = "exec"
config {
command = "bash"
args = ["-c",
" docker system prune -a "]
}
resources {
cpu = 100
memory = 50
network {
mbits = 1
}
}
}
}
}
What am I doing wrong?
I would suggest you provide the output to make it easier to analyze.
One thing you can try is to add the full path to the bash executable.
driver = "exec"
config {
command = "/bin/bash"
args = ["-c",
" docker system prune -a "]
}
Further, you are missing the --force parameter on system prune; without it, docker system prune asks for confirmation.
docker system prune --all --force
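Putting both suggestions together, the task config might look something like this (an untested sketch):
driver = "exec"
config {
  command = "/bin/bash"
  # One shell string passed to bash -c; --force skips the confirmation prompt.
  args = ["-c", "docker system prune --all --force"]
}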
As far as I know, all args should be provided separately:
driver = "exec"
config {
command = "/bin/bash"
args = [
"-c", "docker", "system", "prune", -a "
]
}

Why does terraform aws code fail to render?

Terraform version = 0.12
resource "aws_instance" "bespin-ec2-web" {
ami = "ami-0bea7fd38fabe821a"
instance_type = "t2.micro"
vpc_security_group_ids = [aws_security_group.bespin-sg.id]
subnet_id = aws_subnet.bespin-subnet-public-a.id
associate_public_ip_address = true
tags = {
Name = "bespin-ec2-web-a"
}
user_data = data.template_file.user_data.rendered
}
data "template_file" "user_data" {
template = file("${path.module}/userdata.sh")
}
The userdata.sh file:
#!/bin/bash
USERS="bespin"
GROUP="bespin"
for i in $USERS; do
/usr/sbin/adduser ${i};
/bin/echo ${i}:${i}1! | chpasswd;
done
cp -a /etc/ssh/sshd_config /etc/ssh/sshd_config_old
sed -i 's/PasswordAuthentication no/#PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd
The terraform plan result:
Error: failed to render : <template_file>:5,24-25: Unknown variable; There is no variable named "i"., and 2 other diagnostic(s)

  on instance.tf line 13, in data "template_file" "user_data":
  13: data "template_file" "user_data" {
Why am I getting an error?
The template argument in the template_file data source is processed as Terraform template syntax.
In this syntax, ${...} has a special meaning: the ... part will be replaced by a variable passed into the template.
Bash also uses this syntax for getting the values of variables, which is how you are intending to use it.
To reconcile this, you'll need to escape the $ character so that the terraform template compiler will leave it be, which you can do by doubling up the character: $${i} in all cases.
https://www.terraform.io/docs/configuration/expressions.html#string-templates
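Applied to the script above, only the lines that reference bash variables need to change, roughly like this:
for i in $USERS; do
  # $$ tells Terraform's template engine to emit a literal $ for bash to evaluate.
  /usr/sbin/adduser $${i};
  /bin/echo $${i}:$${i}1! | chpasswd;
done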

aws_launch_configuration: timeout - last error: dial tcp :22: connectex: No connection could be made because the target machine actively refused it

I have the following launch config for an auto-scaling group:
resource "aws_launch_configuration" "ASG-launch-config" {
#name = "ASG-launch-config" # see: https://github.com/hashicorp/terraform/issues/3665
name_prefix = "ASG-launch-config-"
image_id = "ami-a4dc46db" #Ubuntu 16.04 LTS
#image_id = "ami-b70554c8" #Amazon Linux 2
instance_type = "t2.micro"
security_groups = ["${aws_security_group.WEB-DMZ.id}"]
key_name = "MyEC2KeyPair"
#user_data = <<-EOF
# #!/bin/bash
# echo "Hello, World" > index.html
# nohup busybox httpd -f -p "${var.server_port}" &
# EOF
provisioner "file" {
source="script.sh"
destination="/tmp/script.sh"
}
provisioner "remote-exec" {
inline=[
"chmod +x /tmp/script.sh",
"sudo /tmp/script.sh"
]
}
connection {
user="ubuntu"
private_key="${file("MyEC2KeyPair.pem")}"
}
lifecycle {
create_before_destroy = true
}
}
Error: Error applying plan:
1 error(s) occurred:
aws_launch_configuration.ASG-launch-config: timeout - last error: dial tcp :22: connectex: No connection could be made because the target machine actively refused it.
I want to run a bash script to basically install WordPress on the instances created.
The script runs fine in a resource of type "aws_instance" "example".
How can I troubleshoot this?
Sounds like the instance is denying your traffic. Start up an instance without the provisioning script and see if you can SSH to it using the key you provided. You may want to add verbose logging to the SSH command with -v.
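For example, something along these lines, substituting the instance's public IP:
# -v prints verbose client-side output for the SSH handshake, which helps pinpoint where it fails.
ssh -v -i MyEC2KeyPair.pem ubuntu@<instance-public-ip>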

Terraform error when running the terraform apply command

When I run terraform plan -var-file=../variables.tfvars,
everything passes.
But when I run terraform apply -var-file=../variables.tfvars,
it gives me this error, and I don't know how to solve it because the directory path is correct.
Error: Error applying plan:
1 error(s) occurred:
* aws_instance.mongodb_server: 1 error(s) occurred:
* Error running command 'sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo "[mongodb]
54.193.20.170" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e "mongodb_password=blahblah" -e "mongodb_user=admin" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml': exit status 127. Output: /bin/sh: 2: ansible-playbook: not found
The code looks like this:
resource "aws_instance" "mongodb_server" {
instance_type = "${lookup(var.mongodb_instance_type_control,
var.target_env)}"
vpc_security_group_ids =
["${aws_security_group.default_internal.id}"]
ami = "${lookup(var.amazon_ami_by_location, var.aws_region)}"
key_name = "${var.key_name}"
subnet_id = "${data.aws_subnet.subnet_a.id}"
tags {
Name = "tf-mongodb-${lookup(var.environment, var.target_env)}"
}
associate_public_ip_address = true
provisioner "local-exec" {
command = "sleep 60 && export ANSIBLE_HOST_KEY_CHECKING=False && echo \"[mongodb]\n${aws_instance.mongodb_server.public_ip}\" > /tmp/inventory.ws && ansible-playbook -i /tmp/inventory.ws -e \"mongodb_password=${var.mongodb_default_password}\" -e \"mongodb_user=${var.mongodb_default_username}\" -u ec2-user -b --private-key=../BASE/files/joujou.pem ../DATABASE/files/ansible-mongodb-standalone/mongodb.yml"
}
Output: /bin/sh: 2: ansible-playbook: not found
This is your actual error. Terraform plan does not capture this error as local-exec commands are not evaluated by terraform plan.
Do you have ansible installed on the machine where you are trying to run the above Terraform? And if it is installed, is it on the path?
Try installing ansible if it's not installed already. If ansible is already installed, do an echo $PATH in your local-exec command and confirm whether ansible is present in the given path.
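For example, a quick check on the machine where you run terraform apply (a sketch):
# Is ansible-playbook resolvable from the shell that local-exec will use?
which ansible-playbook || echo "ansible-playbook not found on PATH"
echo $PATH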
