how to deploy escloud extension in terraform - elasticsearch

I deploy escloud with Terraform.
I want to add an existing extension, analysis-icu. How can I configure it?
resource "ec_deployment_extension" "icu" {
name = "analysis-icu"
version = "*"
extension_type = "bundle"
download_url = "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-8.6.1.zip"
}
module "escloud_default" {
source = "./escloud"
name = "${var.environment}-test"
...
elasticsearch_config = {
topologies = [
{
id = "hot_content"
size = var.environment == "prod" ? "2g" : "1g"
size_resource = "memory"
zone_count = var.environment == "prod" ? 2 : 1
autoscaling = {
min_size = ""
min_size_resource = ""
max_size = "116g"
max_size_resource = "memory"
}
},
]
extensions = [
{
name = ec_deployment_extension.icu.name
type = "bundle"
version = "*"
url = ec_deployment_extension.icu.url
}
]
}
...
This code does not apply the existing analysis-icu plugin; it just creates a custom bundle.

I solved it. There is a config.plugins argument:
https://registry.terraform.io/providers/elastic/ec/latest/docs/resources/ec_deployment#plugins
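For reference, a minimal sketch of what that looks like on the ec_deployment resource, assuming the block-style syntax documented at the link above (region, version and deployment_template_id are illustrative placeholders; newer provider versions use an attribute-style elasticsearch argument, so check your provider version):

resource "ec_deployment" "example" {
  name                   = "example"
  region                 = "eu-west-1"
  version                = "8.6.1"
  deployment_template_id = "aws-io-optimized-v2"

  elasticsearch {
    config {
      # Bundled plugins such as analysis-icu are enabled here;
      # ec_deployment_extension is only needed for custom bundles/plugins.
      plugins = ["analysis-icu"]
    }
  }
}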

Related

EC2 + CodePipeline/CodeDeploy - how to make application ready on instance refresh

Please forgive me, I am learning DevOps.
I am using CodeDeploy/CodePipeline to deploy a Node.js app onto an EC2 instance.
I am using Terraform to manage the infrastructure.
The pipeline works great, but...
Question 1: Deploy on Refresh
If I refresh the auto scaling group, then the instances are missing the application. Is this normal?
Question 2: Clean on deploy
As a workaround for Question 1, I wrote a script that downloads the latest build from the CodeBuild S3 bucket, unzips it, and runs the application.
I then added logic to the ASG launch configuration to run the script.
So far, so good. When I refresh the instance, the app is booted.
However, the problem arises when I then trigger a CodePipeline deployment.
It fails in the CodeDeploy step because the project folder already contains files.
In my appspec.yml I have already tried to clean out the application directory during the ApplicationStop phase, but without success.
Any tips/guidance would be most welcome.
Code
app_dir/appspec.yml
version: 0.0
os: linux
files:
- source: /
destination: /home/ec2-user/app_dir/
overwrite: true
permissions:
- object: /home/ec2-user/app_dir/
pattern: "**"
owner: ec2-user
group: ec2-user
hooks:
ApplicationStop:
- location: infrastructure/cleanup.sh
timeout: 60
runas: ec2-user
AfterInstall:
- location: infrastructure/install_dependencies.sh
timeout: 180
runas: ec2-user
- location: infrastructure/install_root_dependencies.sh
timeout: 30
runas: root
ApplicationStart:
- location: infrastructure/start.sh
timeout: 45
runas: ec2-user
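For comparison, the AppSpec format also supports an optional file_exists_behavior setting that controls what CodeDeploy does when files already exist at the destination (for example, files put there by a bootstrap script rather than by a previous deployment). A minimal sketch, assuming a CodeDeploy agent recent enough to support it:

files:
  - source: /
    destination: /home/ec2-user/app_dir/
file_exists_behavior: OVERWRITE   # DISALLOW (default) | OVERWRITE | RETAIN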
app_dir/infrastructure/cleanup.sh
#!/bin/bash
export HOME=/home/ec2-user
pm2 stop all
pm2 delete all
rm -rf /home/ec2-user/app_dir
mkdir -p /home/ec2-user/app_dir
sudo chown ec2-user:ec2-user /home/ec2-user/app_dir
cd /home/ec2-user/app_dir
app_dir/infrastructure/install_root_dependencies.sh
#!/bin/bash
export HOME=/home/ec2-user
cd /home/ec2-user/app_dir
npm install -g pm2
app_dir/infrastructure/start.sh
#!/bin/bash
export HOME=/home/ec2-user
mkdir -p /home/ec2-user/Lexstep
. /home/ec2-user/Lexstep/infrastructure/parameter_store/load_ssm_parameters.sh
cd /home/ec2-user/app_dir || exit
sudo chown -R ec2-user:ec2-user /home/ec2-user/app_dir
mkdir -p /home/ec2-user/logs/app
sudo chown -R ec2-user:ec2-user /home/ec2-user/logs/app
sudo chmod a+rwx /home/ec2-user/logs/app
output=$(pm2 ls | awk '{print $2}')
startup_cmd=$(pm2 startup | awk '/^sudo/')
if [[ $output == *"0"* ]]; then
echo -e "\nProcess already defined. Deleting all processes."
pm2 restart ecosystem.config.js --update-env
else
echo -e "\nNo process is running currently. Creating a new process..\n"
if [[ "$DEPLOYMENT_GROUP_NAME" == "lexstep-prod"* ]] || [[ "$DEPLOYMENT_GROUP_NAME" == "lexstep-stage"* ]]; then
pm2 start ecosystem.config.js --env production
else
pm2 start ecosystem.config.js --env development
fi
eval "$startup_cmd"
fi
pm2 save #save current PM2 process list to start upon reboot
codepipeline terraform definitions
resource "aws_codebuild_project" "codebuild_project" {
name = "MY_APP-${var.env}-build-project"
description = "${var.env} Build Project for MY_APP"
build_timeout = "10"
service_role = aws_iam_role.codebuild_role.arn
source {
type = "CODEPIPELINE"
buildspec = "infrastructure/buildspec.yml"
git_clone_depth = 0
git_submodules_config {
fetch_submodules = false
}
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
type = "LINUX_CONTAINER"
image_pull_credentials_type = "CODEBUILD"
privileged_mode = false
}
vpc_config {
vpc_id = aws_vpc.vpc.id
subnets = [
aws_subnet.subnets_app[0].id,
aws_subnet.subnets_app[1].id,
]
security_group_ids = [
aws_security_group.app_sg.id,
]
}
artifacts {
type = "CODEPIPELINE"
artifact_identifier = "BuildArtifact"
}
secondary_artifacts {
type = "S3"
artifact_identifier = "TestArtifacts"
location = aws_s3_bucket.codepipeline_bucket.id
path = "MY_APP-${var.env}-pipeline/TestArtifacts/artifacts.zip"
name = "TestArtifacts"
packaging = "ZIP"
}
logs_config {
cloudwatch_logs {
group_name = local.app_build_log_group_name
stream_name = "MY_APP-${var.env}-build-app-log-stream"
}
}
cache {
type = "LOCAL"
modes = ["LOCAL_DOCKER_LAYER_CACHE", "LOCAL_SOURCE_CACHE"]
}
tags = merge(
var.additional_tags,
{
Name = "MY_APP-${var.env}-build-project"
},
)
}
resource "aws_codebuild_project" "codebuild_test_project" {
name = "MY_APP-${var.env}-build-test-project"
description = "Unit Test Build Project for MY_APP"
build_timeout = "10"
service_role = aws_iam_role.codebuild_test_role.arn
source {
type = "CODEPIPELINE"
buildspec = "infrastructure/buildspec_unit_test.yml"
git_clone_depth = 0
git_submodules_config {
fetch_submodules = false
}
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
type = "LINUX_CONTAINER"
image_pull_credentials_type = "CODEBUILD"
privileged_mode = false
}
vpc_config {
vpc_id = aws_vpc.vpc.id
subnets = [
aws_subnet.subnets_app[0].id,
aws_subnet.subnets_app[1].id,
]
security_group_ids = [
aws_security_group.app_sg.id,
]
}
artifacts {
type = "CODEPIPELINE"
}
logs_config {
cloudwatch_logs {
group_name = local.test_build_log_group_name
stream_name = "MY_APP-${var.env}-build-unit-test-log-stream"
}
}
cache {
type = "LOCAL"
modes = ["LOCAL_DOCKER_LAYER_CACHE", "LOCAL_SOURCE_CACHE"]
}
tags = merge(
var.additional_tags,
{
Name = "MY_APP-${var.env}-build-test-project"
},
)
}
resource "aws_codebuild_project" "codebuild_e2e_test_project" {
name = "MY_APP-${var.env}-build-e2e-test-project"
description = "Integration Test Build Project for MY_APP"
build_timeout = "10"
service_role = aws_iam_role.codebuild_test_role.arn
source {
type = "CODEPIPELINE"
buildspec = "infrastructure/buildspec_e2e_test.yml"
git_clone_depth = 0
git_submodules_config {
fetch_submodules = false
}
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
type = "LINUX_CONTAINER"
image_pull_credentials_type = "CODEBUILD"
privileged_mode = false
}
vpc_config {
vpc_id = aws_vpc.vpc.id
subnets = [
aws_subnet.subnets_app[0].id,
aws_subnet.subnets_app[1].id,
]
security_group_ids = [
aws_security_group.app_sg.id,
]
}
artifacts {
type = "CODEPIPELINE"
}
logs_config {
cloudwatch_logs {
group_name = local.e2e_test_build_log_group_name
stream_name = "MY_APP-${var.env}-build-test-e2e-log-stream"
}
}
cache {
type = "LOCAL"
modes = ["LOCAL_DOCKER_LAYER_CACHE", "LOCAL_SOURCE_CACHE"]
}
tags = merge(
var.additional_tags,
{
Name = "MY_APP-${var.env}-build-test-e2e-project"
},
)
}
resource "aws_codedeploy_app" "codedeploy_app" {
name = "MY_APP-${var.env}-deploy-app"
compute_platform = "Server"
}
resource "aws_codedeploy_deployment_group" "codedeploy_group" {
app_name = aws_codedeploy_app.codedeploy_app.name
deployment_group_name = "MY_APP-${var.env}-deploy-group"
service_role_arn = aws_iam_role.codedeploy_role.arn
autoscaling_groups = [aws_autoscaling_group.app_asg.name]
# deployment_config_name = "CodeDeployDefault.OneAtATime" # possible option: "CodeDeployDefault.AllAtOnce"
# TODO: change this to OneAtATime for production
deployment_config_name = "CodeDeployDefault.AllAtOnce" # possible option: "CodeDeployDefault.AllAtOnce"
deployment_style {
deployment_type = "IN_PLACE"
}
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
}
resource "aws_codedeploy_deployment_group" "codedeploy_schedule_group" {
app_name = aws_codedeploy_app.codedeploy_app.name
deployment_group_name = "MY_APP-${var.env}-schedule-deploy-group"
service_role_arn = aws_iam_role.codedeploy_role.arn
autoscaling_groups = [aws_autoscaling_group.app_schedule_asg.name]
deployment_config_name = "CodeDeployDefault.OneAtATime" # possible option: "CodeDeployDefault.AllAtOnce"
deployment_style {
deployment_type = "IN_PLACE"
}
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
}
resource "aws_codepipeline" "codepipeline" {
name = "MY_APP-${var.env}-pipeline"
role_arn = aws_iam_role.codepipeline_role.arn
artifact_store {
location = aws_s3_bucket.codepipeline_bucket.bucket
type = "S3"
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "AWS"
provider = "CodeStarSourceConnection"
version = "1"
output_artifacts = ["SourceArtifacts"]
namespace = "SourceVariables"
configuration = {
ConnectionArn = var.connection_arn
FullRepositoryId = "MY_APP/MY_APP-nest"
BranchName = "master" # change branch here "master"
}
}
}
stage {
name = "Build"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
version = "1"
input_artifacts = ["SourceArtifacts"]
output_artifacts = ["BuildArtifact", "TestArtifacts"]
namespace = "BuildVariables"
configuration = {
ProjectName = aws_codebuild_project.codebuild_project.id
}
}
}
/* stage {
name = "Test"
action {
name = "UnitTest"
category = "Test"
owner = "AWS"
provider = "CodeBuild"
version = "1"
run_order = "1"
input_artifacts = ["TestArtifacts"]
configuration = {
ProjectName = aws_codebuild_project.codebuild_test_project.id
}
}
action {
name = "IntegrationTest"
category = "Test"
owner = "AWS"
provider = "CodeBuild"
version = "1"
run_order = "2"
input_artifacts = ["TestArtifacts"]
configuration = {
ProjectName = aws_codebuild_project.codebuild_e2e_test_project.id
}
}
} */
stage {
name = "Deploy"
dynamic "action" {
# a fake map for a conditional block
for_each = var.deployment_requires_approval ? { name = "ManualApproval" } : {}
content {
name = "ManualApproval"
category = "Approval"
owner = "AWS"
provider = "Manual"
version = "1"
run_order = "1"
configuration = {
NotificationArn = var.sns_notification_arn
}
}
}
action {
name = "Deploy"
category = "Deploy"
owner = "AWS"
provider = "CodeDeploy"
input_artifacts = ["BuildArtifact"]
version = "1"
run_order = "1"
namespace = "DeployVariables"
configuration = {
ApplicationName = aws_codedeploy_app.codedeploy_app.name
DeploymentGroupName = "MY_APP-${var.env}-deploy-group"
}
}
action {
name = "DeploySchedule"
category = "Deploy"
owner = "AWS"
provider = "CodeDeploy"
input_artifacts = ["BuildArtifact"]
version = "1"
run_order = "2"
namespace = "DeployScheduleVariables"
configuration = {
ApplicationName = aws_codedeploy_app.codedeploy_app.name
DeploymentGroupName = "MY_APP-${var.env}-schedule-deploy-group"
}
}
}
tags = merge(
var.additional_tags,
{
Name = "MY_APP-${var.env}-pipeline"
},
)
depends_on = [aws_codebuild_project.codebuild_project]
}
Terraform EC2 definition
#############################
####### EC2 INSTANCES #######
#############################
### APP instance Launch Configuration ###
resource "aws_launch_configuration" "app_lc" {
name_prefix = "app-${var.env}-lc"
image_id = var.app_ec2_ami
instance_type = var.app_instance_type
iam_instance_profile = aws_iam_instance_profile.app_instance_profile.name
key_name = var.key_name
enable_monitoring = true
ebs_optimized = false
security_groups = [aws_security_group.app_sg.id]
root_block_device {
volume_size = 25
volume_type = "gp3"
}
user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo yum install git jq -y
amazon-linux-extras install epel -y
#############
# Node.js #
#############
${file("gists/install_node.sh")}
#############
# App #
#############
cat <<-'ENVFILE' | tee /home/ec2-user/.env
export DEPLOYMENT_GROUP_NAME="${var.env}-deploy-group"
export region="eu-west-2"
ENVFILE
source /home/ec2-user/.env
function get_app() {
sudo mkdir -p /home/ec2-user/logs/app
sudo chown -R ec2-user:ec2-user /home/ec2-user/logs
sudo chmod 777 -R /home/ec2-user/logs
sudo chmod a+rwx -R /home/ec2-user/logs
echo "get_app: starting"
mkdir -p /home/ec2-user/app_dir
cd /home/ec2-user/app_dir
BUCKET="${aws_s3_bucket.codepipeline_bucket.bucket}"
BUCKET_KEY=`aws s3 ls $BUCKET --recursive | sort | tail -n 1 | awk '{print $4}'`
echo "downloading app.zip"
aws s3 cp s3://$BUCKET/$BUCKET_KEY ./app.zip
echo "unzipping app.zip"
unzip app.zip
rm -rf app.zip
echo "removing app.zip"
sudo chown -R ec2-user:ec2-user /home/ec2-user/Lexstep
echo "installing root dependencies"
bash /home/ec2-user/app_dir/infrastructure/install_root_dependencies.sh
echo "installing root dependencies"
bash /home/ec2-user/app_dir/infrastructure/install_dependencies.sh
echo "STARTING APP"
export NODE_ENV=production
export region=eu-west-2
bash /home/ec2-user/app_dir/infrastructure/start.sh
sudo chown ec2-user:ec2-user /home/ec2-user/.pm2/rpc.sock /home/ec2-user/.pm2/pub.sock
cd /home/ec2-user
echo "FINISHED get_app SCRIPT"
}
mkdir -p /home/ec2-user/logs && touch /home/ec2-user/logs/get_app.log && get_app 2>&1 | tee /home/ec2-user/logs/get_app.log
echo "Finished init of app_lc"
EOF
lifecycle {
create_before_destroy = true
}
}
### APP Autoscaling Group ###
resource "aws_autoscaling_group" "app_asg" {
name = "${var.env}-app"
launch_configuration = aws_launch_configuration.app_lc.name
min_size = 1
max_size = 2
desired_capacity = 2
health_check_type = "EC2"
health_check_grace_period = 240
vpc_zone_identifier = [aws_subnet.subnets_app[0].id, aws_subnet.subnets_app[1].id]
service_linked_role_arn = data.aws_iam_role.aws_service_linked_role.arn
target_group_arns = [aws_lb_target_group.tg.arn]
lifecycle {
create_before_destroy = true
}
tags = concat(
[
{
"key" = "Name"
"value" = "${var.env}-app"
"propagate_at_launch" = true
},
{
"key" = "Project"
"value" = var.additional_tags.Project
"propagate_at_launch" = true
},
{
"key" = "CreatedBy"
"value" = var.additional_tags.CreatedBy
"propagate_at_launch" = true
},
{
"key" = "Environment"
"value" = var.additional_tags.Environment
"propagate_at_launch" = true
},
])
}

How to describe a scheduled stop/start process for AWS EC2 instances in Terraform

I am trying to figure out how to simply start/stop EC2 instances on a schedule using EventBridge.
This behavior can easily be set up via the AWS web console (EventBridge → Rules → Create Rule → AWS service → EC2 StopInstances API call):
But I can't figure out how to describe this rule in Terraform.
The only way I found is to create a Lambda, but that looks like huge overhead for such a simple action. Is there any way to add an EC2 StopInstances API call rule with Terraform?
Okay, looks like it is possible to control instance running time with SSM Automation:
variables.tf
variable "start_cron_representation" {
type = string
}
variable "stop_cron_representation" {
type = string
}
variable "instance_id" {
type = string
}
variable "instance_type" {
description = "ec2 or rds"
type = string
}
locals.tf
locals {
stop_task_name = var.instance_type == "rds" ? "AWS-StopRdsInstance" : "AWS-StopEC2Instance"
start_task_name = var.instance_type == "rds" ? "AWS-StartRdsInstance" : "AWS-StartEC2Instance"
permissions = var.instance_type == "rds" ? [
"rds:StopDBInstance",
"rds:StartDBInstance",
"rds:DescribeDBInstances",
"ssm:StartAutomationExecution"
] : [
"ec2:StopInstances",
"ec2:StartInstances",
"ec2:DescribeInstances",
"ssm:StartAutomationExecution"
]
}
main.tf
data "aws_iam_policy_document" "ssm_lifecycle_trust" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"ssm.amazonaws.com",
"events.amazonaws.com"
]
}
}
}
data "aws_iam_policy_document" "ssm_lifecycle" {
statement {
effect = "Allow"
actions = local.permissions
resources = ["*"]
}
statement {
effect = "Allow"
actions = [
"iam:PassRole"
]
resources = [aws_iam_role.ssm_lifecycle.arn]
}
}
resource "aws_iam_role" "ssm_lifecycle" {
name = "${var.instance_type}-power-control-role"
assume_role_policy = data.aws_iam_policy_document.ssm_lifecycle_trust.json
}
resource "aws_iam_policy" "ssm_lifecycle" {
name = "${var.instance_type}-power-control-policy"
policy = data.aws_iam_policy_document.ssm_lifecycle.json
depends_on = [
aws_iam_role.ssm_lifecycle
]
}
resource "aws_iam_role_policy_attachment" "ssm_lifecycle" {
policy_arn = aws_iam_policy.ssm_lifecycle.arn
role = aws_iam_role.ssm_lifecycle.name
}
resource "aws_cloudwatch_event_rule" "stop_instance" {
name = "stop-${var.instance_type}"
description = "Stop ${var.instance_type} instance"
schedule_expression = var.stop_cron_representation
}
resource "aws_cloudwatch_event_target" "stop_instance" {
target_id = "stop-${var.instance_type}"
arn = "arn:aws:ssm:ap-northeast-1::automation-definition/${local.stop_task_name}"
rule = aws_cloudwatch_event_rule.stop_instance.name
role_arn = aws_iam_role.ssm_lifecycle.arn
input = <<DOC
{
"InstanceId": ["${var.instance_id}"],
"AutomationAssumeRole": ["${aws_iam_role.ssm_lifecycle.arn}"]
}
DOC
}
resource "aws_cloudwatch_event_rule" "start_instance" {
name = "start-${var.instance_type}"
description = "Start ${var.instance_type} instance"
schedule_expression = var.start_cron_representation
}
resource "aws_cloudwatch_event_target" "start_instance" {
target_id = "start-${var.instance_type}"
arn = "arn:aws:ssm:ap-northeast-1::automation-definition/${local.start_task_name}"
rule = aws_cloudwatch_event_rule.start_instance.name
role_arn = aws_iam_role.ssm_lifecycle.arn
input = <<DOC
{
"InstanceId": ["${var.instance_id}"],
"AutomationAssumeRole": ["${aws_iam_role.ssm_lifecycle.arn}"]
}
DOC
}
This module may be called like:
module "ec2_start_and_stop" {
source = "./module_folder"
start_cron_representation = "cron(0 0 * * ? *)"
stop_cron_representation = "cron(0 1 * * ? *)"
instance_id = aws_instance.instance_name.id # or aws_db_instance.db.id for RDS
instance_type = "ec2" # or "rds" for RDS
depends_on = [
aws_instance.instance_name
]
}
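One small note on the target ARNs above: the region is hard-coded as ap-northeast-1. If the module needs to be region-agnostic, the ARN can be built from a data source instead; a minimal sketch (only the arn lines change):

data "aws_region" "current" {}

# in each aws_cloudwatch_event_target:
# arn = "arn:aws:ssm:${data.aws_region.current.name}::automation-definition/${local.stop_task_name}"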

How to add an additional tag for an EBS volume based on a variable?

I'm using this EC2 module with a light alteration to create EC2 instances and EBS volumes. The code works without issue, but I have a requirement to add the mount point as a tag on each EBS volume, so I can use a data filter to get that value and mount it using Ansible.
I'm trying to add the tag value to the dynamic "ebs_block_device" block through the deploy-ec2.tf configuration file. As per the Terraform documentation, tags is an optional value. However, when I execute this it produces an "Unsupported argument" error for the tags value. I appreciate your help in understanding the issue here.
My code is below.
Module main.tf
locals {
is_t_instance_type = replace(var.instance_type, "/^t(2|3|3a){1}\\..*$/", "1") == "1" ? true : false
}
resource "aws_instance" "this" {
count = var.instance_count
ami = var.ami
instance_type = var.instance_type
user_data = var.user_data
user_data_base64 = var.user_data_base64
subnet_id = length(var.network_interface) > 0 ? null : element(
distinct(compact(concat([var.subnet_id], var.subnet_ids))),
count.index,
)
key_name = var.key_name
monitoring = var.monitoring
get_password_data = var.get_password_data
vpc_security_group_ids = var.vpc_security_group_ids
iam_instance_profile = var.iam_instance_profile
associate_public_ip_address = var.associate_public_ip_address
private_ip = length(var.private_ips) > 0 ? element(var.private_ips, count.index) : var.private_ip
ipv6_address_count = var.ipv6_address_count
ipv6_addresses = var.ipv6_addresses
ebs_optimized = var.ebs_optimized
dynamic "root_block_device" {
for_each = var.root_block_device
content {
delete_on_termination = lookup(root_block_device.value, "delete_on_termination", null)
encrypted = lookup(root_block_device.value, "encrypted", null)
iops = lookup(root_block_device.value, "iops", null)
kms_key_id = lookup(root_block_device.value, "kms_key_id", null)
volume_size = lookup(root_block_device.value, "volume_size", null)
volume_type = lookup(root_block_device.value, "volume_type", null)
}
}
dynamic "ebs_block_device" {
for_each = var.ebs_block_device
content {
delete_on_termination = lookup(ebs_block_device.value, "delete_on_termination", null)
device_name = ebs_block_device.value.device_name
encrypted = lookup(ebs_block_device.value, "encrypted", null)
iops = lookup(ebs_block_device.value, "iops", null)
kms_key_id = lookup(ebs_block_device.value, "kms_key_id", null)
snapshot_id = lookup(ebs_block_device.value, "snapshot_id", null)
volume_size = lookup(ebs_block_device.value, "volume_size", null)
volume_type = lookup(ebs_block_device.value, "volume_type", null)
tags = lookup(ebs_block_device.value, "mount", null)
}
}
dynamic "ephemeral_block_device" {
for_each = var.ephemeral_block_device
content {
device_name = ephemeral_block_device.value.device_name
no_device = lookup(ephemeral_block_device.value, "no_device", null)
virtual_name = lookup(ephemeral_block_device.value, "virtual_name", null)
}
}
dynamic "metadata_options" {
for_each = length(keys(var.metadata_options)) == 0 ? [] : [var.metadata_options]
content {
http_endpoint = lookup(metadata_options.value, "http_endpoint", "enabled")
http_tokens = lookup(metadata_options.value, "http_tokens", "optional")
http_put_response_hop_limit = lookup(metadata_options.value, "http_put_response_hop_limit", "1")
}
}
dynamic "network_interface" {
for_each = var.network_interface
content {
device_index = network_interface.value.device_index
network_interface_id = lookup(network_interface.value, "network_interface_id", null)
delete_on_termination = lookup(network_interface.value, "delete_on_termination", false)
}
}
source_dest_check = length(var.network_interface) > 0 ? null : var.source_dest_check
disable_api_termination = var.disable_api_termination
instance_initiated_shutdown_behavior = var.instance_initiated_shutdown_behavior
placement_group = var.placement_group
tenancy = var.tenancy
tags = merge(
{
"Name" = var.instance_count > 1 || var.use_num_suffix ? format("%s${var.num_suffix_format}-EC2", var.name, count.index + 1) : format("%s-EC2",var.name)
},
{
"ResourceName" = var.instance_count > 1 || var.use_num_suffix ? format("%s${var.num_suffix_format}-EC2", var.name, count.index + 1) : format("%s-EC2",var.name)
},
{"Account" = var.Account,
"Environment" = var.Environment,
"ApplicationName" = var.ApplicationName,
"ApplicationID" = var.ApplicationID,
"Project" = var.Project,
"ProjectCode" = var.ProjectCode,
"Workload" = var.Workload,
"Division" = var.Division,
"Purpose" = var.Purpose,
"VersionNumber" = var.VersionNumber,
"RelVersion" = var.RelVersion,
"OSVersion" = var.OSVersion,
"DBVersion" = var.DBVersion,
"DataClassification" = var.DataClassification,
"Automation" = var.Automation,
"AWSResoureceType" = "EC2",
"BusinessEntitiy" = var.BusinessEntitiy,
"CostCentre" = var.CostCentre,
"BaseImageName" = var.BaseImageName},
var.tags,
)
volume_tags = merge(
{
"Name" = var.instance_count > 1 || var.use_num_suffix ? format("%s${var.num_suffix_format}-EBS", var.name, count.index + 1) : format("%s-EBS",var.name)
},
{
"ResourceName" = var.instance_count > 1 || var.use_num_suffix ? format("%s${var.num_suffix_format}-EBS", var.name, count.index + 1) : format("%s-EBS",var.name)
},
{"Account" = var.Account,
"Environment" = var.Environment,
"ApplicationName" = var.ApplicationName,
"ApplicationID" = var.ApplicationID,
"Project" = var.Project,
"ProjectCode" = var.ProjectCode,
"Workload" = var.Workload,
"Division" = var.Division,
"Purpose" = var.Purpose,
"VersionNumber" = var.VersionNumber,
"RelVersion" = var.RelVersion,
"OSVersion" = var.OSVersion,
"DBVersion" = var.DBVersion,
"DataClassification" = var.DataClassification,
"Automation" = var.Automation,
"AWSResoureceType" = "EC2",
"BusinessEntitiy" = var.BusinessEntitiy,
"CostCentre" = var.CostCentre,
"BaseImageName" = var.BaseImageName},
var.volume_tags,
)
credit_specification {
cpu_credits = local.is_t_instance_type ? var.cpu_credits : null
}
}
deploy-ec2.tf
module "mn-ec2" {
source = "../../../terraform12-modules/aws/ec2-instance"
instance_count = var.master_nodes
name = "${var.Account}-${var.Environment}-${var.ApplicationName}-${var.Project}-${var.Division}-${var.Purpose}-MN"
ami = var.ami_id
instance_type = var.master_node_ec2_type
subnet_ids = ["${data.aws_subnet.primary_subnet.id}","${data.aws_subnet.secondory_subnet.id}","${data.aws_subnet.tertiary_subnet.id}"]
vpc_security_group_ids = ["${module.sg-application-servers.this_security_group_id}"]
iam_instance_profile = "${var.iam_instance_profile}"
key_name = var.key_pair_1
Project = upper(var.Project)
Account = var.Account
Environment = var.Environment
ApplicationName = var.ApplicationName
ApplicationID = var.ApplicationID
ProjectCode = var.ProjectCode
Workload = var.Workload
Division = var.Division
RelVersion = var.RelVersion
Purpose = var.Purpose
DataClassification = var.DataClassification
CostCentre = var.CostCentre
Automation = var.Automation
tags = {
node_type = "master"
}
volume_tags = {
node_type = "master"
}
root_block_device = [
{
encrypted = true
kms_key_id = var.kms_key_id
volume_type = "gp2"
volume_size = 250
},
]
ebs_block_device = [
{
device_name = "/dev/sdc"
encrypted = true
kms_key_id = var.kms_key_id
volume_type = "gp2"
volume_size = 500
mount = "/x02"
},
{
device_name = "/dev/sdd"
encrypted = true
kms_key_id = var.kms_key_id
volume_type = "gp2"
volume_size = 1000
mount = "/x03"
},
{
device_name = "/dev/sde"
encrypted = true
kms_key_id = var.kms_key_id
volume_type = "gp2"
volume_size = 10000
mount = "/x04"
},
]
}
The issue was with the AWS provider version, which didn't support this option. I upgraded to terraform-provider-aws 3.24.0, and now specific tags can be added for each EBS volume.
I ran into a similar problem. Changing from terraform-provider-aws=2 to terraform-provider-aws=3 worked.
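For completeness, a minimal sketch of pinning the newer provider in the root module; the constraint is based on the 3.24.0 version mentioned above (Terraform 0.12 accepts the plain string form shown here; on 0.13+ you would use the object form with source = "hashicorp/aws"):

terraform {
  required_providers {
    # >= 3.24.0 is the version reported to work above
    aws = ">= 3.24.0"
  }
}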

create azure vm from custom image using terraform error

I need to provision VMs in Azure from a custom image using Terraform. Everything works fine with an image from the marketplace, but when I try to specify my custom image, an error is returned. I have been banging my head against this all day.
Here is my tf script:
resource "azurerm_windows_virtual_machine" "tftest" {
name = "myazurevm"
location = "eastus"
resource_group_name = "myresource-rg"
network_interface_ids = [azurerm_network_interface.azvm1nic.id]
size = "Standard_B1s"
storage_image_reference {
id = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxx/providers/Microsoft.Compute/images/mytemplate"
}
storage_os_disk {
name = "my-os-disk"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_data_disk {
name = "my-data-disk"
managed_disk_type = "Premium_LRS"
disk_size_gb = 75
create_option = "FromImage"
lun = 0
}
os_profile {
computer_name = "myvmazure"
admin_username = "admin"
admin_password = "test123"
}
os_profile_windows_config {
provision_vm_agent = true
}
}
Here is the error returned during the plan phase:
2020-07-17T20:02:26.9367986Z ==============================================================================
2020-07-17T20:02:26.9368212Z Task : Terraform
2020-07-17T20:02:26.9368456Z Description : Execute terraform commands to manage resources on AzureRM, Amazon Web Services(AWS) and Google Cloud Platform(GCP)
2020-07-17T20:02:26.9368678Z Version : 0.0.142
2020-07-17T20:02:26.9368852Z Author : Microsoft Corporation
2020-07-17T20:02:26.9369049Z Help : [Learn more about this task](https://aka.ms/AA5j5pf)
2020-07-17T20:02:26.9369262Z ==============================================================================
2020-07-17T20:02:27.2826725Z [command]D:\agent\_work\_tool\terraform\0.12.3\x64\terraform.exe providers
2020-07-17T20:02:27.5303002Z .
2020-07-17T20:02:27.5304176Z └── provider.azurerm
2020-07-17T20:02:27.5304628Z
2020-07-17T20:02:27.5363313Z [command]D:\agent\_work\_tool\terraform\0.12.3\x64\terraform.exe plan
2020-07-17T20:02:29.7788471Z Error: Insufficient os_disk blocks
2020-07-17T20:02:29.7793007Z   on line 0:
2020-07-17T20:02:29.7793199Z   (source code not available)
2020-07-17T20:02:29.7793472Z At least 1 "os_disk" blocks are required.
2020-07-17T20:02:29.7793975Z Error: Missing required argument
Do you have any suggestions to locate the issue?
I have finally figured out the issue. I was using the wrong terraform resource:
wrong --> azurerm_windows_virtual_machine
correct --> azurerm_virtual_machine
azurerm_windows_virtual_machine doesn't support arguments like storage_os_disk and storage_data_disk, and is not the right one for custom images unless the image is published in a Shared Image Gallery.
See documentation for options supported from each provider:
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html
https://www.terraform.io/docs/providers/azurerm/r/windows_virtual_machine.html
First, follow this guide:
https://learn.microsoft.com/pt-br/azure/virtual-machines/windows/upload-generalized-managed?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json
Then here is my full code:
resource "azurerm_resource_group" "example" {
name = "example-resources1"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "example-network1"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal1"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "example-nic1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "internal1"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "example" {
name = "example-machine1"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
vm_size = "Standard_B1s"
network_interface_ids = [
azurerm_network_interface.example.id,
]
storage_image_reference {
id = "/subscriptions/XXXXXXXXXXXXX/resourceGroups/ORIGEM/providers/Microsoft.Compute/images/myImage"
// just copy the id from the image that you created
}
storage_os_disk {
name = "my-os-disk"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
os_profile {
computer_name = "myvmazure"
admin_username = "adminusername"
admin_password = "testenovo#123"
}
os_profile_windows_config {
provision_vm_agent = true
}
}
// below is the code that calls PowerShell via the Custom Script Extension
resource "azurerm_virtual_machine_extension" "software" {
name = "install-software"
//resource_group_name = azurerm_resource_group.example.name
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"
protected_settings = <<SETTINGS
{
"commandToExecute": "powershell -encodedCommand ${textencodebase64(file("install.ps1"), "UTF-16LE")}"
}
SETTINGS
}
You can use a custom image with the "azurerm_windows_virtual_machine" resource by setting the "source_image_id" parameter. The documentation notes that "One of either source_image_id or source_image_reference must be set." source_image_reference is used for marketplace images, while source_image_id is used for managed images or Shared Image Gallery images.
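A minimal sketch of that approach, reusing the resource names from the code above (the credentials and image ID are the same placeholders used there):

resource "azurerm_windows_virtual_machine" "example" {
  name                  = "example-machine1"
  resource_group_name   = azurerm_resource_group.example.name
  location              = azurerm_resource_group.example.location
  size                  = "Standard_B1s"
  admin_username        = "adminusername"
  admin_password        = "testenovo#123"
  network_interface_ids = [azurerm_network_interface.example.id]

  # managed image or Shared Image Gallery image; replaces storage_image_reference
  source_image_id = "/subscriptions/XXXXXXXXXXXXX/resourceGroups/ORIGEM/providers/Microsoft.Compute/images/myImage"

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
}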

How to use locals in terraform to repeat and merge blocks?

I have multiple docker_container resources:
resource "docker_container" "headerdebug" {
name = "headerdebug"
image = "${docker_image.headerdebug.latest}"
labels {
"traefik.frontend.rule" = "Host:debug.in.bb8.fun"
"traefik.port" = 8080
"traefik.enable" = "true"
"traefik.frontend.passHostHeader" = "true"
"traefik.frontend.headers.SSLTemporaryRedirect" = "true"
"traefik.frontend.headers.STSSeconds" = "2592000"
"traefik.frontend.headers.STSIncludeSubdomains" = "false"
"traefik.frontend.headers.customResponseHeaders" = "${var.xpoweredby}"
"traefik.frontend.headers.customFrameOptionsValue" = "${var.xfo_allow}"
}
}
And another one:
resource "docker_container" "cadvisor" {
name = "cadvisor"
image = "${docker_image.cadvisor.latest}"
labels {
"traefik.frontend.rule" = "Host:cadvisor.bb8.fun"
"traefik.port" = 8080
"traefik.enable" = "true"
"traefik.frontend.headers.SSLTemporaryRedirect" = "true"
"traefik.frontend.headers.STSSeconds" = "2592000"
"traefik.frontend.headers.STSIncludeSubdomains" = "false"
"traefik.frontend.headers.contentTypeNosniff" = "true"
"traefik.frontend.headers.browserXSSFilter" = "true"
"traefik.frontend.headers.customFrameOptionsValue" = "${var.xfo_allow}"
"traefik.frontend.headers.customResponseHeaders" = "${var.xpoweredby}"
}
}
I'm trying to use locals to re-use the common labels between both the containers. I have the following local defined:
locals {
traefik_common_labels {
"traefik.frontend.passHostHeader" = "true"
"traefik.frontend.headers.SSLTemporaryRedirect" = "true"
"traefik.frontend.headers.STSSeconds" = "2592000"
"traefik.frontend.headers.STSIncludeSubdomains" = "false"
"traefik.frontend.headers.customResponseHeaders" = "${var.xpoweredby}"
"traefik.frontend.headers.customFrameOptionsValue" = "${var.xfo_allow}"
}
}
But the documentation doesn't mention how to use locals for merging entire blocks, only maps.
I've tried the following:
labels "${merge(
local.traefik_common_labels,
map(
"traefik.frontend.rule", "Host:debug.in.bb8.fun",
"traefik.port", 8080,
"traefik.enable", "true",
)
)}"
which gives the following error:
tf11 plan
Error: Failed to load root config module: Error loading modules: module docker: Error parsing .terraform/modules/2f3785083ce0d0ac2dd3346cf129e795/main.tf: key 'labels "${merge(
local.traefik_common_labels,
map(
"traefik.frontend.rule", "Host:debug.in.bb8.fun",
"traefik.port", 8080,
"traefik.enable", "true",
)
)}"' expected start of object ('{') or assignment ('=')
There is a pretty diff of my attempts at this PR: https://git.captnemo.in/nemo/nebula/pulls/4/files
In Terraform 1.x+ you can use a dynamic block to achieve this
variable "xpoweredby" { default = "" }
variable "xfo_allow" { default = "" }
locals {
traefik_common_labels = {
"traefik.frontend.passHostHeader" = "true"
"traefik.frontend.headers.SSLTemporaryRedirect" = "true"
"traefik.frontend.headers.STSSeconds" = "2592000"
"traefik.frontend.headers.STSIncludeSubdomains" = "false"
"traefik.frontend.headers.customResponseHeaders" = var.xpoweredby
"traefik.frontend.headers.customFrameOptionsValue" = var.xfo_allow
}
}
resource "docker_image" "cadvisor" {
name = "google/cadvisor:latest"
}
resource "docker_container" "cadvisor" {
name = "cadvisor"
image = docker_image.cadvisor.latest
dynamic "labels" {
for_each = merge(local.traefik_common_labels,
{
"traefik.frontend.rule" = "Host:debug.in.bb8.fun",
"traefik.port" = 8080,
"traefik.enable" = "true",
}
)
content {
label = labels.key
value = labels.value
}
}
}
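Applying the same pattern to the headerdebug container from the question only changes the per-container entries passed to merge; a brief sketch, assuming the docker_image.headerdebug resource from the question:

resource "docker_container" "headerdebug" {
  name  = "headerdebug"
  image = docker_image.headerdebug.latest

  dynamic "labels" {
    for_each = merge(local.traefik_common_labels,
      {
        "traefik.frontend.rule" = "Host:debug.in.bb8.fun",
        "traefik.port"          = 8080,
        "traefik.enable"        = "true",
      }
    )
    content {
      label = labels.key
      value = labels.value
    }
  }
}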
In Terraform 0.11, this could be accomplished with the following approach.
You need to assign the value to labels, like so:
locals {
traefik_common_labels {
"traefik.frontend.passHostHeader" = "true"
"traefik.frontend.headers.SSLTemporaryRedirect" = "true"
"traefik.frontend.headers.STSSeconds" = "2592000"
"traefik.frontend.headers.STSIncludeSubdomains" = "false"
"traefik.frontend.headers.customResponseHeaders" = "${var.xpoweredby}"
"traefik.frontend.headers.customFrameOptionsValue" = "${var.xfo_allow}"
}
}
resource "docker_container" "cadvisor" {
name = "cadvisor"
image = "${docker_image.cadvisor.latest}"
labels = "${merge(
local.traefik_common_labels,
map(
"traefik.frontend.rule", "Host:debug.in.bb8.fun",
"traefik.port", 8080,
"traefik.enable", "true",
))}"
}
