I'm trying to create a Lambda with Terraform based on a Docker image, and I want the Lambda to be updated every time I push a new image. I'm trying to use depends_on inside the Lambda resource, but it doesn't work; the Lambda just stays in the same state as before. Is there any other solution, or any way to make it work?
This is the code I have right now:
resource "docker_registry_image" "registry_image" {
name = docker_image.image.name
triggers = {
dir_sha1 = sha1(join("", [for f in fileset("../AWS/", "**") : filesha1("../AWS/${f}")]))
}
}
resource "docker_image" "image" {
name = "${aws_ecr_repository.repo.repository_url}:latest"
triggers = {
dir_sha1 = sha1(join("", [for f in fileset("../AWS/", "**") : filesha1("../AWS/${f}")]))
}
build {
context = "../AWS/"
dockerfile = "Dockerfile"
}
}
resource "aws_ecr_repository" "repo" {
name = "repo"
force_delete = true
}
resource "aws_ecr_repository_policy" "repo_policy" {
repository = aws_ecr_repository.repo.name
policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "Set the permission for ECR",
"Effect": "Allow",
"Principal": "*",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:CompleteLayerUpload",
"ecr:GetDownloadUrlForLayer",
"ecr:GetLifecyclePolicy",
"ecr:InitiateLayerUpload",
"ecr:PutImage",
"ecr:UploadLayerPart"
]
}
]
}
EOF
}
resource "aws_lambda_function" "lambda1" {
depends_on = [docker_image.image]
function_name = "lambda1"
role = aws_iam_role.lambda_role.arn
image_uri = "${aws_ecr_repository.repo.repository_url}:latest"
image_config {
command = ["lambda1.handler"]
working_directory = "/var/task"
}
package_type = "Image"
memory_size = 2048 # Min 128 MB and the Max 10,240 MB, there are some files of 300 MB
timeout = 180
environment {
variables = {
TZ = "Europe/Madrid"
}
}
}
Update
It seems this is the same issue seen with Amazon CDK. I recommend reading
this thread: updating-lambda-using-cdk-doesnt-deploy-latest-image.
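A common workaround is to point the function at the image digest rather than the mutable :latest tag, so every push changes image_uri and forces an update. Here is a hedged sketch, assuming the kreuzwerker/docker provider (whose docker_registry_image resource exports a sha256_digest attribute):
resource "aws_lambda_function" "lambda1" {
  # ... other arguments as above ...
  # Pinning to the digest means a new push produces a new image_uri,
  # which Terraform detects as a change and redeploys the function.
  image_uri = "${aws_ecr_repository.repo.repository_url}@${docker_registry_image.registry_image.sha256_digest}"
}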
Related
I deploy Elastic Cloud (escloud) with Terraform.
I want to add an existing extension, analysis-icu. How can I configure it?
resource "ec_deployment_extension" "icu" {
name = "analysis-icu"
version = "*"
extension_type = "bundle"
download_url = "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-nori/analysis-nori-8.6.1.zip"
}
module "escloud_default" {
source = "./escloud"
name = "${var.environment}-test"
...
elasticsearch_config = {
topologies = [
{
id = "hot_content"
size = var.environment == "prod" ? "2g" : "1g"
size_resource = "memory"
zone_count = var.environment == "prod" ? 2 : 1
autoscaling = {
min_size = ""
min_size_resource = ""
max_size = "116g"
max_size_resource = "memory"
}
},
]
extensions = [
{
name = ec_deployment_extension.nori.name
type = "bundle"
version = "*"
url = ec_deployment_extension.nori.url
}
]
}
...
This code does not apply the existing icu plugin; it just creates a custom bundle.
I solved it. There is a config.plugins argument:
https://registry.terraform.io/providers/elastic/ec/latest/docs/resources/ec_deployment#plugins
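Based on that documentation, a minimal sketch of what this looks like (the exact syntax may differ between provider versions):
resource "ec_deployment" "example" {
  # ... region, version, deployment_template_id, etc. ...
  elasticsearch {
    config {
      # Built-in plugins such as analysis-icu are enabled here,
      # rather than uploaded as a custom bundle extension.
      plugins = ["analysis-icu"]
    }
  }
}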
I'm trying to figure out how to simply start/stop EC2 instances on a schedule using EventBridge.
This behavior can easily be set up via the AWS web console (EventBridge → Rules → Create Rule → AWS service → EC2 StopInstances API call), but I can't figure out how to describe this rule in Terraform.
The only possible way I've found is to create a Lambda, but that looks like huge overhead for this simple action. Is there any way to add an EC2 StopInstances API call rule with Terraform?
Okay, it looks like it is possible to control instance running time with SSM Automation:
variables.tf
variable "start_cron_representation" {
type = string
}
variable "stop_cron_representation" {
type = string
}
variable "instance_id" {
type = string
}
variable "instance_type" {
description = "ec2 or rds"
type = string
}
locals.tf
locals {
  stop_task_name  = var.instance_type == "rds" ? "AWS-StopRdsInstance" : "AWS-StopEC2Instance"
  start_task_name = var.instance_type == "rds" ? "AWS-StartRdsInstance" : "AWS-StartEC2Instance"
  permissions = var.instance_type == "rds" ? [
    "rds:StopDBInstance",
    "rds:StartDBInstance",
    "rds:DescribeDBInstances",
    "ssm:StartAutomationExecution"
  ] : [
    "ec2:StopInstances",
    "ec2:StartInstances",
    "ec2:DescribeInstances",
    "ssm:StartAutomationExecution"
  ]
}
main.tf
data "aws_iam_policy_document" "ssm_lifecycle_trust" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = [
"ssm.amazonaws.com",
"events.amazonaws.com"
]
}
}
}
data "aws_iam_policy_document" "ssm_lifecycle" {
statement {
effect = "Allow"
actions = local.permissions
resources = ["*"]
}
statement {
effect = "Allow"
actions = [
"iam:PassRole"
]
resources = [aws_iam_role.ssm_lifecycle.arn]
}
}
resource "aws_iam_role" "ssm_lifecycle" {
name = "${var.instance_type}-power-control-role"
assume_role_policy = data.aws_iam_policy_document.ssm_lifecycle_trust.json
}
resource "aws_iam_policy" "ssm_lifecycle" {
name = "${var.instance_type}-power-control-policy"
policy = data.aws_iam_policy_document.ssm_lifecycle.json
depends_on = [
aws_iam_role.ssm_lifecycle
]
}
resource "aws_iam_role_policy_attachment" "ssm_lifecycle" {
policy_arn = aws_iam_policy.ssm_lifecycle.arn
role = aws_iam_role.ssm_lifecycle.name
}
resource "aws_cloudwatch_event_rule" "stop_instance" {
name = "stop-${var.instance_type}"
description = "Stop ${var.instance_type} instance"
schedule_expression = var.stop_cron_representation
}
resource "aws_cloudwatch_event_target" "stop_instance" {
target_id = "stop-${var.instance_type}"
arn = "arn:aws:ssm:ap-northeast-1::automation-definition/${local.stop_task_name}"
rule = aws_cloudwatch_event_rule.stop_instance.name
role_arn = aws_iam_role.ssm_lifecycle.arn
input = <<DOC
{
"InstanceId": ["${var.instance_id}"],
"AutomationAssumeRole": ["${aws_iam_role.ssm_lifecycle.arn}"]
}
DOC
}
resource "aws_cloudwatch_event_rule" "start_instance" {
name = "start-${var.instance_type}"
description = "Start ${var.instance_type} instance"
schedule_expression = var.start_cron_representation
}
resource "aws_cloudwatch_event_target" "start_instance" {
target_id = "start-${var.instance_type}"
arn = "arn:aws:ssm:ap-northeast-1::automation-definition/${local.start_task_name}"
rule = aws_cloudwatch_event_rule.start_instance.name
role_arn = aws_iam_role.ssm_lifecycle.arn
input = <<DOC
{
"InstanceId": ["${var.instance_id}"],
"AutomationAssumeRole": ["${aws_iam_role.ssm_lifecycle.arn}"]
}
DOC
}
This module may be called like:
module "ec2_start_and_stop" {
source = "./module_folder"
start_cron_representation = "cron(0 0 * * ? *)"
stop_cron_representation = "cron(0 1 * * ? *)"
instance_id = aws_instance.instance_name.id # or aws_db_instance.db.id for RDS
instance_type = "ec2" # or "rds" for RDS
depends_on = [
aws_instance.instance_name
]
}
I'm trying to create CloudWatch alarms for Windows logical disks.
Terraform Enterprise Edition, ver 0.13.7.
Collecting data for metrics
CloudWatch agent config snippet:
"metrics": {
"namespace": "custom_namespace",
"append_dimensions": {
"ImageId": "${aws:ImageId}",
"InstanceId": "${aws:InstanceId}",
"InstanceType": "${aws:InstanceType}"
},
"metrics_collected": {
"LogicalDisk": {
"measurement": [
{"name": "% Free Space", "unit": "Percent"}
],
"metrics_collection_interval": 60,
"resources": [
"*"
]
},
Terraform code
data.tf
To get info from the servers:
data "aws_instance" "instance_name" {
  for_each    = toset(data.aws_instances.instance_cloudwatch.ids)
  instance_id = each.value
}
To iterate through all servers with the specific tag:
data "aws_instances" "instance_cloudwatch" {
  instance_tags = {
    Type = var.type
  }
}
disk.tf
module "cloudwatch-metric-alarm-disk-usage" {
source = "git::ssh://git#bitbucket.infra.marcus.com:7999/corecard/modules-cloudwatch.git//cw_metric_alarm?ref=v6.0"
for_each = toset(data.aws_instances.instance_cloudwatch.ids)
alarm_name = "${var.app_prefix}-${var.type}-disk-utilization-alarm-${var.environment}-${data.aws_instance.instance_name[each.value].tags["Name"]}"
Alarm name includes the instance name
Also needs to include the disk for uniqueness
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "LogicalDisk % Free Space"
Custom metrics with that name
namespace = var.name_space != "" ? var.name_space : "CC-${upper(var.type)}"
For non-standard namespace
statistic = "Average"
threshold = "2"
period = "60"
alarm_description = "Alarm to trigger PagerDuty when disk utilization is high"
insufficient_data_actions = []
alarm_actions = [aws_sns_topic.sns_general.arn]
dimensions = {
InstanceId = each.value
ImageId = data.aws_instance.instance_name[each.value].ami
InstanceType = data.aws_instance.instance_name[each.value].instance_type
objectname = "LogicalDisk"
instance = "C:"
<--- need to iterate through all disks
}
}
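One way to get one alarm per disk is to build the for_each over instance/disk pairs with setproduct. A hedged sketch, assuming you maintain the list of drive letters yourself (the AWS provider cannot discover Windows drive letters):
locals {
  # Assumption: the set of drive letters to monitor is known in advance.
  disks = ["C:", "D:"]
  instance_disk_pairs = {
    for pair in setproduct(data.aws_instances.instance_cloudwatch.ids, local.disks) :
    "${pair[0]}-${pair[1]}" => {
      instance_id = pair[0]
      disk        = pair[1]
    }
  }
}
The module could then use for_each = local.instance_disk_pairs, with InstanceId = each.value.instance_id and instance = each.value.disk in the dimensions, and each.key appended to alarm_name for uniqueness.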
I'm trying to export list variables and use them via TF_VAR_name, and I'm getting an error when combining them with the toset function.
Success scenario:
terraform apply -auto-approve
# Variables
variable "sg_name" { default = ["SG1", "SG2", "SG3", "SG4", "SG5"] }
variable "Project" { default = "POC" }
variable "Owner" { default = "Me" }
variable "Environment" { default = "Testing" }

locals {
  common_tags = {
    Project     = var.Project
    Owner       = var.Owner
    Environment = var.Environment
  }
}

# Create Security Group
resource "aws_security_group" "application_sg" {
  for_each    = toset(var.sg_name)
  name        = each.value
  description = "${each.value} security group"
  tags        = merge(local.common_tags, { "Name" = each.value })
}

# Output the SG IDs
output "sg_id" {
  value = values(aws_security_group.application_sg)[*].id
}
Failure scenario:
TF_VAR_sg_name='["SG1", "SG2", "SG3", "SG4", "SG5"]' terraform apply -auto-approve
# Variables
variable "sg_name" { }
variable "Project" { default = "POC" }
variable "Owner" { default = "Me" }
variable "Environment" { default = "Testing" }

locals {
  common_tags = {
    Project     = var.Project
    Owner       = var.Owner
    Environment = var.Environment
  }
}

# Create Security Group
resource "aws_security_group" "application_sg" {
  for_each    = toset(var.sg_name)
  name        = each.value
  description = "${each.value} security group"
  tags        = merge(local.common_tags, { "Name" = each.value })
}

# Output the SG IDs
output "sg_id" {
  value = values(aws_security_group.application_sg)[*].id
}
Error
Error: Invalid function argument
on main.tf line 16, in resource "aws_security_group" "application_sg":
16: for_each = toset(var.sg_name)
|----------------
| var.sg_name is "[\"SG1\", \"SG2\", \"SG3\", \"SG4\", \"SG5\"]"
Invalid value for "v" parameter: cannot convert string to set of any single
type.
You'll need to specify the type of your variable (i.e. type = list(string) in your case), and then it should work. Without a declared type, Terraform treats the value passed via TF_VAR_sg_name as a plain string rather than parsing it as a list, which is why toset() cannot convert it.
I tested it with the following configuration:
variable "sg_name" {
type = list(string)
}
resource "null_resource" "application_sg" {
for_each = toset(var.sg_name)
triggers = {
name = each.key
}
}
Then a TF_VAR_sg_name='["SG1", "SG2", "SG3", "SG4", "SG5"]' terraform apply works.
If I remove the type = list(string), it errors out as you say.
It seems the HTTP health check is not occurring; I've come to this conclusion because the HTTP debug log doesn't show any regular periodic requests.
Is there any additional configuration required for a health check to occur?
job "example" {
datacenters = ["dc1"]
type = "service"
update {
max_parallel = 1
min_healthy_time = "10s"
healthy_deadline = "3m"
progress_deadline = "10m"
auto_revert = false
canary = 0
}
migrate {
max_parallel = 1
health_check = "checks"
min_healthy_time = "10s"
healthy_deadline = "5m"
}
group "app" {
count = 1
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
ephemeral_disk {
size = 300
}
task "app" {
driver = "docker"
config {
image = "localhost:5000/myhub:latest"
command = "python"
args = [
"manage.py",
"runserver",
"0.0.0.0:8001"
]
port_map {
app = 8001
}
network_mode = "host"
}
resources {
cpu = 500
memory = 256
network {
mbits = 10
port "app" {}
}
}
service {
name = "myhub"
port = "app"
check {
name = "alive"
type = "http"
port = "app"
path = "/"
interval = "10s"
timeout = "3s"
}
}
}
}
}
It seems Consul must be installed for this to occur; Nomad registers service checks in Consul, and without a local Consul agent the checks are never executed.
Also make sure to install Consul v1.4.2 or later, as v1.4.1 seems to have a bug: https://github.com/hashicorp/consul/issues/5270
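For reference, a minimal sketch of the Nomad client agent configuration that wires it to Consul (assuming a Consul agent running locally on its default port):
# Nomad agent configuration. Assumption: a Consul agent is running locally
# on the default port. With this in place, Nomad registers the "myhub"
# service and its HTTP check in Consul, which then runs the periodic requests.
consul {
  address = "127.0.0.1:8500"
}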