I am creating a Lambda function using Terraform. As per the Terraform syntax, the Lambda code must be passed as a zip file; I am doing that in the resource block below, and the function gets created without any issue. But when I update the Lambda code and run Terraform again, the function is not updated. See the block below for reference.
data "archive_file" "stop_ec2" {
type = "zip"
source_file = "src_dir/stop_ec2.py"
output_path = "dest_dir/stop_ec2_upload.zip"
}
resource "aws_lambda_function" "stop_ec2" {
function_name = "stopEC2"
handler = "stop_ec2.handler"
runtime = "python3.6"
filename = "dest_dir/stop_ec2_upload.zip"
role = "..."
}
I need help resolving this issue.
Set the source_code_hash argument so that Terraform updates the Lambda function whenever the code changes:
resource "aws_lambda_function" "stop_ec2" {
source_code_hash = filebase64sha256("dest_dir/stop_ec2_upload.zip")
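Since the zip is built by the archive_file data source above, you can also take both the filename and the hash straight from that data source; a minimal sketch of the full resource, assuming the same names as in the question:
data "archive_file" "stop_ec2" {
  type        = "zip"
  source_file = "src_dir/stop_ec2.py"
  output_path = "dest_dir/stop_ec2_upload.zip"
}

resource "aws_lambda_function" "stop_ec2" {
  function_name    = "stopEC2"
  handler          = "stop_ec2.handler"
  runtime          = "python3.6"
  filename         = data.archive_file.stop_ec2.output_path
  source_code_hash = data.archive_file.stop_ec2.output_base64sha256
  role             = "..." # your existing IAM role ARN
}
This also ensures Terraform rebuilds the zip before computing the hash, so code changes are always picked up.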
Can anyone reference or show me an example of how to create an AWS Lambda trigger with Terraform?
In the AWS console, after clicking a function name and selecting the Configuration tab, you can create triggers, e.g. an SNS trigger.
For SNS you need to create an SNS subscription:
resource "aws_sns_topic_subscription" "user_updates_lampda_target" {
topic_arn = “sns topic arn”
protocol = "lambda"
endpoint = “lambda arn here”
}
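Note that SNS also needs permission to invoke the function, which is a separate aws_lambda_permission resource; a sketch, where the ARNs and function name are placeholders:
resource "aws_lambda_permission" "allow_sns" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = "name of your lambda function"
  principal     = "sns.amazonaws.com"
  source_arn    = "sns topic arn"
}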
To allow Lambda functions to consume events from Kinesis, DynamoDB, and SQS, you can use an event source mapping:
resource "aws_lambda_event_source_mapping" "example" {
event_source_arn = aws_dynamodb_table.example.stream_arn
function_name = aws_lambda_function.example.arn
starting_position = "LATEST"
}
I have written some Terraform code to create some servers. For the AMI, I was using a Terraform data source to get the latest Ubuntu 16.04 image ID and assign it to the EC2 instances.
Recently I wanted to add another EC2 instance to this environment, but when I run terraform plan I can see that Terraform is trying to delete the existing EC2 instances and recreate them. The reason is that a new Ubuntu image has been released, so Terraform wants to destroy the old instances and create new ones from the new AMI ID.
Is there any way I can address this issue? I don't want to accidentally delete our production servers.
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
module "jenkins" {
source = "terraform-aws-modules/ec2-instance/aws"
name = "Jenkins"
instance_count = 1
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t2.small"
associate_public_ip_address = true
disable_api_termination = true
key_name = "${aws_key_pair.ssh_key.key_name}"
monitoring = false
vpc_security_group_ids = "${module.jenkins_http_sg.this_security_group_id}", "${module.jenkins_https_sg.this_security_group_id}", "${module.ssh_sg.this_security_group_id}"]
subnet_id = "${module.vpc.public_subnets[0]}"
iam_instance_profile = "${aws_iam_instance_profile.update-dns-profile.name}"
tags = {
Terraform = "true"
}
}
While the other answer here helps, I solved the problem by adding the following to the aws_instance resource:
lifecycle {
  ignore_changes = ["ami"]
}
Please note that if you are using the AWS module like I am, you will have to add this code to the module's main.tf file under .terraform/modules/.
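For a plain aws_instance resource (outside the module), the block sits inside the resource itself; a minimal sketch, with the resource name and arguments assumed from the question:
resource "aws_instance" "jenkins" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.small"

  # Keep the instance on the AMI it was created with, even when the
  # data source resolves to a newer image.
  lifecycle {
    ignore_changes = ["ami"]
  }
}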
Terraform is doing exactly as you asked it to do. Each time it runs, it looks for the most recent AMI whose name matches ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-* and then passes that AMI ID to the aws_instance resource. As it is not possible to modify the image ID of an instance, Terraform correctly determines that it must destroy the old instances and rebuild them from the new AMI.
If you want to pin a specific AMI, you should either make the data source return only a single AMI (e.g. by specifying the date stamp in the name filter) or hardcode the AMI ID you want to use:
data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"] # Canonical
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20190403"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
or:
variable "ami" {
default = "ami-0727f3c2d4b0226d5"
}
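You would then reference the pinned value from the module block, e.g.:
module "jenkins" {
  # ... other arguments as before ...
  ami = "${var.ami}"
}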
If you were to remove the most_recent = true parameter, your data source would instead find multiple images matching those criteria and fail, as the aws_ami data source can only return a single AMI:
NOTE: If more or less than a single match is returned by the search, Terraform will fail. Ensure that your search is specific enough to return a single AMI ID only, or use most_recent to choose the most recent one. If you want to match multiple AMIs, use the aws_ami_ids data source instead.
Also note that I added the owners field to your data source. This is required as of version 2.0.0 of the AWS provider, because the previous behaviour was insecure: your data source could have returned any public image that uses that naming scheme.
I'm using Terraform to create resources in Azure. In ARM templates I used uniqueString() to generate storage account names. Is it possible to generate a random name for a storage account using Terraform?
There are several random resources you can use in Terraform:
https://www.terraform.io/docs/providers/random/index.html
Resources
random_id
random_pet
random_shuffle
random_string
Taking random_id as an example, combined with the official example code for the azurerm_storage_account resource, you can define the storage account name easily:
resource "random_id" "storage_account" {
byte_length = 8
}
resource "azurerm_storage_account" "testsa" {
name = "tfsta${lower(random_id.storage_account.hex)}"
resource_group_name = "${azurerm_resource_group.testrg.name}"
location = "westus"
account_type = "Standard_GRS"
tags {
environment = "staging"
}
}
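Keep in mind that storage account names must be 3-24 characters of lowercase letters and digits only. If you want tighter control over the character set, random_string is another option; a sketch, assuming the argument names from the random provider:
resource "random_string" "storage_suffix" {
  length  = 16
  lower   = true
  upper   = false
  number  = true
  special = false
}

resource "azurerm_storage_account" "testsa" {
  name                = "tfsta${random_string.storage_suffix.result}"
  resource_group_name = "${azurerm_resource_group.testrg.name}"
  location            = "westus"
  account_type        = "Standard_GRS"
}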
How can I add a new trigger to an existing AWS Lambda function using the Java API?
I would like to add a CloudWatch Events - Schedule trigger.
It looks like I should use AmazonCloudWatchEventsClient.
How can I set the credentials for the client?
Any examples will be appreciated.
Thanks.
It is possible to add event sources via the AWS SDK. I faced the same issue; see the code below for a solution in Java.
import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.AWSLambdaAsync;
import com.amazonaws.services.lambda.AWSLambdaAsyncClientBuilder;
import com.amazonaws.services.lambda.model.AddPermissionRequest;

AddPermissionRequest addPermissionRequest = new AddPermissionRequest();
addPermissionRequest.setStatementId("12345ff"); // any unique string will do
addPermissionRequest.withSourceArn(ruleArn); // the CloudWatch rule's ARN
addPermissionRequest.setAction("lambda:InvokeFunction");
addPermissionRequest.setPrincipal("events.amazonaws.com");
addPermissionRequest.setFunctionName("name of your lambda function");

// The builder resolves credentials via the default provider chain
// (environment variables, ~/.aws/credentials, or an instance profile).
AWSLambdaAsync lambdaClient = AWSLambdaAsyncClientBuilder.standard()
        .withRegion(Regions.US_EAST_1) // region of your Lambda
        .build();
lambdaClient.addPermission(addPermissionRequest);
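The snippet above only grants CloudWatch Events permission to invoke the function. To create the schedule rule itself and point it at the Lambda, something along these lines should work with the v1 SDK; the rule name, schedule expression, and Lambda ARN below are placeholders:
import com.amazonaws.services.cloudwatchevents.AmazonCloudWatchEvents;
import com.amazonaws.services.cloudwatchevents.AmazonCloudWatchEventsClientBuilder;
import com.amazonaws.services.cloudwatchevents.model.PutRuleRequest;
import com.amazonaws.services.cloudwatchevents.model.PutRuleResult;
import com.amazonaws.services.cloudwatchevents.model.PutTargetsRequest;
import com.amazonaws.services.cloudwatchevents.model.Target;

AmazonCloudWatchEvents eventsClient = AmazonCloudWatchEventsClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();

// Create (or update) the schedule rule.
PutRuleResult rule = eventsClient.putRule(new PutRuleRequest()
        .withName("my-schedule-rule")                // placeholder rule name
        .withScheduleExpression("rate(5 minutes)")); // or a cron(...) expression

// Point the rule at the Lambda function.
eventsClient.putTargets(new PutTargetsRequest()
        .withRule("my-schedule-rule")
        .withTargets(new Target()
                .withId("my-lambda-target") // any id that is unique within the rule
                .withArn("arn of your lambda function")));

// rule.getRuleArn() gives the ruleArn used in the AddPermissionRequest above.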
Thanks, I needed it in Kotlin myself; the thing missing from the previous answer was the dependency:
compile 'com.amazonaws:aws-java-sdk-lambda:1.11.520'
code:
val addPermissionRequest = AddPermissionRequest()
addPermissionRequest.statementId = "12345ff" // any unique string will do
addPermissionRequest.withSourceArn(ruleArn) // the CloudWatch rule's ARN
addPermissionRequest.action = "lambda:InvokeFunction"
addPermissionRequest.principal = "events.amazonaws.com"
addPermissionRequest.functionName = "name of your lambda function"

val lambdaClient = AWSLambdaAsyncClient.builder().build()
lambdaClient.addPermission(addPermissionRequest)
I'm trying to use the attributes of an EC2 instance as variables, but it keeps failing or otherwise not working. As you can see below, I want to insert the private IP of the instance into a config file that will be copied to the instance. A remote-exec script will then move the file into place (/etc/vault.d/server-config.json).
instances.tf
resource "template_file" "tpl-vault-server-config" {
template = "${file("${path.module}/templates/files/vault-server-config.json.tpl")}"
vars {
aws_private_ip = "${aws_instance.ec2-consul-server.private_ip}"
}
}
provisioner "file" {
source = "${template_file.tpl-vault-server-config.rendered}"
destination = "/tmp/vault-server-config.json"
}
vault-server-config.json.tpl
backend "consul" {
address = "127.0.0.1:8500"
path = "vault"
tls_enable = 1
tls_ca_file = "/etc/consul.d/ssl/ca.cert"
tls_cert_file = "/etc/consul.d/ssl/consul.cert"
tls_key_file = "/etc/consul.d/ssl/consul.key"
}
listener "tcp" {
address = "${aws_private_ip}:8200"
tls_cert_file = "/etc/consul.d/ssl/consul.cert"
tls_key_file = "/etc/consul.d/ssl/consul.key"
}
The error on terraform plan is:
* aws_instance.ec2-consul-server: missing dependency: template_file.tpl-vault-server-config
Questions:
Am I taking the wrong approach?
Am I missing something basic?
How do you get an instance's attributes into a file?
Thanks in advance.
I realized that I was defining the template_file resource inside another resource, and this was part of the problem. When I pulled it out to the top level, things worked much more easily.
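For reference, a sketch of how the fixed layout can look, keeping the 0.x-era syntax from the question. The file upload hangs off a separate null_resource so that the instance (whose private IP the template reads) and the upload do not form a dependency cycle; the connection details are placeholders:
resource "template_file" "tpl-vault-server-config" {
  template = "${file("${path.module}/templates/files/vault-server-config.json.tpl")}"

  vars {
    aws_private_ip = "${aws_instance.ec2-consul-server.private_ip}"
  }
}

resource "null_resource" "upload-vault-server-config" {
  provisioner "file" {
    # Use content rather than source: we are passing rendered text,
    # not a path to a local file.
    content     = "${template_file.tpl-vault-server-config.rendered}"
    destination = "/tmp/vault-server-config.json"

    connection {
      type = "ssh"
      host = "${aws_instance.ec2-consul-server.public_ip}"
      user = "ubuntu" # placeholder login user
    }
  }
}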