AWS Lambda Code in S3 Bucket not updating - aws-lambda

I am using CloudFormation to create my Lambda function, with the code in an S3 bucket that has versioning enabled.
"MYLAMBDA": {
"Type": "AWS::Lambda::Function",
"Properties": {
"FunctionName": {
"Fn::Sub": "My-Lambda-${StageName}"
},
"Code": {
"S3Bucket": {
"Fn::Sub": "${S3BucketName}"
},
"S3Key": {
"Fn::Sub": "${artifact}.zip"
},
"S3ObjectVersion": "1e8Oasedk6sDZu6y01tioj8X._tAl3N"
},
"Handler": "streams.lambda_handler",
"Runtime": "python3.6",
"Timeout": "300",
"MemorySize": "512",
"Role": {
"Fn::GetAtt": [
"LambdaExecutionRole",
"Arn"
]
}
}
}
The Lambda function gets created successfully. When I copy a new artifact zip file to the S3 bucket, a new version of the file gets created with a new "S3ObjectVersion" string, but the Lambda function code still uses the older version.
The AWS CloudFormation documentation clearly says the following:
To update a Lambda function whose source code is in an Amazon S3
bucket, you must trigger an update by updating the S3Bucket, S3Key, or
S3ObjectVersion property. Updating the source code alone doesn't
update the function.
Is there an additional trigger event I need to create to get the code updated?

In case anyone is running into a similar issue, I have figured out a way in my case. I use Terraform + Jenkins to create my Lambda functions through an S3 bucket. In the beginning I could create the functions, but they wouldn't update once created, even though I verified the zip files in S3 were updated. It took me some time to figure out that I needed to make one of the following two changes.
Solution 1: Give the object a new key when uploading the new zip file. In my Terraform I add the git commit id as part of the S3 key.
resource "aws_s3_bucket_object" "lambda-abc-package" {
bucket = "${aws_s3_bucket.abc-bucket.id}"
key = "${var.lambda_ecs_task_runner_bucket_key}_${var.git_commit_id}.zip"
source = "../${var.lambda_ecs_task_runner_bucket_key}.zip"
}
Solution 2: Add source_code_hash to the Lambda resource.
resource "aws_lambda_function" "abc-ecs-task-runner" {
s3_bucket = "${var.bucket_name}"
s3_key = "${aws_s3_bucket_object.lambda-ecstaskrunner-package.key}"
function_name = "abc-ecs-task-runner"
role = "${aws_iam_role.AbcEcsTaskRunnerRole.arn}"
handler = "index.handler"
memory_size = "128"
runtime = "nodejs6.10"
timeout = "300"
source_code_hash = "${base64sha256(file("../${var.lambda_ecs_task_runner_bucket_key}.zip"))}"
}
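For context, source_code_hash is just a base64-encoded SHA-256 digest of the deployment package, so any change to the artifact changes the hash and forces Terraform to update the function. A minimal Python illustration of the same computation (the file name is an assumption):
import base64
import hashlib

# Base64-encoded SHA-256 of the zip, equivalent in spirit to
# Terraform's base64sha256(file("lambda_package.zip")).
with open("lambda_package.zip", "rb") as f:
    print(base64.b64encode(hashlib.sha256(f.read()).digest()).decode())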
Either one should work. Also, when checking the Lambda code in the console, refreshing the URL in the browser won't show the update; you need to go back to Functions and open the function again.
Hope this helps.

I also faced the same issue. My code was in Archive.zip in an S3 bucket, and when I uploaded a new Archive.zip, the Lambda was not responding according to the new code.
The solution was to paste the S3 location of Archive.zip into the Lambda's function code section again and save it.
How did I figure out the Lambda was not picking up the new code?
Go to your Lambda function --> Actions --> Export Function --> Download Deployment Package and check whether the code is actually the code you recently uploaded to S3.

You have to update the S3ObjectVersion value to the new version ID in your CloudFormation template itself.
Then you have to update your CloudFormation stack with the new template.
You can do this either in the CloudFormation console or via the AWS CLI (a scripted variant is sketched below).
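As a rough illustration of the scripted route, here is a boto3 sketch. It assumes the hard-coded version string has been turned into a template parameter (called CodeVersion here, an assumed name), along with assumed bucket, key, and stack names:
import boto3

s3 = boto3.client("s3")
cfn = boto3.client("cloudformation")

# On a versioned bucket, head_object returns the current (latest) VersionId.
latest = s3.head_object(Bucket="my-artifact-bucket", Key="artifact.zip")["VersionId"]

# Re-deploy the stack with the same template, passing the new version id.
cfn.update_stack(
    StackName="my-lambda-stack",
    UsePreviousTemplate=True,
    Parameters=[{"ParameterKey": "CodeVersion", "ParameterValue": latest}],
    Capabilities=["CAPABILITY_IAM"],  # the stack creates an IAM role
)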

From the AWS CLI you can do an update-function-code call, as this post mentions: https://nono.ma/update-aws-lambda-function-code
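The same call is available through boto3 if you prefer a script. Note that this updates the function directly, outside of CloudFormation; the function name, bucket, and key below are assumptions for illustration:
import boto3

boto3.client("lambda").update_function_code(
    FunctionName="My-Lambda-dev",   # assumed resolved function name
    S3Bucket="my-artifact-bucket",  # assumed bucket
    S3Key="artifact.zip",           # assumed key
)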

Related

How to refer to an SNS ARN from Terraform code in a Lambda Python file?

My Python Lambda uses an SNS topic ARN, but this ARN is generated by Terraform code. Is there a way to refer to it somehow in the Python Lambda code?
import boto3

def lambda_handler(event, context):
    try:
        # some code
        publish_vote(vote, voter)
    except:
        # some code
        pass
    return {'statusCode': 200, 'body': '{"status": "success"}'}

def publish_vote(vote, voter):
    sns = boto3.client('sns', region_name='us-east-1')
    sns.publish(
        TopicArn='arn:aws:sns:us-east-1:025416187662:erjan',
        Message='""',
        MessageAttributes={
            "vote": {
                "DataType": "String",
                "StringValue": vote,
            },
            "voter": {
                "DataType": "String",
                "StringValue": voter,
            }
        }
    )
SNS terraform code:
resource "aws_sns_topic" "vote_sns" {
name = "erjan-sns"
}
resource "aws_sns_topic_policy" "vote_sns_access_policy" {
arn = aws_sns_topic.vote_sns.arn
policy = data.aws_iam_policy_document.vote_sns_access_policy.json
}
data "aws_iam_policy_document" "vote_sns_access_policy" {
policy_id = "__default_policy_ID"
statement {
#some stuff code
}
}
output "sns_arn_erjan" {
value = aws_sns_topic.vote_sns.arn
description = "aws full sns topic"
}
For your information:
I see you have already solved this problem, but I have one suggestion.
You can put the topic ARN into Parameter Store as a parameter with Terraform, and the Lambda function can then look it up.
resource "aws_ssm_parameter" "vote_sns" {
name = "sns_arn_erjan"
type = "String"
value = aws_sns_topic.vote_sns.arn
}
aws_ssm_parameter | Resources | hashicorp/aws | Terraform Registry
The Lambda function can then read the parameter from Parameter Store using boto3's get_parameter.
get_parameter - SSM — Boto3 Docs 1.26.54 documentation
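A minimal sketch of reading it back inside the handler (the parameter name matches the aws_ssm_parameter resource above):
import boto3

ssm = boto3.client("ssm")
# Returns the topic ARN stored by Terraform in Parameter Store.
sns_arn = ssm.get_parameter(Name="sns_arn_erjan")["Parameter"]["Value"]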
Your Terraform code does not have code for creating the Lambda function itself. Are you creating it manually? If yes, first create that as well using Terraform. A basic example is mentioned here.
Within the definition, there is an argument for environment. Use that to define your env variables as:
environment {
variables = {
SNS_ARN = aws_sns_topic.vote_sns.arn # Arn from the defined sns resource.
}
}
Then refer to it in your Python code as:
import os
SNS_ARN = os.environ.get("SNS_ARN")
...
Alternatively, you could also consider using AWS SAM

What's the right BlobStorageService configuration format?

When creating a Microsoft Bot Framework 4 project, Startup.cs has the following code, which can be uncommented.
const string StorageConfigurationId = "<NAME OR ID>";
var blobConfig = botConfig.FindServiceByNameOrId(StorageConfigurationId);
if (!(blobConfig is BlobStorageService blobStorageConfig))
{
throw new InvalidOperationException($"The .bot file does not contain an blob storage with name '{StorageConfigurationId}'.");
}
This code provides a way to configure an Azure Storage Account via JSON configuration.
However, the project lacks an example of what the config JSON should look like for the "is BlobStorageService" check to work.
I have tried various things and searched for examples but cannot make it work.
Has anyone got this nailed?
Got it working using this JSON:
{
"type": "blob", //Must be 'blob'
"name": "<NAME OF CONFIG - MUST BE UNIQUE (CAN BE ID)>",
"connectionString": "<COPY FROM AZURE DASHBOARD>",
"container": "<NAME OF CONTAINER IN STORAGE>"
}

AWS Lambda:The provided execution role does not have permissions to call DescribeNetworkInterfaces on EC2

Today I have a new AWS Lambda question that I can't find an answer to anywhere on Google.
I created a new Lambda function; no problem there.
But when I input any code in this function (e.g. console.log();) and click "Save", an error occurs:
"The provided execution role does not have permissions to call DescribeNetworkInterfaces on EC2"
exports.handler = (event, context, callback) => {
callback(null, 'Hello from Lambda');
console.log(); // here is my code
};
I bound the function to the role lambda_excute_execution (policy: AmazonElasticTranscoderFullAccess).
This function is not bound to any triggers at the moment.
When I instead give the role the "AdministratorAccess" policy, I can save my source code correctly.
This role could run functions successfully before today.
Does anyone know this error?
Thanks very much!
This error is common if you try to deploy a Lambda in a VPC without giving it the required network interface related permissions ec2:DescribeNetworkInterfaces, ec2:CreateNetworkInterface, and ec2:DeleteNetworkInterface (see AWS Forum).
For example, this is a policy that allows deploying a Lambda into a VPC:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeNetworkInterfaces",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:DescribeInstances",
"ec2:AttachNetworkInterface"
],
"Resource": "*"
}
]
}
If you are using Terraform, just add:
resource "aws_iam_role_policy_attachment" "AWSLambdaVPCAccessExecutionRole" {
role = aws_iam_role.lambda.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
via Managed Policy
This grants Lambda the necessary permissions to dig into a VPC where, for example, a production RDS db resides in a private subnet.
As mentioned by @portatlas above, the AWSLambdaVPCAccessExecutionRole managed policy fits like a glove (and we all know use of IAM Managed Policies is an AWS-recommended best practice).
This is for Lambdas with a service role already attached.
AWS CLI
1. Get Lambda Service Role
Ask the Lambda API for the function configuration, query the role from that, and output to text for an unquoted return value.
aws lambda get-function-configuration \
--function-name <<your function name or ARN here>> \
--query Role \
--output text
The command returns the service role name; take it to step 2:
your-service-role-name
2. Attach Managed Policy AWSLambdaVPCAccessExecutionRole to Service Role
aws iam attach-role-policy \
--role-name your-service-role-name \
--policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
CDK 2 TypeScript
const lambdaVPCExecutionRole:iam.Role = new iam.Role(this, `createLambdaVPCExecutionRole`, {
roleName : `lambdaVPCExecutionRole`,
assumedBy : new iam.ServicePrincipal(`lambda.amazonaws.com`),
description : `Lambda service role to operate within a VPC`,
managedPolicies : [
iam.ManagedPolicy.fromAwsManagedPolicyName(`service-role/AWSLambdaVPCAccessExecutionRole`),
],
});
const lambdaFunction:lambda.Function = new lambda.Function(this, `createLambdaFunction`, {
runtime : lambda.Runtime.NODEJS_14_X,
handler : `lambda.handler`,
code : lambda.AssetCode.fromAsset(`./src`),
vpc : vpc,
role : lambdaVPCExecutionRole,
});
This is actually such a common issue.
You can resolve this by adding a custom Inline Policy to the Lambda execution role under the Permissions tab.
Just add this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeNetworkInterfaces",
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:DescribeInstances",
"ec2:AttachNetworkInterface"
],
"Resource": "*"
}
]
}
There's a full tutorial with pictures here if you need more information (Terraform, CloudFormation, and AWS Console) or are confused: https://ao.ms/the-provided-execution-role-does-not-have-permissions-to-call-createnetworkinterface-on-ec2/
Additionally, a more recent sequence of steps follows:
Under your Lambda Function, select "Configuration"
Select "Permissions"
Select the execution role:
Select "Add Permissions"
Create Inline Policy
Select "JSON"
Paste the JSON above and select Review.
It seems like this has been answered many different ways already, but as of this posting AWS has a managed policy. If you just search for AWSLambdaVPCAccessExecutionRole you will be able to attach it; this method worked for me.
Here is the arn:
arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
Just go to execution role -> Attach policy -> Search for 'AWSLambdaVPCAccessExecutionRole' and add it.
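If you prefer to script the same attachment, here is a boto3 sketch (the role name is an assumption; use your function's execution role):
import boto3

boto3.client("iam").attach_role_policy(
    RoleName="my-lambda-execution-role",  # assumed execution role name
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole",
)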
An example for CloudFormation and AWS SAM users: this Lambda role definition adds the managed AWSLambdaVPCAccessExecutionRole policy and solves the issue:
Type: "AWS::IAM::Role"
Properties:
RoleName: "lambda-with-vpc-access"
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- sts:AssumeRole
Principal:
Service:
- lambda.amazonaws.com
Just because there aren't enough answers already ;) I think this is the easiest way. If you're using the web admin console, when you're creating your Lambda function in the first place, expand 'Advanced Settings' at the bottom, check 'Enable VPC' and choose your VPC. Simple! Before doing this, my connection to my RDS proxy was timing out. After doing this (and nothing else), it works great!
After a bit of experimentation, here is a solution using "least privilege". It's written in Python, for the AWS CDK. However, the same could be applied to plain JSON:
from aws_cdk import aws_iam as iam

iam.PolicyDocument(
    statements=[
        iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=["ec2:DescribeNetworkInterfaces"],
            resources=["*"],
        ),
        iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=["ec2:CreateNetworkInterface"],
            resources=[
                f"arn:aws:ec2:{region}:{account_id}:subnet/{subnet_id}",
                f"arn:aws:ec2:{region}:{account_id}:security-group/{security_group_id}",
                f"arn:aws:ec2:{region}:{account_id}:network-interface/*",
            ],
        ),
        iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=["ec2:DeleteNetworkInterface"],
            resources=[f"arn:aws:ec2:{region}:{account_id}:*/*"],
        ),
    ],
)
Here's a quick and dirty way of resolving the error.
Open IAM on AWS console, select the role that's attached to the Lambda function and give it the EC2FullAccess permission.
This will let you update the Lambda VPC configuration by granting EC2 control access. Be sure to remove the permission from the role afterwards; the function will still run.
Is it more or less secure than leaving some permissions attached permanently? Debatable.
If you are using SAM, you just need to add this to the Globals in the template:
Globals:
  Function:
    VpcConfig:
      SecurityGroupIds:
        - sg-01eeb769XX2d6cc9b
      SubnetIds:
        - subnet-1a0XX614
        - subnet-c6dXXb8b
        - subnet-757XX92a
        - subnet-8afXX9ab
        - subnet-caeXX7ac
        - subnet-b09XXd81
(Of course, you can put it all in variables or parameters!)
Then, for the Lambda function, add Policies to the Properties, like this:
BasicFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - AWSLambdaVPCAccessExecutionRole
      - AWSLambdaBasicExecutionRole
It is definitely a strange error, but are you sure the example code you added is the one you're using in your lambda?
Because in your code, you are trying to log something in your lambda after returning control via the callback. In other words, first you told your lambda that you're done. Next, while it is busy shutting down and returning your results, you try to do some logging...
So first, I'd try this:
exports.handler = (event, context, callback) => {
console.log('this is a test');
// do stuff
callback(null, 'Hello from Lambda'); // only do a callback *after* you've run all your code
};
And see if that fixes the problem.

How do I use CloudFormation resources in a Lambda function?

I have added a Redis ElastiCache section to my s-resource-cf.json (a CloudFormation template), and selected its hostname as an output.
"Resources": {
...snip...
"Redis": {
"Type": "AWS::ElastiCache::CacheCluster",
"Properties": {
"AutoMinorVersionUpgrade": "true",
"AZMode": "single-az",
"CacheNodeType": "cache.t2.micro",
"Engine": "redis",
"EngineVersion": "2.8.24",
"NumCacheNodes": "1",
"PreferredAvailabilityZone": "eu-west-1a",
"PreferredMaintenanceWindow": "tue:00:30-tue:01:30",
"CacheSubnetGroupName": {
"Ref": "cachesubnetdefault"
},
"VpcSecurityGroupIds": [
{
"Fn::GetAtt": [
"sgdefault",
"GroupId"
]
}
]
}
}
},
"Outputs": {
"IamRoleArnLambda": {
"Description": "ARN of the lambda IAM role",
"Value": {
"Fn::GetAtt": [
"IamRoleLambda",
"Arn"
]
}
},
"RedisEndpointAddress": {
"Description": "Redis server host",
"Value": {
"Fn::GetAtt": [
"Redis",
"Address"
]
}
}
}
I can get CloudFormation to output the Redis server host when running sls resources deploy, but how can I access that output from within a Lambda function?
There is nothing in this starter project template that refers to that IamRoleArnLambda, which came with the example project. According to the docs, templates are only usable for project configuration, they are not accessible from Lambda functions:
Templates & Variables are for Configuration Only
Templates and variables are used for configuration of the project only. This information is not usable in your lambda functions. To set variables which can be used by your lambda functions, use environment variables.
So, then how do I set an environment variable to the hostname of the ElastiCache server after it has been created?
You can set environment variables in the environment section of a function's s-function.json file. Furthermore, if you want to prevent those variables from being put into version control (for example, if your code will be posted to a public GitHub repo), you can put them in the appropriate files in your _meta/variables directory and then reference those from your s-function.json files. Just make sure you add a _meta line to your .gitignore file.
For example, in my latest project I needed to connect to a Redis Cloud server, but didn't want to commit the connection details to version control. I put variables into my _meta/variables/s-variables-[stage]-[region].json file, like so:
{
"redisUrl": "...",
"redisPort": "...",
"redisPass": "..."
}
…and referenced the connection settings variables in that function's s-function.json file:
"environment": {
"REDIS_URL": "${redisUrl}",
"REDIS_PORT": "${redisPort}",
"REDIS_PASS": "${redisPass}"
}
I then put this redis.js file in my functions/lib directory:
module.exports = () => {
const redis = require('redis')
const jsonify = require('redis-jsonify')
const redisOptions = {
host: process.env.REDIS_URL,
port: process.env.REDIS_PORT,
password: process.env.REDIS_PASS
}
return jsonify(redis.createClient(redisOptions))
}
Then, in any function that needed to connect to that Redis database, I imported redis.js:
redis = require('../lib/redis')()
(For more details on my Serverless/Redis setup and some of the challenges I faced in getting it to work, see this question I posted yesterday.)
update
CloudFormation usage has been streamlined somewhat since that comment was posted in the issue tracker. I have submitted a documentation update to http://docs.serverless.com/docs/templates-variables, and posted a shortened version of my configuration in a gist.
It is possible to refer to a CloudFormation output in a s-function.json Lambda configuration file, in order to make those outputs available as environment variables.
s-resource-cf.json output section:
"Outputs": {
"redisHost": {
"Description": "Redis host URI",
"Value": {
"Fn::GetAtt": [
"RedisCluster",
"RedisEndpoint.Address"
]
}
}
}
s-function.json environment section:
"environment": {
"REDIS_HOST": "${redisHost}"
},
Usage in a Lambda function:
exports.handler = function(event, context) {
console.log("Redis host: ", process.env.REDIS_HOST);
};
old answer
Looks like a solution was found / implemented in the Serverless issue tracker (link). To quote HyperBrain:
CF Output variables
To have your lambda access the CF output variables you have to give it the cloudformation:describeStacks access rights in the lambda IAM role.
The CF.loadVars() promise will add all CF output variables to the process'
environment as SERVERLESS_CF_OutVar name. It will add a few ms to the
startup time of your lambda.
Change your lambda handler as follows:
// Require Serverless ENV vars
var ServerlessHelpers = require('serverless-helpers-js');
ServerlessHelpers.loadEnv();
// Require Logic
var lib = require('../lib');
// Lambda Handler
module.exports.handler = function(event, context) {
ServerlessHelpers.CF.loadVars()
.then(function() {
lib.respond(event, function(error, response) {
return context.done(error, response);
});
})
};

Have Route 53 point to an instance instead of an IP or CNAME?

We're using Route 53 DNS to point to an EC2 instance. Is there any way to get Route 53 to point to the instance directly, instead of to an Elastic IP or CNAME?
I have multiple reasons for this:
I don't want to burn an IP.
CNAMEs are unreliable, because if an instance goes down and comes back up, the full name, ec2-X-X-X-X.compute-1.amazonaws.com, will change.
In the future, I need to spin up instances programmatically and address them with a subdomain, and I see no easy way to do this with either elastic IPs or CNAMEs.
What's the best approach?
I wrote my own solution to this problem since I was unhappy with other approaches that were presented here. Using Amazon CLI tools is nice, but they IMHO tend to be slower than direct API calls using other Amazon API libraries (Ruby for example).
Here's a link to my AWS Route53 DNS instance update Gist. It contains an IAM policy and a Ruby script. You should create a new user in IAM panel, update it with the attached policy (with your zone id in it) and set the credentials and parameters in the Ruby script. First parameter is the hostname alias for your instance in your hosted zone. Instance's private hostname is aliased to <hostname>.<domain> and instance's public hostname is aliased to <hostname>-public.<domain>
UPDATE: Here's a link to AWS Route53 DNS instance update init.d script registering hostnames when instance boots. Here's another one if want to use AWS Route53 DNS load-balancing in similar fashion.
If you stick with Route 53, you can make a script that updates the CNAME record for that instance every time it reboots.
see this -> http://cantina.co/automated-dns-for-aws-instances-using-route-53/ (disclosure: I did not create this, though I used it as a jumping-off point for a similar situation)
Better yet, because you mentioned being able to spin up instances programmatically, this approach should guide you to that end.
see also -> http://docs.pythonboto.org/en/latest/index.html
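Since the boto docs are linked above, here is a rough boto3 sketch of that boot-time update; the hosted zone id, record name, and use of the instance metadata endpoint are illustrative assumptions:
import urllib.request
import boto3

# Current public hostname from the EC2 instance metadata service.
public_dns = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/public-hostname"
).read().decode()

# UPSERT a CNAME pointing the friendly name at the instance's current hostname.
boto3.client("route53").change_resource_record_sets(
    HostedZoneId="ZXXXXXXXXXXXXX",       # assumed hosted zone id
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "play.example.com",  # assumed record name
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": public_dns}],
            },
        }]
    },
)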
Using a combination of CloudWatch, Route 53 and Lambda is also an option if you host at least part of your DNS in Route 53. The advantage of this is that you don't need any applications running on the instance itself.
To use this approach, you configure a CloudWatch rule to trigger a Lambda function whenever the status of an EC2 instance changes to running. The Lambda function can then retrieve the public IP address of the instance and update the DNS record in Route 53.
The Lambda could look something like this (using Node.js runtime):
var AWS = require('aws-sdk');
var ZONE_ID = 'Z1L432432423';
var RECORD_NAME = 'testaws.domain.tld';
var INSTANCE_ID = 'i-423423ccqq';
exports.handler = (event, context, callback) => {
var retrieveIpAddressOfEc2Instance = function(instanceId, ipAddressCallback) {
var ec2 = new AWS.EC2();
var params = {
InstanceIds: [instanceId]
};
ec2.describeInstances(params, function(err, data) {
if (err) {
callback(err);
} else {
ipAddressCallback(data.Reservations[0].Instances[0].PublicIpAddress);
}
});
}
var updateARecord = function(zoneId, name, ip, updateARecordCallback) {
var route53 = new AWS.Route53();
var dnsParams = {
ChangeBatch: {
Changes: [
{
Action: "UPSERT",
ResourceRecordSet: {
Name: name,
ResourceRecords: [
{
Value: ip
}
],
TTL: 60,
Type: "A"
}
}
],
Comment: "updated by lambda"
},
HostedZoneId: zoneId
};
route53.changeResourceRecordSets(dnsParams, function(err, data) {
if (err) {
callback(err, data);
} else {
updateARecordCallback();
}
});
}
retrieveIpAddressOfEc2Instance(INSTANCE_ID, function(ip) {
updateARecord(ZONE_ID, RECORD_NAME, ip, function() {
callback(null, 'record updated with: ' + ip);
});
});
}
You will need to execute the Lambda with a role that has permissions to describe EC2 instances and update records in Route53.
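For reference, a hedged boto3 sketch of attaching such permissions as an inline policy (the role name is an assumption; the zone id reuses the example value above):
import json
import boto3

boto3.client("iam").put_role_policy(
    RoleName="route53-updater-lambda-role",  # assumed role name
    PolicyName="describe-ec2-and-update-route53",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "ec2:DescribeInstances", "Resource": "*"},
            {"Effect": "Allow", "Action": "route53:ChangeResourceRecordSets",
             "Resource": "arn:aws:route53:::hostedzone/Z1L432432423"},
        ],
    }),
)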
With Route 53 you can create alias records that map to an Elastic Load Balancer (ELB):
http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/HowToAliasRRS.html
I've not tried it on an AWS EC2 instance, but it should work there too.
I've written a small Java program that detects the public IP of the machine and updates a certain record in AWS Route 53.
The only requirement is that you need Java installed on your EC2 instance.
The project is hosted at https://github.com/renatodelgaudio/awsroute53 and you are free to modify it in case you need to.
You could configure it to run at boot time or as a crontab job so that your record gets updated with the new public IP, following instructions similar to these:
Linux manual installation steps
I used this cli53 tool to let an EC2 instance create an A record for itself during startup.
https://github.com/barnybug/cli53
I added the following lines to my rc.local (please check that your Linux distribution calls this script during startup):
IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
/usr/local/bin/cli53 rrcreate example.com "play 30 A $IP" --wait --replace
It creates an A record play.example.com pointing to the current public IP of the EC2 instance.
You need to assign an IAM role to the EC2 instance that allows the instance to manipulate Route 53. In the simplest case, just create an IAM role using the predefined policy AmazonRoute53FullAccess, then assign this role to the EC2 instance.
Assuming the EC2 instance has the aws command configured with proper permissions, the following shell script does it:
#!/bin/bash
IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
PROFILE="dnsuserprofile"
ZONE="XXXXXXXXXXXXXXXXXXXXX"
DOMAIN="my.domain.name"
TMPFILE="/tmp/updateip.json"
cat << EOF > $TMPFILE
{
"Comment": "Updating instance IP address",
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "$DOMAIN",
"Type": "A",
"TTL": 300,
"ResourceRecords": [
{
"Value": "$IP"
}
]
}
}
]
}
EOF
aws route53 change-resource-record-sets --profile $PROFILE --hosted-zone-id $ZONE --change-batch file://$TMPFILE > /dev/null && \
rm $TMPFILE
Set that script to run on reboot, for example in cron:
@reboot /home/ec2-user/bin/updateip
The IAM policy can be as narrow as:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "route53:ChangeResourceRecordSets",
"Resource": "arn:aws:route53:::hostedzone/XXXXXXXXXXXXXXXXXXXXX"
}
]
}
