Tflocal: unsupported argument meteringmarketplace - aws-lambda

I'm trying to run terraform-local to test out my modules before deployment. I've run into an error when trying to run my stack locally:
Error: Unsupported argument
on localstack_providers_override.tf line 67, in provider "aws":
67: meteringmarketplace = "http://localhost:4566"
An argument named "meteringmarketplace" is not expected here.
For context, my terraform templates specify the following resources:
A lambda function with a node runtime
An API Gateway
Cloudwatch log groups, IAM roles, s3 objects and some other minor resources
I'm also running terraform v1.2.7 and terraform-local v1.2.7
Any idea how I might fix this error?

I get exactly the same error. I assume the terraform-local configuration is setting that "meteringmarketplace" endpoint, which is not actually there anymore (I think it was renamed?).
One possibility is to do the local configuration directly yourself: instead of terraform-local, use plain terraform with your own overrides and run it against LocalStack (https://github.com/localstack/localstack).
As an example, I used the code from the Terraform page:
main.tf:
provider "aws" {
access_key = "mock_access_key"
region = "us-east-1"
s3_force_path_style = true
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
resource "aws_s3_bucket" "test-bucket" {
bucket = "my-bucket"
}
If you have LocalStack running with the default settings, you should be able to run "terraform plan" against it.
Maybe that helps you as a workaround.
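One caveat worth adding: on version 4 and later of the AWS provider, s3_force_path_style was removed and replaced by s3_use_path_style, so with a current provider the flag in the block above would become (a sketch of just the changed line, everything else as above):
provider "aws" {
  # AWS provider v4+ renamed the path-style flag
  s3_use_path_style = true
  # ... remaining arguments and endpoints unchanged
}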

Related

Error in creating EC2 resource using Terraform

I'm trying to create an EC2 instance using Terraform (I'm new to the area). I'm following the tutorial, but I think there's something wrong with the user I created in AWS.
Steps I followed:
Create a user in AWS
a) I added it to a group that has the AmazonEC2FullAccess policy
b) I created the credentials to use the AWS CLI
I used the file suggested by the Terraform tutorial
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
I ran the aws configure command and put in the key and secret key values.
I ran terraform init and it worked.
When I run terraform plan, the error appears:
Error: configuring Terraform AWS Provider: error validating provider credentials: retrieving caller identity from STS: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: xxxxxxxxxxxxxxxx, api error InvalidClientTokenId: The security token included in the request is invalid.
Any idea?
I missed the parameter "profile" in main.tf; with it set, the provider reads the credentials that aws configure stored for that profile. So the new file is:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "us-east-1"
  profile = "default"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0557a15b87f6559cf"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
It works now!

Error while spinning up an EC2 instance with a key using a Terraform script

I want to provision an EC2 instance with a key pair and run a script inside the instance.
filename: instance.tf
resource "aws_key_pair" "mykey" {
key_name = "terraform-nverginia"
public_key = "${file ("${var.PATH_TO_PUBLIC_KEY}")}"
}
resource "aws_instance" "demo" {
ami = "${lookup (var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
key_name = "${aws_key_pair.mykey.key_name}"
tags = {
Name = "T-instance"
}
provisioner "file" { // copying file from local to remote server
source = "deployscript.sh"
destination = "/home/ec2-user/deploy.sh" //check if both the file names are same or not.
}
provisioner "remote-exec" { // executing script to do some deployment in the server.
inline = [
"chmod +x /home/ec2-user/deploy.sh",
"sudo /home/ec2-user/deploy.sh"
]
}
connection {
type = "ssh" // To connect to the instance
user = "${var.INSTANCE_USERNAME}"
host = "122.171.19.4" // My personal laptop's ip address
private_key = "${file ("${var.PATH_TO_PRIVATE_KEY}")}"
}
} // end of resource aws_instance
//-------------------------------------------------
filename: provider.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}
filename: vars.tf
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}

variable "AWS_REGION" {
  default = "us-east-1"
}

variable "AMIS" {
  type = map
  default = {
    us-east-1 = "ami-0574da719dca65348"
    us-east-2 = "ami-0a606d8395a538502"
  }
}

variable "PATH_TO_PRIVATE_KEY" {
  default = "terraform-nverginia"
}

variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}

variable "INSTANCE_USERNAME" {
  default = "ec2-user"
}

filename: terraform.tfvars
AWS_ACCESS_KEY = "<Access key>"
AWS_SECRET_KEY = "<Secret key>"
Error:
PS D:\Rajiv\DevOps-Practice\Terraform\demo-2> terraform plan
╷
│ Error: Invalid provider configuration
│
│ Provider "registry.terraform.io/hashicorp/aws" requires explicit configuration. Add a provider block to the root module and configure
│ the provider's required arguments as described in the provider documentation.
│
│ Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error
│ STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 594b6dab-e087-4678-8c57-63a65c3d3d41, api error
│ InvalidClientTokenId: The security token included in the request is invalid.
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on <empty> line 0:
│ (source code not available)
I am expecting an EC2 instance to be created and the script to be run.
Providers are plugins that help Terraform interact with specific cloud services. You must declare and install a cloud provider before you can use its services via Terraform. Refer to this link: https://developer.hashicorp.com/terraform/language/providers. In your code, try adding the AWS provider:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.48.0"
    }
  }
}

provider "aws" {
  # Configuration options
}
Then run terraform init to install the provider.
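Separately, the InvalidClientTokenId part of the error suggests the provider is not picking up valid credentials. Since vars.tf already declares AWS_ACCESS_KEY and AWS_SECRET_KEY, a minimal sketch of wiring them into the provider block could look like this (variable names taken from the question's vars.tf; relying on aws configure profiles instead is equally valid):
provider "aws" {
  region     = var.AWS_REGION
  access_key = var.AWS_ACCESS_KEY # declared in the question's vars.tf
  secret_key = var.AWS_SECRET_KEY # declared in the question's vars.tf
}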

Terraform: CloudWatch Event that notifies SNS

I'm learning TF and trying to apply an infrastructure that creates:
a simple lambda function
an SNS topic
subscribe that lambda to the SNS topic
a CloudWatch Event that publishes a message to the topic at some interval
a CloudWatch Log Group to check whether the lambda gets notified by SNS
the Lambda permission to allow calls from SNS
I'm able to apply that successfully, and the infrastructure seems perfectly fine (it looks the same as when I create it myself through the AWS console).
But the CloudWatch Event doesn't get triggered when built from TF, so no message is published to SNS and the lambda doesn't get called. I don't know why.
Anyone know how I can accomplish that? Below is my .tf script:
provider "aws" {
region = "us-east-1"
}
//lambda function handler & code file
resource "aws_lambda_function" "lambda-function" {
function_name = "Function01"
handler = "com.rafael.lambda.Function01"
role = "arn:aws:iam::12345:role/LambdaRoleTest"
runtime = "java8"
s3_bucket = aws_s3_bucket.sns-test.id
s3_key = aws_s3_bucket_object.file_upload.id
source_code_hash = filebase64sha256("../target/sns-cw-lambda-poc.jar")
}
//allow sns to call lambda
resource "aws_lambda_permission" "allow-sns-to-lambda" {
function_name = aws_lambda_function.lambda-function.function_name
action = "lambda:InvokeFunction"
principal = "sns.amazonaws.com"
source_arn = aws_sns_topic.call-lambdas-topic.arn
statement_id = "AllowExecutionFromSNS"
}
//app s3 repository
resource "aws_s3_bucket" "sns-test" {
bucket = "app-bucket-12345"
region = "us-east-1"
}
//app jar file
resource "aws_s3_bucket_object" "file_upload" {
depends_on = [
aws_s3_bucket.sns-test
]
bucket = aws_s3_bucket.sns-test.id
key = "sns-cw-lambda-poc.jar"
source = "../target/sns-cw-lambda-poc.jar"
server_side_encryption = "AES256"
etag = filebase64sha256("../target/sns-cw-lambda-poc.jar")
}
//to check lambda exec logs
resource "aws_cloudwatch_log_group" "lambda-cloudwatch-logs" {
name = "/aws/lambda/${aws_lambda_function.lambda-function.function_name}"
retention_in_days = 1
}
//rule to trigger SNS
resource "aws_cloudwatch_event_rule" "publish-sns-rule" {
name = "publish-sns-rule"
schedule_expression = "rate(1 minute)"
}
//cloud watch event targets SNS
resource "aws_cloudwatch_event_target" "sns-publish" {
count = "1"
rule = aws_cloudwatch_event_rule.publish-sns-rule.name
target_id = aws_sns_topic.call-lambdas-topic.name
arn = aws_sns_topic.call-lambdas-topic.arn
}
//SNS topic to subscribe
resource "aws_sns_topic" "call-lambdas-topic" {
name = "call-lambdas-topic"
}
//lambda subscribes the topic, so it should be nofied when other resource publishes to the topic
resource "aws_sns_topic_subscription" "sns-lambda-subscritption" {
topic_arn = aws_sns_topic.call-lambdas-topic.arn
protocol = "lambda"
endpoint = aws_lambda_function.lambda-function.arn
}
I figured it out: I forgot to add the SNS topic policy that allows CloudWatch Events to publish to the SNS topic. To get the above script to work, just add this:
resource "aws_sns_topic_policy" "default" {
count = 1
arn = aws_sns_topic.call-lambdas-topic.arn
policy = "${data.aws_iam_policy_document.sns_topic_policy.0.json}"
}
data "aws_iam_policy_document" "sns_topic_policy" {
count = "1"
statement {
sid = "Allow CloudwatchEvents"
actions = ["sns:Publish"]
resources = [aws_sns_topic.call-lambdas-topic.arn]
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
}
}

Winston CloudWatch Transport not Creating Logs When Running on Lambda

I have an Express.js app that is set up to run from within an AWS Lambda function. When I deploy this app to the Lambda, the console logs show up in the Lambda's own CloudWatch log group (i.e. /aws/lambda/lambda-name), but it doesn't create the new CloudWatch log group specified in the configuration.
If I run the lambda function locally and generate logs it will create a CloudWatch Log Group for the local environment.
The Lambda Functions are connecting to an RDS instance so they are contained within a VPC.
The Lambda has been assigned the CloudWatchFullAccess policy so it should not be a permissions error.
I've looked at the Lambda logs and I'm not seeing any errors coming through related to this.
const env = process.env.NODE_ENV || 'local'
const config = require('../../config/env.json')[env]
const winston = require('winston')
const WinstonCloudwatch = require('winston-cloudwatch')
const crypto = require('crypto')

let startTime = new Date().toISOString()

const logger = winston.createLogger({
  exitOnError: false,
  level: 'info',
  transports: [
    new winston.transports.Console({
      json: true,
      colorize: true,
      level: 'info'
    }),
    new WinstonCloudwatch({
      awsAccessKeyId: config.aws.accessKeyId,
      awsSecretKey: config.aws.secretAccessKey,
      logGroupName: 'my-api-' + env,
      logStreamName: function () {
        // Spread log streams across dates as the server stays up
        let date = new Date().toISOString().split('T')[0]
        return 'my-requests-' + date + '-' +
          crypto.createHash('md5')
            .update(startTime)
            .digest('hex')
      },
      awsRegion: 'us-east-1',
      jsonMessage: true
    })
  ]
})

const winstonStream = {
  write: (message, encoding) => {
    // use the 'info' log level so the output will be picked up by both transports
    logger.info(message)
  }
}

module.exports.logger = logger
module.exports.winstonStream = winstonStream
Then within my express app:
const morgan = require('morgan')
const { winstonStream } = require('./providers/loggers')

app.use(morgan('combined', { stream: winstonStream }))
Confirming that the problem was related to the Lambda function being in a VPC without public internet access through subnets, route tables, NAT and internet gateways, as described in this post: https://gist.github.com/reggi/dc5f2620b7b4f515e68e46255ac042a7
I believe that to access external internet services you'd need what you described.
But to access an AWS service outside the VPC you can create a VPC endpoint.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html
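For reference, a minimal Terraform sketch of such an interface endpoint for CloudWatch Logs (the VPC, subnet, and security group references here are hypothetical placeholders for your own resources):
resource "aws_vpc_endpoint" "cloudwatch_logs" {
  vpc_id              = aws_vpc.main.id                 # hypothetical: your VPC
  service_name        = "com.amazonaws.us-east-1.logs"  # CloudWatch Logs in us-east-1
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private.id]         # hypothetical: the lambda's subnets
  security_group_ids  = [aws_security_group.lambda.id]  # hypothetical: must allow HTTPS (443)
  private_dns_enabled = true
}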

Lambda chaining - Invoke lambda from another lambda using terraform

I am trying to invoke one AWS lambda from another to perform lambda chaining. The rationale behind doing this is that AWS does not allow multiple triggers from the same S3 bucket.
I have created one lambda with an S3 trigger. The Java code of the first lambda listens to the S3 event and contains the invocation of the second lambda, which is invoked from the first. Both lambdas are created by Terraform.
Lambda A has the S3 trigger, so it will be invoked on an S3 event on a particular bucket. Lambda A does the processing and then invokes Lambda B using an invoke request. The Java code in Lambda A that invokes Lambda B is:
public class EventHandler implements RequestHandler<S3Event, String> {

    @Override
    public String handleRequest(S3Event event, Context context) throws RuntimeException {
        // build the payload for Lambda B (construction of `json` is elided in the question)
        InvokeRequest req = new InvokeRequest()
                .withFunctionName("LambdaFunctionB")
                .withPayload(json);
        // send the invocation (the client call is implied by the error further below)
        AWSLambda client = AWSLambdaClientBuilder.defaultClient();
        client.invoke(req);
        return "Lambda B invoked";
    }
}
Both the lambdas are created using terraform. Scripts below:
Lambda A terraform:
module "lambda_function" {
source = "Git Path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionA"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "EventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_bucket" {
statement_id = "statementId"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "s3.bucket.arn"
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "bucketName"
lambda_function {
lambda_function_arn = "${module.lambda_function.lambda_arn}"
events = ["s3:ObjectCreated:*"]
filter_prefix = "path/subPath"
}
}
Lambda B terraform:
module "lambda_function" {
source = "git path"
absolute_artifact_path = "../lambda.jar"
lambda_function_name = "LambdaFunctionB"
lambda_function_description = ""
lambda_function_runtime = "java8"
lambda_handler_name = "LambdaBEventHandler"
lambda_execution_role_name = "lambda-iam-role"
lambda_memory = "512"
dead_letter_target_arn = "error-handling-arn"
}
resource "aws_lambda_permission" "allow_lambda" {
statement_id = "AllowExecutionFromLambda"
action = "lambda:InvokeFunction"
function_name = "${module.lambda_function.lambda_arn}"
principal = "s3.amazonaws.com"
source_arn = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionA"
}
lambda-iam-role has the below policies attached:
AmazonS3FullAccess
AWSLambdaBasicExecutionRole
AWSLambdaVPCAccessExecutionRole
AmazonSNSFullAccess
CloudWatchEventsFullAccess
My expectation was that Lambda A would successfully invoke Lambda B, but I am getting an AccessDeniedException in Lambda A's logs and it is not able to invoke Lambda B. The error is:
com.amazonaws.services.lambda.model.AWSLambdaException: User: arn:aws:sts::xxxxxxxxx:assumed-role/lambda-iam-role/LambdaFunctionA is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:eu-west-1:xxxxxxxxx:function:LambdaFunctionB (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: f495ede3-b3cb-47a1-b884-16996545233d)
Hope this helps you; it's not exactly the same, but it shows invoking one lambda from another lambda: Github
I think the lambda needs the "lambda:InvokeFunction" permission as well.
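In Terraform, that permission could be attached to the execution role roughly like this (a sketch assuming the role name and function ARN from the question; the resource and policy names are hypothetical):
resource "aws_iam_role_policy" "allow_invoke_lambda_b" {
  name = "allow-invoke-lambda-b"
  role = "lambda-iam-role" # the execution role from the question

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "lambda:InvokeFunction"
      Resource = "arn:aws:lambda:eu-west-1:xxxxxxxxxx:function:LambdaFunctionB"
    }]
  })
}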
I found an answer online, using the aws-sdk.
var aws = require('aws-sdk');

var lambda = new aws.Lambda({
  region: 'default'
});

lambda.invoke({
  FunctionName: 'name_of_your_lambda_function',
  Payload: JSON.stringify(event, null, 2) // pass params
}, function (error, data) {
  if (error) {
    context.done('error', error);
  }
  if (data.Payload) {
    context.succeed(data.Payload);
  }
});
You can find the doc here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
Hope it helps :)
