AWS - Configuring Lambda Destinations with SNS

I'm trying to configure an AWS Lambda function to pipe its output into an SNS notification, but it doesn't seem to work. The function executes successfully in the Lambda console and I can see the output is correct, but SNS never seems to get notified or publish anything. I'm using Terraform to stand up my infra; here is the Terraform code I'm using, maybe someone can help me out:
resource "aws_lambda_function" "lambda_apigateway_to_sns_function" {
filename = "../node/lambda.zip"
function_name = "LambdaPublishToSns"
handler = "index.snsHandler"
role = aws_iam_role.lambda_apigateway_to_sns_execution_role.arn
runtime = "nodejs12.x"
}
resource "aws_iam_role" "lambda_apigateway_to_sns_execution_role" {
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "apigateway_to_sns_sns_full_access" {
policy_arn = "arn:aws:iam::aws:policy/AmazonSNSFullAccess"
role = aws_iam_role.lambda_apigateway_to_sns_execution_role.name
}
resource "aws_lambda_function_event_invoke_config" "example" {
function_name = aws_lambda_function.lambda_apigateway_to_sns_function.arn
destination_config {
on_success {
destination = aws_sns_topic.sns_topic.arn
}
on_failure {
destination = aws_sns_topic.sns_topic.arn
}
}
}
And here's my Lambda function code (in NodeJS):
exports.snsHandler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  callback(null, {
    statusCode: 200,
    body: event.body + " apigateway"
  });
};
(The function is supposed to take input from API Gateway, append "apigateway" to whatever is in the body of the API Gateway request, and pass the message on; I've tested the integration with API Gateway and that integration works perfectly.)
Thanks!

Related

How to solve "The IAM role configured on the integration or API Gateway doesn't have permissions to call the integration"

I have a Lambda function and an API Gateway v2. I am creating everything via Terraform as below.
resource "aws_lambda_function" "prod_options" {
description = "Production Lambda"
environment {
variables = var.prod_env
}
function_name = "prod-func"
handler = "index.handler"
layers = [
aws_lambda_layer_version.node_modules_prod.arn
]
memory_size = 1024
package_type = "Zip"
reserved_concurrent_executions = -1
role = aws_iam_role.lambda_exec.arn
runtime = "nodejs12.x"
s3_bucket = aws_s3_bucket.lambda_bucket_prod.id
s3_key = aws_s3_bucket_object.lambda_node_modules_prod.key
source_code_hash = data.archive_file.lambda_node_modules_prod.output_base64sha256
timeout = 900
tracing_config {
mode = "PassThrough"
}
}
and the role:
resource "aws_iam_role_policy_attachment" "lambda_policy" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role" "lambda_exec" {
name = "api_gateway_role"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com",
"lambda.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
})
}
and then the permissions:
resource "aws_lambda_permission" "prod_api_gtw" {
statement_id = "AllowExecutionFromApiGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.prod_options.function_name
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.gateway_prod.execution_arn}/*/*"
}
After I deploy and try to invoke the URL, I get the following error:
"integrationErrorMessage": "The IAM role configured on the integration or API Gateway doesn't have permissions to call the integration. Check the permissions and try again.",
I've been stuck with this for a while now. How can I solve this error?
You may have to create a Lambda permission to allow execution from an API Gateway resource:
resource "aws_lambda_permission" "apigw_lambda" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.layout_editor_prod_options.function_name
principal = "apigateway.amazonaws.com"
# The /*/*/* part allows invocation from any stage, method and resource path
# within API Gateway REST API.
source_arn = "${aws_api_gateway_rest_api.rest_api.execution_arn}/*/*/*"
}
Also, for the Lambda lambda_exec role, you don't need the apigateway.amazonaws.com principal. The reason is that the execution role applies to the function and allows it to interact with other AWS services. On the other hand, it won't allow anything for API Gateway; for that we need a Lambda permission.
resource "aws_iam_role" "lambda_exec" {
name = "lambda_exec_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
Additionally, I would add a policy to the Lambda execution role so that it can log to CloudWatch. This might be useful for further debugging:
resource "aws_iam_policy" "lambda_logging" {
name = "lambda_logging"
path = "/"
description = "IAM policy for logging from a lambda"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*",
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "lambda_logs" {
role = aws_iam_role.lambda_exec.name
policy_arn = aws_iam_policy.lambda_logging.arn
}

Error creating ElasticSearch domain: ValidationException: Authentication error

I have been getting this error lately while creating an ES domain using Terraform. Nothing has changed in the way I define the ES domain. I did, however, start using SSL (an AWS ACM cert) on the ALB layer, but that should not have affected this. Any ideas what it might be complaining about?
resource "aws_elasticsearch_domain" "es" {
domain_name = "${var.es_domain}"
elasticsearch_version = "6.3"
cluster_config {
instance_type = "r4.large.elasticsearch"
instance_count = 2
zone_awareness_enabled = true
}
vpc_options {
subnet_ids = "${var.private_subnet_ids}"
security_group_ids = [
"${aws_security_group.es_sg.id}"
]
}
ebs_options {
ebs_enabled = true
volume_size = 10
}
access_policies = <<CONFIG
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "es:*",
"Principal": "*",
"Effect": "Allow",
"Resource": "arn:aws:es:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:domain/${var.es_domain}/*"
}
]
}
CONFIG
snapshot_options {
automated_snapshot_start_hour = 23
}
tags = {
Domain = "${var.es_domain}"
}
depends_on = [
"aws_iam_service_linked_role.es",
]
}
resource "aws_iam_service_linked_role" "es" {
aws_service_name = "es.amazonaws.com"
}
EDIT: Oddly enough, when I stopped using the ACM cert and moved back to HTTP (port 80) for my ALB listener, the ES domain was provisioned.
Not sure what to make of this, but clearly the ACM cert is interfering with the ES domain creation, or I am doing something wrong with the ACM creation. Here is how I create and use it:
resource "aws_acm_certificate" "ssl_cert" {
domain_name = "api.xxxx.io"
validation_method = "DNS"
tags = {
Environment = "development"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_alb_listener" "alb_listener" {
load_balancer_arn = "${aws_alb.alb.id}"
port = "443"
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = "${aws_acm_certificate.ssl_cert.arn}"
default_action {
target_group_arn = "${aws_alb_target_group.default.id}"
type = "forward"
}
}
The cert is validated and issued by AWS pretty fast as far as I can see in the console. And as seen, it has nothing to do with the ES domain per se.
It sometimes happens that the ES domain is created before the service-linked role has been enabled, even when using depends_on.
Maybe you can try using a local-exec provisioner to wait:
resource "aws_iam_service_linked_role" "es" {
aws_service_name = "es.amazonaws.com"
provisioner "local-exec" {
command = "sleep 10"
}
}
The below is enough for the service-linked role creation; also include the role in the depends_on of the ES domain:
resource "aws_iam_service_linked_role" "es" {
aws_service_name = "es.amazonaws.com"
}

Writing authorizers for serverless framework in Ruby

I'm attempting to write an authorizer to protect calls to a lambda using the serverless framework. I'm using Ruby.
Configuration:
provider:
  name: aws
  runtime: ruby2.5
  iamRoleStatements:
    - Effect: Allow
      Action:
        - KMS:Decrypt
      Resource: ${self:custom.kmsSecrets.keyArn}
functions:
  authorize:
    handler: handler.authorize
  hello:
    handler: handler.protected
    events:
      - http:
          path: protected
          method: get
          authorizer: authorize
The authorizer:
def authorize(event:, context:)
  if is_authorized?
    {
      "policyDocument": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Action": "execute-api:Invoke",
            "Resource": [ context.invoked_function_arn ],
            "Effect": "Allow"
          }
        ]
      },
      "principalId": "seeker"
    }.to_json
  end
end
The authentication I'd like to implement is token based: the is_authorized? method would receive the token and then return a policy that would allow access to the protected lambda function.
I'm not entirely sure what goes in the PrincipalId argument - I have no user.id.
Right now it complains that "seeker-dev-authorize is not authorized to perform: iam:CreatePolicy on resource: policy seeker-allowed", which leaves me quite confused: I can't create a policy... on the policy? And where should I set this permission, in IAM or in serverless.yml? Since I've set the permissions to encode/decode keys in serverless, maybe I should do the same with this?
I haven't used custom authorizers before, but I put together a small hello world project to try this out and this is what I found.
The protected function and the authorizer function:
def authorize(event:, context:)
  {
    "principalId": "test",
    "policyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "execute-api:Invoke",
          "Effect": "Allow",
          "Resource": event["methodArn"]
        }
      ]
    }
  }
end
def hello(event:, context:)
  { statusCode: 200, body: JSON.generate('This is a protected endpoint!') }
end
Note that I'm returning the hash and not a string with to_json; I got an error back from the authorizer when using to_json.
Also note that I'm using event["methodArn"] to get the protected Lambda ARN; using context.invoked_function_arn also caused me an error.
Besides that, not including an Authorization header in the request will return an "Unauthorized" error:
curl -X GET https://endpoint/dev/hello -H 'Authorization: test'
Lastly, about the principalId:
The principalId is a required property on your authorizer response. It represents the principal identifier for the caller. This may vary from application-to-application, but it could be a username, an email address, or a unique ID.
Source: https://www.alexdebrie.com/posts/lambda-custom-authorizers/

How to properly format data with AppSync and DynamoDB when Lambda is in between

Receiving data with AppSync directly from DynamoDB seems to work for my case, but when I try to put a Lambda function in between, I receive an error that says "Can't resolve value (/issueNewMasterCard/masterCards) : type mismatch error, expected type LIST".
Looking at the AppSync CloudWatch response mapping output, I get this:
"context": {
"arguments": {
"userId": "18e946df-d3de-49a8-98b3-8b6d74dfd652"
},
"result": {
"Item": {
"masterCards": {
"L": [
{
"M": {
"cardId": {
"S": "95d67f80-b486-11e8-ba85-c3623f6847af"
},
"cardImage": {
"S": "https://s3.eu-central-1.amazonaws.com/logo.png"
},
"cardWallet": {
"S": "0xFDB17d12057b6Fe8c8c434653456435634565"
},...............
Here is how I configured my response mapping template:
$utils.toJson($context.result.Item)
I'm doing this mutation:
mutation IssueNewMasterCard {
  issueNewMasterCard(userId: "18e946df-d3de-49a8-98b3-8b6d74dfd652") {
    masterCards {
      cardId
    }
  }
}
and this is my schema:
type User {
  userId: ID!
  masterCards: [MasterCard]
}
type MasterCard {
  cardId: String
}
type Mutation {
  issueNewMasterCard(userId: ID!): User
}
The Lambda function:
// (Assumed setup: the question omits how the DynamoDB client is created.)
const AWS = require('aws-sdk');
const dynamoDB = new AWS.DynamoDB();

exports.handler = (event, context, callback) => {
  const userId = event.arguments.userId;
  const userParam = {
    Key: {
      "userId": { S: userId }
    },
    TableName: "FidelityCardsUsers"
  };
  dynamoDB.getItem(userParam, function(err, data) {
    if (err) {
      console.log('error from DynamoDB: ', err);
      callback(err);
    } else {
      console.log('mastercards: ', JSON.stringify(data));
      callback(null, data);
    }
  });
};
I think the problem is that the getItem used by the DynamoDB data source is not the same as the DynamoDB.getItem function in the aws-sdk.
Specifically, it seems like the data source version returns an already unmarshalled response (that is, instead of something: { L: [ list of things ] } it just returns something: [ list of things ]).
This is important, because it means that $utils.toJson($context.result.Item) in your current setup is returning { masterCards: { L: [ ..., which is why you are seeing the type error: masterCards in this case is an object with a key L, rather than an array/list.
To solve this in the resolver, you can use the $util.dynamodb.toDynamoDBJson(Object) macro (https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference.html#dynamodb-helpers-in-util-dynamodb). i.e. your resolver should be:
$util.dynamodb.toDynamoDBJson($context.result.Item)
Alternatively you might want to look at the AWS.DynamoDB.DocumentClient class (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html). This includes versions of getItem, etc. that automatically marshal and unmarshall the proprietary DynamoDB typing back into native JSON. (Frankly I find this much nicer to work with and use it all the time).
In that case you can keep your old resolver, because you'll be returning an object where masterCards is just a JSON array.
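For illustration, here is a minimal sketch (not from the original post) of how the question's handler might look with DocumentClient, assuming the same FidelityCardsUsers table and userId argument as above; because get() returns plain values, the existing $utils.toJson($context.result.Item) template should keep working:
const AWS = require('aws-sdk');
// DocumentClient marshals/unmarshals the DynamoDB attribute-value typing automatically
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
  const params = {
    TableName: "FidelityCardsUsers",          // assumed, taken from the question
    Key: { userId: event.arguments.userId }   // plain value, no { S: ... } wrapper
  };
  docClient.get(params, function(err, data) {
    if (err) {
      callback(err);
    } else {
      // data.Item.masterCards comes back as a native JSON array, not { L: [...] },
      // so the resolver can keep reading $context.result.Item as before
      callback(null, data);
    }
  });
};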

Get row from AWS DynamoDB table with Cognito authentication

I have an AWS DynamoDB table where I store information for AWS Cognito users. I created the table to be private so that only the owner of a row in the table can read/write the data (based on Cognito authentication). I need to get the data for the user through a Lambda function. I created the IAM role for the function in this way:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:XX-XXXX-X:XXXXXXX:table/tablename"
      ],
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": [
            "${cognito-identity.amazonaws.com:sub}"
          ]
        }
      }
    }
  ]
}
In the Lambda function (Node.js) I need to get the information stored for the user, so I call:
let ddb = new AWS.DynamoDB({ apiVersion: 'latest' });
var params = {
  TableName: tablename,
  Key: {
    'id': { S: event.queryStringParameters.user_id }
  }
};
ddb.getItem(params, function(err, data) {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data);
  }
});
I get the error:
Error { AccessDeniedException: User: arn:aws:sts::xxxxxx:assumed-role/lambda_dynamo/getPmtDetails is not authorized to perform: dynamodb:GetItem on resource
How can I call getItem with the Cognito id in order to retrieve the row that belongs to the user?
