get row from AWS dynamoDB table with cognito authentication - aws-lambda

I have an AWS DynamoDB table where I store information for AWS Cognito users. I created the table to be private, so that only the owner of a row can read/write its data (based on Cognito authentication). I need to get the data for the user through a Lambda function. I created the IAM role for the function this way:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:XX-XXXX-X:XXXXXXX:table/tablename"
      ],
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": [
            "${cognito-identity.amazonaws.com:sub}"
          ]
        }
      }
    }
  ]
}
In the Lambda function (Node.js) I need to get the information stored for the user, so I call:
let ddb = new AWS.DynamoDB({ apiVersion: 'latest' });
var params = {
  TableName: tablename,
  Key: {
    'id': { S: event.queryStringParameters.user_id }
  }
};
ddb.getItem(params, function(err, data) {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data);
  }
});
I get the error:
Error { AccessDeniedException: User: arn:aws:sts::xxxxxx:assumed-role/lambda_dynamo/getPmtDetails is not authorized to perform: dynamodb:GetItem on resource
How can I call getItem with the Cognito id in order to retrieve the row that belongs to the user?
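One possible direction, as a rough sketch: the ${cognito-identity.amazonaws.com:sub} policy variable only resolves for credentials federated through a Cognito identity pool, so a plain Lambda execution role will not satisfy that condition on its own. A common workaround is to read the caller's Cognito identity id from the API Gateway request context (event.requestContext.identity.cognitoIdentityId, assuming a Lambda proxy integration with AWS_IAM authorization) and use it as the partition key instead of a client-supplied user_id, enforcing the per-user check in code. Table and key names below are taken from the question; everything else is an assumption.

// Sketch only: assumes an API Gateway Lambda proxy integration with AWS_IAM auth,
// so the caller's Cognito identity id is present on the request context.
const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });

exports.handler = async (event) => {
  // Use the caller's Cognito identity id as the key rather than a query parameter.
  const identityId = event.requestContext.identity.cognitoIdentityId;

  const params = {
    TableName: 'tablename',            // table name from the question
    Key: { 'id': { S: identityId } }   // 'id' is the partition key from the question
  };

  const data = await ddb.getItem(params).promise();
  return {
    statusCode: 200,
    body: JSON.stringify(data.Item || {})
  };
};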

Related

UnknownOperationException: GraphQl Appsync with AWS API Gateway

I have integrated API Gateway as a proxy to the AWS AppSync data plane via AWS CDK. I am trying to test the connection between API Gateway and AWS AppSync, but I get an UnknownOperationException when calling the AppSync endpoint via the API Gateway.
Below is the code snippet:
const api = new appsync.GraphqlApi(this, 'UserApi', {
  name: 'user-appsync-api',
  schema: appsync.Schema.fromAsset('lib/graphql/schema.graphql'),
  authorizationConfig: {
    defaultAuthorization: {
      authorizationType: appsync.AuthorizationType.API_KEY,
      apiKeyConfig: {
        expires: cdk.Expiration.after(cdk.Duration.days(365))
      }
    },
  },
  xrayEnabled: true,
});

const createUserAPIGraphQl = apigateway.root.addResource('user-graphql');
createUserAPIGraphQl.addMethod("POST", new apigw.AwsIntegration({
  service: 'appsync-api',
  region: 'us-east-1',
  subdomain: 'adsdasdsadasdasd',
  integrationHttpMethod: 'POST',
  path: 'user-graphql',
  options: {
    passthroughBehavior: PassthroughBehavior.WHEN_NO_TEMPLATES,
    credentialsRole: ApiGatewayAppSyncRole,
    integrationResponses: [{
      statusCode: '200'
    }]
  },
}), {
  methodResponses: [
    {
      statusCode: '200',
      responseModels: {
        'application/json': Model.EMPTY_MODEL
      }
    },
  ]
});
RequestBody
{
"query": "query getNoteById { getNoteById(noteId: \"001\") { id }}"
}
Error:
{
  "errors": [
    {
      "errorType": "UnknownOperationException",
      "message": "Unknown Operation Request."
    }
  ]
}
API Gateway Logs:
Mon Jul 11 10:20:21 UTC 2022 : Endpoint response body before transformations:
{
"errors" : [ {
"errorType" : "UnknownOperationException",
"message" : "Unknown Operation Request."
} ]
}
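One thing worth checking, stated here as an assumption rather than a confirmed fix: the AppSync data plane serves GraphQL at the /graphql path of the appsync-api subdomain, so an integration path of 'user-graphql' would not map to any known operation. A sketch of the same AwsIntegration with only the path changed (all other values as in the question):

// Sketch only: identical to the integration above except for `path`.
createUserAPIGraphQl.addMethod("POST", new apigw.AwsIntegration({
  service: 'appsync-api',
  region: 'us-east-1',
  subdomain: 'adsdasdsadasdasd',       // AppSync API id placeholder from the question
  integrationHttpMethod: 'POST',
  path: 'graphql',                     // AppSync serves GraphQL at /graphql
  options: {
    passthroughBehavior: PassthroughBehavior.WHEN_NO_TEMPLATES,
    credentialsRole: ApiGatewayAppSyncRole,
    integrationResponses: [{ statusCode: '200' }]
  },
}), {
  methodResponses: [{
    statusCode: '200',
    responseModels: { 'application/json': Model.EMPTY_MODEL }
  }]
});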

How to solve "The IAM role configured on the integration or API Gateway doesn't have permissions to call the integration"

I have a Lambda function and an API Gateway v2. I am creating everything via Terraform as below.
resource "aws_lambda_function" "prod_options" {
description = "Production Lambda"
environment {
variables = var.prod_env
}
function_name = "prod-func"
handler = "index.handler"
layers = [
aws_lambda_layer_version.node_modules_prod.arn
]
memory_size = 1024
package_type = "Zip"
reserved_concurrent_executions = -1
role = aws_iam_role.lambda_exec.arn
runtime = "nodejs12.x"
s3_bucket = aws_s3_bucket.lambda_bucket_prod.id
s3_key = aws_s3_bucket_object.lambda_node_modules_prod.key
source_code_hash = data.archive_file.lambda_node_modules_prod.output_base64sha256
timeout = 900
tracing_config {
mode = "PassThrough"
}
}
and the role:
resource "aws_iam_role_policy_attachment" "lambda_policy" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role" "lambda_exec" {
name = "api_gateway_role"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com",
"lambda.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
})
}
and then the permissions:
resource "aws_lambda_permission" "prod_api_gtw" {
statement_id = "AllowExecutionFromApiGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.prod_options.function_name
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.gateway_prod.execution_arn}/*/*"
}
After I deploy and try to invoke the URL, I get the following error:
"integrationErrorMessage": "The IAM role configured on the integration or API Gateway doesn't have permissions to call the integration. Check the permissions and try again.",
I've been stuck with this for a while now. How can I solve this error?
You may have to create a Lambda permission to allow execution from an API Gateway resource:
resource "aws_lambda_permission" "apigw_lambda" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.layout_editor_prod_options.function_name
principal = "apigateway.amazonaws.com"
# The /*/*/* part allows invocation from any stage, method and resource path
# within API Gateway REST API.
source_arn = "${aws_api_gateway_rest_api.rest_api.execution_arn}/*/*/*"
}
Also, for the lambda_exec role, you don't need the apigateway.amazonaws.com principal. The reason is that the execution role applies to the function and allows it to interact with other AWS services. On the other hand, this won't allow anything for API Gateway; for that we need a Lambda permission.
resource "aws_iam_role" "lambda_exec" {
name = "lambda_exec_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
I would also add a policy to the Lambda execution role so it can log to CloudWatch. This might be useful for further debugging:
resource "aws_iam_policy" "lambda_logging" {
name = "lambda_logging"
path = "/"
description = "IAM policy for logging from a lambda"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*",
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "lambda_logs" {
role = aws_iam_role.lambda_exec.name
policy_arn = aws_iam_policy.lambda_logging.arn
}

AWS - Configuring Lambda Destinations with SNS

I'm trying to configure an AWS Lambda function to pipe its output into an SNS notification, but it doesn't seem to work. The function executes successfully in the Lambda console and I can see the output is correct, but SNS never seems to get notified or publish anything. I'm working with Terraform to stand up my infra; here is the Terraform code I'm using, maybe someone can help me out:
resource "aws_lambda_function" "lambda_apigateway_to_sns_function" {
filename = "../node/lambda.zip"
function_name = "LambdaPublishToSns"
handler = "index.snsHandler"
role = aws_iam_role.lambda_apigateway_to_sns_execution_role.arn
runtime = "nodejs12.x"
}
resource "aws_iam_role" "lambda_apigateway_to_sns_execution_role" {
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "apigateway_to_sns_sns_full_access" {
policy_arn = "arn:aws:iam::aws:policy/AmazonSNSFullAccess"
role = aws_iam_role.lambda_apigateway_to_sns_execution_role.name
}
resource "aws_lambda_function_event_invoke_config" "example" {
function_name = aws_lambda_function.lambda_apigateway_to_sns_function.arn
destination_config {
on_success {
destination = aws_sns_topic.sns_topic.arn
}
on_failure {
destination = aws_sns_topic.sns_topic.arn
}
}
}
And here's my Lambda function code (in NodeJS):
exports.snsHandler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  callback(null, {
    statusCode: 200,
    body: event.body + " apigateway"
  });
};
(the function is supposed to take input from API Gateway, append "apigateway" to the end of whatever is in the body of the request, and pass the message on; I've tested the integration with API Gateway and that integration works perfectly)
Thanks!
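One detail that may matter here: Lambda destinations are only evaluated for asynchronous invocations, and both the Lambda console test and an API Gateway proxy integration invoke the function synchronously. A rough sketch of an asynchronous test invocation with the Node.js SDK (function name taken from the Terraform above, payload made up):

// Sketch: invoke the function asynchronously (InvocationType 'Event') so that
// the configured on_success / on_failure destinations are evaluated.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

lambda.invoke({
  FunctionName: 'LambdaPublishToSns',        // from aws_lambda_function above
  InvocationType: 'Event',                   // asynchronous invocation
  Payload: JSON.stringify({ body: 'hello' }) // made-up test payload
}, (err, data) => {
  if (err) console.error('Invoke failed', err);
  else console.log('Accepted with status', data.StatusCode); // 202 for async
});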

Apollo Server running as a gateway is hiding remote error if data is not null

I'm running apollo-server-express as a gateway application, setting up a few underlying GraphQL applications with makeRemoteExecutableSchema and an apollo-link-http.
Usually every call just works. If an error is part of the response and data is null, it also works. But if data contains actual data and errors contains an error, the data is passed through while errors is empty.
const headerSet = setContext((request, previousContext) => {
  return setHeaders(previousContext);
});

const errorLink = onError(({ response, forward, operation, graphQLErrors, networkError }) => {
  if (graphQLErrors) {
    graphQLErrors.map((err) => {
      Object.setPrototypeOf(err, Error.prototype);
    });
  }
  if (networkError) {
    logger.error(networkError, 'A wild network error appeared');
  }
});

const httpLink = new HttpLink({
  uri: remoteURL,
  fetch
});

const link = headerSet.concat(errorLink).concat(httpLink);
Example A "Working Example":
Query
{
checkName(name: "namethatistoolooooooong")
}
Query Response
{
  "errors": [
    {
      "message": "name is too long, the max length is 20 characters",
      "path": [
        "checkName"
      ],
      "extensions": {
        "code": "INPUT_VALIDATION_ERROR"
      }
    }
  ],
  "data": null
}
Example B "Errors hidden":
Query
mutation inviteByEmail {
  invite(email: "invalid!!!~~~test!--#example.com") {
    status
  }
}
Response from remote service (httpLink)
response.errors and graphQLErrors in the onError method also contain the error:
{
  "errors": [
    {
      "message": "Email not valid",
      "path": [
        "invite"
      ],
      "extensions": {
        "code": "INPUT_VALIDATION_ERROR"
      }
    }
  ],
  "data": {
    "invite": {
      "status": null
    }
  }
}
Response
{
  "data": {
    "invite": {
      "status": null
    }
  }
}
According to the GraphQL spec, I would have expected the errors object not to be hidden if it is part of the response:
https://graphql.github.io/graphql-spec/June2018/#sec-Errors
If the data entry in the response is present (including if it is the value null), the errors entry in the response may contain any errors that occurred during execution. If errors occurred during execution, it should contain those errors.

How to properly format data with AppSync and DynamoDB when Lambda is in between

Receiving data with AppSync directly from DynamoDB seems to work in my case, but when I try to put a Lambda function in between, I receive an error that says "Can't resolve value (/issueNewMasterCard/masterCards) : type mismatch error, expected type LIST".
Looking at the AppSync CloudWatch response mapping output, I get this:
"context": {
"arguments": {
"userId": "18e946df-d3de-49a8-98b3-8b6d74dfd652"
},
"result": {
"Item": {
"masterCards": {
"L": [
{
"M": {
"cardId": {
"S": "95d67f80-b486-11e8-ba85-c3623f6847af"
},
"cardImage": {
"S": "https://s3.eu-central-1.amazonaws.com/logo.png"
},
"cardWallet": {
"S": "0xFDB17d12057b6Fe8c8c434653456435634565"
},...............
here is how I configured my response mapping template:
$utils.toJson($context.result.Item)
I'm doing this mutation:
mutation IssueNewMasterCard {
  issueNewMasterCard(userId: "18e946df-d3de-49a8-98b3-8b6d74dfd652") {
    masterCards {
      cardId
    }
  }
}
and this is my schema:
type User {
  userId: ID!
  masterCards: [MasterCard]
}

type MasterCard {
  cardId: String
}

type Mutation {
  issueNewMasterCard(userId: ID!): User
}
The Lambda function:
// Assumed client setup (the original snippet references dynamoDB without declaring it)
const AWS = require('aws-sdk');
const dynamoDB = new AWS.DynamoDB();

exports.handler = (event, context, callback) => {
  const userId = event.arguments.userId;
  const userParam = {
    Key: {
      "userId": { S: userId }
    },
    TableName: "FidelityCardsUsers"
  };
  dynamoDB.getItem(userParam, function(err, data) {
    if (err) {
      console.log('error from DynamoDB: ', err);
      callback(err);
    } else {
      console.log('mastercards: ', JSON.stringify(data));
      callback(null, data);
    }
  });
};
I think the problem is that the getItem used by the DynamoDB data source is not the same as the DynamoDB.getItem function in the aws-sdk.
Specifically, it seems like the data source version returns an already unmarshalled response (that is, instead of something: { L: [ list of things ] } it just returns something: [ list of things ]).
This is important, because it means that $utils.toJson($context.result.Item) in your current setup is returning { masterCards: { L: [ ..., which is why you are seeing the type error: masterCards in this case is an object with a key L, rather than an array/list.
To solve this in the resolver, you can use the $util.dynamodb.toDynamoDBJson(Object) macro (https://docs.aws.amazon.com/appsync/latest/devguide/resolver-util-reference.html#dynamodb-helpers-in-util-dynamodb). i.e. your resolver should be:
$util.dynamodb.toDynamoDBJson($context.result.Item)
Alternatively you might want to look at the AWS.DynamoDB.DocumentClient class (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/DocumentClient.html). This includes versions of getItem, etc. that automatically marshal and unmarshall the proprietary DynamoDB typing back into native JSON. (Frankly I find this much nicer to work with and use it all the time).
In that case you can keep your old resolver, because you'll be returning an object where masterCards is just a JSON array.
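As an illustration of that alternative, a minimal sketch of the Lambda from the question rewritten against AWS.DynamoDB.DocumentClient; the item then comes back as plain JSON (masterCards is a real array), so the existing $utils.toJson($context.result.Item) response mapping template can stay as it is:

// Sketch: DocumentClient.get returns the item already converted to plain JSON,
// so there are no { L: ... } / { M: ... } / { S: ... } wrappers in the result.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context, callback) => {
  const userParam = {
    TableName: 'FidelityCardsUsers',
    Key: { userId: event.arguments.userId } // no attribute-value wrapper needed
  };

  docClient.get(userParam, (err, data) => {
    if (err) {
      callback(err);
    } else {
      callback(null, data); // data.Item is plain JSON, matching the existing resolver
    }
  });
};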
