Can anyone point me to or show me an example of how to create an AWS Lambda trigger with Terraform?
In the AWS console, after clicking a function name and selecting the Configuration tab, you can create triggers, e.g. an SNS trigger.
For SNS you need to create an SNS topic subscription with the Lambda function as the endpoint; note that you also need an aws_lambda_permission resource that allows the sns.amazonaws.com principal to invoke the function.
resource "aws_sns_topic_subscription" "user_updates_lampda_target" {
topic_arn = “sns topic arn”
protocol = "lambda"
endpoint = “lambda arn here”
}
To allow Lambda functions to get events from Kinesis, DynamoDB, and SQS, you can use an event source mapping:
resource "aws_lambda_event_source_mapping" "example" {
event_source_arn = aws_dynamodb_table.example.stream_arn
function_name = aws_lambda_function.example.arn
starting_position = "LATEST"
}
I'd like to attach an IoT policy to the Cognito identities given to the federated users of my app. I'm trying to do this with a Lambda function in the Post Confirmation trigger of my user pool. Here's my function so far:
const AWS = require('aws-sdk');
const iot = new AWS.Iot();

exports.handler = async function(event, context) {
    const policyName = 'arn:aws:iam::XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';
    const target = context.identity.cognitoIdentityId;
    await iot.attachPolicy({ target, policyName }).promise();
    const response = {
        statusCode: 200,
        body: JSON.stringify('Policy attached.'),
    };
    return response;
};
When this function runs I get an error:
"cannot read property 'cognitoidentityid' of undefined"
I get a similar error if I define the principal as
const principal = context.cognito_identity_id; //error: "Missing required key 'target' in params"
According to the IoT docs, "The context object in your Lambda function contains a value for context.cognito_identity_id when you call the function with AWS credentials that you obtain through Amazon Cognito Identity pools." Can anyone tell me how to do that?
I should add that I would like to attach this policy for both desktop and mobile users. The Lambda docs imply that the identity property of the context object is provided for mobile apps only. If that is true, is there a different way to attach the IoT policy to all Cognito identities, mobile and desktop alike?
Thanks
To sum up, you will not be able to get the identity id in Cognito's Post Confirmation trigger.
To work around it, the client can invoke a separate Lambda function (once the user is confirmed), and in that Lambda you can attach the policy, because there you will get the identity id.
The other alternative, which I don't prefer, is that the client itself attaches the policy after user confirmation.
Attach Policy API - https://docs.aws.amazon.com/iot/latest/apireference/API_AttachPolicy.html
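A minimal sketch of that separate Lambda (Node.js), assuming it is invoked with credentials obtained through a Cognito identity pool so that context.identity.cognitoIdentityId is populated; the policy name is a hypothetical placeholder:
const AWS = require('aws-sdk');
const iot = new AWS.Iot();

exports.handler = async function(event, context) {
    // AttachPolicy takes the name of an existing IoT policy (not an IAM ARN).
    const policyName = 'MyIotPolicy'; // hypothetical policy name
    // Populated when the function is called with credentials from a Cognito identity pool.
    const target = context.identity.cognitoIdentityId;

    await iot.attachPolicy({ policyName, target }).promise();

    return {
        statusCode: 200,
        body: JSON.stringify('Policy attached to ' + target),
    };
};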
I'm trying to set up a DLQ for a Kinesis stream.
I used SQS and set it as the Kinesis on-failure destination.
The Kinesis stream is attached to a Lambda that always throws an error, so the event goes straight to the SQS DLQ.
I can see the events in SQS, but the payload of the event is missing (the JSON I send as part of the event). In the Lambda, if I print the event before throwing the exception, I can see the base64-encoded data, but not in my DLQ.
Is there a way to send the event data to the DLQ as well? I want to be able to examine the cause of the error and put the event back into Kinesis after I've fixed the issue in the Lambda.
https://docs.aws.amazon.com/lambda/latest/dg//with-kinesis.html#services-kinesis-errors
The actual records aren't included, so you must process this record and retrieve them from the stream before they expire and are lost.
According to the above, the event payload won't be sent to the DLQ, so the "missing event data" is expected here.
Therefore, in order to retrieve the actual records, you might want to try something like:
1) assuming we have the following Kinesis batch info in the DLQ message
{
  "KinesisBatchInfo": {
    "shardId": "shardId-000000000001",
    "startSequenceNumber": "49601189658422359378836298521827638475320189012309704722",
    "endSequenceNumber": "49601189658422359378836298522902373528957594348623495186",
    "approximateArrivalOfFirstRecord": "2019-11-14T00:38:04.835Z",
    "approximateArrivalOfLastRecord": "2019-11-14T00:38:05.580Z",
    "batchSize": 500,
    "streamArn": "arn:aws:kinesis:us-east-2:123456789012:stream/mystream"
  }
}
2) we can get the records back by doing something like
import AWS from 'aws-sdk';

const kinesis = new AWS.Kinesis();

// Values taken from the KinesisBatchInfo in the DLQ message above.
const ShardId = 'shardId-000000000001';
const ShardIteratorType = 'AT_SEQUENCE_NUMBER';
const StreamName = 'my-awesome-stream';
const StartingSequenceNumber =
    '49601189658422359378836298521827638475320189012309704722';

// Get an iterator positioned at the first failed record
// (top-level await assumes an ESM/async context).
const { ShardIterator } = await kinesis
    .getShardIterator({
        ShardId,
        ShardIteratorType,
        StreamName,
        StartingSequenceNumber,
    })
    .promise();

// Read the records (their Data field is base64 encoded) from the stream.
const records = await kinesis
    .getRecords({
        ShardIterator,
    })
    .promise();

console.log('Records', records);
NOTE: don't forget to make sure your process has permission for 1) kinesis:GetShardIterator and 2) kinesis:GetRecords.
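To tie it together, here is a rough sketch of reading the failure record from the SQS DLQ and pulling the batch info out of the message body; the queue URL is a hypothetical placeholder and the body layout follows the KinesisBatchInfo example above:
import AWS from 'aws-sdk';

const sqs = new AWS.SQS();

// Hypothetical DLQ queue URL; replace with your own.
const QueueUrl = 'https://sqs.us-east-2.amazonaws.com/123456789012/my-kinesis-dlq';

const { Messages } = await sqs
    .receiveMessage({ QueueUrl, MaxNumberOfMessages: 10 })
    .promise();

for (const message of Messages || []) {
    // The message body is the failure record, which includes KinesisBatchInfo.
    const { KinesisBatchInfo } = JSON.parse(message.Body);
    console.log('Batch info', KinesisBatchInfo);
    // KinesisBatchInfo.shardId and KinesisBatchInfo.startSequenceNumber can be fed
    // into getShardIterator/getRecords above to recover the original payloads.
}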
Hope that helps!
I've connected my blob storage account to Event Grid, via an Event Hub subscription, and can see the events from uploaded blobs.
But I was hoping to be able to pass some metadata with each received event, so I can relate the event back to a foreign key (customer identifier) without having to do extra work on each event.
Is this possible? I couldn't see anything in the API docs regarding this.
Based on the Azure Event Grid event schema for Blob storage, there are no metadata properties in the Blob storage event data.
Note, there is only one specific case of passing some metadata from the AEG subscription to its subscriber: a query string on the webhook event handler endpoint (e.g. an HttpTrigger function).
A solution for your scenario is using an EventGridTrigger function (subscriber) with an output binding to the Event Hub.
The following example shows a lightweight implementation of the event message mediator using the EventGridTrigger function:
[FunctionName("Function1")]
[return: EventHub("%myEventHub%", Connection = "AzureEventHubConnectionString")]
public async Task<JObject> Run([EventGridTrigger]JObject ed, ILogger log)
{
// original event message
log.LogInformation(ed.ToString());
// place for event data enrichment
var metadata = new { metadata = "ABCD", abcd = 12345 };
// enrich data object
ed["data"]["url"]?.Parent.AddAfterSelf(new JProperty("subscription", JObject.FromObject(metadata)));
// show after mediation
log.LogWarning(ed.ToString());
// forward to the Event Hub
return await Task.FromResult(ed);
}
The enriched event can then be seen in the log output from the Event Hub.
I've created a Lambda that retrieves user attributes (username, email, name, etc.). However, I wonder how it's possible to get the user attributes without explicitly hardcoding the sub value. Do I need to decode the Cognito JWT token in the frontend and use it in the Lambda to determine the correct user and retrieve the related attributes?
Here is my Lambda in Node.js:
const AWS = require('aws-sdk');

exports.handler = function(event, context) {
    var cog = new AWS.CognitoIdentityServiceProvider();
    var filter = "sub = \"" + "UserSUB" + "\"";
    var req = {
        "Filter": filter,
        "UserPoolId": 'POOL here',
    };
    cog.listUsers(req, function(err, data) {
        if (err) {
            console.log(err);
        } else {
            if (data.Users.length === 1) {
                var user = data.Users[0];
                var attributes = data.Users[0].Attributes;
                console.log(JSON.stringify(attributes));
            } else {
                console.log("error.");
            }
        }
    });
}
I think the proper way to do this depends on whether you want to use API Gateway or not (it will make things simpler IMHO).
If you don't want to use APIG, and you are calling the lambda directly using temporary credentials, then you should pass the entire ID token and have the lambda do all of the validation and decoding (probably using a third party library for JWTs). It's not safe to do it in the frontend as that would mean you have a lambda that blindly accepts the attributes as facts from the frontend, and a malicious user could change them if they wanted.
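As a sketch of that first option, assuming the Lambda verifies the ID token with a JWT library such as the aws-jwt-verify package (the pool id, client id, and the event.idToken field are placeholders/assumptions):
const { CognitoJwtVerifier } = require('aws-jwt-verify');

// Hypothetical pool and client ids; replace with your own.
const verifier = CognitoJwtVerifier.create({
    userPoolId: 'us-east-1_example',
    tokenUse: 'id',
    clientId: 'exampleclientid',
});

exports.handler = async function(event) {
    // Assumes the client passes its raw ID token in the invocation payload.
    const payload = await verifier.verify(event.idToken);

    // The verified claims include sub, email, name, etc.
    return {
        statusCode: 200,
        body: JSON.stringify({ sub: payload.sub, email: payload.email }),
    };
};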
If you are using API Gateway to put lambdas behind an API then I would create a cognito authorizer based on the User Pool, create a resource/method and configure it to use the authorizer, and enable Use Lambda Proxy Integration for the Integration Request. All the token's claims enabled for the client will be passed through on event.requestContext.authorizer.claims so long as it's valid.
There are some AWS docs here, although this does not use proxy integration. If you use proxy integration then you can skip 6b as the APIG will set the values for you. This is described in an answer here.
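For the API Gateway route, a minimal sketch of the handler reading the verified claims from the proxy integration event instead of hardcoding the sub (the USER_POOL_ID environment variable is a hypothetical placeholder):
const AWS = require('aws-sdk');
const cog = new AWS.CognitoIdentityServiceProvider();

exports.handler = async function(event) {
    // Claims set by the Cognito authorizer when Lambda proxy integration is enabled.
    const claims = event.requestContext.authorizer.claims;
    const sub = claims.sub;

    // Standard attributes (email, name, ...) are usually already in the claims,
    // but you can still look the user up if you need everything:
    const data = await cog.listUsers({
        UserPoolId: process.env.USER_POOL_ID, // hypothetical environment variable
        Filter: 'sub = "' + sub + '"',
    }).promise();

    const attributes = data.Users.length === 1 ? data.Users[0].Attributes : null;

    return {
        statusCode: 200,
        body: JSON.stringify(attributes),
    };
};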
How can I add a new trigger for an existing AWS Lambda function using the Java API?
I would like to add a CloudWatch Events - Schedule trigger.
It looks like I should use AmazonCloudWatchEventsClient.
How can I set the credentials for the client?
Any examples will be appreciated.
Thanks.
It is possible to add event sources via the AWS SDK. I faced the same issue; please see the Java code below as the solution. It grants CloudWatch Events permission to invoke the function; the schedule rule itself and its target can be created with AmazonCloudWatchEventsClient (putRule / putTargets).
AddPermissionRequest addPermissionRequest = new AddPermissionRequest();
addPermissionRequest.setStatementId("12345ff"); // any unique string will do
addPermissionRequest.withSourceArn(ruleArn);    // the CloudWatch rule's ARN
addPermissionRequest.setAction("lambda:InvokeFunction");
addPermissionRequest.setPrincipal("events.amazonaws.com");
addPermissionRequest.setFunctionName("name of your lambda function");

AWSLambdaAsyncClient lambdaClient = new AWSLambdaAsyncClient();
lambdaClient.withRegion(Regions.US_EAST_1); // region where your Lambda lives
lambdaClient.addPermission(addPermissionRequest);
Thanks, I needed it in Kotlin myself; the thing missing from the previous answer was the dependency:
compile 'com.amazonaws:aws-java-sdk-lambda:1.11.520'
code:
val addPermissionRequest = AddPermissionRequest()
addPermissionRequest.statementId = "12345ff" // any unique string will do
addPermissionRequest.withSourceArn(ruleArn)  // the CloudWatch rule's ARN
addPermissionRequest.action = "lambda:InvokeFunction"
addPermissionRequest.principal = "events.amazonaws.com"
addPermissionRequest.functionName = "name of your lambda function"

val lambdaClient = AWSLambdaAsyncClient.builder().build()
lambdaClient.addPermission(addPermissionRequest)