lambda create-event-source-mapping Member must not be null Exception - aws-lambda

I am trying to attach a Kinesis stream event to a Lambda function using a CLI command, but I get this exception:
An error occurred (ValidationException) when calling the CreateEventSourceMapping operation: 1 validation error detected: Value null at 'startingPosition' failed to satisfy constraint: Member must not be null.
My command is:
aws lambda create-event-source-mapping --event-source-arn arn:aws:kinesis:us-west-2:xxxxxx:stream/lambda-stream --function-name helloworld-divyanayan_lambda --batch-size 100

The error occurs because --starting-position is required when the event source is a stream. If Lambda is your consumer for a Kinesis stream and you are continuously processing stream data, use "LATEST" as the starting position.
TRIM_HORIZON will read from the oldest untrimmed record in the shard.
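Applied to the command in the question, the fix is to pass the required --starting-position flag (account ID and names as in the question; this can only run against a real AWS account with valid credentials):

```shell
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kinesis:us-west-2:xxxxxx:stream/lambda-stream \
  --function-name helloworld-divyanayan_lambda \
  --batch-size 100 \
  --starting-position LATEST
```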

I got this error for a DynamoDB stream mapped to a Lambda in CloudFormation. As the other answer suggests, the problem is the missing starting position on the stream.
The CLI does indeed have a --starting-position flag, and the CloudFormation template has an equivalent StartingPosition property.
So, in my case, it was fixed by adding this line to my CFN template:
Type: AWS::Lambda::EventSourceMapping
Properties:
  ...
  StartingPosition: 'LATEST'


What is the "instances" list in AWS Lambda? Cannot call Sagemaker Inference Endpoint From AWS Lambda

I am very new to AWS so please bear with my question.
I trained a model on Sagemaker Notebook, deployed it as an endpoint, and can see my predictions by hitting the endpoint from within the Sagemaker notebook. The code to create and return a prediction from my endpoint within my notebook is:
data = json.dumps({"input1": "hello world"})
model.predict(data)
{'predictions': [[0.15274021, 0.225715473, 0.460293412, 0.127488852, 0.0337620787]]}
Now I want to make my endpoint usable by my other applications, and I saw that I could do this with AWS Lambda and API Gateway. In Lambda, I have successfully connected my ENDPOINT_NAME and am actually hitting my endpoint, but I am struggling to get a 200 response due to request payload issues.
import json, boto3
runtime = boto3.client('sagemaker-runtime')

def lambda_handler(utterance, context):
    payload = json.dumps({"input1": utterance})
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME, ContentType='application/json', Body=payload)
    return json.loads(response['Body'].read())
I pasted most of the error message below
{
"errorMessage": "An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message \"{\n \"error\": \"Failed to process element: 0 key: input1 of 'instances' list. Error: INVALID_ARGUMENT: JSON object: does not have named input: input1\"\n}\".",
}
What is this instances list? What should my input format be for lambda? Any help would be much appreciated
Are you using the SageMaker TensorFlow container without an inference.py?
Have you tried:
payload = json.dumps({"instances": utterance})
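The "instances" list comes from the TensorFlow Serving REST predict API, which the stock SageMaker TensorFlow container uses when no custom inference.py is supplied: inputs must be wrapped in a top-level "instances" key, one element per example. A minimal sketch of building such a payload (the input value is just a placeholder):

```python
import json

def build_tf_serving_payload(utterance):
    """Wrap the raw input in the 'instances' list that the
    TensorFlow Serving REST predict API expects."""
    # Each element of "instances" is one input example.
    return json.dumps({"instances": [utterance]})

payload = build_tf_serving_payload("hello world")
print(payload)  # {"instances": ["hello world"]}
```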

How is it that the documented intent states in Lex are different from the actual available values? What is the difference between each of these states?

I am currently working on integrating AWS Lex with a Lambda function written in TypeScript, and I am facing a situation in which I need help.
According to the AWS documentation for Lex V2, the following values are available for an intent state:
Failed
Fulfilled
FulfillmentInProgress
InProgress
ReadyForFulfillment
Waiting
However, when I used the 'Waiting' value, the following error message showed up:
Invalid Lambda Response: Received invalid response from Lambda: Can not deserialize value of type Intent$IntentState from String "Waiting": value not one of declared Enum instance names: [ReadyForFulfillment, InProgress, Failed, Fulfilled]
Given this, I need help to:
Understand how it is possible to have documented values that are not recognized.
Understand the difference between each of these values (note: not all of the accepted values are explained in the documentation).
After reaching out to AWS support, here is the answer:
LexV2 doesn't accept "FulfillmentInProgress" or "Waiting" as valid intent state.
The difference between each of the valid values:
ReadyForFulfillment - the bot is ready for fulfillment; passing this state via the Lambda output will make the bot jump to the fulfillment state
InProgress - the default state
Fulfilled - the bot will jump to the closed state and will play back both the fulfillment success message and the closing response
Failed - marks the intent as failed, which will result in the bot playing the fulfillment failure message
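The accepted set can be enforced in code before returning to Lex. A minimal sketch of a Lex V2 Lambda response builder, assuming the standard Lex V2 response shape with sessionState.intent.state (intent name and dialog action type here are illustrative):

```python
# States Lex V2 actually accepts back from a Lambda, per the error above.
VALID_INTENT_STATES = {"ReadyForFulfillment", "InProgress", "Failed", "Fulfilled"}

def build_lex_response(intent_name, state):
    """Build a minimal Lex V2 Lambda response, rejecting states
    (like 'Waiting') that the runtime will not deserialize."""
    if state not in VALID_INTENT_STATES:
        raise ValueError(f"Lex V2 will reject intent state: {state}")
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent_name, "state": state},
        }
    }
```

With this guard, build_lex_response("OrderPizza", "Fulfilled") succeeds, while "Waiting" fails fast in your own code instead of producing the opaque deserialization error from Lex.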

AWS Lambda - different dead letter queue based on Lambda alias

I have two SQS queues, one for the test stage and one for the prod stage, and a single Lambda function with test and prod aliases; each alias is triggered by an SQS event from the corresponding queue.
I was wondering if there's a way to specify a DLQ for each alias of the Lambda (I'm using a SAM template to define the Lambda function), or I would like to understand the best practice for handling such a requirement.
It's straightforward to add your DLQs, and the test/prod alias setup you have does not complicate things at all.
Add a RedrivePolicy to each of your two queue definitions. Here's the prod half of the code:
# template.yaml
# new: define a new queue to be the prod DLQ
MyProdDLQ:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: this-queue-captures-errors-sent-from-the-prod-queue

MyProdQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: this-queue-feeds-the-prod-alias-lambda
    VisibilityTimeout: 60
    # new: assign the DLQ
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt MyProdDLQ.Arn
      maxReceiveCount: 5
Note: You might be tempted to define the DLQ on the lambda function itself. Confusingly, the DLQ on the lambda function is for asynchronous (event-driven) invocations, not SQS sources. The docs warn about this, but it is easy to miss.
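For contrast, a function-level DLQ in a SAM template looks like the sketch below; resource and handler names here are illustrative. This form only catches failed asynchronous invocations and is ignored for SQS event sources, which is exactly the trap the note above describes:

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    # Applies only to async (event) invocations, NOT to SQS-triggered ones:
    DeadLetterQueue:
      Type: SQS
      TargetArn: !GetAtt MyAsyncDLQ.Arn
```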

AWS Lambda SQS Trigger

I have added a new AWS Lambda in Visual Studio. It generated a function that receives an SQS message.
The template that was generated is similar to the one here:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html
I can't for the life of me find anywhere that tells me where to pass the queue name, so that my Lambda triggers after a message has been put into that queue. I imagine it's a property in that JSON file? How about a schedule? Does anyone know the name of the property to add a schedule?
It looks like you are using CloudFormation to create the Lambda function.
To trigger that Lambda function from SQS, you should create an AWS::Lambda::EventSourceMapping. For example, in YAML:
Type: AWS::Lambda::EventSourceMapping
Properties:
  BatchSize: 1
  Enabled: true
  EventSourceArn: arn:aws:sqs:us-east-2:444455556666:queue1
  FunctionName: myfunc
Increase the BatchSize if you want SQS messages batched.
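As for the schedule part of the question: an SQS-triggered function isn't scheduled, but you can also invoke the same function on a timer with an AWS::Events::Rule plus an invoke permission. A sketch, assuming your function resource is named MyFunction:

```yaml
MySchedule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(5 minutes)
    Targets:
      - Arn: !GetAtt MyFunction.Arn
        Id: MyFunctionScheduleTarget

MySchedulePermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref MyFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt MySchedule.Arn
```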

How to send error to DLQ if Lambda function throws error (not timedOut)?

I have a Lambda function that I expect to return some result, so if I send wrong parameters it fails, for example, in the middle of the function.
Is there a way, if any error occurs, to have it sent to my DLQ, print the error in the message, then retry, then delete the message?
example error from CloudWatch:
TypeError: commandArray is not iterable
AWS Lambda has a retry mechanism for asynchronous invocations: if Lambda is unable to fully process the event, it will automatically retry the invocation twice, with delays between retries.
After the retries, AWS Lambda will send the error message details to the specified Amazon SQS queue or Amazon SNS topic.
https://docs.aws.amazon.com/lambda/latest/dg/retries-on-errors.html
The error message does not contain the name of the failed Lambda function, regardless of the cause (exception or timeout). To add the Lambda function name to the error message, there are two approaches.
Solution - 1
The Lambda function name can be looked up via the S3 API, since the S3 bucket details are available in the event object included in the error message.
Solution - 2
Convention: the SNS topic name contains the Lambda function name
Configure the SNS topic as the Lambda function's DLQ
Add a Lambda function to the SNS topic's subscriber list
The subscribed Lambda function can get the failed function's name from the SNS topic name and can add any custom details to the received error message
Lambda has the facility to retry and pump failures into a Dead Letter Queue.
Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you're unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic to analyze the failure.
You can then have a Lambda Function on the SNS topic or SQS queue that can respond to the error and react in the way you want it to.
For more information, see: https://docs.aws.amazon.com/lambda/latest/dg/dlq.html
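When the DLQ target is an SNS topic, Lambda delivers the original event as the SNS Message and attaches failure details (RequestID, ErrorCode, ErrorMessage) as message attributes. A sketch of a subscriber that pulls those out; the sample event below is hand-built for illustration, not captured from a live account:

```python
import json

def dlq_handler(event, context=None):
    """Summarize failed-invocation records delivered to an SNS-backed DLQ.
    Lambda attaches RequestID, ErrorCode and ErrorMessage as SNS message
    attributes; the original event is in the SNS Message body."""
    failures = []
    for record in event["Records"]:
        sns = record["Sns"]
        attrs = sns.get("MessageAttributes", {})
        failures.append({
            "error": attrs.get("ErrorMessage", {}).get("Value"),
            "request_id": attrs.get("RequestID", {}).get("Value"),
            "original_event": json.loads(sns["Message"]),
        })
    return failures

# Hand-built sample of what an SNS-delivered DLQ record can look like:
sample_event = {
    "Records": [{
        "Sns": {
            "Message": json.dumps({"foo": "bar"}),
            "MessageAttributes": {
                "RequestID": {"Type": "String", "Value": "abc-123"},
                "ErrorCode": {"Type": "String", "Value": "200"},
                "ErrorMessage": {"Type": "String",
                                 "Value": "TypeError: commandArray is not iterable"},
            },
        }
    }]
}
print(dlq_handler(sample_event)[0]["error"])
# TypeError: commandArray is not iterable
```

From here the subscriber can log the error, alert, or re-drive the original event as needed.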
