What is the "instances" list? Cannot call SageMaker Inference Endpoint from AWS Lambda

I am very new to AWS so please bear with my question.
I trained a model on Sagemaker Notebook, deployed it as an endpoint, and can see my predictions by hitting the endpoint from within the Sagemaker notebook. The code to create and return a prediction from my endpoint within my notebook is:
```
data = json.dumps({"input1": "hello world"})
model.predict(data)

{'predictions': [[0.15274021,
   0.225715473,
   0.460293412,
   0.127488852,
   0.0337620787]]}
```
Now I want to make my endpoint useable by my other applications, and saw that I could do this with AWS Lambda and API Gateway. In Lambda, I have successfully connected my ENDPOINT_NAME and am actually hitting my endpoint, but am struggling to get a 200 response due to request payload issues.
```
def lambda_handler(utterance, context):
    payload = json.dumps({"input1": utterance})
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='application/json',
                                       Body=payload)
```
I pasted most of the error message below:
```
{
  "errorMessage": "An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message \"{\n  \"error\": \"Failed to process element: 0 key: input1 of 'instances' list. Error: INVALID_ARGUMENT: JSON object: does not have named input: input1\"\n}\"."
}
```
What is this "instances" list? What should my input format be for Lambda? Any help would be much appreciated.

Are you using the SageMaker TensorFlow container without an inference.py? In that case requests are passed straight to TensorFlow Serving, whose REST API expects a JSON object with an "instances" key whose value is a list of inputs (that is the "instances" list in the error). Have you tried:
```
payload = json.dumps({"instances": [utterance]})
```
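Putting the pieces together, a minimal sketch of a corrected handler (the event shape, the `ENDPOINT_NAME` environment variable, and the helper name are assumptions; `utterance` is taken to be a single string):

```python
import json
import os


def build_payload(utterance):
    # TensorFlow Serving's REST predict API expects {"instances": [<input>, ...]},
    # so a single input still has to be wrapped in a list.
    return json.dumps({"instances": [utterance]})


def lambda_handler(event, context):
    import boto3  # imported lazily; preinstalled in the Lambda runtime

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=os.environ["ENDPOINT_NAME"],  # assumed env var
        ContentType="application/json",
        Body=build_payload(event["utterance"]),  # hypothetical event shape
    )
    # The endpoint returns e.g. {"predictions": [[...]]}
    return json.loads(response["Body"].read())
```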

Related

How is it that the documented intent states in Lex are different from the actual available values? What is the difference between each of these states?

I am currently working on integrating AWS Lex with a Lambda function written in TypeScript, and I am facing a situation in which I need help.
Upon reading the aws documentation for LexV2 the following values are available for an intent state:
Failed
Fulfilled
FulfillmentInProgress
InProgress
ReadyForFulfillment
Waiting
However, when I used the 'Waiting' value, the following error message showed up:
Invalid Lambda Response: Received invalid response from Lambda: Can not deserialize value of type Intent$IntentState from String "Waiting": value not one of declared Enum instance names: [ReadyForFulfillment, InProgress, Failed, Fulfilled]
Given this, I need help to:
Understand how it is possible to have values that are not recognized.
Understand the difference between each of these values (note: not all of the accepted values are explained in the documentation).
After reaching out to AWS Support, here is the answer:
LexV2 doesn't accept "FulfillmentInProgress" or "Waiting" as valid intent states.
The difference between each of the valid values:
ReadyForFulfillment - The bot is ready for fulfillment. Passing this state via the Lambda output makes the bot jump to the fulfillment state.
InProgress - The default state.
Fulfilled - The bot jumps to the closed state and plays back both the fulfillment success message and the closing response.
Failed - Marks the intent as failed; results in the bot playing the fulfillment failure message.
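To make the accepted values concrete, here is a sketch in Python (the response field names follow the LexV2 Lambda response format; the helper and intent names are made up):

```python
# The only states LexV2 accepts in a Lambda response, per the error above.
VALID_INTENT_STATES = {"ReadyForFulfillment", "InProgress", "Failed", "Fulfilled"}


def close_intent(intent_name, state, message):
    # Guard against the documented-but-rejected states ("Waiting",
    # "FulfillmentInProgress") before Lex rejects the whole response.
    if state not in VALID_INTENT_STATES:
        raise ValueError(f"LexV2 will reject intent state {state!r}")
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent_name, "state": state},
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }
```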

lambda create-event-source-mapping Member must not be null Exception

I am trying to attach a Kinesis stream event to a Lambda function using a CLI command, but I get this exception:
An error occurred (ValidationException) when calling the CreateEventSourceMapping operation: 1 validation error detected: Value null at 'startingPosition' failed to satisfy constraint: Member must not be null.
My command is:
```
aws lambda create-event-source-mapping --event-source-arn arn:aws:kinesis:us-west-2:xxxxxx:stream/lambda-stream --function-name helloworld-divyanayan_lambda --batch-size 100
```
If Lambda is your consumer for a Kinesis stream and you are continuously processing stream data, use "LATEST" as the starting position.
TRIM_HORIZON reads from the oldest untrimmed record in the shard.
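Concretely, the asker's command should pass validation once the missing flag is added (a sketch; the stream ARN and function name are copied from the question):

```shell
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kinesis:us-west-2:xxxxxx:stream/lambda-stream \
  --function-name helloworld-divyanayan_lambda \
  --batch-size 100 \
  --starting-position LATEST
```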
I got this error for a DynamoDB stream mapping to a Lambda in CloudFormation. As the other answer/comment suggests, the problem is the missing starting position on the stream.
The CLI docs do indeed have a flag for --starting-position, and the CloudFormation template does as well.
So, in my case, it was fixed by adding this line to my CFN template:
```
Type: AWS::Lambda::EventSourceMapping
Properties:
  ...
  StartingPosition: 'LATEST'
```

How to send error to DLQ if Lambda function throws error (not timedOut)?

I have a Lambda function that I expect to return some result. If I send wrong parameters, it fails, for example in the middle of the function.
Is there a way to handle any error that occurs by sending it to my DLQ, printing the error in the message, then retrying, then deleting the message?
An example error from CloudWatch:
```
TypeError: commandArray is not iterable
```
AWS Lambda has a retry mechanism for asynchronous invocations: if Lambda is unable to fully process the event, it automatically retries the invocation twice, with delays between retries.
After the retries, AWS Lambda sends the ERROR message detail to the specified Amazon SQS queue or Amazon SNS topic.
https://docs.aws.amazon.com/lambda/latest/dg/retries-on-errors.html
The error message does not contain the failed Lambda function's name, whatever the cause (exception or timeout). To add the function name to the error message, there are two ideas.
Solution - 1
The Lambda function name can be found via the S3 API; the S3 bucket detail can be found from the received event object in the error message.
Solution - 2
Convention: the SNS topic name contains the Lambda function name.
Configure the SNS topic on the Lambda function.
Add a Lambda function to the SNS topic's subscriber list.
The subscribed Lambda function can get the failed function's name from the SNS topic name and can add any custom detail to the received error message.
Lambda has the facility to retry and pump failures into a Dead Letter Queue.
Any Lambda function invoked asynchronously is retried twice before the event is discarded. If the retries fail and you're unsure why, use Dead Letter Queues (DLQ) to direct unprocessed events to an Amazon SQS queue or an Amazon SNS topic to analyze the failure.
You can then have a Lambda Function on the SNS topic or SQS queue that can respond to the error and react in the way you want it to.
For more information, see: https://docs.aws.amazon.com/lambda/latest/dg/dlq.html
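On the receiving side, a sketch of a function subscribed to the SQS dead-letter queue (the attribute names ErrorCode/ErrorMessage are the ones Lambda attaches to dead-letter messages; the function names here are made up):

```python
import json


def extract_failures(event):
    # Each record's body is the original event that failed; Lambda adds the
    # error detail as SQS message attributes on the dead-letter message.
    failures = []
    for record in event["Records"]:
        attrs = record.get("messageAttributes", {})
        failures.append({
            "event": json.loads(record["body"]),
            "error": attrs.get("ErrorMessage", {}).get("stringValue", "unknown"),
        })
    return failures


def dlq_handler(event, context):
    for failure in extract_failures(event):
        print(f"failed event {failure['event']} with error: {failure['error']}")
```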

SQS Queue not receiving all messages

I'm trying to get an Alexa skill to call a Lambda function which sends a message to an SQS queue, basically what this guide does: http://www.cyber-omelette.com/2017/01/alexa-run-script.html
I have the skill and the Lambda function working: when I execute the skill, I get the proper response created in the Lambda function. However, sometimes the queue gets the message and other times it doesn't; it seems completely random. Is there something that may be causing messages to be dropped/ignored?
In your lambda function, make sure you process ALL the messages received by the lambda function, and not just the first one.
```
import json
import logging

logger = logging.getLogger()

def handler(event, context):
    logger.debug(json.dumps(event))
    for record in event['Records']:
        message = json.loads(record['body'])
        # do whatever you have to do with the message
```

Is it possible to print errors?

For example, code such as:
```
os.Stderr.WriteString(rec.(string))
```
But this does not show up as an error.
I know that I can panic after logging and catch it in API Gateway (to avoid sending the stacktrace to the client), but are there no other ways? The documentation does not mention anything about this.
It seems not possible. I assume you're looking at the metrics in Amazon CloudWatch:
AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. These metrics include total invocations, errors, duration, throttles, DLQ errors and Iterator age for stream-based invocations.
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-metrics.html
Now, let's see how they define errors:
Metric "Errors" measures the number of invocations that failed due to errors in the function (response code 4XX).
So, if you want to see the errors on that graph, you have to respond with the proper codes. If you're concerned about exposing the error stacktrace, here is a good read: Error handling with API Gateway and Go Lambda functions. The basic idea is to create a custom lambdaError type, meant to be used by a Lambda handler function to wrap errors before returning them. This custom error message
```
{
  "code": "TASK_NOT_FOUND",
  "public_message": "Task not found",
  "private_message": "unknown task: foo-bar"
}
```
will be wrapped in a standard one
```
{
  "errorMessage": "{\"code\":\"TASK_NOT_FOUND\",\"public_message\":\"Task not found\",\"private_message\":\"unknown task: foo-bar\"}",
  "errorType": "lambdaError"
}
```
and later on mapped in API Gateway, so the end client will see only the public message:
```
{
  "code": "TASK_NOT_FOUND",
  "message": "Task not found"
}
```
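The same wrapping idea, sketched in Python rather than Go (the class and field names simply mirror the JSON shown above and are not from any library):

```python
import json


class LambdaError(Exception):
    """Carries a public part for the client and a private part for logs."""

    def __init__(self, code, public_message, private_message):
        super().__init__(private_message)
        self.code = code
        self.public_message = public_message
        self.private_message = private_message

    def to_json(self):
        # API Gateway can then map this body so only the public fields
        # reach the end client.
        return json.dumps({
            "code": self.code,
            "public_message": self.public_message,
            "private_message": self.private_message,
        })
```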
