tl;dr: Is it possible for a connected Lambda code hook to spin down, then spin back up (possibly multiple times), before replying to Lex?
Some details first: I have a Lambda function in Java 8 which is connected to an intent on my Lex chatbot. This is an "Initialization and validation code hook" Lambda, meaning any time my intent is activated, Lex queries my Lambda with the input from the user, using the input event format specified here: https://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html#using-lambda-response-format. The way I've been handling input events and responses is through a function called "handleRequest()", which takes an InputStream, an OutputStream, and a Context as arguments. After reading the InputStream and running the appropriate logic, I write to the OutputStream object provided to handleRequest (using the response format in the link above) and Lex is happy.
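For reference, here is a minimal sketch of the kind of stream handler I'm describing (simplified and illustrative, not my actual code; the class name and the canned response are just examples):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class LexCodeHookHandler implements RequestStreamHandler {

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
        // Read the Lex input event (JSON) from the InputStream here and run the intent logic.
        // ...

        // Write a response in the Lex response format back to the OutputStream.
        String response = "{"
                + "\"sessionAttributes\": {},"
                + "\"dialogAction\": {"
                + "  \"type\": \"Close\","
                + "  \"fulfillmentState\": \"Fulfilled\","
                + "  \"message\": {\"contentType\": \"PlainText\", \"content\": \"Done.\"}"
                + "}}";
        output.write(response.getBytes(StandardCharsets.UTF_8));
    }
}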
This is how things work now, and it has met my needs.
However, now I have a new problem. Part of my Lambda logic now relies on making a request to a third-party web API. After making this request, my Lambda spins down (it stops computing). Eventually, this third-party API will call my Lambda back with the information needed to fulfill my intent, but by that point my Lambda has spun down and I have lost the OutputStream object I would have written my response to Lex into.
My question is whether there is another way. Is there some other way to reply to Lex using Java 8? Perhaps I could reply to Lex directly from Lambda some time after Lex calls Lambda, once the result is ready. Has anyone done this, or had experience with a Lambda that needs to spin down before replying to Lex?
Please share any insights.
The old process you describe was synchronous, but now you're migrating it to be asynchronous, and that means you'll need to change your design: since the same Lambda cannot do both the querying (to the 3rd party) and the responding back to Lex, you'll have to create new "players":
Once a Lambda has called the 3rd party, it should persist its data (context) into durable storage (a DB) and exit (see the sketch after these two points).
Receiving the callback from the 3rd party will have to be done by a different Lambda. It will look up the relevant context in the DB, combine it with the data it got from the 3rd party, and, after composing the result, it will have to call Lex (this is not a response anymore!) to update it.
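A minimal sketch of that persistence step, assuming DynamoDB via the AWS SDK for Java; the table, key, and attribute names are placeholders you'd adapt:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;

import java.util.HashMap;
import java.util.Map;

public class ContextStore {

    private final AmazonDynamoDB dynamoDb = AmazonDynamoDBClientBuilder.defaultClient();

    // Persist whatever the second Lambda will need to finish the conversation:
    // an id the 3rd party will echo back, plus the Lex user id and session attributes.
    public void saveContext(String correlationId, String userId, String sessionAttributesJson) {
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("correlationId", new AttributeValue(correlationId));
        item.put("userId", new AttributeValue(userId));
        item.put("sessionAttributes", new AttributeValue(sessionAttributesJson));
        dynamoDb.putItem("LexPendingRequests", item); // table name is a placeholder
    }
}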
I'm not familiar with Lex so I can't tell you if that's supported by it.
Another option is to see whether, instead of getting a callback from the third party, you can poll for the result. If there's such an option, the Lambda can run in a loop that sleeps for a few seconds, then polls the 3rd party for the result, until the result is available.
It's important to note that Lambda execution time in AWS is limited (up to 15 minutes), so if it takes the 3rd party longer than that to resolve your queries, this solution will not work.
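A rough sketch of such a polling loop in Java; the ThirdPartyClient interface and its fetchResult method are placeholders for whatever API you actually call:

import com.amazonaws.services.lambda.runtime.Context;

public class PollingExample {

    // Placeholder for whatever client you use to call the third-party API.
    public interface ThirdPartyClient {
        String fetchResult(String requestId); // returns null until the result is ready
    }

    private final ThirdPartyClient thirdPartyClient;

    public PollingExample(ThirdPartyClient thirdPartyClient) {
        this.thirdPartyClient = thirdPartyClient;
    }

    public String waitForResult(String requestId, Context context) throws InterruptedException {
        // Keep a safety margin so we stop before the Lambda timeout (15 minutes max).
        while (context.getRemainingTimeInMillis() > 10_000) {
            String result = thirdPartyClient.fetchResult(requestId);
            if (result != null) {
                return result;
            }
            Thread.sleep(5_000); // sleep a few seconds between polls
        }
        return null; // the third party did not resolve in time
    }
}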
Related
I have created an Alexa smart home function and want to run it asynchronously, so I plan to use Amazon SQS (Simple Queue Service). I connected the Amazon SQS trigger output to the Lambda function and am successfully able to send messages from SQS to Lambda. Now I need to connect Alexa to the SQS input. When I try to use the SQS ARN in the Alexa developer console, it does not support it. Is there any way to solve this, or will Alexa support only a Lambda function for invocation?
The Alexa skill is a smart home skill to control switches (turn on/off). When I try to control multiple switches, because of the synchronous nature of Lambda execution, it turns the switches on one after the other. I need to control them in a single shot, so I need asynchronous execution for the Lambda, where requests execute without waiting for the response.
Thanks in advance for answers.
It will not work, as SQS is asynchronous and only replies that the message was put on the queue. But Alexa needs a valid JSON response (with the speech tag and so on) immediately, and SQS is not able to fulfill this.
What you could do:
Alexa -> Lambda (new) -> SQS -> Lambda
In your newly created Lambda you could give a valid reply to Alexa and put a message on SQS.
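A rough sketch of that new Lambda in Java, assuming the AWS SDK SQS client; the queue URL is a placeholder, and the Alexa reply is only stubbed out, since the real response must follow the format your skill type expects (smart home directive response or speech response):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Collections;
import java.util.Map;

public class AlexaToSqsHandler implements RequestHandler<Map<String, Object>, Map<String, Object>> {

    // Placeholder queue URL
    private static final String QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/switch-commands";

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public Map<String, Object> handleRequest(Map<String, Object> request, Context context) {
        try {
            // Hand the actual switching work off to the queue so it runs asynchronously.
            sqs.sendMessage(QUEUE_URL, mapper.writeValueAsString(request));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        // Reply to Alexa immediately with a valid response for your skill type; this is only a stub.
        return buildAlexaResponse(request);
    }

    private Map<String, Object> buildAlexaResponse(Map<String, Object> request) {
        // ... construct and return the valid Alexa JSON response here
        return Collections.emptyMap();
    }
}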
AWS Lambda can work asynchronously. You can have a bunch of back-end processes all working as they need to, triggering various Lambdas as needed.
But the exchange with Alexa opens a session to your backend, sends its request, and the full response is expected to end that session. That response may have directives to download other content to incorporate into the response, like a sound file or lazy loading a list in APL. But it is expecting a full response.
If you go through the basic Cake Time tutorial for building Alexa skills, they actually use async-await for some APIs because that response has to be complete before it's sent.
There are some async APIs like reminders and proactive events, but they're NOT conversational. They're unique one-way messages.
The real questions are why do you feel you need to do it this way and what are you optimizing for by queuing?
Example use case
Send the user a notification 2 hours after signup.
Options considered
setTimeout(() => { /* send notification */ }, 2*60*60*1000); is not an option in serverless environments since the function terminates after execution (so it has to be stateless).
CloudWatch events can schedule lambda invocations using cron expressions - but this was designed for repetitive invocations (there's a limit of 100 rules/region).
I have not seen scheduling options in AWS SNS/SQS or GCP Pub/Sub. Are there alternatives with scheduling?
I want to avoid (if possible) setting up a dedicated message broker (overkill) or stateful/non-serverless instance - is there a serverless way to do this?
I can queue the events in a database and invoke a lambda function every minute to poll the database for events to execute in that minute... is there a more elegant solution?
Use AWS Step Functions; they are like serverless functions that don't have the 15-minute limit that AWS Lambda does. You can design a workflow in Step Functions that integrates with API Gateway, Lambda, and SNS to send email and text notifications, as follows:
1. Create a REST API via API Gateway that invokes a Lambda function, passing in, for example, the destination address (email, phone number) of the SNS notification, when it should be sent, and the notification method (e.g. email, text).
2. On invocation, the Lambda function starts the Step Functions execution, passing in the data (the Lambda is needed because API Gateway currently can't invoke Step Functions directly).
3. The Step Functions state machine is basically a workflow: you can define states for waiting (e.g. waiting the specified time before sending the notification, say 30 seconds) and states for invoking other Lambda functions that use SNS to send out email and/or text notifications.
A rudimentary example is provided by AWS with their Task Timer example.
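To make step 2 concrete, here is a minimal sketch of starting the execution with the AWS SDK for Java; the state machine ARN and the input shape (waitSeconds, destination) are placeholders, and the state machine itself would contain a Wait state followed by a Task state that publishes via SNS:

import com.amazonaws.services.stepfunctions.AWSStepFunctions;
import com.amazonaws.services.stepfunctions.AWSStepFunctionsClientBuilder;
import com.amazonaws.services.stepfunctions.model.StartExecutionRequest;

public class NotificationScheduler {

    // Placeholder ARN of the state machine with the Wait + notify states.
    private static final String STATE_MACHINE_ARN =
            "arn:aws:states:us-east-1:123456789012:stateMachine:delayed-notification";

    private final AWSStepFunctions stepFunctions = AWSStepFunctionsClientBuilder.defaultClient();

    public void schedule(String destination, int delaySeconds) {
        // The state machine reads waitSeconds in a Wait state (e.g. SecondsPath: "$.waitSeconds")
        // and then runs a Task state that sends the notification via SNS.
        String input = String.format(
                "{\"waitSeconds\": %d, \"destination\": \"%s\"}", delaySeconds, destination);

        stepFunctions.startExecution(new StartExecutionRequest()
                .withStateMachineArn(STATE_MACHINE_ARN)
                .withInput(input));
    }
}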
Features for doing this are coming on GCP, but not very soon. So for now, the solution is to poll a database.
You can do that with Datastore/Firestore with the execution datetime indexed (so you don't have to read all the documents every minute). But be careful of traffic spikes: you could create a hotspot.
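A sketch of that per-minute poll with the Datastore Java client; the kind name "ScheduledNotification" and the "executeAt" property are placeholders (the property must be indexed):

import com.google.cloud.Timestamp;
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.DatastoreOptions;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.Query;
import com.google.cloud.datastore.QueryResults;
import com.google.cloud.datastore.StructuredQuery.PropertyFilter;

public class DueNotificationPoller {

    private final Datastore datastore = DatastoreOptions.getDefaultInstance().getService();

    // Runs every minute (e.g. triggered by a cron/scheduler) and picks up due entities.
    public void pollOnce() {
        Query<Entity> query = Query.newEntityQueryBuilder()
                .setKind("ScheduledNotification")
                .setFilter(PropertyFilter.le("executeAt", Timestamp.now()))
                .build();

        QueryResults<Entity> due = datastore.run(query);
        while (due.hasNext()) {
            Entity notification = due.next();
            // send the notification, then delete or mark the entity so it isn't sent again
        }
    }
}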
You can use Cloud Scheduler on Google Cloud Platform. As stated in the official documentation:
Cloud Scheduler is a fully managed enterprise-grade cron job scheduler. It allows you to schedule virtually any job, including batch, big data jobs, cloud infrastructure operations, and more. You can automate everything, including retries in case of failure to reduce manual toil and intervention. Cloud Scheduler even acts as a single pane of glass, allowing you to manage all your automation tasks from one place.
Here you can check a quickstart for using it with Pub/Sub and Cloud Functions.
I'm a beginner with Greengrass Core applications and have finished the demo setup following the Greengrass developer guide, but I'm still confused about how Lambda functions work. Below are the questions I want to ask for help with.
I want to run a Lambda function on my Raspberry Pi 3 acting as a Greengrass Core (GGC) which can receive MQTT messages from multiple IoT devices and do some processing according to task type (i.e. various signal filtering or household machine learning algorithms). After processing, I need to send the information over MQTT to my own server (not the AWS IoT cloud) on some topics for higher-level processing.
My questions are as follows (I want to use the Java language):
1. To receive messages from multiple AWS IoT devices connected to the GGC, do I need to set up an AWSIoTMQTTClient from aws-iot-device-sdk-java?
I also see that aws_greengrass_core_sdk_java has an "IotDataClient" class. What is it for, and what is the difference from AWSIoTMQTTClient? This part is really confusing, even with the SDK documentation.
2. In the GGC, when I deploy my Lambda function, will it have an internal MQTT broker to receive messages for AWSIoTMQTTClient?
3. For Lambda functions, after creation and deployment on the GGC, will they start to work automatically? I saw there is a method to invoke one Lambda function from another Lambda function. I don't understand the mechanism of how Lambda works.
4. Can I have multiple Lambda functions for different usages? For instance, one only receives MQTT messages, another processes the received info, and a third sends the processed info out to my own MQTT server. If that is permitted, how do I make them work together to perform all the tasks?
5. I saw there is an event input to the Lambda interface. How can I call a Lambda only when a specific topic arrives at the AWSIoTMQTTClient defined in the Lambda function?
6. Below is the Java Lambda interface template:
outputType handler-name(inputType input, Context context) {
...
}
I think it should permit the user to define the input data type as needed, but the question is: if I define the input type as String, how does the Lambda handler receive the string? The developer guide has no clear description.
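For instance, this is my guess at what such a handler would look like, based only on the generic template above (the class name is just an example):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class StringHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String input, Context context) {
        // "input" should be the raw string payload delivered with the invocation/message
        return "received: " + input;
    }
}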
7. Finally, could you please share some demo code for the above questions?
Thanks in advance for your attention and kind help.
Your help is greatly appreciated.
AWSIoTMQTTClient from the device SDK is not for Greengrass Lambda functions. Instead, use IotDataClient from the Greengrass Java SDK, create a publish request, and then invoke the publish method. There is an example of that here - https://github.com/aws-samples/aws-greengrass-lambda-functions/blob/master/foundation/CDDBaselineJava/src/main/java/com/timmattison/greengrass/cdd/communication/GreengrassCommunication.java
AWSIoTMQTTClient is for devices/applications that run outside of Greengrass.
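The pattern in that example looks roughly like this (a simplified sketch; the topic and message are whatever your group's subscriptions define):

import com.amazonaws.greengrass.javasdk.IotDataClient;
import com.amazonaws.greengrass.javasdk.model.PublishRequest;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class GreengrassPublisher {

    private final IotDataClient iotDataClient = new IotDataClient();

    // Publishes a message on the local Greengrass message router; the subscriptions in your
    // group definition decide where it goes (other Lambdas, devices, or IoT Core).
    public void publish(String topic, String message) {
        try {
            PublishRequest request = new PublishRequest()
                    .withTopic(topic)
                    .withPayload(ByteBuffer.wrap(message.getBytes(StandardCharsets.UTF_8)));
            iotDataClient.publish(request);
        } catch (Exception e) {
            throw new RuntimeException("Failed to publish to " + topic, e);
        }
    }
}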
If you'd like to see some example Greengrass Lambda function code in Java, check out at least this skeleton example - https://github.com/aws-samples/aws-greengrass-lambda-functions/tree/master/functions/CDDSkeletonJava. Note that this function and the other ones in the repo depend on a framework called CDD (Cloud Device Driver). It is shared in the same repo and does most of the heavy lifting (messaging, startup, etc.). That, combined with the Greengrass provisioner - https://github.com/awslabs/aws-greengrass-provisioner - gives you a quick way to develop Java functions on Greengrass. Let me know if you try it out.
If you want to see the internals of CDD the root of it is here - https://github.com/aws-samples/aws-greengrass-lambda-functions/tree/master/foundation/CDDBaselineJava
As for Lambda functions and how they run, briefly: they can run on-demand (when they receive a message) or they can run "pinned" (forever). Pinned functions can receive messages too. Pinned functions are good when you need to track some kind of state. On-demand functions are more efficient for stateless data processing.
I have a Lex bot with multiple intents that gets invoked from Connect. If I know exactly why the caller is calling, is it possible for me to invoke the bot but start off eliciting a slot from a particular intent? Maybe if I could programmatically invoke the bot from a Lambda in an ElicitSlot state?
Amazon Connect gives you two options for calling Lex that you may want to explore further:
1. You can specify a subset of intents in the block that's calling Lex, so if your Connect flow already knows which intent needs to be called, just specify that single intent.
2. You can set session attributes in Amazon Connect that get passed to Lex. So you can put any context information there and have your Lex logic (implemented as a Lambda validation function) make choices as to what to do next based on that information. This may not work for picking intents, but it can be used for picking the right slot to fill next.
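For option 2, the validation Lambda can read the session attributes from the Lex input event and return an ElicitSlot dialog action. A rough sketch in Java, assuming the Lex V1 event/response format; the attribute, intent, and slot names are placeholders:

import java.util.HashMap;
import java.util.Map;

public class SlotRouter {

    // Builds a Lex (V1) response that keeps the session attributes Connect passed in
    // and asks Lex to elicit a specific slot. Names below are placeholders.
    public Map<String, Object> elicitSlotResponse(Map<String, Object> lexEvent) {
        @SuppressWarnings("unchecked")
        Map<String, Object> sessionAttributes =
                (Map<String, Object>) lexEvent.getOrDefault("sessionAttributes", new HashMap<>());

        String reason = (String) sessionAttributes.getOrDefault("callReason", "unknown");

        Map<String, Object> dialogAction = new HashMap<>();
        dialogAction.put("type", "ElicitSlot");
        dialogAction.put("intentName", "OrderStatus");            // placeholder intent
        dialogAction.put("slots", new HashMap<String, Object>()); // current slot values
        dialogAction.put("slotToElicit", reason.equals("billing") ? "InvoiceNumber" : "OrderNumber");

        Map<String, Object> response = new HashMap<>();
        response.put("sessionAttributes", sessionAttributes);
        response.put("dialogAction", dialogAction);
        return response;
    }
}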
I have an application from which I need to send live updates to web clients.
I'm currently happily using websockets for that, via the WAMP protocol, as it provides both publish-subscribe and RPC methods.
Now, I find that in lots of situations, when a user starts the application or a view, I need to send an initial state to the client and then keep sending updates. I do the former with an RPC call, and the latter via publish-subscribe.
This forces me to write server-side and client-side code for both of the methods, even though I'm basically conveying the same information in both cases.
On the server side, I'm moving the shared code into a common method, but I still need to both provide an entry point for the RPC call and take care of publishing the event:
# RPC endpoint for getting mission info
def get_mission_info(self):
    return self._build_mission_info()

# Scheduled or manually called method to send mission info to all users
def publish_mission_info(self):
    self.wamp.publish("UPDATE_INFO", [self._build_mission_info()])

# Shared helper (renamed so it doesn't clash with the RPC endpoint)
def _build_mission_info(self):
    # Here we generate a JSON-serializable dict with the info
    return info
And as you can imagine, the client side (JS or Python) shows a similar duplication (two handler methods).
The question is: is there a cleverer way of handling this and avoiding that boilerplate code? Is there some approach I could follow, perhaps automatically sending the last event of each type just to clients that ask for it, or that have just subscribed? Perhaps something at the Crossbar level?
In general terms, I feel I could have a better state synchronization strategy leveraging these two channels (pub-sub and RPC). How do people do it?
My WAMP server is Crossbar, and my client library is autobahn.js in Python and JS.