Questions about Java Greengrass Lambda functions - aws-iot-greengrass

I'm a beginner with Greengrass Core applications and have finished the demo setup following the Greengrass developer guide, but I'm still confused about how Lambda functions work. Below are the questions I'd like help with.
I want to run a Lambda function on my Raspberry Pi 3 acting as the Greengrass Core (GGC), which can receive MQTT messages from multiple IoT devices and do some processing according to the task type (i.e. various signal filtering or household machine-learning algorithms). After processing, I need to send the information over MQTT, on certain topics, to my own server (not the AWS IoT cloud) for higher-level processing.
My questions are as follows (I want to use the Java language):
1. To receive messages from multiple AWS IoT devices connected to the GGC, do I need to set up an AWSIoTMQTTClient from aws-iot-device-sdk-java?
I also found an "IotDataClient" class in aws_greengrass_core_sdk_java. What is it for, and how does it differ from AWSIoTMQTTClient? This is really confusing, even with the SDK documentation.
2. When I deploy my Lambda function on the GGC, will it have an internal MQTT broker to receive messages for AWSIoTMQTTClient?
3. After a Lambda function is created and deployed on the GGC, when does it start to work? I saw there is a method to invoke another Lambda function from a Lambda function, but I don't understand the mechanism of how Lambda functions run.
4. Can I have multiple Lambda functions for different purposes, for instance, one only to receive MQTT messages, another to process the received information, and a third to send the processed information out to my own MQTT server? If so, how do I make them work together to perform all the tasks?
5. I saw there is an event input to the Lambda interface. How can I invoke a Lambda function only when a message on a specific topic arrives at the AWSIoTMQTTClient defined in the Lambda function?
6. Below is the Java Lambda handler interface template:
outputType handler-name(inputType input, Context context) {
...
}
I think it should let the user define the input data type as needed, but if I define the input type as String, how does the Lambda handler receive the string? The developer guide has no clear description of this.
7. Finally, could you please share some demo code for the questions above?
Thanks in advance for your attention and kind help.

AWSIoTMQTTClient from the device SDK is not for Greengrass Lambda functions. Instead use IotDataClient from the Greengrass Java SDK, create a publish request, and then invoke the publish method. There is an example of that here - https://github.com/aws-samples/aws-greengrass-lambda-functions/blob/master/foundation/CDDBaselineJava/src/main/java/com/timmattison/greengrass/cdd/communication/GreengrassCommunication.java
AWSIoTMQTTClient is for devices/applications that run outside of Greengrass.
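As a minimal sketch of that pattern (hedged: this follows the publish example from the Greengrass Core SDK for Java, but the class name and topic are placeholders, so check it against the SDK version you deploy):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.greengrass.javasdk.IotDataClient;
import com.amazonaws.greengrass.javasdk.model.PublishRequest;

public class GreengrassPublisher {
    private final IotDataClient iotDataClient = new IotDataClient();

    // Publish a payload on a topic through the Greengrass core's local router;
    // the group's subscriptions decide whether it reaches devices, other functions, or the cloud.
    public void publish(String topic, String message) throws Exception {
        PublishRequest request = new PublishRequest()
                .withTopic(topic)
                .withPayload(ByteBuffer.wrap(message.getBytes(StandardCharsets.UTF_8)));
        iotDataClient.publish(request);
    }
}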
If you'd like to see some example Greengrass Lambda function code in Java, check out at least this skeleton example - https://github.com/aws-samples/aws-greengrass-lambda-functions/tree/master/functions/CDDSkeletonJava. Note that this function and the other ones in the repo depend on a framework called CDD (Cloud Device Driver). It is shared in the same repo and does most of the heavy lifting (messaging, startup, etc.). That, combined with the Greengrass provisioner - https://github.com/awslabs/aws-greengrass-provisioner - gives you a quick way to develop Java functions on Greengrass. Let me know if you try it out.
If you want to see the internals of CDD the root of it is here - https://github.com/aws-samples/aws-greengrass-lambda-functions/tree/master/foundation/CDDBaselineJava
As for Lambda functions and how they run: briefly, they can run on-demand (invoked when they receive a message) or "pinned" (running forever). Pinned functions can receive messages too. Pinned functions are good when you need to track some kind of state; on-demand functions are more efficient for stateless data processing.
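To sketch question 6 (a hedged example, not from the developer guide: the class and handler names are made up, and it assumes the standard aws-lambda-java-core RequestHandler interface), a handler that declares String input/output looks like this; the message payload routed to the function arrives as the input argument:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class StringHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        // 'input' holds the deserialized event payload delivered to this function
        context.getLogger().log("Received: " + input);
        return "processed: " + input;
    }
}

An alternative is an InputStream/OutputStream handler, which hands you the raw message bytes and avoids surprises from the built-in JSON deserialization.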

Related

AWS SQS List Triggers from SDK

I'm looking for a method to programmatically identify the triggers associated with an SQS queue. Looking through the SQS SDK docs, it doesn't seem this is possible. I thought instead to try from the other end, and it appears the Lambda ListEventSourceMappings function would likely do what I want, since I'm able to provide it with the queue ARN. However, this requires the ListEventSourceMappings permission on all Lambdas (*), which isn't really ideal - it shouldn't really hurt, it's just not what I want. Is there another mechanism for this that I'm missing, or another approach?
Lambda polls SQS queues. It doesn't appear that way in the console, because they hide some of the details from you, but behind the scenes there is a process running within the AWS Lambda system that is polling your SQS queue and invoking your Lambda function when a message is available.
SQS doesn't push messages to Lambda (or anywhere else). SQS just holds messages and hands them out to anything that asks for them. So from an SQS perspective, there is no knowledge of who the message consumers are.
Given the above, the only way to find what you want is to use the Lambda ListEventSourceMappings API.
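A minimal sketch with the AWS SDK for Java v1 (these Lambda client classes exist in the SDK; the queue ARN below is just a placeholder):

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.EventSourceMappingConfiguration;
import com.amazonaws.services.lambda.model.ListEventSourceMappingsRequest;
import com.amazonaws.services.lambda.model.ListEventSourceMappingsResult;

public class ListQueueTriggers {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();
        // Filter the mappings by the queue's ARN so only its consumers come back
        ListEventSourceMappingsRequest request = new ListEventSourceMappingsRequest()
                .withEventSourceArn("arn:aws:sqs:us-east-1:123456789012:my-queue"); // placeholder ARN
        ListEventSourceMappingsResult result = lambda.listEventSourceMappings(request);
        for (EventSourceMappingConfiguration mapping : result.getEventSourceMappings()) {
            System.out.println(mapping.getFunctionArn() + " state=" + mapping.getState());
        }
    }
}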

Is it possible to connect Alexa output to Amazon SQS

I have created an Alexa smart home function and want to run it asynchronously, so I plan to use Amazon SQS (Simple Queue Service). I connected the Amazon SQS trigger output to a Lambda function and was successfully able to send a message from SQS to Lambda. Now I need to connect Alexa to the SQS input. When I try to use the SQS ARN in the Alexa developer console, it is not supported. Is there any way to solve this, or does Alexa support only Lambda functions for invocation?
The Alexa skill is for a smart home service that controls switches (turn on/off), so when I try to control multiple switches, the synchronous execution of Lambda turns the switches on one after the other. I need to control them in a single shot, so I need asynchronous execution for Lambda, where requests execute without waiting for the response.
Thanks in advance for answers.
It will not work, because SQS works asynchronously and only acknowledges that the message was put in the queue. But Alexa needs a valid JSON response (with the speech tag and so on) immediately, and SQS is not able to fulfill this.
What you could do:
Alexa -> Lambda (new) -> SQS -> Lambda
In your newly created Lambda you can return a valid reply to Alexa and put a message into SQS.
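A rough sketch of that new Lambda (hedged: the queue URL and class name are placeholders, the hand-built response map follows the custom-skill JSON shape rather than the smart home directive format, and a real skill would normally build the reply with the Alexa Skills Kit SDK):

import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class EnqueueAndReplyHandler implements RequestHandler<Map<String, Object>, Map<String, Object>> {
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/switch-commands"; // placeholder
    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    @Override
    public Map<String, Object> handleRequest(Map<String, Object> request, Context context) {
        // 1. Queue the real work for the backend Lambda that SQS triggers
        sqs.sendMessage(QUEUE_URL, request.toString());

        // 2. Reply to Alexa right away so the session gets its expected response
        Map<String, Object> speech = new HashMap<>();
        speech.put("type", "PlainText");
        speech.put("text", "OK, working on it.");
        Map<String, Object> response = new HashMap<>();
        response.put("outputSpeech", speech);
        response.put("shouldEndSession", true);
        Map<String, Object> envelope = new HashMap<>();
        envelope.put("version", "1.0");
        envelope.put("response", response);
        return envelope;
    }
}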
AWS Lambda can work asynchronously. You can have a bunch of back-end processes all working as they need to, triggering various Lambdas as needed.
But the exchange with Alexa opens a session to your backend, sends its request, and the full response is expected to end that session. That response may have directives to download other content to incorporate into the response, like a sound file or lazy loading a list in APL. But it is expecting a full response.
If you go through the basic Cake Time tutorial for building Alexa skills, they actually use async-await for some APIs because that response has to be complete before it's sent.
There are some async APIs like reminders and proactive events, but they're NOT conversational. They're unique one-way messages.
The real questions are why do you feel you need to do it this way and what are you optimizing for by queuing?

Can a connected lambda function spin down before replying to Lex?

tldr: Is it possible for a connected Lambda codehook to spin down then spin back up (possibly multiple times) before replying to Lex?
Some details first: I have a Lambda function in Java 8 which is connected to an Intent on my Lex chatbot. This is an "Initialization and validation code hook" Lambda, meaning any time my intent is activated, Lex queries my Lambda with the input from the user, using the Input Event format specified here: https://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html#using-lambda-response-format. The way I've been handling input events and responses is through a function called "handleRequest()", which takes as args an InputStream, OutputStream, and Context. After reading the InputStream and activating the appropriate logic, I write to the OutputStream object provided as input to handleRequest (using the response format in the link above) and Lex is happy.
This is how things work now, and it has met my needs.
However, now I have a new problem. Part of my Lambda logic now relies on making a request to a third-party web API. After making this request, my Lambda spins down (it stops computing). Eventually, this third-party API will call my Lambda with the information needed to fulfill my intent, but by that point, since my Lambda has spun down, I have lost the OutputStream object into which I would have written my response to Lex.
My question is whether there is another way. Is there another way to reply to Lex using Java 8? Perhaps I could reply to Lex directly from Lambda at some point after Lex calls my Lambda, once the result is ready. Has anyone else done this, or had experience with a Lambda that needs to spin down before replying to Lex?
Please share any insights.
The old process you describe was synchronous, but now you're migrating it to be async, and that means you'll need to change your design: since the same Lambda cannot do both the querying (of the 3rd party) and the responding back to Lex, you'll have to create new "players":
once a Lambda has called the 3rd party, it should persist its data (context) into persistent storage (a DB) and exit (see the sketch below)
receiving the callback from the 3rd party will have to be done by a different Lambda, which will look in the DB to get the relevant context, combine it with the data it got from the 3rd party, and, after composing the result, call Lex (this is not a response anymore!) to update it.
I'm not familiar with Lex so I can't tell you if that's supported by it.
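A hedged sketch of the "persist the context and exit" step, using DynamoDB with the AWS SDK for Java v1 (the table and attribute names are made up for illustration):

import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;

public class PendingRequestStore {
    private static final String TABLE = "PendingLexRequests"; // hypothetical table
    private final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

    // Save enough Lex context to resume the conversation when the 3rd-party callback arrives
    public void save(String requestId, String lexContextJson) {
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("requestId", new AttributeValue().withS(requestId));
        item.put("lexContext", new AttributeValue().withS(lexContextJson));
        dynamo.putItem(TABLE, item);
    }
}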
Another option is to see whether, instead of getting a callback from the third party, you can poll for the result. If there is such an option, the Lambda can run in a loop that sleeps for a few seconds and then polls the 3rd party, until it gets the result.
It's important to note that Lambda execution time in AWS is limited (up to 15 minutes), so if it takes the 3rd party longer than that to resolve your queries, this solution will not work.
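If polling is available, the loop inside the same Lambda could look roughly like this (the ThirdPartyClient interface and its method are hypothetical placeholders for whatever API you actually call):

public class PollForResult {
    // Placeholder for the real 3rd-party API; fetchResult returns null until the result is ready
    public interface ThirdPartyClient {
        String fetchResult(String jobId);
    }

    public static String waitForResult(ThirdPartyClient client, String jobId) throws InterruptedException {
        // Keep polling until the 3rd party reports the job as done; the Lambda timeout
        // (15 minutes at most) bounds how long this loop is allowed to run.
        while (true) {
            String result = client.fetchResult(jobId);
            if (result != null) {
                return result;
            }
            Thread.sleep(5_000); // sleep a few seconds between polls
        }
    }
}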

Best way to schedule one-time events in serverless environments

Example use case
Send the user a notification 2 hours after signup.
Options considered
setTimeout(() => { /* send notification */ }, 2*60*60*1000); is not an option in serverless environments since the function terminates after execution (so it has to be stateless).
CloudWatch events can schedule lambda invocations using cron expressions - but this was designed for repetitive invocations (there's a limit of 100 rules/region).
I have not seen scheduling options in AWS SNS/SQS or GCP Pub/Sub. Are there alternatives with scheduling?
I want to avoid (if possible) setting up a dedicated message broker (overkill) or stateful/non-serverless instance - is there a serverless way to do this?
I can queue the events in a database and invoke a lambda function every minute to poll the database for events to execute in that minute... is there a more elegant solution?
Use AWS Step Functions; they are serverless workflows that don't have the 15-minute limit that AWS Lambda does. You can design a workflow in Step Functions that integrates with API Gateway, Lambda and SNS to send email and text notifications as follows:
Create a REST API via API Gateway that will invoke a Lambda function, passing in, for example, the destination address (email, phone #) for the SNS notification, when it should be sent, and the notification method (e.g. email, text, etc.).
The Lambda function, on invocation, will start the Step Functions execution, passing in the data (Lambda is needed because API Gateway currently can't invoke Step Functions directly); see the sketch after these steps.
The Step Function is basically a workflow: you can define states for waiting (like waiting for the specified time to send the notification, e.g. 30 seconds) and states for invoking other Lambda functions that can use SNS to send out email and/or text notifications.
A rudimentary example is provided by AWS with their Task Timer example.
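A minimal sketch of that Lambda-to-Step-Functions handoff with the AWS SDK for Java v1 (hedged: the state machine ARN and class names are placeholders, and the state machine with its Wait state and notification task must already exist):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.stepfunctions.AWSStepFunctions;
import com.amazonaws.services.stepfunctions.AWSStepFunctionsClientBuilder;
import com.amazonaws.services.stepfunctions.model.StartExecutionRequest;

public class StartTimerHandler implements RequestHandler<String, String> {
    private static final String STATE_MACHINE_ARN =
            "arn:aws:states:us-east-1:123456789012:stateMachine:NotifyLater"; // placeholder
    private final AWSStepFunctions stepFunctions = AWSStepFunctionsClientBuilder.defaultClient();

    @Override
    public String handleRequest(String notificationRequestJson, Context context) {
        // Hand the notification details to the state machine; a Wait state inside it
        // delays until the scheduled time, then a later state sends the SNS message.
        StartExecutionRequest request = new StartExecutionRequest()
                .withStateMachineArn(STATE_MACHINE_ARN)
                .withInput(notificationRequestJson);
        return stepFunctions.startExecution(request).getExecutionArn();
    }
}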
Things are coming on GCP for doing this, but not very soon. So, today, the solution is to poll a database.
You can do that with Datastore/Firestore, with the execution datetime indexed (to avoid reading all the documents each minute). But be careful of traffic spikes; you could create a hotspot.
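As a hedged illustration of that indexed-datetime query (the collection and field names are invented; this uses the Firestore Java client):

import com.google.api.core.ApiFuture;
import com.google.cloud.Timestamp;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.FirestoreOptions;
import com.google.cloud.firestore.QueryDocumentSnapshot;
import com.google.cloud.firestore.QuerySnapshot;

public class DueEventPoller {
    public static void main(String[] args) throws Exception {
        Firestore db = FirestoreOptions.getDefaultInstance().getService();
        // Query only the documents whose scheduled time has passed (executeAt is indexed),
        // rather than reading the whole collection every minute.
        ApiFuture<QuerySnapshot> due = db.collection("scheduled_events")
                .whereLessThanOrEqualTo("executeAt", Timestamp.now())
                .get();
        for (QueryDocumentSnapshot doc : due.get().getDocuments()) {
            System.out.println("due: " + doc.getId());
        }
    }
}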
You can use Cloud Scheduler on Google Cloud Platform. As stated in the official documentation:
Cloud Scheduler is a fully managed enterprise-grade cron job scheduler. It allows you to schedule virtually any job, including batch, big data jobs, cloud infrastructure operations, and more. You can automate everything, including retries in case of failure to reduce manual toil and intervention. Cloud Scheduler even acts as a single pane of glass, allowing you to manage all your automation tasks from one place.
Here you can check a quickstart for using it with Pub/Sub and Cloud Functions.

How to make AWS Lambda wait for a shadow updated topic

My desired flow is:
1. Ask my IoT device to do something using the AVS SDK.
2. An AWS Lambda is triggered and updates the device shadow.
3. The IoT device is triggered via the shadow delta topic and does something locally, then publishes its status to the shadow when done.
4. The AWS Lambda sends voice feedback to my IoT device to tell the user the reported status.
I am stuck at point 4, since I don't know how to trigger AVS (ASK)'s speech response only after the topic is updated, within the same Lambda triggered by AVS (as mentioned in points 1 and 2).
You don't want Lambda to wait.
I heard from a wise man one time:
A long-lived Lambda = an EC2 instance
Either create an IoT rule to trigger a Lambda on specific topics, or create an API endpoint to update the topic and trigger it from the client.
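A hedged sketch of the IoT-rule approach (the class name and wiring are assumptions; the rule itself, e.g. one that selects the device's reported-status topic, is configured separately in AWS IoT):

import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class ShadowReportedHandler implements RequestHandler<Map<String, Object>, Void> {
    @Override
    public Void handleRequest(Map<String, Object> reportedState, Context context) {
        // The IoT rule invokes this Lambda with the published status as the event payload
        context.getLogger().log("Device reported: " + reportedState);
        // Send the voice feedback from here, instead of keeping the original
        // AVS-triggered Lambda alive while it waits for the device.
        return null;
    }
}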

Resources