Cron a Lambda Function Every 1 Minute, But the Lambda Has Parameters Like user_id or timestamp - aws-lambda

I want to fetch a client's data, so I have a Lambda function that gets the data.
Now I want this function to be invoked automatically every minute to keep the data up to date.
I've read about AWS EventBridge rules and events, but how can I make the rule input dynamic, e.g. a user_id or a timestamp?
Or do you have other ideas? Thank you!

Related

How do I delay a lambda function between each EventBridge event?

I am trying to update Algolia Rules via a Lambda function connected to Eventflow via Salesforce.
The issue is that when someone on Salesforce does a batch update, the Lambda function is triggered, which then checks for any existing Algolia Rules. If there is an existing Rule, it grabs the existing params tied to that Rule and adds them to the new Rule. However, there isn't enough time between updates for the Algolia Rule to be created and/or updated before the next update in the batch arrives.
Current Flow:
Update records (let's say 3 of them)
Trigger Lambda function
The lambda function takes in each record
If the rule doesn't exist -> create a new Rule
Else if the rule exists -> grab existing params and add to the Rule
Repeat for the next two records
Further clarification:
The Rule isn't updated in time for the second and/or third record to trigger a rule update rather than the creation of a new one. Lambda takes these events one at a time, but too quickly.
I've seen people use SQS to create a delay; however, all of my attempts have failed. This is the closest example of a similar problem I've found.
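For what it's worth, here is a hedged sketch of the SQS delay-queue pattern mentioned above (the queue URL and delay value are hypothetical). The idea is that the Salesforce-triggered function writes each record to SQS with a per-message DelaySeconds, and a separate consumer Lambda, configured with batch size 1 and reserved concurrency 1, drains the queue serially:

import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/algolia-rule-updates"

def enqueue_rule_update(record, delay_seconds=30):
    # Each record becomes one delayed message; the consumer Lambda
    # processes them far enough apart for the previous Algolia rule
    # write to settle before the next record is examined.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(record),
        DelaySeconds=delay_seconds,  # up to 900 seconds
    )

Note that DelaySeconds only postpones when each message becomes visible, so messages sent at the same moment become visible together; it is the batch size of 1 plus the concurrency limit of 1 (or a FIFO queue with a single message group, where the delay must be set at queue level) that actually spaces the Algolia writes out.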

DynamoDB AppSync Query on multiple attributes

My app uses AppSync resolvers to fetch data from DDB and return it to our front-end. One table we have is for Notifications. A Notification can be either pending or default (non-pending). The table itself has a primary key of notification_id and we have a GSI called userIndex to grab the notifications for a user, with a sort key of timestamp.
In the app, I show all notifications in a list, pending first and then default. Given that a user may have many notifications, I'd like to implement pagination to fetch a batch at a time. The only way I've been able to do this is to:
Change the query to include an isPending parameter, which I use as a filter expression so the query only returns notifications that are pending (or not pending).
Store two "nextTokens", one each for pending and non-pending, along with the corresponding lists.
Make separate queries for pending/non-pending, and use the filter to return results to the appropriate list.
This is obviously inefficient, and I am re-reading data from DynamoDB. My question is: given my DynamoDB table/requirements, is there a way I can paginate so that I get all the pending notifications first (sorted by timestamp) and then all the default notifications (sorted by timestamp), using one query and one nextToken?
I've seen the use of @model and @key, but I haven't been able to make them work in my app.
Thanks!
No, not really. There is a hard limit on what a DynamoDB query returns, and it cannot be bypassed; the only way to make use of nextToken is another query.
However, it is also worth noting that the FilterExpression is applied after the data has already been read by the query, so it does not reduce the items scanned or the capacity consumed, only what is returned. The nextToken is therefore still going to be (relatively) the same for each query. You could instead filter the results yourself after the call, before the next pagination query, and save yourself a few calls.
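As a concrete illustration of that suggestion, here is a minimal boto3 sketch that runs one query against the userIndex GSI with no filter expression, carries a single LastEvaluatedKey as the nextToken, and partitions each page into pending/default in code. The table name and the user_id/isPending attribute names are assumptions echoing the question:

import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table name.
table = boto3.resource("dynamodb").Table("Notifications")

def fetch_notifications_page(user_id, next_token=None, limit=50):
    kwargs = {
        "IndexName": "userIndex",
        "KeyConditionExpression": Key("user_id").eq(user_id),
        "ScanIndexForward": False,  # newest first on the timestamp sort key
        "Limit": limit,
    }
    if next_token:
        kwargs["ExclusiveStartKey"] = next_token

    resp = table.query(**kwargs)
    items = resp.get("Items", [])

    # Partition in code instead of using a FilterExpression: the query
    # reads the same items either way, so this avoids paying for a
    # second paginated pass per status.
    pending = [i for i in items if i.get("isPending")]
    default = [i for i in items if not i.get("isPending")]
    return pending, default, resp.get("LastEvaluatedKey")

As the answer says, this still cannot return all pending notifications globally before all default ones; it only orders them within each fetched page. A global pending-first ordering in one query would need the sort order encoded in a key, e.g. a GSI whose sort key is a composite such as isPending#timestamp, which would mean reshaping the table.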

AWS AppSync Lambda Custom Resolver for mutation

I'm a complete newbie to AWS. I have AppSync models, queries and mutations created by Amplify, backed by DynamoDB. I need to add a new timestamp field in DynamoDB whenever one specific field has been updated. The only way I found was a Lambda function used as a custom resolver for the UpdateTask mutation. So I created it (it basically just checks whether the specific field has been updated and, if so, sets updateXY to the current timestamp, then returns the changed object). The problem is that when I do an update, no change happens in DynamoDB and no error is returned from AppSync. Can anyone help me, please?
AppSync works by mapping fields in a GraphQL selection set to resolvers that do something with them. If you've overwritten the default resolver, then you aren't talking to DynamoDB anymore. Returning the value without saving/reading anything from DynamoDB won't have the effect you want. Instead, you'll want to interact with DynamoDB from your Lambda resolver.
For an example of a NodeJS-based Lambda resolver that interacts with DynamoDB directly, check out this blog:
GraphQL API using Serverless + AWS AppSync + DynamoDB + Lambda resolvers + Cognito. [Part 3]. Note in particular how the functions include DynamoDB utilities:
const { insertOrReplace } = require('./../../util/dynamo/operations');
etc.
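To make the same idea concrete without that blog's codebase, here is a minimal Python sketch of a Lambda resolver that performs the write itself rather than only returning a value. The table name, key shape, and the specificField/updateXY names are assumptions echoing the question:

from datetime import datetime, timezone

import boto3

# Hypothetical Amplify-generated table name.
table = boto3.resource("dynamodb").Table("Task-dev")

def handler(event, context):
    args = event["arguments"]["input"]  # AppSync puts mutation args here
    key = {"id": args["id"]}

    if "specificField" in args:
        # The watched field is part of the update: persist it AND
        # stamp updateXY with the current time.
        resp = table.update_item(
            Key=key,
            UpdateExpression="SET specificField = :f, updateXY = :ts",
            ExpressionAttributeValues={
                ":f": args["specificField"],
                ":ts": datetime.now(timezone.utc).isoformat(),
            },
            ReturnValues="ALL_NEW",
        )
        return resp["Attributes"]

    # Nothing to stamp: read the current item so the resolver still
    # returns a complete object for the GraphQL selection set.
    return table.get_item(Key=key).get("Item")

The key point, matching the answer above, is that the resolver itself calls DynamoDB; returning an object without writing it leaves the table untouched.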

How to prevent overwrites/duplicates in dynamoDB from triggering lambda

I have a DynamoDB table that a Java app writes data to:
// CLOBBER replaces the entire item, so duplicate data arrives as an overwrite
mapper.save(row, DynamoDBMapperConfig.builder().withSaveBehavior(SaveBehavior.CLOBBER).build());
I want a Lambda to trigger off new items in the table so that their keys can be published to SNS. However, if an item in DynamoDB is being overwritten (i.e., we received duplicate data), we do NOT want to do anything with it.
How do I handle that? I control both the Lambda and the code writing to DynamoDB, but not the source of the data.
I don't think we can prevent the Lambda from being triggered when an item is overwritten in DynamoDB. But once the Lambda is triggered, we can identify whether it's a new record or an existing one.
The input to the Lambda function will be a DynamoDBStreamEvent, whose records contain an OldImage attribute (provided the stream's view type includes old images, e.g. NEW_AND_OLD_IMAGES). If that is present, it indicates an existing record that got modified; in that case, we can just return from the Lambda without doing any processing.
Also, each record contains the entire snapshot before and after the write in the OldImage and NewImage attributes, so we can also check whether some other attribute value has changed to decide whether it's an overwrite.
You need an IF, CASE, or something similar that looks at the stream record's eventName: if it is INSERT (which means a new item), run your code; if it is MODIFY (an update to an existing item), skip it. There is an example in the DynamoDB documentation.
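Putting both answers together, a hedged Python sketch of the stream handler might look like this (the SNS topic ARN is a hypothetical placeholder):

import json
import boto3

sns = boto3.client("sns")

# Hypothetical topic ARN.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:new-items"

def handler(event, context):
    for record in event["Records"]:
        # A CLOBBER save over an existing item arrives as MODIFY (and
        # carries an OldImage); only genuinely new items arrive as INSERT.
        if record["eventName"] != "INSERT":
            continue
        keys = record["dynamodb"]["Keys"]  # still in DynamoDB-typed JSON
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(keys))

This filters inside the Lambda; alternatively, an event source mapping filter on eventName could drop MODIFY records before the function is even invoked.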

Concurrent requests from PL/SQL

I want to work out the logic for the following requirements.
The XML Publisher Data Definition fires the beforeReport trigger, which calls a function that returns TRUE or FALSE.
This function is contained within a PL/SQL package.
Within this function, a concurrent request is first submitted to SQL*Load some data into an APPS database table; afterwards, the report shows the records that didn't pass. SQL*Loader is called as a concurrent request whose executable is a HOST (shell script).
Somehow the return status (success or failure) from SQL*Loader has to be checked within PL/SQL. Does that mean I need some output parameters from the concurrent request? Or something else?
If SUCCESS then it proceeds forward. If FAILED, it should check the database table to see if there are old records to process.
Then the next 'event' follows, which SHOULD happen in the beforeReport trigger: call a procedure I already have, which performs some validations on the table records and calls an API to create a person in the HZ_PARTIES table. But depending on the SQL*Loader return status:
- if there are old records, call the API, but return a WARNING status for the concurrent request. What's the best way to return WARNING in this case?
- if there are no old records, the API is not submitted, but the (execution) report will show that SQL*Loader failed and there is no data to show.
In short, I need to work out how to structure the functions and procedures, that is:
Say I make the function called in the beforeReport trigger public, and everything else happens within it:
1) the SQL*Loader concurrent request - is it better to put this into some procedure or nested function and call it from the "main" function above?
Then, if it succeeds (which I get as an OUT parameter), step 2:
2) validation and the API call - can I put this inside a private procedure or function as well, as above?
3) the report is shown, containing the results of the two prior steps.
I just need some clarification on what is best to do here. I can clear up further questions if needed.
Thanks.
