How to configure an Azure custom handler with a timer trigger? - go

I'm trying to configure a new function in my Golang custom handler that uses a timer trigger, but I haven't been able to find any documentation for it.
I've reviewed the examples in the Azure/Azure-Functions GitHub repo, but a timer trigger example is missing: https://github.com/Azure/Azure-Functions
I've also reviewed the custom handler documentation in the Microsoft docs, but it only covers HTTP triggers: https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-other?tabs=go%2Cwindows
It's unclear how the function is executed in main.go on the cron schedule configured in function.json.
The intent is to execute the function once an hour. This is the /functionname/function.json file I'm using:
{
    "bindings": [
        {
            "name": "timer",
            "type": "timerTrigger",
            "direction": "in",
            "schedule": "0 0 * * * *"
        }
    ]
}

There are a few differences between configuring a custom handler with a timer trigger and an HTTP trigger.
These are the differences that I noticed while figuring out how to get the timer trigger function up and running:
The /local.settings.json file requires a field called "AzureWebJobsStorage" when configuring a timer trigger; it's not required for the HTTP trigger. If it's missing, the function app will fail on startup. This field stores the connection string for the storage account used with the function app, so remember to add this file to .gitignore. (A sample file is sketched below.)
When the function app attempts to call the timer-triggered function, it expects it to be at the endpoint /functionName. This is different from the HTTP trigger, which executes function handlers at /api/functionName (see the main.go sketch below).
Note: Timer trigger functions don't appear to be able to set an outbound HTTP binding. So even though the function sets a response, it isn't used and does not get sent back through the function host. It's unclear why timer triggers ignore the outbound HTTP binding or how to set a response without using one.
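To make the first point concrete, here is a minimal local.settings.json sketch for local development. The UseDevelopmentStorage=true value assumes the Azurite storage emulator; a real storage account connection string works the same way:

{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "custom"
    }
}

And here is a minimal main.go sketch that puts the routing together. It assumes the function folder is named functionname (matching the function.json above); FUNCTIONS_CUSTOMHANDLER_PORT is the standard variable the Functions host sets for custom handlers, but treat the rest as an illustrative starting point rather than a verified reference:

package main

import (
    "log"
    "net/http"
    "os"
)

// The Functions host invokes this on the schedule from function.json,
// POSTing to /functionname (no /api prefix, unlike HTTP triggers).
func timerHandler(w http.ResponseWriter, r *http.Request) {
    log.Println("timer fired; running hourly work")
    // The response body is ignored for timer triggers, so an empty
    // 200 is all the host needs to mark the invocation successful.
    w.WriteHeader(http.StatusOK)
}

func main() {
    port, ok := os.LookupEnv("FUNCTIONS_CUSTOMHANDLER_PORT")
    if !ok {
        port = "8080"
    }
    http.HandleFunc("/functionname", timerHandler)
    log.Fatal(http.ListenAndServe(":"+port, nil))
}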

Related

How do I use Heartbeat with a Callback Return Step Function in my Lambda Function?

My Lambda function is required to send a token back to the step function for it to continue, as it is a task within the state machine.
Looking at my try/catch block of the lambda function, I am contemplating:
The order of SendTaskHeartbeatCommand and SendTaskSuccessCommand
The required parameters of SendTaskHeartbeatCommand
Whether I should add the SendTaskHeartbeatCommand to the catch block and, if so, in which order the calls should go.
Current code:
try {
    const magentoCallResponse = await axios(requestObject);
    // Note: the command takes an input object, not the bare token.
    await stepFunctionClient.send(new SendTaskHeartbeatCommand({ taskToken }));
    await stepFunctionClient.send(new SendTaskSuccessCommand({ output: JSON.stringify(magentoCallResponse.data), taskToken }));
    return magentoCallResponse.data;
} catch (err: any) {
    console.log("ERROR", err);
    await stepFunctionClient.send(new SendTaskFailureCommand({ error: "Error Sending Data into Magento", taskToken }));
    return false;
}
I have read the documentation for AWS SDK V3 for SendTaskHeartbeatCommand and am confused with the required input.
The SendTaskHeartbeat and SendTaskSuccess API actions serve different purposes.
When your task completes, you call SendTaskSuccess to report this back to Step Functions and to provide the results from the Task, which your workflow can then process. You do not need to call SendTaskHeartbeat before SendTaskSuccess, and the usage you have in the code above seems unnecessary.
SendTaskHeartbeat is optional and you use it when you've set "HeartbeatSeconds" on your Task. When you do this, you then need your worker (i.e. the Lambda function in this case) to send back regular heartbeats while it is processing work. I'd expect that to run asynchronously while your code above was executing the first line in the try block, as sketched below. The reason for having heartbeats is that you can set a longer TimeoutSeconds (or set it dynamically using TimeoutSecondsPath) than HeartbeatSeconds, so you fail and retry fast when the worker dies (heartbeat timeout) while still allowing your tasks to take longer to complete.
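A minimal sketch of that heartbeat pattern, shown here in Go with the AWS SDK for Go v2 since the shape is the same in any SDK (the 30-second interval, the placeholder work, and the task token wiring are assumptions for illustration):

package main

import (
    "context"
    "log"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/sfn"
)

// process does the long-running work while a background goroutine
// sends heartbeats more often than the Task's HeartbeatSeconds.
func process(ctx context.Context, client *sfn.Client, taskToken string) error {
    done := make(chan struct{})
    defer close(done)
    go func() {
        ticker := time.NewTicker(30 * time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-done:
                return
            case <-ticker.C:
                if _, err := client.SendTaskHeartbeat(ctx, &sfn.SendTaskHeartbeatInput{
                    TaskToken: aws.String(taskToken),
                }); err != nil {
                    log.Printf("heartbeat failed: %v", err)
                }
            }
        }
    }()

    time.Sleep(2 * time.Minute) // placeholder for the real work

    // Report success exactly once; no heartbeat is needed beforehand.
    _, err := client.SendTaskSuccess(ctx, &sfn.SendTaskSuccessInput{
        TaskToken: aws.String(taskToken),
        Output:    aws.String(`{"ok": true}`),
    })
    return err
}

func main() {
    ctx := context.Background()
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        log.Fatal(err)
    }
    // The task token arrives in the worker's input payload.
    if err := process(ctx, sfn.NewFromConfig(cfg), "<task token>"); err != nil {
        log.Fatal(err)
    }
}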
That said, it's not clear why you are using .waitForTaskToken with Lambda. Usually you can just use the default Request Response integration pattern with Lambda. This uses the synchronous invoke mode for Lambda and returns the response to your workflow without you needing to integrate back with Step Functions in your Lambda code. Possibly you are reading these off an SQS queue for concurrency control or something similar. But if not, just use Request Response.
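For reference, the difference shows up in the Task state's Resource ARN in the Amazon States Language. A sketch of both variants, with the state names, FunctionName, and payload fields as placeholders. Request Response (the default; Step Functions waits for the synchronous Lambda response, no task token involved):

"CallMagento": {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke",
    "Parameters": {
        "FunctionName": "my-function",
        "Payload.$": "$"
    },
    "End": true
}

Callback pattern (note the .waitForTaskToken suffix and the token passed in the payload; the workflow pauses until your code calls SendTaskSuccess or SendTaskFailure):

"CallMagentoWithCallback": {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
    "Parameters": {
        "FunctionName": "my-function",
        "Payload": {
            "input.$": "$",
            "taskToken.$": "$$.Task.Token"
        }
    },
    "End": true
}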

Laravel - Throttling email sends with job middleware

An application that I'm making will allow users to set up automatic email campaigns to email their list of users (up to x per day).
I need a way of making sure this is throttled so that too many aren't sent within a short window. Right now I'm working within the confines of a free Mailtrap plan, but even in production using Sendgrid I want a sensible throttle.
So say a user has set their automatic send time to 9am and there are 30 users eligible to receive requests at that date and time. Every review_request gets a record in the DB. Upon model creation, an event listener is triggered, which then dispatches a job.
This is the handle method of the job that is dispatched:
/**
 * Execute the job.
 *
 * @return void
 */
public function handle()
{
    Redis::throttle('request-' . $this->reviewRequest->id)
        ->block(0)->allow(1)->every(5)
        ->then(function () {
            // Lock obtained...
            $message = new ReviewRequestMailer($this->location, $this->reviewRequest, $this->type);
            Mail::to($this->customer->email)
                ->send($message);
        }, function () {
            // Could not obtain lock...
            return $this->release(5);
        });
}
The above is taken from https://laravel.com/docs/8.x/queues#job-middleware:
"For example, consider the following handle method which leverages Laravel's Redis rate limiting features to allow only one job to process every five seconds:"
I am using Horizon to view the jobs. When I run my command to send emails (about 25 requests to be sent), all jobs seem to process instantly, not one every 5 seconds as I would expect.
The exception for the failed jobs is:
Swift_TransportException: Expected response code 354 but got code "550", with message "550 5.7.0 Requested action not taken: too many emails per second
Why does the above Redis throttle not process a single job every 5 seconds? And how can I achieve this?

When should I use a DynamoDB trigger over calling the Lambda with another?

I currently have one AWS Lambda function that is updating a DynamoDB table, and I need another Lambda function that needs to run after the data is updated. Is there any benefit to using a DynamoDB trigger in this case instead of invoking the second Lambda using the first one?
It looks like the programmatic invocation would give me more control over when the Lambda is called (i.e. I could wait for several updates to occur before calling), and reading from a DynamoDB Stream costs money while simply invoking the Lambda does not.
So, is there a benefit to using a trigger here? Or would I be better off invoking the Lambda myself?
A DynamoDB Stream seems to be the better practice because:
you delegate the responsibility of invoking the post-processor away from your writer Lambda, which keeps the writer simpler (and faster);
you simplify connecting new external writers to the same table; otherwise you would have to implement the logic to call the post-processors in all of them as well;
you guarantee that all data is post-processed (even if somebody adds a new item in the DynamoDB web interface :));
money-wise, the execution time the writer Lambda spends issuing the invoke() call will likely cover the cost of the stream;
unless you use DynamoDB transactions, your data may not yet be available to the post-processor if you call it from the writer too soon. If your business logic doesn't need transactions, using them just to cover this problem means extra time and cost.
P.S. You can of course batch from the DynamoDB stream out of the box with a simple setting; you are not obliged to invoke the post-processor for every write operation.
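For instance, one way to set the batch size when wiring the stream to the post-processor is the AWS CLI (the function name, stream ARN, and batch size here are placeholders):

aws lambda create-event-source-mapping \
    --function-name post-processor \
    --event-source-arn arn:aws:dynamodb:us-east-1:123456789012:table/MyTable/stream/2024-01-01T00:00:00.000 \
    --starting-position LATEST \
    --batch-size 100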
Alternatively, after the data is updated you can publish an SQS message, then configure another function to read from Amazon SQS by creating an SQS trigger in the Lambda console.
To create a trigger
Open the Lambda console Functions page.
Choose a function.
Under Designer, choose Add trigger.
Choose a trigger type.
Configure the required options and then choose Add.
Lambda supports the following options for Amazon SQS event sources.
Event Source Options
SQS queue – The Amazon SQS queue to read records from.
Batch size – The number of items to read from the queue in each batch, up to 10. The event may contain fewer items if the batch that Lambda read from the queue had fewer items.
Enabled – Disable the event source to stop processing items.
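The same mapping can also be created without the console; a sketch with the AWS CLI (the consumer function name, account ID, and region are placeholders; the queue name matches the producer snippet below):

aws lambda create-event-source-mapping \
    --function-name consumer-function \
    --event-source-arn arn:aws:sqs:us-east-1:123456789012:matsuoy-lambda \
    --batch-size 10

The snippet below shows the producer side: publishing a message to the queue from the first function.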
var QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/{AWS_ACCOUNT_ID}/matsuoy-lambda';
var AWS = require('aws-sdk');
var sqs = new AWS.SQS({region: 'us-east-1'});

exports.handler = function(event, context) {
    var params = {
        MessageBody: JSON.stringify(event),
        QueueUrl: QUEUE_URL
    };
    sqs.sendMessage(params, function(err, data) {
        if (err) {
            console.log('error:', 'failed to send message: ' + err);
            context.done('error', 'ERROR putting message on SQS'); // ERROR with message
        } else {
            console.log('data:', data.MessageId);
            context.done(null, ''); // SUCCESS
        }
    });
};
Don't forget to add an SQS trigger pointing at this queue to the other function; that function will then receive the SQS messages automatically and can handle them.
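A minimal sketch of such a consumer, written here in Go with the aws-lambda-go library since any Lambda runtime works (the processing logic is a placeholder):

package main

import (
    "context"
    "log"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

// handler is invoked by the SQS trigger with a batch of up to
// "Batch size" records read from the queue.
func handler(ctx context.Context, ev events.SQSEvent) error {
    for _, record := range ev.Records {
        log.Printf("message %s: %s", record.MessageId, record.Body)
        // Process the message here; returning an error makes the
        // batch visible on the queue again for retry.
    }
    return nil
}

func main() {
    lambda.Start(handler)
}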

Google Publisher Tag, how to remove event listener from Service

There seem to be several questions on how to register events to a gpt service:
Google Publisher Tag registering to Events
registering to events with google publisher tag
How to do this is clearly defined in the documentation:
googletag.pubads().addEventListener('eventName', callbackFn);
I add my event listener to the service when the (React) component mounts, inside the callback function of window.googletag.cmd.push, as described in this tutorial by Google.
Now the problem is that every time I change page, more event listeners are added to the service. I can make sure the listener's logic executes only for the slots that actually exist by using this method (from the documentation):
googletag.pubads().addEventListener('impressionViewable', function(event) {
    if (event.slot == targetSlot) { // will only run on target slot
        // Slot specific logic.
    }
});
But more and more event listeners will remain active and keep on executing (without executing the code within the if statement).
Now, I assumed google would have implemented something like this (to run on componentWillUnmount):
googletag.pubads().removeEventListener('eventName', callbackFn);
But it doesn't exist in the documentation, and I can't seem to find any way to remove active event listeners from the service.
So I went with this (the flag has to live at module scope, outside the component, so it persists across mounts):

let eventListenerCreated = false;

if (!eventListenerCreated) {
    googletag.pubads().addEventListener("slotRenderEnded", function(event) {
        // blablabla
    });
    eventListenerCreated = true;
}

Not clean, but it will work.
I know this doesn't solve the original issue of removing the event listener, but it keeps the listener from being created again and again.

Where to make API call and how to structure actions

I've recently started migrating from ngrx to ngxs and had a design question of where I should be placing some of my calls.
In NGRX, I would create 3 actions for each interaction with an api. Something like:
GetEntities - to indicate that the initial API call was made
GetEntitiesSuccess - to indicate a successful return of the data
GetEntitiesFail - to indicate an unsuccessful return of the data
I would create an effect to watch for the GetEntities action; the effect actually called the API and handled the response by dispatching either the Success or Fail action with the resultant payload.
In NGXS, do I make the api call from the store itself when the action occurs or is there some other NGXS object that I am supposed to use to handle those API calls and then handle the actions the same way I did in ngrx (by creating multiple actions per call)?
In most of the examples I have seen, and in how I have used it, you make the API call from the action handler in the state, then patch the state as soon as the API returns.
Then after the patch call, you can dispatch an action to indicate success/failure if you need to. Something like this:
@Action(GetSomeData)
loadData({ patchState, dispatch }: StateContext<MyDataModel>, { payload }: GetSomeData) {
    return this.myDataService.get(payload.id)
        .pipe(
            tap((data) => {
                patchState({ data: data });
                // optionally dispatch here
                dispatch(new GetDataSuccess());
            })
        );
}
This Q&A might also be useful: Ngxs - Actions/state to load data from backend
