What's the best way to do `setTimeout` or `setInterval` with FaaS? - aws-lambda

Using serverless Functions as a Service (AWS Lambda, GCP Cloud Functions), what is the best way to run a timer or interval that fires at some time in the future?
I do not want to keep an instance running idle while the timer counts down. The timer will be less than 24 hours, and it needs to change dynamically at runtime (it isn't a single fixed cron schedule).
Google has Cloud Scheduler, but that mimics cron and won't let me set a timer for an arbitrary number of seconds starting from now.

If you're looking for a product that's similar to Cloud Scheduler but lets you schedule a single function invocation for an arbitrary time in the future, look at Cloud Tasks. You create a queue, then add a task with an HTTP target scheduled to run at some time in the future.
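Here's a minimal sketch of that flow with the Python `google-cloud-tasks` client. The project, location, queue name, and target URL are placeholders you'd replace with your own:

```python
# Sketch: schedule a one-off HTTP call for an arbitrary time in the future.
# "my-project", "us-central1", "my-queue", and the URL are placeholders.
from datetime import datetime, timedelta, timezone

from google.cloud import tasks_v2
from google.protobuf import timestamp_pb2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")

# Fire 90 seconds from now; any offset up to Cloud Tasks' ~30-day limit works.
when = timestamp_pb2.Timestamp()
when.FromDatetime(datetime.now(timezone.utc) + timedelta(seconds=90))

task = {
    "http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "url": "https://example.com/my-function",
        "headers": {"Content-Type": "application/json"},
        "body": b'{"key": "value"}',
    },
    "schedule_time": when,
}

response = client.create_task(request={"parent": parent, "task": task})
print("Created task:", response.name)
```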

Related

Concurrency within a single Lambda@Edge instance?

My understanding is that a single Lambda@Edge instance can only handle one request at a time, and AWS will spin up new instances if all existing instances are serving a request.
My lambda has a heavy instance startup cost (~2 seconds) but a very light execution cost. It triggers on viewer requests, which always come in batches of ~20 (loading a single-page application). This means one user loading the app, on a cold start, will start ~20 lambda instances and take ~2 seconds.
But due to the very light execution cost, a single lambda instance could handle all 20 requests and it would still take only ~2 seconds.
An extra advantage is, since each instance connects to a 3rd party service on startup, there would be only 1 open connection instead of 20.
Is this possible?
Lambda@Edge supports neither reserved nor provisioned concurrency.
Here is the link to the documentation for reference: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-restrictions.html#lambda-at-edge-function-restrictions
That being said, with a 2-second cold start, you might consider using standard Lambda instead.
Also, can you reduce that cold start somehow?

Is there any way to check if a lambda function has been idle for a given amount of time?

I have a use case where I am supposed to execute a piece of code based on the idle time of a given Lambda function; that is, if the function has been idle for, say, 5 minutes, my piece of code should run.
Is there any way to check the lambda state/status?
If you are looking to avoid Lambda cold starts, leverage Provisioned Concurrency, which keeps the configured number of execution environments initialized and ready to respond.
https://aws.amazon.com/blogs/aws/new-provisioned-concurrency-for-lambda-functions/
If you did not mean that, and by idleness you mean "no requests processed" by the Lambda, then use a CloudWatch metric/alarm to monitor the number of invocations over a timeframe and run whatever you need in the alarm's action.
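For the second case, a hedged sketch with boto3: alarm when a (hypothetical) function named `my-function` records zero invocations over 5 minutes, and notify an SNS topic (the ARN is a placeholder) whose subscriber runs your code:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="my-function-idle-5min",
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    Statistic="Sum",
    Period=300,                    # one 5-minute window
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="LessThanOrEqualToThreshold",
    TreatMissingData="breaching",  # no data points = no invocations = idle
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:on-idle-topic"],
)
```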

How to keep a desired number of AWS Lambda function containers warm

My project has a REST API implemented on AWS API Gateway and AWS Lambda. Since AWS Lambda functions are serverless and stateless, when we make a call, AWS starts a container with the Lambda function's code to process it. According to the AWS documentation, after a Lambda function finishes executing, AWS does not stop the container, and the next call can be processed by it. This improves the service's performance: only the first call spends time starting a container (a cold start of the Lambda function), and all subsequent calls execute faster because they reuse the same container (warm starts).
As a next step to improve performance, we created a cron job that periodically calls our Lambda function (we use CloudWatch rules for that). This keeps the Lambda function "warm", avoiding the stopping and restarting of containers, so when a real user calls our REST API, Lambda will not spend time starting a new container.
But we ran into an issue: this approach keeps only one container of the Lambda function warm, while the actual number of parallel calls from different users can be much larger (hundreds and sometimes even thousands of users in our case). Is there any way to implement warm-up functionality for a Lambda function that warms not just a single container but some desired number of them?
I understand that this approach can affect the cost of using Lambda, and perhaps a good old application server would be better overall, but comparing those approaches and their costs will be a later step, I think. At the moment I just want to find a way to keep a desired number of Lambda function containers warm.
This may be long, but bear with me: it should give you a workaround and perhaps a better understanding of how Lambda works.
Alternatively, you can skip to "The Workaround" at the bottom if you are not interested in the background.
For folks who are not aware of cold starts, please read this blog post to understand them better. In short:
Cold Starts
When a function is executed for the first time, or after its code or resource configuration has been updated, a container will be spun up to execute it. All the code and libraries are loaded into the container so it can execute. The code then runs, starting with the initialisation code: the code written outside the handler, which only runs when the container is created for the first time. Finally, the Lambda handler is executed. This set-up process is what is considered a cold start.
For performance, Lambda can reuse containers created by previous invocations, which avoids initialising a new container and loading the code; only the handler code is executed. However, you cannot depend on a container from a previous invocation being reused. If you haven't changed the code and not too much time has gone by, Lambda may reuse the previous container.
If you change the code or resource configuration, or some time has passed since the previous invocation, a new container will be initialized and you will experience a cold start.
Now consider these scenarios for better understanding:
Consider that the Lambda function in the example is invoked for the first time. Lambda creates a container, loads the code into it, and runs the initialisation code; the function handler is then executed. This invocation experiences a cold start. As mentioned in the comments, the function takes 15 seconds to complete. After a minute, the function is invoked again. Lambda will most likely reuse the container from the previous invocation, so this invocation will not experience a cold start.
Now consider the second scenario, where the second invocation arrives 5 seconds after the first. Since the first invocation takes 15 seconds and has not finished executing, the new invocation has to create a new container to execute in, and will therefore experience a cold start.
Now for the first part of the problem, which you have already solved:
Preventing cold starts is possible, but not guaranteed, and the common workaround keeps only one container of the Lambda function warm. To do it, you run a CloudWatch Events scheduled rule (cron expression) that invokes your Lambda function every couple of minutes to keep it warm.
The Workaround:
For your use case, your Lambda function will be invoked very frequently with a very high concurrency rate. To avoid as many cold starts as possible, you need to keep warm as many containers as you expect your highest concurrency to reach. To do this, invoke the function with a delay, allowing the function's concurrency to build up to the desired number of concurrent executions. This forces Lambda to spin up the number of containers you desire. As a result, it can drive up costs, and it will not guarantee that cold starts are avoided.
That being said, here is a breakdown of how you can keep multiple containers for your function warm at one time:
Have a CloudWatch Events rule that is triggered on a schedule, either a fixed rate or a cron expression; for example, you can set this rule to trigger every 5 minutes. Then specify a Lambda function (the controller function) as the target of this rule.
Your controller Lambda function then invokes the Lambda function you want to keep warm once for each concurrently running container you desire.
There are a few things to consider here:
You will have to build up concurrency, because if the first invocation finishes before another invocation starts, that invocation may reuse the previous invocation's container instead of creating a new one. To do this, add a delay to the Lambda function when it is invoked by the controller function. This can be done by passing a specific payload with these invocations; the Lambda function you want to keep warm then checks whether that payload exists. If it does, the function waits (to build up concurrent invocations); if it does not, the function executes as expected. (See the sketch after this list.)
You will also need to ensure you are not throttled on the Invoke Lambda API call when calling it repeatedly. Your Lambda function should be written to handle this throttling if it occurs, and consider adding a delay between API calls to avoid it.
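Here is a hedged Python sketch of that controller pattern. The function name, desired concurrency, `warmup` payload key, and 10-second hold are illustrative assumptions, not part of the original answer:

```python
import json
import time

import boto3

lambda_client = boto3.client("lambda")

TARGET_FUNCTION = "my-target-function"  # function to keep warm (placeholder)
DESIRED_CONCURRENCY = 20                # highest concurrency you expect

def controller_handler(event, context):
    # Fire-and-forget ("Event") invocations so they all overlap in time.
    for _ in range(DESIRED_CONCURRENCY):
        lambda_client.invoke(
            FunctionName=TARGET_FUNCTION,
            InvocationType="Event",
            Payload=json.dumps({"warmup": True}),
        )

# In the target function, hold the container briefly so the warm-up
# invocations actually run concurrently and force separate containers.
def target_handler(event, context):
    if event.get("warmup"):
        time.sleep(10)  # keep this container busy while the others spin up
        return {"warmed": True}
    # ... normal business logic for real requests ...
```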
In the end, this solution can reduce cold starts, but it will increase costs, and it cannot guarantee that cold starts won't happen; they are inevitable when working with Lambda. If your application needs faster response times than what a Lambda cold start allows, I would recommend looking into running your server on an EC2 instance.
We are using Java (Spring Boot) Lambdas and came to pretty much the same solution as Kush Vyas's answer above, which works very well.
We did find during load testing, however, that a legitimate user request would often occur during the period that the "Controller function" was executing, again causing the inevitable cold start...
So, in our "controller function" we now make our regular number of X concurrent warm-up requests, but on every 5th execution of the function we call our target Lambda an additional 2 times. The theory is that we end up with X+2 Lambdas staying warm, and for 4 out of 5 warm-up rounds there are still 2 spare Lambdas that can service user requests.
This reduced our number of cold starts even further (though obviously still not completely), and we are still experimenting with combinations of concurrency, warm-up frequency, and sleep time to find the optimal solution for us; these values will likely always depend on the load requirements of a specific situation.
AWS just announced this:
https://aws.amazon.com/about-aws/whats-new/2019/12/aws-lambda-announces-provisioned-concurrency/
Be aware, though, that it is not free: for our simple use case of keeping 10 Lambda instances warm, it seems our daily cost would increase from $0.06 to $4.
If you use the serverless framework with AWS Lambda, you can use this plugin to keep all your lambdas warm with a certain level of concurrency.
I'd like to share a small but useful tip we use to reduce the cold-start delay as observed by the user. In our case the Lambda function handles HTTP requests from the front-end via AWS API Gateway; in particular, it executes search functionality when the user types something in an input field. Users usually start typing with some delay after the UI is rendered, so we have time to make a ping call to our Lambda function to warm it up. By the time the user makes a request to the back-end, the Lambda will most likely be ready for work.
Admittedly, this approach does nothing to fix cold starts on the back-end side, and you will still need to look at other options, but it can be a user-experience improvement without much effort (something like a hotfix).
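On the Lambda side, the handler only needs to short-circuit the warm-up call. A minimal sketch, assuming the front-end sends a `ping` query parameter (the parameter name is an assumption):

```python
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    if params.get("ping"):
        # Warm-up ping: the container is now initialized; do no real work.
        return {"statusCode": 204, "body": ""}
    # ... real search logic for genuine user requests ...
    return {"statusCode": 200, "body": "results"}
```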
One thing to remember: if your service is public and you care about your Google Insights score, you should be careful implementing this approach.

Using Quartz for long running job

I'm planning to use Quartz scheduler to process a one-time job.
My use case is that I need to migrate BLOBs from one storage to another, and a BLOB can be as big as 100 GB, so a particular job can run for a very long time before the work is done.
The reason I'm using Quartz is its clustering support, fault tolerance, and retry capabilities in case a job fails. The only thing I'm concerned about is that I might see a lot of misfired-trigger scenarios and a lot of database locks, which could hamper live production traffic on those database hosts. I will probably be scheduling tens of thousands of jobs in one shot.
A few of the things I have figured out:
I can set a high value for org.quartz.jobStore.misfireThreshold so that misfires do not happen. I don't really care when a job gets picked up, since it's a background job with no SLA as such; I only care that the job eventually gets picked up and the work gets done.
I can also set the batch-mode properties org.quartz.scheduler.batchTriggerAcquisitionMaxCount and org.quartz.scheduler.batchTriggerAcquisitionFireAheadTimeWindow. I understand the batch max count should be roughly equal to the thread pool size, which gives the biggest performance gain, but what should the value of the fire-ahead time window be?
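For reference, here is a hedged sketch of how those settings might look together in `quartz.properties`; the values are illustrative, not recommendations:

```properties
# Very large misfire threshold (ms): a trigger picked up as much as 24 hours
# late is still not treated as a misfire.
org.quartz.jobStore.misfireThreshold = 86400000

# Acquire roughly a thread pool's worth of triggers per poll.
org.quartz.threadPool.threadCount = 25
org.quartz.scheduler.batchTriggerAcquisitionMaxCount = 25
# Allow triggers due within the next second to be acquired early.
org.quartz.scheduler.batchTriggerAcquisitionFireAheadTimeWindow = 1000

org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.isClustered = true
```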
I'm using Quartz with Spring Boot and will be leveraging org.quartz.impl.jdbcjobstore.JobStoreCMT. As I understand it, the job's execute method gets wrapped in a transaction; will this cause any problems, given that the transaction will stay open for a long time while a job takes hours to complete? Is that OK? I will be using an Oracle database.
Am I missing something here? Can someone share their experience with a similar use case?
Thanks!

how to implement custom cloud worker

I am designing a cloud app and need a worker process which scours my database looking for work, and then performs it.
Most of the info I seem to find on the subject of background tasks in the cloud involves some kind of scheduler and/or queuing system.
What I have doesn't quite fit into the "run this task every 5 minutes" or "add this to the queue to be executed later" models. I think the main difference to my problem is that the workers themselves find work to do, rather than being assigned it by a periodic scheduler or an external process that generates work.
What I have is basically a giant table where each entry has three fields:
job: a small task to be performed; let's say it gets the last message from a Twitter account and stores it in the database
the interval at which to perform that job: say every 5 minutes; N.B. the interval is arbitrary and differs for each entry in the table
the last date when the job was performed
The way I would implement this is to have a worker with an infinite loop. On each pass it scours the database (a) looking for items where date + interval < currentTime; (b) when it finds one, it sets date = currentTime; and (c) it then executes the job. If there is no work at the moment, it sleeps for a few seconds and tries again.
I will have many parallel workers scouring the database simultaneously, which is why I do (b) before (c) above. Since there are parallel workers, actions (a) and (b) are atomic operations on the database, to prevent work from being duplicated (see the sketch below). If a worker crashes after (a) and (b) but before it manages to finish the work, it's no big deal; the workers can just do it at the next interval. The reason is that the work is not time-invariant, so a backlog of failed jobs has no benefit: the tasks have to be performed at their exact intervals, and it's better to skip one interval than to have uneven intervals between executions.
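A hedged sketch of that claim-then-execute loop in Python, assuming a `jobs` table with `id`, `last_run`, `interval_sec`, and `payload` columns (the schema and the SQLite stand-in are assumptions; any database with atomic single-row updates works):

```python
import time
import sqlite3  # stand-in: any DB with atomic single-row UPDATEs works

def try_claim(conn, job_id, old_last_run, now):
    # Optimistic claim: the UPDATE succeeds only if no other worker has
    # claimed the job since we read it; this makes steps (a)+(b) atomic.
    cur = conn.execute(
        "UPDATE jobs SET last_run = ? WHERE id = ? AND last_run = ?",
        (now, job_id, old_last_run),
    )
    conn.commit()
    return cur.rowcount == 1

def worker_loop(conn, perform):
    while True:
        now = time.time()
        row = conn.execute(
            "SELECT id, last_run, payload FROM jobs "
            "WHERE last_run + interval_sec <= ? LIMIT 1",
            (now,),
        ).fetchone()
        if row is None:
            time.sleep(5)  # no work at the moment; try again shortly
            continue
        job_id, last_run, payload = row
        if try_claim(conn, job_id, last_run, now):
            perform(payload)  # step (c): execute the job
```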
My question is whether that is a reasonable implementation strategy. If so, how do I bring this process to life in the cloud (I am using Heroku, but may switch to EC2 in the future)? I haven't written any code yet, so I'd welcome other suggestions (maybe I misunderstood the use cases/applications for queue systems).
This sounds so close to using something like a scheduled job that you might as well tread the well beaten path and do it the more conventional way. There's no reason why you can't schedule a job to run once every few seconds.
However, this idea of looking for work sounds dodgy. What happens if two workers find the same task to run at the same time for instance? Also, are there not triggers in the application which can indicate that work needs doing? It seems strange that you have code 'looking for work'.
You can go a very long way with simple periodic background tasks, so I would exhaust all possibilities in that area before rolling your own.
