I will be running a single instance for a full month, which is normally free. How will I be billed if I run a second instance 30 hours a month? Do you bill hourly? Or do I need to upgrade Catamaran to use a second instance?
You're only billed for the time that you actually consume resources, and you don't have to change plans to add extra workers. Just add the extra worker for the amount of time you need it, and you'll be billed for the time used.
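As an illustration of metered billing (the hourly rate below is a hypothetical placeholder, not a quoted price), a part-time second worker costs only its 30 hours, not a full month:

```python
# Hypothetical metered-billing arithmetic: you pay only for the hours a
# worker actually runs, not for the whole month.
HOURS_IN_MONTH = 730          # average hours in a month
rate_per_hour = 0.01          # placeholder rate, NOT a real quoted price

always_on_worker = HOURS_IN_MONTH * rate_per_hour   # runs the whole month
part_time_worker = 30 * rate_per_hour               # runs only 30 hours

print(round(part_time_worker, 2))   # 0.3
```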
My understanding is that a single Lambda@Edge instance can only handle one request at a time, and AWS will spin up new instances if all existing instances are serving a request.
My lambda has a heavy instance startup cost (~2 seconds) but a very light execution cost. It triggers on viewer requests, which always come in batches of ~20 (loading a single-page application). This means one user loading the app, on a cold start, will start ~20 lambda instances and take ~2 seconds.
But due to the very light execution cost, a single lambda instance could handle all 20 requests and it would still take only ~2 seconds.
An extra advantage: since each instance connects to a 3rd-party service on startup, there would be only 1 open connection instead of 20.
Is this possible?
Lambda@Edge supports neither reserved nor provisioned concurrency.
Here is the link to the documentation for reference: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-restrictions.html#lambda-at-edge-function-restrictions
That being said, with a 2-second cold start you might consider using standard Lambda instead.
Also, can’t you reduce that cold start somehow?
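One common mitigation (a sketch, not specific to any particular third-party service; the cached client here is a hypothetical placeholder) is to initialize the expensive connection once per container and reuse it across invocations, so only the first, cold invocation pays the setup cost:

```python
# Sketch: cache expensive resources at module scope so they survive across
# invocations in the same container; only the cold start pays for setup.
_client = None

def _get_client():
    """Lazily create the third-party connection (hypothetical placeholder)."""
    global _client
    if _client is None:
        _client = object()   # stand-in for an expensive connect-on-startup call
    return _client

def handler(event, context):
    client = _get_client()   # warm invocations reuse the cached client
    return {"status": 200, "client_id": id(client)}
```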
I came across the offer by Heroku that says I can receive 450 free dyno hours just for adding my credit card. I was wondering how the free hours are added. Are they like the standard 550 hours, i.e. added every month, or is it a one-time grant?
Also, I just want to run one dyno, so having 1,000 total hours per month would be more than enough. So if they add 450 hrs/month with the credit card, does that mean I can run my dyno indefinitely without being charged at all?
Yes, you get 1,000 dyno hours per month after adding a credit card. I have the same.
Yes, that's correct: you will get 1,000 dyno hours once you verify your identity with a credit card. For a single app they are enough, but if you add more apps, the allowed dyno hours are pooled: total usage is the sum of the hours consumed by each app. The original 550 hours were sized to keep an app active 75% of the time, i.e. 18 hours a day (18 × 30 = 540 ≈ 550).
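The arithmetic behind that answer can be checked directly: one always-on dyno needs at most 744 hours even in a 31-day month, which fits inside the 1,000-hour pool.

```python
# Free-tier arithmetic from the answer above: hours are pooled across apps,
# and one always-on dyno needs at most 744 hours in a month.
hours_per_day = 24
days = 31                                  # longest month, worst case
one_dyno_always_on = hours_per_day * days  # 744 hours

free_hours = 550 + 450                     # base allowance + card bonus
assert one_dyno_always_on <= free_hours    # 744 <= 1000: one app is covered

# The original 550-hour allowance targeted ~75% uptime (18 h/day):
print(18 * 30)   # 540, roughly the old 550-hour cap
```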
Using serverless Functions as a Service (AWS Lambda, GCP Functions), what is the best way to run a timer or interval for some time in the future?
I do not want to keep the instance running idle while the timer counts down. The timer will be less than 24 hrs, and needs to change dynamically at runtime (it isn't a single fixed cron schedule).
Google has Cloud Scheduler, but that mimics cron and will not let me have a timer for any amount of seconds starting from now.
If you're looking for a product that's similar to Cloud Scheduler but lets you schedule a single function invocation for an arbitrary time in the future, you should look at Cloud Tasks. You create a queue, then enqueue a task with an HTTP target and a schedule time in the future.
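A sketch of the scheduling step. The real call goes through the `google-cloud-tasks` client's `create_task` on a queue you've created (with the schedule time as a protobuf Timestamp); only the stdlib part, computing an arbitrary-seconds-from-now schedule time, is shown here, and the target URL is a hypothetical placeholder:

```python
from datetime import datetime, timedelta, timezone

def build_task(url: str, delay_seconds: int) -> dict:
    """Build a Cloud Tasks HTTP-target task payload scheduled delay_seconds
    from now. In real code this shape is handed to
    CloudTasksClient.create_task (google-cloud-tasks)."""
    run_at = datetime.now(timezone.utc) + timedelta(seconds=delay_seconds)
    return {
        "http_request": {"http_method": "POST", "url": url},
        # Any number of seconds ahead, unlike cron's minute granularity:
        "schedule_time": run_at.isoformat(),
    }

task = build_task("https://example.com/run-my-function", delay_seconds=90)
```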
https://www.heroku.com/pricing says that:
a free dyno "Sleeps after 30 mins of inactivity, otherwise always on depending on your remaining monthly free dyno hours."
a hobby dyno is "Always on"
in case of hobby dynos: price is $7/month, and "You pay for the time your dyno is running as a fraction of the month."
My app will get approximately 5 requests per day which it will serve in 3-4 milliseconds each.
I'm thinking about changing from free dynos to hobby dynos to avoid sleeping.
How much will I pay?
Am I right that it is only 5x4x30 milliseconds = 600 milliseconds running time in a month which is approximately $0? Or should I pay the whole $7/month?
I'm also wondering this myself. There's no clear answer on Heroku's website. The so-called price "calculator" doesn't allow you to customise the number or type of dynos, let alone enter an estimated number of running minutes.
Judging by some of the comments on forums, I'm guessing it's the full $7 per month, but it would be great if this could be clarified.
Answer: The price is $7 per month and there is no option for the dyno to sleep. Dynos can be turned off but this potentially disables functionality on the deployed application.
Also note: you can't always mix dyno types, so you might have to pay for a worker dyno in addition to a web dyno. This can be a real sting when you've been testing/developing with free web and worker dynos. So the jump is not necessarily from $0 to $7, but from $0 to $14 per month.
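The "fraction of the month" wording refers to wall-clock time the dyno exists, not time spent serving requests, and an always-on hobby dyno exists for the whole month. Put as arithmetic, contrasting the question's 600 ms estimate with the actual charge:

```python
# Hobby dynos bill for wall-clock time the dyno exists, not request time.
SECONDS_IN_MONTH = 30 * 24 * 3600
monthly_price = 7.00

# Request-serving time from the question: 5 requests/day * 4 ms, 30 days.
serving_seconds = 5 * 0.004 * 30                 # ~0.6 s total

# But an always-on dyno exists for the entire month, so the fraction is 1:
fraction = SECONDS_IN_MONTH / SECONDS_IN_MONTH   # 1.0
cost = monthly_price * fraction
print(cost)   # 7.0
```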
We have an AWS Lambda written in Java that usually completes in about 200 ms. Occasionally, it times out after 5 seconds (our configured timeout value).
I understand that there is occasional added latency due to container setup (though I'm not clear whether that counts against your execution time). I added some debug logging, and it seems like the code just runs slowly.
For example, a particularly noticeable log entry shows a call to HttpClients.createDefault usually takes less than 200 ms (based on the fact that the Lambda executes in less than 200 ms), but when the timeout happens, it takes around 2-3 seconds.
2017-09-14 16:31:28 DEBUG Helper:Creating HTTP Client
2017-09-14 16:31:31 DEBUG Helper:Executing request
Unless I'm misunderstanding something, it seems like any latency due to container initialization would have already happened. Am I wrong in assuming that code execution should not have dramatic differences in speed from one execution to the next? Or is this just something we should expect?
Setting up new containers or replacing cold containers takes some time, and both count against your execution time. The time you see in the console is the time you are billed for.
I assume Amazon doesn't charge for provisioning the container itself, but the timer certainly starts as soon as your runtime does. You are likely to pay for the time during which the SDK/JDK gets initialized and loads its classes. They are certainly not charging us for starting the operating system that hosts the containers.
Running a simple Java Lambda twice shows the different times for new and reused instances: the first takes 374.58 ms and the second 0.89 ms, billed as 400 ms and 100 ms respectively. For the second invocation the container got reused. While you can try to keep your containers warm, as already pointed out by #dashmug, AWS will occasionally recycle containers and, as load increases or decreases, spawn new ones. The blog posts "How long does AWS Lambda keep your idle functions around before a cold start?" and "How does language, memory and package size affect cold starts of AWS Lambda?" might be worth a look as well. If you include external libraries, your times will increase. As the second post shows, Java cold starts with smaller memory allocations can regularly reach 2-4 seconds.
Looking at these times, you should probably increase your timeout, and for an actual timeout event look not just at your application's log but at the START, END and REPORT entries as well. Each running Lambda container instance seems to create its own log stream. Consider keeping your Lambdas warm if they aren't called that often.
05:57:20 START RequestId: bc2e7237-99da-11e7-919d-0bd21baa5a3d Version: $LATEST
05:57:20 Hello from Lambda com.udoheld.aws.lambda.HelloLogSimple.
05:57:20 END RequestId: bc2e7237-99da-11e7-919d-0bd21baa5a3d
05:57:20 REPORT RequestId: bc2e7237-99da-11e7-919d-0bd21baa5a3d Duration: 374.58 ms Billed Duration: 400 ms Memory Size: 128 MB Max Memory Used: 44 MB
05:58:01 START RequestId: d534155b-99da-11e7-8898-2dcaeed855d3 Version: $LATEST
05:58:01 Hello from Lambda com.udoheld.aws.lambda.HelloLogSimple.
05:58:01 END RequestId: d534155b-99da-11e7-8898-2dcaeed855d3
05:58:01 REPORT RequestId: d534155b-99da-11e7-8898-2dcaeed855d3 Duration: 0.89 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 44 MB
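The Duration vs. Billed Duration gap in those REPORT entries is easy to compare programmatically; a small sketch over the two log lines above:

```python
import re

# Pull (Duration, Billed Duration) out of Lambda REPORT log lines so the
# cold-start overhead (374.58 ms vs 0.89 ms) is easy to compare.
REPORT_RE = re.compile(
    r"Duration: (?P<dur>[\d.]+) ms\s+Billed Duration: (?P<billed>\d+) ms"
)

def parse_report(line):
    """Return (duration_ms, billed_ms) from a REPORT entry."""
    m = REPORT_RE.search(line)
    return float(m.group("dur")), int(m.group("billed"))

cold = ("REPORT RequestId: bc2e7237-99da-11e7-919d-0bd21baa5a3d "
        "Duration: 374.58 ms Billed Duration: 400 ms Memory Size: 128 MB")
warm = ("REPORT RequestId: d534155b-99da-11e7-8898-2dcaeed855d3 "
        "Duration: 0.89 ms Billed Duration: 100 ms Memory Size: 128 MB")

print(parse_report(cold))   # (374.58, 400)
print(parse_report(warm))   # (0.89, 100)
```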
Try to keep your function always warm and see if it would make a difference.
If the timeout is really due to container warmup, then keeping it warm will greatly help reduce the frequency of these timeouts. You'd still get cold starts when you deploy changes but at least that's predictable.
https://read.acloud.guru/how-to-keep-your-lambda-functions-warm-9d7e1aa6e2f0
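The usual pattern from that article is a scheduled CloudWatch/EventBridge rule that pings the function every few minutes, with the handler returning early for pings. A sketch (the `warmup` event key is an arbitrary convention you set on the scheduled rule, not an AWS field):

```python
# Sketch of a keep-warm handler: a scheduled rule invokes the function
# periodically with a marker event, and real work is skipped for pings.
def handler(event, context):
    if event.get("warmup"):          # marker set by the scheduled rule
        return {"warmed": True}      # return fast; the container stays warm
    # ... real request handling goes here ...
    return {"status": 200}
```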
For Java-based applications the warm-up period is longer because of the JVM. Node.js or Python warm up faster, so they are worth considering. If switching the tech stack isn't an option, keep the container warm by triggering it periodically, or increase the memory allocation: Lambda allocates more CPU at larger memory sizes, which reduces execution time.