How can I prevent multiple Lambda authorizer invocations being triggered at the same time by API Gateway for the same user?
It is happening because I’m doing multiple calls from the UI at the same time.
The reason I want to prevent this behavior is that I have a caching mechanism inside the Lambda authorizer, and these concurrent invocations keep it from working the way I want.
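For context, here is roughly the kind of in-memory caching I mean; a minimal sketch where a module-level dict persists across warm invocations of the same authorizer container (names, TTL, and the policy logic are illustrative, not my actual code):

```python
import time

# Module-level cache: survives across warm invocations of this authorizer
# container, but each concurrently running container has its own copy.
_policy_cache = {}
_TTL_SECONDS = 300  # illustrative TTL

def handler(event, context):
    """Hypothetical TOKEN authorizer that caches the generated policy per token."""
    token = event.get("authorizationToken", "")
    cached = _policy_cache.get(token)
    if cached and time.time() - cached["stored_at"] < _TTL_SECONDS:
        return cached["policy"]

    # Placeholder for the real token validation / policy generation logic.
    policy = {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": "Allow",
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
    _policy_cache[token] = {"policy": policy, "stored_at": time.time()}
    return policy
```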
Thanks in advance for any answers or suggestions.
I am currently managing a website via Django.
The website's URLs call an API that is implemented as an AWS Lambda function.
Normally, the cold start of a Python-based Lambda function with no VPC configured wouldn't worry us.
But I have 2 concerns about my website performance.
The function communicates with several third-party services such as AWS S3, Firestore, Firebase Authentication, and DynamoDB, so every Lambda invocation needs to build up the required client configuration.
Every page of the website checks Firebase Authentication, whose persistence is local. Could this processing delay be critical for a cold-started container, causing a 30-second timeout?
If some users occasionally experience an API Gateway timeout, could the cause be an AWS Lambda cold start?
No, a cold start will never take that long. In practically all cases a cold start should take less than 1 second (even for Lambdas bound to a VPC).
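As a side note, the per-invocation setup described in the question can usually be limited to cold starts by initializing clients at module scope, outside the handler. A minimal sketch, assuming boto3 and firebase_admin are the clients in use (the table name is a placeholder):

```python
import boto3
import firebase_admin
from firebase_admin import credentials

# Initialized once per container, i.e. only on a cold start.
s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
if not firebase_admin._apps:  # avoid re-initializing on warm starts
    firebase_admin.initialize_app(credentials.ApplicationDefault())

def handler(event, context):
    # Warm invocations reuse the clients created above instead of rebuilding them.
    table = dynamodb.Table("example-table")  # hypothetical table name
    item = table.get_item(Key={"pk": event.get("pk", "unknown")})
    return {"found": "Item" in item}
```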
Is AWS Lambda required to make a fully functional Alexa voice application? If so, what can and cannot be done if Lambda is not desired?
No. You can create your own secure HTTPS endpoint and configure that for your Alexa skill. You will be able to accomplish pretty much everything Lambda can do for you.
You can have Lambda or any hosted endpoint as your Alexa backend.
However, using Lambda gives you easy integration with services within AWS, such as DynamoDB, Redshift, S3, etc.
Moreover, the cost of keeping your backend running on Lambda is very low, often negligible, since tons of requests can be served for less than a few dollars.
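To illustrate the non-Lambda route, here is a minimal sketch of a custom HTTPS endpoint for an Alexa skill, assuming a small Flask app behind a valid TLS certificate (request signature verification is omitted here, though Amazon requires it for published skills):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/alexa", methods=["POST"])
def alexa_endpoint():
    """Handle Alexa requests and return a plain-text speech response."""
    body = request.get_json(force=True)
    request_type = body.get("request", {}).get("type", "")

    if request_type == "LaunchRequest":
        text = "Welcome to the example skill."
    else:
        text = "Sorry, I did not understand that."

    return jsonify({
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    })

if __name__ == "__main__":
    app.run(port=5000)
```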
I have a simple question: is there a way/program/method to create unit tests against the API URL generated by AWS AppSync, to verify the validity of the created GraphQL schemas, queries, mutations, etc.?
There is an open-source AppSync Serverless plugin which has offline emulator support. You may find it useful: https://github.com/sid88in/serverless-appsync-plugin#offline-support
Another good recommendation is to have two separate AppSync APIs. One API hosts your production traffic; the other is for testing changes before they go to production. This is significantly easier if you use CloudFormation (highly recommended) to manage your infrastructure.
If you want to validate your API is working periodically (every minute or so), you could create a canary like the following:
Create a Lambda function which runs on a schedule. This lambda function will make various GraphQL requests. It can emit success/failure metrics to CloudWatch.
Set up a CloudWatch alarm so you can be notified if your success/failure metric is out of the ordinary (a sketch of such a canary follows the links below).
For the canary use-case see:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
https://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html
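A minimal sketch of such a canary Lambda, assuming the AppSync API uses API-key auth and that the endpoint URL and key are passed in through environment variables (both variable names are hypothetical):

```python
import json
import os
import urllib.request

import boto3

# Hypothetical environment variables -- set these on the canary function.
APPSYNC_URL = os.environ["APPSYNC_URL"]
APPSYNC_API_KEY = os.environ["APPSYNC_API_KEY"]

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    """Run a simple GraphQL query against AppSync and emit a success/failure metric."""
    query = {"query": "query { __typename }"}  # replace with a real query from your schema
    http_request = urllib.request.Request(
        APPSYNC_URL,
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": APPSYNC_API_KEY},
    )
    success = 0
    try:
        with urllib.request.urlopen(http_request, timeout=10) as response:
            body = json.loads(response.read())
            # Treat any GraphQL-level error as a failure.
            success = 0 if body.get("errors") else 1
    except Exception:
        success = 0

    cloudwatch.put_metric_data(
        Namespace="AppSyncCanary",
        MetricData=[{"MetricName": "Success", "Value": success, "Unit": "Count"}],
    )
    return {"success": success}
```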
There is also Amplify's amplify-appsync-simulator package that is supposed to help with testing AppSync, but there is no documentation on how to use it. It is used by the serverless-appsync-simulator that Michael wrote, as well as by Amplify itself.
What is the difference between plain AWS Lambda and AWS Lambda@Edge?
Lambda executes functions based on certain triggers. The use cases for Lambda are quite broad and there is heavy integration with many AWS services. You can even use it to simply execute code via AWS's API and receive the output in your own scripts outside of AWS. Common use cases include Lambdas being invoked directly and their output collected, being plugged into API Gateway to serve user requests, modifying objects as they are placed into S3 buckets, etc.
Lambda@Edge is a service that allows you to execute Lambda functions that modify the behaviour of CloudFront specifically. Lambda@Edge runs during the CloudFront request cycle and makes logical decisions that affect the delivery of the CloudFront content.
https://aws.amazon.com/lambda/features/
https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
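As a small illustration of the plain Lambda side, a sketch of a handler triggered by S3 ObjectCreated notifications (the bucket notification configuration itself is not shown):

```python
def handler(event, context):
    """Log each newly created object; triggered by an S3 ObjectCreated notification."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")
    return {"processed": len(records)}
```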
Lambda@Edge is Lambda functions run in response to CloudFront events.
You still create a Lambda@Edge function under Lambda, but a Lambda@Edge function must be created in us-east-1.
You then need to associate the Lambda@Edge function with a CloudFront distribution behavior, on the viewer request or one of the other CloudFront events (a sketch follows the notes below).
It has to be created in the us-east-1 region.
If the code is taken from an S3 bucket, the bucket also needs to be in the us-east-1 region.
You can't pass environment variables the same way as to a normal Lambda function: you either have to hardcode values during the build process, or hardcode the environment name and fetch the values from somewhere else.
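A minimal sketch of a Lambda@Edge handler attached to the viewer-request event, which adds a header before CloudFront processes the request (the header name is purely illustrative):

```python
def handler(event, context):
    """Viewer-request Lambda@Edge handler: modify the request before CloudFront acts on it."""
    request = event["Records"][0]["cf"]["request"]
    # Add an illustrative header; a real function might rewrite the URI,
    # issue a redirect, or select an origin instead.
    request["headers"]["x-edge-example"] = [{"key": "X-Edge-Example", "value": "true"}]
    return request
```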
Lambda is a serverless AWS compute service that lets users run code in response to a function trigger; it has a lot of use cases, such as file processing and optimization.
On the other hand, Lambda@Edge is an extension of AWS Lambda: a feature of CloudFront that lets users run code closer to the application's users, which improves performance and reduces latency.
The official documentation describes Lambda@Edge nicely:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
I'm creating a serverless website using AWS Lambda, but I'm a bit concerned about potential abuse. How do I protect myself against a user who queries my endpoint a million times?
API Gateway supports throttling. The defaults are reasonable, but you can alter them however you like. The throttle settings in the console are under the Stages tab of your APIs. There's more info here: http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
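If you manage the API programmatically rather than through the console, the same stage-level throttle can be set with boto3; a sketch, where the REST API ID and stage name are placeholders:

```python
import boto3

apigateway = boto3.client("apigateway")

# Placeholder identifiers -- substitute your REST API ID and stage name.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        # Default throttle for all methods on this stage.
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "100"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "200"},
    ],
)
```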