Dynamic HTTP request routing using rules implemented in AWS Lambda

I would like to use a Lambda function to implement dynamic routing rules for HTTP requests targeted at a fleet of Fargate tasks. I need the Lambda function because the routing rules require dynamic lookups against an external database service (a Redis cluster in this case).
Is there a way to do that using Elastic Load Balancing or API Gateway?
Is there any other option that I should consider?

There is no way to have a Lambda function be part of the routing logic of either of those services. This would have to be something that you created yourself.
Each of the Lambda functions would have to know about the endpoints of the targets and when to send traffic there.
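As a sketch of what a self-built routing layer could look like: a Lambda function behind API Gateway (proxy integration) decides the target for each request and forwards it. Here the Redis lookup is stubbed with an in-memory dict (`ROUTING_TABLE` is a hypothetical name, as is the `x-tenant-id` header); a real implementation would query the Redis cluster and forward the request with an HTTP client.

```python
import json

# Stand-in for the Redis lookup; a real implementation would call
# something like redis.Redis(...).hget("routes", tenant_id) instead.
ROUTING_TABLE = {
    "tenant-a": "http://10.0.1.10:8080",
    "tenant-b": "http://10.0.2.17:8080",
}

def resolve_target(event):
    """Pick the Fargate endpoint for an incoming API Gateway proxy event."""
    tenant = event.get("headers", {}).get("x-tenant-id", "")
    target = ROUTING_TABLE.get(tenant)
    if target is None:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown tenant"})}
    # The handler would now forward event["path"] and the body to the target
    # (e.g. with urllib.request) and relay the response back to the caller.
    return {"statusCode": 200, "body": json.dumps({"target": target + event.get("path", "/")})}
```

Note the trade-off: every request now pays the Lambda invocation and Redis round-trip latency, which is part of why neither ELB nor API Gateway offers this natively.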

How to integrate Dapr with Kong Api Gateway

I want to use Kong as an API gateway to allow external applications to interact with the cluster, and have Dapr communicate with my application. I can't find any examples.
So, there is no easy way to do this directly. There is a blog post that walks through setting it up with an ingress controller here: https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/
The gist of it is that you set up your ingress controller pods as Dapr services and rewrite/redirect the calls to the Dapr sidecar. Be aware of namespaces (the blog glosses over this and installs the ingress in the default namespace, which is not common practice) and fully qualify the service name.
Finally, I recommend you apply a rewrite to the invocation of the downstream service: use a regex to capture the path segments and append the relevant segment to the end of the service invocation URL, e.g. http://localhost:3500/v1.0/invoke/YOURSERVICE.ITSNAMESPACE/method/$2 (where $2 is the segment captured from the original path in the ingress).
NOTE: I am having issues getting these types of calls to go through the HTTP pipeline components I have downstream, but if you don't need those, then it's a great option.
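The rewrite described above can be sketched in isolation. Assuming an ingress rule that captures the path remainder as its second group (the service name `myservice` and namespace `production` below are placeholders), the mapping to the Dapr invoke URL looks like:

```python
import re

# Matches e.g. /api/orders/123 -> group 1 = "orders", group 2 = "123";
# a stand-in for the capture groups configured in the ingress rewrite rule.
PATTERN = re.compile(r"^/api/([^/]+)/(.*)$")

def to_dapr_invoke_url(path, service="myservice", namespace="production"):
    """Rewrite an external path to a Dapr sidecar invocation URL."""
    m = PATTERN.match(path)
    if not m:
        raise ValueError(f"unroutable path: {path}")
    # $2 in the ingress rule corresponds to m.group(2) here
    return f"http://localhost:3500/v1.0/invoke/{service}.{namespace}/method/{m.group(2)}"
```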

Apollo Server vs Apollo-server-express [duplicate]

I am struggling to understand the added value of Express (or Koa, Hapi, etc) integration with Apollo GraphQL server.
I see it can work very well in standalone mode (an example: https://medium.com/codingthesmartway-com-blog/apollo-server-2-introduction-efc4026f5654).
In which case should we use it with (or without) integration? What should drive this decision?
If all you need is a GraphQL endpoint, then using the standalone library (apollo-server) is generally preferred because there will be less boilerplate to write (features like subscriptions, file uploads, etc. just work without additional configuration). However, many applications require additional functionality beyond just exposing a single API endpoint. Examples include:
Webhooks
OAuth callbacks
Session management
Cookie parsing
CSRF protection
Monitoring or logging requests
Rate limiting
Geofencing
Serving static content
Server-side rendering
If you need this sort of functionality for your application, then you'll want to utilize an HTTP framework like Express and then use the appropriate integration library (i.e. apollo-server-express).
Apollo Server also includes integrations for serverless platforms like AWS Lambda. If you want to go serverless, for example to get better scalability or eliminate system administration costs, then you would also need to use one of these integrations.

How to selectively choose the Lambda version for an API Gateway API at runtime?

I have a use case where an API backed by a Lambda has to be latency-critical for a few clients, but there are other clients who call the API in high-volume bursts and whose latency requirements are liberal.
We are using provisioned concurrency for the latency-critical calls and do not want to use it for the non-latency-critical calls, as the cost is high.
Since provisioned concurrency can only be used with an alias/version, is it possible to choose the Lambda version at runtime based on the API key?
The idea is to determine the client from the API key and point to the appropriate version. I am trying to avoid creating two API endpoints, one for latency-critical clients and the other for non-latency-critical clients.
It is not possible for API Gateway to invoke a Lambda function alias based on the API key passed in the request. What you can do is set up 2 API Gateway stages, one for latency-critical calls and the other for non-critical ones. The Lambda function integration would then need to be set up to use API Gateway stage variables so the appropriate Lambda function alias can be invoked based on the stage. You can refer to the documentation on how to configure that: https://docs.aws.amazon.com/apigateway/latest/developerguide/stages.html
So, using this method, you would be creating two endpoints, but the API configuration for both would be similar.
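For reference, the stage-variable trick works by putting the variable inside the Lambda function reference in the integration (the function name and variable name below are placeholders); each stage then sets lambdaAlias to the alias it should invoke, e.g. a provisioned-concurrency alias for the latency-critical stage and a plain alias for the other:

```
myFunction:${stageVariables.lambdaAlias}
```

Be aware that API Gateway needs resource-based permission to invoke each alias separately (the console will prompt you with the aws lambda add-permission command to run per alias).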

HTTPS calls from multiple Lambda functions

I am learning AWS Lambda and have a basic question about architecture with respect to managing HTTPS calls from multiple Lambda functions to a single external service.
The external service will only process 3 requests per second from any one IP address. Since I have multiple asynchronous Lambdas, I cannot be sure I will stay below this threshold. I also don't know which IPs my Lambdas use, or even whether they are the same.
How should this be managed?
I was thinking of using an SQS FIFO queue, but I would need to set up a bidirectional system to get the call responses back to the appropriate Lambda. I think there must be a simple solution to this, but I'm just not familiar enough yet.
What would you experts suggest?
If I am understanding your question correctly:
You can create an API endpoint by building an API Gateway with Lambda integrations (preferably the Lambda proxy integration) and then use the throttling options to decide the throughput. This can be done in different ways, e.g. at the account level or the method level (see the AWS docs).
You can perform load testing using Gatling or any other tool and generate a report showing, for example, that even with 6 TPS hitting your site, method-level throttling keeps the external service at only 3 TPS.
How you throttle depends on your architecture; I have used method-level throttling to protect an external service at 8 TPS.
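If you end up making the outbound calls yourself rather than relying on API Gateway throttling, a 3-requests-per-second cap can also be enforced in code with a token bucket. A minimal sketch (names are illustrative, and the clock is injected so the behaviour is deterministic):

```python
class TokenBucket:
    """Allow at most `rate` calls per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, clock):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.clock = clock          # callable returning current time in seconds
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Note the caveat: this only limits calls within a single Lambda container. Across many concurrent Lambdas you would still need a shared limiter (e.g. a counter in Redis or DynamoDB) or a queue drained by a single consumer, which is where the SQS idea from the question comes back in.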

Proper integration of AWS AppSync with Laravel?

Has anyone successfully integrated AWS AppSync with Laravel?
I am new to AWS AppSync but have good experience with Laravel.
I am trying to implement an offline-app feature in my mobile app, and the mobile API is what Laravel handles.
I looked into AWS AppSync, but all the material talks about is DynamoDB and GraphQL. Some places say I need to use AWS Lambda.
I really can't get a grip on how to properly implement this.
Any suggestions or pieces of advice are greatly appreciated.
I have basic experience with GraphQL.
Thanks.
I checked a few video sessions and found that an HTTP endpoint can be used as a resolver. Is this the proper way?
If I use HTTP as the resolver, can I still use the real-time features?
Links:
https://aws.amazon.com/appsync/
Laravel is a PHP framework, so I think the two options you would want to consider would be HTTP and Lambda data sources.
Lambda can be something of a catch-all for data sources: you have absolute control over what you call, how you do it, and in what language you do it. You just have to set up a Lambda function and create a data source in the AppSync console pointing to it, then have your Lambda function interact with your framework however it needs to.
I'm not terribly familiar with Laravel myself, but I believe HTTP is also a totally viable option. I would think this would be the way you want to go, as it cuts out the added complexity and latency of a Lambda function between AppSync and your end destination. A tutorial for setting one up is available here: https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-http-resolvers.html
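For illustration, assuming a Laravel route like /api/posts exposed to AppSync, an HTTP resolver's request mapping template (the shape used in the linked tutorial) might look like this; the resourcePath and the arguments being forwarded are placeholders:

```
{
    "version": "2018-05-29",
    "method": "POST",
    "resourcePath": "/api/posts",
    "params": {
        "headers": {
            "Content-Type": "application/json"
        },
        "body": $util.toJson($ctx.args)
    }
}
```

The HTTP data source itself is configured with the base URL of your Laravel API, and the response mapping template turns the Laravel JSON response back into the GraphQL field's type.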
In either case, real time updates will be absolutely available to you.
