How to architect the serverless framework and microservices on AWS Lambda

I have been studying microservices and serverless solutions and am playing with an Angular frontend hosted on S3 and Lambda functions that talk to various DynamoDB tables via API Gateway on AWS.
Every example and video I read/watch uses simple CRUD microservices as part of a simple 'todo' application or similar. My problem is: where does the business logic sit? If I'm building a complex application, I don't want all my business logic in my frontend Angular application. Or do I? I could build an Application API which in turn calls CRUD microservices, but that feels like a monolithic approach.
I appreciate there may not be a definitive answer, but can anybody advise a novice on best practice?

There are several best practices I follow when designing serverless microservices:
Start with only a few microservices. (The fewer the better up front; unless you know exactly how the services should be separated, delay the decision to split.)
Separate the business logic from the API plumbing, and use the Lambda handler as a controller in MVC that invokes the business logic. (This also lets you unit test the logic without depending on Lambda; a sketch follows this list.)
It's not necessary to write only simple CRUD in your API; it depends on your domain and the business logic required. (But don't build another monolith: separate the code into different services. Several AWS service limits will also guide how many endpoints a service should have.)
Apply the design patterns available for microservices. (E.g., if you want to sync databases between microservices, use the pub-sub pattern with SNS, DynamoDB Streams, and Lambda; see the second sketch after this list.)
Keep most of the presentation logic in the Angular app.
Use CloudFront as a proxy and CDN to avoid CORS issues.
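Here is a minimal sketch of the handler-as-controller idea, assuming a hypothetical order-creation endpoint (the names `createOrder`, `OrderInput`, and the validation rule are illustrative, not from the question):

```typescript
// orderService.ts — pure business logic, unit-testable without any Lambda types.
import { randomUUID } from "node:crypto";

export interface OrderInput { sku: string; quantity: number; }
export interface Order extends OrderInput { id: string; }

export async function createOrder(input: OrderInput): Promise<Order> {
  if (!input.sku || input.quantity <= 0) {
    throw new Error("invalid order");
  }
  // ...persist to DynamoDB here...
  return { id: randomUUID(), ...input };
}

// handler.ts — the Lambda handler acts as a thin MVC-style controller:
// it only parses the event and delegates to the business logic.
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
import { createOrder } from "./orderService";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  try {
    const order = await createOrder(JSON.parse(event.body ?? "{}"));
    return { statusCode: 201, body: JSON.stringify(order) };
  } catch (err) {
    return { statusCode: 400, body: JSON.stringify({ error: String(err) }) };
  }
};
```

Because `createOrder` has no dependency on Lambda, it can be unit tested as an ordinary function.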
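And a second sketch for the pub-sub pattern mentioned in the list: a Lambda subscribed to a table's DynamoDB stream republishes each change to an SNS topic, which other services subscribe to in order to sync their own tables. The topic ARN environment variable is a hypothetical placeholder:

```typescript
// streamFanout.ts — subscribes to a DynamoDB stream and republishes each
// change to an SNS topic so other microservices can sync their own data.
import type { DynamoDBStreamEvent } from "aws-lambda";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});
const TOPIC_ARN = process.env.TOPIC_ARN!; // hypothetical env var

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    await sns.send(
      new PublishCommand({
        TopicArn: TOPIC_ARN,
        Message: JSON.stringify({
          eventName: record.eventName, // INSERT | MODIFY | REMOVE
          keys: record.dynamodb?.Keys,
          newImage: record.dynamodb?.NewImage,
        }),
      })
    );
  }
};
```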
If you need more information, you can refer to the following articles I have written on this:
Deploying Angular/React Apps in AWS
Full Stack Serverless Web Apps with AWS
Note: you can use the CloudFormation template in Deploying Angular/React Apps in AWS to automate the creation of S3 and CloudFront following best practices.

Related

Is it possible to have a multi-endpoint REST API on Google Cloud Functions? (AWS Lambda migration to GCF)

My company has been using AWS Lambda for many years to run our Spring Boot REST API. We are migrating to GCP and they want me to deploy our code to GCF the same way we did with AWS Lambda, but I am not sure that GCF works that way.
According to Google, Cloud Functions are only good for single endpoints and can only work as a web server using the Functions Framework.
Spring has a document that uses the GcfJarLauncher, but that is still in alpha and I can only get it to work for a single endpoint. Any additional functions I put into the code are ignored and every endpoint triggers the same function.
There were some posts here on SO that talked about using Functional Beans to map to multiple functions, but I couldn't fully get it working and my boss isn't interested in that.
I've also read of people putting the endpoint in the request payload and then mapping to the proper function, but we are not interested in doing that either.
TLDR/Conclusion:
Is it even possible to deploy our app to GCF or do we need to use Cloud Run (as Google suggests in my first link)?

Multiple ApolloServers needed to implement a gateway connecting to REST APIs?

I'm building a GraphQL gateway service, which merges multiple services into one graph, using Apollo/Node/Express and following the Apollo Federation model. Initially, most of the services I'll be connecting to are REST services.
In all of the examples I find (e.g. here), I see that the gateway project runs multiple instances of ApolloServer, one for every REST service plus one more for the gateway itself, and runs them all using a package like concurrently. Basically the gateway project runs n+1 ApolloServers. Having all of these servers running seems strange to me, but I'm pretty new to this whole ecosystem.
I'm not clear if this is just for demonstration purposes, or is this also how it's implemented and deployed in the real world?
I hope that those were just examples, and are not the expected pattern.
If you need multiple GraphQL services, each of those would be served as a separate domain graph application, in its own project. Then an additional service (the gateway) would consume all of those applications and expose a single unified GraphQL API.
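As a rough sketch of that shape (the subgraph names and URLs are hypothetical placeholders), the gateway itself is just one ApolloServer that composes the separately deployed domain graphs rather than running them all in one process:

```typescript
// gateway.ts — a single ApolloServer for the gateway; each domain graph
// (subgraph) is its own separately deployed service, not started here.
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";
import { ApolloGateway, IntrospectAndCompose } from "@apollo/gateway";

const gateway = new ApolloGateway({
  supergraphSdl: new IntrospectAndCompose({
    subgraphs: [
      // hypothetical, separately deployed domain graph services
      { name: "accounts", url: "https://accounts.internal/graphql" },
      { name: "orders", url: "https://orders.internal/graphql" },
    ],
  }),
});

const server = new ApolloServer({ gateway });
const { url } = await startStandaloneServer(server, { listen: { port: 4000 } });
console.log(`Gateway ready at ${url}`);
```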

How to migrate REST APIs to GraphQL Apollo Federation

Planning to migrate my PHP APIs to GraphQL using Apollo Federation. After a bit of research, I see it is done the following way:
My questions are:
Is there any better way to create the federated services so there isn't a separate layer (one for each REST API)? Maybe something close to the previous schema-stitching approach, where everything can sit in one place and be stitched together at the end (instead of a specific federated layer for each service).
If this is the recommended way, how do I deploy this infrastructure? From the diagram, does it mean I have 5 instances running to cover all of the services?
Is it recommended to run Gateway and Federated services all inside one instance (from diagram - 3 servers running in one instance)?
Federated services are great when you want to break up the monolithic structure of a non-federated Apollo Server implementation. They can be designed by following microservice best practices. Instead of blindly having one federated service per REST endpoint, you can have federated services based on the functionality each service is supposed to take care of; one service can call multiple REST endpoints. This gives you better control over scaling, securing, and managing services at the infrastructure level. A simple example is Amazon, where item-browsing hits will far outnumber buying transactions. In that case you could have one federated service that provides browsing data and another that manages transactions. You can then scale the browsing service out to multiple instances to handle user load and put additional security in place for the one handling transactions.
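A rough sketch of one such federated service fronting more than one REST endpoint, using Apollo's RESTDataSource (the schema, base URL, and paths are hypothetical, standing in for the existing PHP APIs):

```typescript
// catalogSubgraph.ts — one federated service covering a whole domain,
// calling several REST endpoints rather than mapping 1:1 to them.
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";
import { buildSubgraphSchema } from "@apollo/subgraph";
import { RESTDataSource } from "@apollo/datasource-rest";
import gql from "graphql-tag";

class CatalogAPI extends RESTDataSource {
  override baseURL = "https://legacy-php.example.com/"; // hypothetical

  getProduct(id: string) {
    return this.get(`products/${id}`); // REST endpoint #1
  }
  getReviews(productId: string) {
    return this.get(`reviews?product=${productId}`); // REST endpoint #2
  }
}

const typeDefs = gql`
  type Product @key(fields: "id") {
    id: ID!
    name: String
    reviews: [String]
  }
  type Query {
    product(id: ID!): Product
  }
`;

const resolvers = {
  Query: {
    product: (_: unknown, { id }: { id: string }, { catalog }: any) =>
      catalog.getProduct(id),
  },
  Product: {
    reviews: (p: { id: string }, _: unknown, { catalog }: any) =>
      catalog.getReviews(p.id),
  },
};

const server = new ApolloServer({
  schema: buildSubgraphSchema({ typeDefs, resolvers }),
});
await startStandaloneServer(server, {
  context: async () => ({ catalog: new CatalogAPI() }),
  listen: { port: 4001 },
});
```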
2 & 3. Yes you would need to have deploy all the components separately. I would recomend to have all the services in the same VPC cluster so that you don't have to worry about network layer security. If the services are deployed across multiple clusters, it will be adding handling firewall and https/tls for every request, which would cause unnecssaery delay becuase of network call. Although it would be in milliseconds but can be easily avoided.

AWS Lambda vs Elastic Beanstalk

I'm new to AWS.
I am going to develop a RESTful app which is going to be hosted on AWS.
I decided to use
Amazon S3 for static contents
Amazon Cognito User Pool for Authentication
Amazon DynamoDB as the database
I am confused about where my app should be hosted. I have 2 ideas for that.
AWS Lambda + API Gateway
Can I implement the entire app on it?
Elastic Beanstalk
Can I integrate all the above AWS services with it?
(Backend on .NET Core Web API 2.0)
Please guide me.
After a year and six months of experience working with the cloud, I can now give a proper answer to my own question.
Yes.
It is possible to use API Gateway + Lambda as the back end for the entire app, but then you have to manage most of the app logic from the front end, and you take on a risk there because the source code can be viewed by the public.
Keeping all your business logic in the client code is not good practice. And keeping all the logic in Lambda is also not easy or cost-effective: when you build a real-world app you will need thousands of functions, and to do one task you will have to call many functions (each with its own run time), so it will be very expensive.
The best solution is hosting the back end on Elastic Beanstalk and the front end on S3. If you have any heavy tasks, you can write Lambda functions for those.
Lambda is best for CPU-bound functions, not for holding all the application logic.
Since you might not be interested in managing the underlying system, you should opt for AWS Lambda + API Gateway.

Does Serverless Framework support any kind of multi-cloud load balancing?

Does Serverless Framework support the ability to deploy the same API to multiple cloud providers (AWS, Azure and IBM) and route requests to each provider based on traditional load-balancing methods (e.g. round robin or latency)?
Does Serverless Framework support this function directly?
Does Serverless integrate with global load balancers (e.g. Dyn or Neustar)?
Does Serverless Framework support the ability to deploy the same API to multiple cloud providers (AWS, Azure and IBM)
Just use 3 different serverless.yml files and deploy each function 3 times.
and route requests to each provider based on traditional load-balancing methods (e.g. round robin or latency)?
No, there is no such support for multi-cloud load balancing.
The serverless concept is based on trust: you trust that your cloud provider will be able to handle your traffic with proper scalability and availability. There is no multi-cloud model; a single cloud provider must be able to satisfy your needs. To achieve this, they must implement a proper load-balancing scheme internally.
If you don't trust your cloud provider, you are not thinking in a serverless way. Serverless means that you should not worry about the infrastructure that supports your app.
However, you can implement a sort of multi-cloud load balancing yourself.
When you specify a serverless.yml file, you must say which provider (AWS, Azure, IBM) will create those resources. Multi-cloud means that you need one serverless.yml file per cloud, but the source code (the functions) can be the same. When you deploy the same function to 3 different providers, you will receive 3 different endpoints to access them.
Now, which machine will execute the load balancing? If you don't trust that a single cloud provides enough availability, how will you decide who serves the load-balancing feature?
The only solution that I see is to implement this load balancing in your frontend code. Your app would know the 3 different endpoints and randomize the requests. If one request returns an error, the endpoint would be marked as unhealthy. You could also measure the latency of each endpoint and select a preferred provider. All of this in the client code.
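A minimal sketch of that client-side idea, assuming three hypothetical endpoints from deploying the same function to each provider:

```typescript
// multiCloudFetch.ts — naive client-side "load balancer" over the three
// endpoints produced by deploying the same function to three providers.
const endpoints = [
  "https://xxxx.execute-api.us-east-1.amazonaws.com/prod/hello", // AWS
  "https://my-app.azurewebsites.net/api/hello",                  // Azure
  "https://my-app.mybluemix.net/hello",                          // IBM
];
const unhealthy = new Set<string>();

export async function multiCloudFetch(init?: RequestInit): Promise<Response> {
  // Shuffle the healthy endpoints (a crude round-robin substitute).
  const candidates = endpoints
    .filter((e) => !unhealthy.has(e))
    .sort(() => Math.random() - 0.5);

  for (const url of candidates) {
    try {
      const res = await fetch(url, init);
      if (res.ok) return res;
      unhealthy.add(url); // mark the failing endpoint, try the next one
    } catch {
      unhealthy.add(url);
    }
  }
  throw new Error("all providers are unavailable");
}
```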
However, don't follow this path. Choose just one provider for production code. The SLA (service level agreement) usually guarantees high availability. If that's not enough, you should still stick with just one provider and keep some scripts in hand to easily migrate to another cloud in case of a mass outage of your preferred provider.
