Microservice architecture for projects with low budgets / little traffic

When implementing a microservice architecture and keeping services really small, you soon end up with many services, let's say 100 for simplicity. Deploying each service to its own AWS nano instance would cost roughly $500/month, a rather hefty sum for a smaller project or a hobby developer. What options do I have to reduce this price while still being able to have many services?
I thought about putting multiple services on one nano instance (maybe dockerized). I can comfortably fit ~5 services on one nano instance, so the price would be 5 times lower. The problem I have with this is that I have to manage a lot of things myself and it doesn't seem to scale well. Is there a better way, or alternatively a web service that does this for me?

Microservices as a tool
One thing you may want to think about is whether microservices are the right architecture for a small project with low traffic at all.
A microservices architecture is a tool for solving problems such as high-traffic challenges; with low traffic, a monolith may be the more cost-effective approach. Microservices also come at a cost (complexity across the board: design, deployment, service discovery, and inter-service relations).
Keep in mind that your microservices shouldn't be too small. As per best practice, each should cover a single business domain (https://martinfowler.com/articles/microservices.html); don't split a business domain into multiple microservices just for the sake of having microservices (unless this is a training project where you want to learn the tooling of a microservices architecture).
I am not sure how large a solution would have to be to face the challenge of 100 microservices, but maybe you should review their design and make sure that they are not too small :)
A nice, short article about this topic - Microservice Architectures: What They Are and Why You Should Use Them.
Lambda
Microservices aside, as @Ashan suggested, for low ongoing cost you may want to look at a function-as-a-service (Lambda) architecture and the Serverless Framework. Again, there is complexity (since you go one level deeper in splitting your deployment packages than with microservices) that is partially addressed by the Serverless Framework, but you get tools like AWS Lambda/Azure Functions/Google Cloud Functions to run your functions as a service and pay per use (actual use, not reservation as with EC2).
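To get a feel for the model, here is a minimal, hedged sketch of a Python Lambda handler (the event shape and the order-processing logic are made up for illustration); you are billed per invocation and execution time rather than for an idle instance:

```python
def lambda_handler(event, context):
    # AWS invokes this handler on demand; "orderId" is a hypothetical field
    # in the triggering event (e.g. from SQS, API Gateway, or a test invoke).
    order_id = event.get("orderId", "unknown")
    # ... the business logic for this one small service would live here ...
    return {"processed": order_id}
```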
Microservices with Docker and AWS ECS
If you want to stick to microservices, please look into Docker and Amazon ECS (Elastic Container Service). This will allow you to use AWS EC2 instances efficiently by running multiple microservices per instance. You may want to put an Application Load Balancer in front of AWS ECS to manage the traffic.

The AWS serverless stack will give you the lowest total cost of ownership for a microservices project.
It mainly involves AWS API Gateway and Lambda, where you pay only for actual usage (opex) rather than for reserved capacity up front (capex).
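For illustration, a hedged sketch of a Python Lambda handler sitting behind API Gateway's Lambda proxy integration, which expects the statusCode/headers/body response shape shown here (the greeting logic and query parameter are made up):

```python
import json

def lambda_handler(event, context):
    # API Gateway's Lambda proxy integration delivers the HTTP request in
    # "event" and expects this response structure back.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```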

Related

Can a micro-service interact with a downstream service through localhost origin?

Can a microservice interact with a downstream service through a localhost origin? All my services are running on the same server; is that a correct approach? I found that calling a downstream service by its domain name takes much longer than calling it via localhost. I was curious to know whether we can do it like this.
You're right, you can communicate with other services running on the same host via localhost. It's completely fine, and when you think about network round trips, it's beneficial.
But,
What if you want to scale the services?
What if you want to move one of the services to a different host?
Considering at least these scenarios, binding to a specific host is not worth it. The same applies if you are using the host's IP.
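One way to keep that flexibility is to treat the downstream address as configuration rather than code; a hedged Python sketch (the INVENTORY_BASE_URL variable and the /items path are invented for illustration):

```python
import os
import urllib.request

# The base URL comes from configuration, so the same code works whether the
# downstream service sits on the same host ("http://localhost:8081") or moves
# behind a DNS name ("http://inventory.internal") later on.
BASE_URL = os.environ.get("INVENTORY_BASE_URL", "http://localhost:8081")

def get_stock(item_id):
    with urllib.request.urlopen(f"{BASE_URL}/items/{item_id}/stock") as resp:
        return resp.read().decode()
```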
*I found that calling a downstream service by its domain name takes much longer than calling it via localhost.*
I see what you're saying.
A microservices architecture is not a silver bullet for software design and always comes with tradeoffs.
And regarding your deployment strategy, the Multiple Service Instances per Host pattern:
How are you going to handle services with different resource requirements?
Say, what if one of your services utilizes all of the host's resources?
What if you need to scale out one service independently?
How are you going to ensure the availability of your services?
...
There are many such questions you must consider before settling on a pattern for your microservices. It all depends on your requirements.
If your services are on the same server, you should use a message broker or a mechanism like gRPC to talk between your services, so it doesn't matter what your origin is. If you are using plain HTTP to communicate between your microservices, you don't gain any of the advantages of a microservices architecture and your architecture is flawed.
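As a rough sketch of the broker idea (assuming a RabbitMQ broker on localhost and the pika client; the queue name and message shape are made up, and gRPC would be an equally valid choice):

```python
import json
import pika

# Connect to a broker running on the same host; consumers on any host can
# subscribe to the same queue later without the publisher changing.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="order_events", durable=True)

channel.basic_publish(
    exchange="",
    routing_key="order_events",
    body=json.dumps({"orderId": 42, "status": "created"}),
)
connection.close()
```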
A microservice is a concept; it does not dictate where you deploy your applications or how they call each other. You may deploy your microservices on different virtual machines that are hosted on the same physical server. The whole point is that you need a reason for everything you decide to do in your architecture.
The first question is: why have you split your application into different microservices? Only to be able to put the word "microservice" on your architecture, or to gain better control over the business logic, scalability, and maintainability of the project?
These are important things you need to take care of when designing an application. Draw the big picture of your product and how it is going to be used. Which service/component is used most by customers? Does keeping it on the same server as other microservices cause performance issues? What happens if something goes wrong on that server and the whole application becomes unreachable?

How do Microservices enable CI/CD?

I've been reading a lot of articles that state microservices enable CI/CD. However, the articles don't explain how or why this is the case. It seems that you could continuously deploy a monolith as well once all of its automated tests pass.
Thank you!
There are many aspects of this.
It seems that you could continuously deploy a monolith as well once all of its automated tests pass.
A monolith is typically stateful, e.g. sessions are used for short-lived user state. Modern microservice architectures, on the other hand, typically follow the Twelve-Factor App principles and are typically deployed on e.g. Kubernetes or another cloud environment. Apps following the Twelve-Factor App principles and apps on Kubernetes are stateless, i.e. all user state must be handled outside the app. See https://12factor.net/processes
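As a small illustration of keeping user state outside the app, here is a hedged Python sketch assuming an external Redis session store and the redis-py client (the host name and key layout are made up):

```python
import redis

# Session data lives in an external store, so any instance of the app can
# serve any request and instances can be added or removed freely.
store = redis.Redis(host="sessions.internal", port=6379, decode_responses=True)

def save_cart(session_id, cart_json):
    # Keep the session for 30 minutes of inactivity, then let it expire.
    store.setex(f"session:{session_id}:cart", 1800, cart_json)

def load_cart(session_id):
    return store.get(f"session:{session_id}:cart")
```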
With stateless apps it is much easier to scale out to more instances, e.g. 5 instances of an app, and it is also easy to scale down to fewer instances, e.g. 2 instances.
When the app is stateless and runs in multiple instances, doing a "rolling deployment", i.e. updating one instance at a time from version 1 to version 2, is an easy process and built-in functionality in Kubernetes.
With all of the above in place, it is now much easier to implement Continuous Deployment than it was with large stateful monolithic apps.

How to design and build microservices in an AWS serverless architecture?

I'm totally new to the concepts of microservices and AWS serverless architecture. I have a project that I have to divide into microservices that should run on AWS Lambda, but I'm having difficulty with how to design it.
When searching I could not find useful documentation about how to divide and design microservices; all the docs I found either compare a monolithic app to a microservices app or describe deploying a microservice on AWS Lambda.
In my case I have to develop an ERP (Enterprise Resource Planning) system that has to manage clients, stocks, books, orders, and so on. So should I make a service for clients and a service for books... and then, if I notice a lot of dependency between two microservices, merge them into one?
And for the DB, is it good to use one DB (DynamoDB) for all microservices instead of a DB per service in this case (ERP)?
Any help is really appreciated.
If anybody has a useful document that can help me, I will be very thankful.
Thanks a lot.
I think the architecture of your data and services can depend on a few things:
Which data sources are used/available
What your requirements/desired functionalities are
Business logic or any other restrictions/concerns
In order to reduce the size of a service, we want to limit the reasons why an application or another service would access that service to as few as possible. This reduces the amount of overall maintenance and also gives you a lot of flexibility in how you deploy it.
For example: a service that transforms data from multiple sources and makes it available via an API can be split into an API backed by a data-processing service with a new, cleaner data source. This prevents over-reliance on large, older services and data, and makes it easier to integrate that newer, smaller service into your applications.
In your scenario, you may get away with having services for managing clients, books, and stocks separately, but it also depends on how your data sources are integrated as well as what services are already available to you. You may want to create other microservices or databases to help reduce the size and organize the data into the format you want.
Depending on your business needs, whether you combine two microservices or keep them separate can depend on different things too. Does one of these services have the potential to be useful for other applications? Is it dedicated to a specific project? Keeping services separate, small, and focused gives you room to expand or shrink things if needed. The same goes for data sources.
There are always many ways to approach it. Consider what your needs/problems are first before opting with a certain tool for creating solutions.
Microservices
These are simply small services that can be scaled and deployed independently.
AWS Serverless
Every application is different, so you may not find a single architecture that fits every application. Generally, a simple serverless application consists of a Lambda function, API Gateway, and a DB (SQL/NoSQL). Serverless is a great cloud-native choice: you get availability and scalability out of the box, and you can deploy your stuff very quickly.
How to design Serverless Application
There is no single right answer. You need to architect your system in a way that lets the individual microservices work together cohesively. In your case, books and stocks need to be separate microservices, which means they are separate Lambda functions. For the DB, DynamoDB is a good and powerful choice as long as you know how NoSQL works and the caveats around it. You need to think beforehand about the challenges of NoSQL: how will you partition the data, and would NoSQL still be a good choice if you need complex reporting? (There are patterns to get around that issue.) Since DynamoDB operates at the table level, each microservice should preferably have its own table that can be scaled independently; that makes more sense.
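To make the per-service-table idea concrete, here is a hedged Python/boto3 sketch; the table, region, and attribute names are made up and assume the books microservice owns its own DynamoDB table:

```python
import boto3

# Hypothetical table owned exclusively by the books microservice; other
# services would have their own tables and never touch this one directly.
dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")
books = dynamodb.Table("BooksService-Books")

books.put_item(Item={"bookId": "b-123", "title": "Domain-Driven Design", "stock": 4})
item = books.get_item(Key={"bookId": "b-123"}).get("Item")
print(item)
```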
What's the right architecture for my application?
Instead of looking for one right answer, I would strongly suggest reading about the individual components before making up your mind. There are tons of articles and blogs. If I were you, I would look into the following, in this order:
Microservices - why do we need them?
Serverless - In General
Event Driven architecture
SQL vs NoSQL
Lambda and DynamoDB and how they actually work
DevOps and how that would work in serverless
Patterns
Once you have a bit of understanding, you will be in a much better position to decide what suits you best.

Microservices using Service Fabric: where to place controllers

I have a micro-service project with multiple services in .NET Core. When it comes to placing the controllers, there are 2 approaches:
Place the controllers in respective Micro Services, with Startup.cs in each micro-service.
Place all controllers in a separate project and have them call the individual services.
I think the 1st approach will involve less coding effort but the 2nd one separates controllers from actual services using interfaces etc.
Is there a difference in terms of how they are created and managed in Service Fabric with these two approaches?
This is a very broad topic and can raise many points for discussion, because it all depends on preferences, experience, and tech stacks. I will add my two cents, but do not take it as a rule, just my view of both approaches.
First approach (APIs for each service isolated from each other):
The services expose their public APIs themselves, and you need a service discovery approach in place so clients can call each microservice; a simple one is using the reverse proxy to forward calls by service name (see the sketch after this list).
Each service and its APIs scale independently.
This approach is better to deploy individual updates without taking down other microservices.
This approach tends to have more code repetition to handle authorization, authentication, and other common aspects, so you will likely end up with shared libraries used by all services.
This approach increases the number of points of failure, which can be a good thing because each failure affects fewer services: if one API is failing, the other services won't be impacted (as long as the failure does not affect the whole machine, e.g. a memory leak or high CPU usage).
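For instance, in Service Fabric a client can reach a service by name through the cluster's reverse proxy. A hedged Python sketch of such a call (the application name "MyShop", the service name "OrdersService", and the API path are made up; 19081 is only the default reverse proxy port):

```python
import json
import urllib.request

# The reverse proxy resolves "MyShop/OrdersService" to whichever node is
# currently hosting the service, so the caller only needs the logical name.
url = "http://localhost:19081/MyShop/OrdersService/api/orders/42"
with urllib.request.urlopen(url) as resp:
    order = json.loads(resp.read())
print(order)
```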
The second approach (a single API that forwards calls to the right services):
You have a single endpoint and service discovery happens inside the API; the actual work is still handled by each service.
The API must scale for the combined load even if one service consumes far more resources than the others; only the backing services scale independently.
With this approach, adding or modifying API endpoints will likely mean updating both the API and the service, and taking down the API affects all services.
This approach reduces code duplication and lets you centralize many common aspects like authorization, request throttling, and so on.
This approach has fewer points of failure, but if one microservice goes down and a good share of the calls depend on it, the API will pile up connections and pending requests, which affects the other services and overall performance. If the API itself goes down, every service becomes unavailable. In the first approach, by contrast, that resilience concern is offloaded to the proxy or to the client.
In summary,
both approaches involve a similar amount of effort; the difference is where that effort is spent, so you should evaluate both and decide which one you would rather maintain. Don't consider just the code in the comparison, because code has very little impact on the overall solution compared with other aspects like release, monitoring, logging, security, and performance.
In our current project we have a public-facing API. We have several individual microservice projects, one for each domain. Being individual allows us to scale according to the resources each microservice uses. For example, we have an imaging service that consumes a lot of resources, so scaling it is easier. You also get the chance to deploy them individually, and if any one service fails it doesn't break the whole application.
In front of all the microservices we have an API gateway that handles all the authentication, throttling, versioning, health checks, metrics, logging, etc. We have interfaces for each microservice and keep the request and response models separate for each context. There is no business logic in this layer, and you also have the chance to aggregate responses where several services need to be called.
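As a rough illustration of that aggregation step, here is a hedged Python sketch (the internal service URLs and response shapes are invented); the gateway fans out to two services in parallel and merges the results into one response:

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical internal endpoints of two microservices behind the gateway.
ORDER_URL = "http://orders.internal/api/orders/42"
CUSTOMER_URL = "http://customers.internal/api/customers/7"

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

with ThreadPoolExecutor() as pool:
    order_future = pool.submit(fetch, ORDER_URL)
    customer_future = pool.submit(fetch, CUSTOMER_URL)

# One combined payload goes back to the client instead of two round trips.
aggregated = {"order": order_future.result(), "customer": customer_future.result()}
print(json.dumps(aggregated))
```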
If you would like to ask anything about this structure please feel free to ask.

Splitting monolith into microservices

I have an existing web service that supports ordering and has multiple operations (approximately 20). It is a single web service that supports the ordering function. It interacts with multiple other services to provide the ordering capability.
Since there is a lot of business functionality within this app and it is supported by a 10-member team, I believe it is a monolith (though I assume there is no hard and fast rule for defining what a monolith is).
We are planning to deploy the application in a Cloud Foundry environment, and we plan to split the app into 2-3 microservices, primarily to enable them to scale independently.
The first few APIs, which enable searching for a product, typically get far more hits, whereas the API that supports actual order submission receives less than 5% of the hits. So the product search API should have significantly more instances than the order submission API.
Though I am not sure whether we should split based on sub-domains (which I have read should be the basis), we are thinking of splitting based on the call sequence as explained above.
I have also read that microservices should be choreographed and not orchestrated. However, in order to ensure our existing consumers are not impacted, I believe we should expose an API layer that orchestrates the calls to these microservices. Is providing an API gateway the normal approach to ensure consumers do not end up calling multiple microservices, while also providing a layer of abstraction?
This seems to be orchestration more than choreography - though I am not hung up on the theoretical aspects, I would like to understand the different solutions that are pursued for this problem in the enterprise world.
The Benefits of Microservices
Deploy & Scale Independently
Easier to 'Reason About'
Separation of Concerns
Single Responsibility
(Micro)Service-Oriented Architecture
I would suggest splitting your services based on domain. This is a logical and efficient approach which makes it an easy starting point. Your monolithic package structure may already be organized in this manner, which simplifies the refactoring even more.
API Gateway
The typical Spring Cloud approach would be to use a Zuul proxy on the edge of your network, which receives requests from your clients (web, mobile, etc.) and routes them to the microservices behind your firewall. The client only talks to a single domain, and the proxy handles CORS out of the box.
Resources:
API Gateway Pattern
Routing and Filtering
