I have to write a microservice that is responsible for transferring data across clouds, i.e. S3 to S3, S3 to Google Cloud Storage, etc. I was thinking of having a separate controller for each implementation. Is that good practice, or should I create a single controller that calls different service implementations based on the request payload?
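For context, the single-controller option I have in mind would look roughly like this (just a TypeScript sketch; all the names are invented):

```typescript
// Sketch of the "single controller" option: one entry point that picks a
// transfer implementation based on the payload. All names are invented.

interface TransferRequest {
  source: "s3" | "gcs";       // where the data currently lives
  destination: "s3" | "gcs";  // where it should be copied to
  sourcePath: string;
  destinationPath: string;
}

interface TransferService {
  transfer(request: TransferRequest): Promise<void>;
}

class S3ToS3TransferService implements TransferService {
  async transfer(request: TransferRequest): Promise<void> {
    // would call the AWS SDK here
  }
}

class S3ToGcsTransferService implements TransferService {
  async transfer(request: TransferRequest): Promise<void> {
    // would call the AWS SDK plus the Google Cloud Storage client here
  }
}

// The single controller looks up the right implementation from the payload.
class TransferController {
  private readonly services = new Map<string, TransferService>([
    ["s3->s3", new S3ToS3TransferService()],
    ["s3->gcs", new S3ToGcsTransferService()],
  ]);

  async handle(request: TransferRequest): Promise<void> {
    const service = this.services.get(`${request.source}->${request.destination}`);
    if (!service) {
      throw new Error(`Unsupported transfer: ${request.source} -> ${request.destination}`);
    }
    await service.transfer(request);
  }
}
```

The alternative would be one controller per pair (S3-to-S3, S3-to-GCS, ...), each bound to its own route.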
As part of my project, I'd like to use microservices. The application is a store website where the admin can add products and the user can order and buy them.
I envision implementing four services: admin service, user service, product service, and order service.
I had trouble handling data between multiple services, but I solved it by duplicating some of the necessary data using message brokers.
I can use this solution between the product, user, and order services, because I only need some of the data, not all of it.
Now, my question is about handling the admin service, because in this service I need access to all of the data; for example, the admin should have a list of users and the ability to add new products or update them.
How can I handle data between these services and the admin service?
Should I duplicate all of the data inside the admin service?
Should I use a REST API?
No, that's the wrong approach; it looks like an attempt to work around the real problem. In general, duplication on that scale is an anti-pattern, especially in the case you describe.
The way you are thinking about the admin service is wrong:
"because in this service I need to access all of the data"
I don't think you need such a service at all. Controlling who can access which data should be handled by an identity server (OIDC/OAuth), which is a separate service that protects the endpoints.
For example, the product service provides: (1) return the product list, (2) return an individual product, (3) create a product. The first two can be accessed by both users and admins, but the third must be restricted to admins. One of the identity server's duties is to identify the user when they interact with (log in to) the services.
ADMIN Scenario
The user client requests the create-product endpoint (on a service, e.g. the product service).
The client app (the front-end app) is configured against the identity server; it sees that the required identity tokens are missing and redirects to the identity server's login.
NOTE: the client app itself is also identified (client authentication), which I am skipping here.
The user logs in and gets the required tokens, based on their claims, roles, etc.
The user client requests the create-product endpoint again, with the tokens included in the request header.
The endpoint (product service) receives the request and checks the header (the services are also configured against the identity server and the expected user claims).
It reads the user's claims.
Create-product requires the admin role: if it is present, the request goes through; otherwise access is denied.
This flow can be implemented with IdentityServer4; there are several alternatives, and you can also implement it yourself using OAuth and OIDC protocol libraries.
So the admin simply calls the relevant service directly, rather than getting data through a separate service built for that purpose.
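To make the role check concrete, the create-product endpoint could look roughly like this (a TypeScript/Express sketch, assuming some earlier middleware has already validated the token issued by the identity server and attached its claims to the request as `user`):

```typescript
import express, { NextFunction, Request, Response } from "express";

// Shape of the claims we assume the auth middleware extracted from the token.
interface Claims {
  sub: string;
  roles: string[];
}

// Only lets a request through when the token carries the required role.
function requireRole(role: string) {
  return (req: Request, res: Response, next: NextFunction) => {
    const user = (req as Request & { user?: Claims }).user;
    if (!user || !user.roles.includes(role)) {
      return res.status(403).json({ error: "insufficient role" });
    }
    next();
  };
}

const app = express();
app.use(express.json());

// Readable by any authenticated user, admin or not.
app.get("/products", (_req, res) => {
  res.json([]);
});

// Creating a product requires the "admin" role from the token claims.
app.post("/products", requireRole("admin"), (req, res) => {
  // create the product here
  res.status(201).json(req.body);
});
```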
Communication between services:
The most difficult part of microservices is wiring them up, and the wiring is a direct consequence of your design (I recommend a deep study of Domain-Driven Design).
Asynchronous communication:
To avoid coupling between services, you mostly use asynchronous communication, where you pass events through brokers such as RabbitMQ, Kafka, Redis, etc. In this style, the source service that sends the event does not care about a response and does not wait for one; it simply stays ready to listen for any resulting events. For example:
the inventory service creates the item
123|shoe-x22|22units
and fires an event with the data 123|shoe-x22 (a duplicate of some fields, or maybe just the id) so that the product service can create its own record, but it does not wait for a response from the product service saying whether the creation succeeded or not.
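A minimal sketch of that fire-and-forget publish, assuming RabbitMQ through the amqplib package (the exchange name and event shape are invented):

```typescript
import amqp from "amqplib";

// Fire-and-forget: the inventory service publishes the event and moves on.
// It does not wait for the product service to confirm anything.
async function publishItemCreated(): Promise<void> {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();

  const exchange = "inventory.events"; // invented name
  await channel.assertExchange(exchange, "fanout", { durable: true });

  const event = { id: "123", sku: "shoe-x22", units: 22 };
  channel.publish(exchange, "", Buffer.from(JSON.stringify(event)));

  await channel.close();
  await connection.close();
}
```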
As you can see, this scenario is unreliable when faults occur and you need to handle that, so you should study the CAP theorem, the Saga pattern, and circuit breakers.
Synchronous communication:
In this case the calling service insists on getting a response back immediately, which pushes the services toward tighter coupling. If you need performance you can use gRPC; otherwise a simple API call to the relevant service is enough. For gRPC I recommend libraries like MassTransit, which can also be used to implement it with minimal coupling.
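A sketch of the synchronous style, assuming a plain HTTP call with a timeout (the URL and response shape are invented):

```typescript
// The order service calls the product service and has to wait for the answer
// before it can continue, which is where the extra coupling comes from.
async function getProduct(productId: string): Promise<{ id: string; name: string }> {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 2000); // fail fast

  try {
    const response = await fetch(`http://product-service/products/${productId}`, {
      signal: controller.signal,
    });
    if (!response.ok) {
      throw new Error(`product service returned ${response.status}`);
    }
    return (await response.json()) as { id: string; name: string };
  } finally {
    clearTimeout(timeout);
  }
}
```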
Some requests need data from multiple services:
If you are in that situation you have two options.
Most microservice architectures use an API gateway (e.g. nginx, Ocelot, etc.), which provides reverse proxying, load balancing, SSL termination, and so on. One of its abilities is to merge multiple responses for a single request, but it only merges them; it does not change the data structure of the response.
If you need to return a specific response structure, you can create an aggregator service that itself calls the other two services, gathers the data, wraps it in the desired format, and returns it.
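A rough sketch of such an aggregator, assuming Express and two invented downstream URLs:

```typescript
import express from "express";

const app = express();

// The aggregator calls the two downstream services itself, reshapes the
// combined data into the structure the client wants, and returns it.
app.get("/orders/:id/details", async (req, res) => {
  try {
    const [orderResponse, productResponse] = await Promise.all([
      fetch(`http://order-service/orders/${req.params.id}`),
      fetch(`http://product-service/products?orderId=${req.params.id}`),
    ]);
    const order = await orderResponse.json();
    const products = await productResponse.json();

    res.json({ order, products });
  } catch {
    res.status(502).json({ error: "a downstream service is unavailable" });
  }
});

app.listen(8080);
```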
So, in the end, Domain-Driven Design is still the key. I think I have talked too much already; I hope this helps you out.
Soon I'll start a project based on a microservice architecture, and one of the components I need to develop is a worker service (or daemon).
I have some conceptual questions about this.
I need to create a worker service that sends emails and SMS. This worker service needs data in order to send the emails. I also need to create a microservice that allows users to create the lists of emails to be sent by this worker service. But both of them need to consume data from the same database.
Should my worker service call a microservice endpoint to get the data, or is it OK for the worker service to have a connection to the same database as my microservice?
Or is it better for my worker service to also have the API endpoints that let users create new email lists, add or modify configuration, and all the other functionality I need to implement? That sounds like a good idea, but then I'd have a component with two responsibilities, so I have some doubts about it.
Thanks in advance.
Two microservices sharing the same database is usually a bad idea, because each service should be the owner of its own data model and nothing else should access it directly. If a service needs data from another service's domain, it should get it by calling the owner via an API, or by replicating the model read-only in its own database and updating it through events, for example.
However, I think that for your current use case the best option is to provide the worker with all the information it needs to send an email (address, subject, body, attached files, ...), so the worker's only responsibility is to send emails, not to fetch the information.
It could also provide the functionality to send emails in batches. In the end, the service still has a single responsibility, "to send emails", but it can offer different ways to do it (single emails, batches, with attached files, etc.).
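As a sketch of what that could look like, assuming the email requests arrive through a queue (RabbitMQ via amqplib; the queue name and message shape are invented):

```typescript
import amqp from "amqplib";

// Everything the worker needs travels in the message itself, so the worker
// never has to reach into another service's database.
interface SendEmailMessage {
  to: string;
  subject: string;
  body: string;
  attachments?: { filename: string; contentBase64: string }[];
}

async function startEmailWorker(): Promise<void> {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();

  const queue = "emails.to-send"; // invented queue name
  await channel.assertQueue(queue, { durable: true });

  await channel.consume(queue, (msg) => {
    if (!msg) return;
    const email: SendEmailMessage = JSON.parse(msg.content.toString());
    // Hand the message to whatever email provider you use (SMTP, SES, ...).
    console.log(`sending "${email.subject}" to ${email.to}`);
    channel.ack(msg);
  });
}
```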
I understand you can use the AWS API Gateway to allow developers to create applications that interact with AWS backend services (e.g. DynamoDB).
The basic flow is:
Create the resource (e.g. DynamoDB table)
Create a Lambda function
Create an HTTP API
Create Routes
Create an integration
Attach integration to routes
But what are the options for the API? What kind of operations can be done on DynamoDB (or whatever resource you’re working with)?
Usually when a REST API is exposed through a gateway there is a defined set of endpoints, so developers know what they can build with the API, like Swagger documentation.
It would be great to know all the things that can be done via the API with DynamoDB, S3, CloudWatch, etc. Is there a master list somewhere?
Or is the idea that you can do anything inside the Lambda function that is supported by the aws-sdk?
In that case, is there a list of available options for the aws-sdk?
Am I thinking about this the right way?
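In other words, is the idea that the Lambda behind a route is free to do something like this (a rough sketch using the AWS SDK for JavaScript v3; the table name and route are made up)?

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Handler wired to a GET /items/{id} route on the HTTP API.
export const handler = async (event: { pathParameters?: { id?: string } }) => {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: "missing id" }) };
  }

  const result = await docClient.send(
    new GetCommand({ TableName: "ItemsTable", Key: { id } })
  );

  return {
    statusCode: result.Item ? 200 : 404,
    body: JSON.stringify(result.Item ?? { error: "not found" }),
  };
};
```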
I have been studying microservices and serverless solutions and am playing with an Angular front end hosted on S3 and Lambda functions that talk to various DynamoDB tables via API Gateway on AWS.
Every example and video I read or watch uses simple CRUD microservices as part of a simple 'todo' application or similar. My problem is: where does the business logic sit? If I'm building a complex application, I don't want all my business logic in my front-end Angular application. Or do I? I could build an application API which in turn calls the CRUD microservices, but that feels like a monolithic approach.
I appreciate there may not be a definitive answer but can anybody advise a novice on best practice?
There are several best practices I follow in designing serverless microservices:
Start with only a few microservices (the fewer the better up front, unless you know exactly what the service separation should be; delay the decision to split).
Separate the business logic from the API, and use the handler like a controller in MVC that only invokes the business logic (this also helps you unit test the logic without depending on Lambda); see the sketch after this list.
It's not necessary to write only simple CRUD in your API; it depends on your domain and the business logic required. (But don't build another monolith by failing to separate the code into different services. Several AWS service limits will also give you some guidance on how many endpoints a service should have, etc.)
Apply the design patterns available for microservices (e.g. if you want to sync databases between microservices, use the pub-sub pattern with SNS, DynamoDB Streams, and Lambda).
Put most of the presentation logic in the Angular app.
Use CloudFront as a proxy and a CDN to avoid CORS issues.
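Here is a minimal sketch of the handler-as-controller point above, keeping the business logic as a plain function (the names and the discount rule are invented; in a real project the logic would sit in its own module):

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Business logic: a plain function with no Lambda types in its signature,
// so it can be unit tested directly without invoking Lambda.
export interface Order {
  id: string;
  total: number;
}

export function calculateDiscount(order: Order): number {
  return order.total > 100 ? order.total * 0.1 : 0;
}

// The Lambda handler acts only as a thin "controller": parse the input,
// call the business logic, shape the HTTP response.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const order: Order = JSON.parse(event.body ?? "{}");
  return {
    statusCode: 200,
    body: JSON.stringify({ discount: calculateDiscount(order) }),
  };
};
```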
If you need more information you can refer to the following articles I have written on this:
Deploying Angular/React Apps in AWS
Full Stack Serverless Web Apps with AWS
Note: you can use the CloudFormation template in Deploying Angular/React Apps in AWS to automate the creation of S3 and CloudFront with best practices.
We are evaluating a move to microservices. Each microservice would be its own project, developed in isolation. During planning, we have determined that some of the microservices will communicate with each other via REST calls, pub/sub, or messaging (i.e. an order service needs product information from the product service).
If a microservice depends on retrieving data from another microservice, how can it be run in isolation during development? For example, what happens when your order service requests product details, but there is nothing to answer that request?
What you probably need is a stub REST service. Create a web app that accepts the expected output through a path that is not part of the public API; when you invoke the public API, it returns what it was just given.
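For example, a minimal stub could look like this (a TypeScript/Express sketch; the paths, port, and canned response are invented):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Not part of the public API: POST here to set what the stub should answer
// with for subsequent calls.
let cannedResponse: unknown = { id: "123", name: "stub product" };
app.post("/_stub/products", (req, res) => {
  cannedResponse = req.body;
  res.sendStatus(204);
});

// The public API that the order service calls during development: it simply
// returns whatever was configured above.
app.get("/products/:id", (_req, res) => {
  res.json(cannedResponse);
});

app.listen(8081, () => console.log("product stub listening on 8081"));
```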
If a microservice depends on retrieving data from another microservice, how can it be run in isolation during development?
It should always be temporally isolated from other services, during development and in production as well.
For example, what happens when your order service requests product details, but there is nothing to answer that request?
This is where a design flaw reveals itself: the order service should not request product details from another service. The product details should be carried in the message (event) that the order service subscribes to. The order service should receive this message asynchronously via the publish-subscribe pattern and save it in its own database. As a result, data about the product will be stored in two places.
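As a sketch of what that subscription could look like (RabbitMQ via amqplib, with an in-memory map standing in for the order service's own database; all the names are invented):

```typescript
import amqp from "amqplib";

// Event published by the product service: it already carries the details the
// order service needs, so no synchronous call back to the product service.
interface ProductUpdatedEvent {
  productId: string;
  name: string;
  price: number;
}

// Stand-in for the order service's own table of product copies.
const localProductCopy = new Map<string, ProductUpdatedEvent>();

async function subscribeToProductEvents(): Promise<void> {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();

  const queue = "order-service.product-updates"; // invented name
  await channel.assertQueue(queue, { durable: true });

  await channel.consume(queue, (msg) => {
    if (!msg) return;
    const event: ProductUpdatedEvent = JSON.parse(msg.content.toString());
    // Duplicate (save) the data in the order service's own storage.
    localProductCopy.set(event.productId, event);
    channel.ack(msg);
  });
}
```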
Please consider reading this series of articles about microservices for more details. But in a nutshell: your services should be temporally decoupled, so that when your product service is down, the order service can continue its operations without interruption. This is the key thing to understand about good distributed systems design in general.