We want to show alternate products for a given product, as most e-commerce websites do. In our case, we need to fetch data from multiple microservices.
Products - stores all product information.
Prices - in our case, prices are complex and depend on the user's location and other parameters, so we made them a separate microservice.
Reviews - manages ratings and reviews for a product.
The end result will be a List<AlternateProduct>, where each item has an image, a description, a rating out of 5, and a number of reviews.
In a microservice architecture, what is the right place to compose a response from multiple microservices?
Approach 1:
The MVC/REST API calls the API Gateway.
The API Gateway makes async calls to all the microservices.
The responses are returned to the MVC/WebAPI layer, where the composition of the response is performed.
Approach 2:
The MVC/REST API calls the API Gateway.
The API Gateway makes an async call to the Products microservice.
The Products microservice calls the other microservices, performs the composition, and returns the List<AlternateProduct>.
Please help me decide!
In this case I would go for Approach 1, since you can download the list of Products you need and then run two other requests in parallel for downloading Prices and Reviews.
After you receive the responses from all three requests, you build the model and return it.
I think the API Gateway should be smart enough to make calls to the different services and build the result it needs to return.
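A rough sketch of that composition (the service URLs, paths, and response shapes are placeholders, not a real implementation):

// Sketch of Approach 1: fetch the products first, then fan out to
// Prices and Reviews in parallel and compose the final model.

interface AlternateProduct {
  id: string;
  imageUrl: string;
  description: string;
  price: number;        // assumed field, for illustration
  rating: number;       // out of 5
  reviewCount: number;
}

async function getAlternateProducts(productId: string): Promise<AlternateProduct[]> {
  // 1. We need the alternate products first, because their ids drive the other calls.
  const products: { id: string; imageUrl: string; description: string }[] =
    await fetch(`http://products-service/products/${productId}/alternates`)
      .then(r => r.json());

  const ids = products.map(p => p.id).join(',');

  // 2. Prices and Reviews are independent, so request them in parallel.
  const [prices, reviews] = await Promise.all([
    fetch(`http://prices-service/prices?productIds=${ids}`).then(r => r.json()),
    fetch(`http://reviews-service/summaries?productIds=${ids}`).then(r => r.json()),
  ]);

  // 3. Compose the final model from the three responses.
  return products.map(p => ({
    ...p,
    price: prices[p.id] ?? 0,
    rating: reviews[p.id]?.averageRating ?? 0,
    reviewCount: reviews[p.id]?.reviewCount ?? 0,
  }));
}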
I have a monolith application that uses one database, and my company decided to rewrite the application and use microservices in the backend.
At this time, we decided NOT to split the database, because other applications and processes use it and it would take two years to change.
The difficult part of the process is decomposing the system and identifying the right microservices.
I'll try to explain our system, starting with the UI. Please read carefully, because I am trying to explain it in detail.
The system displays stock market data. Companies, funds, and fund managers in the market post daily reports about their activities: status updates, information for investors, and more.
"breaking announcement" page
displays a list of today's priority reports. Each row contains the subject from the pdf document (the report) that the company is publishing and the company that belongs to the report:
When the user clicks on the row, we redirect to "report page" and which contains the report details:
In the database, we have entities such as report, company, company_report, event, public_offers, upcoming_offering, and more.
So to get the list, we run an inner join query like this:
SELECT ...
FROM report r
INNER JOIN company_report cr ON r.reportid = cr.reportid
INNER JOIN company c ON cr.company_cd = c.company_cd
WHERE ...
Most of our server endpoints do not change anything; they are only used to retrieve data.
So I'll create the endpoint /reports/breaking-announcement to get the list, and it returns objects like this:
[{ reportId, subject, createAt, updateAt, pdfUrl, company: { id, name } }]
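Roughly like this, as a sketch (Express-style; db stands for whatever query helper we use, and the WHERE clause is a guess at what "today's priority reports" means):

import express from 'express';

// Placeholder for whatever data-access layer is actually used.
declare const db: { query(sql: string, params?: unknown[]): Promise<any[]> };

const app = express();

app.get('/reports/breaking-announcement', async (_req, res) => {
  const rows = await db.query(`
    SELECT r.reportid, r.subject, r.createat, r.updateat, r.pdfurl,
           c.company_cd, c.name
    FROM report r
    INNER JOIN company_report cr ON r.reportid = cr.reportid
    INNER JOIN company c ON cr.company_cd = c.company_cd
    WHERE r.priority = 1 AND r.createat >= CURRENT_DATE`);

  // Map the flat join rows into the nested response shape.
  res.json(rows.map(row => ({
    reportId: row.reportid,
    subject: row.subject,
    createAt: row.createat,
    updateAt: row.updateat,
    pdfUrl: row.pdfurl,
    company: { id: row.company_cd, name: row.name },
  })));
});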
The "today's company reports" page acts like the "breaking announcement" page, but it displays all the reports from today (not necessarily priority ones). Disclosures are reports.
On this page, we also have a search to get all reports by criteria, for example to get reports by company name. To support that we have an autocomplete, so the user types the company name or ID.
In order to do that, we think there should be an API endpoint /companies/by-autocomplete whose response will be [{ companyId, companyName, isCompany }].
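As a sketch, reusing the app and db placeholders from above (the LIKE query, parameter style, and column names are guesses based on the entities described):

app.get('/companies/by-autocomplete', async (req, res) => {
  const term = String(req.query.q ?? '');
  // Match on either the company name or its code.
  const rows = await db.query(
    'SELECT company_cd, name FROM company WHERE name LIKE $1 OR company_cd LIKE $1 LIMIT 10',
    [`${term}%`]
  );
  res.json(rows.map(r => ({
    companyId: r.company_cd,
    companyName: r.name,
    isCompany: true,
  })));
});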
The ETF page is the same as before, but this time we display the funds' reports (not the companies' reports).
The list contains the fund name and the subject of the report. Each click on a row leads to the report details page (the same page).
On this page we have a search by criteria such as date-from, date-to, and the name or ID of the fund via autocomplete (the endpoint /funds/by-autocomplete returns [{ fundId, fundName, ... }]).
The foreign ETF page is the same as before, a list of items. Each item is like before:
<fund name>
<subject of the report>
The query is different.
Okay, this was a very long description. Thank you for reading.
Now I want to identify the microservices for this application.
I ended up with:
Report microservice - responsible for getting and handling all the reports in the system.
It has endpoints like getAll, getById, and specific queries like getBreakingAnnouncement, getCompanyTodayReports, getFunds, getForeignFunds. The report microservice will make a request to the company or funds microservice to join in the company data and build the response (see the sketch after this list).
Company microservice:
handles all company data, with endpoints such as getAll, getByIds (for the report service), and getByAutocomplete.
Funds microservice:
handles all funds data, with endpoints such as getAll, getByIds (for the report service), and getByAutocomplete.
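A rough sketch of that join-in-code in the report microservice (the by-ids endpoint and all names are illustrative):

interface ReportRow { reportId: string; subject: string; pdfUrl: string; companyId: string; }
interface Company { id: string; name: string; }

// The report service joins company data in code: one batched call to the
// company service instead of a SQL join.
async function withCompanies(reports: ReportRow[]) {
  const ids = [...new Set(reports.map(r => r.companyId))];
  const companies: Company[] = await fetch(
    `http://company-service/companies/by-ids?ids=${ids.join(',')}`
  ).then(r => r.json());
  const byId = new Map(companies.map(c => [c.id, c] as const));

  return reports.map(r => ({
    reportId: r.reportId,
    subject: r.subject,
    pdfUrl: r.pdfUrl,
    company: byId.get(r.companyId) ?? null,
  }));
}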
There are other services, such as a notification service or an email service, but those are not business services. I want to split up my business logic into microservices in order to deploy and maintain them easily.
I'm not sure I am decomposing this correctly; maybe I am. But does it fit the microservice ideas? Does it fit the pattern "Decompose by business capability"? If not, what are the business capabilities in my system?
I don't think a query-oriented decomposition of your current application monolith will lead to a good microservice (MS) design. Two of your proposed microservices have the same endpoint query API, which suggests to me that you are viewing your first-generation microservices as just entity servers.
Your idea to perform joins across MS query operations indicates these first-gen "microservices" are closely coupled and hence fall short of a genuine MS architecture.
One technique to verify an MS design is to ask yourself, "how would the whole system cope if one MS were unavailable for 3 minutes?". Solving that design challenge leads down a path towards decoupled, message-based interactions between the microservices. This in turn leads to interactions between microservices being expressed as business operations, where one MS raises messages that trigger a mutation in the state of another MS.
Maybe you should reduce the scope of your MS ambitions and instead look at schema stitching in GraphQL. Reading between the lines of your question, I think a more realistic first step towards a distributed system would be to create specialised query services, each with a GraphQL endpoint.
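For example, a minimal stitching sketch with @graphql-tools (the sub-schemas are local stubs here; in practice each would sit in front of one of your query services, and all type and field names are illustrative):

import { makeExecutableSchema } from '@graphql-tools/schema';
import { stitchSchemas } from '@graphql-tools/stitch';

// Stub sub-schema for the report query service.
const reportSchema = makeExecutableSchema({
  typeDefs: `
    type Report { reportId: ID! subject: String pdfUrl: String companyId: ID! }
    type Query { breakingAnnouncements: [Report!]! }
  `,
  resolvers: { Query: { breakingAnnouncements: () => [] /* read from report storage */ } },
});

// Stub sub-schema for the company query service.
const companySchema = makeExecutableSchema({
  typeDefs: `
    type Company { id: ID! name: String }
    type Query { companiesByIds(ids: [ID!]!): [Company!]! }
  `,
  resolvers: { Query: { companiesByIds: () => [] /* read from company storage */ } },
});

// One stitched gateway schema; clients query it as a single graph.
export const gatewaySchema = stitchSchemas({
  subschemas: [{ schema: reportSchema }, { schema: companySchema }],
});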
At this time, we decided NOT to split the database, because other applications and processes use it and it would take two years to change.
I'll stop you right here. In the general case, a shared database is a huge antipattern in a microservices architecture and should be avoided as much as possible. There are multiple problems with it: less transparent dependencies between services, which can cause high coupling with all the consequences for development and deployment; an increased chance of eventually ending up with a distributed monolith instead of microservices; and so on.
Other applications and processes using it should not stop you from moving away from it; there are ways to mitigate that. You sync data between the services and the "legacy" database, asynchronously, using basically the same approaches you will use between your microservices, for example transaction log tailing with something like Debezium. This has its own costs, but I would argue it is usually better to pay them upfront than to keep paying ever bigger interest on the tech debt.
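For example, a rough sketch of a service keeping its own store in sync by tailing Debezium change events from Kafka with kafkajs (the topic name and event envelope depend on your connector configuration; the local write is a placeholder):

import { Kafka } from 'kafkajs';

declare function upsertIntoLocalStore(row: unknown): Promise<void>; // placeholder local write

const kafka = new Kafka({ clientId: 'report-service', brokers: ['kafka:9092'] });
const consumer = kafka.consumer({ groupId: 'report-service-sync' });

export async function run() {
  await consumer.connect();
  // Debezium topic naming is typically <server>.<schema>.<table>; adjust to your connector.
  await consumer.subscribe({ topic: 'legacydb.public.report', fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      const event = JSON.parse(message.value.toString());
      // Debezium puts the new row state in "after"; the exact envelope shape
      // depends on your converter settings.
      const row = event.payload?.after ?? event.after;
      if (row) await upsertIntoLocalStore(row);
    },
  });
}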
I ended up with: ....
I would argue that this split looks more like decomposition by subdomain than by business capability. That can actually be quite fine, and it also suits a microservices architecture.
Based on your description, I see at least the following business capabilities in your system:
View (manage?) breaking announcements
View (manage?) reports
Search (reports?)
Potentially "today's reports" and "Funds reports" can be considered as separate business capabilities.
I want to split up my business logic into microservices in order to deploy and maintain them easily.
Then again, I highly recommend reconsidering the decision not to move away from the shared database.
I'm not sure I am decomposing this correctly
Without an overview of the whole system, including the amount of data, the data flows, the resources and competences available in the development teams, the amount of incoming new business requirements, the potential vectors of change, etc., it is hard to tell.
P.S.
Note that despite the popularity of the microservices architecture, going full-blown microservices is not always the right solution for a concrete project. If you have a quite small team and/or do not handle high loads or large amounts of data with various access patterns, then you potentially do not need microservices. You can still leverage a lot of the approaches used in the microservices architecture, though.
Let's say I have an API Gateway for third parties to create orders in my system. As part of order creation I need to validate that the request model I have been given is correct, not just statically but by checking that the foreign keys are valid: that the product IDs in the order are valid, and that the account ID is valid. If not, I want to return a 400 to let the caller know they have passed an erroneous request.
What I would expect to do is create an orders::createOrder Lambda function, which would make parallel calls to products::listProducts, accounts::listAccountsForCustomer, and other microservices to retrieve the information needed for validation, before I am happy to create the order in the system. This validation needs to happen synchronously, as it's a request/response from a third party to create the order.
I would usually want the logical domains (customers, products, orders, accounts) to be separate microservices, and I usually have some logic in an API Gateway layer for orchestration and mapping to the microservices below. I've been reading that calling Lambda from Lambda is a bad idea.
How do I correctly model this on serverless?
For your case it's best to keep all this logic within one Lambda. Splitting it into multiple smaller functions adds latency, so you get a worse user experience, and it multiplies your cost since you have multiple functions running. You could also try Step Functions if you want such a split, but they are also pretty expensive, and I don't recommend them for such a simple case.
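Something like this, as a sketch (the internal URLs and response shapes are placeholders; the point is that one function does the parallel lookups itself):

import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

export const createOrder = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const order = JSON.parse(event.body ?? '{}');

  // Run the independent validation lookups in parallel inside the same function.
  const [products, accounts] = await Promise.all([
    fetch(`https://products.internal/products?ids=${order.productIds.join(',')}`)
      .then(r => r.json()),
    fetch(`https://accounts.internal/accounts?customerId=${order.customerId}`)
      .then(r => r.json()),
  ]);

  // Reject early with a 400 if any foreign key is invalid.
  const knownIds = new Set(products.map((p: { id: string }) => p.id));
  const badIds = order.productIds.filter((id: string) => !knownIds.has(id));
  const accountOk = accounts.some((a: { id: string }) => a.id === order.accountId);
  if (badIds.length > 0 || !accountOk) {
    return { statusCode: 400, body: JSON.stringify({ badIds, accountOk }) };
  }

  // ...persist the order and return it...
  return { statusCode: 201, body: JSON.stringify({ status: 'created' }) };
};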
For example, I have a post service. In the UI I need to show a post together with user info (the username, and an ID for redirecting to the user page).
Options:
Should I store the username and ID in the post service? (When a user registers in the system, I would send the subset of details to the post service via RabbitMQ.) (Total requests from UI = 1)
I store only the ID of the user (AR), and the UI component fetches the user by ID. (Total requests from UI = 2)
Both of them are OK. The decision is based on how you map the concepts between different bounded contexts. The patterns are:
Anticorruption Layer
Shared Kernel
Open Host Service (option 2)
Separate Ways
Customer Supplier
Conformist
Partnership
Published Language
...
It is not only about personal preference, but also about the organization structure (Conway's Law).
If both contexts (post and user) are controlled by your team, you could choose either of them. Considering the complexity of option 1, I prefer option 2 since it's very straightforward. Starting from the easier one and then evolving your architecture is always a good idea.
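For reference, a rough sketch of option 1's event-driven sync with amqplib (the queue name, payload, and local write are placeholders):

import amqp from 'amqplib';

declare function saveUserSubset(u: { id: string; username: string }): Promise<void>; // placeholder

export async function consumeUserRegistrations() {
  const connection = await amqp.connect('amqp://rabbitmq');
  const channel = await connection.createChannel();
  await channel.assertQueue('post-service.user-registered');

  await channel.consume('post-service.user-registered', async (msg) => {
    if (!msg) return;
    // Keep only the subset the post service needs (id + username).
    const { id, username } = JSON.parse(msg.content.toString());
    await saveUserSubset({ id, username });
    channel.ack(msg);
  });
}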
I am investigating options to build a system providing "Entity Access Control" across a microservices-based architecture, to restrict access to certain data based on the requesting user. A full Role-Based Access Control (RBAC) system has already been implemented to restrict certain actions (based on API endpoints), but nothing has been implemented to restrict those actions against one data entity over another. Hence the desire for an Attribute-Based Access Control (ABAC) system.
Given the requirement that the system be fit for purpose, and my own priority of following the best practice of keeping security logic in a single location, I decided to create an externalised "Entity Access Control" (EAC) API.
The end result of my design was something similar to the following image, which I have seen floating around (I think from axiomatics.com).
The problem is that the whole thing falls over the moment you start talking about an API that responds with a list of results.
E.g. a /api/customers endpoint on a Customers API that takes parameters such as a query filter, sort, order, and limit/offset values to facilitate pagination, and returns a list of customers to a front end. How do you then also apply ABAC to each of these entities in a microservices landscape?
Terrible solutions to the above problem tested so far:
Get the first page of results, send all of those to the EAC API, get the responses, drop the rejected ones, get more customers from the DB, check those... and repeat until you either fill a page of results or run out of customers in the DB. Tested: for 14,000 records (absolutely within reason in my situation) it took 30 seconds to get an API response for someone who had no permission to view any customers.
On every request to the all-customers endpoint, send a request to the EAC API for every customer available to the original requesting user. Tested: for 14,000 records the response payload was over half a megabyte for someone who had permission to view all customers. I could split it into multiple requests, but then I am just trading payload size against request spam, and the performance penalty doesn't go anywhere.
Give up on the ability to view multiple records in a list. This totally breaks the API's usefulness for customer needs.
Store all the data and logic required to perform the ABAC controls in each API. This is fraught with danger and basically guaranteed to fail in a way that is beyond my risk appetite considering the domain I am working within.
Note: I tested with 14,000 records just because that is the benchmark of our current data. It is entirely feasible that a single API could serve 100,000 or 1M records, so anything that involves iterating over the whole data set, or transferring the whole data set over the wire, is entirely unsustainable.
So here lies the question: how do you implement an externalised ABAC system in a microservices architecture (as per the diagram) whilst still being able to serve requests that respond with multiple entities, with a query filter, sort, order, and limit/offset values to facilitate pagination?
After dozens of hours of research, I concluded that this is an entirely unsolvable problem and simply a side effect of microservices (and, more importantly, of segregated entity storage).
If you want the benefits of a maintainable (as in, a single piece of externalised infrastructure) entity-level attribute access control system, a monolithic approach to entity storage is required. You cannot simultaneously reap the benefits of microservices.
I want to get the details of multiple stores using Shopify GraphQL.
I tried the following, but I only get the current store's details:
{
  nodes(ids: ["gid://shopify/Shop/22954311758", "gid://shopify/Shop/25747685469"]) {
    ... on Shop {
      id
      email
    }
  }
}
I know Shopify does not provide other shops' details like this for security reasons, but I am looking for an alternative way to get multiple stores' details in a single GraphQL call.
This is not possible by default because of two restrictions:
1) You can't request two different endpoints (you would have to request the two stores' GraphQL endpoints at the same time).
2) You need to pass a different access token in the request header for each store.
To achieve this, you would need to create a custom GraphQL server that makes the requests to the two different stores and merges them into your response. So in fact you are making two requests in the background, but to the caller it appears as a single one.
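A rough sketch of that fan-out (the store domains, tokens, and Admin API version are placeholders):

// One call for the caller, two Shopify Admin GraphQL calls in the background.
const STORES = [
  { domain: 'store-one.myshopify.com', token: process.env.STORE_ONE_TOKEN! },
  { domain: 'store-two.myshopify.com', token: process.env.STORE_TWO_TOKEN! },
];

const SHOP_QUERY = `{ shop { id email } }`;

async function getAllShopDetails() {
  const responses = await Promise.all(
    STORES.map(store =>
      fetch(`https://${store.domain}/admin/api/2024-01/graphql.json`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          // Each store needs its own access token.
          'X-Shopify-Access-Token': store.token,
        },
        body: JSON.stringify({ query: SHOP_QUERY }),
      }).then(r => r.json())
    )
  );
  return responses.map(r => r.data.shop);
}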
In addition, if you build a separate GraphQL server, you won't gain any speed unless you cache the requests.
That said, I find this solution massive overkill for the request as it currently stands in your question. If you need many requests of this kind, then yes; but if you just want to turn two requests into one, make the two requests instead of reinventing the wheel.