Child service data access to other services with Apollo Federation configuration - apollo-server

We've been using Apollo Federation for about 1.5 years as our main API. Behind the federation gateway are six child GraphQL services which are all combined at the gateway. This configuration works really well when you have a result set of data which spans the different services, e.g. a list of tickets which references the user who purchased each one, the event it is associated with, and so on.
One place we have experienced this breaking down is when a resolver needs a set of data which is already defined in another child service (or across other child services). We have not discovered any way to query the federation from within a child service to get a federated set of data for a resolver to work with.
For example, say we have a GraphQL query defined which queries all tickets for an event and, through federation, returns the purchaser's data, the event's data, and the product's data. If I need this data set from a resolver, I would need to make all those queries again myself, duplicating dataSource logic and matching up the data in code.
One crazy thought which came up is to set up an apollo-datasource-rest dataSource that makes queries against our gateway endpoint, for use by our resolvers. This way we can request the data we need and let Apollo Federation stitch all the data together as it is designed to do. So instead of the resolver querying the database for all the different pieces of data and then matching them up, we would request the data from our GraphQL gateway where this query is already defined.
What we are trying to avoid is duplicating queries in child services to get details which are already available in (or across) other services.
The question
Is this a really bad idea?
Is it a plausible idea?
Has anyone tried something like this before?
Yes, we would have to ensure that there aren't circular dependencies between the resolvers. In our case I see the "dataSource accessing the gateway" being used to gather initial data in mutations.
Example of a federated query. In this query, event, allocatedTo, purchasedBy, and product are all types resolved by other services: event is an Event type, allocatedTo and purchasedBy are Profile types, and product is a Product type. This query provides all the data I would use to, say, send an email notification to the people in the result set. But to get this data from a resolver in a mutation that queues up those emails, I need to make many queries and align all the data in code myself, instead of using the gateway/federation, which already does this through the established query. The thought behind using apollo-datasource-rest to query our own gateway is to get at this data in exactly this form, rather than through separate queries and code to align IDs, etc.
query getRegisteredUsers($eventId: ID!) {
  communications {
    event(eventId: $eventId) {
      registered {
        event {
          name
        }
        isAllocated
        hasCheckedIn
        lastUpdatedAt
        allocatedTo {
          firstName
          lastName
          email
        }
        purchasedBy {
          id
          firstName
          lastName
        }
        product {
          __typename
          ... on Ticket {
            id
            name
          }
        }
      }
    }
  }
}
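For concreteness, here is a minimal sketch of what that "gateway as a dataSource" idea could look like with apollo-datasource-rest. The gateway URL, endpoint path, and response handling are assumptions for illustration, not a working recommendation:

import { RESTDataSource } from 'apollo-datasource-rest';

// The federated query shown above, stored as a plain string.
const GET_REGISTERED_USERS = `
  query getRegisteredUsers($eventId: ID!) {
    communications {
      event(eventId: $eventId) {
        registered {
          event { name }
          allocatedTo { firstName lastName email }
          purchasedBy { id firstName lastName }
        }
      }
    }
  }
`;

class GatewayAPI extends RESTDataSource {
  // Hypothetical internal address of the federation gateway.
  baseURL = 'http://gateway.internal/';

  async getRegisteredUsers(eventId: string) {
    // POST the query back to the gateway's /graphql endpoint and let
    // federation stitch the result together, exactly as a client would.
    const response = await this.post('graphql', {
      query: GET_REGISTERED_USERS,
      variables: { eventId },
    });
    return response.data.communications.event.registered;
  }
}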

FYI, I didn't quite understand the question until I looked at your edits, which had some examples.
Is this a really bad idea?
In my experience, yes, though not as an idea: you're in good company with other very smart people who have done this.
Is it a plausible idea?
Absolutely it's plausible, but I don't recommend it.
Has anyone tried something like this before?
Yes, but I hope you don't.
Your Question
Having resolvers make requests back to the Gateway:
I do not recommend this. I've seen this happen, and I've personally worked to help companies out of the mess this takes you into. Circular dependencies are going to happen. Latency is just going to skyrocket as you have more and more hops, TLS handshakes, etc. Do orchestration instead. It feels weird to introduce non-GraphQL, but IMO in the end it's way simpler, faster, and more maintainable than where "just talk to the gateway" takes you.
What then?
When you're dealing with some mutations which require data from across multiple data sources to be able to process a single thing (like sending a transaction email to a person), you have some choices. Something that helped me figure this out was the question "how would I have done this before GraphQL?"
Orchestration: you have a single "orchestration service", which takes the mutation and makes calls (preferably non-GraphQL, so REST, gRPC, Lambda?) to the owner services to collect the data. The orchestration layer does NOT own data, but it can speak with the other services. It's like Federation, but for sending the data into the request, instead of into the response. (A rough sketch follows this list.)
Choreography: you trigger roughly the same thing, but via an event stream. (doesn't work as well with the request / response model of GraphQL)
CQRS (projections): Copies of database data, used for things like reporting. CQRS is basically "the way you read data doesn't have to be the same as the way you write it", and it allows for things like event-sourced data. If all of your data sources actually share the same database, you don't even need "projections" as much as you would just want a read replica. If you're not at enough scale to do replicas, just skip it and promise never to write data that your current domain doesn't own.
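As a rough illustration of the orchestration option, the service that owns the email-sending mutation could gather what it needs with plain internal REST calls instead of going back through the gateway. The service hostnames and endpoints below are hypothetical, and a global fetch (Node 18+) is assumed:

// Hypothetical orchestration step: collect the data needed to send the
// emails by calling the owning services directly (REST here, could be gRPC).
async function gatherRegistrationData(eventId: string) {
  const [registrations, event] = await Promise.all([
    fetch(`http://tickets.internal/events/${eventId}/registrations`).then((r) => r.json()),
    fetch(`http://events.internal/events/${eventId}`).then((r) => r.json()),
  ]);

  // Batch-load the purchaser profiles referenced by the registrations.
  const profileIds = Array.from(new Set(registrations.map((reg: any) => reg.purchasedById)));
  const profiles = await fetch('http://profiles.internal/profiles/batch', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ ids: profileIds }),
  }).then((r) => r.json());

  return { event, registrations, profiles };
}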
What I Do
Where I work, I have gotten us to:
Queries
queries always start with "one database call".
if the "one database call" goes to one domain of data (most often true), that query goes into one service, and Federation fills in the leaves of your tree. If you really follow CQRS, this could be handled the same way as the projection case below, but we don't.
if your "one database call" needs data from across domains (e.g. get all orders with Product X in it, but sorted by the customer's first name), you need a database projection. Preferably this can be handled by a "reporting service": it doesn't OWN any data, but it READS all data.
Mutations
if your top-level mutation acts only within one domain, the mutation goes in that service, it can use database transactions, and Federation fills in the leaves
if your mutation is required to write across multiple domains and requires immediate consistency (placing an order with inventory, payments, etc), we chose orchestration to write across multiple services (and roll-back when necessary, since we don't have database transactions to do it for us).
if your mutation requires data from many places to send further into the request (like sending an email), we chose orchestration to pull from the multiple services and to push that data down. This feels very much like Federation, but in reverse.

Related

Attribute Based Access Control (ABAC) in a microservices architecture for lists of resources

I am investigating options to build a system to provide "Entity Access Control" across a microservices-based architecture, to restrict access to certain data based on the requesting user. A full Role Based Access Control (RBAC) system has already been implemented to restrict certain actions (based on API endpoints); however, nothing has been implemented to restrict those actions against one data entity over another. Hence the desire for an Attribute Based Access Control (ABAC) system.
Given the requirements for the system to be fit for purpose, and my own priority that security logic remain implemented in a single location (following best practice), I decided on the creation of an externalised "Entity Access Control" API.
The end result of my design was something similar to the following image I have seen floating around (I think from axiomatics.com)
The problem is that the whole thing falls over the moment you start talking about an API that responds with a list of results.
E.g. a /api/customers endpoint on a Customers API that takes parameters such as a query filter, sort, order, and limit/offset values to facilitate pagination, and returns a list of customers to a front end. How do you then also provide ABAC on each of these entities in a microservices landscape?
Terrible solutions to the above problem tested so far:
Get the first page of results, send all of those to the EAC API, get the responses, drop the ones that are rejected from the response, get more customers from the DB, check those... and repeat until either you get a full page of results or run out of customers in the DB. Testing showed that for 14,000 records (which is absolutely within reason in my situation) it would take 30 seconds to get an API response for someone who had zero permission to view any customers.
On every request to the all-customers endpoint, a request would be sent to the EAC API for every customer available to the original requesting user. Testing showed that for 14,000 records the response payload would be over half a megabyte for someone who had permission to view all customers. I could split it into multiple requests, but then you are just balancing payload size against request spam and the performance penalty doesn't go anywhere.
Give up on the ability to view multiple records in a list. This totally breaks the API's usefulness for customer needs.
Store all the data and logic required to perform the ABAC controls in each API. This is fraught with danger and basically guaranteed to fail in a way that is beyond my risk appetite considering the domain I am working within.
Note: I tested with 14,000 records just because it's a benchmark of our current state of data. It is entirely feasible that a single API could serve 100,000 or 1m records, so anything that involves iterating over the whole data set or transferring the whole data set over the wire is entirely unsustainable.
So, here lies the question... How do you implement an externalised ABAC system in a microservices architecture (as per the diagram) whilst also being able to service requests that respond with multiple entities and support a query filter, sort, order, and limit/offset values to facilitate pagination?
After dozens of hours of research, it was decided that this is an entirely unsolvable problem and is simply a side effect of microservices (and more importantly, segregated entity storage).
If you want the benefits of a maintainable (as in single piece of externalised infrastructure) entity level attribute access control system, a monolithic approach to entity storage is required. You cannot simultaneously reap the benefits of microservices.

Mechanisms for response aggregation in event sourcing based microservices

When it comes to implementing event sourcing based microservices, one of the main concerns that we've come across is aggregating data for responses. For example, we may have two entities, like school and student. One microservice may be responsible for handling school-related business logic while another may handle students.
Now if someone makes a query through a REST endpoint asking for a particular student, and they expect both school and student details, then the only ways known to me are the following.
Use something like service chaining. An example would be an API gateway aggregating a response after making a couple of requests to a couple of microservices.
Having everything replicated throughout all services. Essentially, data would be duplicated.
Having services call each other for those extra bits of information. This solution works but is hard to scale and goes against the basic idea of using event sourcing.
My question is: what other ways are there to do this?
A better approach can be to create a separate reporting/search service that aggregates the data from both services, for example implemented using ElasticSearch or Solr. This now allows users to run searches and queries across multiple services and aggregates.
Sure, it will be eventually consistent, but I doubt that is a problem. This gives a better separation of concerns, and you get a nice search experience for your users at the same time.
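For illustration, a rough sketch of such a reporting service, written against the v8 @elastic/elasticsearch client; the event shape, index name, and cluster address are all assumptions:

import { Client } from '@elastic/elasticsearch';

const es = new Client({ node: 'http://localhost:9200' });

// Hypothetical event published whenever a student is created or updated;
// the reporting service stores a denormalized document that already carries
// the school details the read side needs.
interface StudentUpdated {
  studentId: string;
  name: string;
  school: { id: string; name: string };
}

async function onStudentUpdated(event: StudentUpdated) {
  await es.index({
    index: 'students-view',
    id: event.studentId,
    document: {
      name: event.name,
      school: event.school,
    },
  });
}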

Is graphql schema circular reference an anti-pattern?

A GraphQL schema like this:
type User {
  id: ID!
  location: Location
}

type Location {
  id: ID!
  user: User
}
Now, the client sends a GraphQL query. Theoretically, User and Location can reference each other circularly, to unlimited depth.
I think it's an anti-pattern. As far as I know, there is no middleware or built-in way to limit the nesting depth of a query in either the GraphQL or Apollo community.
A query with unbounded nesting depth will cost my system a lot of resources (bandwidth, hardware, performance), not only server-side but also client-side.
So, if a GraphQL schema allows circular references, there should be some middleware or way to limit the nesting depth of a query, or some constraints added to the query.
Maybe not allowing circular references is a better idea?
I prefer sending another query, or doing multiple operations in one query. It's much simpler.
Update
I found this library: https://github.com/slicknode/graphql-query-complexity. If GraphQL doesn't limit circular references, this library can protect your application against resource exhaustion and DoS attacks.
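A rough sketch of how that library can be used to reject expensive queries before execution; the complexity budget of 1000 is arbitrary, and the schema is assumed to be your executable schema:

import { parse, GraphQLSchema } from 'graphql';
import { getComplexity, simpleEstimator } from 'graphql-query-complexity';

// Reject queries whose estimated cost exceeds a budget, before executing them.
function assertQueryNotTooComplex(schema: GraphQLSchema, queryString: string) {
  const complexity = getComplexity({
    schema,
    query: parse(queryString),
    variables: {},
    estimators: [simpleEstimator({ defaultComplexity: 1 })],
  });

  if (complexity > 1000) {
    throw new Error(`Query is too complex: ${complexity} exceeds the limit of 1000`);
  }
}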
It depends.
It's useful to remember that the same solution can be a good pattern in some contexts and an antipattern in others. The value of a solution depends on the context that you use it. — Martin Fowler
It's a valid point that circular references can introduce additional challenges. As you point out, they are a potential security risk in that they enable a malicious user to craft potentially very expensive queries. In my experience, they also make it easier for client teams to inadvertently overfetch data.
On the other hand, circular references allow an added level of flexibility. Running with your example, if we assume the following schema:
type Query {
  user(id: ID): User
  location(id: ID): Location
}

type User {
  id: ID!
  location: Location
}

type Location {
  id: ID!
  user: User
}
it's clear we could potentially make two different queries to fetch effectively the same data:
{
  # query 1
  user(id: ID) {
    id
    location {
      id
    }
  }
  # query 2
  location(id: ID) {
    id
    user {
      id
    }
  }
}
If the primary consumers of your API are one or more client teams working on the same project, this might not matter much. Your front end needs the data it fetches to be of a particular shape and you can design your schema around those needs. If the client always fetches the user, can get the location that way and doesn't need location information outside that context, it might make sense to only have a user query and omit the user field from the Location type. Even if you need a location query, it might still not make sense to expose a user field on it, depending on your client's needs.
On the flip side, imagine your API is consumed by a larger number of clients. Maybe you support multiple platforms, or multiple apps that do different things but share the same API for accessing your data layer. Or maybe you're exposing a public API designed to let third-party apps integrate with your service or product. In these scenarios, your idea of what a client needs is much blurrier. Suddenly, it's more important to expose a wide variety of ways to query the underlying data to satisfy the needs of both current clients and future ones. The same could be said for an API for a single client whose needs are likely to evolve over time.
It's always possible to "flatten" your schema as you suggest and provide additional queries as opposed to implementing relational fields. However, whether doing so is "simpler" for the client depends on the client. The best approach may be to enable each client to choose the data structure that fits their needs.
As with most architectural decisions, there's a trade-off and the right solution for you may not be the same as for another team.
If you do have circular references, all hope is not lost. Some implementations have built-in controls for limiting query depth. GraphQL.js does not, but there are libraries out there like graphql-depth-limit that do just that. It's worth pointing out that breadth can be just as large a problem as depth: regardless of whether you have circular references, you should look into implementing pagination with a max limit when resolving Lists as well, to prevent clients from potentially requesting thousands of records at a time.
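For example, a minimal sketch of wiring graphql-depth-limit into an Apollo Server; the depth of 5 is an arbitrary choice, and typeDefs/resolvers are assumed to exist elsewhere:

import { ApolloServer } from 'apollo-server';
import depthLimit from 'graphql-depth-limit';
import { typeDefs, resolvers } from './schema'; // assumed to exist

// Reject any operation nested more than 5 levels deep at validation time.
const server = new ApolloServer({
  typeDefs,
  resolvers,
  validationRules: [depthLimit(5)],
});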
As #DavidMaze points out, in addition to limiting the depth of client queries, you can also use dataloader to mitigate the cost of repeatedly fetching the same record from your data layer. While dataloader is typically used to batch requests to get around the "n+1 problem" that arises from lazily loading associations, it can also help here. In addition to batching, dataloader also caches the loaded records. That means subsequent loads for the same record (inside the same request) don't hit the db but are fetched from memory instead.
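A small sketch of that per-request caching pattern; findUsersByIds is a stand-in for your own batch lookup:

import DataLoader from 'dataloader';

interface User { id: string; name: string }

// Stand-in for your data layer: fetch many users in one round trip.
declare function findUsersByIds(ids: readonly string[]): Promise<User[]>;

// Create one loader per request so the cache is scoped to that request.
export function createLoaders() {
  return {
    userById: new DataLoader<string, User>(async (ids) => {
      const users = await findUsersByIds(ids);
      // DataLoader expects results in the same order as the requested keys.
      const byId = new Map(users.map((u): [string, User] => [u.id, u]));
      return ids.map((id) => byId.get(id) ?? new Error(`User ${id} not found`));
    }),
  };
}

// In a resolver: context.loaders.userById.load(id)
// Loading the same id twice in one request is served from the in-memory
// cache, which blunts the cost of a query that circles back to the same user.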
TL;DR: Circular references are an anti-pattern for non-rate-limited GraphQL APIs. APIs with rate limiting can safely use them.
Long Answer: Yes, true circular references are an anti-pattern on smaller/simpler APIs ... but when you get to the point of rate-limiting your API you can use that limiting to "kill two birds with one stone".
A perfect example of this was given in one of the other answers: GitHub's GraphQL API lets you request a repository, with its owner, with their repositories, with their owners ... infinitely ... or so you might think from the schema.
If you look at the API though (https://developer.github.com/v4/object/user/) you'll see their structure isn't directly circular: there are types in-between. For instance, User doesn't reference Repository, it references RepositoryConnection. Now, RepositoryConnection does have a RepositoryEdge, which does have a nodes property of type [Repository] ...
... but when you look at the implementation of the API: https://developer.github.com/v4/guides/resource-limitations/ you'll see that the resolvers behind the types are rate-limited (ie. no more than X nodes per query). This guards both against consumers who request too much (breadth-based issues) and consumers who request infinitely (depth-based issues).
Whenever a user requests a resource on GitHub, the API can allow circular references because it puts the burden of not making the query circular onto the consumer. If the consumer fails to do so, the query fails because of the rate limiting.
This lets responsible users ask for the user, of the repository, owned by the same user ... if they really need that ... as long as they don't keep asking for the repositories owned by the owner of that repository, owned by ...
Thus, GraphQL APIs have two options:
avoid circular references (I think this is the default "best practice")
allow circular references, but limit the total nodes that can be queried per call, so that infinite circles aren't possible
If you don't want to rate-limit, GraphQL's approach of using different types can still give you a clue to a solution.
Let's say you have users and repositories: you need two types for each, a User and a UserLink (or UserEdge, UserConnection, UserSummary ... take your pick), and a Repository and a RepositoryLink.
Whenever someone requests a user via a root query, you return the User type. But that User type would not have:
repositories: [Repository]
it would have:
repositories: [RepositoryLink]
RepositoryLink would have the same "flat" fields as Repository has, but none of its potentially circular object fields. Instead of owner: User, it would have owner: ID.
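Sketched as a schema (the extra scalar fields are made up for illustration):

import { gql } from 'apollo-server';

// "Link" types carry the same flat fields but stop the cycle: owner is an ID,
// not a User, so a query cannot recurse back into repositories again.
const typeDefs = gql`
  type User {
    id: ID!
    login: String!
    repositories: [RepositoryLink]
  }

  type RepositoryLink {
    id: ID!
    name: String!
    owner: ID
  }
`;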
The pattern you show is fairly natural for a "graph" and I don't think it's especially discouraged in GraphQL. The GitHub GraphQL API is the thing I often look at when I wonder "how do people build larger GraphQL APIs", and there are routinely object cycles there: a Repository has a RepositoryOwner, which can be a User, which has a list of repositories.
At least graphql-ruby has a control to limit nesting depth. Apollo doesn't obviously have this control, but you might be able to build a custom data source or use the DataLoader library to avoid repeatedly fetching objects you already have.
The above answers provide good theoretical discussion on the question. I would like to add more practical considerations that occur in software development.
As #daniel-rearden points out, a consequence of circular references is that it allows for multiple query documents to retrieve the same data. In my experience, this is a bad practice because it makes client-side caching of GraphQL requests less predictable and more difficult, since a developer would have to explicitly specify that the documents are returning the same data in a different structure.
Furthermore, in unit testing, it is difficult to generate mock data for objects whose fields/properties contain circular references to the parent. (at least in JS/TS; if there are languages that support this easily out-of-the-box, I'd love to hear it in a comment)
Maintenance of a clear data hierarchy seems to be the clear choice for understandable and maintainable schemas. If a reference to a field's parent is frequently needed, it is perhaps best to build a separate query.
Aside: Truthfully, if it were not for the practical consequences of circular references, I would love to use them. It would be beautiful and amazing to represent data structures as a "mathematically perfect" directed graph.

Am I misusing GraphQL if I must decompose REST data, then re-aggregate it?

We are considering using GraphQL on top of a REST service (using the FHIR standard for medical records).
I understand that the pattern with GraphQL is to aggregate the results of multiple, independent resolvers into the final result. But a FHIR-compliant REST server offers batch endpoints that already aggregate data. Sometimes we'll need à la carte data (a patient's age or address only, for example). But quite often, we'll need most or all of the data available about a particular patient.
So although we can get that kind of plenary data from a single REST call that knits together multiple associations, it seems we will need to fetch it piecewise to do things the GraphQL way.
An optimization could be to eager load and memoize all the associated data anytime any resolver asks for any data. In some cases this would be appropriate, while in other cases it would be serious overkill. But discerning when it would be overkill seems impossible given that resolvers should be independent. Also, it seems bloody-minded to undo and then redo something that the REST service is already perfectly capable of doing efficiently.
So:
Is GraphQL the wrong tool when it sits on top of a REST API that can efficiently aggregate data?
If GraphQL is the right tool in this situation, is eager-loading and memoization of associated data appropriate?
If eager-loading and memoization is not the right solution, is there an alternative way to take advantage of the REST service's ability to aggregate data?
My question is different from this question and this question because neither touches on how to take advantage of another service's ability to aggregate data.
An alternative approach would be to parse the request inside the resolver for a particular query. The fourth parameter passed to a resolver is an object containing extensive information about the request, including the selection set. You could then await the batched request to your API endpoint based on the requested fields, and finally return the result of the REST call, and let your lower level resolvers handle parsing it into the shape the data was requested in.
Parsing the info object can be a PITA, although there are libraries out there for that, at least in the Node ecosystem.
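A minimal sketch of that approach; fetchPatientBundle is a hypothetical data-source method wrapping the FHIR batch endpoint:

import { GraphQLResolveInfo } from 'graphql';

const resolvers = {
  Query: {
    patient: async (
      _parent: unknown,
      { id }: { id: string },
      { dataSources }: any,
      info: GraphQLResolveInfo,
    ) => {
      // Top-level fields the client actually asked for, e.g. ["age", "address"].
      const requested = (info.fieldNodes[0].selectionSet?.selections ?? [])
        .filter((sel) => sel.kind === 'Field')
        .map((sel: any) => sel.name.value);

      // One batched FHIR call sized to the selection set; child resolvers then
      // pick the pieces they need out of the returned bundle.
      return dataSources.fhirAPI.fetchPatientBundle(id, requested);
    },
  },
};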

Micro Services and noSQL - Best practice to enrich data in micro service architecture

I want to plan a solution that manages enriched data in my architecture.
To be more clear, I have dozens of micro services.
Let's say: Country, Building, Floor, Worker.
Each runs over a separate NoSQL data store.
When I get the data from the Worker service, I also want to present the name of the floor the worker is working on, the building name, and the country name.
Solution 1.
Client will query all microservices.
Problem - multiple requests and making the client be aware of the structure.
I know multiple requests shouldn't bother me, but I believe that returning a JSON document describing the entity in one single call is better.
Solution 2.
Create an orchestration that retrieves the data from multiple services.
Problem - if the data (entity names, for example) is not stored in the same document in the DB it is very hard to sort and filter by these fields.
Solution 3.
Before saving the entity, e.g. worker, call all the other services and fill the relative data (Building Name, Country name).
Problem - when the building name is changed, it doesn't reflect in the worker service.
Solution 4.
(This is the best one I can come up with).
Create a process that subscribes to a broker and receives all entity changes.
For each entity it updates all the relevant entities.
When an entity changes, let's say building name changes, it updates all the documents that hold the building name.
Problem:
Each service has to know what can be updated.
When a trailing update happens it shouldn't publish to the broker again (a recursive update), so this can add complexity to the microservices.
Solution 5.
Keeping everything normalized. Filter and sort in ElasticSearch.
Problem: keeping normalized data in ES is too expensive performance-wise.
One thing I saw Netflix do (which I like) is create intermediary services for stuff like this. So maybe a new intermediary service that can call the other services to gather all the data and then create the unified output with the Country, Building, Floor, and Worker.
You can even go one step further and try to come up with a scheme for providing as input which resources you want to include in the output.
So I guess this closely matches your Solution 2. I notice that you mention for Solution 2 that there are concerns with sorting/filtering in the DBs. I think that if you are using NoSQL then it has to be for a reason, and more often than not the reason is performance. If this was done wrong then, yeah, you will have problems, but if all the appropriate searchable fields are properly keyed and indexed (as #Roman Susi mentioned in his bullet points 1 and 2) then I don't see this as being a problem. This service will only be as fast as the combination of your other services and data stores, so they have to be fast.
Now you keep your individual microservices as they are, keep the client calling one service, and encapsulate the complexity of merging the data into this new service.
This is the video that I saw this in (https://www.youtube.com/watch?v=StCrm572aEs)... it's a long video but very informative.
It is hard to advise at the level of Solution N, but certain problems can be avoided by following this advice:
Use globally unique identifiers for entities, for example by assigning some kind of URI as key values.
The global ids also simplify updates, because you can track what has actually changed, the name or the entity (an entity has a one-to-one relation with its global URI).
The CAP theorem says you can choose only two from CAP. Do you want a CA architecture? Or CP? Or maybe AP? This will strongly affect the way you distribute data.
For "sort and filter" there is the MapReduce approach, which can distribute the load of figuring out those things.
Think carefully about the balance of normalization / denormalization. If your services operate on URIs, you can have a service which turns URIs into labels (names, descriptions, etc), but you do not need to keep the redundant information everywhere and update it. Do not do premature optimization; try to keep data normalized as long as possible. This way, a worker may not even need the building name, just its global id, and the microservice looks up the metadata from another microservice.
In other words, minimize the number of keys shared between services, as part of separation of concerns.
Focus on the underlying model, not the JSON going to and from. Modelling the data in your system(s) correctly gains you more than saving JSON calls.
As for NoSQL, take a look at the Riak database: it has adjustable CAP properties, IIRC. Even if you do not use it as such, reading its documentation may help you come up with a suitable architecture for your distributed microservices system. (Of course, this applies if you have an essentially parallel system.)
First of all, thanks for your question. It is similar to the main problem of document DBs: how do you sort a collection by a field from another collection? I have my own answer for that, so I'll try to comment on all your solutions:
Solution 1: It is good if the client wants to work with Countries/Buildings/Floors independently. But it does not solve the problem you mentioned in Solution 2: sorting 10k workers by building is going to be slow.
Solution 2: Similar to Solution 1, if all the client wants is a list of enriched workers without knowing how to combine it from multiple pieces.
Solution 3: As you said, unacceptable because of inconsistent data.
Solution 4: It is going to work, most of the time. But:
Huge data duplication. If you have 20 entities, you are going to have 20x the data.
Large complexity. 20 entities -> 20 different procedures to update related data.
High coupling. All your services must know about each other. A data model change will propagate to every service because of the update procedures.
Questionable eventual consistency. It can be done so that data will be consistent after failures, but it is not going to be easy.
Solution 5: This is kind of the answer :-)
But you do not want everything in ES. Keep separate services that serve separate entities and build other services on top of them.
If the client wants enriched data, build a service that returns enriched data, as in Solution 2.
If the client wants to display a list of enriched data with filtering and sorting, build a service that provides enriched data with filtering and sorting capability! Likely, the implementation of such a service will contain an ES instance that holds cached and indexed data from the lower-level services. The point here is that ES does not have to contain everything or be shared between every service; it is up to you to decide the right balance between performance and infrastructure resources.
This is a case where Linked Data can help you.
Basically the Floor attribute for the worker would be a URI (a link) to the floor itself, and any other linked data should be expressed as URIs as well.
Modeled with some JSON-LD it would look like this:
worker = {
  '#id': '/workers/87373',
  name: 'John',
  floor: {
    '#id': '/floors/123'
  }
}

floor = {
  '#id': '/floors/123',
  level: 12,
  building: { '#id': '/buildings/87' }
}

building = {
  '#id': '/buildings/87',
  name: "John's home",
  city: { '#id': '/cities/908' }
}
This way all the client has to do is prefix the #id with the base URL (like api.example.com) and make a simple GET call.
To remove the burden of the extra calls from the client (in case it's a slow mobile device), we use the gateway pattern with micro-services. The gateway can expand those links with very little effort and augment the returned object. It can also make multiple calls in parallel.
So the gateway will make a GET /floors/123 call and replace the floor object on the worker with the reply.
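A rough sketch of that expansion step inside the gateway; the base URL is an assumption, a global fetch (Node 18+) is assumed, and only one level of links is expanded:

const BASE_URL = 'https://api.example.com'; // hypothetical internal base URL

// Treat any value of the shape { '#id': '/floors/123' } as a link and
// replace it with the result of a GET to that resource, in parallel.
async function expandLinks(entity: Record<string, unknown>) {
  const entries = await Promise.all(
    Object.entries(entity).map(async ([key, value]) => {
      const link = value as { '#id'?: string } | null;
      if (link && typeof link === 'object' && link['#id'] && Object.keys(link).length === 1) {
        const expanded = await fetch(`${BASE_URL}${link['#id']}`).then((r) => r.json());
        return [key, expanded] as const;
      }
      return [key, value] as const;
    }),
  );
  return Object.fromEntries(entries);
}

// e.g. expandLinks(worker) swaps the floor link for the GET /floors/123 reply.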
