GraphQL: making use of it to handle complex view/UI logic

I've got a requirement to move some complex switching logic from the UI layer into the GraphQL layer. This logic performs a computation over a bunch of backend configurations to determine whether a set of UI components should be shown or hidden. Deciding whether to show or hide a particular component in the set takes at least three levels of nested if-else branches. I'm currently thinking of adding the following type to the schema to solve this problem:
type MyComplexView {
  componentAShown: Boolean!
  componentBShown: Boolean!
  # ... (a gazillion other components here)
  componentXShown: Boolean!
}
...and then I'd expect the data returned from the query would look something like:
{
  "data": {
    "myComplexView": {
      "componentAShown": true,
      "componentBShown": false,
      // ... (a gazillion other components here)
      "componentXShown": true
    }
  }
}
But it just doesn't feel right, and it feels like I'm abusing the GraphQL layer for doing this kind of job.
So the question is: is this also a valid use case for GraphQL? Is there an alternative or better way of doing it? The idea is to share the complex switching logic with all the clients (e.g. web and mobile) that consume the API, without rewriting/duplicating the logic on the client side.
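For illustration, the resolver behind that type would centralize the branching once on the server, so every client only reads booleans. A rough sketch of what I have in mind (all configuration names below are made up):

interface BackendConfig {
  featureFlags: Record<string, boolean>;
  userTier: 'free' | 'pro';
  region: string;
}

const resolvers = {
  Query: {
    myComplexView: (_parent: unknown, _args: unknown, ctx: { config: BackendConfig }) => {
      const { featureFlags, userTier, region } = ctx.config;
      return {
        // Each flag hides one of the deeply nested if-else chains from the UI layer.
        componentAShown: featureFlags.newCheckout && userTier === 'pro',
        componentBShown: region === 'EU' ? featureFlags.gdprBanner : false,
        componentXShown: !featureFlags.legacyLayout,
      };
    },
  },
};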

Related

Should GraphQL DataLoader wrap request to database or wrap requests to service methods?

I have a very common GraphQL schema like this (pseudocode):
Post {
  commentsPage(skip: Int, limit: Int) {
    total: Int
    items: [Comment]
  }
}
So to avoid the N+1 problem when requesting multiple Post objects, I decided to use Facebook's DataLoader.
Since I'm working on a Nest.js three-tier layered application (Resolver-Service-Repository), I have a question:
should I wrap my repository methods with DataLoader, or should I wrap my service methods with DataLoader?
Below is an example of my service method that returns a page of comments (i.e. this method is called from the commentsPage field resolver). Inside the service method I'm using two repository methods (#count and #find):
import { Injectable } from '@nestjs/common';

@Injectable()
export class CommentsService {
  constructor(
    private readonly repository: CommentsRepository,
  ) {}

  async getCommentsPage(postId, dateStart, dateEnd, skip, limit): Promise<PaginatedComments> {
    // One repository call for the total count, one for the page of items.
    const total = await this.repository.getCount(postId, dateStart, dateEnd);
    const itemsDocs = await this.repository.find(postId, dateStart, dateEnd, skip, limit);
    const items = this.mapDbResultToGraphQlType(itemsDocs);
    return new PaginatedComments(total, items);
  }
}
So should I create individual DataLoader instances for each repository method (#count, #find, etc.), or should I just wrap my entire service method with DataLoader (so my commentsPage field resolver would work with the DataLoader rather than with the service)?
Disclaimer: I am not an expert in Nest.js but I have written a good bunch of dataloaders as well as worked with automatically generated dataloaders. I hope I can give a bit of insight nonetheless.
What is the actual problem?
While your question seems to be a relatively simple either-or question, it is probably much more difficult than that. I think the actual problem is the following: whether or not to use the dataloader pattern for a specific field needs to be decided on a per-field basis. The repository+service pattern, on the other hand, tries to abstract away this decision by exposing abstract and powerful ways of accessing data. One way out would be to simply "dataloaderify" every method of your service. Unfortunately, in practice this is not really feasible. Let's explore why!
Dataloader is made for key-value lookups
Dataloader provides a promise cache to reduce duplicated calls to the database. For this cache to work, all requests need to be simple key-value lookups (e.g. userByIdLoader, postsByUserIdLoader). This quickly becomes insufficient; for example, in your case the request to the repository has a lot of parameters:
this.repository.find(postId, dateStart, dateEnd, skip, limit);
Sure, technically you could make { postId, dateStart, dateEnd, skip, limit } your key and then somehow hash its contents to generate a unique cache key.
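For completeness, the dataloader package supports exactly that through its cacheKeyFn option; here is a sketch (Comment and fetchCommentsPage are placeholders, not real APIs):

import DataLoader from 'dataloader';

type Comment = { id: string; body: string };

interface CommentsPageKey {
  postId: string;
  dateStart: Date;
  dateEnd: Date;
  skip: number;
  limit: number;
}

// Placeholder for the actual repository call.
declare function fetchCommentsPage(key: CommentsPageKey): Promise<Comment[]>;

const commentsPageLoader = new DataLoader<CommentsPageKey, Comment[], string>(
  // We can only dedupe identical requests here; there is no sensible way to
  // answer all keys with a single query, which is exactly the problem below.
  async (keys) => Promise.all(keys.map((key) => fetchCommentsPage(key))),
  // Serialize the object key so structurally equal keys hit the same cache entry.
  { cacheKeyFn: (key) => JSON.stringify(key) }
);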
Writing Dataloader queries is an order of magnitude harder than normal queries
When you implement a dataloader query, it suddenly has to work for a list of the inputs that the initial query needed. Here is a simple SQL example:
SELECT * FROM user WHERE id = ?
-- Dataloaded
SELECT * FROM user WHERE id IN ?
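The batch function for that by-id case is mechanical; a sketch (db.query is a stand-in for whatever database client is in use):

import DataLoader from 'dataloader';

type User = { id: string; name: string };

// Stand-in for the real database client.
declare const db: { query(sql: string, params: unknown[]): Promise<User[]> };

const userByIdLoader = new DataLoader<string, User | null>(async (ids) => {
  // One IN query per tick instead of one query per id...
  const rows = await db.query('SELECT * FROM user WHERE id IN (?)', [ids]);
  // ...then re-order the rows so that results[i] belongs to ids[i], as DataLoader requires.
  const byId = new Map(rows.map((row) => [row.id, row] as const));
  return ids.map((id) => byId.get(id) ?? null);
});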
Okay now the repository example from above:
SELECT * FROM comment WHERE post_id = ? AND date < ? AND date > ? OFFSET ? LIMIT ?
-- Dataloaded
???
I have sometimes written queries that work for two parameters, and they already become very difficult problems. This is why most dataloaders are simple load-by-id lookups. This thread on Twitter discusses how a GraphQL API should only expose what can be efficiently queried. If you create service methods with powerful filter capabilities, you have the same problem even if your GraphQL API does not expose these filters.
Okay so what is the solution?
The first thing that, to my understanding, Facebook does is match fields and service methods very closely. You could do the same. This way you can decide in the service method whether you want to use a dataloader or not. For example, I don't use dataloaders in root queries (e.g. { getPosts(filter: { createdBefore: "...", user: 234 }) { ... } }) but rather in subfields of types that appear in lists (e.g. { getAllPosts { comments { ... } } }). The root query is not going to be executed in a loop and is therefore not exposed to the N+1 problem.
Your repository now exposes what can be "efficiently queried" (as in Lee's tweet), such as foreign/primary key lookups or filtered find-all queries. The service can then wrap, for example, the key lookups in a dataloader. Often I end up filtering small lists in my business logic. I think this is perfectly fine for small apps but might become problematic when you scale. The GraphQL Relay helpers for JavaScript do something similar when you use the connectionFromArray function: the pagination is not done at the database level, and this is probably okay for 90% of connections. Applied to the service from the question, this might look roughly like the sketch below.
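Roughly (the request-scoped provider and the findByPostIds repository method are assumptions layered on top of the question's code, not something Nest prescribes; CommentDoc, CommentsRepository and PaginatedComments are the types from the question):

import DataLoader from 'dataloader';
import { Injectable, Scope } from '@nestjs/common';

@Injectable({ scope: Scope.REQUEST }) // one loader, and therefore one cache, per incoming request
export class CommentsService {
  // Batches "all comments of post X" lookups; findByPostIds must return one
  // CommentDoc[] per postId, in the same order as the postIds it received.
  private readonly commentsByPostId = new DataLoader<string, CommentDoc[]>(
    (postIds) => this.repository.findByPostIds([...postIds])
  );

  constructor(private readonly repository: CommentsRepository) {}

  async getCommentsPage(postId: string, skip: number, limit: number): Promise<PaginatedComments> {
    const all = await this.commentsByPostId.load(postId);
    // Pagination happens in memory, in the spirit of Relay's connectionFromArray;
    // fine for small comment lists, worth revisiting once they grow.
    return new PaginatedComments(all.length, all.slice(skip, skip + limit));
  }
}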
Some sources to consider
GraphQL before GraphQL - Dan Schafer
Dataloader source code walkthrough - Lee Byron
There is another talk from this year's GraphQL conf that discusses data access at FB, but I don't think it has been uploaded yet. I might come back when it has been published.

Why is TYPE_ADDED_TO_INTERFACE considered a breaking change?

I am using the Apollo Server implementation of GraphQL, together with Apollo Engine, and specifically the functionality to check whether a schema diff contains any breaking changes. I'd like to understand better why TYPE_ADDED_TO_INTERFACE is considered a breaking change, and whether anyone can provide an example of a GraphQL query that would break as a consequence.
I'm using the Apollo CLI (apollo/2.9.0 darwin-x64 node-v10.10.0) to perform the schema check with the apollo service:check command.
For example, if I have this schema:
interface Animal {
  id: ID
}
type Dog implements Animal {
  id: ID
  favoriteToy: String
}
And then add this to the schema:
type Cat implements Animal {
  id: ID
}
This is considered a breaking change. Why?
I can see that if someone is making a query for all the Animal objects, and has a ... on Dog fragment in the query, they would start getting Cat objects back with only the interface fields, until they also add a ... on Cat fragment. Is that what's considered breaking?
Having a type implement an interface it previously did not should not break existing queries. To your point, even if the inline fragment is omitted, the results will still be valid (they could result in an empty object being returned if no interface fields were selected, but that's still a valid response).
I could, however, foresee issues in specific clients resulting from this kind of change. For example, when using Apollo client, we often create an IntrospectionFragmentMatcher to specifically help the client correctly cache results from union or interface fields.
To support result validation and accurate fragment matching on unions and interfaces, a special fragment matcher called the IntrospectionFragmentMatcher can be used. If there are any changes related to union or interface types in your schema, you will have to update the fragment matcher accordingly.
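For the Animal example above, that setup looks something like this with Apollo Client 2.x and apollo-cache-inmemory (the possibleTypes data is normally generated from an introspection query at build time; it is shown inline here for brevity):

import { InMemoryCache, IntrospectionFragmentMatcher } from 'apollo-cache-inmemory';

const fragmentMatcher = new IntrospectionFragmentMatcher({
  introspectionQueryResultData: {
    __schema: {
      types: [
        {
          kind: 'INTERFACE',
          name: 'Animal',
          // Must be regenerated once Cat implements Animal, otherwise the cache
          // falls back to heuristic fragment matching and logs warnings.
          possibleTypes: [{ name: 'Dog' }, { name: 'Cat' }],
        },
      ],
    },
  },
});

const cache = new InMemoryCache({ fragmentMatcher });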
In other words, having the schema change in this way could break client caching behavior. I suspect for clients that do schema-based code-generation, like apollo-android, this could also result in some runtime weirdness.

Algorithm to filter data structure AND/OR/NOT (similar to GraphQL implementation)

I want to implement a data structure that allows for powerful filtering within my application.
The closest implementation I've found is from Prisma https://www.prisma.io/docs/1.27/prisma-graphql-api/reference/queries-qwe1/#combining-multiple-filters (which, as far as I understand, actually comes from the GraphQL specification)
Example:
{
  OR: [
    {
      AND: [
        { title_in: ["My biggest Adventure", "My latest Hobbies"] }
        { published: true }
      ]
    }
    { id: "cixnen24p33lo0143bexvr52n" }
  ]
}
The idea is to compare a context against the filters and see if it's a match.
In the above example, the "context" would be an object with the id, title and published fields.
I'm looking for an algorithm that would perform the comparison and resolve whether it's a match or not.
As I'm not looking to reinvent the wheel (especially since it's a complex algorithm IMHO, as AND/OR/NOT conditions can be nested), I wonder if that particular algorithm already exists, or is based on some standard (we can find that particular data structure in several apps, such as Prisma, Pipedrive and others).
I'm looking for documentation, implementation examples or even open source implementations. (I'm using JS)
I was also looking for such an implementation but couldn't find one.
So I created a prototype for it: https://github.com/Errorname/logical-object-match
We couldn't find a solution that matched our requirements, so we built our own and released it as OSS (MIT).
https://github.com/UnlyEd/conditions-matcher
Compares a given context with a filter (a set of conditions) and resolves whether the context validates the filter. Strongly inspired by GraphQL filters.
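For reference, the core of such a matcher is a small recursive function. A minimal sketch in the spirit of the Prisma-style filter above (only strict equality and the _in suffix are handled; real libraries support many more operators):

type Context = Record<string, unknown>;
type Filter = Record<string, unknown>; // { AND: [...] }, { OR: [...] }, { NOT: {...} } or leaf conditions

function matches(context: Context, filter: Filter): boolean {
  if (Array.isArray(filter.AND)) return filter.AND.every((f) => matches(context, f as Filter));
  if (Array.isArray(filter.OR)) return filter.OR.some((f) => matches(context, f as Filter));
  if (filter.NOT) return !matches(context, filter.NOT as Filter);
  // Leaf object: every condition it contains must hold on the context.
  return Object.entries(filter).every(([key, expected]) => {
    if (key.endsWith('_in')) {
      return (expected as unknown[]).includes(context[key.slice(0, -'_in'.length)]);
    }
    return context[key] === expected;
  });
}

// The example from the question matches via the id branch of the OR:
matches(
  { id: 'cixnen24p33lo0143bexvr52n', title: 'Unrelated', published: false },
  {
    OR: [
      { AND: [{ title_in: ['My biggest Adventure', 'My latest Hobbies'] }, { published: true }] },
      { id: 'cixnen24p33lo0143bexvr52n' },
    ],
  }
); // -> true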

Graphql complex boolean queries

I understand the principles of querying via GraphQL from the docs; you could search for something like:
{
  "hero": {
    "name": "R2-D2"
  }
}
but what if you want to do something a bit more intricate, such as:
{
  "hero": {
    "name": "R2-D2 AND C-3PO AND BB-8 NOT K-2SO"
  }
}
Is there any way to pass a string like this and get the appropriate results?
No, there isn't.
You can read through the GraphQL spec and see what it does and doesn't define. In particular the spec doesn't define any sort of filtering, any sort of expression language, or any sort of Boolean combinators. (There is no native way to say the equivalent of SQL's WHERE NAME='foo' without a field resolver explicitly adding it.)
What GraphQL allows for field arguments is sufficiently open-ended that you can build richer queries on top of it, but that's very specific to some application or library. Two prominent examples are the GitHub GraphQL API (which tends to allow exact-match queries on selected fields but nothing richer) and the Prisma API (which has an involved multi-level object scheme to replicate SQL queries).
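To make that concrete, a request like the one in the question has to be modelled as explicit field arguments; here is a sketch (the heroes data source and the argument names are made up, not part of any real API):

// Schema sketch: the server decides exactly what filtering it supports.
const typeDefs = /* GraphQL */ `
  type Hero {
    name: String!
  }
  type Query {
    heroes(namesIn: [String!], namesNotIn: [String!]): [Hero!]!
  }
`;

// Matching resolver: the Boolean logic lives in server code, not in the query string.
const resolvers = {
  Query: {
    heroes: (
      _parent: unknown,
      args: { namesIn?: string[]; namesNotIn?: string[] },
      ctx: { heroes: { name: string }[] }
    ) =>
      ctx.heroes.filter(
        (hero) =>
          (!args.namesIn || args.namesIn.includes(hero.name)) &&
          (!args.namesNotIn || !args.namesNotIn.includes(hero.name))
      ),
  },
};

A client would then query heroes(namesIn: ["R2-D2", "C-3PO", "BB-8"], namesNotIn: ["K-2SO"]) instead of encoding the whole Boolean expression into a single string.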

GraphQL: Utilizing introspection functionality for data mutation [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
From my understanding, GraphQL is a great query language for fetching data. However, data mutation, even when using a GraphQL client framework such as Relay, does not seem to be friendly to client-side developers: they need to know the logic behind the mutation and use it inside the client code.
Would it be better if GraphQL could expose some information to Relay via its introspection functionality, given that no frameworks seem to be doing this already? Also, what would be some of the technical challenges involved in building a GraphQL client this way?
GraphQL has chosen to implement mutations in a purely RPC-style model. That is, mutations don't include any metadata about what kinds of changes they are likely to make to the backend. As a contrast, we can look at something like REST, where verbs like POST and PATCH indicate the client's intention about what should happen on the backend.
There are pros and cons to this. On the one hand, it's more convenient to write client code if your framework can learn to incorporate changes automatically; however, I would claim this is not possible in all but the most principled of REST APIs. On the other hand, the RPC model has a huge advantage in that the server is not limited in the kinds of operations it can perform. Rather than needing to describe modifications in terms of updates to particular objects, you can simply define any semantic operation you like, as long as you can write the server code.
Is this consistent with the rest of GraphQL?
I believe that the current implementation of mutations is consistent with the data fetching part of GraphQL's design, which has a similar concept: Any field on any object could be computed from the others, meaning that there is no stable concept of an "object" in the output of a query. So in order to have mutations which automatically update the results from a query, you would need to take into account computed fields, arguments, aggregates, etc. GraphQL as currently specified seems to explicitly make the trade off that it's fine for the information transfer from the server to be lossy, in order to enable complete flexibility in the implementation of server-side fields.
Are there some mutations that can be incorporated automatically?
Yes. In particular, if your mutation return values incorporate the same object types as your queries, a smart GraphQL client such as Apollo Client will merge those results into the cache without any extra work. By using fragments and picking convenient return types for mutations, you can get by with this approach for most or all mutations:
fragment PostDetails on Post {
  id
  score
  title
}
query PostWithDetails {
  post(id: 5) {
    ...PostDetails
  }
}
mutation UpdatePostTitle {
  updatePostTitle(id: 5, newTitle: "Great new title") {
    ...PostDetails
  }
}
The place where things get tricky are for mutations that are inserting and deleting objects, since it's not immediately clear what the client should do with that mutation result.
Can this be improved on with introspection or otherwise?
I think it would be very advantageous to have a restricted model for mutations that works more automatically, if the ability to upgrade to a more flexible approach is preserved.
One particular example would be to have a semantic way to declare "delete" mutations:
type Mutation {
  deletePost(id: ID!): DeletePostResult @deletes
}
If a client can read the directives on these mutation fields via introspection, then it could identify the @deletes directive and guess that the id argument identifies an object that was deleted and should be purged from the cache.
I'm one of the core contributors to Apollo, and I think it would be quite easy to experiment with features like this in companion packages. We had some inklings of this in core as well, and intentionally designed the store format to make things like this possible.
TL;DR
The current approach makes GraphQL super flexible and is consistent with the rest of the design, but it would be interesting to add conventions to make some mutations automatic.
