From my understanding, GraphQL is a great query language for fetching data. However, data mutation, even when using a GraphQL client framework such as Relay, does not seem very friendly to client-side developers: they need to know the logic behind each mutation and reproduce it inside the client code.
Would it be better if GraphQL exposed some of this information to Relay via its introspection functionality? No framework seems to do this already, so what would be some of the technical challenges involved in building a GraphQL client this way?
GraphQL has chosen to implement mutations in a purely RPC-style model. That is, mutations don't include any metadata about what kinds of changes they are likely to make to the backend. As a contrast, we can look at something like REST, where verbs like POST and PATCH indicate the client's intention about what should happen on the backend.
There are pros and cons to this. On the one hand, it's more convenient to write client code if your framework can incorporate changes automatically; however, I would claim this is only possible in the most principled of REST APIs. On the other hand, the RPC model has a huge advantage in that the server is not limited in the kinds of operations it can perform. Rather than needing to describe modifications in terms of updates to particular objects, you can simply define any semantic operation you like, as long as you can write the server code.
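To make that concrete, here is an illustrative schema sketch (the type and field names are my own, not from the question) contrasting a semantic, RPC-style mutation with a generic patch-style one:

const typeDefs = /* GraphQL */ `
  type FriendRequest {
    id: ID!
    status: String
  }

  type Mutation {
    # RPC-style: the server decides everything "accepting" implies
    # (update the request, create the friendship, send notifications, ...).
    acceptFriendRequest(requestId: ID!): FriendRequest

    # Patch-style: the client must know which fields to change and how,
    # and the server can only apply exactly that change.
    setFriendRequestStatus(id: ID!, status: String!): FriendRequest
  }
`;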
Is this consistent with the rest of GraphQL?
I believe that the current implementation of mutations is consistent with the data fetching part of GraphQL's design, which has a similar concept: any field on any object could be computed from the others, meaning that there is no stable concept of an "object" in the output of a query. So in order to have mutations that automatically update the results of a query, you would need to take into account computed fields, arguments, aggregates, etc. GraphQL as currently specified seems to explicitly make the trade-off that it's fine for the information transfer from the server to be lossy, in order to enable complete flexibility in the implementation of server-side fields.
Are there some mutations that can be incorporated automatically?
Yes. In particular, if your mutation return values incorporate the same object types as your queries, a smart GraphQL client such as Apollo Client will merge those results into the cache without any extra work. By using fragments and picking convenient return types for mutations, you can get by with this approach for most or all mutations:
fragment PostDetails on Post {
  id
  score
  title
}

query PostWithDetails {
  post(id: 5) {
    ...PostDetails
  }
}

mutation UpdatePostTitle {
  updatePostTitle(id: 5, newTitle: "Great new title") {
    ...PostDetails
  }
}
Where things get tricky is with mutations that insert or delete objects, since it's not immediately clear what the client should do with the mutation result.
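For a delete, for example, a common approach (sketched below with Apollo Client; the deletePost mutation and Post type are assumptions carried over from the snippet above) is to tell the cache explicitly what to drop in an update callback:

// A minimal sketch of handling a delete by hand with Apollo Client.
import { gql, useMutation } from "@apollo/client";

const DELETE_POST = gql`
  mutation DeletePost($id: ID!) {
    deletePost(id: $id) {
      id
    }
  }
`;

function useDeletePost() {
  return useMutation(DELETE_POST, {
    update(cache, { data }) {
      const deletedId = data?.deletePost?.id;
      if (!deletedId) return;
      // The mutation result alone doesn't tell the cache what to remove,
      // so evict the normalized Post entry and garbage-collect dangling references.
      cache.evict({ id: cache.identify({ __typename: "Post", id: deletedId }) });
      cache.gc();
    },
  });
}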
Can this be improved on with introspection or otherwise?
I think it would be very advantageous to have a restricted model for mutations that works more automatically, if the ability to upgrade to a more flexible approach is preserved.
One particular example would be to have a semantic way to declare "delete" mutations:
type Mutation {
  deletePost(id: ID!): DeletePostResult @deletes
}
If a client could read the directives on these mutation fields via introspection, it could identify the @deletes directive and infer that the id argument identifies an object that was deleted and should be purged from the cache.
I'm one of the core contributors to Apollo, and I think it would be quite easy to experiment with features like this in companion packages. We had some inklings of this in core as well, and intentionally designed the store format to make things like this possible.
TL;DR
The current approach makes GraphQL super flexible and is consistent with the rest of the design, but it would be interesting to add conventions to make some mutations automatic.
Related
I have a situation, using Apollo InMemoryCache on a React client, where I'd like to be able to instruct Apollo not to use cache normalization for certain nodes in the graph without having to disable caching entirely for that type. Is this possible?
To better explain what I mean: Say that I have an entity Person, that I generally want Apollo to use cache for, but I also have an endpoint called PersonEvent that has these two fields:
old: Person!
new: Person!
This returns two historic snapshots of the same person, used for showing what changed at a certain event in time. The problem is that with cache normalization turned on for Person, the cache would interpret old and new as being the same instance, since they have the same id and __typename, and replace both with the same reference.
I know it is possible to configure Apollo not to normalize objects of a certain type, using the config code below, but then all caching of Person objects is disabled, and that's not what I want:
typePolicies: {
  Person: {
    keyFields: false
  }
}
So my question is: what would be the best-practice way to handle this situation? I think there's kind of a philosophical question to it as well: "Is a snapshot of a person, a person?". I could potentially ask the backend dev to add some sort of timestamp to the Person entity so that it could be used to build a unique ID, but I also feel like that would be polluting the Person object, as the timestamp is only relevant in the case of a snapshot (which is an edge case). Is this a situation that should generally be solved on the client side or the server side?
Given that the graph is as it is, I'd like to only instruct Apollo not to cache the old/new fields on PersonEvent, but I haven't found a way to achieve that yet.
To get philosophical with you:
Is a snapshot of a person, a person?
I think the problem you're having answers your question. The point of a cache is that you can set a value by its ID and load that value by its ID; the same can be said for any entity, and the cache is just a way of loading the entity. Your Person object appears to be an entity. I'm guessing, based on your conundrum, that this is NOT true for the snapshot: it isn't "an entity"; it doesn't have "an ID" (though it may contain the value of something else's id).
What you are describing is an immutable value object.
And so, IMO, the solution here would be to create a type that represents a value object, and as such is uncacheable: PersonSnapshot (or similar).
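As a rough sketch of that idea with Apollo's typePolicies (PersonSnapshot is a hypothetical type name; the server would have to return it instead of Person for the old/new fields):

// Sketch: the snapshot type has no cache identity, while Person stays normalized.
import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    PersonSnapshot: {
      // Never normalize snapshots: each one is stored inline under its parent
      // PersonEvent, so "old" and "new" no longer collapse into a single entry.
      keyFields: false,
    },
    // Person keeps its default key (id + __typename) and is cached as usual.
  },
});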
I saw some GraphQL implementations that return the whole object after deletion and some implementations that return only the id of the deleted object.
What is the right way according to graphql specification?
The specification is not there to dictate API design decisions, nor even to prescribe best practices. It's there to make sure different GraphQL engines and clients are compatible with each other.
As for your question, there's no right or wrong answer. Do what makes sense for your use case. If you take an ID as the input for deletion, it makes sense to return the whole object. If you accept the whole object already, there's not much benefit in returning the exact same thing right back...
Decide what makes sense and keep your API consistent across the operations.
Still quite new to GraphQL.
The idea is to 'secure' mutations, meaning restricting them to the current user passed in the context. A basic one:
Create = GraphQL::Relay::Mutation.define do
  name "AddItem"

  input_field :title, !types.String

  return_field :item, Types::ItemType
  return_field :errors, types[types.String]

  resolve -> (object, inputs, ctx) {
    if ctx[:current_user]
      # ... do the stuff ...
    else
      # ... returns an error ...
    end
  }
end
With multiple mutations, this very same condition would have to be repeated every time it's needed.
I'm obviously biased by before_action in Rails; is there something similar available in graphql-ruby? (Like 'protected mutations'; in any case, I'm looking to selectively protect specific parts of the available output in a centralized setup.)
Or should the approach be completely different?
As of the time of this writing, the GraphQL spec does not define anything having to do with authz/authn. Generally speaking, people put their GraphQL layer behind a gateway of some sort and pass the auth token in with the query. How to do this will depend on your implementation. In JavaScript GraphQL servers, there is a "context" that is passed to all resolvers.
In other words, securing queries and mutations at the resolver level is currently the best practice in GraphQL.
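For illustration, in a JavaScript/TypeScript server this kind of check can be centralized with a small wrapper around resolvers, much like before_action; the context shape (currentUser) and resolver names below are assumptions, not graphql-ruby's API:

// Hedged sketch of a reusable "require a logged-in user" guard for resolvers.
interface Context {
  currentUser?: { id: string };
}

type Resolver<TArgs, TResult> = (parent: unknown, args: TArgs, ctx: Context) => TResult;

// Wraps a resolver so the auth check is written once and reused everywhere.
function requireUser<TArgs, TResult>(resolve: Resolver<TArgs, TResult>): Resolver<TArgs, TResult> {
  return (parent, args, ctx) => {
    if (!ctx.currentUser) {
      throw new Error("Not authenticated");
    }
    return resolve(parent, args, ctx);
  };
}

const resolvers = {
  Mutation: {
    // Equivalent of the AddItem mutation above, but the guard runs before the body.
    addItem: requireUser((_parent, args: { title: string }, _ctx) => {
      // ... do the stuff, scoped to the current user from the context ...
      return { item: { title: args.title }, errors: [] as string[] };
    }),
  },
};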
Specific to Ruby, however, it does look like there is a paid version of the software that has some nice auth features built in.
http://graphql-ruby.org/pro/authorization
I'm building a Graphene-Django based GraphQL API. One of my colleagues, who is building an Angular client that will use the API, has asked if there's a way to store frequently used queries somehow on the server side so that he can just call them by name.
I have not yet encountered such functionality, so I am not sure if it's even possible.
FYI, he is using the Apollo Client, so maybe such "named" queries are strictly client-side? Here's a page he referred me to: http://dev.apollodata.com/angular2/cache-updates.html
Robert
Excellent question! I think the thing you are looking for is called "persisted queries." The GraphQL spec only outlines
A Type System for a schema
A formal language for queries
How to validate/execute a query against a schema
Beyond that, it is up to the implementation to make specific optimizations. There are a few ways to do persisted queries, and different ones may be more or less helpful for your project.
Storing Queries as a String
Queries can easily be stored as strings, and the convention is to use *.gql files to do that. Many editors/IDEs will even have syntax highlighting for this. To consume them later, just URL-encode them and you're all set! Since these strings are "known", you can whitelist the requests on the server if you choose.
const myQuery = `
  {
    user {
      firstName
      lastName
    }
  }
`

// URL-encode the query so it can be passed as a query-string parameter.
const url = `www.myserver.com?query=${encodeURIComponent(myQuery)}`
Persisted Queries
For a more sophisticated approach, you can take the queries that are extracted from your project (either from strings or using a build tool), register them ahead of time, and store them in a DB keyed by an ID or hash, so that clients only send that identifier instead of the full query. This is what Facebook does. There are plenty of tools out there to help you with this, and the Awesome-GraphQL repo is a good place to start looking.
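As a tiny illustration of the persisted-query idea (the hashing scheme and in-memory store are assumptions, not any specific library's API), the client only sends a short identifier, and the server refuses anything it hasn't registered:

// Sketch: register extracted queries at build time, look them up by hash at request time.
import { createHash } from "crypto";

const queryStore = new Map<string, string>();

// Build step: store each extracted query under its hash and ship the hash to the client.
export function persistQuery(queryText: string): string {
  const hash = createHash("sha256").update(queryText).digest("hex");
  queryStore.set(hash, queryText);
  return hash;
}

// Request time: resolve the hash back to the full query, rejecting unknown ones,
// which doubles as a whitelist of allowed operations.
export function lookupQuery(hash: string): string {
  const query = queryStore.get(hash);
  if (!query) {
    throw new Error("Unknown persisted query");
  }
  return query;
}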
Resources
Check out this blog for more info on Persisted Queries
The project I am currently working on requires a lot of searching/filtering pages. For example, I have a complex search page to get Issues by date, category, unit, ...
The Issue domain class is complex and contains lots of value objects and child objects.
I am wondering how people deal with searching/filtering/reporting for the UI. As far as I know, I have 3 options, but none of them makes me happy.
1.) Send parameters to a Repository/DAO to get a DataTable and bind the DataTable to UI controls, for example an ASP.NET GridView:
DataTable dataTable = issueReportRepository.FindBy(specs);
.....
grid.DataSource = dataTable;
grid.DataBind();
In this option I can simply bypass the domain layer and query the database for the given specs, and I don't have to get a fully constructed, complex domain object. There is no need for value objects, child objects, etc.: I get the data to be displayed in the UI into a DataTable directly from the database and show it in the UI.
But if I have to show a calculated field in the UI, such as a method's return value, I have to do this in the database because I don't have the full domain object. I have to duplicate logic, and DataTables come with problems like no IntelliSense, etc.
2.) Send parameters to a Repository/DAO to get DTOs and bind the DTOs to UI controls:
IList<IssueDTO> issueDTOs = issueReportRepository.FindBy(specs);
....
grid.DataSource = issueDTOs;
grid.DataBind();
This option is the same as the one above, but I have to create anemic DTO classes for every search page. Also, for different Issue search pages I have to show different parts of the Issue object: IssueSearchDTO, CompanyIssueDTO, MyIssueDTO, ...
3.) Send parameters to the real Repository class to get fully constructed domain objects:
IList<Issue> issues = issueRepository.FindBy(specs);
// Bind to grid...
I like Domain-Driven Design and patterns. There are no DTOs or duplicated logic in this option, but I have to create lots of child and value objects that will never be shown in the UI. It also requires lots of joins to get the full domain object, and there is a performance cost for the needless child objects and value objects.
I don't use any ORM tool. Maybe I can implement lazy loading by hand for this version, but that seems a bit overkill.
Which one do you prefer? Or am I doing it wrong? Are there any suggestions or a better way to do this?
I have a few suggestions, but of course the overall answer is "it depends".
First, you should be using an ORM tool or you should have a very good reason not to be doing so.
Second, implementing Lazy Loading by hand is relatively simple, so in the event that you're not going to use an ORM tool, you can simply create properties on your objects that look something like this:
private Foo _foo;

public Foo Foo
{
    get
    {
        // Load the related Foo from the repository only on first access.
        if (_foo == null)
        {
            _foo = _repository.Get(id);
        }
        return _foo;
    }
}
Third, performance is something that should be considered initially but should not drive you away from an elegant design. I would argue that you should use (3) initially and only deviate from it if its performance is insufficient. This results in writing the least amount of code and having the least duplication in your design.
If performance suffers you can address it easily in the UI layer using Caching and/or in your Domain layer using Lazy Loading. If these both fail to provide acceptable performance, then you can fall back to a DTO approach where you only pass back a lightweight collection of value objects needed.
This is a great question and I wanted to provide my answer as well. I think the technically best answer is to go with option #3. It provides the ability to best describe and organize the data along with scalability for future enhancements to reporting/searching requests.
However, while this might be the overall best option, there is a huge cost IMO vs. the other two options, which is the additional design time for all the classes and relationships needed to support the reporting needs (again under the premise that no ORM tool is being used).
I struggle with this in a lot of my applications as well, and the reality is that #2 is the best compromise between time and design. Now, if you were asking about your business objects and all their needs, there is no question that a fully laid out and properly designed model is important and there is no substitute. However, when it comes to reporting and searching, this to me is a different animal. #2 provides strongly typed data in the anemic classes, is not as primitive as hardcoded values in DataSets like #1, and still greatly reduces the amount of time needed to complete the design compared to #3.
Ideally I would love to extend my object model to encompass all reporting needs, but sometimes the effort required to do this is so extensive that creating a separate set of classes just for reporting needs is an easier but still viable option. I actually asked almost this identical question a few years back and was also told that creating another set of classes (essentially DTOs) for reporting needs was not a bad option.
So to wrap it up, #3 is technically the best option, but #2 is probably the most realistic and viable option when considering time and quality together for complex reporting and searching needs.