Is there any mention in the GraphQL specification about the delete mutation return type?

I saw some GraphQL implementations that return the whole object after deletion and some implementations that return only the id of the deleted object.
What is the right way according to the GraphQL specification?

The specification is not there to dictate API design decisions nor even prescribe best practices. It's there to make sure different GraphQL engines and clients are compatible with each other.
As for your question, there's no right or wrong answer. Do what makes sense for your use case. If you take an ID as the input for deletion, it makes sense to return the whole object. If you accept the whole object already, there's not much benefit in returning the exact same thing right back...
Decide what makes sense and keep your API consistent across the operations.
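To make the two options concrete, here is a minimal sketch in SDL (embedded in a string, as graphql-js tooling accepts); the type and field names are illustrative, not mandated by the spec:

const typeDefs = /* GraphQL */ `
  type User {
    id: ID!
    name: String!
  }

  type Mutation {
    # Option A: echo the deleted object, useful when the client
    # only had the ID and wants a last look at what was removed.
    deleteUser(id: ID!): User

    # Option B: return just the ID, enough for the client to
    # evict the entity from its cache.
    deleteUserById(id: ID!): ID
  }
`;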

Related

Conditionally disable Apollo cache normalization for certain usage of a type?

I have a situation, using Apollo InMemoryCache on a React client, where I'd like to be able to instruct Apollo not to use cache normalization for certain nodes in the graph without having to disable caching entirely for that type. Is this possible?
To better explain what I mean: Say that I have an entity Person, that I generally want Apollo to use cache for, but I also have an endpoint called PersonEvent that has these two fields:
old: Person!
new: Person!
This returns two historic snapshots of the same person, used for showing what changed at a certain event in time. The problem is that with cache normalization turned on for Person, the cache interprets old and new as being the same instance, since they have the same id and __typename, and replaces both with a reference to the same normalized object.
I know it is possible to configure Apollo not to normalize objects of a certain type, using the config code below, but then all caching of Person objects is disabled, and that's not what I want:
typePolicies: {
  Person: {
    keyFields: false
  }
}
So my question is: What would be the best-practice way to handle this situation? I think there's kind of a philosophical question to it as well: "Is a snapshot of a person, a person?". I could potentially ask the backend dev to add some sort of timestamp to the Person entity so that it could be used to build a unique ID, but I also feel like that would be polluting the Person object, as the timestamp is only relevant in the case of a snapshot (which is an edge case). Is this a situation that should generally be solved on the client side or the server side?
Given that the graph is as it is, I'd like to only instruct Apollo not to cache the old/new fields on PersonEvent, but I haven't found a way to achieve that yet.
To get philosophical with you:
Is a snapshot of a person, a person?
I think you're answering your question by the problem you're having. The point of a cache is that you can set a value by its ID and you can load that value by its ID. The same can be said for any entity. The cache is just a way of loading the entity. Your Person object appears to be an entity. I'm guessing based on your conundrum that this is NOT true for this snapshot; that it isn't "an entity"; that it doesn't have "an ID" (though it may contain the value of something else's id).
What you are describing is an immutable value object.
And so, IMO, the solution here would be to create a type that represents a value object, and as such is uncacheable: PersonSnapshot (or similar).
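A minimal sketch of what that could look like, assuming the backend exposes the snapshots under a hypothetical PersonSnapshot type; disabling normalization for that type embeds each snapshot inside its parent PersonEvent instead of merging old and new into one Person record:

import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    // Person keeps its default normalization and stays cached by id.
    PersonSnapshot: {
      // keyFields: false tells Apollo this type has no identity of its
      // own, so each snapshot is stored inline in the object holding it.
      keyFields: false,
    },
  },
});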

How do I 'destroy all' a given Resource type in redux-saga?

I'm new to Redux-Saga, so please assume very shaky foundational knowledge.
In Redux, I am able to define an action and a subsequent reducer to handle that action. In my reducer, I can do just about whatever I want, such as 'delete all' of a specific state tree node, e.g.
switch (action.type) {
  // ...
  case 'DESTROY_ALL_ORDERS':
    return {
      ...state,
      orders: []
    };
  // ...
}
However, it seems to me (after reading the docs) that reducers are defined by Saga, and you have access to them in the form of certain CRUD verb prefixes with invocation suffixes, e.g.
fetchStart, destroyStart
My instinct is to use destroyStart, but the method accepts a model instance, not a collection; i.e. it can only destroy a given resource instance (in my case, one Order).
TL;DR
Is there a destroyStart equivalent for a group of records at once?
If not, is there a way I can add custom behavior to the Saga-created reducers?
What have I missed? Feel free to be as mean as you want; I have no idea what I'm doing, but when you are done roasting me, do me a favor and point me in the right direction.
EDIT:
To clarify, I'm not trying to delete records from my database. I only want to clear the Redux store of all 'Order' Records.
Two key bits of knowledge were gained here.
My team is using a library called redux-api-resources, which to some extent I was conflating with Saga. This library was created by a former employee, and adds about as much complexity as it removes. I would not recommend it. destroyStart is provided by this library and is not specifically related to Saga. However, the answer for anyone using this library (redux-api-resources) is no, there is no bulk destroy action.
Reducers are created by Saga, as pointed out in the above comments by @Chad S. The mistake in my thinking was that I believed I should somehow crack open this reducer and fill it with complex logic. The 'Saga' way to do this is to put logic in your generator function, which is where you (can) define your control flow. I make no claim that this is best practice, only that this is how I managed to get my code working.
I know very little about Saga and Redux in general, so please take these answers with a grain of salt.
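To illustrate the "logic in the generator function" approach described above, here is a rough sketch (the action names are hypothetical, and the reducer handling DESTROY_ALL_ORDERS is the plain one from the question):

import { put, takeEvery } from 'redux-saga/effects';

// The saga is where the control flow lives: side effects, checks, etc.
function* destroyAllOrdersSaga() {
  // ...any API calls or conditions would go here...
  // Then dispatch the plain action; the reducer clears the store.
  yield put({ type: 'DESTROY_ALL_ORDERS' });
}

export function* rootSaga() {
  yield takeEvery('DESTROY_ALL_ORDERS_REQUESTED', destroyAllOrdersSaga);
}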

Why Elasticsearch uses PUT instead of POST for creating index?

As far as I know, POST is usually used for changing the state of the server, and PUT usually for updating the information. If I am creating a new index, should it not be POST instead of PUT? PUT does make sense when creating a document as it changes the state of data.
Your statement
As far as I know, POST is usually used for changing the state of the server, and PUT usually for updating the information.
does conform to the conventional HTTP vs CRUD semantics:
POST (CRUD equivalent: Create): Let the target resource process the representation enclosed in the request.
PUT (CRUD equivalent: Update): Set the target resource's state to the state defined by the representation enclosed in the request.
However, the PUT spec also stipulates that:
The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload.
As such, PUT can be (and is) used in Elasticsearch both to create an index AND to update its settings and mappings.
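For example, a minimal sketch of the create-then-update flow against a local cluster (the index name and settings values are illustrative):

const ES = 'http://localhost:9200';

// PUT creates the index if it does not exist yet...
await fetch(`${ES}/my-index`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ settings: { number_of_shards: 1 } }),
});

// ...and PUT also updates the settings of the existing index.
await fetch(`${ES}/my-index/_settings`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ index: { number_of_replicas: 2 } }),
});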
Also, keep in mind that it's rarely just a matter of strict adherence to the semantics. One of the creators of ES put it this way:
It's all about REST semantics.
And our understanding of the semantics at the time when we made the APIs. And backwards compatibility constraints. And whatever "feels" natural to the person who implemented the API.
Where it makes a lot of sense Elasticsearch maps the HTTP verbs to useful things. But when it doesn't make a ton of sense we just go with whatever verb feels good rather than trying to be super strict about REST. Also, we don't do linked data, instead relying on you to build links from context. I'm told that is particularly non-REST. But it is what we do.

What is the point of having PATCH, POST, PUT types when we use repository save methods for all?

As a newcomer to Spring I would like to know the actual difference between:
@PostMapping
@PutMapping
@PatchMapping
My understanding is that PUT is for updates, but then we have to get the element by its id and then save() it. Similarly, the save() method is also used by POST, which automatically replaces the entity by its identifier (primary key). In my application I am able to use these three methods interchangeably.
What is the point of having PATCH, POST, PUT types when we use repository save methods for all?
HTTP method tokens are used to define request semantics in such a way that general purpose components (browsers, reverse proxies, etc) can exploit the information to do intelligent things.
The easiest of these is that PUT has idempotent semantics; if an HTTP response is lost, a general purpose component knows that it may autonomously retry sending the request. This in turn gives you a bit of extra reliability over an unreliable network, "for free".
The fact that your origin server uses the same persistence mechanism for each is an implementation detail, something deliberately hidden behind the "uniform interface".
The difference between PATCH and POST is subtle; PATCH gives you an unambiguous way to designate that the enclosed entity is a patch document, and offers a mechanism for discovering which patch document formats are understood by the origin server, neither of which you get from POST alone.
What's less clear, at least to me, is whether PATCH semantics allow an intermediate component to do something intelligent with a request - in other words, do the additional constraints (relative to POST) allow intermediaries to do anything interesting?
As best I can tell, the semantics of a PATCH request are more specific, but not actionably more specific -- certainly not as obviously as we have in the case of safe or idempotent request semantics.
POST is for creating a brand new object.
PUT will replace all of an object's properties in one go.
Leaving a property empty will empty the value in the datastore.
PATCH does a partial update of an object.
You can send it just the properties which should be updated.
A PATCH request with all object properties included will have the same effect as a PUT request. But they are not the same.
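A small sketch of the difference at the HTTP level, against a hypothetical /users/42 resource:

// PUT replaces the whole representation; omitting 'email' would clear it.
await fetch('/users/42', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Ada', email: 'ada@example.com' }),
});

// PATCH sends only the fields to change; 'email' is left untouched.
await fetch('/users/42', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Ada Lovelace' }),
});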
The HTTP method is a convention not specific to Spring but is a main pillar of the REST API specification.
They make sure the intent of a request is clear and that both the provider and the consumer are in agreement about the end result.
Kind of like the pedals or gear shift in our cars. It's a lot easier when they all work the same.
Switching them up could lead to a lot of accidents.
For us as developers, it means we can expect most REST APIs to behave in a similar way, assuming an API is implemented according to or reasonably close to the specification.
POST/PUT/PATCH may look alike but there are subtle differences.
As you mention, the PUT and PATCH methods require some kind of ID of the object to be updated.
Take the example of a combined POST/PUT/PATCH endpoint: a request comes in with an object, omitting some of its properties. How does the API react? It could:
Update only the received properties.
Update the entire object, emptying the omitted properties.
Attempt to create a new object.
How is the consumer of the endpoint to know which of the three actions the server took?
This is where the HTTP method and specification/convention help determine the appropriate course of action.
Spring may provide the save method, which can handle creation, full updates and partial updates alike. But this is not necessarily the case for other frameworks in Java or other languages.
Also, your application may be simple enough to handle POST/PUT/PATCH in the same controller method right now.
But over time as your application grows more complex, the separation of concerns makes your code a lot cleaner, more readable and maintainable.
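As a sketch of that separation (shown in Express rather than Spring, since the routing idea carries over; the routes and handler bodies are hypothetical):

import express from 'express';

const app = express();
app.use(express.json());

// Each verb gets its own handler, so the contract stays unambiguous.
app.post('/orders', (req, res) => {
  res.status(201).json({ id: 'generated-id', ...req.body }); // create
});
app.put('/orders/:id', (req, res) => {
  res.json({ id: req.params.id, ...req.body }); // full replacement
});
app.patch('/orders/:id', (req, res) => {
  res.json({ id: req.params.id, ...req.body }); // partial update
});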

Connection is based on `array`; is this a design style guide for designing a Relay server?

In connection/arrayconnection.js, it seems all the functions tend to work with arrays.
For example, offsetToCursor is the only way to generate a Cursor. Does this mean it's a design pattern I must follow, or does it imply that I should generate the Cursor myself when using something other than an array? If I'm planning to use MongoDB, should I make the database interface behave like a static array?
BTW:
As a newbie to web development, I'm a bit confused about how to implement a proper Relay server.
Are there any guides for designing a graphql-relay server? Should I follow everything in graphql-relay-js? Which database does Facebook use with its Relay server: MySQL, or something else?
I'm not sure whether asking this here is appropriate, but the topic of graphql-relay-js is rarely covered on the web.
Thanks a lot, and forgive my impoliteness.
var PREFIX = 'arrayconnection:';
/**
* Creates the cursor string from an offset.
*/
export function offsetToCursor(offset: number): ConnectionCursor {
return base64(PREFIX + offset);
}
Additional question:
Maybe I can get some ideas from developers.facebook.com/docs/graph-api.
It seems I should keep an array-style cache for pagination lookups (not sure about this).
But the Graph API looks a bit different from graphql-relay-js (is the Graph API still partly in the old RESTful style?).
What is the relationship between the Graph API and graphql-relay-js? Is graphql-relay-js a common design guide for building a GraphQL server at Facebook?
Thanks a lot! Please give me some hints.
Connection is a design pattern that your schema may implement if you want Relay to perform efficient pagination. How it gets implemented on the backend is an implementation detail. It may be backed by something array-like, or it may not (think about something like the infinite scrolling news feed on Facebook, which is ranked by a terribly sophisticated backend service: this is clearly not backed by an array). We provide the arrayconnection.js module as a way of demonstrating how this can be done if your data source has that array-like nature. If it does not, or cannot be efficiently converted to it, you are better off implementing something from scratch.
Cursors are opaque identifiers. You could use an array index or some kind of primary key if you are using an array source or a typical database backend (like MySQL), but again the details are implementation-specific and should be chosen to suit your back end. The only requirement is that the cursor should encode whatever information you need on the server to be able to return the next page of results after (or before) that point.
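For instance, here is a sketch of an opaque cursor keyed on a database primary key rather than an array offset, mirroring the base64 idea in arrayconnection.js (the prefix and key shape are up to your backend):

const PREFIX = 'mongoconnection:';

// Encode a primary key into an opaque cursor string.
function cursorFromKey(key: string): string {
  return Buffer.from(PREFIX + key).toString('base64');
}

// Decode the cursor back into the key on the server.
function keyFromCursor(cursor: string): string {
  return Buffer.from(cursor, 'base64').toString('utf8').slice(PREFIX.length);
}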
graphql-relay-js is just a collection of modules that provide some helpers for building Relay-compatible GraphQL schemas in JavaScript. The schema provides a uniform interface to your data, but the actual underlying storage can be anything you want to plug into it (a MySQL database, an object in memory, some REST service). For simple examples, look in the examples directory in the Relay repo. As an illustration of how you can put a schema in front of something that is not a traditional database, this is an example of a schema that reads its data out of a Git repo, with the help of indices in Redis and cached data in memcached.
Stay away from developers.facebook.com/docs/graph-api; despite the "graph" in the name this is an entirely different thing and has nothing to do with the GraphQL hierarchical query language that Relay uses.
