Using a NODE_DELETE config requires the parent, and the fat query has to actually return the parent of the connection; otherwise Relay errors when deleting:
RelayMutationQuery: Invalid field name on fat query
Unfortunately, the fat query below refetches ALL my nested items, which is simply unacceptable for my use case:
fragment on deleteItemNested @relay(pattern: true) {
  id
  ok
  item {
    nested {
      edges {
        node { id }
      }
    }
  }
  clientMutationId
}
Is there a way to delete an item from a connection/list without refetching all the data? If I try not to fetch the edges inside nested, nested comes back as just an empty object.
All the nested items are refetched because @relay(pattern: true) was used in the query. This makes the query match against the tracked query, which already includes the nested fields. See the excellent answer by steveluscher to the question Purpose of @relay(pattern: true).
The NODE_DELETE code example in the mutations documentation is also worth a look.
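As a rough sketch of what that looks like without pattern matching (the prop names, and the assumption that the payload's top-level id is the deleted node's id, are mine rather than from the question; adjust them to your schema):

class DeleteItemNestedMutation extends Relay.Mutation {
  getMutation() {
    return Relay.QL`mutation { deleteItemNested }`;
  }
  getVariables() {
    // id of the nested item to delete (hypothetical prop name)
    return {id: this.props.nestedItemId};
  }
  getFatQuery() {
    // No @relay(pattern: true): only the fields listed here may be refetched,
    // so the nested connection is not re-fetched wholesale.
    return Relay.QL`
      fragment on deleteItemNested {
        item { id }
        id
      }
    `;
  }
  getConfigs() {
    return [{
      type: 'NODE_DELETE',
      parentName: 'item',
      parentID: this.props.itemId, // parent of the connection
      connectionName: 'nested',
      deletedIDFieldName: 'id', // payload field holding the deleted node's id
    }];
  }
}

With NODE_DELETE, Relay removes the deleted node from the nested connection in its store itself, so nothing beyond the listed fields needs to be refetched.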
Given the following schema:
type Movie {
  name: String!
  actors: [Actor!]!
}

type Actor {
  name: String!
  awards: [Award!]!
}

type Award {
  name: String!
  date: String!
}

type Query {
  movies: [Movie!]!
}
I'd like to be able to run the following three types of queries as efficiently as possible:
Query 1:
query {
  movies {
    actors {
      awards {
        name
      }
    }
  }
}
Query 2:
query {
  movies {
    name
  }
}
Query 3:
query {
  movies {
    name
    actors {
      awards {
        date
      }
    }
  }
}
Please note, these are not the only queries I will be running, but I'd like my code to be able to pick the optimal path "automatically".
The rest of my business logic uses JPA. The data comes from three corresponding tables, each of which can have up to 40 columns.
I am not looking for code examples, but rather for a high-level structure describing the different elements of the architecture and their respective responsibilities.
Without further context and details of your DB schema, all I can do is give you some general advice to be aware of.
Most probably you will encounter the N+1 loading performance issue when executing a query that spans several levels of related objects stored in different DB tables.
Generally there are two ways to solve it:
Use DataLoader. Its idea is to defer the actual loading of each object to a moment when multiple objects can be batch-loaded together with a single SQL query. It also provides caching to further improve loading performance within the same request. (See the sketch after this list.)
Use "look ahead pattern" (Refer this for an example). Its ideas is that when you resolve the parent object , you can look ahead to analyse the GraphQL query that you need to execute require to include others related children or not. If yes , you can then use the JOIN SQL to query the parent object together with their children such that when you resolve its children later , they are already fetched and you do not need to fetch them again.
Also, if the number of objects in your domain is in theory unbounded, you should consider implementing pagination for the query, in order to cap the number of objects it can return.
Following the connection-based model for pagination in GraphQL, I have the following simplified schema.
type User {
  id: ID!
  name: String!
}

type UserConnection {
  totalCount: Int
  pageInfo: PageInfo
  edges: [UserEdge]
}

type UserEdge {
  cursor: String
  node: User
}

type PageInfo {
  lastCursor: Int
  hasNextPage: Boolean
}

type Query {
  users(first: Int, after: String): UserConnection
}
Consider the following routes within an SPA front-end:
/users - once the user hits this page, I fetch the first 10 records from the top of the list, and from there I can paginate by reusing the cursor retrieved from the first response.
/user/52 - here I'd like to show the 10 records starting right from the position of user 52.
Problem: what are the possible ways to retrieve a particular subset of records on the very first request? At this point I don't have any cursor with which to construct something like
query GetTenUsersAfter52 {
  users(first: 10, after: "????") { # struggling to pass anything as a cursor...
    edges {
      node {
        name
      }
    }
  }
}
What I've already tried (a possible solution): I know that on the back-end the cursor is an encoded value of the record's _id in the DB. So, being on /user/52, I can make an individual request for that particular user, grab the value of its id, compute a cursor on the front-end, and pass it to the back-end in the query above.
But personally I see a couple of disadvantages in this:
I'm exposing to the front-end how my cursor is computed, which is bad: if I ever need to change that procedure, I have to change it on both the front-end and the back-end...
I don't want to add a separate query field for an individual user simply because I need its id to pass to the users query field.
And I don't want to make two API calls for this either...
This is a good example of how Relay-style pagination can be limiting. You'll hit a similar scenario with create mutations, where manually adding a created object into the cache ends up screwing up your pagination because you won't have a cursor for the created object.
As long as you're not actually using Relay client-side, one solution is to abandon cursors altogether. You can keep your before and after fields, but simply accept an id (or _id, or whatever your PK is) value instead of a cursor. This is what I ended up doing on a recent project, and it simplified things significantly.
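As a rough sketch of what the resolver side could look like under that scheme (the MongoDB-style db helper, and the use of _id as both filter and cursor, are assumptions for illustration):

const resolvers = {
  Query: {
    users: async (_, { first = 10, after }) => {
      // `after` is a plain record id, not an opaque cursor.
      const filter = after ? { _id: { $gt: after } } : {};
      const rows = await db.collection('users')
        .find(filter)
        .sort({ _id: 1 })
        .limit(first + 1) // fetch one extra row to compute hasNextPage
        .toArray();
      const nodes = rows.slice(0, first);
      return {
        totalCount: await db.collection('users').countDocuments(),
        pageInfo: {
          lastCursor: nodes.length ? nodes[nodes.length - 1]._id : null,
          hasNextPage: rows.length > first,
        },
        edges: nodes.map((node) => ({ cursor: node._id, node })),
      };
    },
  },
};

With this in place, /user/52 can issue users(first: 10, after: "52") directly on the first request, with no cursor-bootstrapping round trip.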
How can I define arguments for nested fields? Suppose I want to query all my posts but limit and sort the comments. Thank you for your help.
{
  allPosts {
    title,
    comments(limit: 5) {
      content
    }
  }
}
What you are referring to is often called pagination, and it is covered in GraphQL's own documentation.
There are different possible ways of constructing the query to allow retrieving multiple records of a certain object type (comments, in our situation).
The simplest option can be achieved by defining the GraphQL query string with the object type you want to traverse in its plural form, meaning that your query would look like this:
{
  allPosts {
    title,
    comments {
      content
    }
  }
}
But with this implementation you would end up fetching all the data instead of just a chunk of it. Obviously this approach can have many drawbacks depending on the volume of data being fetched, and it should only be used in specific situations.
The easiest way to achieve what you want is to request the comments as a "slice", meaning that you would request a specific initial portion of the data set.
In this case, you would be requesting the initial 5 comments.
{
  allPosts {
    title,
    comments(first: 5) {
      content
    }
  }
}
But what if you want to paginate through the rest of the list?
{
  allPosts {
    title,
    comments(first: 5, offset: 5) {
      content
    }
  }
}
Doing this, you would be asking for the next 5 comments.
But the recommended approach when implementing pagination is cursor-based pagination, which would translate to something like this:
{
  allPosts {
    title,
    comments(first: 5) {
      edges {
        node {
          content
        }
        cursor
      }
    }
  }
}
The hard part is implementing the resolver functionality (it is slightly easier with frameworks like Apollo).
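For the slice-based variant, the key point is that arguments declared on a nested field are handed to that field's resolver. A minimal sketch in graphql-js/Apollo style (the type names and the in-memory data access are assumptions):

const typeDefs = `
  type Comment { content: String! }
  type Post {
    title: String!
    comments(first: Int, offset: Int): [Comment!]!
  }
  type Query { allPosts: [Post!]! }
`;

const resolvers = {
  Query: {
    allPosts: () => fetchPosts(), // hypothetical data source
  },
  Post: {
    // The nested field's arguments arrive as the resolver's second parameter.
    comments: (post, { first = 10, offset = 0 }) =>
      post.comments.slice(offset, offset + first),
  },
};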
I am creating an index based on 2 fields in RethinkDB, in JavaScript (actually with the rethinkdbdash driver). The code is like this:
r.table('someTable').indexList().contains("indexName").do(containsIndex => {
  return r.branch(
    containsIndex,
    {created: 0},
    r.table('someTable').indexCreate("indexName", [r.row("field1"), r.row("field2")])
  );
}).run();
So it conditionally creates the index if it doesn't already exist. The branching does work for single-field indexes, but in this case it returns a ReqlCompileError: Cannot use r.row in nested queries. Use functions instead.
The docs (https://www.rethinkdb.com/api/javascript/index_create/) clearly give this example:
r.table('comments').indexCreate('postAndDate', [r.row("postId"), r.row("date")]).run(conn, callback)
So what am I missing? Does using the rethinkdbdash driver change anything? If I do use a function (as suggested by the error message) I can concatenate my 2 fields, but then how do I query with that index?
Thanks.
You can use r.row in un-nested queries, like the example in the docs, but for nested queries you need to use an actual function. When you put the indexCreate inside a do, it becomes part of a nested query.
If instead of r.table('someTable').indexCreate("indexName", [r.row("field1"), r.row("field2")]) you write r.table('someTable').indexCreate('indexName', function(row) { return [row('field1'), row('field2')]; }) in your query, it should work.
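As for querying with the compound index: you pass an array of values, one per field, for example with getAll or between (the field values below are placeholders):

// Exact match on both fields of the compound index.
r.table('someTable')
  .getAll(['field1Value', 'field2Value'], {index: 'indexName'})
  .run();

// Range scan: all rows where field1 equals 'field1Value', with any field2.
r.table('someTable')
  .between(['field1Value', r.minval], ['field1Value', r.maxval], {index: 'indexName'})
  .run();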
I don't know how to do this type of branching correctly when creating compound indexes, but RethinkDB will just complain if you try to create an index that already exists, so there is no harm in catching the error and continuing:
function createPostAndDateIndex() {
  return r.table('comments').indexCreate('postAndDate',
    [r.row("postId"), r.row("date")]).run();
}

function createDateIndex() {
  return r.table('comments').indexCreate('d', 'date').run();
}

function initDb() {
  return createPostAndDateIndex().error(console.warn)
    .then(createDateIndex).error(console.warn);
}
I want to be able to insert documents and preferably have all inner objects automatically mapped as nested ones. Is this possible?
My specific use case is that I am collecting documents of the same type that may or may not have the same fields as those currently in the store. So I would prefer it if the nested mapping just happened automatically, without me having to spell it out.
Barring that, could I update the index mapping before inserting an object with new fields? And would it be OK if I just set the type of the nested property to nested without specifying the property's fields?
Code:
client.IndicesPutMapping("captures", "capture", new
{
    capture = new
    {
        properties = new
        {
            CustomerInformations = new
            {
                type = "nested",
                // ...do not specify inner fields?
            }
        }
    }
});
Are partial mappings allowed when overriding mappings? In other words, if I have the mapping above, will the other properties of the capture objects still be mapped in the default way?
For those still struggling with the issue, see:
https://github.com/elastic/elasticsearch/issues/20886
The problem has been resolved in v5.
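Separately from that issue, one way to have every inner object mapped as nested without enumerating its fields is a dynamic template. A hedged sketch, shown with the JavaScript Elasticsearch client for brevity (the same mappings body can be sent through NEST); the index and type names come from the question:

client.indices.create({
  index: 'captures',
  body: {
    mappings: {
      capture: {
        dynamic_templates: [{
          objects_as_nested: {
            // Applies to every dynamically added field whose value is an object...
            match_mapping_type: 'object',
            // ...and maps it as nested, leaving its inner fields to dynamic mapping.
            mapping: { type: 'nested' },
          },
        }],
      },
    },
  },
});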