I have a JSON document named "update", and it has an embedded list "comments" like this:
{
  id: "update/0",
  // comments contains elements with type: comment
  comments: [{
    id: "comment/0",
    content: "old first level comment content...",
    children: [{
      id: "comment/00",
      content: "old second level comment content...",
      children: [...]
    }]
  }]
}
My questions are:
1. How do I replace "old first level comment content..." with "new first level comment content...", given the ids "update/0" and "comment/0"?
2. How do I replace "old second level comment content..." with "new second level comment content...", given the ids "update/0", "comment/0" and "comment/00"?
First, the queries you are looking for:
r.table("update").get("update/0").update(function(doc) {
return doc.merge({
comments: doc("comments").map(function(comment) {
return r.branch(
comment("id").eq("comment/0"),
comment.merge({
content: "new first level comment content..."
}),
comment
)
})
})
}).run(...)
Second one:
r.table("update").get("update/0").update(function(doc) {
return doc.merge({
comments: doc("comments").map(function(comment) {
return r.branch(
comment("id").eq("comment/0"),
comment.merge({
children: comment("children").map(function(child) {
return r.branch(
child("id").eq("comment/00"),
child.merge({
content: "new second level comment content..."
}),
child
)
})
}),
comment
)
})
})
}).run(...)
You probably want to split your data into multiple tables so you can do joins.
In your case, that would be a table "update" and a table "comment".
The "comment" table can join on itself for the children.
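For illustration, a minimal sketch of what the split could look like, assuming each comment document carries update_id and parent_id fields with matching secondary indexes (these names are illustrative, not part of your current schema):

r.table("comment")
  .getAll("update/0", {index: "update_id"})   // all comments of one update
  .filter({parent_id: null})                  // roots only; children have a parent_id
  .run(conn)

r.table("comment")
  .getAll("comment/0", {index: "parent_id"})  // children of a given comment
  .run(conn)

// Updating a single comment is then a plain update by primary key.
r.table("comment").get("comment/00")
  .update({content: "new second level comment content..."})
  .run(conn)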
You can find more information here:
http://www.rethinkdb.com/docs/data-modeling/
http://www.rethinkdb.com/docs/table-joins/
If you have more questions, let me know.
I wanted to ask if it is possible to upsert nested objects. For example, if I have a 'Users' table and a 'Students' table, and I'm inserting a new User (with an id that is already taken), I want to update all fields (using on_conflict and update_columns), including the fields in the 'Students' table.
Basically, replace all of the user's fields except the primary key.
mutation($UsersData: [core_users_insert_input!]!) {
  insert_core_users(
    objects: $UsersData
    on_conflict: {
      constraint: core_users_id_unique
      update_columns: [first_name, last_name, gender]
    }
  ) {
    affected_rows
  }
}
The update_columns array should also include fields from the 'Students' table, but I can't figure out how to do that.
It is possible; the relevant documentation is here: https://hasura.io/docs/1.0/graphql/manual/mutations/upsert.html#upsert-in-nested-mutations
You can use the on_conflict key at any level (top-level or nested) where you want to resolve updating an existing record.
mutation upsert_author_article {
  insert_author(
    objects: [
      {
        name: "John",
        articles: {
          data: [
            {
              title: "Article 3",
              content: "Article 3 content"
            }
          ],
          on_conflict: {
            constraint: article_title_key,
            update_columns: [content]
          }
        }
      }
    ]
  ) {
    affected_rows
  }
}
I have the following query:
const getPage = gql`
  query Page($path: String!) {
    page(path: $path) @rest(type: "Page", path: "{args.path}") {
      blocks @type(name: "Block") {
        name
        posts @type(name: "Post") {
          body
          author
        }
      }
      authors @type(name: "Author") {
        name
      }
    }
  }
`;
In blocks.posts.author there is only an AuthorId. The authors object contains all the available authors.
I'd like to replace/match the AuthorId with its corresponding object. Is it possible to do this within one query?
I also wouldn't mind having a separate query for Author only (the fetch will be cached, so no new request would be made), but I still don't know how I would match it across the 2 queries.
Example API response
{
  blocks: [
    {
      posts: [
        {
          id: 1,
          title: 'My post',
          author: 12,
        }
      ]
    }
  ],
  authors: [
    {
      id: 12,
      name: 'John Doe'
    }
  ]
}
What I want is that, with one query, the author inside a post becomes the full author object.
Great question. With GraphQL, you have the power to expand any field and select the exact subfields you want from it, so if you were using GraphQL on your backend as well this would be a non-issue. There are some workarounds you can do here:
If all of the Author objects are in your Apollo cache and you have access to each Author's id, you could use ApolloClient.readFragment to access other properties, like this:
const authorId = ...; // the cache id of the author, e.g. "Author:12" with the default dataIdFromObject
const authorInfo = client.readFragment({
  id: authorId,
  fragment: gql`
    fragment AuthorInfo on Author {
      id
      name
      # anything else you want here
    }
  `,
});
Although it's worth noting that with your original query from the question, since you have all of the Author objects as a property of the query result, you could just use plain JavaScript to go from Author id to object:
const authorId = ...; // the id of the author
data.page.authors.find(author => author.id === authorId);
The following should work.
First, capture the author id as a variable using the @export directive. Then add a new field with some name other than author and decorate it with @rest, using the exported variable inside the path.
So the query would look something like this:
query Page($path: String!) {
  page(path: $path) @rest(type: "Page", path: "{args.path}") {
    blocks @type(name: "Block") {
      name
      posts @type(name: "Post") {
        body
        author @export(as: "authorId")
        authorFull @rest(
          path: "/authors/{exportVariables.authorId}"
          type: "Author"
        ) {
          name
        }
      }
    }
    authors @type(name: "Author") {
      name
    }
  }
}
You can use the fieldNameNormalizer option to rename the author property in the response to a field with a different name (for example, authorId). Ideally, that should still work with the above so you can avoid having a weird field name like authorFull, but apollo-link-rest is a bit wonky, so no promises.
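For reference, a rough sketch of wiring up fieldNameNormalizer when constructing the RestLink; the endpoint and the renaming rule are illustrative, and note that the normalizer is applied to every key in every REST response, not just Post.author:

import { RestLink } from "apollo-link-rest";

const restLink = new RestLink({
  uri: "https://example.com/api", // placeholder endpoint
  // Rename incoming "author" keys to "authorId" so the query can use
  // author for the expanded @rest field instead of authorFull.
  fieldNameNormalizer: (key) => (key === "author" ? "authorId" : key),
});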
Let's say I have this books query that returns 2 records, which are stored in the local cache.
query Books {
  books {
    author
    title
  }
}
'Book:1': {
  author: 'Foo',
  title: 'Book 1'
}
'Book:2': {
  author: 'Bar',
  title: 'Book 2'
}
When I have another book query, as below, to get the detail of a book, will react-apollo fetch the missing fields from the server, or will it return whatever is in the cache for that record? Assume the default fetchPolicy is used (cache-first).
query Book {
  book {
    author
    title
    publisher
    publishedAt
  }
}
It will query the server to fetch the missing fields. With cache-first, Apollo first tries to satisfy the whole query from the cache; if any requested field is missing, the entire query is sent to the server.
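As a minimal sketch of how you might observe this with an imperative call (assuming an already constructed ApolloClient instance named client; the same behaviour applies to the Query component or useQuery):

import gql from "graphql-tag";

const BOOK_QUERY = gql`
  query Book {
    book {
      author
      title
      publisher
      publishedAt
    }
  }
`;

// author/title may already be cached from the Books query, but
// publisher/publishedAt are not, so this query hits the network.
client
  .query({ query: BOOK_QUERY, fetchPolicy: "cache-first" })
  .then(({ data }) => console.log(data.book));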
I have this JSON document:
{
  userId: "xx",
  followedAuthors: [
    { authorId: "abc", timestamp: 123 },
    { authorId: "xyz", timestamp: 456 }
  ]
}
When a user wants to follow an author, I would like to write a query that checks whether that author is already followed (checking the id) and, if not, appends the new followed author to the array.
Right now I create a new entry every time.
This is my query:
r.table('users')
  .get(userId)
  .replace(user => {
    return user.merge({
      followedTopics: user('followedTopics')
        .default([])
        .setInsert({ topic: topic, timestamp: now }),
    })
  })
The best way to implement this is to use contains (using a predicate function), branch, and append.
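A minimal sketch along those lines, assuming the array field is followedAuthors as in your document, and that userId, authorId and now are already in scope:

r.table('users')
  .get(userId)
  .update(user => {
    return r.branch(
      // contains with a predicate: is this author already followed?
      user('followedAuthors').default([])
        .contains(f => f('authorId').eq(authorId)),
      {}, // already followed: change nothing
      // otherwise append the new entry to the array
      {
        followedAuthors: user('followedAuthors').default([])
          .append({ authorId: authorId, timestamp: now })
      }
    )
  })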
I am hitting the following problem. Suppose I have this structure:
{
  "id": 1,
  "data": {
    "arr": [{"text": "item1"}]
  }
}
And the following query:
r.db('test').table('test').get(1).update(function (item) {
  return {
    data: {
      arr: item('data')('arr').map(function (row) {
        return r.branch(
          row('text').eq('item1'),
          row.merge({updated: true}),
          row
        )
      })
    }
  }
})
I am listening for changes on this specific array only, and when an item is updated, both a create and a delete event are emitted. I really need to receive an update event, i.e. one where old_val is not null and new_val is not null.
Thanks in advance, guys.
In the end, I decided to drop the embedded array and use table joins; this avoids all possible hacks.
You can use something like this:
r.db('test').table('test')('data')('arr').changes()
  .filter(function(doc) {
    return doc('new_val').ne(null).and(doc('old_val').ne(null))
  })
The query above only shows updates to the array. If you need access to other document fields, try this:
r.db('test').table('test').changes()
  .filter(function(doc) {
    return doc('new_val')('data')('arr').ne(null).and(doc('old_val')('data')('arr').ne(null))
  })