Hasura GraphQL query order by nested array relationships (with only one element)? - graphql

According to the Hasura documentation, it is not possible to order by nested array relationships. The thing is, I'm using that relationship to get only one element from the array (e.g. the latest entry in that table). Is there any way to transform that array (with one element) into an object so that I can order by it in the root query? Example:
query GetMachinesQuery {
  machines {
    machine_id
    machine_detail
    last_upgrade: upgrades(order_by: { created_at: desc }, limit: 1) {
      upgrade_state {
        updated_at
        status
      }
    }
  }
}
Is there any way to sort the root query by any of the fields (e.g. status) present in last_upgrade? A possible workaround is to create a view (doing the joins to get the latest upgrade info for each machine) and then use an object relationship. Are there any other alternatives with Hasura?
Thank you!

Related

Nesting queries in GraphQL

I'm trying to figure out if there is an easy way to nest a query in GraphQL. I have two tables, one with the beach records and one with the definitions of the conditions.
What I'm trying to return is the conditionName and conditionDescription if the surfCondition matches (surfCondition = 2).
query MyQuery {
  lifeguard {
    beachID
    id
    surfCondition          # match against the second query
    conditionName          # display
    conditionDescription   # display
    updated_at
    created_at
  }
  lifeguard_conditions {
    surfCondition
    conditionName
    conditionDescription
  }
}
Normally this would be handled on the GraphQL server, which would accept a filter argument. So, for example, you could pass an argument to surfCondition such as
surfCondition(filter: 2)
which would then automatically return the filtered data.
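Purely as an illustration of that idea (none of these names come from the question: the resolver map is Apollo-style and db.query is a hypothetical parameterized-query helper), the server-side resolver accepting such a filter might look roughly like this:
// Sketch of a server-side filter argument, assuming an Apollo-style resolver map.
// `db.query` is a hypothetical helper that runs a parameterized SQL statement.
const resolvers = {
  Query: {
    // Allows queries like: { lifeguard_conditions(surfCondition: 2) { conditionName conditionDescription } }
    lifeguard_conditions: async (_root, { surfCondition }, { db }) => {
      if (surfCondition == null) {
        return db.query('SELECT * FROM lifeguard_conditions');
      }
      return db.query(
        'SELECT * FROM lifeguard_conditions WHERE "surfCondition" = $1',
        [surfCondition]
      );
    },
  },
};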

Adding limits to nested values in GraphQL

Here's a simple GraphQL query to fetch all people. I'd like to add a limit to the number of friends each Person node will have (say, e.g., max 5) when retrieved. Is this possible in GraphQL? I know it's possible to add a limit to allPeople, something like allPeople(limit: 5),
but I don't think that will help my use case.
{
  allPeople {
    nodes {
      id
      friends {
        name
        id
        phone
      }
    }
  }
}
You can add params at the friends 'level' in the query, if that field is supported and resolved separately. There is no filtering or searching syntax defined in the general GraphQL spec; it all depends on the specific server/environment and how the fields are resolved. It's probably better to resolve both levels (people + friends) in the people resolver (one SQL query) - in that case both filters/limits should be defined at the 'parent' level, e.g. allPeople(limit: 5, friendsLimit: 5).
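As a rough sketch of that parent-level approach only (the db.query helper, the Postgres-style LATERAL join, and the table/column names are assumptions, not from the question), a single-round-trip resolver could look something like this:
// Sketch only: one SQL round trip that limits both people and friends.
// `db.query`, the table names, and the LATERAL join are assumptions.
const resolvers = {
  Query: {
    allPeople: async (_root, { limit = 5, friendsLimit = 5 }, { db }) => {
      const rows = await db.query(
        `SELECT p.id, f.id AS friend_id, f.name AS friend_name, f.phone AS friend_phone
           FROM (SELECT id FROM people LIMIT $1) p
           LEFT JOIN LATERAL (
             SELECT id, name, phone
               FROM friends
              WHERE friends.person_id = p.id
              LIMIT $2
           ) f ON true`,
        [limit, friendsLimit]
      );
      // Re-group the flat rows into the nested shape the GraphQL query expects.
      const people = new Map();
      for (const row of rows) {
        if (!people.has(row.id)) people.set(row.id, { id: row.id, friends: [] });
        if (row.friend_id) {
          people.get(row.id).friends.push({
            id: row.friend_id,
            name: row.friend_name,
            phone: row.friend_phone,
          });
        }
      }
      return { nodes: [...people.values()] };
    },
  },
};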

Fetching the data optimally in GraphQL

How can I write the resolvers such that each resolver generates a database sub-query, and all of them are effectively combined so that the data is fetched at once?
For the following schema:
type Node {
  index: Int!
  color: String!
  neighbors(first: Int = null): [Node!]!
}
type Query {
  nodes(color: String!): [Node!]!
}
schema {
  query: Query
}
To perform the following query:
{
  nodes(color: "red") {
    index
    neighbors(first: 5) {
      index
    }
  }
}
Data store:
In my data store, nodes and neighbors are stored in separate tables. I want to write a resolver so that we can fetch the required data optimally.
If there are any similar examples, please share the details. (It would be helpful to get an answer in reference to graphql-java)
DataFetchingEnvironment provides access to sub-selections via DataFetchingEnvironment#getSelectionSet. This means, in your case, you'd be able to know from the nodes resolver that neighbors will also be required, so you could JOIN appropriately and prepare the result.
One limitation of the current implementation of getSelectionSet is that it doesn't provide info on conditional selections. So if you're dealing with interfaces and unions, you'll have to collect the sub-selection manually, starting from DataFetchingEnvironment#getField. This will very likely be improved in future releases of graphql-java.
The recommended and most common way is to use a data loader.
A data loader collects the info about which fields to load from which table and which where filters to use.
I haven't worked with GraphQL in Java, so I can only give you directions on how you could implement this yourself.
Create an instance of your data loader and pass it to your resolvers as the context argument.
Your resolvers should pass the table name, a list of field names, and a list of where conditions to the data loader and return a promise.
Once all the resolvers have executed, your data loader should combine those lists so that you only end up with one query per table.
You should remove duplicate field names and combine the where conditions using the or keyword.
After the queries have executed, you can return all of this data to your resolvers and let them filter it (since we combined the conditions using the or keyword).
As an advanced feature, your data loader could apply the where conditions itself before returning the data to the resolvers, so that they don't have to filter it.
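Purely as an illustration of the batching described above (in JavaScript rather than graphql-java, with a made-up runQuery function), it could be sketched like this:
// Illustrative sketch only; `runQuery` is a hypothetical function that executes
// one SQL statement and returns its rows. The string-built WHERE clause is for
// readability, not production use.
class TableBatcher {
  constructor(runQuery) {
    this.runQuery = runQuery;
    this.pending = new Map(); // table name -> { fields, wheres, callbacks }
  }

  // Resolvers call this instead of querying the database directly.
  load(table, fields, where) {
    if (!this.pending.has(table)) {
      this.pending.set(table, { fields: new Set(), wheres: [], callbacks: [] });
    }
    const batch = this.pending.get(table);
    fields.forEach((f) => batch.fields.add(f)); // de-duplicate field names
    batch.wheres.push(where);
    return new Promise((resolve) => batch.callbacks.push(resolve));
  }

  // Called once all resolvers for the current level have registered their needs.
  async dispatch() {
    for (const [table, batch] of this.pending) {
      const sql =
        `SELECT ${[...batch.fields].join(', ')} FROM ${table} ` +
        `WHERE ${batch.wheres.map((w) => `(${w})`).join(' OR ')}`; // combine with or
      const rows = await this.runQuery(sql);
      // Hand all rows back; each resolver filters down to the ones it asked for.
      batch.callbacks.forEach((resolve) => resolve(rows));
    }
    this.pending.clear();
  }
}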

Can you do a join using an embedded array in a document with RethinkDB?

Say I have a user table with a property called favoriteUsers which is an embedded array. i.e.
users
{
  name: 'bob',
  favoriteUsers: ['jim', 'tim'] // can you have an index on an embedded array?
}
user_presence
{
  name: 'jim', // index on name
  online_since: 14440000
}
Can I do an inner join or eqJoin against, say, a second table using the embedded property, or would I have to pull favoriteUsers out of the users table and into a join table like in traditional SQL?
r.table('users')
  .getAll('bob', {index: 'name'})
  // inner join user_presence on user_presence.name in users.favoriteUsers
  .eqJoin("name", r.table('user_presence'), {index: 'name'})
Eventually, I'd like to call changes() on the query so that I can get realtime updates of the presence changes of the user's favorite users.
eqJoin can work on an embedded document, but it works by comparing a single value, which we transform/pick from the embedded document, against a secondary index on the right table.
For any more complicated join, I would rather use concatMap together with getAll.
Let's say we want to fetch a user and the user_presence of their favoriteUsers:
r.table('users')
  .getAll('bob', {index: 'name'})
  .concatMap(function(user) {
    return r.table('user_presence').filter(function(presence) {
      return user("favoriteUsers").contains(presence("name"))
    })
  })
So now you get the data and effectively do the join yourself by querying the extra data you need. My query may have some syntax errors, but I hope it gives you the idea.
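Since user_presence has a secondary index on name in the question, the inner filter could also be swapped for a getAll lookup, which is the "concatMap together with getAll" idea mentioned above. This variant is sketched from memory, so double-check it against the ReQL docs:
r.table('users')
  .getAll('bob', {index: 'name'})
  .concatMap(function(user) {
    // r.args spreads the favoriteUsers array into getAll's arguments,
    // so the lookup uses the secondary index on name instead of a table scan.
    return r.table('user_presence').getAll(r.args(user('favoriteUsers')), {index: 'name'})
  })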

RethinkDB - filtering by value in another table

In our RethinkDB database, we have a table for orders, and a separate table that stores all the order items. Each entry in the OrderItems table has the orderId of the corresponding order.
I want to write a query that gets all SHIPPED order items (just the items from the OrderItems table ... I don't want the whole order). But whether the order is "shipped" is stored in the Order table.
So, is it possible to write a query that filters the OrderItems table based on the "shipped" value for the corresponding order in the Orders table?
If you're wondering, we're using the JS version of RethinkDB.
UPDATE:
OK, I figured it out on my own! Here is my solution. I'm not positive that it is the best way (and certainly isn't super efficient), so if anyone else has ideas I'd still love to hear them.
I did it by running a .merge() to create a new field based on the Orders table, and then filtering based on that value.
A semi-generalized query with a filter from another table for my problem looks like this:
r.table('orderItems')
  .merge(function(orderItem){
    return {
      orderShipped: r.table('orders').get(orderItem('orderId')).pluck('shipped') // plucking just the "shipped" value, since I don't want the entire order
    }
  })
  .filter(function(orderItem){
    return orderItem('orderShipped')('shipped').gt(0) // filtering based on that new "shipped" value
  })
It will be much easier to do it like this:
r.table('orderItems').filter(function(orderItem){
  return r.table('orders').get(orderItem('orderId'))('shipped').default(0).gt(0)
})
And to avoid a null result, it's better to add .default(0).
It's probably better to create proper indexes before doing any lookups. Without an index, you cannot efficiently find a document in a table with more than 100,000 elements. Also, filter cannot take advantage of secondary indexes.
A proper way is to use getAll and concatMap.
First, create the indexes:
r.table("orderItems").indexCreate("orderId")
r.table("orders").indexCreate("shipStatus", r.row("shipped").default(0).gt(0))
With that index, we can find all of the shipped orders:
r.table("orders").getAll(true, {index: "shipStatus"})
Now we will use concatMap to transform each order into its matching orderItems:
r.table("orders")
.getAll(true, {index: "shipStatus"})
.concatMap(function(order) {
return r.table("orderItems").getAll(order("id"), {index: "orderId"}).coerceTo("array")
})
