I'm trying to build a GraphQL API that deals with data from different regions, and each region has its own database.
What I'm trying to accomplish is:
typeDefs = gql`
  type Player {
    account_id: Int
    nickname: String
    clan_id: Int
    clan_info: Clan
  }

  type Clan {
    name: String
  }
`;
So right now I can request player(region, id), and this pulls up the player details with no issues.
But the clan_info field also requires the region from the parent, so its resolver would look like clan_info({clan_id}, region).
Is there any way to pass the region down from the parent to a child field? I know I could store the region on each player record, but I'd rather not, since there will be millions of records and every field counts.
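Conceptually, what I'm after is roughly the following (a sketch assuming Apollo Server style resolvers; fetchPlayer and fetchClan are placeholders for the per-region DB lookups):

const resolvers = {
  Query: {
    player: async (_, { region, id }) => {
      const player = await fetchPlayer(region, id); // placeholder per-region lookup
      return { ...player, region };                 // region attached to the resolved object only,
                                                    // nothing extra is stored in the DB
    },
  },
  Player: {
    // the parent object now carries region alongside clan_id
    clan_info: (parent) => fetchClan(parent.region, parent.clan_id),
  },
};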
I'm trying to implement a GraphQL server with the following query structure:
query getLists {      # returns an array with multiple lists
  id
  users {
    stats {           # field-level resolver
      sum             # Float
    }
  }
}
I have difficulty understanding how to resolve this query conceptually. The sum field differs per user per list: given a userId and a listId, the sum is different. I can access the userId from the root (parent) argument when resolving the stats field, but I cannot access the list node to which the current user belongs, so I have no way of getting the listId. I have looked at the info object that is passed to the resolver, but it doesn't seem to include the resolved list object.
With such a structure, how do I pass the id of the list to the resolver of stats?
Note that users and lists have a many-to-many relationship with each other, so given only a userId there is no way to determine which listId should be used to calculate sum.
I have two queries: getGroups(): [Group] and getGroup($id: ID!): Group. One component first loads all groups using getGroups(), and later a different component needs to access a specific Group's data by ID.
I'd expect Apollo's normalization to already have the Group data in the cache and to use it when the getGroup($id: ID!) query is executed, but that's not the case.
When I set the cache-only fetchPolicy, nothing is returned. I can access the data using readFragment, but that's not as flexible as just using a query.
Is there an easy way to make Apollo return the cached data from a different query as I would expect?
It's pretty common to have a query field that returns a list of nodes and another that takes an id argument and returns a single node. However, deciding what specific node or nodes are returned by a field is ultimately part of your server's domain logic.
As a silly example, imagine you had a field like getFavoriteGroup(id: ID!) -- you may have the group with that id in your cache, but that doesn't necessarily mean it should be returned by the field (it may not be favorited). Any number of factors (other arguments, execution context, etc.) might affect which node(s) are returned by a field. As a client, it's not Apollo's place to make assumptions about your domain logic.
However, you can effectively duplicate that logic by implementing query redirects.
import { InMemoryCache } from 'apollo-cache-inmemory';
import { toIdValue } from 'apollo-utilities';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      // Point getGroup(id) at the Group object already normalized in the cache
      getGroup: (_, args) =>
        toIdValue(cache.config.dataIdFromObject({ __typename: 'Group', id: args.id })),
    },
  },
});
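With that redirect in place, a single-group query can be served from the data that getGroups() already normalized, provided every field the query selects is in the cache. A rough usage sketch (GET_GROUP stands for your existing getGroup($id: ID!) document, client for your ApolloClient instance):

const { data } = await client.query({
  query: GET_GROUP,             // the existing getGroup($id: ID!) query
  variables: { id: '42' },
  fetchPolicy: 'cache-first',   // resolves from the cache when the redirect can satisfy it
});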
I have this game type:
type Game {
  id: ID! @id
  goals: [Goal]
}
which has a relation to this Goal type:
type Goal {
  id: Int! @id(strategy: SEQUENCE) @sequence(name: "IncID", initialValue: 1, allocationSize: 20)
  thumbnail: String!
  player: String!
  minute: Int!
}
What I'm trying to do with that id mess is to create an incremental id value for each goal, for the purpose of building a URL for each goal, like this:
domaine.com/game/{id-of-the-game}/goal/{incremental-id(1,2..)}
The problem is that the Goal type is an entity of its own, so it keeps incrementing the last id even when a new game is created.
So I want to reset the id sequence for each new game.
What you are asking for is not possible using the @id annotation. Each type in the Prisma datamodel needs a unique id to identify the object in the database. If the underlying database is MongoDB, there will be a Goal collection with documents in it, each representing an individual Goal identified by its id. If the underlying database is MySQL/PostgreSQL, the Goals will be stored in a Goal table, with each row representing an individual Goal.
Each individual object (no matter whether it is stored as a document or a row) needs to be uniquely identifiable in order to access it and to create relations, e.g. between Goal objects and Game objects.
If the Goal id started at 1 for each Game, that would violate the unique constraint on the id field, since two Goals in the table or collection would end up identified by the same id (e.g. 1).
What I would suggest is to simply add something like a numberInGame field to the Goal type and fill it when creating the Goal (e.g. by taking goals.length on the Game into consideration), as sketched below.
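A rough, untested sketch of that suggestion (addGoal, countGoalsForGame and createGoal are illustrative names, and the Goal type would gain a numberInGame: Int! field):

const Mutation = {
  addGoal: async (_, { gameId, input }) => {
    const existing = await countGoalsForGame(gameId); // goals already in this game
    return createGoal({
      ...input,
      gameId,
      numberInGame: existing + 1, // per-game counter; the primary id stays globally unique
    });
  },
};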
Hope that helps to clarify the uniqueness constraint on the id field.
I have an app that has a type with many related types. So like:
type Person {
  Name: String!
  Address: Address!
  Family: [Person!]!
  Friends: [Person!]!
  Job: Occupation
  Car: Car
}
type Address {...}
type Occupation {...}
type Car {...}
(don't worry about the types specifically...)
Anyway, this is all stored in a database in many tables.
Some of these related lookups are seldom used and are slow. Imagine, for example, that there are billions of cars in the world and it takes time to find the one owned by the person we are interested in. Any query to "getPerson" must satisfy the full schema, and GraphQL then pares the result down to the fields that were requested. But since the car could be requested, we have to perform that slow query even though the data is thrown away most of the time.
I only see two solutions to this.
a) Just do the query each time, and it will always be slow.
b) Make two separate query fields: one "getPerson" and one "getPersonWithCar". But then you can't reuse the schema, and a Person is defined twice: once in terms of the car and once without.
Is there a way to indicate whether a field is present in the query's requested fields? That way we could say something like:
if (query.isPresent("Car")) {
  car = findCar();
} else {
  car = null;
}
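There is no query.isPresent helper out of the box, but in graphql-js / Apollo Server the resolver's fourth argument (info) exposes the selection set. A rough, untested sketch of the same check (getPersonById and findCarForPerson are hypothetical data-access helpers):

const resolvers = {
  Query: {
    getPerson: async (parent, args, context, info) => {
      // Names of the fields selected directly on this getPerson call
      const requested = info.fieldNodes[0].selectionSet.selections
        .filter((sel) => sel.kind === 'Field')
        .map((sel) => sel.name.value);

      const person = await getPersonById(args.id);      // hypothetical lookup
      if (requested.includes('Car')) {
        person.Car = await findCarForPerson(args.id);    // only pay the cost when asked for
      }
      return person;
    },
  },
};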
How can I write the resolvers such that I can generate a database sub-query in each resolver and effectively combine all of them to fetch the data at once?
For the following schema :
type Node {
  index: Int!
  color: String!
  neighbors(first: Int = null): [Node!]!
}

type Query {
  nodes(color: String!): [Node!]!
}

schema {
  query: Query
}
To perform the following query:
{
  nodes(color: "red") {
    index
    neighbors(first: 5) {
      index
    }
  }
}
Data store:
In my data store, nodes and neighbors are stored in separate tables. I want to write a resolver so that we can fetch the required data optimally.
If there are any similar examples, please share the details. (It would be helpful to get an answer in reference to graphql-java)
DataFetchingEnvironment provides access to sub-selections via DataFetchingEnvironment#getSelectionSet. This means, in your case, you'd be able to know from the nodes resolver that neighbors will also be required, so you could JOIN appropriately and prepare the result.
One limitation of the current implementation of getSelectionSet is that it doesn't provide info on conditional selections. So if you're dealing with interfaces and unions, you'll have to manually collect the sub-selection starting from DataFetchingEnvironment#getField. This will very likely be improved in future releases of graphql-java.
The recommended and most common way is to use a data loader.
A data loader collects the info about which fields to load from which table and which where filters to use.
I haven't worked with GraphQL in Java, so I can only give you directions on how you could implement this yourself.
Create an instance of your data loader and pass it to your resolvers as the context argument.
Your resolvers should pass the table name, a list of field names and a list of where conditions to the data loader, and return a promise.
Once all the resolvers have executed, your data loader should combine those lists so you only end up with one query per table.
You should remove duplicate field names and combine the where conditions using the OR keyword.
After the queries have executed, you can return all of this data to your resolvers and let them filter it (since we combined the conditions using OR).
As an advanced feature, your data loader could apply the where conditions before returning the data to the resolvers, so that they don't have to filter it themselves.
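I haven't verified this against graphql-java, but a minimal JavaScript sketch of those directions (one query per table, where conditions combined with OR, resolvers filtering the combined rows) could look like this; runQuery stands in for whatever database client is used:

class SimpleBatchLoader {
  constructor(runQuery) {
    this.runQuery = runQuery;   // e.g. (sql) => Promise<rows>
    this.pending = new Map();   // table name -> { fields: Set, requests: [] }
  }

  // Called from resolvers: returns a promise for the rows matching `where`
  load(table, fields, where) {
    if (!this.pending.has(table)) {
      this.pending.set(table, { fields: new Set(), requests: [] });
    }
    const entry = this.pending.get(table);
    fields.forEach((f) => entry.fields.add(f));   // de-duplicate field names
    return new Promise((resolve) => entry.requests.push({ where, resolve }));
  }

  // Called once per batch: one query per table, conditions combined with OR
  async dispatch() {
    for (const [table, { fields, requests }] of this.pending) {
      const clauses = requests.map((r) => `(${r.where})`).join(' OR ');
      const rows = await this.runQuery(
        `SELECT ${[...fields].join(', ')} FROM ${table} WHERE ${clauses}`
      );
      // Every waiting resolver gets the combined row set and filters what it needs;
      // the "advanced" variant would filter per request before resolving
      requests.forEach((r) => r.resolve(rows));
    }
    this.pending.clear();
  }
}

In practice you would trigger dispatch() once per batch, the same way libraries such as dataloader batch all load calls made within one event-loop tick.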