We define a type in GraphQL like this:
const GraphQLTodo = new GraphQLObjectType({
  name: 'Todo',
  fields: {
    id: globalIdField('Todo'),
    text: {
      type: GraphQLString,
      resolve: (obj) => obj.text,
    },
    complete: {
      type: GraphQLBoolean,
      resolve: (obj) => obj.complete,
    },
  },
  interfaces: [nodeInterface], // what is this?
});
I've also read that there is GraphQLInterfaceType, which is more suitable when the types are basically the same but some of the fields are different (is this something like a foreign key?).
And in Relay we get the nodeField and nodeInterface with nodeDefinitions:
const {nodeInterface, nodeField} = nodeDefinitions(
  (globalId) => {
    const {type, id} = fromGlobalId(globalId);
    if (type === 'Todo') {
      return getTodo(id);
    } else if (type === 'User') {
      return getUser(id);
    }
    return null;
  },
  (obj) => {
    if (obj instanceof Todo) {
      return GraphQLTodo;
    } else if (obj instanceof User) {
      return GraphQLUser;
    }
    return null;
  }
);
The docs and samples only ever use a single entry in interfaces: [] (it's an array), but when do I need to use many interfaces? I am just confused about what it is. I've read a lot about it (I don't know if my understanding is correct); I just can't seem to wrap my head around it.
A GraphQLInterfaceType is one way GraphQL achieves polymorphism, i.e. fields whose values can be one of several object types. For example, suppose you have two base object types, Post and Comment, and you want a field that could get a list of both comments and posts. Conveniently, both these types have an id, text, and author field. This is the perfect use case for an interface type. An interface type is a group of shared fields, and it can be implemented by any object type that possesses those fields. So we create an Authored interface and say that Comment and Post implement this interface. By giving a GraphQL field this Authored type, that field can resolve either posts or comments (or a heterogeneous list of both types).
But wait, Post and Comment accept an array of interfaces. I could pass multiple interfaces here. Why? Since the requirement for implementing an interface is possession of all the fields in that interface, there is no reason why any object type can't implement multiple interfaces. To draw from your example, the Node interface in Relay only needs id. Since our Post and Comment have id, they could implement both Node and Authored. But many other types will likely implement Node, ones that aren't part of Authored.
This makes your object types much more reusable. If you assign interfaces to your fields instead of object types, you can easily add new possible types to the fields in your schema, as long as you stick to these agreed-upon interfaces.
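As a rough sketch in the same graphql-js style as your Todo type (Post, Comment, author, and title are made-up names here, and nodeInterface is the one from the Relay example above), a type can implement both Node and Authored at once:

const { GraphQLInterfaceType, GraphQLObjectType, GraphQLString, GraphQLID } = require('graphql');
const { globalIdField } = require('graphql-relay');

// The group of shared fields that both Post and Comment possess.
const GraphQLAuthored = new GraphQLInterfaceType({
  name: 'Authored',
  fields: {
    id: { type: GraphQLID },
    text: { type: GraphQLString },
    author: { type: GraphQLString },
  },
  // Pick the concrete type at runtime; here we key off a Post-only property.
  resolveType: (obj) => (obj.title ? GraphQLPost : GraphQLComment),
});

const GraphQLPost = new GraphQLObjectType({
  name: 'Post',
  interfaces: [nodeInterface, GraphQLAuthored], // one object type, two interfaces
  fields: {
    id: globalIdField('Post'),
    text: { type: GraphQLString, resolve: (obj) => obj.text },
    author: { type: GraphQLString, resolve: (obj) => obj.author },
    title: { type: GraphQLString, resolve: (obj) => obj.title }, // Post-specific field
  },
});

const GraphQLComment = new GraphQLObjectType({
  name: 'Comment',
  interfaces: [nodeInterface, GraphQLAuthored],
  fields: {
    id: globalIdField('Comment'),
    text: { type: GraphQLString, resolve: (obj) => obj.text },
    author: { type: GraphQLString, resolve: (obj) => obj.author },
  },
});

A field typed as GraphQLAuthored (or a list of it) can then return posts, comments, or a mix of both.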
I understand that GraphQL allows for object identifiers (ID) to uniquely identify objects in an object graph. This allows queries to return one or more "identifiable" objects. Frameworks such as Apollo even make use of these identifiers (along with __typename) to build a smart cache.
However, what about the scenario where we want to return an object that is not identifiable, e.g. the result of a computation? For example, the following query returns the count of repositories by language:
const GET_REPO_COUNTS = gql`
  query GetRepoCounts {
    repoCounts {
      language
      count
    }
  }
`;
The response is something like this:
{
  repoCounts: [
    {
      language: 'javascript',
      count: 10
    },
    {
      language: 'typescript',
      count: 20
    }
  ]
}
The returned objects don't have any "identity". In DDD terms, they are simply value objects.
Does GraphQL specifically allow or disallow such scenarios?
How do frameworks like Apollo deal with it? Is this something that is cached on the client side? How?
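GraphQL itself doesn't require object types to expose an ID; only conventions like Relay's Node interface do. As for Apollo Client 3, my understanding is that objects without a key are simply not normalized: they get embedded in the cache under their parent (here, the repoCounts field on the root query) instead of being stored as separate identifiable entities. You can also make that explicit per type with typePolicies; a minimal sketch, assuming the objects returned by repoCounts have the type name RepoCount:

const { InMemoryCache } = require('@apollo/client');

// keyFields: false tells the cache not to normalize RepoCount objects;
// they are cached as plain values inside the parent query result.
const cache = new InMemoryCache({
  typePolicies: {
    RepoCount: {
      keyFields: false,
    },
  },
});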
I am using graphql-tools v6 and I have implemented two directives, @map and @filter. My goal is to use them as a map-and-filter pipeline. In some cases I want to map before filtering and in other cases vice versa. The directives are implemented using the Schema Directives API and they work as expected when only one directive is applied.
However, if I use them together, then they always execute in one specific order which doesn't match how they are declared in the schema.
For example
directive @map on FIELD_DEFINITION
directive @filter on FIELD_DEFINITION

# usage
type MyType {
  list1: [String!]! @map @filter
  list2: [String!]! @filter @map
}
In this case, either both fields are mapped and then filtered, or vice versa. The order is controlled by how I pass them in the schemaTransforms property:
const schema = makeExecutableSchema({
  typeDefs,
  schemaTransforms: [mapDirective, filterDirective], // vs [filterDirective, mapDirective]
});
I believe that since these transforms are passed as an array, their order of execution depends on the ordering of the array. I could replace them with directiveResolvers, but those are limited in what they can do.
But what throws me off is the following statement from the documentation:
Existing code that uses directiveResolvers could consider migrating to direct usage of mapSchema
Because they have different behavior when it comes to order of execution, I don't see how they are interchangeable.
Can someone explain if there is a way to guarantee that the Schema Directives execute in the order they are used in the schema for a particular field?
Please see this GitHub issue for an in-depth discussion.
The new API doesn't work the same way as directiveResolvers or schemaDirectives. A schema transform is applied to the entire schema before the next one, whereas with the other two all the transforms are applied to a particular field before the next field node is visited. There are two approaches to this, in my opinion:
Create a new @pipeline directive which takes a list of the names of other directives and then applies them in order, the way directiveResolvers does.
I took a slightly different route and created a new function, attachSchemaTransforms, modeled on attachDirectiveResolvers, which visits each field node and applies all of its directives in order:
import { GraphQLSchema } from 'graphql';
import { mapSchema, MapperKind, getDirectives } from '@graphql-tools/utils';

export function attachSchemaTransforms(
  schema: GraphQLSchema,
  schemaTransforms: Record<string, FieldDirectiveConfig>, // a custom config object which contains the transform and the directive name
): GraphQLSchema {
  if (typeof schemaTransforms !== 'object') {
    throw new Error(`Expected schemaTransforms to be of type object, got ${typeof schemaTransforms}`);
  }
  if (Array.isArray(schemaTransforms)) {
    throw new Error('Expected schemaTransforms to be of type object, got Array');
  }
  return mapSchema(schema, {
    [MapperKind.OBJECT_FIELD]: oldFieldConfig => {
      const fieldConfig = { ...oldFieldConfig };
      // Apply every directive found on this field, in the order getDirectives returns them.
      const directives = getDirectives(schema, fieldConfig);
      Object.keys(directives).forEach(directiveName => {
        const config = schemaTransforms[directiveName];
        if (config) {
          const { apply, name } = config;
          if (directives[name]) {
            const directiveArgs: unknown = directives[name];
            apply(fieldConfig, directiveArgs);
          }
        }
      });
      return fieldConfig;
    },
  });
}
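As a usage sketch (hedged; the map/filter behaviors below are placeholders and baseSchema is assumed to come from makeExecutableSchema), the config objects can each wrap the field's resolver, so the wrappers compose in whatever order the directives appear on the field:

const { defaultFieldResolver } = require('graphql');

const schemaTransforms = {
  map: {
    name: 'map',
    apply: (fieldConfig) => {
      const inner = fieldConfig.resolve || defaultFieldResolver;
      fieldConfig.resolve = async (source, args, context, info) => {
        const list = await inner(source, args, context, info);
        return list.map((item) => item.toUpperCase()); // placeholder mapping
      };
    },
  },
  filter: {
    name: 'filter',
    apply: (fieldConfig) => {
      const inner = fieldConfig.resolve || defaultFieldResolver;
      fieldConfig.resolve = async (source, args, context, info) => {
        const list = await inner(source, args, context, info);
        return list.filter((item) => item.length > 0); // placeholder filtering
      };
    },
  },
};

// @map @filter now maps first and then filters, and @filter @map does the reverse.
const schema = attachSchemaTransforms(baseSchema, schemaTransforms);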
I have a database with the following structure.
I'm writing a GraphQL resolver for the bottom-most node (the "rows" node).
Each "rows" node corresponds to a specific path: (Company) -> (DB) -> (Table) -> (rows)
A Query would be of the form:
{
  Company(name: "Google") {
    Database(name: "accounts") {
      Table(name: "users") {
        rows
      }
    }
  }
}
Question: How can I include/access Company.name, Database.name, Table.name information in the rows resolver so that I can determine which rows node to return?
In other words: I know I can access Table.name using parent.name, but is there a way to get parent.parent.name or parent.parent.parent.name?
If there isn't a way to access ancestor properties, should I use arguments or context to pass these properties manually into the rows resolver?
Note: I can't use the neo4j-graphql-js package.
Note: This is the first simple example I thought of and I understand there are structural problems with organizing data this way, but the question still stands.
You can extract the path from the GraphQLResolveInfo object passed to the resolver:
const { responsePathAsArray } = require('graphql')

function resolver (parent, args, context, info) {
  const path = responsePathAsArray(info.path)
}
This returns an array of the response keys (field names or aliases) and list indices along the path, e.g. ['Company', 'Database', 'Table', 'rows'] for the query above; note that it contains the response keys, not the argument values. However, you can also pass arbitrary data from a parent resolver to a child resolver:
function accountResolver (parent, args, context, info) {
  // Assuming we already have some value at parent.account and want to return that
  return {
    ...parent.account,
    message: 'It\'s a secret!',
  }
}

function userResolver (parent, args, context, info) {
  console.log(parent.message) // prints "It's a secret!"
}
Unless message matches some field name, it won't ever actually appear in your response.
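Applied to your Company/Database/Table example, a rough sketch of this second approach (getRows is a hypothetical data-access helper, and the parent resolvers simply forward the names they received as arguments):

const resolvers = {
  Query: {
    Company: (root, { name }) => ({ companyName: name }),
  },
  Company: {
    Database: (parent, { name }) => ({ ...parent, databaseName: name }),
  },
  Database: {
    Table: (parent, { name }) => ({ ...parent, tableName: name }),
  },
  Table: {
    // All three ancestor names are now available on the parent object.
    rows: (parent) => getRows(parent.companyName, parent.databaseName, parent.tableName),
  },
};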
I'm facing a problem where I need to reference a resolved field on the parent from inside __resolveType. Unfortunately, the field I need to reference did not come as part of the original API response for the parent but from another field resolver, which I would not have thought mattered, but it does, so the field is undefined.
But I need these fields (in this example, obj.barCount and obj.bazCount) to be able to make the following query, so I've hit a dead end. I need them to be available in the __resolveType function so that I can use them to determine which type to resolve when the field is defined.
Here's an example:
The graphql query I wish to be able to make:
{
  somethings {
    hello
    ... on HasBarCount {
      barCount
    }
    ... on HasBazCount {
      bazCount
    }
  }
}
Schema:
type ExampleWithBarCount implements Something & HasBarCount & Node {
  hello: String!
  barCount: Int
}

type ExampleWithBazCount implements Something & HasBazCount & Node {
  hello: String!
  bazCount: Int
}

interface Something {
  hello: String!
}

interface HasBarCount {
  barCount: Int
}

interface HasBazCount {
  bazCount: Int
}
Resolvers:
ExampleWithBarCount: {
  barCount: (obj) => {
    return myApi.getBars(obj.id).length || 0
  },
},
ExampleWithBazCount: {
  bazCount: (obj) => {
    return myApi.getBazs(obj.id).length || 0
  },
},
Problem:
Something: {
  __resolveType(obj) {
    console.log(obj.barCount) // Problem: this is always undefined
    console.log(obj.bazCount) // Problem: this is always undefined
    if (obj.barCount) {
      return 'ExampleWithBarCount';
    }
    if (obj.bazCount) {
      return 'ExampleWithBazCount';
    }
    return null;
  }
}
Any ideas for alternative solutions, or what am I missing?
Here's a little more about the use case.
In the database we have a table "entity". This table is very simple; the only really important columns are id, parent_id, name, and type, and you can of course attach some additional metadata to it.
Like "entity" records, types are created dynamically from within the backend management system, and afterwards you can assign a type to your concrete entity.
The primary purpose of "entity" is to establish a hierarchy / tree of nested entities by parent_id and with different "types" (in the type column of entity). There will be some different meta data, but let's not focus on that.
Note: entity can be named anything, and the type can be anything.
In the API we then have an endpoint where we can get all entities with a specific type (side note: in addition to the single type on an entity, we also have an endpoint to get all entities by their taxonomy/term).
In the first implementation I modeled the schema by adding all the "known" types I had in my specification from the UX'er during development. The tree of entities could look like, e.g.:
Company (or Organization, ..., Corporation... etc)
Branch (or Region, ..., etc)
Factory (or Building, facility, ..., etc)
Zone (or Room, ..., etc)
But this hierarchy is just one way it could be done. The naming of each might be totally different, and you might move some of them a level up or down or not have them at all, depending on the use case.
The only thing that is set in stone is that they share the same database table, will have the type column/field defined, and may or may not have children. The bottom layer in the hierarchy will not have children, but machines instead. The rest is just different metadata, which I think we should ignore so as not to complicate this further.
As you can see, the hierarchy needs to be very flexible and dynamic, so I realized the solution I had begun on wasn't a great one.
At the lowest level, "Zone" in this case, there will need to be a "machines" field, which should return a list of machines (they are in a "machines" table in the db and not part of the hierarchy, but simply related with an "entity_id" on the "machines" table).
I had schema types and resolvers for all in the above hierarchy: Organization, Branch, Factory, Zone etc, but I was for the most part just repeating myself, so I thought I could turn to interfaces to try to generalize this more.
So instead of doing
{
companies{
name
branchCount
buildingCount
zoneCount
branches {
name
buildingCount
zoneCount
buildings {
name
zoneCount
zones {
name
machines {
name
}
}
}
}
}
}
And having to add schema/resolvers for all the different names of the entities, I thought this would work:
{
  entities(type: "companies") {
    name
    ... on HasEntityCount {
      branchCount: entityCount(type: "branch")
      buildingCount: entityCount(type: "building")
      zoneCount: entityCount(type: "zone")
    }
    ... on HasSubEntities {
      entities(type: "branch") {
        name
        ... on HasEntityCount {
          buildingCount: entityCount(type: "building")
          zoneCount: entityCount(type: "zone")
        }
        ... on HasMachineCount {
          machineCount
        }
        ... on HasSubEntities {
          entities(type: "building") {
            name
            ... on HasEntityCount {
              zoneCount: entityCount(type: "zone")
            }
            ... on HasMachineCount {
              machineCount
            }
            ... on HasSubEntities {
              entities(type: "zone") {
                name
                ... on HasMachines {
                  machines
                }
              }
            }
          }
        }
      }
    }
  }
}
With the interfaces being:
interface HasMachineCount {
  machineCount: Int
}

interface HasEntityCount {
  entityCount(type: String): Int
}

interface HasSubEntities {
  entities(type: String): [Entity!]
}

interface HasMachines {
  machines: [Machine!]
}

interface Entity {
  id: ID!
  name: String!
  type: String!
}
The below works, but I really want to avoid a single type with lots of optional / null fields:
type Entity {
  id: ID!
  name: String!
  type: String!
  # Below is what I want to avoid, by using interfaces
  # Imagine how this would grow
  entityCount
  machineCount
  entities
  machines
}
In my own logic I don't care what the entities are called, only what fields are expected. I'd like to avoid a single Entity type with a lot of nullable fields on it, so I thought interfaces or unions would be helpful for keeping things separated. I ended up with HasSubEntities, HasEntityCount, HasMachineCount, and HasMachines, since the bottom entity will not have entities below it, and only the bottom entity will have machines. But in the real code there would be many more than these, and it could end up with a lot of optional fields if I don't utilize interfaces or unions in some way.
There are two separate problems here.
One, GraphQL resolves fields in a top-down fashion. Parent fields are always resolved before any child fields. So it's never possible to access the value that a field resolved to from the parent field's resolver (or a "sibling" field's resolver). In the case of fields with an abstract type, this applies to type resolvers as well: a field's type will be resolved before any child resolvers are called. The only way to get around this issue is to move the relevant logic from the child resolver into the parent resolver.
Two, assuming the somethings field has the type Something (or [Something], etc.), the query you're trying to run will never work because HasBarCount and HasBazCount are not subtypes of Something. When you tell GraphQL that a field has an abstract type (an interface or a union), you're saying that what's returned by the field could be one of several object types that will be narrowed down to exactly one object type at runtime. The possible types are either the types that make up the union, or types that implement the interface.
A union may only be made up of object types, not interfaces or other unions. Similarly, only an object type may implement an interface -- other interfaces or unions may not implement interfaces. Therefore, when using inline fragments with a field that returns an abstract type, the on condition for those inline fragments will always be an object type and must be one of the possible types for the abstract type in question.
Because this is pseudocode, it's not really clear what business rules or use case you're trying to model with this sort of schema. But I can say that there's generally no need to create an interface and have a type implement it unless you're planning on adding a field in your schema that will have that interface as its type.
Edit: At a high level, it sounds like you probably just want to do something like this:
type Query {
  entities(type: String!): [Entity!]!
}

interface Entity {
  type: String!
  # other shared entity fields
}

type EntityWithChildren implements Entity {
  type: String!
  children: [Entity!]!
}

type EntityWithModels implements Entity {
  type: String!
  models: [Model!]!
}
The type resolver needs to check whether we have models, so you'll want to make sure you fetch the related models when you fetch the entity (as opposed to fetching them inside the models resolver). Alternatively, you may be able to add some kind of column to your db that identifies an entity as the "lowest" in the hierarchy, in which case you can just use that property instead.
function resolveType (obj) {
  return obj.models ? 'EntityWithModels' : 'EntityWithChildren'
}
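For example, a hedged sketch of the root resolver eagerly attaching models (getEntitiesByType and getModelsForEntity are hypothetical data-access helpers), so that obj.models is already populated by the time the type resolver above runs:

const resolvers = {
  Query: {
    entities: async (root, { type }) => {
      const entities = await getEntitiesByType(type);
      return Promise.all(
        entities.map(async (entity) => {
          const models = await getModelsForEntity(entity.id);
          // Only attach models when present, so resolveType falls back to EntityWithChildren.
          return models.length ? { ...entity, models } : entity;
        })
      );
    },
  },
};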
Now your query looks like this:
entities {
  type
  ... on EntityWithModels {
    models { ... }
  }
  ... on EntityWithChildren {
    children {
      ... on EntityWithModels {
        models { ... }
      }
      ... on EntityWithChildren {
        # etc.
      }
    }
  }
}
The counts are a bit trickier because of the variability in the entity names and the variability in the depth of the hierarchy. I would suggest just letting the client figure out the counts once it gets the whole graph from the server. If you really want to add count fields, you'd have to have fields like childrenCount, grandchildrenCount, etc. Then the only way to populate those fields correctly would be to fetch the whole graph at the root.
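For instance, a small client-side helper along these lines (a sketch that assumes the nested result shape from the query above, with type and children selected on every entity) can compute the counts after the fact:

// Recursively count descendants of a given type in the fetched entity tree.
function countDescendantsOfType(entity, type) {
  const children = entity.children || [];
  return children.reduce(
    (total, child) =>
      total + (child.type === type ? 1 : 0) + countDescendantsOfType(child, type),
    0
  );
}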
I am looking at GraphQL but am confused about why, when using a fragment as below, you have to specify the "on Character" part. Could this be anything, any name? The GraphQL documentation doesn't explain it or give it context.
query {
  leftComparison: hero(id: "1") {
    ...comparisonFields
  }
  rightComparison: hero(id: "2") {
    ...comparisonFields
  }
}

fragment comparisonFields on Character {
  name
  appearsIn
  friends {
    name
  }
}
While the example on graphql.org doesn't make this totally obvious, a fragment is always attached to some specific type (can be an object type, interface, or union). Inside the fragment, you can only use fields that exist on the type that's named; the server will check this for you (and clients are capable of checking ahead of time if they want to).
If a field returns an interface or union type, you can similarly only select fields that you know to exist (because an interface provides them), but you can attempt to match on specific types that implement the interface or are members of the union to get more data. This is frequently done with inline fragments, but since a named fragment is attached to a type, you can use named fragments as well. If the schema contains the very generic query
interface Node { id: ID! }

type Query {
  node(id: ID!): Node
}
and Character implements Node, then you can plug in the named fragment you have here
query GetCharacterDetails($id: ID!) {
  node(id: $id) {
    ...comparisonFields
  }
}