How would you scan a schema for missing resolvers on queries and non-scalar fields?
I'm trying to work with a dynamic schema, so I need to be able to test this programmatically. I've been browsing graphql-tools for a few hours trying to find a way to do this, but I'm getting nowhere...
checkForResolveTypeResolver - this only applies to the resolveType resolver of interfaces and unions
I can't find a way to know when the defaultFieldResolver is applied
I tried working with custom directives to add @requiredResolver to help identify those fields, but custom directives are far from being fully supported:
introspection & directives
no graphql-js directive handler (you can work around this with graphql-tools, though)
Any help is appreciated!
Given an instance of GraphQLSchema (i.e. what's returned by makeExecutableSchema) and your resolvers object, you can just check it yourself. Something like this should work:
const { isObjectType, isWrappingType, isLeafType } = require('graphql')

function assertAllResolversDefined (schema, resolvers) {
  // Loop through all the types in the schema
  const typeMap = schema.getTypeMap()
  for (const typeName in typeMap) {
    const type = schema.getType(typeName)
    // We only care about ObjectTypes
    // Note: this will include Query, Mutation and Subscription
    if (isObjectType(type) && !typeName.startsWith('__')) {
      // Now loop through all the fields in the object
      const fieldMap = type.getFields()
      for (const fieldName in fieldMap) {
        const field = fieldMap[fieldName]
        let fieldType = field.type
        // "Unwrap" the type in case it's a list or non-null
        while (isWrappingType(fieldType)) {
          fieldType = fieldType.ofType
        }
        // Only check fields that don't return scalars or enums
        // If you want to check *only* non-scalars, use isScalarType
        if (!isLeafType(fieldType)) {
          if (!resolvers[typeName]) {
            throw new Error(
              `Type ${typeName} in schema but not in resolvers map.`
            )
          }
          if (!resolvers[typeName][fieldName]) {
            throw new Error(
              `Field ${fieldName} of type ${typeName} in schema but not in resolvers map.`
            )
          }
        }
      }
    }
  }
}
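For example, you could run the check right after building the schema. A minimal sketch, assuming typeDefs and resolvers are defined elsewhere:

const { makeExecutableSchema } = require('graphql-tools')

const schema = makeExecutableSchema({ typeDefs, resolvers })
// Throws if any non-scalar, non-enum field is missing from the resolvers map
assertAllResolversDefined(schema, resolvers)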
I am testing the PermissionDetail component, which has a GraphQL fragment that provides the data for a node of the PermissionTable component. I am getting a Flow type error on the line where I get mock data from the query: const permissionDetail = data.viewPermissionScheme?.grantGroups[0].grantHolders?.edges[0].node.permission;.
Component hierarchy:
App -> PermissionTable (Paginated component fragment) -> PermissionDetail (fragment)
const TestRenderer = () => {
  const data = useLazyLoadQuery<examplesPermissionQuery>(
    graphql`
      query examplesPermissionQuery @relay_test_operation {
        viewPermission(id: "test-scheme-id") {
          ... on PermissionView {
            groups {
              holders(first: 10) {
                edges {
                  node {
                    permission {
                      ...permissionDetailsFragment
                    }
                  }
                }
              }
            }
          }
        }
      }
    `,
    {},
  );
  // Getting Flow type error here: Cannot get `data.viewPermission?.groups[0]` because an index signature declaring the expected key / value type is missing in null or undefined [1]
  const permissionDetail =
    data.viewPermissionScheme?.grantGroups[0].grantHolders?.edges[0].node.permission;
  return permissionDetail ? (<PermissionDetails permissionDetail={permissionDetail} />) : null;
};
What is the correct way to test such components? I am new to Flow, GraphQL, and Relay, so I need to understand the best way to test this.
I think the error is simply that data.viewPermission?.groups can be null or undefined, so you are not allowed to access any index on this property. One way to fix this is to use optional chaining for the index access as well: data.viewPermission?.groups?.[0].
You could also make groups non-nullable in your GraphQL schema. Some people like a lot of nullable fields because that allows the server to return as much partial data as possible in the case of an error. But for the developer this means that every field has to be checked for null.
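For example, using the field names from the query above, a sketch that guards every nullable level of the chain might look like this:

// Optional chaining also works for index access: ?.[0] rather than ?[0]
const permissionDetail =
  data.viewPermission?.groups?.[0]?.holders?.edges?.[0]?.node?.permission;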
I am using graphql-tools#v6 and I have implemented two directives, @map and @filter. My goal is to use them like a map and filter pipeline. In some cases I want to map before filtering, and in other cases vice versa. The directives are implemented using the Schema Directives API and they work as expected when only one directive is applied.
However, if I use them together, then they always execute in one specific order which doesn't match how they are declared in the schema.
For example
directive @map on FIELD_DEFINITION
directive @filter on FIELD_DEFINITION

# usage
type MyType {
  list1: [String!]! @map @filter
  list2: [String!]! @filter @map
}
In this case, either both fields are mapped and then filtered or vice-versa. The order is controlled by how I pass them in schemaTransforms property.
const schema = makeExecutableSchema({
  schemaTransforms: [mapDirective, filterDirective], // vs [filterDirective, mapDirective]
});
I believe that since these transforms are passed as an array, their order of execution depends on the order of the array. I could replace them with directiveResolvers, but those are limited in what they can do.
But what throws me off is the following statement from the documentation:
Existing code that uses directiveResolvers could consider migrating to direct usage of mapSchema
Because they have different behavior when it comes to order of execution, I don't see how they are interchangeable.
Can someone explain if there is a way to guarantee that the Schema Directives execute in the order they are used in the schema for a particular field?
Please see this GitHub issue for an in-depth discussion.
The new API doesn't work the same way as directiveResolvers or schemaDirectives. A schemaTransform is applied to the entire schema before the next one, contrary to the other two, in which all the transforms are applied to a particular field before visiting the next field node. There are two approaches to this, in my opinion:
Create a new @pipeline directive which takes a list of names of other directives and then applies them in order, like directiveResolvers.
I took a slightly different route and created a new function, attachSchemaTransforms, just like attachDirectiveResolvers, which visits each node and applies all of its directives in order:
import { GraphQLSchema } from 'graphql';
import { mapSchema, MapperKind, getDirectives } from '@graphql-tools/utils';

export function attachSchemaTransforms(
  schema: GraphQLSchema,
  schemaTransforms: Record<string, FieldDirectiveConfig>, // a custom config object which contains the transform and the directive name
): GraphQLSchema {
  if (typeof schemaTransforms !== 'object') {
    throw new Error(`Expected schemaTransforms to be of type object, got ${typeof schemaTransforms}`);
  }
  if (Array.isArray(schemaTransforms)) {
    throw new Error('Expected schemaTransforms to be of type object, got Array');
  }
  return mapSchema(schema, {
    [MapperKind.OBJECT_FIELD]: oldFieldConfig => {
      const fieldConfig = { ...oldFieldConfig };
      // Directives declared on this field, keyed by directive name
      const directives = getDirectives(schema, fieldConfig);
      Object.keys(directives).forEach(directiveName => {
        const config = schemaTransforms[directiveName];
        if (config) {
          // Apply the transform in place, passing along the directive's arguments
          const directiveArgs: unknown = directives[directiveName];
          config.apply(fieldConfig, directiveArgs);
        }
      });
      return fieldConfig;
    },
  });
}
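For illustration, a hypothetical usage sketch. Here typeDefs, wrapResolver and the no-op map/filter bodies are placeholders, and FieldDirectiveConfig is assumed to have the shape { name, apply(fieldConfig, args) }:

import { defaultFieldResolver } from 'graphql';
import { makeExecutableSchema } from '@graphql-tools/schema';

// Hypothetical helper: wraps a field's resolver with a post-processing step
const wrapResolver = (fieldConfig, fn) => {
  const { resolve = defaultFieldResolver } = fieldConfig;
  fieldConfig.resolve = async (...resolverArgs) => fn(await resolve(...resolverArgs));
};

const schemaTransforms = {
  map: {
    name: 'map',
    apply: (fieldConfig, args) => wrapResolver(fieldConfig, list => list.map(item => item /* mapping derived from args */)),
  },
  filter: {
    name: 'filter',
    apply: (fieldConfig, args) => wrapResolver(fieldConfig, list => list.filter(() => true /* predicate derived from args */)),
  },
};

// Because each wrapper is attached in declaration order, list1 (@map @filter)
// is mapped then filtered, while list2 (@filter @map) is filtered then mapped.
const schema = attachSchemaTransforms(makeExecutableSchema({ typeDefs }), schemaTransforms);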
I have a simple GraphQL query and a directive:
directive @isOwner(postID: String!) on FIELD_DEFINITION

type Query {
  post(postID: String!): Post! @isOwner(postID: postID)
}
The problem is that I'm using GQLGen to generate my boilerplate code for Go, and directives are treated differently from the input values.
This presents a unique challenge where the authorization logic is almost entirely isolated from the actual db reads, which makes the logic very inefficient, in that I have to make two database reads: one during validation and one for the actual query.
The data required for validation is also required for the db read, and I would have to edit my whole codebase to inject this data into the context.
Is there a way of passing the input arguments dynamically to the directive and having the validation done dynamically, and is it good practice in the first place?
Arguments passed to schema directives are evaluated when your schema is initially built, so they can't be dynamic. In this particular case, you don't need an argument at all -- you can just read the value of the field's arguments.
visitFieldDefinition(field) {
  const { resolve = defaultFieldResolver } = field
  field.resolve = async function (parent, args, context, info) {
    console.log(args.postID)
    return resolve.apply(this, [parent, args, context, info])
  }
}
However, if the name of the argument varies by field, you can pass that as an argument to your directive:
directive @isOwner(argName: String!) on FIELD_DEFINITION
visitFieldDefinition(field) {
  const { resolve = defaultFieldResolver } = field
  const { argName } = this.args
  field.resolve = async function (parent, args, context, info) {
    console.log(args[argName])
    return resolve.apply(this, [parent, args, context, info])
  }
}
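For completeness, a sketch of how such a visitor might be wired up with graphql-tools. The class name and the assertIsOwner check are hypothetical, not part of the question:

const { SchemaDirectiveVisitor, makeExecutableSchema } = require('graphql-tools')
const { defaultFieldResolver } = require('graphql')

class IsOwnerDirective extends SchemaDirectiveVisitor {
  visitFieldDefinition (field) {
    const { resolve = defaultFieldResolver } = field
    const { argName } = this.args
    field.resolve = async function (parent, args, context, info) {
      // assertIsOwner is a hypothetical check against your data source
      await assertIsOwner(context.user, args[argName])
      return resolve.apply(this, [parent, args, context, info])
    }
  }
}

const schema = makeExecutableSchema({
  typeDefs,
  resolvers,
  schemaDirectives: { isOwner: IsOwnerDirective },
})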
I'm building a custom directive in which I'm hoping to validate entire input objects. I'm using the INPUT_OBJECT location with the visitInputObject method on a class extending SchemaDirectiveVisitor.
Every time I run a mutation using the input type, visitInputObject does not run.
I've used the other locations/methods like visitObject and visitFieldDefinition and they work perfectly. But when trying to use input types and their methods, they do not trigger.
I've read all the available documentation I can find. Is this just not supported yet?
Some context code (not actual):
directive @validateThis on INPUT_OBJECT

input MyInputType @validateThis {
  id: ID
  someField: String
}

type Mutation {
  someMutation(myInput: MyInputType!): SomeType
}
class ValidateThisDirective extends SchemaDirectiveVisitor {
  visitInputObject(type) {
    console.log('Not triggering');
  }
}
All the visit methods of a SchemaDirectiveVisitor are run at the same time -- when the schema is built. That includes both visitInputObject and visitFieldDefinition. The difference is that when we use visitFieldDefinition, we often do it to modify the resolve function for the visited field. It's this function that's called during execution.
You use each visit method to modify the respective schema element. You can use visitInputObject to modify an input object, for example to add or remove fields from it. You cannot use it to modify the resolution logic of an output object's field. You should use visitFieldDefinition for that.
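For example, a build-time visitInputObject might look like this sketch (the description tweak is only an illustration of mutating the schema element):

class ValidateThisDirective extends SchemaDirectiveVisitor {
  visitInputObject (type) {
    // Runs once when the schema is built, never per request
    type.description = `${type.description || ''} (validated)`.trim()
  }
}

To actually validate arguments at execution time, wrap the resolver of the field that accepts the input type, as in the following: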
visitFieldDefinition(field, details) {
  const { resolve = defaultFieldResolver } = field
  field.resolve = async function (parent, args, context, info) {
    Object.keys(args).forEach(argName => {
      const argDefinition = field.args.find(a => a.name === argName)
      // Note: you may have to "unwrap" the type if it's a list or non-null
      const argType = argDefinition.type
      if (argType.name === 'InputTypeToValidate') {
        const argValue = args[argName]
        // validate here
      }
    })
    return resolve.apply(this, [parent, args, context, info]);
  }
}
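As a side note on the "unwrap" comment in that snippet, graphql-js exports getNamedType, which strips any list and non-null wrappers. A minimal sketch of the check inside the loop:

const { getNamedType } = require('graphql')

// inside the forEach above:
const argType = getNamedType(argDefinition.type) // e.g. [InputTypeToValidate!]! -> InputTypeToValidate
if (argType.name === 'InputTypeToValidate') {
  // validate args[argName] here
}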
I have an Apollo GraphQL service that delegates to an internal gRPC service. This service has an endpoint which returns a message that contains a oneof, which I'm mapping to a Union in GraphQL.
This is straightforward, but there's a fair degree of boilerplate involved when implementing the resolvers. Suppose I have the following protobuf message definition:
message MyUnionMessage {
  oneof value {
    UnionType1 type1 = 1;
    UnionType2 type2 = 3;
    UnionType3 type3 = 4;
  }
}

message UnionType1 {<type 1 props>}
message UnionType2 {<type 2 props>}
message UnionType3 {<type 3 props>}
My corresponding GraphQL schema looks something like this:
union MyUnionType = UnionType1 | UnionType2 | UnionType3

type UnionType1 {<type 1 props>}
type UnionType2 {<type 2 props>}
type UnionType3 {<type 3 props>}
In the javascript binding for gRPC, a MyUnionMessage object will have two properties: value which is a string indicating which type of value is contained, and a property named for the type. So, if I had a MyUnionMessage containing a UnionType2, for example, the object would look like this:
{
  value: 'type2',
  type2: {...}
}
This is nice for implementing __resolveType, since I can do a simple switch on the value in value, but I then have to write a resolver for all of the fields of all of the concrete types.
What I'm looking for is to be able to do something like this:
const resolvers = {
  MyUnionType: {
    __resolveType(obj) {
      switch (obj.value) {
        case 'type1': return 'UnionType1';
        case 'type2': return 'UnionType2';
        case 'type3': return 'UnionType3';
        default: return null;
      }
    },
    __resolveValue(obj) {
      return obj[obj.value];
    },
  },
};
Basically, I want to write a "resolver" at the level of the generic union (or interface) type that transforms the object before it's passed to the concrete resolver.
Is such a thing possible?
I'd wager that this sort of scenario is typically solved by transforming the data before it hits the __resolveType logic. For example, say you had a Query field that returned a list of MyUnionType. Your resolver for that field might look something like:
function resolve (arr) {
  return arr.map(obj => {
    return {
      ...obj[obj.value],
      type: obj.value // or whatever field name that won't cause a collision
    }
  })
}
You then switch on type inside of __resolveType and you're good to go. Of course, that means if you have multiple fields that return a MyUnionType, you'll want to extract that logic into a utility function that can be used by each resolver.
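That extracted helper might look something like this sketch; the helper name and the type field name are just placeholders:

// Flattens a gRPC oneof message into a shape __resolveType can switch on
function unwrapOneof (obj) {
  return {
    ...obj[obj.value],
    type: obj.value,
  }
}

// e.g. in any resolver that returns MyUnionType:
// return messages.map(unwrapOneof)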
I don't think there's really a way to do what you're trying to do with the existing API. You could, of course, do something like this:
function getUnionType (obj) {
  switch (obj.value) {
    case 'type1': return 'UnionType1';
    case 'type2': return 'UnionType2';
    case 'type3': return 'UnionType3';
    default: {
      throw new Error(`Unrecognized type ${obj.value}`)
    }
  }
}
const resolvers = {
  MyUnionType: {
    __resolveType(obj) {
      const type = getUnionType(obj)
      Object.assign(obj, obj[obj.value])
      return type
    },
  },
};
This works, but keep in mind it is a bit fragile, since it assumes __resolveType will always get the same root value as the resolve function, which could hypothetically change in the future.