How does one organize more than a few mutations in GraphQL .NET GraphType First?

In GraphQL .NET, most of the example code has a single top-level mutation graph type with many individual mutations defined as fields on it.
Here's an example from the GraphQL .NET mutations page:
public class StarWarsSchema : Schema
{
    public StarWarsSchema(IServiceProvider provider)
        : base(provider)
    {
        Query = provider.GetRequiredService<StarWarsQuery>();
        Mutation = provider.GetRequiredService<StarWarsMutation>();
    }
}
public class StarWarsMutation : ObjectGraphType
{
    public StarWarsMutation(StarWarsData data)
    {
        Field<HumanType>(
            "createHuman",
            arguments: new QueryArguments(
                new QueryArgument<NonNullGraphType<HumanInputType>> { Name = "human" }
            ),
            resolve: context =>
            {
                var human = context.GetArgument<Human>("human");
                return data.AddHuman(human);
            });
    }
}
And that seems fine when you have 1-5 mutations, but over time in larger projects one could conceivably end up with dozens of mutations. Putting them all in one big class works, but it seems like some organization is lacking. I tried putting a child mutation ObjectGraphType into a field on the parent mutation, but I had a little trouble calling the sub-mutation; perhaps I had it configured wrong.
That just leads me to wonder: surely there are users out there with more than a dozen mutations who have organized them beyond putting everything in a single top-level mutations object.
How does one organize more than a few mutations in GraphQL .NET GraphType First?

https://graphql-dotnet.github.io/docs/getting-started/query-organization
You can "group" queries or mutations together by adding a top level field. The "trick" is to return an empty object in the resolver.
public class StarWarsSchema : Schema
{
    public StarWarsSchema(IServiceProvider provider)
        : base(provider)
    {
        Query = provider.GetRequiredService<StarWarsQuery>();
        Mutation = provider.GetRequiredService<StarWarsMutation>();
    }
}
public class StarWarsMutation : ObjectGraphType
{
    public StarWarsMutation(StarWarsData data)
    {
        Field<CharacterMutation>(
            "characters",
            resolve: context => new { });
    }
}
public class CharacterMutation : ObjectGraphType
{
    public CharacterMutation(StarWarsData data)
    {
        Field<HumanType>(
            "createHuman",
            arguments: new QueryArguments(
                new QueryArgument<NonNullGraphType<HumanInputType>> { Name = "human" }
            ),
            resolve: context =>
            {
                var human = context.GetArgument<Human>("human");
                return data.AddHuman(human);
            });
    }
}
This organization is reflected in how the mutations (or queries) are called: the client selects the grouping field ("characters" above) and then the actual mutation inside it. If you simply want them to appear externally as a flat list (which would otherwise equate to one giant file), you can also break the mutation type up into as many files as you want using partial classes, as sketched below.
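A minimal sketch of the partial-class approach (the file split and the RegisterHumanMutations/RegisterDroidMutations helper names are my own illustration, not from the GraphQL .NET docs): each area of the API lives in its own file, while the schema still sees a single flat mutation type.

// StarWarsMutation.cs
public partial class StarWarsMutation : ObjectGraphType
{
    public StarWarsMutation(StarWarsData data)
    {
        RegisterHumanMutations(data);
        // RegisterDroidMutations(data); ... one call per file/area
    }
}

// StarWarsMutation.Humans.cs
public partial class StarWarsMutation
{
    private void RegisterHumanMutations(StarWarsData data)
    {
        Field<HumanType>(
            "createHuman",
            arguments: new QueryArguments(
                new QueryArgument<NonNullGraphType<HumanInputType>> { Name = "human" }),
            resolve: context => data.AddHuman(context.GetArgument<Human>("human")));
    }
}

The compiler merges the parts into one class, so clients still see a flat list of mutation fields; only the source layout changes.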

Related

How to access query path properties in a resolver? GraphQL

I have a database with the following structure.
I'm writing a GraphQL resolver for the bottom-most node (the "rows" node).
Each "rows" node corresponds to a specific path: (Company) -> (DB) -> (Table) -> (rows).
A Query would be of the form:
{
  Company(name: "Google") {
    Database(name: "accounts") {
      Table(name: "users") {
        rows
      }
    }
  }
}
Question: How can I include/access Company.name, Database.name, Table.name information in the rows resolver so that I can determine which rows node to return?
In other words: I know I can access Table.name using parent.name, but is there a way to get parent.parent.name or parent.parent.parent.name?
If there isn't a way to access ancestor properties, should I use arguments or context to pass these properties manually into the rows resolver?
Note: I can't use the neo4j-graphql-js package.
Note: This is the first simple example I thought of and I understand there are structural problems with organizing data this way, but the question still stands.
You can extract the path from the GraphQLResolveInfo object passed to the resolver:
const { responsePathAsArray } = require('graphql')
function resolver (parent, args, context, info) {
  // Convert the linked-list info.path into an array of response keys / list indices
  const path = responsePathAsArray(info.path)
}
This returns an array of the response keys and list indices along the path (field aliases are used where present), e.g. ['Company', 'Database', 'Table', 'rows'] for the query above. However, you can also pass arbitrary data from a parent resolver to a child resolver.
function accountResolver (parent, args, context, info) {
  // Assuming we already have some value at parent.account and want to return that
  return {
    ...parent.account,
    message: 'It\'s a secret!',
  }
}

function userResolver (parent, args, context, info) {
  console.log(parent.message) // prints "It's a secret!"
}
Unless message matches some field name, it won't ever actually appear in your response.

How to resolve the right type in GraphQL when using interfaces and inline fragments

I'm facing a problem where I need to reference a resolved field on the parent from inside __resolveType. Unfortunately the field I need to reference did not come as part of the original API response for the parent, but from another field resolver, which I would not have thought mattered, but indeed it does, so it is undefined.
But I need these fields (in this example obj.barCount and obj.bazCount) to be able to make the following query, so I've hit a dead end. I need them to be available in the resolveType function so that I can use them to determine which type to resolve when the field is defined.
Here's an example:
The graphql query I wish to be able to make:
{
  somethings {
    hello
    ... on HasBarCount {
      barCount
    }
    ... on HasBazCount {
      bazCount
    }
  }
}
Schema:
type ExampleWithBarCount implements Something & HasBarCount & Node {
  hello: String!
  barCount: Int
}

type ExampleWithBazCount implements Something & HasBazCount & Node {
  hello: String!
  bazCount: Int
}

interface Something {
  hello: String!
}

interface HasBarCount {
  barCount: Int
}

interface HasBazCount {
  bazCount: Int
}
Resolvers:
ExampleWithBarCount: {
  barCount: (obj) => {
    return myApi.getBars(obj.id).length || 0
  }
},
ExampleWithBazCount: {
  bazCount: (obj) => {
    return myApi.getBazs(obj.id).length || 0
  }
}
Problem:
Something: {
  __resolveType(obj) {
    console.log(obj.barCount) // Problem: this is always undefined
    console.log(obj.bazCount) // Problem: this is always undefined
    if (obj.barCount) {
      return 'ExampleWithBarCount';
    }
    if (obj.bazCount) {
      return 'ExampleWithBazCount';
    }
    return null;
  }
}
Any ideas for alternative solutions, or what am I missing?
Here's a little more about the use case.
In the database we have a table "entity". This table is very simple; the only really important columns are id, parent_id, name, and type, and then you can of course attach some additional metadata to it.
Like "entity" itself, types are created dynamically from within the backend management system, and afterwards you can assign a type to your concrete entity.
The primary purpose of "entity" is to establish a hierarchy / tree of nested entities via parent_id and with different "types" (in the type column of entity). There will be some different metadata, but let's not focus on that.
Note: an entity can be named anything, and the type can be anything.
In the API we then have an endpoint where we can get all entities with a specific type (sidenote: in addition to the single type on an entity, we also have an endpoint to get all entities by their taxonomy/term).
In the first implementation I modeled the schema by adding all the "known" types I had in my specification from the UX'er during development. The tree of entities could look like, e.g.:
Company (or Organization, ..., Corporation... etc)
Branch (or Region, ..., etc)
Factory (or Building, facility, ..., etc)
Zone (or Room, ..., etc)
But this hierarchy is just one way it could be done. The naming of each might be totally different, and you might move some of them a level up or down or not have them at all, depending on the use case.
The only thing that is set in stone is that they share the same database table, will have the type column/field defined, and may or may not have children. The bottom layer in the hierarchy will not have children, but machines instead. The rest is just different metadata, which I think we should ignore so as not to complicate this further.
As you can see the hierarchy needs to be very flexible and dynamic, so I realized it wasn't a great solution I had begun on.
At the lowest level ("Zone" in this case), there will need to be a "machines" field, which should return a list of machines (they are in a "machines" table in the db, and not part of the hierarchy, but simply related with an "entity_id" on the "machines" table).
I had schema types and resolvers for all in the above hierarchy: Organization, Branch, Factory, Zone etc, but I was for the most part just repeating myself, so I thought I could turn to interfaces to try to generalize this more.
So instead of doing
{
  companies {
    name
    branchCount
    buildingCount
    zoneCount
    branches {
      name
      buildingCount
      zoneCount
      buildings {
        name
        zoneCount
        zones {
          name
          machines {
            name
          }
        }
      }
    }
  }
}
And having to add schema/resolvers for all the different namings of the entities, I thought this would work:
{
  entities(type: "companies") {
    name
    ... on HasEntityCount {
      branchCount: entityCount(type: "branch")
      buildingCount: entityCount(type: "building")
      zoneCount: entityCount(type: "zone")
    }
    ... on HasSubEntities {
      entities(type: "branch") {
        name
        ... on HasEntityCount {
          buildingCount: entityCount(type: "building")
          zoneCount: entityCount(type: "zone")
        }
        ... on HasMachineCount {
          machineCount
        }
        ... on HasSubEntities {
          entities(type: "building") {
            name
            ... on HasEntityCount {
              zoneCount: entityCount(type: "zone")
            }
            ... on HasMachineCount {
              machineCount
            }
            ... on HasSubEntities {
              entities(type: "zone") {
                name
                ... on HasMachines {
                  machines
                }
              }
            }
          }
        }
      }
    }
  }
}
With the interfaces being:
interface HasMachineCount {
  machineCount: Int
}

interface HasEntityCount {
  entityCount(type: String): Int
}

interface HasSubEntities {
  entities(type: String): [Entity!]
}

interface HasMachines {
  machines: [Machine!]
}

interface Entity {
  id: ID!
  name: String!
  type: String!
}
The below works, but I really want to avoid a single type with lots of optional / null fields:
type Entity {
  id: ID!
  name: String!
  type: String!
  # Below is what I want to avoid, by using interfaces
  # Imagine how this would grow
  entityCount(type: String): Int
  machineCount: Int
  entities(type: String): [Entity!]
  machines: [Machine!]
}
In my own logic I don't care what the entities are called, only what fields are expected. I'd like to avoid a single Entity type with a lot of nullable fields on it, so I thought interfaces or unions would be helpful for keeping things separated. I ended up with HasSubEntities, HasEntityCount, HasMachineCount and HasMachines, since the bottom entity will not have entities below it, and only the bottom entity will have machines. But in the real code there would be much more than these two, and it could end up with a lot of optional fields if I don't utilize interfaces or unions in some way.
There are two separate problems here.
One, GraphQL resolves fields in a top-down fashion. Parent fields are always resolved before any child fields, so it's never possible to access the value a field resolved to from the parent field's resolver (or a "sibling" field's resolver). In the case of fields with an abstract type, this applies to type resolvers as well: a field's type will be resolved before any child resolvers are called. The only way to get around this issue is to move the relevant logic from the child resolver into the parent resolver.
Two, assuming the somethings field has the type Something (or [Something], etc.), the query you're trying to run will never work because HasBarCount and HasBazCount are not subtypes of Something. When you tell GraphQL that a field has an abstract type (an interface or a union), you're saying that what's returned by the field could be one of several object types that will be narrowed down to exactly one object type at runtime. The possible types are either the types that make up the union, or types that implement the interface.
A union may only be made up of object types, not interfaces or other unions. Similarly, only an object type may implement an interface -- other interfaces or unions may not implement interfaces. Therefore, when using inline fragments with a field that returns an abstract type, the on condition for those inline fragments will always be an object type and must be one of the possible types for the abstract type in question.
Because this is pseudocode, it's not really clear what business rules or use case you're trying to model with this sort of schema. But I can say that there's generally no need to create an interface and have a type implement it unless you're planning on adding a field in your schema that will have that interface as its type.
Edit: At a high level, it sounds like you probably just want to do something like this:
type Query {
  entities(type: String!): [Entity!]!
}

interface Entity {
  type: String!
  # other shared entity fields
}

type EntityWithChildren implements Entity {
  type: String!
  children: [Entity!]!
}

type EntityWithModels implements Entity {
  type: String!
  models: [Model!]!
}
The type resolver needs to check for whether we have models, so you'll want to make sure you fetch the related models when you fetch the entity (as opposed to fetching them inside the models resolver). Alternatively, you may be able to add some kind of column to your db that identifies an entity as the "lowest" in the hierarchy, in which case you can just use this property instead.
function resolveType (obj) {
  return obj.models ? 'EntityWithModels' : 'EntityWithChildren'
}
Now your query looks like this:
entities {
  type
  ... on EntityWithModels {
    models { ... }
  }
  ... on EntityWithChildren {
    children {
      ... on EntityWithModels {
        models { ... }
      }
      ... on EntityWithChildren {
        # etc.
      }
    }
  }
}
The counts are a bit trickier because of the variability in the entity names and the variability in the depth of the hierarchy. I would suggest just letting the client figure out the counts once it gets the whole graph from the server. If you really want to add count fields, you'd have to have fields like childrenCount, grandchildrenCount, etc. Then the only way to populate those fields correctly would be to fetch the whole graph at the root.

GraphQL queries for Gatsby - Contentful setup with a flexible content model

I have a gatsby site with the contentful plugin and graphql queries (setup is working).
[EDIT]
My Gatsby setup pulls data dynamically using the createPages Node API and populates my template component, whose root GraphQL query I've shared below. I can create multiple pages with this setup if the pages on Contentful follow the structure given in the query below.
[/EDIT]
My question is about a limitation I seem to have come across, or perhaps I just don't know enough GraphQL to understand this yet.
My high level content model 'BasicPageLayout' consists of references to other content types through the field 'Section'. So, it's flexible in terms of which content types are contained in the 'BasicPageLayout' and the order in which they are added.
Root page query
export const pageQuery = graphql`
  query basicPageQuery {
    contentfulBasicPageLayout(pageName: {eq: "Home"}) {
      heroSection {
        parent {
          id
        }
        ...HeroFields
      }
      section1 {
        parent {
          id
        }
        ...ContentText
      }
      section2 {
        parent {
          id
        }
        ...ContentTextOverMedia
      }
      section3 {
        parent {
          id
        }
        ...ContentTextAndImage
      }
      section4 {
        parent {
          id
        }
        ...ContentText
      }
    }
  }
`
The content type fragments all live in the respective UI components.
The above query and setup are working.
Now, I have "Home" hard-coded because I'm having trouble creating a flexible, reusable query. I'm taking advantage of Contentful's flexible nature when creating the models, but haven't found a way to create that flexibility in the GraphQL query for it.
What I do know:
The GraphQL page query is extracted and run at build time, so everything that needs to be fetched should be in that query. It can't be 'dynamic'.
Issue: The 'Section' fields in the basicPageLayout can link to any content type. So we can mix and match the granular level content types. How do I add the content type fragment (like ContentTextAndImage vs ContentText) so it is appropriate for that section instance ('Section' field in the query)?
In other words
I'd like the root query to get 'Home' data, which might have 4 sections, all of type ContentTextOverMedia,
as well as 'About' data, which might also have 4 sections but with alternating types: ContentText and ContentTextAndImage.
This is the goal because I want to create content (pages) by mixing and matching content types on Contentful, without needing to update the code each time a new page is created, which is why Contentful is useful and was picked in the first place.
My ideas so far:
A. Run two queries in series. The first fetches the parent.id on each section, which holds the content type info. The second fetches the data using the appropriate fragment.
B. Fetch the JSON of the basicPageLayouts content instance (such as 'Home') separately through the Contentful API, and use that JSON to create the GraphQL string for each instance (so a different layout for Home, About, and so on).
This needs more experimentation; I'm not sure it's viable, and it could also be more complex than it needs to be.
So, please share thoughts on the above paths that I'm exploring or another solution that I haven't considered using graphql or gatsby's features.
This is my first question on SO btw, I've spent some time on refining it and trying to follow the guidelines but please do give me feedback in comments so I can improve even if you don't have an answer to my question.
Thanks in advance.
If I understood correctly you want to create pages dynamically from the data coming from Contentful.
You can achieve this using the Gatsby Node API, specifically createPage.
In your gatsby-node.js file you can have something like this:
const fs = require('fs-extra')
const path = require('path')

exports.createPages = ({graphql, boundActionCreators}) => {
  const {createPage} = boundActionCreators
  return new Promise((resolve, reject) => {
    const landingPageTemplate = path.resolve('src/templates/landing-page.js')
    resolve(
      graphql(`
        {
          allContentfulBasicPageLayout {
            edges {
              node {
                pageName
              }
            }
          }
        }
      `).then((result) => {
        if (result.errors) {
          reject(result.errors)
        }
        result.data.allContentfulBasicPageLayout.edges.forEach((edge) => {
          createPage({
            path: `${edge.node.pageName}`,
            component: landingPageTemplate,
            context: {
              pageName: edge.node.pageName // this will be passed to each page gatsby creates
            }
          })
        })
        return
      })
    )
  })
}
Now in your src/templates/landing-page.js
import React from 'react'

const LandingPage = ({data}) => {
  return (<div>Add your html here</div>)
}

export default LandingPage
export const pageQuery = graphql`
  query basicPageQuery($pageName: String!) {
    contentfulBasicPageLayout(pageName: {eq: $pageName}) {
      heroSection {
        parent {
          id
        }
        ...HeroFields
      }
      section1 {
        parent {
          id
        }
        ...ContentText
      }
      section2 {
        parent {
          id
        }
        ...ContentTextOverMedia
      }
      section3 {
        parent {
          id
        }
        ...ContentTextAndImage
      }
      section4 {
        parent {
          id
        }
        ...ContentText
      }
    }
  }
`
Note the $pageName param: that's what was passed in the component context when creating the page.
This way you will end up creating as many pages as you want.
Please note: the react part of the code was not tested but I hope you get the idea.
Update:
To have a flexible query, instead of having your content types as single reference fields, you can have one field called sections, and you can add the sections you want there in the order you desire.
Your query will then look like this:
export const pageQuery = graphql`
  query basicPageQuery($pageName: String!) {
    contentfulBasicPageLayout(pageName: {eq: $pageName}) {
      sections {
        ... on ContentfulHeroFields {
          internal {
            type
          }
        }
      }
    }
  }
`

Why is QueryContainer not updating from a Descriptor? NEST C#

Hi, I have the following descriptors in my NEST query...
queryContainer.DateRange(b => dateRangeDescriptor);
queryContainer.MatchPhrase(b => matchPhraseDescriptor);
And finally I use this QueryContainerDescriptor in the following BoolQueryDescriptor
boolDescriptor.Must(q => queryContainer);
The problem is that although I can see values in my dateRangeDescriptor as well as matchPhraseDescriptor, they are not available inside queryContainer.
Not sure what is going wrong here.
Must has the following overloads (in NEST 2.x)
public BoolQueryDescriptor<T> Must(
    params Func<QueryContainerDescriptor<T>, QueryContainer>[] queries)
{
    // impl
}

public BoolQueryDescriptor<T> Must(
    IEnumerable<Func<QueryContainerDescriptor<T>, QueryContainer>> queries)
{
    // impl
}

public BoolQueryDescriptor<T> Must(
    params QueryContainer[] queries)
{
    // impl
}
So you need to pass a collection of queries to apply multiple must clauses rather than adding them all to one QueryContainer.
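For example, a minimal sketch of passing each clause as its own selector (assuming NEST 2.x and a hypothetical MyDocument class with Created and Title properties):

var response = client.Search<MyDocument>(s => s
    .Query(q => q
        .Bool(b => b
            .Must(
                // each lambda becomes its own must clause
                m => m.DateRange(r => r
                    .Field(f => f.Created)
                    .GreaterThanOrEquals(DateTime.UtcNow.AddDays(-7))),
                m => m.MatchPhrase(mp => mp
                    .Field(f => f.Title)
                    .Query("query container"))))));

Alternatively, two QueryContainer instances can be combined with the && operator, which NEST turns into a bool query with both as must clauses.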

Using eager loading with specification pattern

I've implemented the specification pattern with Linq as outlined here https://www.packtpub.com/article/nhibernate-3-using-linq-specifications-data-access-layer
I now want to add the ability to eager load and am unsure about the best way to go about it.
The generic repository class in the linked example:
public IEnumerable<T> FindAll(Specification<T> specification)
{
    var query = GetQuery(specification);
    return Transact(() => query.ToList());
}

public T FindOne(Specification<T> specification)
{
    var query = GetQuery(specification);
    return Transact(() => query.SingleOrDefault());
}

private IQueryable<T> GetQuery(Specification<T> specification)
{
    return session.Query<T>()
        .Where(specification.IsSatisfiedBy());
}
And the specification implementation:
public class MoviesDirectedBy : Specification<Movie>
{
    private readonly string _director;

    public MoviesDirectedBy(string director)
    {
        _director = director;
    }

    public override Expression<Func<Movie, bool>> IsSatisfiedBy()
    {
        return m => m.Director == _director;
    }
}
This is working well; I now want to add the ability to eager load. I understand NHibernate eager loading can be done by using Fetch on the query.
What I am looking for is whether to encapsulate the eager loading logic within the specification or to pass it into the repository, and also the Linq/expression tree syntax required to achieve this (i.e. an example of how it would be done).
A possible solution would be to extend the Specification class to add:
public virtual IEnumerable<Expression<Func<T, object>>> FetchRelated
{
    get
    {
        return Enumerable.Empty<Expression<Func<T, object>>>();
    }
}
And change GetQuery to something like:
return specification.FetchRelated.Aggregate(
    session.Query<T>().Where(specification.IsSatisfiedBy()),
    (current, related) => current.Fetch(related));
Now all you have to do is override FetchRelated when needed
public override IEnumerable<Expression<Func<Movie, object>>> FetchRelated
{
    get
    {
        return new Expression<Func<Movie, object>>[]
        {
            m => m.RelatedEntity1,
            m => m.RelatedEntity2
        };
    }
}
An important limitation of this implementation I just wrote is that you can only fetch entities that are directly related to the root entity.
An improvement would be to support arbitrary levels (using ThenFetch), which would require some changes in the way we work with generics (I used object to allow combining different entity types easily). A sketch of what multi-level fetching looks like with NHibernate's LINQ provider follows.
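For reference, a minimal sketch of multi-level eager loading with NHibernate's LINQ provider, assuming hypothetical Movie -> Studio -> Address associations (FetchMany/ThenFetchMany are the collection equivalents):

using NHibernate.Linq;

var movies = session.Query<Movie>()
    .Where(m => m.Director == "Ridley Scott")
    .Fetch(m => m.Studio)         // Movie -> Studio (many-to-one)
    .ThenFetch(s => s.Address)    // Studio -> Address, one level deeper
    .ToList();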
You wouldn't want to put the Fetch() call into the specification, because it's not needed there. A specification is just for limiting the data, so it can be shared across many different parts of your code; but those other parts could have drastically different needs in what data they want to present to the user, which is why you would add your Fetch statements at those points (one way to do that is sketched below).
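A minimal sketch of that alternative, built on the repository from the question (the fetchPaths parameter and the repository variable at the call sites are illustrative assumptions, not part of the original code): the specification stays purely about filtering, and each call site states what it wants eagerly loaded.

public IEnumerable<T> FindAll(
    Specification<T> specification,
    params Expression<Func<T, object>>[] fetchPaths)
{
    // Start from the filtered query and apply one Fetch per requested path.
    var query = session.Query<T>().Where(specification.IsSatisfiedBy());
    query = fetchPaths.Aggregate(query, (current, path) => current.Fetch(path));
    return Transact(() => query.ToList());
}

// One screen needs the related entity eagerly loaded...
var withRelated = repository.FindAll(new MoviesDirectedBy("Ridley Scott"), m => m.RelatedEntity1);

// ...while another is happy with lazy loading.
var plain = repository.FindAll(new MoviesDirectedBy("Ridley Scott"));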
