Apollo-tooling: including a defined type in another defined type - graphql

This may be a basic GraphQL question, or it may be related to Apollo Tooling.
I am trying to use Apollo Tooling to generate typescript types from my client side schema. I have a NavItem type which looks like this:
type NavItem {
id: ID!
to: String!
icon: String!
text: String!
highlight: String!
children: [NavItemChild]
}
type NavItemChild {
id: ID!
to: String!
icon: String!
text: String!
highlight: String!
}
Basically, a NavItem can have multiple NavItemChildren. When I go to generate types using apollo codegen:generate src/graphql/types --target=typescript --outputFlat, I get an error:
Field "children" of type "[NavItemChild]" must have a selection of subfields. Did you mean "children { ... }"?
What am I doing wrong and how should I correct it?

The problem was in the query I was trying to generate types for. It looks, in part, like this:
links {
to,
icon,
text,
highlight,
children,
}
Since children is declared as a non-scalar type (unlike String, Int, etc.), we are expected to specify the subfields we want back. Therefore, changing it to
links {
to,
icon,
text,
highlight,
children {
to,
icon,
text,
highlight,
}
}
works fine.

Related

Graphql - How to include schema from other types

Let us say I have the following type:
type Foo {
id: ID!
field1: String
}
Now, I wish to define another type, which includes the earlier type. Something like this:
type Bar {
...Foo,
field2: String
}
How do I achieve the above in graphql? I want to basically first create a type, and then include that type in the definition of other types so that I don't have to type all the attributes multiple times.
I am using Amplify / AWS AppSync, so if there's any special directive that I could use, that would also be helpful.
GraphQL has the concept of interfaces for this. AppSync, AWS's GraphQL implementation, supports interfaces.
[Edit:] GraphQL does not support "...spread" syntax for interfaces; fields are defined explicitly. Spread syntax does figure in GraphQL, but in the form of Fragments, reusable units of fields for reducing repetition in queries (see the fragment sketch after the interface definitions below).
interface Character {
id: ID!
name: String!
friends: [Character]
appearsIn: [Episode]!
}
type Human implements Character {
id: ID!
name: String!
friends: [Character]
appearsIn: [Episode]!
starships: [Starship]
totalCredits: Int
}
type Droid implements Character {
id: ID!
name: String!
friends: [Character]
appearsIn: [Episode]!
primaryFunction: String
}
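For contrast, fragment spreads live in queries, not in the schema. A minimal sketch, assuming a hero root field that returns a Character:
fragment CharacterFields on Character {
  id
  name
}
query Heroes {
  hero {
    ...CharacterFields
    friends {
      ...CharacterFields
    }
  }
}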
Amplify, which automagically creates AppSync schemas, resolvers and data sources, is apparently a more difficult story. The amplify-cli repo has an open feature request, Does the GraphQL Transformer support interfaces?. I am no Amplify expert, but a quick look at the loooong feature request comment thread suggests the answer for Amplify is "not out-of-the-box", but "maybe works in narrow circumstances or with advanced customization".

FaunaDB - How to bulk update list of entries within single graphQL mutation?

I want to bulk update a list of entries with a GraphQL mutation in FaunaDB.
The input data is a list of coronavirus cases from an external source and will be updated frequently. The mutation should update existing entries if the entry name is already present in the collection and create new ones if it is not.
Current GRAPHQL MUTATION
mutation UpdateList($data: ListInput!) {
updateList(id: "260351229231628818", data: $data) {
title
cities {
data {
name
infected
}
}
}
}
GRAPHQL VARIABLES
{
"data": {
"title": "COVID-19",
"cities": {
"create": [
{
"id": 22,
"name": "Warsaw",
"location": {
"create": {
"lat": 52.229832,
"lng": 21.011689
}
},
"deaths": 0,
"cured": 0,
"infected": 37,
"type": "ACTIVE",
"created_timestamp": 1583671445,
"last_modified_timestamp": 1584389018
}
]
}
}
}
SCHEMA
type cityEntry {
id: Int!
name: String!
deaths: Int!
cured: Int!
infected: Int!
type: String!
created_timestamp: Int!
last_modified_timestamp: Int!
location: LatLng!
list: List
}
type LatLng {
lat: Float!
lng: Float!
}
type List {
title: String!
cities: [cityEntry] @relation
}
type Query {
items: [cityEntry!]
allCities: [cityEntry!]
cityEntriesByDeathFlag(deaths: Int!): [cityEntry!]
cityEntriesByCuredFlag(cured: Int!): [cityEntry!]
allLists: [List!]
}
Every time the mutation runs it creates new duplicates.
What is the best way to update the list within a single mutation?
My apologies for the delay; I wasn't sure exactly what the missing information was, hence the comment first :).
The Schema
An example of a part of a schema that has arguments:
type Mutation {
register(email: String!, password: String!): Account! @resolver
login(email: String!, password: String!): String! @resolver
}
When such a schema is imported in FaunaDB there will be placeholder functions provided.
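For instance, the generated placeholder for the register resolver looks roughly like this (a sketch; the exact shape and message FaunaDB generates may differ):
Query(Lambda('_',
  Abort('... message saying this resolver still has to be implemented ...')
))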
The UDF parameters
As you can see, all the placeholder does is Abort with a message that the function still has to be implemented. The implementation starts with a Lambda that takes arguments, and those arguments have to match what you defined in the resolver:
Query(Lambda(['email', 'password'],
... function body ...
))
Using the arguments is done with Var, which means Var('email') or Var('password') in this case. For example, in my specific case we use the email that was passed in to get an account by email, and pass the password on to the Login function, which returns a secret. (The reason I use Select here is that the return value of a GraphQL resolver has to be a valid GraphQL result, e.g. plain JSON.)
Query(Lambda(['email', 'password'],
Select(
['secret'],
Login(Match(Index('accountsByEmail'), Var('email')), {
password: Var('password')
})
)
))
Calling the UDF resolver via GraphQL
Finally, how do we pass parameters when calling it? That should be clear from the GraphQL Playground, as it provides docs and autocompletion. For example, after my schema import the auto-generated docs show login taking email and password arguments and returning a String, which means we can call it as follows:
mutation CallLogin {
login (
email: "<some email>"
password: "<some pword>"
)
}
Bulk updates
For bulk updates, you can also pass a list of values to the User Defined Function (UDF). Let's say we would want to group a number of accounts together in a specific team via the UI and therefore want to update multiple accounts at the same time.
The mutation in our schema could look as follows (IDs in GraphQL are similar to Strings):
type Mutation { updateAccounts(accountRefs: [ID]): [ID]! @resolver }
We could then call the mutation by providing the IDs that we receive from FaunaDB (the string, not the Ref, in case you are mixing FQL and GraphQL; if you only use GraphQL, don't worry about it).
mutation {
updateAccounts(accountRefs: ["265317328423485952", "265317336075993600"] )
}
Just like before, we will have to fill in the User Defined Function that was generated by FaunaDB. A skeleton function that just takes in the array and returns it would look like:
Query(Lambda(['arr'],
Var('arr')
))
Some people might have seen an easier syntax and would be tempted to use this:
Query(Lambda(arr => arr))
However, this currently does not work with GraphQL when passing in arrays; it's a known issue that will be fixed.
The next step is to actually loop over the array. FQL is not declarative and draws inspiration from functional languages, which means you do that simply by using a Map or a Foreach:
Query(Lambda(["accountArray"],
Map(Var("accountArray"),
Lambda("account", Var("account")))
))
We now loop over the list but don't do anything with it yet, since we just return the account in the Map's body. Next we will update the account and set a 'teamName' value on it. For that we need the Update function, which takes a FaunaDB Reference. GraphQL sends us strings, not references, so we need to transform these ID strings into a reference with Ref as follows:
Ref(Collection('Account'), Var("account"))
If we put it all together we can add an extra attribute to a list of accounts ids as follows:
Query(Lambda(["accountArray"],
Map(Var("accountArray"),
Lambda("account",
Do(
Update(
Ref(Collection('Account'), Var("account")),
{ data: { teamName: "Awesome live-coders" } }
),
Var("account")
)
)
)
))
At the end of the Map, we just return the ID of the account again with Var("account") in order to return something that is plain JSON; otherwise we would be returning FaunaDB Refs, which are more than just JSON and will not be accepted by the GraphQL call.
Passing in more complex types
Sometimes you want to pass in more complex types. Let's say we have a simple todo schema.
type Todo {
title: String!
completed: Boolean!
}
And we want to set the completed value of a list of todos with specific titles to true. In the extended schema that FaunaDB generates we can see that there is a TodoInput type.
Seeing that extended schema you might think, "Hey, that's exactly what I need!", but you can't reference it when you write your own mutations, since that part of the schema doesn't exist yet at creation time. You therefore can't just write:
type Mutation { updateTodos(todos: [TodoInput]): Boolean! @resolver }
as importing that will be rejected with an error, because TodoInput is not defined in the schema you upload.
However, we can just add it to the schema ourselves. Fauna will accept that you already wrote it and not override it (make sure that you keep the required fields, else your generated 'createTodo' mutation won't work anymore).
type Todo {
title: String!
completed: Boolean!
}
input TodoInput {
title: String!
completed: Boolean!
}
type Mutation { updateTodos(todos: [TodoInput]): Boolean! @resolver }
Which means that I can now write:
mutation {
updateTodos(todos: [{title: "test", completed: true}])
}
and dive into the FQL function to do things with this input.
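For example, a rough sketch of the matching UDF that sets completed on each todo looked up by title; the todosByTitle index is an assumption here (it would need data.title as a term):
Query(Lambda(['todos'],
  Do(
    Map(Var('todos'),
      Lambda('todo',
        Update(
          Select(['ref'], Get(Match(Index('todosByTitle'), Select(['title'], Var('todo'))))),
          { data: { completed: Select(['completed'], Var('todo')) } }
        )
      )
    ),
    true
  )
))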
Or, if you want to include the ID along with the data, you can define a new input type:
input TodoUpdateInput {
id: ID!
title: String!
completed: Boolean!
}
type Mutation { updateTodos(todos: [TodoUpdateInput]): Boolean! @resolver }
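The UDF for this variant can build the Ref directly from the ID instead of going through an index; a sketch (the Todo collection name comes from the type above):
Query(Lambda(['todos'],
  Do(
    Map(Var('todos'),
      Lambda('todo',
        Update(
          Ref(Collection('Todo'), Select(['id'], Var('todo'))),
          {
            data: {
              title: Select(['title'], Var('todo')),
              completed: Select(['completed'], Var('todo'))
            }
          }
        )
      )
    ),
    true
  )
))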
Once you get the hang of it and want to learn more about FQL (that's a whole different topic), we are currently writing a series of articles, along with code; the first one appeared here: https://css-tricks.com/rethinking-twitter-as-a-serverless-app/ and is probably a good, gentle introduction.

Custom schema, interface, @fileByRelativePath and gatsby-image

I'm trying to get an interface working with the new @fileByRelativePath resolver extension, to stay compatible with v3.
I'm using Prismic for my content, and gatsby-source-prismic v2. I have two content types in Prismic, and created the interface to be able to more easily query and map over both for a home page index.
Here's the functioning (but with deprecated inferred resolvers) schema:
exports.createSchemaCustomization = ({ actions }) => {
const { createTypes } = actions
const typeDefs = `
interface indexPosts @nodeInterface {
id: ID!
uid: String!
data: Data!
type: String!
}
type Data {
title: Title!
date: Date!
featured: String!
featured_image: Featured_image!
body: Body!
}
type Title {
text: String!
}
type Featured_image {
localFile: File!
}
type Body {
html: String!
}
type PrismicGallery implements Node & indexPosts {
uid: String!
data: Data!
type: String!
}
type PrismicEssay implements Node & indexPosts {
uid: String!
data: Data!
type: String!
}
`
createTypes(typeDefs)
}
The problem comes after adding @fileByRelativePath to the Featured_image type definition. Doing so gives me an error during build:
"The "path" argument must be of type string. Received type undefined"
I'm unsure how to provide the necessary path argument, considering my images are third-party hosted. I'm trying to follow the brief guide at the end of this page and suspect the way to do it might be with a resolver or type builder and using 'source' to access the url field provided by both localFile and its parent, featured_image, but I can't figure it out!
I'm using gatsby-image and the childImageSharp convenience field to present the images, if that makes a difference at all!
I had exactly the same problem when I tried to use @fileByRelativePath. I managed to solve my problem by using @infer on the type that contained the File.
Try this:
type Featured_image @infer {
localFile: File!
}

Is there a more elegant way instead of writing lots of queries?

I'm building a small blog using GraphQL, Apollo Express and MongoDB with Mongoose.
Currently, articles are fetched by their IDs and visitors can browse an article with the id of let's say "123" here: example.com/articles/123
Instead, I would like to use slugs, so visitors can go to example.com/articles/same-article-as-above
My type definitions so far:
import { gql } from 'apollo-server-express';
export default gql`
extend type Query {
articles: [Article!]
article(id: ID!): Article
}
type Article {
id: ID!
slug: String!
title: String!
description: String!
text: String!
}
`;
I could just add another query:
articleBySlug(slug: String!): Article
This would work perfectly fine. However, this doesn't look very elegant to me and I feel like I am missing some basic understanding. Do I really have to add a new query to my resolvers each time I am trying to fetch an article by its title, text, description or whatever? I would end up with a lot of queries like "articleByTitle", "articleByDate", and so on. Can someone please give me a hint, an example or some best practices (or just confirm that I do have to add more and more queries☺)?
A common way to do this is to add all inputs to the same query, and make them optional:
export default gql`
extend type Query {
articles: [Article!]
article(id: ID, slug: String, date: String, search: String): Article
}
type Article {
id: ID!
slug: String!
title: String!
description: String!
text: String!
}
`;
Then, in the resolver just check that exactly one of id, slug or date is provided, and return an error if not.
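A minimal sketch of such a resolver, assuming a Mongoose Article model (the model and its import path here are just placeholders):
import { UserInputError } from 'apollo-server-express';
import Article from './models/article'; // hypothetical Mongoose model

export default {
  Query: {
    article: async (parent, { id, slug, date }) => {
      // Enforce "exactly one of id, slug, date" at the resolver level.
      const provided = [id, slug, date].filter(arg => arg != null);
      if (provided.length !== 1) {
        throw new UserInputError('Provide exactly one of: id, slug or date.');
      }
      if (id) return Article.findById(id);
      if (slug) return Article.findOne({ slug });
      return Article.findOne({ date });
    },
  },
};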
Another option is to use a search string similar to what Gmail uses (e.g. id:x before:2012-12-12) that you then parse in the resolver; a rough parsing sketch follows the schema below.
export default gql`
extend type Query {
articles: [Article!]
article(search: String): Article
}
type Article {
id: ID!
slug: String!
title: String!
description: String!
text: String!
}
`;
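A rough, library-agnostic sketch of parsing such a search string inside the resolver:
// Turns "id:123 before:2012-12-12 graphql basics" into
// { filters: { id: '123', before: '2012-12-12' }, text: 'graphql basics' }.
function parseSearch(search) {
  const filters = {};
  const freeText = [];
  for (const token of search.trim().split(/\s+/)) {
    const match = token.match(/^(\w+):(\S+)$/);
    if (match) {
      filters[match[1]] = match[2];
    } else {
      freeText.push(token);
    }
  }
  return { filters, text: freeText.join(' ') };
}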
A third option is to set up a separate search query that can return several types:
export default gql`
extend type Query {
articles: [Article!]
search(query: String!, type: SearchType): SearchResult
}
union SearchResult = Article | User
enum SearchType {
ARTICLE
USER
}
type Article {
id: ID!
slug: String!
title: String!
description: String!
text: String!
}
type User {
id: ID!
email: String!
name: String!
}
`;
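One caveat with the union approach: Apollo Server also needs a __resolveType for SearchResult so it can tell Articles and Users apart. A minimal sketch, assuming only User records carry an email field:
export default {
  SearchResult: {
    __resolveType(obj) {
      // Distinguish the concrete type by a field only one of them has.
      return obj.email ? 'User' : 'Article';
    },
  },
};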

Alias types in GraphQL Schema Definition Language

I have the following graphql schema definition in production today:
type BasketPrice {
amount: Int!
currency: String!
}
type BasketItem {
id: ID!
price: BasketPrice!
}
type Basket {
id: ID!
items: [BasketItem!]!
total: BasketPrice!
}
type Query {
basket(id: String!): Basket!
}
I'd like to rename BasketPrice to just Price; however, doing so would be a breaking change to the schema because clients may be referencing it in a fragment, e.g.
fragment Price on BasketPrice {
amount
currency
}
query Basket {
basket(id: "123") {
items {
price {
...Price
}
}
total {
...Price
}
}
}
I had hoped it would be possible to alias it for backwards compatibility, e.g.
type Price {
amount: Int!
currency: String!
}
# Remove after next release.
type alias BasketPrice = Price;
type BasketPrice {
amount: Int!
currency: String!
}
type BasketItem {
id: ID!
price: BasketPrice!
}
type Basket {
id: ID!
items: [BasketItem!]!
total: BasketPrice!
}
type Query {
basket(id: String!): Basket!
}
But this doesn't appear to be a feature. Is there a recommended way to safely rename a type in GraphQL without causing a breaking change?
There's no way to rename a type without it being a breaking change for the reasons you already specified. Renaming a type is a superficial change, not a functional one, so there's no practical reason to do this.
The best way to handle any breaking change to a schema is to expose the new schema on a different endpoint and then transition the clients to using the new endpoint, effectively implementing versioning for your API.
The only other way I can think of getting around this issue is to create new fields for any fields that utilize the old type, for example:
type BasketItem {
id: ID!
price: BasketPrice! # deprecated(reason: "Use itemPrice instead")
itemPrice: Price!
}
type Basket {
id: ID!
items: [BasketItem!]!
total: BasketPrice! # deprecated(reason: "Use basketTotal instead")
basketTotal: Price!
}
I want this too, and apparently we can't have it. Making sure names reflect actual semantics over time is very important for ongoing projects; it's a key part of documentation!
The best way I've found to do this is multi-step, and fairly labor intensive, but it can at least keep compatibility until a later time. It involves making input fields optional at the schema level and enforcing the "exactly one of them" requirement at the application level (because we don't have input unions). Say you start with input types like these:
input OldThing {
thingId: ID!
}
input Referee {
oldThing: OldThing!
}
Change it to something like this:
input OldThing {
thingId: ID!
}
input NewThing {
newId: ID!
}
input Referee {
oldThing: OldThing # deprecated(reason: "Use newThing instead")
newThing: NewThing
}
In practice, all old clients will keep working. You can update your handler code to always generate a NewThing, and then use a procedural field resolver to copy it into oldThing if asked for (depending on which framework you're using). On input, you can update the handler to always translate old to new on receipt and only use the new one in your code. You'll also have to return an error manually if neither of the elements is present.
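A rough sketch of that normalization step in the handler (names taken from the example above; how you surface the error depends on your framework):
// Accept either shape, require one of them, and use only the new shape downstream.
function normalizeReferee({ oldThing, newThing }) {
  if (!oldThing && !newThing) {
    throw new Error('Either oldThing or newThing must be provided.');
  }
  return newThing ? newThing : { newId: oldThing.thingId };
}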
At some point, clients will all be updated, and you can remove the deprecated version.
