Error Message: "Instance Data is not valid" whilst updating a GraphQL schema

I had an issue updating a GraphQL schema and got this message (in my case I was updating a FaunaDB schema). I couldn't find any reference to it online.

Although this is not the most straightforward or descriptive error message ever conceived, it simply means (in this case) that I had created records which would not fit my new schema: I had added a required field. Although I had deleted the records in the specific collection, I had not deleted the records which referenced that collection.

I received this error because I had entered a query with the same name as a query automatically created by Fauna (or perhaps created by a previous schema?).
This is the code that caused the error:
type User {
  uid: ID! #unique
}
type Query {
  user(uid: ID!): User
}
The solution was to rename the query:
type Query {
  findUser(uid: ID!): User
}
An alternative solution with FaunaDB is to override the schema rather than merely update it. This applies if the unwanted user() query is still present in your schema as a result of prior schema updates.
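For reference, a minimal sketch of an override via Fauna's GraphQL schema import endpoint over HTTP (the endpoint URL, the mode=override parameter, and the FAUNA_SECRET variable are assumptions based on Fauna's documented import API; check them against your region and docs):

import { readFileSync } from 'fs';

// mode=override replaces the existing GraphQL schema instead of merging into it.
fetch('https://graphql.fauna.com/import?mode=override', {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.FAUNA_SECRET}` },
  body: readFileSync('schema.gql'),
})
  .then((res) => res.text())
  .then(console.log);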

Related

Querying cache by fields other than ID?

I'm integrating GraphQL into my application and trying to figure out if this scenario is possible.
I have a schema for a Record type and a query that returns a list of Records from my service. Schema looks something like:
type Query {
  records(someQueryParam: String!): [Record]!
}
type Record {
  id: String!
  otherId: String!
  <other fields here>
}
There are some places in my application where I need to access a Record using the otherId value (because that's all I have access to). Currently, I do that with a mapping of otherId to id values that's populated after all the Records are downloaded. I use the map to go from otherId to id, and then use the id value to index into the collection of Record objects, to avoid iterating through the whole thing. (This collection used to be populated using a separate REST call, before I started using Apollo GQL.)
I'd like to remove my dependency on this mapping if possible. Since the Records are all in the Apollo cache once they've been loaded, I'd like to just query the cache for the Record in question using the otherId value. My service doesn't currently have that kind of lookup, so I don't have an existing query that I can cache in parallel. (i.e. there's no getIdFromOtherId).
tl;dr: Can I query my Apollo cache using something other than the id of an object?
You can't query the cache by otherId for the same reason you don't want to have to search through the record set to find the matching item -- the id is part of the item's key, and without the key Apollo can't directly access the item. Apollo's default cache is a key-value store, not a database that you can query however you like.
You probably need to build a query into your data source that maps between otherId and id; searching through the entire record set for your item would obviously be horribly inefficient at scale.
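To make the key-value point concrete, here is a small sketch of a direct cache lookup with Apollo Client 3's readFragment. The Record:${id} key shape assumes the default normalization, and client and knownId are placeholders for your ApolloClient instance and an already-known Record id:

import { gql } from '@apollo/client';

// Works: the normalized cache key is built from __typename plus id.
const record = client.readFragment({
  id: `Record:${knownId}`,
  fragment: gql`
    fragment RecordParts on Record {
      id
      otherId
    }
  `,
});
// otherId is just a field stored on the value, not part of the key,
// so there is no equivalent lookup by otherId.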

Auto-update of apollo client cache after mutation not affecting existing queries

I have a mutation (UploadTransaction) returning a list of objects of a type named Transaction.
#import "TransactionFields.gql"
mutation UploadTransaction($files: [Upload!]!) {
  uploadFile(files: $files) {
    transactions {
      ...TransactionFields
    }
  }
}
A Transaction returned from the backend (graphene) has id and __typename fields, so it should automatically update the corresponding Transaction in the cache. In Chrome dev tools for Apollo, I can see the new transactions.
I also have a query GetTransactions fetching all Transaction objects.
#import "TransactionFields.gql"
query GetTransactions {
  transactions {
    ...TransactionFields
  }
}
However, I don't see the newly added Transactions being returned by the query. During the initial load, Apollo Client loaded 292 transactions, which it shows under ROOT_QUERY, and it keeps returning the same 292 transactions. The UploadTransaction mutation adds new objects of type Transaction to the cache in dev tools without affecting ROOT_QUERY in dev tools or my query in code.
TransactionFields.gql is
fragment TransactionFields on Transaction {
  id
  timestamp
  description
  amount
  category {
    id
    name
  }
  currency
}
Any idea what I am doing wrong? I am new to Apollo Client and GraphQL.
From the docs:
If a mutation updates a single existing entity, Apollo Client can automatically update that entity's value in its cache when the mutation returns. To do so, the mutation must return the id of the modified entity, along with the values of the fields that were modified. Conveniently, mutations do this by default in Apollo Client...
If a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
If you have a query that returns a list of entities (for example, users) and you then create or delete a user, Apollo has no way of knowing that the list should be updated to reflect your mutation. The reason for this is twofold:
There's no way for Apollo to know what a mutation is actually doing. All it knows is what fields you are requesting and what arguments you are passing those fields. We might assume that a mutation that includes words like "insert" or "create" is inserting something on the backend but that's not a given.
There's no way to know that inserting, deleting or updating a user should update a particular query. Your query might be for all users with the name "Bob" -- if you create a user with the name "Susan", the query shouldn't be updated to reflect that addition. Similarly, if a mutation updates a user, the query might need to be updated to reflect the change. Whether it should or not ultimately boils down to business rules that only your server knows about.
So, in order to update the cache, you have two options:
Trigger a refetch of the relevant queries. You can do this by either passing a refetchQueries option to your useMutation hook, or by manually calling refetch on those queries. Since this requires one or more additional requests to your server, it's the slower and more expensive option but can be the right option when A) you don't want to inject a bunch of business logic into your client or B) the updates to the cache are complicated and extensive.
Provide an update function to your useMutation hook that tells Apollo how to update the cache based on the results of the mutation. This saves you from making any additional requests, but does mean you have to duplicate some business logic between your server and your client.
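For the first option, a minimal refetchQueries sketch (the ADD_TODO and GET_TODOS document names are illustrative, matching the docs example below):

import { useMutation } from '@apollo/client';

const [addTodo] = useMutation(ADD_TODO, {
  // After the mutation completes, re-run GET_TODOS against the server and
  // replace whatever the cache currently holds for that query.
  refetchQueries: [{ query: GET_TODOS }],
});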
The example of using update from the docs:
update(cache, { data: { addTodo } }) {
  // Read the todos currently cached for the GET_TODOS query...
  const { todos } = cache.readQuery({ query: GET_TODOS });
  // ...and write the list back with the newly created todo appended.
  cache.writeQuery({
    query: GET_TODOS,
    data: { todos: todos.concat([addTodo]) },
  });
}
Read the docs for additional details.
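Applied to the question's uploadFile mutation, a rough sketch of the second option might look like this (UPLOAD_TRANSACTION and GET_TRANSACTIONS are assumed to be the parsed documents for the operations shown above):

import { useMutation } from '@apollo/client';

const [uploadTransaction] = useMutation(UPLOAD_TRANSACTION, {
  update(cache, { data: { uploadFile } }) {
    // Read what GetTransactions currently has in the cache.
    const existing = cache.readQuery({ query: GET_TRANSACTIONS });
    if (!existing) return; // the query has not been run or cached yet
    // Append the transactions returned by the mutation and write the list back.
    cache.writeQuery({
      query: GET_TRANSACTIONS,
      data: { transactions: [...existing.transactions, ...uploadFile.transactions] },
    });
  },
});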

Update Apollo cache after object creation

What are all the different ways of updating the Apollo InMemoryCache after a mutation? From the docs, I can see:
Id-based updates which Apollo performs automatically
- Happens for single updates to existing objects only.
- Requires an id field which uniquely identifies each object, or the cache must be configured with a dataIdFromObject function which provides a unique identifier.
"Manual" cache updates via update functions
- Required for object creation, deletion, or updates of multiple objects.
- Involves calling cache.writeQuery with details including which query should be affected and how the cache should be changed.
Passing the refetchQueries option to the useMutation hook
- The calling code says which queries should be re-fetched from the API, Apollo does the fetching, and the results replace whatever is in the cache for the given queries.
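For reference, the dataIdFromObject function mentioned in the first point above is passed to the cache constructor; a minimal Apollo Client 3 sketch (the uuid field name is illustrative):

import { InMemoryCache, defaultDataIdFromObject } from '@apollo/client';

const cache = new InMemoryCache({
  // Use uuid as the cache key when present, otherwise fall back to the
  // default __typename:id key.
  dataIdFromObject(object) {
    return object.uuid
      ? `${object.__typename}:${object.uuid}`
      : defaultDataIdFromObject(object);
  },
});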
Are there other ways that I've missed, or have I misunderstood anything about the above methods?
I am confused because I've been reading the code of a project which uses Apollo for all kinds of mutations, including creations and deletions, but I don't see any calls to cache.writeQuery, nor any usage of refetchQueries. How does the cache get updated after creations and deletions without either of those?
In my own limited experience with Apollo, the cache is not automatically updated after an object creation or deletion, not even if I define dataIdFromObject. I have to update the cache myself by writing update functions.
So I'm wondering if there is some secret config I've missed to make Apollo handle it for me.
The only way to create or delete a node and have Apollo automatically update the cache to reflect the change is to have the mutation return the parent of whatever field contains the updated list. For example, let's say we have a schema like this:
type Query {
  me: User
}
type User {
  id: ID!
  posts: [Post!]!
}
type Post {
  id: ID!
  body: String!
}
By convention, if we had a mutation to add a new post, the mutation field would return the created post.
type Mutation {
  writePost(body: String!): Post!
}
However, we could have it return the logged in User instead (the same thing the me field returns):
type Mutation {
  writePost(body: String!): User!
}
By doing so, we enable the client to send a mutation like:
mutation WritePost($body: String!) {
  writePost(body: $body) {
    id
    posts {
      id
      body
    }
  }
}
Here Apollo will not only create or update the cache for all the returned posts, but it will also update the returned User object, including the list of posts.
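In client code, this means the mutation needs no update function or refetchQueries at all (a sketch; WRITE_POST is assumed to be the parsed mutation document above):

const [writePost] = useMutation(WRITE_POST);
// The returned User (including its posts list) is normalized into the cache,
// so any active query that reads me { posts { ... } } re-renders automatically.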
So why is this not commonly done? Why does Apollo's documentation suggest using writeQuery when adding or deleting nodes?
The above will work fine when your schema is simple and you're working with a relatively small amount of data. However, returning the entire parent node, including all its relations, can be noticeably slower and more resource-intensive once you're dealing with more data. Additionally, in many apps a single mutation could impact multiple queries inside the cache. The same node could be returned by any number of fields in the schema, and even the same field could be part of a number of different queries that utilize different filters, sort parameters, etc.
These factors make it unlikely that you'll want to implement this pattern in production but there certainly are use cases where it may be a valid option.

How to query Apollo GraphQL server with a specific context?

I am writing an Apollo GraphQL API that returns product information from various brands. A simplified version of the schema looks like this:
type Query {
  products: [Product]!
}
type Product {
  name: String!
  brand: String!
}
I want to be able to query products from a specific brand. Normally this would be simple to achieve by adding a brand argument to the products query:
type Query {
  products(brand: String!): [Product]!
}
However, I have multiple GraphQL clients in different apps and each is associated with a specific brand so it seems redundant to always pass the same brand argument in every query. I also have many other objects in my schema (orders, transactions, etc.) that are specific to a brand and would require a brand argument.
Furthermore, my resolvers need to query a different API depending on the brand so even objects in my schema such as User, which are conceptually unrelated to a brand, would potentially need a brand argument so that the resolver knows which API to fetch from.
Is there a way to set the brand context for each client and have this context received by the server? Or maybe there is a better way to achieve this brand separation?
I would probably make Brand a first-class type in your GraphQL schema. That doesn't save you from having to qualify many of the queries you describe with a specific brand, but it at least gives you a common place to start from. Then you'd wind up with an API somewhat like:
type Query {
  brand(name: String!): Brand
  allProducts: [Product!]!
}
type Brand {
  name: String!
  products: [Product!]!
  # users: [User!]!
}
type Product {
  name: String!
  brand: Brand! # typical, but not important to your question
}
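A sketch of resolvers for this shape, where apiForBrand is an illustrative helper (not part of the original question) that picks the right upstream API for a given brand:

// Illustrative only: map a brand name to that brand's upstream API client.
const apiForBrand = (name) => ({
  fetchProducts: () =>
    fetch(`https://${name}.example.com/api/products`).then((r) => r.json()),
});

const resolvers = {
  Query: {
    brand: (_parent, { name }) => ({ name }),
  },
  Brand: {
    // Every field under Brand resolves against that brand's own backend.
    products: (brand) => apiForBrand(brand.name).fetchProducts(),
  },
};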
If the differences between kinds of brands are visible at the API layer, you also could consider using a GraphQL interface to describe the set of fields that all brands have, but actually return a more specific type from the resolver.
The way you describe your application, it could also make sense to run one copy of the service for each brand, each with a different GraphQL endpoint. That would let you straightforwardly parameterize the per-brand internal object configuration and make the "current brand" be process-global context. The big constraints here are that, at a GraphQL level, one brand's objects can never refer to another, and if you have a lot of brands, you need some good way to run a lot of servers.
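And a sketch of the one-deployment-per-brand variant, where the current brand is read from the environment once and exposed to every resolver through context (the BRAND variable name is an assumption):

import { ApolloServer } from 'apollo-server';

const server = new ApolloServer({
  typeDefs,  // schema as defined above
  resolvers, // resolvers as sketched above
  // Process-global brand: each deployment is started with its own BRAND value,
  // so resolvers can pick the right upstream API without a per-query argument.
  context: () => ({ brand: process.env.BRAND }),
});

server.listen();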

Apollo graphql remote schema extending is dependent on remote fields

We are able to extend our remote schema, but with one major caveat: the new field cannot be queried for on its own; it must be included with at least one field from the remote. Is it possible to query for just the extended field?
In the example below, I have extended "name" to include the field "catsName." If I query for "first," the query works. If I query for "catsName" and "first," the query works. If I query for just "catsName," it returns an internal server error with status code 400.
Note:
- When we extend non-remote fields, we do not have this issue.
- Our remote GraphQL engine uses Absinthe (Erlang/Elixir). We use Apollo locally. Our goal is to support the legacy Absinthe GraphQL implementation.
Working query:
query {
  user {
    profile {
      personal {
        name {        # extended type
          catsName    # new field
          first       # original field
        }
      }
    }
  }
}
Non-working query:
query {
  user {
    profile {
      personal {
        name {        # extended type
          catsName    # new field
        }
      }
    }
  }
}
Error:
"message": "Field \"name\" of type \"UserPersonalName\" must have a
selection of subfields. Did you mean \"name { ... }\"?"
After further research we determined that the reason for this is that user, profile, personal, and name exist on the remote server, while the catsName extension exists only locally. If we query for just catsName, our engine has no remote fields to request under name and therefore returns an error. If we include at least one remote field in the query, the local engine knows to fetch from the remote and is able to return all of the data.
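If the stitching is done with graphql-tools-style mergeSchemas, one way to express that dependency is a fragment hint on the local resolver, so the gateway always asks the remote for at least one real field. This is a sketch against the classic (v4-era) mergeSchemas resolvers API, using the field names from the question; verify it against the stitching library you actually use:

import { mergeSchemas } from 'graphql-tools';

const schema = mergeSchemas({
  schemas: [
    remoteSchema, // executable schema built from the remote Absinthe endpoint
    `extend type UserPersonalName { catsName: String }`,
  ],
  resolvers: {
    UserPersonalName: {
      catsName: {
        // Always fetch `first` from the remote, even if the client only
        // selects the locally-added catsName field.
        fragment: `... on UserPersonalName { first }`,
        resolve: (name) => `${name.first}'s cat`, // illustrative value
      },
    },
  },
});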
