How to query data from 2 APIs - graphql

I have set up a Gatsby client that connects to Contentful using the gatsby-source-contentful plugin. I have also connected a simple custom API using the gatsby-source-graphql plugin.
When I run the dev server I am able to query my pages from Contentful in the playground.
I am also able to query my custom API through the playground.
So both APIs work and are connected to Gatsby properly.
I want to programmatically generate a bunch of pages that have dynamic sections (references) which an author can add and order as she wishes.
I achieve this using the ...on Node connection together with fragments I define within each dynamic section. It all works out well so far.
My actual problem:
Now I have a dynamic section which is a Joblist. This component needs to get data out of the Contentful API, since Contentful stores values like latitude and longitude: the author is free to set a point on a map and a radius. I successfully get this information out of Contentful using a fragment inside the component:
export const query = graphql`
  fragment JoblistModule on ContentfulJoblisteMitAdresse {
    ... on ContentfulJoblisteMitAdresse {
      contentful_id
      radius
      geo {
        lon
        lat
      }
    }
  }`
But how can I pass this information into another query that fetches the job data from my custom API? If I understand Gatsby correctly, I somehow have to connect these two APIs together? Or can I run another query that takes these values as variables? How and where would I achieve this?
I could not find any approach inside gatsby-node.js (since passed-in context can only be used as variables inside a query), nor in the template file (since I can run only one query at a time), nor in the component itself (since components only accept a StaticQuery).
I don't know where my misunderstanding is, so I would appreciate any hints, help, or examples.

Since your custom API is a GraphQL API, you can use delegateToSchema from the graphql-tools package to accomplish this.
You will need to create a resolver using Gatsby's setFieldsOnGraphQLNodeType API. Within this resolver, your resolve function will call delegateToSchema.
We had a similar problem: our blog posts have an "author" field which contains an ID. We then do a GraphQL query to another system to look up author info by that ID.
// In gatsby-node.js. delegateToSchema comes from graphql-tools;
// authorsSchema and the person type come from loading the remote
// authors schema, which is not shown here.
const { delegateToSchema } = require('graphql-tools')

exports.setFieldsOnGraphQLNodeType = ({ type }) => {
  // Only add the field to Contentful blog post nodes
  if (type.name !== 'ContentfulBlogPost') {
    return {}
  }
  return {
    remoteAuthor: {
      type: person,
      args: {},
      resolve: async (source, fieldArgs, context, info) => {
        if (!source.author) {
          return null
        }
        // runs the selection on the remote schema
        // https://github.com/gatsbyjs/gatsby/issues/14517
        return delegateToSchema({
          schema: authorsSchema,
          operation: 'query',
          fieldName: 'Person',
          args: { id: source.author },
          context,
          info,
        })
      },
    },
  }
}
This adds a 'remoteAuthor' field to our blog post type, and whenever it gets queried, those selections are proxied to the remote schema where the person type exists.
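With that field in place, a page query can select the remote data alongside Contentful fields. A minimal sketch (the query shape is illustrative; only remoteAuthor comes from the code above):

import { graphql } from 'gatsby'

// Page template query mixing Contentful data with the proxied field.
export const query = graphql`
  query BlogPost($id: String!) {
    contentfulBlogPost(id: { eq: $id }) {
      title
      remoteAuthor {
        name
      }
    }
  }
`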

Related

Amplify and AppSync not updating data on mutation from multiple sources

I have been attempting to interact with AppSync/GraphQL from:
Lambda - Create (works), Update (does not change data)
Angular - Create/Update subscription received, but object is null
Angular - Spoof update (does not change data)
AppSync Console - Spoof update (does not change data)
Post:
mutation MyMutation {
  updateAsset(input: {
    id: "b34d3aa3-fbc4-48b5-acba-xxxxxxxxxxx",
    owner: "51b691a5-d088-4ac0-9f46-xxxxxxxxxxxx",
    description: "AppSync"
  }) {
    id
    owner
    description
  }
}
Response:
{
  "data": {
    "updateAsset": {
      "id": "b34d3aa3-fbc4-48b5-acba-xxxxxxxxxx",
      "owner": "51b691a5-d088-4ac0-9f46-xxxxxxxxxxx",
      "description": "Edit Edit from AppSync"
    }
  }
}
The version in DynamoDB gets auto-incremented each time I send the query. But the description remains the same as originally set.
Auth rules on the schema -
@auth(
  rules: [
    { allow: public, provider: apiKey, operations: [create, update, read] },
    { allow: private, provider: userPools, operations: [read, create, update, delete] },
    { allow: groups, groups: ["admin"], operations: [read, create, update, delete] }
  ])
For now, on the frontend, I'm cheating and just re-requesting the data after I receive a null subscription event. But as I've stated, I only seem to be able to set any of the data once, and then I can't update it.
Any insight appreciated.
Update: I even decided to try a DeleteAsset statement, and it won't delete, but it revs the version.
I guess maybe the next sane thing to do is to either stand up a new environment or attempt to stand this up in a fresh account.
Update: I have a working theory this has something to do with conflict detection / rejection. When I try to delete via AppSync directly, I get a rejection. From Angular I just get the record back with no delete.
After adding additional Auth on the API, I remember it asked about conflict resolution and I chose "AutoMerge". Doc on this at https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html
After further review I'll note what happened, in the hope it helps someone else.
Ran amplify add api.
This walked me through a wizard. I used the existing Cognito user pool, since I had not foreseen I would need to call this API from an S3 trigger (Lambda function) later.
Now needing to grant apiKey (or preferably IAM) access from the Lambda to the AppSync/GraphQL API, I ran amplify update api and added the additional auth setting.
This asked me how I wanted to resolve conflicts, since more than one source can edit the data. Because I just hit "agree" on terms and conditions and rarely read the manual, I selected 'AutoMerge' .. sounds nice, right?
Now, if you read the fine print, edits made to a table will be rejected, as we now have this _version (Int) that needs to be passed in so AutoMerge can decide whether it wants to take your change.
It also creates an extra DataStore table in DynamoDB tracking versions. So in order to properly deal with this strategy you'd need to extend your schema to include _version, not just id or whatever primary key you opted to use.
Also note: if you delete, it sets a _deleted Bool to true. Such records are actually still returned to the UI, so now your initial query needs to filter out (or not) deleted records.
I determined I also didn't need this. I don't want to use DataStore (at least not now), so I found the offender in transform.conf.json within the API. After executing amplify update api (GraphQL), I chose 'Disable DataStore for entire API' and it got rid of the ConflictHandler and ConflictDetection.
This was also agitating my Angular 11 subscription to Create/Update, as the added values this created broke the expected model. Not to mention the event coming back was null, due to nothing changing.
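For reference, the conflict-resolution setting lives in transform.conf.json; the block the wizard adds looks roughly like this (a sketch from memory, so treat the exact keys as an assumption):

{
  "ResolverConfig": {
    "project": {
      "ConflictHandler": "AUTOMERGE",
      "ConflictDetection": "VERSION"
    }
  }
}

Removing that ResolverConfig block (or choosing 'Disable DataStore for entire API' in the wizard) is what turns conflict detection off.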
Great information here, Mark. Thanks for the write up and updates.
I was playing around with this and with the Auto Merge conflict resolution strategy I was able to post an update using a GraphQL mutation by sending the current _version member along.
This call:
await API.graphql(
  graphqlOperation(updateAsset, {
    input: {
      id: assetToUpdate.id,
      name: "Updated name",
      _version: assetToUpdate._version
    }
  })
);
This properly updates, contacts AppSync, and propagates the changes to DynamoDB/DataStore. Using the current version tells AppSync that we are up to date and able to edit the content. AppSync then manages/increments _version, _lastChangedAt, etc.
Adding _version to my mutation worked very well.
API.graphql({
  query: yourQuery,
  variables: {
    input: {
      id: 'your-id',
      ...
      _version: version,
    },
  },
});
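The practical pattern is read-then-write: fetch the record's current _version, then send it back with the mutation so AutoMerge accepts the edit. A sketch using the generated getAsset/updateAsset operations (assumed to exist for an Asset @model):

// Read the latest _version first, then include it in the update input.
const { data } = await API.graphql(
  graphqlOperation(getAsset, { id: 'your-id' })
)
await API.graphql(
  graphqlOperation(updateAsset, {
    input: {
      id: data.getAsset.id,
      description: 'new description',
      _version: data.getAsset._version,
    },
  })
)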

How to separate logic when updating the Apollo cache that is used as a global store?

Using the Apollo cache as a global store, for remote and local data, is very convenient.
However, while I've never used Redux, I think the most important thing about it is that it implements Flux: an event-driven architecture in the front end that separates logic and ensures separation of concerns.
I don't know how to implement that with Apollo. The docs say:
When a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
Adding an update function in one part of the application that handles all cache updates, by updating queries and/or fragments for all the other parts of the application, is exactly what we want to avoid in a Flux / event-driven architecture.
To illustrate this, let me give a single simple example. Here we have (at least) 3 linked components:
1. InboxCount
Component that shows the number of Inbox items in the SideNav
query getInboxCount {
  inbox {
    id
    count
  }
}
2. Inbox list items
Component that displays items in the Inbox page
query getInbox {
  inbox {
    id
    items {
      ...ItemPreview
      ...ItemDetail
    }
  }
}
Both of those components read data from those GQL queries via auto-generated hooks, e.g. const { data, loading } = useGetInboxItemsQuery().
3. AddItem
Component that creates a new item. Because it creates a new entity, I need to manually update the cache. So I am forced to write
(pseudo-code)
const [addItem, { loading }] = useCreateItemMutation({
  update(cache, { data }) {
    const cachedData = cache.readQuery<GetInboxItemsQuery>({
      query: GetInboxItemsDocument,
    })
    if (cachedData?.inbox) {
      // 1. Update items list GetInboxItemsQuery
      const newItems = cachedData.inbox.items.concat(data.items)
      cache.writeQuery({
        query: GetInboxItemsDocument,
        data: {
          inbox: {
            id: 'me',
            __typename: 'Inbox',
            items: newItems,
          },
        },
      })
      // 2. Update another query wrapped into another reusable method, here
      setInboxCount(cache, newItems.length)
    }
  },
})
Here, my AddItem component must be aware of all the other queries / fragments declared in my application. 😭 Moreover, as it's quite verbose, complexity increases very fast in the update method, especially when multiple lists / queries should be updated, like here.
Does anyone have recommendations for implementing more independent components? Am I wrong in how I created my queries?
The unfortunate truth about update is that it trades simplicity for performance. A truly "dumb" client would only receive data from the server and render it, never manipulating it. By instructing Apollo how to modify our cache after a mutation, we're inevitably duplicating the business logic that already exists on our server. The only way to avoid this is to either:
Have the mutation return a larger section of the graph. For example, if a user creates a post, instead of returning the created post, return the complete user object, including all of the user's posts.
Refetch the affected queries, e.g. via the refetchQueries option (see the sketch below).
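A minimal sketch of that refetch approach, reusing the question's generated hook and query names:

// No manual cache surgery: refetch every query the mutation affects.
const [addItem] = useCreateItemMutation({
  refetchQueries: ['getInbox', 'getInboxCount'],
})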
Of course, often neither approach is particularly desirable and we opt for injecting business logic into our client apps instead.
Separating this business logic could be as simple as keeping your update functions in a separate file and importing them as needed. This way, at least you can test the update logic separately. You may also prefer a more elegant solution like utilizing a Link. apollo-link-watched-mutation is a good example of a Link that lets you separate the update logic from your components. It also solves the issue of having to keep track of query variables in order to perform those updates.
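For illustration, a minimal sketch of that separation (the file layout and the createItem result field are assumptions):

// cacheUpdates.js - reusable, individually testable cache-update helpers
import { GetInboxItemsDocument } from './generated/graphql'

export function addItemToInbox(cache, newItem) {
  const cached = cache.readQuery({ query: GetInboxItemsDocument })
  if (!cached?.inbox) return
  cache.writeQuery({
    query: GetInboxItemsDocument,
    data: {
      inbox: { ...cached.inbox, items: cached.inbox.items.concat(newItem) },
    },
  })
}

// AddItem.tsx - the component no longer knows which queries are affected
const [addItem] = useCreateItemMutation({
  update: (cache, { data }) => addItemToInbox(cache, data.createItem),
})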

React Apollo - multiple mutations

I'm using react-apollo#2.5.6
I have a component; when you click on it, it will, based on a "selected" state, issue either an add or a remove operation.
Currently I'm doing this to have two mutation functions injected into my component. Is that the correct way to do it? Can I just use one Mutation component (HOC) instead of multiple?
<Mutation mutation={ADD_STUFF}>
  {(addStuff) => (
    <Mutation mutation={REMOVE_STUFF}>
      {(removeStuff) => {
And later in the wrapped component, I will do something like this:
onClick={(e) => {
  e.preventDefault()
  const input = {
    variables: {
      userId: user.id,
      stuffId: stuff.id,
    },
  }
  // Based on selected state, I will call either add or remove
  if (isSelected) {
    removeStuff(input)
  } else {
    addStuff(input)
  }
}}
Thanks
Everything is possible but usually costs time and money ;) ... in this case simplicity, readability, and manageability.
1st solution
A common mutation, e.g. named 'change', with a changeType parameter (see the sketch below).
Of course that requires an API change - you need a new resolver.
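A rough sketch of the combined mutation document (all names here are made up):

import gql from 'graphql-tag'

// One mutation; the server decides what to do from the changeType argument.
const CHANGE_STUFF = gql`
  mutation Change($userId: ID!, $stuffId: ID!, $changeType: ChangeType!) {
    change(userId: $userId, stuffId: $stuffId, changeType: $changeType) {
      id
    }
  }
`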
2nd solution
Using graphql-tag you can construct any query from a string. Take inspiration from this answer - with the 'classic graphql HOC' pattern.
This solution doesn't require an API change.
I think using two different Mutation components does not make sense. If I understand correctly, there are two ways to solve your problem:
Using the Apollo Client client.mutate function to run the mutation manually, setting the mutation and variables properties based on the new state. To access the client in the current component, you need to pass the client along from the parent component where it was created down to the child components where the mutation takes place.
Using a single Mutation component inside the render method of your component and setting the mutation and variables attributes based on the state variable (sketched below).
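For example (document and prop names follow the question; the button is illustrative):

// One <Mutation> component; the document it runs is chosen from state.
<Mutation mutation={isSelected ? REMOVE_STUFF : ADD_STUFF}>
  {(mutate) => (
    <button
      onClick={(e) => {
        e.preventDefault()
        mutate({ variables: { userId: user.id, stuffId: stuff.id } })
      }}
    >
      {isSelected ? 'Remove' : 'Add'}
    </button>
  )}
</Mutation>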
The approach you are using works, as you said, but to me it looks like you are delegating logic to the UI that should be handled by the underlying service, based on the isSelected input.
I think you should create a single mutation covering both ADD_STUFF and REMOVE_STUFF: I would create an ADD_OR_REMOVE_STUFF mutation and choose the add or remove behavior in the resolver (see the sketch below).
Having one mutation is easier to maintain/expand/understand. If the logic requires something besides add/remove, for example if you have to choose between add/remove/update/verify/transform, would you nest 5 mutations?
In that case the single mutation could be named MULTI_HANDLE_STUFF, and only that one mutation would be called from the UI.
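For instance, a resolver along these lines (the service calls are hypothetical):

// Server side: one resolver decides between add and remove, so the UI
// no longer needs to branch on isSelected.
const resolvers = {
  Mutation: {
    addOrRemoveStuff: async (_, { userId, stuffId }, { dataSources }) => {
      const isSelected = await dataSources.stuff.isSelected(userId, stuffId)
      return isSelected
        ? dataSources.stuff.remove(userId, stuffId)
        : dataSources.stuff.add(userId, stuffId)
    },
  },
}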

Apollo Client: can apollo-link-rest resolve relations between endpoints?

The REST API that I have to use provides data over multiple endpoints. The objects in the results might have relations that are not resolved directly by the API; rather, it provides ids that point to the actual resources.
Example:
For simplicity's sake let's say a Person can own multiple Books.
Now the api/person/{i} endpoint returns this:
{ id: 1, name: "Phil", books: [1, 5, 17, 31] }
The api/book/{i} endpoint returns this (note that author might be a relation again):
{ id: 5, title: "SPRINT", author: 123 }
Is there any way I can teach Apollo Client to resolve those endpoints so that I can write the following (or a similar) query:
query fetchBooksOfUser($id: ID) {
  person(id: $id) {
    name,
    books {
      title
    }
  }
}
I didn't try it (yet) in one query, but it should be possible.
Read the docs starting from this.
At the beginning I would try something like:
query fetchBooksOfUser($id: ID) {
  person(id: $id) @rest(type: "Person", path: "api/person/{args.id}") {
    name,
    books @rest(type: "Book", path: "api/book/{data.person.books.id}") {
      id,
      title
    }
  }
}
... but it probably won't work - it's probably not smart enough to work with arrays.
UPDATE: See this note for a similar example, but one using a single, common parent-resolved param. In your case we have partially resolved books as arrays of objects with an id. I don't know how to use these ids to resolve the missing fields on the same 'tree' level.
Other possibility - make related subrequests/subqueries (somehow) in the Person type patcher. Should be possible.
Does this really need to be one query? You can provide ids to child containers, each of them running its own query when needed.
UPDATE: Apollo will take care of batching (not for REST, and not for all GraphQL servers - read the docs).
It's handy to construct one query, but Apollo will cache it, normalizing the response by types - the data will be stored separately. Using one query keeps you in the overfetching camp, or template thinking (collect all possible data before one-step rendering).
React thinking keeps your data and view decomposed, used when needed, more specialized, etc.
A <Person/> container will query for the data needed to render itself plus the list of child-needed ids. Each <Book/> will query for its own data using the passed id.
As an alternative, you could set up your own GraphQL back-end as an intermediary between your front-end and the REST API you're planning to use.
It's fairly easy to implement REST APIs as data sources in GraphQL using Apollo Server and a package such as apollo-datasource-rest, which is maintained by the authors behind Apollo Server.
It would also allow you to scale if you ever have to use other data sources (DBs, 3rd-party APIs, etc.) and would give you full control over exactly what data your queries return.
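A minimal sketch of that approach against endpoints shaped like the ones above (the class, base URL, and schema names are assumptions):

const { RESTDataSource } = require('apollo-datasource-rest')

// Wraps the REST API; Apollo Server injects an instance into resolver context.
class LibraryAPI extends RESTDataSource {
  constructor() {
    super()
    this.baseURL = 'https://example.com/api/' // hypothetical base URL
  }
  getPerson(id) { return this.get(`person/${id}`) }
  getBook(id) { return this.get(`book/${id}`) }
}

const resolvers = {
  Query: {
    person: (_, { id }, { dataSources }) => dataSources.libraryAPI.getPerson(id),
  },
  Person: {
    // person.books is an array of ids; fan out and resolve each one.
    books: (person, _, { dataSources }) =>
      Promise.all(person.books.map((id) => dataSources.libraryAPI.getBook(id))),
  },
}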

What is the point of naming queries and mutations in GraphQL?

Pardon the naive question, but I've looked all over for the answer and all I've found is either vague or makes no sense to me. Take this example from the GraphQL spec:
query getZuckProfile($devicePicSize: Int) {
  user(id: 4) {
    id
    name
    profilePic(size: $devicePicSize)
  }
}
What is the point of naming this query getZuckProfile? I've seen something about GraphQL documents containing multiple operations. Does naming queries affect the returned data somehow? I'd test this out myself, but I don't have a server and dataset I can easily play with to experiment. But it would be good if something in some document somewhere could clarify this--thus far all of the examples are super simple single queries, or are queries that are named but that don't explain why they are (other than "here's a cool thing you can do.") What benefits do I get from naming queries that I don't have when I send a single, anonymous query per request?
Also, regarding mutations, I see in the spec:
mutation setName {
  setName(name: "Zuck") {
    newName
  }
}
In this case, you're specifying setName twice. Why? I get that one of these is the field name of the mutation and is needed to match it to the back-end schema, but why not:
mutation {
  setName(name: "Zuck") {
    ...
What benefit do I get from specifying the same name twice? I get that the first is likely arbitrary, but why isn't it noise? I have to be missing something obvious, but nothing I've found thus far has cleared it up for me.
The query name doesn't have any meaning on the server whatsoever. It's only used for clients to identify the responses (since you can send multiple queries/mutations in a single request).
In fact, you can send just an anonymous query object if that's the only thing in the GraphQL request (and doesn't have any parameters):
{
  user(id: 4) {
    id
    name
    profilePic(size: 200)
  }
}
This only works for a query, not a mutation.
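To make the multi-operation case concrete: when a document contains more than one operation, the request has to name the one to execute via operationName. A sketch over plain HTTP (the endpoint and the second operation are made up):

// Two named operations in one document; operationName picks which one runs.
fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: `
      query getZuckProfile($devicePicSize: Int) {
        user(id: 4) { id name profilePic(size: $devicePicSize) }
      }
      query getZuckThumbnail {
        user(id: 4) { id name profilePic(size: 50) }
      }
    `,
    operationName: 'getZuckProfile',
    variables: { devicePicSize: 200 },
  }),
})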
EDIT:
As @orta notes below, the name could also be used by the server to identify a persisted query. However, this is not part of the GraphQL spec; it's just a custom implementation on top.
We use named queries so that they can be monitored consistently, and so that we can do persistent storage of a query. The duplication is there for query variables to fill the gaps.
As an example:
query getArtwork($id: String!) {
  artwork(id: $id) {
    title
  }
}
You can run it against the Artsy GraphQL API here
The advantage is that it's the same query each time, not a different string, because the query variables are the bit that differs. This means you can build tools on top of those queries, because you can treat them as immutable.
