I have been attempting to interact with AppSync/GraphQL from:
- Lambda - Create (works), Update (does not change data)
- Angular - Create/Update subscription received, but object is null
- Angular - Spoofed update (does not change data)
- AppSync Console - Spoofed update (does not change data)
Post:
mutation MyMutation {
  updateAsset(input: {
    id: "b34d3aa3-fbc4-48b5-acba-xxxxxxxxxxx",
    owner: "51b691a5-d088-4ac0-9f46-xxxxxxxxxxxx",
    description: "AppSync"
  }) {
    id
    owner
    description
  }
}
Response:
{
  "data": {
    "updateAsset": {
      "id": "b34d3aa3-fbc4-48b5-acba-xxxxxxxxxx",
      "owner": "51b691a5-d088-4ac0-9f46-xxxxxxxxxxx",
      "description": "Edit Edit from AppSync"
    }
  }
}
The version in DynamoDB gets auto-incremented each time I send the mutation, but the description remains as originally set.
Auth rules on the schema:

@auth(
  rules: [
    { allow: public, provider: apiKey, operations: [create, update, read] },
    { allow: private, provider: userPools, operations: [read, create, update, delete] },
    { allow: groups, groups: ["admin"], operations: [read, create, update, delete] }
  ])
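For context, this directive hangs off the model type itself; a minimal sketch of the full type (the field list is an assumption based on the mutation above):

type Asset
  @model
  @auth(
    rules: [
      { allow: public, provider: apiKey, operations: [create, update, read] },
      { allow: private, provider: userPools, operations: [read, create, update, delete] },
      { allow: groups, groups: ["admin"], operations: [read, create, update, delete] }
    ]) {
  id: ID!
  owner: String
  description: String
}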
For now, on the frontend, I'm cheating and just requesting the data after receiving a null subscription event. But as I've stated, I only seem to be able to set the data once; after that I can't update it.
Any insight appreciated.
Update: I even tried a DeleteAsset mutation; it doesn't delete the record, but it still bumps the version.
I guess maybe the next sane thing to do is to either stand up a new environment or attempt to stand this up in a fresh account.
Update: I have a working theory that this has something to do with conflict detection/rejection. When I try to delete directly via AppSync, I get a rejection. From Angular, I just get the record back with no delete.
After adding the additional auth mode on the API, I remembered it asked about conflict resolution, and I chose "AutoMerge". Docs on this: https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html
After further review I'll note what happened in the hopes it helps someone else.
Ran amplify add api.
This walked me through a wizard. I used the existing Cognito user pool, since I had not foreseen that I would later need to call this API from an S3 trigger (a Lambda function).
Later, needing to grant apiKey (or preferably IAM) access from the Lambda to the AppSync/GraphQL API, I ran amplify update api and added the additional auth setting.
This asked how I wanted to resolve conflicts, since more than one source can now edit the data. Because I just hit "agree" on terms and conditions and rarely read the manual, I selected 'AutoMerge'... sounds nice, right?
Now, if you read the fine print, edits made to a table will be rejected, because there is now a _version (Int) field that must be passed along so AutoMerge can decide whether to accept your change.
It also creates an extra DataStore table in DynamoDB to track versions. So, to properly work with this strategy, you'd need to extend your selection sets and mutation inputs to include _version, not just id or whatever primary key you opted to use.
Also note: deleting a record sets a _deleted Bool to true. The record is still returned to the UI, so your initial query now needs to filter out (or not) deleted records.
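As a minimal sketch of that filter (assuming the standard Amplify-generated listAssets query):

import { API, graphqlOperation } from 'aws-amplify';
import { listAssets } from './graphql/queries';

const { data } = await API.graphql(graphqlOperation(listAssets));
// _deleted is true once a delete has synced; keep only live records.
const visibleAssets = data.listAssets.items.filter(item => !item._deleted);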
I determined I didn't need any of this. I don't want to use DataStore (at least not now), so I found the offender in transform.conf.json within the API. After running amplify update api (GraphQL) and choosing 'Disable DataStore for entire API', it got rid of the ConflictHandler and ConflictDetection.
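For reference, the offending section of transform.conf.json looks something like this while DataStore is enabled (the top-level Version number may differ in your project):

{
  "Version": 5,
  "ResolverConfig": {
    "project": {
      "ConflictHandler": "AUTOMERGE",
      "ConflictDetection": "VERSION"
    }
  }
}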
This was also breaking my Angular 11 Create/Update subscriptions, as the added fields broke the expected model. Not to mention the event coming back was null, since nothing had changed.
Great information here, Mark. Thanks for the write up and updates.
I was playing around with this and with the Auto Merge conflict resolution strategy I was able to post an update using a GraphQL mutation by sending the current _version member along.
This function:
import { API, graphqlOperation } from 'aws-amplify';
import { updateAsset } from './graphql/mutations';

await API.graphql(
  graphqlOperation(updateAsset, {
    input: {
      id: assetToUpdate.id,
      name: "Updated name",
      _version: assetToUpdate._version
    }
  })
);
This properly updates, contacts AppSync, and propagates the changes to DynamoDB/DataStore. Passing the current version tells AppSync that we are up to date and allowed to edit the content. AppSync then manages/increments _version/_createdAt/etc.
Adding _version to my mutation worked very well.
API.graphql({
  query: yourQuery,
  variables: {
    input: {
      id: 'your-id',
      ...
      _version: version,
    },
  },
});
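To have the current _version on hand, the record generally has to be read first; a minimal sketch, assuming the Amplify-generated getAsset query includes _version in its selection set:

import { API, graphqlOperation } from 'aws-amplify';
import { getAsset } from './graphql/queries';

// Read the record first; the response carries the server-tracked _version.
const { data } = await API.graphql(graphqlOperation(getAsset, { id: 'your-id' }));
const version = data.getAsset._version;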
Some fields coming from the GraphQL server will have the shape short-lived-token-XYZ123. Ideally we wouldn't even have to know the field names ahead of time, as any code we write will live in a library. How can I hook into the InMemoryCache or the ApolloClient object to set the cache time of fields whose values match a regex?

Causing them to poll at a set interval would be ideal, but because polling is query-centric, I don't think that is possible at the field level. Giving them a specific cache time would be enough. Is there a way to hook into the InMemoryCache with a function that gets called on every read?
Another option would be to make these token strings a graphql type Token like
type Token {
id: String
}
and then in the client it might be possible to define a custom cache behavior for this type when initializing the cache like
new InMemoryCache({
  typePolicies: {
    Token: {
      fields: {
        id: {
          read(cachedVal) {
            // pseudocode: cacheTimeElapsed would need to be tracked somehow
            if (cacheTimeElapsed) {
              return null
            } else {
              return cachedVal
            }
          }
        }
      }
    }
  }
})
But I'm also unclear HOW to bust the cache using the read function. What do I return from the function to tell the cache that the entry is stale and needs to be refetched? These docs are... challenging. If I could just call a function on every single read and do what I need to do, that would be ideal.
These fields will also be annotated in the apollo-server with @token (for other reasons), and we could potentially hook in there to somehow tell the client to cache-bust these fields. Not sure how, but it's another option.
I posted the same question on the Apollo forums and received the answer that, remarkably, they don't support setting specific cache times or invalidating the cache from the read function of the typePolicies. It is apparently on the roadmap.
A third-party caching library was suggested instead: https://github.com/NerdWalletOSS/apollo-cache-policies
Looking at the "Why does this exist?" section of the NerdWallet README, you can see they mention that this is a common pain point with the InMemoryCache.
I have set up a Gatsby client which connects to Contentful using the gatsby-source-contentful plugin. I have also connected a simple custom API using the gatsby-source-graphql plugin.
When I run the dev-server I am able to query my pages from Contentful in the playground.
I am also able to query my custom API through the playground as well.
So both APIs work and are connected with Gatsby properly.
I want to programmatically generate a bunch of pages that have dynamic sections (references) which an author can add and order as she wishes.
I achieve this using the ...on Node connection together with fragments I define within each dynamic section. It all works out well so far.
My actual problem:
Now I have a dynamic section which is a job list. This component needs data from the Contentful API, since it stores values like latitude and longitude: the author is free to set a point on a map along with a radius. I successfully get this information out of Contentful using a fragment inside the component:
export const query = graphql `
fragment JoblistModule on ContentfulJoblisteMitAdresse {
... on ContentfulJoblisteMitAdresse {
contentful_id
radius
geo {
lon
lat
}
}
}`
But how can I pass this information into another query that fetches the job data from my custom API? If I understand Gatsby correctly, I somehow have to connect these two APIs together. Or can I run another query that takes these values as variables? How and where would I do this?
I could not find an approach in gatsby-node.js (since passed-in context can only be used as variables inside a query), nor in the template file (since I can run only one query at a time), nor in the component itself (since that only accepts a staticQuery).
I don't know where my misunderstanding is, so I would appreciate any hints, help, or examples.
Since your custom API is a graphQL API, you can use delegateToSchema from the graphql-tools package to accomplish this.
You will need to create a resolver using Gatsby's setFieldsOnGraphQLNodeType API. Within this resolver, your resolve function will call delegateToSchema.
We have a similar problem, our blog posts have an "author" field which contains an ID. We then do a graphQL query to another system to look up author info by that ID.
return {
remoteAuthor: {
type: person,
args: {},
resolve: async (source: ContentfulBlogPost, fieldArgs, context, info) => {
if (!source.author) {
return null
}
// runs the selection on the remote schema
// https://github.com/gatsbyjs/gatsby/issues/14517
return delegateToSchema({
schema: authorsSchema,
operation: 'query',
fieldName: 'Person',
args: { id: source.author },
context,
info,
})
},
},
}
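For context, here is a rough sketch of how that return value plugs into gatsby-node.js (building authorsSchema, e.g. by introspecting the remote API with graphql-tools, is elided):

// gatsby-node.js - sketch only
const { delegateToSchema } = require('graphql-tools')

exports.setFieldsOnGraphQLNodeType = ({ type }) => {
  // Only extend the Contentful blog post type.
  if (type.name !== 'ContentfulBlogPost') {
    return {}
  }
  return {
    remoteAuthor: {
      // ...the field definition shown above...
    },
  }
}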
This adds a 'remoteAuthor' field to our blog post type, and whenever it gets queried, those selections are proxied to the remote schema where the person type exists.
I tried
type Mutation {
deleteUser(id: ID!): User @delete @broadcast(subscription: "userDeleted")
}
type Subscription {
userDeleted(id: ID!): User
}
and I created a subscription where the authorize and filter methods return true.
But I get this error:
Illuminate\Database\Eloquent\ModelNotFoundException: No query results for model [App\User]
The deleteUser mutation works; only the subscription does not. I use Pusher for broadcasting, and the error appeared in the Horizon dashboard.
If you really need a solution right now, just write a custom resolver that first broadcasts the event and then deletes the user (you can even write a custom directive that generalizes this).
Otherwise, you will have to dig a bit into Lighthouse's internals to find a solution.
It might be too late for you by now, but could help future developers looking for a solution.
I found that you can trigger the subscription in the model's 'deleting' event using Laravel's model events: https://laravel.com/docs/7.x/eloquent#events. This way the model still exists in the database when the subscription reads it, so it shouldn't throw an error.
Ideally, the accepted solution would probably be the cleanest way to do it, but this works in the meantime.
The REST API that I have to use provides data over multiple endpoints. The objects in the results might have relations that are not resolved directly by the API; rather, it provides ids that point to the actual resources.
Example:
For simplicity's sake let's say a Person can own multiple Books.
Now the api/person/{i} endpoint returns this:
{ id: 1, name: "Phil", books: [1, 5, 17, 31] }
The api/book/{i} endpoint returns this (note that author might be a relation again):
{ id: 5, title: "SPRINT", author: 123 }
Is there any way I can teach the apollo client to resolve those endpoints in a way that I can write the following (or a similar) query:
query fetchBooksOfUser($id: ID) {
  person(id: $id) {
    name,
    books {
      title
    }
  }
}
I didn't try it (yet) in one query, but it should be possible.
Read the docs starting from this.
To begin with, I would try something like:
query fetchBooksOfUser($id: ID) {
  person(id: $id) @rest(type: "Person", path: "api/person/{args.id}") {
    name,
    books @rest(type: "Book", path: "api/book/{data.person.books.id}") {
      id,
      title
    }
  }
}
... but it probably won't work; it's probably not smart enough to work with arrays.
UPDATE: See this note for a similar example, but one that uses a single, common parent-resolved param. In your case we have partially resolved books as arrays of objects with an id. I don't know how to use these ids to resolve the missing fields at the same 'tree' level.
Another possibility: make the related subrequests/subqueries (somehow) in a Person type patcher. That should be possible.
Does this really need to be one query? You can provide ids to child containers, each of them running its own query when needed.
UPDATE: Apollo will take care of batching (not for REST, and not for all GraphQL servers; read the docs).
It's 'handy' to construct one query, but Apollo will cache it anyway, normalizing the response by types; the data is stored separately. Using one query keeps you in the overfetching camp, or in template thinking (collect all possible data before one rendering step).
React thinking keeps your data and views decomposed, used when needed, more specialized, etc.
A <Person/> container will query for the data needed to render itself plus the list of child ids. Each <Book/> will query for its own data using the passed id.
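A rough sketch of that decomposition (component and query names are assumptions; the queries are assumed to carry @rest directives like those above):

import React from 'react';
import { useQuery } from '@apollo/client';
import { GET_PERSON, GET_BOOK } from './queries';

function Book({ id }) {
  // Each book resolves its own endpoint; Apollo normalizes/caches by type + id.
  const { data } = useQuery(GET_BOOK, { variables: { id } });
  return data ? <li>{data.book.title}</li> : null;
}

export function Person({ id }) {
  const { data } = useQuery(GET_PERSON, { variables: { id } });
  if (!data) return null;
  return (
    <div>
      <h1>{data.person.name}</h1>
      <ul>
        {data.person.books.map((bookId) => (
          <Book key={bookId} id={bookId} />
        ))}
      </ul>
    </div>
  );
}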
As an alternative, you could set up your own GraphQL back-end as an intermediary between your front-end and the REST API you're planning to use.
It's fairly easy to implement REST APIs as data sources in GraphQL using Apollo Server and a package such as apollo-datasource-rest, which is maintained by the authors behind Apollo Server.
It would also allow you to scale if you ever have to use other data sources (DBs, 3rd party APIs, etc.) and would give you full control about exactly what data your queries return.
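A minimal sketch of that approach (the base URL, class name, and resolver wiring are assumptions):

const { RESTDataSource } = require('apollo-datasource-rest');

class PersonBookAPI extends RESTDataSource {
  constructor() {
    super();
    this.baseURL = 'https://example.com/api/'; // assumed base URL
  }
  getPerson(id) {
    return this.get(`person/${id}`);
  }
  getBook(id) {
    return this.get(`book/${id}`);
  }
}

// Resolvers fan the ids in person.books out to the book endpoint.
const resolvers = {
  Query: {
    person: (_root, { id }, { dataSources }) => dataSources.api.getPerson(id),
  },
  Person: {
    books: (person, _args, { dataSources }) =>
      Promise.all(person.books.map((bookId) => dataSources.api.getBook(bookId))),
  },
};

The data source would then be registered on the server via dataSources: () => ({ api: new PersonBookAPI() }).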
In my app I have alerts. Partial schema:
type Alert {
id: ID!
users: [User]
}
type User {
  id: ID!
  username: String
  # ... many more calculated fields
}
Each alert can return a list of Users. This User type is computationally expensive to build, and we really only need a couple of user fields to display on an alert. However, if we only partially build the user object, Apollo will cache these partial user objects and break other parts of the app that depend on "complete" User objects. Avoiding that would require fetchPolicy: "no-cache", and I'd like to retain caching.
So I'm trying to avoid returning entire User objects just to support an Alert. I've run into this issue with other types as well and am struggling with the best way to architect for this. The only solution I've come up with is to create "partial types", which would be separate types with a subset of fields. For example:
type Alert {
  id: ID!
  users: [PartialUser]
}

type PartialUser {
  id: ID!
  username: String
  name: String
}
This feels hacky and like it violates DRY principles.
Another way might be to manually update the cache after a query. This would avoid caching the partial User objects, but it still feels hacky. I also think that only mutations support an options.update method for manipulating the cache. So I'm a bit stuck and haven't found any guidance in the docs.
Are there any recommendations for approaching this problem?