PostGraphile seems like a very handy tool, but I already have tens of queries and mutations on the client and server side.
Is there any way to integrate PostGraphile piece by piece, keeping my old, hand-written GraphQL schema working?
So, right now I have the following initialization code:
import { ApolloClient } from 'apollo-client';
import { createHttpLink } from 'apollo-link-http';
import { InMemoryCache } from 'apollo-cache-inmemory';

function createApolloLink() {
  return createHttpLink({
    uri: '/graphql',
    credentials: 'same-origin'
  });
}

function create() {
  return new ApolloClient({
    link: createApolloLink(),
    ssrMode: !process.browser, // eslint-disable-line
    cache: new InMemoryCache(),
    connectToDevTools: process.browser
  });
}
How can I keep one normalised store on the client side while also connecting to a second API endpoint driven by PostGraphile, e.g. /graphql2?
Typically your GraphQL client shouldn't have to think about this - it should be handled on the server side.
There are a number of techniques you can use to address this on the server side:
Schema Stitching
Schema stitching is a straightforward approach for your issue - take your old schema and merge it with your PostGraphile schema; that way, when clients communicate with /graphql, they have access to both schemas. You can then mark everything in your old schema as deprecated and slowly phase out usage.
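As a minimal sketch (using the graphql-tools v4 API; oldSchema stands for your existing hand-written schema, and the function name here is illustrative):

const { mergeSchemas } = require('graphql-tools');
const { createPostGraphileSchema } = require('postgraphile');

async function buildMergedSchema(pgPool, oldSchema) {
  // Build the PostGraphile schema for the "public" PostgreSQL schema,
  // then merge it with the legacy schema so both are served from /graphql
  const postgraphileSchema = await createPostGraphileSchema(pgPool, 'public');
  return mergeSchemas({ schemas: [oldSchema, postgraphileSchema] });
}

However, if you can, I'd instead recommend that you use a PostGraphile plugin...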
PostGraphile Plugin
PostGraphile is built around a plugin system, and you can use something like makeExtendSchemaPlugin to mix your old GraphQL schema into the PostGraphile one. This is documented here: https://www.graphile.org/postgraphile/make-extend-schema-plugin/ - if your old types/resolvers are implemented via something like graphql-tools, this is probably the easiest way to get started:
const { makeExtendSchemaPlugin, gql } = require('graphile-utils');
const typeDefs = gql`
  type OldType1 {
    field1: Int!
    field2: String
  }
  extend type Query {
    oldField1: OldType1
    oldField2: OldType2
  }
`;

const resolvers = {
  Query: {
    oldField1(/*...*/) {
      /* old logic here */
    },
    //...
  },
};

const AddOldSchemaPlugin = makeExtendSchemaPlugin(
  build => ({
    typeDefs,
    resolvers,
  })
);

module.exports = AddOldSchemaPlugin;
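You would then append the plugin when mounting PostGraphile; a rough sketch, assuming an Express app and a DATABASE_URL connection string:

const express = require('express');
const { postgraphile } = require('postgraphile');
const AddOldSchemaPlugin = require('./AddOldSchemaPlugin');

const app = express();
app.use(
  postgraphile(process.env.DATABASE_URL, 'public', {
    // Mix the legacy types/resolvers into the generated schema
    appendPlugins: [AddOldSchemaPlugin],
  })
);
app.listen(3000);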
This will also lead to the best performance as there should be no added latency, and you can again mark the legacy fields/mutations as deprecated.
Schema Delegation
Using this approach you write your own new GraphQL schema which then "delegates" to the other GraphQL schemas (the legacy one, and the one generated by PostGraphile). This adds a little latency but gives you much more control over the final shape of your GraphQL schema, though with this power comes great responsibility - if you make a typo then you're going to have to maintain that typo for a long time! Personally, I prefer the generated schema approach used by PostGraphile.
However, to answer your question as asked: Apollo Link has "context" functionality that allows you to change how a query is executed. Typically this is used to add headers, but you can also use it to override the URI and control where the query is sent. I've never done this myself, but I wouldn't be surprised if there were an Apollo Link you can use that switches automatically based on a client directive or even on the field name.
https://github.com/apollographql/apollo-link/tree/master/packages/apollo-link-http#context
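For example, you could split traffic between the two endpoints based on the operation's context; a sketch for Apollo Client 2.x (the clientName context key is just a convention I'm assuming here):

import { ApolloLink } from 'apollo-link';
import { createHttpLink } from 'apollo-link-http';

const legacyLink = createHttpLink({ uri: '/graphql', credentials: 'same-origin' });
const postgraphileLink = createHttpLink({ uri: '/graphql2', credentials: 'same-origin' });

// Route an operation to /graphql2 when it sets `context: { clientName: 'postgraphile' }`
const link = ApolloLink.split(
  (operation) => operation.getContext().clientName === 'postgraphile',
  postgraphileLink,
  legacyLink
);

Individual queries would then opt in via client.query({ query, context: { clientName: 'postgraphile' } }), and both endpoints share the same normalised InMemoryCache.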
Some fields coming from the GraphQL server will have the shape short-lived-token-XYZ123. Ideally we wouldn't even have to know the field names ahead of time, as any code we write will live in a library. How can I hook into the InMemoryCache or the ApolloClient object to set the cache time of fields whose values match a regex? Causing them to poll at a set interval would be really ideal, but because polling is query-centric, I don't think that is possible at the field level. Giving them a specific cache time would be enough. Is there a way to hook into the InMemoryCache with a function that gets called on every read?
Another option would be to make these token strings a graphql type Token like
type Token {
  id: String
}
and then in the client it might be possible to define a custom cache behavior for this type when initializing the cache like
new InMemoryCache({
  typePolicies: {
    Token: {
      fields: {
        id: {
          read(cachedVal) {
            // pseudocode: cacheTimeElapsed stands for "has this value expired?"
            if (cacheTimeElapsed) {
              return null
            } else {
              return cachedVal
            }
          }
        }
      }
    }
  }
})
But I'm also unclear HOW to bust the cache using the read function. What do I return from the function to tell the cache that the entry is stale and needs to be refetched? These docs are... challenging. If I could just call a function on every single read and do what I need to do, that would be ideal.
These fields will also be annotated in apollo-server with @token (for other reasons), and we could potentially hook in here to somehow tell the client to cache-bust these fields. Not sure how, but it's another option.
I posted the same question on the Apollo forums and received the answer that, remarkably, they don't support setting specific cache times or invalidating the cache from the read function of the typePolicies. It is apparently on the roadmap.
A third-party caching library was suggested instead: https://github.com/NerdWalletOSS/apollo-cache-policies
Looking at the "Why does this exist?" section in the NerdWallet README, you can see they mention that this is a common pain point with the InMemoryCache.
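Based on that README (double-check the current API before relying on this), a per-type TTL looks roughly like:

import { InvalidationPolicyCache } from '@nerdwallet/apollo-cache-policies';

// Drop-in replacement for InMemoryCache with time-to-live support
const cache = new InvalidationPolicyCache({
  invalidationPolicies: {
    types: {
      Token: {
        timeToLive: 60 * 1000, // evict cached Token entities after 60 seconds
      },
    },
  },
});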
I'm looking for a tool that can make a clone of the data exposed by a GraphQL API.
Basically something that can run periodically and recursively copy the raw data responses to disk, making use of connection-based pagination & cursors to ensure the consistency and progress of the mirrored content.
Assuming this would be a runner that extracts data 24/7, it will either have to rewrite/transform already-copied data or, even better, apply updates in a more event-sourced way to make it easier to provide diff-sets of changes in the source API's data.
I'm not aware of any such tool, and I'm not sure one will exist, because:
retrieving data from GraphQL requires only the thinnest of layers over the existing GraphQL libraries, which are quite feature-rich, and
the transformation/writing will likely be part of a different tool. I'm sure several tools for this already exist; the simplest example I can think of is Git, where getting a diff is as simple as running git diff after overwriting an existing version-controlled file.
A simple example of retrieving the data, adapted from the graphql-request documentation's quickstart:
import { request, gql } from 'graphql-request'
import { writeFile } from 'fs/promises'
const query = gql`
  {
    Movie(title: "Inception") {
      releaseDate
      actors {
        name
      }
    }
  }
`

request('https://api.graph.cool/simple/v1/movies', query)
  // writeFile needs a string or buffer, so serialize the response object
  .then((data) => writeFile('data.json', JSON.stringify(data, null, 2)))
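To cover the cursor-based pagination requirement, the same library can drive a paging loop. A hypothetical sketch, assuming the target API exposes a Relay-style connection (the moviesConnection field and its arguments are invented for illustration):

import { request, gql } from 'graphql-request'

const pagedQuery = gql`
  query Movies($after: String) {
    moviesConnection(first: 100, after: $after) {
      pageInfo {
        hasNextPage
        endCursor
      }
      edges {
        node {
          title
          releaseDate
        }
      }
    }
  }
`

async function mirror(endpoint) {
  const nodes = []
  let after = null
  do {
    // Fetch one page, remember the cursor, and continue until exhausted
    const data = await request(endpoint, pagedQuery, { after })
    const { pageInfo, edges } = data.moviesConnection
    nodes.push(...edges.map((edge) => edge.node))
    after = pageInfo.hasNextPage ? pageInfo.endCursor : null
  } while (after)
  return nodes
}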
I'm using RedwoodJS.
My front-end (what they call the "web" side) has (among other files) a HomePage.ts and MainCell.ts, and then the Success function in MainCell calls a 3rd-party API.
Everything is working.
However, I now want to start caching the results from the 3rd party API.
I've created a database table and a back-end "service" called cachedQueries.ts (using Prisma), which has:
export async function getFromCacheOrFresh(key: string, getFresh: Function, expiresInSec: number): Promise<any> {
  const nowMoment = dayjs.utc();
  const nowStr = nowMoment.format(dbTimeFormatUtc);
  // Look for a cache row for this key that has not yet passed its expiry
  const cached = await getCachedQuery({ key, expiresAtCutoff: nowStr });
  console.log('getFromCacheOrFresh cached', cached);
  if (cached) {
    const cachedValueParsed = JSON.parse(cached.value);
    console.log('cachedValueParsed', cachedValueParsed);
    return cachedValueParsed;
  } else {
    const fresh = await getFresh();
    // Fire-and-forget: persist the fresh result without blocking the response
    saveToCache(key, JSON.stringify(fresh), expiresInSec);
    return fresh;
  }
}
I have proven that this cachedQueries.ts service works and properly saves to and retrieves from the DB. I.e. for any 3rd-party APIs that can be called from the back-end (instead of from the front-end), the flow all works well.
Now my challenge is to enable it to cache front-end 3rd-party API queries too.
How can I call my getFromCacheOrFresh function from the Success function of MainCell on the front-end?
I must be confused about how Apollo, GraphQL, RedwoodJS, Prisma, etc. relate to each other.
P.S. Client-side caching will not suffice. I really need the 3rd-party API results to be saved in the DB on my server.
I eventually figured it out.
I created a new cell called GeoCell, and I'm returning an empty string for each function of GeoCell (Success, Loading, Empty, and Failure).
That feels weird but works.
What was unintuitive to me was the idea that I was required to use a JSX component (since Apollo would never let me query or mutate GraphQL outside of a JSX component), but I didn't want the cell component to actually display anything… because all that it needs to do in its Success is call a helper function that affects elements already created by a different cell (i.e. addMarkerAndInfoWindow affects the div of the existing Google Map but doesn't display anything where the GeoCell was actually located).
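In outline, the cell looks something like this (a sketch; the query name, its fields, and the variables are stand-ins for my actual code, and Redwood provides gql globally inside cells):

// GeoCell.ts (sketch)
export const QUERY = gql`
  query GeoQuery($address: String!) {
    geo(address: $address) {
      lat
      lng
    }
  }
`

export const Loading = () => ''
export const Empty = () => ''
export const Failure = () => ''

export const Success = ({ geo }) => {
  // Side effect only: call the helper mentioned above, which adds a marker
  // to the map rendered by a different cell, and render nothing here
  addMarkerAndInfoWindow(geo)
  return ''
}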
As https://stackoverflow.com/a/65373314/470749 mentions, there's a related discussion on the Redwood Community Forum: Thinking about patterns for services, GraphQL, Apollo, cells, etc.
I have setup a Gatsby Client which connects to Contentful using the gatsby-source-contentful plugin. I have also connected a simple custom API which is connected using the gatsby-source-graphql plugin.
When I run the dev-server I am able to query my pages from Contentful in the playground.
I am also able to query my custom API through the playground as well.
So both APIs work and are connected with Gatsby properly.
I want to programmatically generate a bunch of pages that have dynamic sections (references) which an author can add and order as she wishes.
I do achieve this using the ...on Node connection together with fragments I define within each dynamic section. It all works out well so far.
My actual problem:
Now I have a dynamic section which is a Joblist. This component needs data out of the Contentful API, as Contentful stores values like latitude and longitude: the author is free to set a point on a map and set a radius. I successfully get this information out of Contentful using a fragment inside the component:
export const query = graphql`
  fragment JoblistModule on ContentfulJoblisteMitAdresse {
    ... on ContentfulJoblisteMitAdresse {
      contentful_id
      radius
      geo {
        lon
        lat
      }
    }
  }
`
But how can I pass this information into another query that fetches the job data from my custom API? If I understand Gatsby correctly, I somehow have to connect these two APIs together? Or can I run another query somehow that takes these values as variables? How and where would I achieve this?
I could not find any approach inside gatsby-node.js (since passed-in context can only be used as variables inside a query), nor in the template file (since I can run only one query at a time), nor in the component itself (since that accepts only a static query).
I don't know where my misunderstanding is, so I would very much appreciate any hints, help, or examples.
Since your custom API is a graphQL API, you can use delegateToSchema from the graphql-tools package to accomplish this.
You will need to create a resolver using Gatsby's setFieldsOnGraphQLNodeType API. Within this resolver, your resolve function will call delegateToSchema.
We have a similar problem: our blog posts have an "author" field which contains an ID. We then do a GraphQL query to another system to look up author info by that ID.
// Returned from Gatsby's setFieldsOnGraphQLNodeType for the blog post type
return {
  remoteAuthor: {
    type: person,
    args: {},
    resolve: async (source: ContentfulBlogPost, fieldArgs, context, info) => {
      if (!source.author) {
        return null
      }
      // runs the selection on the remote schema
      // https://github.com/gatsbyjs/gatsby/issues/14517
      return delegateToSchema({
        schema: authorsSchema,
        operation: 'query',
        fieldName: 'Person',
        args: { id: source.author },
        context,
        info,
      })
    },
  },
}
This adds a 'remoteAuthor' field to our blog post type, and whenever it gets queried, those selections are proxied to the remote schema where the person type exists.
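For context, authorsSchema above is an executable schema for the remote API. With the older graphql-tools v4 API it could be built roughly like this (the endpoint URL is a placeholder):

const fetch = require('node-fetch');
const { HttpLink } = require('apollo-link-http');
const { introspectSchema, makeRemoteExecutableSchema } = require('graphql-tools');

async function buildAuthorsSchema() {
  const link = new HttpLink({ uri: 'https://example.com/authors/graphql', fetch });
  // Introspect the remote API, then wrap it so delegated selections execute over HTTP
  const schema = await introspectSchema(link);
  return makeRemoteExecutableSchema({ schema, link });
}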
Using the Apollo cache as a global store - for remote and local data - is very convenient.
However, while I've never used Redux, I think the most important thing about it is that it implements Flux: an event-driven architecture on the front-end that separates logic and ensures separation of concerns.
I don't know how to implement that with Apollo. The docs say:
When a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
Adding an update function in one part of the application that handles all cache updates, by updating queries and/or fragments for all the other parts of the application, is exactly what we want to avoid in a Flux / event-driven architecture.
To illustrate this, let me give a single simple example. Here, we have at least 3 linked components:
1. InboxCount
Component that shows the number of Inbox items in the SideNav
query getInboxCount {
  inbox {
    id
    count
  }
}
2. Inbox list items
Component that displays items on the Inbox page
query getInbox {
  inbox {
    id
    items {
      ...ItemPreview
      ...ItemDetail
    }
  }
}
Both of those components read data from these GraphQL queries via auto-generated hooks, e.g. const { data, loading } = useGetInboxItemsQuery().
3. AddItem
Component that creates a new item. Because it creates a new entity, I need to manually update the cache. So I am forced to write
(pseudo-code)
const [addItem, { loading }] = useCreateItemMutation({
  update(cache, { data }) {
    const cachedData = cache.readQuery<GetInboxItemsQuery>({
      query: GetInboxItemsDocument,
    })
    if (cachedData?.inbox) {
      // 1. Update items list GetInboxItemsQuery
      const newItems = cachedData.inbox.items.concat(data.items)
      cache.writeQuery({
        query: GetInboxItemsDocument,
        data: {
          inbox: {
            id: 'me',
            __typename: 'Inbox',
            items: newItems,
          },
        },
      })
      // 2. Update another query, wrapped into another reusable method, here
      setInboxCount(cache, newItems.length)
    }
  },
})
Here, my AddItem component must be aware of the various other queries / fragments declared across my application. 😭 Moreover, as it's quite verbose, complexity increases very fast in the update method, especially when multiple lists / queries should be updated, like here.
Does anyone have recommendations for implementing more independent components? Am I wrong in how I created my queries?
The unfortunate truth about update is that it trades simplicity for performance. A truly "dumb" client would only receive data from the server and render it, never manipulating it. By instructing Apollo how to modify our cache after a mutation, we're inevitably duplicating the business logic that already exists on our server. The only way to avoid this is to either:
Have the mutation return a larger section of the graph. For example, if a user creates a post, instead of returning the created post, return the complete user object, including all of the user's posts.
Refetch the affected queries (see the sketch after this list).
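For example, the second option with the hooks from your pseudo-code might look like this (GetInboxCountDocument is the codegen name I'd expect for the getInboxCount query):

const [addItem] = useCreateItemMutation({
  // Re-run the affected queries instead of hand-writing cache updates
  refetchQueries: [
    { query: GetInboxItemsDocument },
    { query: GetInboxCountDocument },
  ],
  awaitRefetchQueries: true,
})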
Of course, often neither approach is particularly desirable and we opt for injecting business logic into our client apps instead.
Separating this business logic could be as simple as keeping your update functions in a separate file and importing them as needed. This way, at least you can test the update logic separately. You may also prefer a more elegant solution like utilizing a Link. apollo-link-watched-mutation is a good example of a Link that lets you separate the update logic from your components. It also solves the issue of having to keep track of query variables in order to perform those updates.