Getting "object is not extensible" when trying to add fields to result gql data objects after upgrading to Apollo client 2 - apollo-client

When getting results from the server, I usually add extra fields to the returned objects in my Apollo Angular client, for convenient use later on, like this:
this.apollo.watchQuery<any>({ query: gql`...` })
  .valueChanges.pipe(
    map(result => {
      result.data.sites.forEach(s => {
        s.fullName = `${s.parent.displayName} - ${s.displayName}`;
      });
      return result;
    })
  );
This used to work fine with Apollo Client 1. Now I'm upgrading my code to Apollo Client 2 and the latest apollo-cache-inmemory, and I'm getting an error when trying to do the above:
TypeError: Cannot add property fullName, object is not extensible
I realize I could make deep copies of the objects and that would resolve the error, but why this change? Is there a way to make Apollo return an extensible object like before? I have many queries in my code, and in almost all of them I add fields to the result like the above, so it would be a fairly big code change. Deep copying would also have a slight performance cost.
Thanks!

Do not mutate the result; update fullName in an immutable way instead.
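A minimal sketch of that immutable approach, reusing the query shape from the question: map each site to a new object instead of assigning to the frozen result from the cache.

this.apollo.watchQuery<any>({ query: gql`...` })
  .valueChanges.pipe(
    map(result => ({
      ...result,
      data: {
        ...result.data,
        // New site objects are created, so nothing frozen is mutated.
        sites: result.data.sites.map(s => ({
          ...s,
          fullName: `${s.parent.displayName} - ${s.displayName}`,
        })),
      },
    }))
  );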

Related

How can I specify maximum cache-time for any data matching a regex in apollo-client with InMemoryCache?

Some fields coming from the GraphQL server will have the shape short-lived-token-XYZ123. Ideally we wouldn't even have to know the field names ahead of time, as any code we write will live in a library. How can I hook into the InMemoryCache or the ApolloClient object to set the cache time of fields whose values match a regex? Causing them to poll at a set interval would be really ideal, but because polling is query-centric, I don't think that is possible at the field level. Giving them a specific cache time would be enough. Is there a way to hook into the InMemoryCache with a function that gets called on every read?
Another option would be to make these token strings a graphql type Token like
type Token {
  id: String
}
and then in the client it might be possible to define a custom cache behavior for this type when initializing the cache like
new InMemoryCache({
  typePolicies: {
    Token: {
      fields: {
        id: {
          // Pseudo-code: cacheTimeElapsed stands in for whatever
          // staleness check would be needed here.
          read(cachedVal) {
            if (cacheTimeElapsed) {
              return null;
            } else {
              return cachedVal;
            }
          },
        },
      },
    },
  },
});
But I'm also unclear on HOW to bust the cache using the read function. What do I return from the function to tell the cache that the value is stale and needs to be refetched? These docs are... challenging. If I could just call a function on every single read and do what I need to do, that would be ideal.
These fields will also be annotated in apollo-server with @token (for other reasons), and we could potentially hook in there to somehow tell the client to cache-bust these fields. Not sure how, but it's another option.
I posted the same question on the Apollo forums and received the answer that, remarkably, they support neither setting specific cache times nor invalidating the cache from the read function of typePolicies. It is apparently on the roadmap.
A third party caching library was suggested instead: https://github.com/NerdWalletOSS/apollo-cache-policies
Looking at the "Why does this exist?" section of the NerdWallet README, you can see they mention that this is a common pain point with InMemoryCache.
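As a sketch of what that library offers, based on its README (treat the exact option names as assumptions), the cache can be constructed with per-type time-to-live values:

import { InvalidationPolicyCache } from '@nerdwallet/apollo-cache-policies';

// Drop-in replacement for InMemoryCache that adds invalidation policies.
const cache = new InvalidationPolicyCache({
  invalidationPolicies: {
    timeToLive: 3600 * 1000, // default TTL for all types, in ms
    types: {
      // Token entities older than 60s are treated as stale on read.
      Token: { timeToLive: 60 * 1000 },
    },
  },
});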

How to separate logic when updating Apollo cache that used as global store?

Using the Apollo cache as a global store, for both remote and local data, is very convenient.
However, while I've never used Redux, I think the most important thing about it is that it implements Flux: an event-driven architecture on the front end that separates logic and ensures separation of concerns.
I don't know how to implement that with Apollo. The doc says
If a mutation modifies multiple entities, or if it creates or deletes entities, the Apollo Client cache is not automatically updated to reflect the result of the mutation. To resolve this, your call to useMutation can include an update function.
Adding an update function in one part of the application that handles all cache updates, by updating queries and/or fragments for all the other parts of the application, is exactly what we want to avoid in a Flux / event-driven architecture.
To illustrate this, let me give a single simple example. Here, we have (at least 3 linked components)
1. InboxCount
Component that show the number of Inbox items in SideNav
query getInboxCount {
  inbox {
    id
    count
  }
}
2. Inbox list items
Component that displays items in Inbox page
query getInbox {
  inbox {
    id
    items {
      ...ItemPreview
      ...ItemDetail
    }
  }
}
Both of these components read data from those GraphQL queries via auto-generated hooks, e.g. const { data, loading } = useGetInboxItemsQuery().
3. AddItem
Component that creates a new item. Because it creates a new entity, I need to manually update the cache, so I am forced to write
(pseudo-code)
const [addItem, { loading }] = useCreateItemMutation({
  update(cache, { data }) {
    const cachedData = cache.readQuery<GetInboxItemsQuery>({
      query: GetInboxItemsDocument,
    })
    if (cachedData?.inbox) {
      // 1. Update the items list for GetInboxItemsQuery
      const newItems = cachedData.inbox.items.concat(data.items)
      cache.writeQuery({
        query: GetInboxItemsDocument,
        data: {
          inbox: {
            id: 'me',
            __typename: 'Inbox',
            items: newItems,
          },
        },
      })
      // 2. Update another query, wrapped in a reusable method here
      setInboxCount(cache, newItems.length)
    }
  },
})
Here, my AddItem component must be aware of the other queries and fragments declared across my application. 😭 Moreover, as it's quite verbose, complexity increases very fast in the update method, especially when multiple lists / queries need to be updated.
Does anyone have recommendations for implementing more independent components? Am I wrong in how I structured my queries?
The unfortunate truth about update is that it trades simplicity for performance. A truly "dumb" client would only receive data from the server and render it, never manipulating it. By instructing Apollo how to modify our cache after a mutation, we're inevitably duplicating the business logic that already exists on our server. The only way to avoid this is to either:
1. Have the mutation return a larger section of the graph. For example, if a user creates a post, instead of returning the created post, return the complete user object, including all of the user's posts.
2. Refetch the affected queries.
Of course, often neither approach is particularly desirable and we opt for injecting business logic into our client apps instead.
Separating this business logic could be as simple as keeping your update functions in a separate file and importing them as needed. This way, at least you can test the update logic separately. You may also prefer a more elegant solution like utilizing a Link. apollo-link-watched-mutation is a good example of a Link that lets you separate the update logic from your components. It also solves the issue of having to keep track of query variables in order to perform those updates.
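As a minimal sketch of that separate-file option (the module path, generated type names, and createItem result field mirror the question's setup and are assumptions):

// cache-updates.ts
import { ApolloCache } from '@apollo/client'
import { GetInboxItemsDocument, GetInboxItemsQuery } from './generated'

// Reusable, component-agnostic update logic; testable in isolation.
export function addItemToInbox(
  cache: ApolloCache<unknown>,
  newItem: GetInboxItemsQuery['inbox']['items'][number]
) {
  const cachedData = cache.readQuery<GetInboxItemsQuery>({
    query: GetInboxItemsDocument,
  })
  if (!cachedData?.inbox) return
  cache.writeQuery({
    query: GetInboxItemsDocument,
    data: {
      inbox: { ...cachedData.inbox, items: [...cachedData.inbox.items, newItem] },
    },
  })
}

The component then only wires the mutation result to the imported helper:

const [addItem, { loading }] = useCreateItemMutation({
  update: (cache, { data }) => {
    if (data?.createItem) addItemToInbox(cache, data.createItem)
  },
})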

Use GraphQL variables to define fields

I am trying to do something effectively like this
`query GetAllUsers($fields: [String]) {
  users {
    ...$fields
  }
}`
Where my client (currently Apollo for React) then passes in an array of fields in the variables section. The goal is to be able to pass in an array of the fields I want back and have it interpolated into the appropriate GraphQL query. This currently returns a GraphQL syntax error at $fields (it expects a { but sees $). Is this even possible? Am I approaching this the wrong way?
One other option I had considered was invoking a JavaScript function and passing that result to query(), where the function would do something like the following:
buildQuery(fields) {
  return gql`
    query {
      users {
        ${fields}
      }
    }`;
}
This, however, feels like an unnecessary workaround.
Comments summary:
Non-standard requirements require workarounds ;)
You can use fragments (for predefined fieldsets), but they probably won't be freely granular (field-level); see the sketch below.
Variables are definitely not for query definition (they are for values used within the query).
Daniel's suggestion: gql-query-builder
It seems the GraphQL community is great and full of people working on all possible use cases... it's enough to search for solutions or ask on SO ;)
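A sketch of the fragment option mentioned in the comments (the fragment and type names are illustrative): the fieldsets are predefined at build time rather than passed as variables, which is what the spec supports.

import { gql } from '@apollo/client'

// A predefined fieldset; swap in a different fragment to change the shape.
const USER_SUMMARY = gql`
  fragment UserSummary on User {
    id
    name
  }
`

const GET_ALL_USERS = gql`
  query GetAllUsers {
    users {
      ...UserSummary
    }
  }
  ${USER_SUMMARY}
`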

GraphQL: multiple mutations using a prior mutation's return results?

I understand that mutations run sequentially, so it makes sense to me that if Mutation 1 creates an entity and returns an id, Mutation 2 should have access to that id. However, I don't see any examples online and can't seem to get it to work. I see people say you need to handle this in the resolve function of your route, but that seems like extra, unnecessary code if I can get this in just the query.
For example, I have the following, where accounts belong to clients and hence need the clientId before being created. However, this does not work...
mutation createClientAndAccount($account: AccountInput, $client: ClientInput) {
  createClient(client: $client) { clientId }
  createAccount(account: $account, clientId: USE_CLIENT_ID_FROM_ABOVE) { ... }
}
I've also tried nesting mutations, but didn't have much luck there either...
Is what I'm trying to do possible? Would the resolve function of createAccount have access to the return data from createClient?
This is not possible right now, though it would be useful.
See this PR.
Maybe we could achieve it using a custom schema directive.
Schema stitching would be a better approach (though it is usually preferred in an API gateway, for merging APIs from different services).
If this requirement is rare in your application, simply creating a new mutation that does both, like createClientAndAccount, is enough.
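A sketch of that combined mutation on the server (the schema and the context.db data-access calls are assumptions): the resolver creates the client first, then reuses its id when creating the account, so the client never has to chain the two operations.

import { gql } from 'apollo-server'

const typeDefs = gql`
  type Mutation {
    createClientAndAccount(client: ClientInput, account: AccountInput): Account
  }
`

const resolvers = {
  Mutation: {
    async createClientAndAccount(_parent: unknown, args: any, context: any) {
      // The id returned by the first insert feeds the second one.
      const newClient = await context.db.createClient(args.client)
      return context.db.createAccount({
        ...args.account,
        clientId: newClient.clientId,
      })
    },
  },
}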

Why is Fluent NHibernate with LINQ returning an empty list (with Oracle database)?

I'm using Fluent NHibernate (the NH3 build, #694) and LINQ to connect to an Oracle 11 database. However, I don't seem to be able to get any data out of the database. The connection seems to be working, as changing my login info causes an error to be thrown.
I'm using the following code:
// Setup.
OracleClientConfiguration oracleClientConfiguration =
    OracleClientConfiguration.Oracle10
        .ShowSql()
        .ConnectionString(connectionString);

_sessionFactory =
    Fluently.Configure()
        .Database(oracleClientConfiguration)
        .Mappings(m => m.FluentMappings
            .AddFromAssemblyOf<Feed>())
        .BuildSessionFactory();

// Query.
using (ISession session = _sessionFactory.OpenSession())
{
    IEnumerable<Category> categories = session.Query<Category>().ToList(); // Returns empty list.
    // And so on...
}
I have a map for the Category table, but no matter what I put in there, I still get an empty list. Also, even though I use ShowSql(), I'm not seeing any NHibernate output in the VS (2010) output window.
I'm using TestDriven.NET (3.x) to run the code. No errors are thrown, and the Assert.NotEmpty (xUnit) on the returned collection fails (obviously).
I'm stuck: the code runs, just returns nothing, and I can't get any diagnostic info. I even tried getting NHibernate to write to log4net (TraceAppender), but again, nothing.
I'd appreciate any pointers, even if it's just a way of getting the thing to tell me what it's trying to do.
Turns out that one of the classes used in the mapping was marked "internal".
Can you try replacing the lower block of your code with this?
using (ISession session = _sessionFactory.OpenSession())
{
    IEnumerable<Category> categories =
        session.CreateCriteria(typeof(Category)).List<Category>();
}
At first glance I suspect the issue is your use of ToList() instead of List<T>(), but please let me know if this suggestion gets you past the immediate issue.
Check the Fluent NHibernate XML files. I faced the same problem, and eventually realized that my XML files were out of date.
