I'm working with Apollo Client local state and cache and although I've gone through the docs (https://www.apollographql.com/docs/react/essentials/local-state), a couple of tutorials (for example, https://www.robinwieruch.de/react-apollo-link-state-tutorial/) and looked at some examples, I'm a bit befuddled. In addition to any insight you might be able to provide on the specific questions below, any links to good additional docs/resources to put things in context would be much appreciated.
In particular, I understand how to store local client side data and retrieve it, but I'm not seeing how things integrate with data retrieved from and sent back to the server.
Taking the simple 'todo app' as a starting point, I have a couple of questions.
1) If you download a set of data (in this case 'todos') from the server using a query, what is the relationship between the cached data and the server-side data? That is, I grab the data with a query and it's stored in the cache automatically. Now if I want to grab that data locally and, say, modify it (in this case, add a todo or modify one), how do I do that? I know how to do it for data I've created, but not data that I've downloaded, such as, in this case, my set of todos. For instance, some tutorials reference the __typename -- in the case of data downloaded from the server, what would this __typename be? And if I used readQuery to grab the data downloaded from the server and stored in the cache, what query would I use? The same one I used to download the data originally?
2) Once I've modified this local data (for instance, in the case of todos, setting one todo as 'completed'), and written it back to the cache with writeData, how does it get sent back to the server, so that the local copy and the remote copy are in sync? With a mutation? So I'm responsible for storing a copy to the local cache and sending it to the server in two separate operations?
3) As I understand it, unless you specify otherwise, if you make a query from Apollo Client, it will first check to see if the data you requested is in the cache, and otherwise it will call the server. Why, then, do you need to add an @client directive in the example code to get the todos? Because these were not downloaded from the server with a prior query, but are instead only local data?
const GET_TODOS = gql`
  {
    todos @client {
      id
      completed
      text
    }
    visibilityFilter @client
  }
`;
If they were in fact downloaded with an earlier query, can't you just use the same query that you used originally to get the data from the server, without putting @client, and if the data is in the cache, you'll get the cached data?
4) Lastly, I've read that Apollo Client will update things 'automagically' -- that is, if you send modified data to the server (say, in our case, a modified todo), Apollo Client will make sure that that piece of data is modified in the cache, referencing it by ID. Are there any rules as to when it does and when it doesn't? If Apollo Client is keeping things in sync with the server using IDs, when do we need to handle it 'manually', as above, and when not?
Thanks for any insights, and if you have links to other docs than those above, or a good tutorial, I'd be grateful.
The __typename is Apollo's built-in auto-magic way to track and cache results from queries. By default you can look up items in your cache by using the __typename and id of your items. You usually don't need to worry about __typename until you manually need to tweak the cache. For the most part, just re-run your server queries to pull from the cache after the original request. The server responses are cached by default, so the next time you run a query it will pull from the cache.
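For example, with the default InMemoryCache settings a downloaded todo can be read straight out of the cache by its __typename and id. A minimal sketch ("Todo" is whatever typename your server actually returns, and client is your ApolloClient instance):

import gql from 'graphql-tag';

// Default cache key = __typename + ':' + id, e.g. 'Todo:5'.
const cachedTodo = client.readFragment({
  id: 'Todo:5',
  fragment: gql`
    fragment TodoFields on Todo {
      id
      completed
      text
    }
  `,
});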
It depends on your situation, but most of the time if you set your IDs properly Apollo client will automatically sync up changes from a mutation. All you should need to do is return the id property and any changed fields in your mutation query and Apollo will update the cache auto-magically. So, in the case you are describing where you mark a todo as completed, you should probably just send the mutation to the server, then in the mutation response you request the completed field and the id. The client will automatically update.
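A hedged sketch of that flow (updateTodo is an assumed server-side mutation, not an Apollo API; the important part is asking for the id plus the changed field in the response):

import gql from 'graphql-tag';

const TOGGLE_TODO = gql`
  mutation ToggleTodo($id: ID!, $completed: Boolean!) {
    updateTodo(id: $id, completed: $completed) {
      id        # lets Apollo locate the normalized cache entry...
      completed # ...and merge the changed field into it automatically
    }
  }
`;

client.mutate({ mutation: TOGGLE_TODO, variables: { id: '5', completed: true } });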
You can use the original query. Apollo client essentially caches things using a query + variable -> results map. As long as you submit the same query with the same variables it will pull from the cache (unless you explicitly tell it not to).
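For instance (GET_TODOS_FROM_SERVER stands in for whatever query you originally used; the fetchPolicy values are Apollo's real options):

// First call goes to the network, second is served from the cache.
client.query({ query: GET_TODOS_FROM_SERVER, variables: { userId: '1' } });
client.query({ query: GET_TODOS_FROM_SERVER, variables: { userId: '1' } });

// Explicitly bypass the cache when you want fresh data.
client.query({
  query: GET_TODOS_FROM_SERVER,
  variables: { userId: '1' },
  fetchPolicy: 'network-only',
});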
See my answer to #2 above, but Apollo client will handle it for you as long as you include the id and any modified data in your mutation. It won't handle it for you if you add new data, such as adding a todo to a list. Same for removing data.
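A sketch of the manual case (ADD_TODO and the addTodo response field are assumptions; readQuery/writeQuery are the real cache API):

client.mutate({
  mutation: ADD_TODO,
  variables: { text: 'Buy milk' },
  update(cache, { data: { addTodo } }) {
    // Appending to a list is NOT automatic: read the cached list,
    // append the new item, and write the list back.
    const { todos } = cache.readQuery({ query: GET_TODOS });
    cache.writeQuery({
      query: GET_TODOS,
      data: { todos: todos.concat([addTodo]) },
    });
  },
});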
In Apollo's GraphQL version, there are fetch policies that specify whether a fetch query should obtain data from server or use the local cache (if any data is available).
In addition, cache normalization allows usage of the cache to cut down on the amount of data that needs to be obtained from the server. For example, if I am requesting object A and object B, but earlier I had requested A and C, then in my current query it will get A from cache, and get B from server.
However, these specify cache policies for the entire query. I want to know if there is a method for specifying TTLs on individual fields.
From a developer standpoint, I want to be able to specify in my query that I want to go to cache for some information that I am requesting, but not others. For example, take the below query:
query PersonInfo($id: String) {
  person(id: $id) {
    birthCertificate # once this is cached, it is cached forever; always read it from the cache if available
    age              # should have a TTL of a day before the cached value is invalidated and we go to network
    legalName        # always go to the network for this field
  }
}
In other words, for a fixed id value (and assuming this is the only query that touches the person object or its fields):
the first time I make this query, I get all three fields from the server.
now if I make this query again within a few seconds, I should only get the third field (legalName) from the server, and the first two from the cache.
now, if I then wait more than a day, and then make this query again, I get birthCertificate from the cache, and age + legalName from the server.
Currently, to do this the way I would want to, I end up writing three different queries, one for each TTL. Is there a better way?
Update: there is some progress on cache timing done on the iOS client (https://github.com/apollographql/apollo-ios/issues/142), but nothing specifically on this?
It would be a nice feature, but AFAIK (as of now, for the js/react client, and probably the same for iOS):
there is no query normalization, only cache normalization
if any requested field is missing from the cache, the entire query is fetched from the network
no timestamps are stored in the (normalized) cache entries, per query or per type
For now, the (only?) solution is to store timestamps for each query/response in local state (e.g. in onCompleted) and use them to invalidate/evict entries before fetching. This could probably be automated, e.g. by starting timers within a field policy function.
You can fetch person data once at session start, just after login; any following, more granular person(id: $id) { birthCertificate } query (e.g. in a React subcomponent) can have its own 'cache-only' policy. If you need an always-fresh legalName, fetch it (separately or not) with a 'network-only' policy.
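A sketch of that split in a React component (the three query constants are hypothetical; the fetchPolicy names are real):

import { useQuery } from '@apollo/client';

// Warm the cache once, just after login (default policy is 'cache-first').
useQuery(GET_PERSON, { variables: { id } });

// Granular follow-ups choose a policy per field group.
useQuery(GET_BIRTH_CERTIFICATE, { variables: { id }, fetchPolicy: 'cache-only' });
useQuery(GET_LEGAL_NAME, { variables: { id }, fetchPolicy: 'network-only' });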
I am wondering if there is a way to expire cached items after a certain time period, e.g., 24 hours.
I know that Apollo Client v3 provides methods such as cache.evict and cache.gc, which are a good start and which I am already using; however, I want a way to delete cache items after a given time period.
What I am doing at the minute is adding a TimeToLive field to every object in my Apollo schema; when the backend returns an object, the field is populated with the current time + 24 hours (i.e. the time 24 hours from now). Then when I query the data on the front end, I check whether the TimeToLive field of the returned data is in the future (if not, the data was definitely retrieved from the cache), and if it has expired I call the refetch function, which forces the query to fetch the data from the server. However, this doesn't seem like the best way to do things, mainly because I have to iterate over every result in the returned data and check whether any of the returned objects have expired; and if so, everything is refetched.
Another solution I thought of was to use something like React Native Queue and have a background task that periodically checks the cache and deletes items that have expired. But again, I am not totally sold on this solution.
For a little bit of context here: I am building a cooking/recipes app, and recipes/posts are cached on the device. My concern is that a user could delete a post, but everyone else who has that post cached would still be able to see it; by expiring the cached item, at least they would only be able to see it for a number of hours before it is removed. There might be a better way to do this altogether, i.e. have the server contact clients that have the item cached (though I couldn't think of any low-lift solutions at the time of writing this).
apollo-invalidation-policies replaces the Apollo Client InMemoryCache with InvalidationPolicyCache, and within the typePolicies you can specify a timeToLive field. If an object is accessed beyond its TTL, it is evicted and no data is returned.
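A minimal sketch, assuming the package's documented shape (confirm the import path and option names against its README):

import { InvalidationPolicyCache } from 'apollo-invalidation-policies';

const cache = new InvalidationPolicyCache({
  invalidationPolicies: {
    types: {
      Recipe: {
        // 24 hours in ms; objects read past their TTL are evicted instead of returned.
        timeToLive: 24 * 60 * 60 * 1000,
      },
    },
  },
});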
In Meteor, I have a little confusion between Session and Local Collection.
I know that Session is a temporary reactive key-value store, client-side only, and is cleaned on page refresh.
Local collection seems to be the same: reactive, temporary client-side storage, cleaned on page refresh, but with more flexible functions like insert, update & remove queries, like a server-side Mongo collection.
So I guess I could manage everything in Local Collection without Session, or, everything in Session without Local Collection.
But what is the best and efficient way to use Session and/or Local collection?
Simply, when to use Session and not use it?
And when to use Local collection and when not use it?
As I read your question I told myself that this is a very easy question, but then I found myself scratching my head. I tried to figure out an example that you could accomplish only with Session or only with collections, but I didn't find any use-case. So let's roll things up from the beginning. Basically you already answered the question on your own, because it is the little sugar that makes collections something special.
When to use a collection?
Basically a collection is a database artifact. Imagine you have a client-server application. All the data is persisted in the server-side storage. Now you can use a local collection to provide the user a small subset of the server's collection. So a client collection is a database with a reduced amount of data. The advantage is that you can access the collection with queries: you can use the same queries on server and client. In addition, a collection always contains multiple objects of the same type. Sometimes you produce data on the client, for the client, with no server interaction needed. Then you can use a local collection. A local collection provides the same functionality as a normal collection without server communication. This should be used if you have multiple objects with the same structure, and especially if you'd like to use query operators.
You can also save data inside a session object, and session objects can contain multiple objects as well. But imagine you want to find an object in an object array indexed by a special id. Then you need to iterate through the whole array in order to find this object. You have to write additional logic that collections handle like magic. Further, collections return cursors. A cursor is a reactive object that only changes if the selected data changes. That means if you use find with an id, the dependent view only re-renders when the object with that id changes. With Session you can't do that: when a session variable changes, you need to re-render all depending objects.
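A minimal sketch of the difference:

import { Mongo } from 'meteor/mongo';
import { Session } from 'meteor/session';

// Local (client-only) collection: pass null as the name.
const Drafts = new Mongo.Collection(null);
Drafts.insert({ _id: 'a1', title: 'My draft', done: false });

// Fine-grained reactivity: only re-runs when this one document changes.
const draft = Drafts.findOne('a1');

// Session: a flat reactive key-value store; any change to the key
// invalidates every computation that read it.
Session.set('selectedDraftId', 'a1');
const selectedId = Session.get('selectedDraftId');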
When to use a session?
For everything else. Sessions are often just small objects that contain some configuration logic. A session variable is basically just one object, not multiple occurrences of equal objects. I don't have time now to go into detail, but if your data does not fit the collection use-cases, you can use Session.
Have a look at this post that describes why sessions should not be overused.
I assume that by local collection you mean: new Mongo.Collection(null)
The difference is that local collections do not survive hot code pushes. A refresh will erase Session, but a hot code push will not: there's special code in Meteor to persist the values of Session variables across a hot code push.
You would use Session whenever you're storing temporary values that do NOT need to be persisted to the database.
Trivial examples could include a user's selection of filters, or the item in an index view that is currently selected.
Manipulated data in minimongo (insert, update, delete, etc.) is intended to be sent back to the server and stored in the database. For example, this could be updating a user's profile information.
I am working on an MVC3 and Razor website. The user has to select their way through a few choices before finally working on the data.
For example:
Client List -> Version List (Filtered by client) -> Etc (Filtered by version)
Once a user selects a client, they select a version for that client. So I'm passing the client id on the querystring, and for each action of the version controller I'm passing around the client id. On views where I want to show the client name, I'm querying the database for the client and stuffing it into the ViewBag. This seems very inefficient. I feel like I could use a cookie to hold the client id & name.
Now that I've got my version controller done, I'm facing the same pattern again with each subsequent controller, but now I need to persist both client and version...
What is a preferred approach for persisting information like this across requests?
This seems very inefficient
That's what databases are made and optimized for => querying data based on fields, and if you put indexes on those fields it will be screamingly fast. Of course Session, cookies and caching are some common techniques that you could employ to limit the number of queries to the database, but you will have to accept the possible staleness of data that you get this way (if some other thread/process modified the data in the database, you no longer get correct results).
So before doing any premature optimizations, here's what I would recommend: hammer your database until you discover that this is actually a bottleneck for your application. Databases might become a bottleneck in some very high traffic applications, where you should resort to one of the aforementioned techniques (or in some poorly written applications, of course, but let's exclude this possibility for the moment).
You should use TempData, which allows you to pass data between the current and next HTTP requests. Be sure to keep in mind that it uses the session.
Greg Shackles has a great article all about TempData here
see this similar question MVC3 multi step form - How to persist model object
I need to synchronize my relational database (Oracle or MySQL) to CouchDB. Does anyone have any idea how this is possible? If it's possible, how can we notify CouchDB of any changes that happen on the relational DB?
Thanks in advance.
First of all, you need to change the way you think about database modeling. Synchronizing to CouchDB is not just creating documents of all your tables, and pushing them to Couch.
I'm using CouchDB for a site in production, I'll describe what I did, maybe it will help you:
From the start, we have been using MySQL as our primary database. I had entities mapped out, including their relations. In an attempt to speed up the front-end I decided to use CouchDB as a content repository. The benefit was to have fully prepared documents that contained all the relational data, so data could be fetched with much less overhead.
Because the documents can contain related entities - say a question document that contains all answers - I first decided what top-level entities I wanted to push to Couch. In my example, only questions would be pushed to Couch, and those documents would contain the answers, and possibly some metadata, such as tags, user info, etc. When requesting a question on the front-end, I would only need to fetch one document to have all the information I need at that point.
Now for your second question: how to notify CouchDB of changes. In our case, all the changes in our data are done using a CMS. I have a single point in my code which all edit actions call. That's the place where I hooked in a function that persisted the object being saved to CouchDB. The function determines if this object needs persisting (ie: is it a top level entity), then creates a document of this object (think about some sort of toArray function), and fetches all its relations, recursively. The complete document is then pushed to CouchDB.
Now, in your case, the variables here may be completely different, but the basic idea is the same: figure out what documents you want saved, and how they look like. Then write a function that composes these documents and make sure this is called when changes are made to your relational database.
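A hedged sketch of that single persist point (COUCH_URL, isTopLevelEntity and toDocument are placeholders for your own model layer, not a library API):

async function persistToCouch(entity) {
  if (!isTopLevelEntity(entity)) return; // e.g. only questions, not answers

  const doc = toDocument(entity); // entity + relations, recursively
  doc._id = `question:${entity.id}`;

  // Fetch the current revision (if any) so the PUT does not conflict.
  const existing = await fetch(`${COUCH_URL}/${doc._id}`);
  if (existing.ok) {
    doc._rev = (await existing.json())._rev;
  }

  await fetch(`${COUCH_URL}/${doc._id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  });
}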
Notifying CouchDB of a change
CouchDB is very simple. Probably the easiest thing is directly updating an existing document. Two ways to implement this come to mind:
The easiest way is a normal CouchDB update: Fetch the current document by id; modify it; then send it back to Couch with HTTP PUT or POST.
If you have clear application-specific changes (e.g. "the views value was incremented") then writing an _update function seems prudent. Update functions are very simple: they receive a document and the HTTP request; they modify the document; and then CouchDB stores the new version. You write update functions in JavaScript and they run on the server. It is a great way to "compress" common actions into simpler (and fewer) HTTP requests.
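A minimal example of an update handler (this follows CouchDB's function(doc, req) signature; it would live in a design document, e.g. /_design/app/_update/increment_views/<docid>):

function (doc, req) {
  if (!doc) {
    return [null, 'missing']; // no document with that id: store nothing
  }
  doc.views = (doc.views || 0) + 1; // the application-specific change
  return [doc, 'ok'];               // CouchDB saves doc; 'ok' is the HTTP response body
}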