Apollo Client subscribeToMore and updateQuery behaviour for updated vs new records - apollo-client

It looks like subscribeToMore's updateQuery callback behaves differently depending on whether the inbound record already exists in the client cache. For new records (as in every example on the web), you are required to return a new version of the cached result based on previous and subscriptionData. Great for manually merging new records into the local cache.
BUT, if the inbound record already exists (by id), it is merged instantly and there is nothing the updateQuery callback can do about it. updateQuery is still called, but by then the new data has already been merged into the previous parameter.
Ideally, I’d like to modify inbound records in the case of insert or update. Any ideas?
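For context, the pattern I mean looks roughly like this (a minimal sketch only; the subscription document and field names are placeholders, not my actual schema):
// subscribeToMore comes from the watchQuery / Query result for the list query.
subscribeToMore({
  document: ITEM_CHANGED_SUBSCRIPTION,                  // placeholder subscription document
  updateQuery: (prev, { subscriptionData }) => {
    if (!subscriptionData.data) return prev;
    const incoming = subscriptionData.data.itemChanged; // placeholder payload field
    const exists = prev.items.some(item => item.id === incoming.id);
    if (exists) {
      // For an existing id the normalized cache has already merged the record
      // by the time this runs, so prev already contains the new values.
      return prev;
    }
    // For a brand new record the manual merge below is required.
    return { ...prev, items: [incoming, ...prev.items] };
  },
});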

Related

GraphQL: Are cursors part of a read transaction? Polling for new data

I am using GraphQL to download the entire dataset from an API.
Pagination is implemented using the first and after arguments.
If I order the items by create date descending, and my code polls with after set to the cursor of the last item it received, will it see new items as they are created?
Or is the cursor like an actual database cursor, where you are iterating over a fixed result set (inside an RDBMS read transaction, which is effectively a snapshot of the result set)?
Note: The API does not implement subscription so I cannot subscribe to events.
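For concreteness, the polling I have in mind looks roughly like this (a sketch only; the endpoint, query and field names are made up):
// Sketch of the polling described above; endpoint, query and field names are illustrative.
const ITEMS_QUERY = `
  query Items($after: String) {
    items(first: 100, after: $after, orderBy: CREATED_DESC) {
      edges { cursor node { id createdAt } }
      pageInfo { endCursor hasNextPage }
    }
  }`;

async function pollOnce(after?: string) {
  const res = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: ITEMS_QUERY, variables: { after } }),
  });
  const { data } = await res.json();
  // The open question: if the next poll passes pageInfo.endCursor as `after`,
  // does it ever see rows created after that cursor was issued?
  return data.items;
}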

Looking for help understanding Apollo Client local state and cache

I'm working with Apollo Client local state and cache and although I've gone through the docs (https://www.apollographql.com/docs/react/essentials/local-state), a couple of tutorials (for example, https://www.robinwieruch.de/react-apollo-link-state-tutorial/) and looked at some examples, I'm a bit befuddled. In addition to any insight you might be able to provide on the specific questions below, any links to good additional docs/resources to put things in context would be much appreciated.
In particular, I understand how to store local client side data and retrieve it, but I'm not seeing how things integrate with data retrieved from and sent back to the server.
Taking the simple 'todo app' as a starting point, I have a couple of questions.
1) If you download a set of data (in this case 'todos') from the server using a query, what is the relationship between the cached data and the server-side data? That is, I grab the data with a query and it's stored in the cache automatically. Now if I want to grab that data locally and, say, modify it (in this case, add a todo or change one), how do I do that? I know how to do it for data I've created, but not for data that I've downloaded, such as, in this case, my set of todos. For instance, some tutorials reference the __typename -- in the case of data downloaded from the server, what would this __typename be? And if I used readQuery to grab the data downloaded from the server and stored in the cache, what query would I use? The same one I used to download the data originally?
2) Once I've modified this local data (for instance, in the case of todos, setting one todo as 'completed'), and written it back to the cache with writeData, how does it get sent back to the server, so that the local copy and the remote copy are in sync? With a mutation? So I'm responsible for storing a copy to the local cache and sending it to the server in two separate operations?
3) As I understand it, unless you specify otherwise, if you make a query from Apollo Client, it will first check to see if the data you requested is in the cache, and otherwise it will call the server. Why, then, do you need to add @client in the example code to grab the todos? Because these were not downloaded from the server with a prior query, but are instead only local data?
const GET_TODOS = gql`
  {
    todos @client {
      id
      completed
      text
    }
    visibilityFilter @client
  }
`;
If they were in fact downloaded with an earlier query, can't you just use the same query that you used originally to get the data from the server, without @client, and if the data is in the cache, you'll get the cached data?
4) Lastly, I've read that Apollo Client will update things 'automagically' -- that is, if you send modified data to the server (say, in our case, a modified todo), Apollo Client will make sure that that piece of data is modified in the cache, referencing it by ID. Are there any rules as to when it does and when it doesn't? If Apollo Client is keeping things in sync with the server using IDs, when do we need to handle it 'manually', as above, and when not?
Thanks for any insights, and if you have links to other docs than those above, or a good tutorial, I'd be grateful.
The __typename is Apollo's built-in auto-magic way to track and cache results from queries. By default you can look up items in your cache by using the __typename and id of your items. You usually don't need to worry about __typename until you manually need to tweak the cache. For the most part, just re-run your server queries to pull from the cache after the original request. The server responses are cached by default, so the next time you run a query it will pull from the cache.
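For example, something along these lines reads the earlier server result straight from the cache (GET_TODOS here is the plain server query, without the @client directives from your snippet, and client is assumed to be your existing ApolloClient instance):
import gql from 'graphql-tag';

const GET_TODOS = gql`
  {
    todos {
      id
      completed
      text
    }
  }
`;

// Re-running the query uses the default cache-first policy and resolves from the cache;
// readQuery reads the same cached result synchronously (it throws if it isn't cached yet).
const cached = client.readQuery({ query: GET_TODOS });
console.log(cached.todos); // the todos exactly as the server returned them, __typename included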
It depends on your situation, but most of the time if you set your IDs properly Apollo client will automatically sync up changes from a mutation. All you should need to do is return the id property and any changed fields in your mutation query and Apollo will update the cache auto-magically. So, in the case you are describing where you mark a todo as completed, you should probably just send the mutation to the server, then in the mutation response you request the completed field and the id. The client will automatically update.
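Roughly like this (toggleTodo is a made-up mutation name and client is your ApolloClient instance; the important part is asking for id and completed back in the response):
import gql from 'graphql-tag';

const TOGGLE_TODO = gql`
  mutation ToggleTodo($id: ID!) {
    toggleTodo(id: $id) {
      id
      completed
    }
  }
`;

// Because the response contains the id (plus __typename, added automatically) and the
// changed field, Apollo can find the normalized cache entry and update it in place.
client.mutate({ mutation: TOGGLE_TODO, variables: { id: '42' } });
// No manual cache work needed: the todo with id 42 is updated everywhere it is rendered.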
You can use the original query. Apollo client essentially caches things using a query + variable -> results map. As long as you submit the same query with the same variables it will pull from the cache (unless you explicitly tell it not to).
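For instance, reusing the GET_TODOS from the sketch above:
// Same query + same variables is served from the cache by default (cache-first):
client.query({ query: GET_TODOS }).then(({ data }) => console.log(data.todos));

// To deliberately skip the cache and hit the server again:
client.query({ query: GET_TODOS, fetchPolicy: 'network-only' });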
See my answer to #2 above, but Apollo client will handle it for you as long as you include the id and any modified data in your mutation. It won't handle it for you if you add new data, such as adding a todo to a list. Same for removing data.
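When you do add or remove items, the usual approach is an update callback on the mutation, something like this (ADD_TODO and its addTodo payload are placeholders):
// Manual cache update for a newly created item, which Apollo cannot infer on its own.
client.mutate({
  mutation: ADD_TODO,                        // placeholder "add todo" mutation
  variables: { text: 'buy milk' },
  update: (cache, { data: { addTodo } }) => {
    const existing = cache.readQuery({ query: GET_TODOS });
    cache.writeQuery({
      query: GET_TODOS,
      data: { todos: [...existing.todos, addTodo] },
    });
  },
});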

Dynamics CRM Plugin can't retrieve records created earlier in the pipeline

I have a chain of synchronous events that take place.
1) a custom control calls an action
2) the action creates a couple of records
3) the action then triggers a plugin, which tries to retrieve the records that were created in step 2, but the query returns nothing
I suspect this is happening because all the events are in the same transaction and therefore the records they create are not yet committed to the database. Is this correct?
Is there an easy way to retrieve records that were created earlier in the pipeline, or am I stuck having to stuff the OutputParameter objects into SharedVariables?

Object from a different session - saveOrUpdate in another session?

In the application that I'm working on, I have a scenario:
The UI updates some data that was already retrieved from the database and sends an object back to the server (say, an EntityTag.java). The server retrieves the object again from the DB (say, Entity.java, which is the Hibernate-mapped entity) and copies all the values from the EntityTag into the Entity.java object.
Now, using some service, it tries to save the updated Entity.java object. This is done with Spring declarative transactions, so I'm assuming that a new transaction is started for this service.
What I was hoping to see in this service method is a session.merge(), because we updated an object that was detached, but here they use saveOrUpdate on the Entity.java object. I see that the object is updated in the table without any issues. This seems weird to me; until now I've been thinking that merge will merge the object into the session and later I can commit the changes.
In what cases does saveOrUpdate work without issues?
See this answer: saveOrUpdate works well as long as there is never a risk that an object with the same database identifier is already associated with the session where saveOrUpdate is being called.
If an object with the same database id is already in the session, saveOrUpdate throws an error (NonUniqueObjectException), while merge copies the state of the detached object onto the attached one and returns a reference to the attached object.
Hibernate used to have a method called saveOrUpdateCopy that provided the same behaviour as the current merge, which got standardized as merge in JPA.
Although saveOrUpdate is still available and used in many tutorials, you are better off always using merge when dealing with detached objects, as it's hard and error-prone to try to guess whether an object is already in the session.

Updating Solr Index when product data has changed

We are working on implementing Solr on an e-commerce site. The site is continuously updated with new data, either through updates to existing product information or by adding new products altogether.
We are using it in an ASP.NET MVC 3 application with SolrNet.
We are facing an issue with indexing. We are currently doing the commit as follows:
private static ISolrOperations<ProductSolr> solrWorker;

public void ProductIndex()
{
    // Check connection instance invoked or not
    if (solrWorker == null)
    {
        Startup.Init<ProductSolr>("http://localhost:8983/solr/");
        solrWorker = ServiceLocator.Current.GetInstance<ISolrOperations<ProductSolr>>();
    }
    var products = GetProductIdandName();
    solrWorker.Add(products);
    solrWorker.Commit();
}
This is just a simple test application where we insert only the product name and id into the Solr index. Every time it runs, the new products get updated all at once and are available when we search. I think this creates a new data index in Solr every time it runs? Correct me if I'm wrong.
My questions are:
Does this recreate the Solr index data as a whole, or does it only update the data that has changed or is new? How? Even if it only updates changed/new data, how does it know which data has changed? With a large data set, this must have some issues.
What is the alternative way to track what has changed since the last commit, and is there a way to add only the products that have changed to the Solr index?
What happens when we update an existing record in Solr? Does it delete the old data, insert the new, and recreate the whole index? Is that resource intensive?
How do big e-commerce retailers with millions of products do this?
What is the best strategy to solve this problem?
When you do an update, only that record is deleted and re-inserted; Solr does not update records in place. The other records are untouched. When you commit the data, new segments are created with the new data. On optimize, the segments are merged into a single segment.
You can use an incremental build technique to add/update records since the last build. DIH (the DataImportHandler) provides this out of the box; if you are handling it manually through jobs, you can maintain a timestamp and run delta builds.
Solr does not have an update operation. It performs a delete and an add, so you have to send the complete document again and not just the updated fields. It's not resource intensive; usually only commit and optimize are.
Solr can handle any amount of data. You can use sharding if your data grows beyond the handling capacity of a single machine.
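To make the timestamp idea a bit more concrete, here is a very rough sketch (not SolrNet; it just posts the changed documents to Solr's JSON update handler, and the core name and fields are made up):
// Rough sketch: push only the products modified since the last build to Solr,
// then record the new build time. changedProducts would come from your own
// "modified since <timestamp>" query against the product database.
async function incrementalBuild(changedProducts: Array<{ id: string; name: string }>) {
  if (changedProducts.length === 0) return;
  // /update accepts a JSON array of documents (/update/json on older Solr versions);
  // documents whose id already exists are deleted and re-added by Solr.
  await fetch('http://localhost:8983/solr/products/update?commit=true', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(changedProducts),
  });
}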
