Use Relay cached data in a React Native app while fresh data is being fetched

I have a React Native app integrated with Relay and I want to deliver an offline-first experience for users.
So, on the first app launch a placeholder should be shown while data is being loaded. After that, every time the app is launched I want to show the last cached data while fresh data is loaded.
I found this issue from 2015 and, based on eyston's answer, I've tried to implement a CacheManager based on relay-cache-manager using AsyncStorage. With the CacheManager I can save and load Relay records from the cache, but when the network is disabled the app isn't able to show the cached data.
Is there any way to use Relay's cached data while Relay is fetching fresh data?

We have a production app which uses Relay and Realm DB for the offline experience. We took a different approach from CacheManager because CacheManager was not quite ready at that time. We used relay-local-schema for this.
We defined the entire schema required for mobile using relay-local-schema. This can be the same file your backend server uses to define its GraphQL schema, with the resolve functions changed to resolve data from Realm DB. For this we also created a schema in Realm DB with nearly the same structure as the GraphQL schema, to make it simple to write the data returned by the backend server into Realm DB. You can also automate generating this schema by using the GraphQL introspection query.
We defined a custom network layer where we made sure that all Relay queries always touch the local DB. In the sendQueries function, all queries are resolved with relay-local-schema, which resolves very quickly, so the React views show the old data; at the same time a network request is made for each request in the sendQueries function. When data is received from the network request, it is written to Realm DB and the Relay in-memory store is also populated with the new data, which automatically refreshes all the React views whose data changed. To write data to the Relay in-memory store we used the following undocumented method:
Relay.Store.getStoreData().handleQueryPayload(query, response);
You can get the query object from the request you receive in the sendQueries function using request.getQuery().
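Roughly, such a network layer can look like the sketch below. This is an illustrative reconstruction rather than our production code; the './localSchema' module (a graphql-js schema whose resolvers read from Realm), the endpoint URL, and the writeToRealm helper are placeholders:
// Sketch of a Relay Classic custom network layer: resolve queries locally
// first so views render cached data, then refresh from the network.
import Relay from 'react-relay';
import { graphql } from 'graphql';
import schema from './localSchema';        // placeholder: schema whose resolvers read Realm
import writeToRealm from './writeToRealm'; // placeholder: persists a payload to Realm

class OfflineFirstNetworkLayer {
  sendQueries(queryRequests) {
    return Promise.all(queryRequests.map(async (request) => {
      // 1. Resolve immediately against the local schema so views show old data.
      const local = await graphql(schema, request.getQueryString(), null, null, request.getVariables());
      request.resolve({ response: local.data });

      // 2. In parallel, send the real request to the GraphQL server.
      const res = await fetch('https://example.com/graphql', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query: request.getQueryString(), variables: request.getVariables() }),
      });
      const payload = await res.json();

      // 3. Persist fresh data to Realm and push it into Relay's in-memory store,
      //    which re-renders the views whose data changed.
      writeToRealm(payload.data);
      Relay.Store.getStoreData().handleQueryPayload(request.getQuery(), payload.data);
    }));
  }

  sendMutation(mutationRequest) {
    // Forward mutations to the server as usual (omitted here).
  }

  supports() {
    return false;
  }
}

Relay.injectNetworkLayer(new OfflineFirstNetworkLayer());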
Our current implementation is a bit tied up with our business logic, so it is difficult to open-source it. I'll try to provide a demo app if possible.

Related

Load data from the database once, keep it available all the time using Spring, and show it in JSP

I want to load dropdown data from the database once, set it inside a Java object, tie it to my view (JSP page), and have it available all the time for that particular controller or functionality using Spring MVC and JSP pages.
I don't want to load it on application startup, as ours is a big application and each functionality is independent.
It takes a lot of time to start the application if I load everything at startup.
Is there a way to do this using the Spring MVC pattern and JSP?
Could someone please let me know how to do it?
You have not mentioned how frequently you update the data or how frequently you fetch it, so I'll assume an average usage pattern.
Approach: create your own local cache / program cache implementation.
Instead of loading all the data from the database during startup, load only the master data which is common for all users. If the master data is also large, you can take a lazy-loading approach.
Load the data for a specific feature when it is requested for the first time and keep it in the local cache.
Whenever someone makes a change, update the cache and save the same data to the database, so you always have the latest data in the cache.
Advantages:
- Very useful for common or static master data.
- If you need business logic applied to some common data, you process it only once and keep the result in the cache.
- Fetching the data is very fast, as it doesn't involve a database request except for the first time.
Disadvantages:
- If you have a very high number of users and a very high update rate, updating the cache will slow down the update process, because the updates need to be applied sequentially.
I suggest using a combination of approaches to improve code quality and processing.
This sounds like a typical cache functionality.
Spring supports caching out of the box with @EnableCaching at application level and @Cacheable("cacheName") on the repository method retrieving your dropdown data. In your simple use case you don't even need an additional framework, as there is a CacheManager based on ConcurrentHashMap which simply caches forever.
With caching in place your controller can simply fetch the dropdown data from the repository. Caching ensures that only the first call really fetches from the database; the result is kept in memory for all subsequent calls.
If you ever need more sophisticated caching you only have to exchange the cache manager and configure the cache for your needs.

GraphQL Subscription With Hasura or Vanilla Websocket For Realtime Text Editing

I'm trying to build an app with realtime text editing and am stuck on how best to proceed architecturally. Currently I'm using GraphQL with Apollo and Hasura to fetch user profile information along with document metadata and content.
To save content, we are currently just debouncing GraphQL mutations of the entire document. The drawback is that a user can navigate away before the document content is written back. As a result, I want to move to an approach based on websockets.
For an app that involves realtime text editing, where on each keystroke we're sending a delta update, should I:
1. use GraphQL subscriptions with a custom resolver, since I'm already using Apollo and Hasura, or
2. use a vanilla, separate websocket for each document and avoid using GraphQL completely?
In both cases, instead of directly writing back to Postgres, we would use a Redis cache while the websocket connection is live, and then persist the cache content back to Postgres when the connection is closed. The former approach would take advantage of already having Apollo Client and authentication through Hasura, while the latter would avoid sending binary blobs (delta updates) over GraphQL but require more setup.
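Whichever option you pick, the Redis-buffering part could look roughly like this sketch (ws, ioredis and pg are assumed libraries, and the documents table and the mergeDeltas helper are placeholders, not part of your existing stack):
// Sketch: buffer per-document deltas in Redis while the socket is open,
// then persist the merged content to Postgres when the connection closes.
const WebSocket = require('ws');
const Redis = require('ioredis');
const { Pool } = require('pg');

const redis = new Redis();
const pg = new Pool();
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket, req) => {
  const docId = new URL(req.url, 'http://localhost').searchParams.get('doc');
  const key = `doc:${docId}:deltas`;

  socket.on('message', async (raw) => {
    // Append each delta to a Redis list and broadcast it to the other clients.
    await redis.rpush(key, raw.toString());
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(raw.toString());
      }
    }
  });

  socket.on('close', async () => {
    // Merge the buffered deltas (application-specific) and persist to Postgres.
    const deltas = await redis.lrange(key, 0, -1);
    const content = mergeDeltas(deltas); // placeholder merge function
    await pg.query('UPDATE documents SET content = $1 WHERE id = $2', [content, docId]);
    await redis.del(key);
  });
});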

Relay Modern Cache Vs Store

Reading through the Relay docs I am confused about the concept of Cache mentioned in the network layer (https://facebook.github.io/relay/docs/en/network-layer.html) vs the Relay Store (https://facebook.github.io/relay/docs/en/relay-store.html).
Are these two different caches? Which one does automatically get garbage collected by Relay?
The network layer is how you connect the client to your GraphQL server, while the store is created in your application to cache the data. The second link is more about how you update the store, for example using the updater function after running a mutation.
As you can see on the first link:
import { Network, RecordSource, Store } from 'relay-runtime';
// Create a network layer from the fetch function
const network = Network.create(fetchQuery);
// Create the store, backed by a fresh record source
const store = new Store(new RecordSource());
you are creating the network layer/store using the relay-runtime package.
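Note that the "cache" mentioned on the network-layer page usually refers to a response cache kept inside your fetch function, which is separate from the Store. A minimal sketch, assuming relay-runtime's QueryResponseCache and a placeholder endpoint:
// Sketch: a response cache inside the fetch function, distinct from the Store.
import { QueryResponseCache } from 'relay-runtime';

const responseCache = new QueryResponseCache({ size: 250, ttl: 60 * 1000 });

async function fetchQuery(operation, variables) {
  // Serve a recent identical query straight from the network-layer cache.
  const cached = responseCache.get(operation.text, variables);
  if (cached != null) {
    return cached;
  }
  const res = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: operation.text, variables }),
  });
  const json = await res.json();
  responseCache.set(operation.text, variables, json);
  return json;
}
The Store, by contrast, holds the normalized records, and it is the one Relay garbage-collects as queries are released; a response cache like the one above only expires entries by TTL.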
Hope it helps :)

How to implement Transport-Client for running a website which uses ElasticSearch as database?

Goal: Create a website which uses ElasticSearch to deliver its content.
Problem: There will be many users concurrently accessing the website.
Options:
1. Create and destroy a transport-client object for every request
2. Create a pool of transport-client objects which will be reused
3. Use a transport-client object as a singleton
According to the docs here, a Node-Client is not an option for this scenario.
Technical background if it makes any difference: The website will be using Play Framework with Java. There will be a fancy JS frontend and it is expected that there will be many tiny AJAX HTTP requests harassing ElasticSearch.
Use the Transport Client as a singleton.

Can Sync gateway views be pulled/replicated on client side?

I have this use case where I have created server-side views on Sync Gateway based on a rolling time window of 10 days. Is there a way to pull those directly on the device side?
When I look at the documentation, I see that there's no way these can be replicated directly and one needs to make REST calls:
http://developer.couchbase.com/documentation/mobile/1.2/develop/guides/sync-gateway/accessing-cb-views/index.html
Is that assumption correct?
The other approach I saw was to let all the data be replicated to the client side and then write Couchbase Lite views on the client side using map/reduce functions. Which of the two is the correct approach?
Yes, I believe that your assumption is correct - views have to be queried directly via the public REST API. I also believe that your alternative of syncing the data and then querying it on the client side will work.
In order to find the "correct approach" I would consider your app needs and deployment workflow:
Using views on the server will require:
- Managing (CRUD) the views in SG, similar to managing functions in a database. These would ideally be managed by some deployment / management code.
- Clients being able to call the public REST interface to access view results, which then requires a cache of their own to work offline (a sketch of such a call follows after these notes).
Slicing the data locally means that sync will bring down all the data and the device will have to perform the search / slice / aggregation previously carried out by the server. This will:
- Work offline.
- Put potential extra strain on the device.
I don't think that there are any easy answers here - ideally views would be synced to the device, but I don't know if that's even possible with the current SG implementation.
(Note 1: the views must be created in Sync Gateway via the admin REST interface, not through the Couchbase web interface.)
(Note 2: I'm a server-side programmer, so this view is tainted.)
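For reference, querying such a view from a client over the public REST port could look like the sketch below; the host, database, design document, view name and date-based keys are all assumptions following the CouchDB-style endpoint described in the docs linked above:
// Sketch: query a Sync Gateway view over the public REST port (4984).
// "mydb", "docs" (design doc) and "by_date" (view) are placeholder names.
async function fetchRollingWindow() {
  const end = new Date();
  const start = new Date(end.getTime() - 10 * 24 * 60 * 60 * 1000); // last 10 days

  const params = new URLSearchParams({
    startkey: JSON.stringify(start.toISOString()),
    endkey: JSON.stringify(end.toISOString()),
  });

  const res = await fetch(
    `http://sync-gateway.example.com:4984/mydb/_design/docs/_view/by_date?${params}`,
    { headers: { Authorization: 'Basic ' + btoa('username:password') } }
  );
  const { rows } = await res.json();
  return rows; // [{ id, key, value }, ...], like a CouchDB view response
}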
What I ended up doing was writing webhooks, which basically let me have the same docs replicated onto a Couchbase Server. Then I did all the needed aggregations and pushed the results back to Sync Gateway, which got replicated to the app.
It may or may not be the right approach, but it works for my case.
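A stripped-down sketch of that webhook flow (the Express app, the buildAggregate helper, the database name and the admin URL on port 4985 are all assumptions):
// Sketch: receive Sync Gateway document_changed webhook posts, aggregate,
// and write the aggregate doc back through the admin REST API so it syncs
// down to clients. Assumes Node 18+ for the global fetch.
const express = require('express');

const app = express();
app.use(express.json());

app.post('/sg-webhook', async (req, res) => {
  const doc = req.body; // the changed document posted by Sync Gateway
  const aggregate = buildAggregate(doc); // placeholder aggregation logic

  // Upsert the aggregate document via the admin port (4985); updating an
  // existing doc also needs its current _rev.
  await fetch(`http://localhost:4985/mydb/${aggregate._id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(aggregate),
  });

  res.sendStatus(200);
});

app.listen(3000);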
