Kendo Grid offline read with fallback to server read when online - kendo-ui

In my app, I want to load data for my user whether they are offline or online. If the app is offline, it should pull the most recent data from IndexedDB, and if it is online it should fetch from the server URL. How can we achieve this fallback mechanism?
I am looking for an approach where I load the grid using cached data from IndexedDB and then overwrite it with server-side data, so the user always has some data regardless of whether they are online or offline.
Is it possible to design this approach?
Please suggest.
Thanks in advance.
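Roughly, I imagine the cache-then-network behaviour would be sketched with a custom transport.read along these lines (readFromIndexedDb/saveToIndexedDb are hypothetical helpers wrapping IndexedDB, and /api/griddata is a placeholder URL, not a Kendo API):

// Sketch only. readFromIndexedDb()/saveToIndexedDb() are assumed helpers
// that wrap IndexedDB and return Promises; "/api/griddata" is a placeholder.
var dataSource = new kendo.data.DataSource({
  transport: {
    read: function (options) {
      readFromIndexedDb().then(function (cached) {
        // 1. Answer immediately with cached rows so the grid is never empty.
        if (cached && cached.length) {
          options.success(cached);
        }
        if (navigator.onLine) {
          // 2. Online: fetch fresh data, then overwrite the cache and the grid.
          fetch("/api/griddata")
            .then(function (res) { return res.json(); })
            .then(function (fresh) {
              saveToIndexedDb(fresh);
              if (cached && cached.length) {
                dataSource.data(fresh); // replace the cached rows shown earlier
              } else {
                options.success(fresh);
              }
            })
            .catch(function () {
              // Network failed mid-request: keep showing cached data, if any.
              if (!cached || !cached.length) { options.error({}); }
            });
        } else if (!cached || !cached.length) {
          // 3. Offline and nothing cached: report an error to the grid.
          options.error({});
        }
      });
    }
  }
});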

Related

Load data from the database once, keep it available at all times using Spring, and show it in JSP

I want to load dropdown data from the database once, set it inside a Java object, tie it to my view (a JSP page), and have it available at all times for that particular controller or functionality, using Spring MVC and JSP pages.
I don't want to load it on application startup, as ours is a big application and each functionality is independent.
It takes a lot of time to start the application if I load everything at startup.
Is there a way to do this using the Spring MVC pattern and JSP?
Could someone please let me know how to do it?
You have not mentioned how frequently you perform the database operation or how frequently you fetch the data, so I'll assume an average user.
Approach: create your own local/in-program cache implementation.
Instead of loading all the data from the database during startup, load only the master data that is common to everyone. If the master data is also large, you can take a lazy-loading approach:
Load the data for a specific feature when it is requested for the first time, and keep it in the local cache.
Whenever someone makes a change, update the data in the cache and save it to the database, so you always have the latest data in the cache.
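A minimal sketch of this lazy, write-through cache (in JavaScript for brevity; the same idea carries over to a Java/Spring service, and fetchFeatureData/saveFeatureData stand in for your real data-access layer):

// Sketch of a lazy write-through cache. fetchFeatureData()/saveFeatureData()
// are hypothetical placeholders for your real data-access layer.
const cache = new Map();

async function getFeatureData(featureKey) {
  // Hit the database only on the first request, then serve from the cache.
  if (!cache.has(featureKey)) {
    cache.set(featureKey, await fetchFeatureData(featureKey));
  }
  return cache.get(featureKey);
}

async function updateFeatureData(featureKey, newData) {
  // Write-through: update the cache and persist, so the cache stays current.
  cache.set(featureKey, newData);
  await saveFeatureData(featureKey, newData);
}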
Advantages:
- Very useful for common or static master data.
- If you need substantial business logic for some common data, you process it only once and keep the result cached.
- Fetching the data is very fast, as it doesn't involve a database request except for the first time.
Disadvantage:
- If you have a very high number of users and a very high volume of update operations, updating the cache will delay the update process, as you need to update it sequentially.
I suggest using a combination of approaches to improve code quality and processing.
This sounds like typical cache functionality.
Spring supports caching out of the box via @EnableCaching at the application level and @Cacheable("cachename") on the repository method that retrieves your dropdown data. In your simple use case you don't even need an additional framework, as there is a CacheManager based on ConcurrentHashMap which simply caches forever.
With caching in place, your controller can simply fetch the dropdown data from the repository. Caching ensures that only the first call actually fetches from the database and keeps the result in memory for all subsequent calls.
If you ever need more sophisticated caching, you only have to swap the cache manager and configure the cache for your needs.

How do the CM and CD servers communicate in Sitecore?

I am new to Sitecore and trying to understand its architecture/design. I'm curious how the intranet (CM) and internet (CD) servers communicate, and how data flows between these two layers in on-prem and AWS EC2 environments. I have searched the web extensively and couldn't find an appropriate explanation.
I'd really appreciate it if anyone could help me understand.
When you publish from CM, it puts a record in the EventQueue table in the web database.
All CD servers poll the EventQueue table for updates and proceed accordingly.
By default, this polling happens every 2 seconds.
In short, they communicate via events in the database(s). Note: this is very simplified, but seeing it this way helped me understand how the events work and troubleshoot issues.
For example, when publishing an item, the publisher (running on CM or on a dedicated role) reads its data from the master database and writes it to the web database. When done, it raises an event by writing a row to the EventQueue table in the web database. The CD server(s) pick up this event and clear their corresponding caches etc., causing a reload of that data from the web database.
All Sitecore databases have an EventQueue table, and events go to the table in different databases depending on the type of event. An event is basically just a class name and a set of serialized data. Events can be raised "locally" or "globally", indicating whether several instances should pick up the event. Think of a scenario where you have two CD servers sharing one web database: both CDs would have to pick up the event.
To keep track of which events have been processed, an "EQSTAMP" value is stored in the Properties table. It's named [database]_EQSTAMP_[InstanceName], so it's essential that no two Sitecore instances share the same instance name. If it is not set, Sitecore will build an instance name by combining the hostname and the IIS site name. The decimal Value of this timestamp corresponds to the hexadecimal Stamp column in the EventQueue table.
Normally, you should never have to play with these tables yourself, but I find it good to have some insight into how they work and to keep an eye on them. They can grow in size and cause issues. The CleanupEventQueue scheduled task is responsible for removing old processed events from the EventQueue tables. You may want to adjust the scheduling of this agent if your EventQueue grows too large between cleanups.
Note: This is the most common way of communication between the servers. Later versions of Sitecore have other techniques as well, such as Rebus.
The "Event Queues. Why? How? When?" article explains this in detail; it also describes the pitfalls of using this mechanism in real life.
Please also be aware that the Sitecore.Link project is a good place to gain more knowledge about Sitecore functionality.
It aggregates Sitecore knowledge from around the web.
Thanks.

Use Relay cached data in a React Native app while fresh data is being fetched

I have a React Native app integrated with Relay, and I want to deliver an offline-first experience for users.
So, on the first app launch, a placeholder should be shown while data is being loaded. After that, every time the app is launched, I want to show the last cached data while fresh data is loaded.
I found this issue from 2015, and based on eyston's answer I've tried to implement a CacheManager based on relay-cache-manager using AsyncStorage. With the CacheManager I can save and load Relay records from the cache, but when the network is disabled the app isn't able to show the cached data.
Is there any way to use Relay's cached data while Relay is fetching fresh data?
We have a production app that uses Relay and RealmDB for the offline experience. We took a different approach from CacheManager because CacheManager was not quite ready at that time. We used relay-local-schema instead.
We defined the entire schema required for mobile using relay-local-schema. This could be the same file your backend server uses to define its GraphQL schema, with the resolve functions changed to resolve data from the Realm DB. For this we also created a schema in Realm DB with nearly the same structure as the GraphQL schema, to simplify writing the data returned by the backend server to Realm DB. You can also automate generating this schema by using the GraphQL introspection query.
We defined a custom network layer to make sure that all Relay queries always touch the local DB. In the sendQueries function, every query is resolved against relay-local-schema, which resolves very fast, so the React views show the old data; at the same time, a network request is made for each request in sendQueries. When data arrives from the network, it is written to Realm DB, and the Relay in-memory store is also populated with the new data, which automatically refreshes all React views whose data changed. To write data to the Relay in-memory store we used the following undocumented method:
Relay.Store.getStoreData().handleQueryPayload(query, response);
You can get the query object from the request you receive in the sendQueries function using request.getQuery().
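A stripped-down sketch of such a network layer might look like the following (classic Relay; localSchema is the relay-local-schema/GraphQL schema whose resolvers read from Realm, and fetchFromServer/writeToRealm are hypothetical helpers for the real network call and Realm persistence):

// Sketch of an offline-first custom network layer for classic Relay.
// localSchema, fetchFromServer() and writeToRealm() are assumed helpers.
import Relay from 'react-relay';
import { graphql } from 'graphql';

const offlineFirstNetworkLayer = {
  sendQueries(queryRequests) {
    return Promise.all(queryRequests.map(request => {
      // 1. Resolve immediately against the local schema (backed by Realm DB),
      //    so views render the cached data right away.
      const localResult = graphql(
        localSchema,
        request.getQueryString(),
        null,
        null,
        request.getVariables()
      ).then(result => request.resolve({ response: result.data }));

      // 2. In parallel, fetch fresh data from the server.
      fetchFromServer(request.getQueryString(), request.getVariables())
        .then(response => {
          writeToRealm(response); // persist for the next launch
          // Push the fresh payload into Relay's in-memory store; views whose
          // data changed re-render automatically (undocumented API).
          Relay.Store.getStoreData()
            .handleQueryPayload(request.getQuery(), response);
        })
        .catch(() => { /* offline: keep showing the locally resolved data */ });

      return localResult;
    }));
  },
  sendMutation(mutationRequest) {
    // Mutations go straight to the server in this sketch.
    return fetchFromServer(
      mutationRequest.getQueryString(),
      mutationRequest.getVariables()
    ).then(response => mutationRequest.resolve({ response }));
  },
  supports() { return false; },
};

Relay.injectNetworkLayer(offlineFirstNetworkLayer);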
Our current implementation is somewhat tied up with our business logic, and hence it is difficult to open-source it. I'll try to provide a demo app if possible.

Pattern for "load local, then update, then remote and maybe update"

I'm looking for a coding pattern that comfortably solves the following problem. It's a paradigm I find myself using a lot in my development. After the user opens any UI, the following typically happens:
1. Load data from local storage
2. Update UI with local data
3. Load data from remote
4. Store remote data to local storage
5. Update UI with remote data
A simple example would be loading a Twitter timeline. The app presents what it has to the user, fetches updates from the remote in the background, and then updates the UI if it found more recent tweets.
Ideally, each step would run asynchronously and be cancelable at any time. I've been using a mix of futures/promises and callbacks so far, but I find myself writing a lot of glue code...
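To make the shape of the problem concrete, the five steps might be sketched as a single cancelable pipeline with async/await and an AbortController (loadLocal, saveLocal, fetchRemote, and render are hypothetical placeholders):

// Sketch of the five steps as one cancelable pipeline. loadLocal(), saveLocal(),
// fetchRemote() and render() stand in for app-specific functions.
async function loadThenRefresh(signal) {
  const local = await loadLocal();              // 1. load from local storage
  if (signal.aborted) return;
  if (local) render(local);                     // 2. update UI with local data

  const remote = await fetchRemote({ signal }); // 3. load from remote (cancelable)
  await saveLocal(remote);                      // 4. store remote data locally
  if (!signal.aborted) render(remote);          // 5. update UI with remote data
}

// Usage: abort when the user leaves the screen.
const controller = new AbortController();
loadThenRefresh(controller.signal).catch(err => {
  if (err.name !== 'AbortError') throw err;
});
// later, e.g. on unmount:
controller.abort();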
Any suggestions would be greatly appreciated.

CSV importing: server or browser?

From a conceptual point of view, which solution would perform better for importing a CSV into a database in a SaaS?
1. Parse the CSV file in the browser and make an AJAX call to the server for every row.
2. Upload the CSV file and let the server parse it and insert it into the DB.
I know this is too open-ended a question, given that no technology or hardware is specified. Anyway, what's better for the web server's performance: handling thousands of connections, or having to upload and parse big files?
I think the answer to your question depends a bit, but from my experience, sending the data to the server and uploading the CSV into the database has several benefits. For one, there is less "overhead per row" when uploading a straight CSV to a web or app server, and you can take advantage of things like server hardware and physical proximity to the DB server for speed. There are also a lot of tools that handle CSVs efficiently on the server side, depending on the tech stack you choose. I think it would be advantageous to send it en masse and have the server process the data upon upload.
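For illustration, the server-side route might look roughly like this in Node (a sketch only; Express with multer and csv-parse is one possible stack, and insertRows is a hypothetical bulk-insert helper):

// Sketch of the server-side approach: one upload, parsed and inserted in bulk.
// insertRows() is a hypothetical batched INSERT into your database.
const express = require('express');
const multer = require('multer');
const { parse } = require('csv-parse/sync');

const app = express();
const upload = multer({ storage: multer.memoryStorage() });

app.post('/import', upload.single('file'), async (req, res) => {
  // Parse the whole file in one pass instead of one AJAX call per row.
  const rows = parse(req.file.buffer.toString('utf8'), { columns: true });

  // Insert in batches to keep per-row overhead low on the DB side too.
  const BATCH = 1000;
  for (let i = 0; i < rows.length; i += BATCH) {
    await insertRows(rows.slice(i, i + BATCH)); // hypothetical bulk insert
  }
  res.json({ imported: rows.length });
});

app.listen(3000);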
HTH,
CDC
