Parse Platform: online mode with fallback to offline on timeout or network issues

Is there a way in Parse Platform to fall back to the local datastore if there is no connection?
I understand that there is pin/pinInBackground, so I can pin any object to the local datastore.
Then I can query the local datastore to get that info.
However, I always want to try the server data first, and only if that fails, get the local data.
Is there a way to do this automatically?
(Or do I have to pin everything locally, then query the remote store, and if that fails, query locally?)

Great question.
Parse has the concept of cached queries: https://docs.parseplatform.org/ios/guide/#caching-queries
The interesting feature of cached queries is that you can specify a "network, else cache" behaviour. However, this only works if you have previously cached the query results, and I've found that the delay between losing network connectivity and the cached query recognising that it has lost the network makes the whole capability a bit rubbish.
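For reference, the cache-policy route looks roughly like this (a sketch; note that query caching and the local datastore are mutually exclusive in the iOS SDK, so you use one or the other):

let query = PFQuery(className: "mySecretClass")
// Try the server first; fall back to previously cached results on failure.
query.cachePolicy = .networkElseCache
query.findObjectsInBackground { objects, error in
    // objects come from the server when reachable, otherwise from the query cache
}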
How I resolved this issue is with a combination of the Alamofire library and pinning objects. I chose Alamofire because it's extremely well supported and it spots drops in network connectivity near-immediately. I only have a few hundred records, so I'm not worried about pinning objects, and performance definitely does not seem to be affected. So here is how I work this...
Define some class objects at the top of the class
// Network management
private let reachability = NetworkReachabilityManager.default
private var hasInternet: Bool = false
Call a method as the view awakes
// View lifecycle
override func awakeFromNib() {
    super.awakeFromNib()
    self.monitorReachability()
}
Update object when network availability changes. I know this method could be improved.
private func monitorReachability() {
    reachability?.startListening { status in
        // Pattern-match the status instead of comparing its string description.
        switch status {
        case .notReachable, .unknown:
            self.hasInternet = false
        case .reachable:
            self.hasInternet = true
        }
        print("hasInternet = \(self.hasInternet)")
    }
}
Then when I call a query I have a switch as I set up the query object.
// Start setup of query
let query = PFQuery(className: "mySecretClass")
if !self.hasInternet {
    query.fromLocalDatastore()
}
// Complete rest of query configuration
Of course I pin all the results I ever return from the server.
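That pinning step is roughly this (a sketch using PFObject.pinAll(inBackground:), the batch variant of pin):

query.findObjectsInBackground { objects, error in
    if let objects = objects {
        // Keep a local copy so the offline branch of the query has data to serve.
        PFObject.pinAll(inBackground: objects)
    }
}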

Related

Apollo GraphQL client query returns introspection result instead of data

I'm currently trying to get data from the Squidex API for a Next.js app, using Apollo as the GraphQL client.
On localhost, both in dev and production mode, everything works fine.
After deploying the app to Heroku, the same query returns the introspection schema as its result instead of the expected data. As a real example, running a query like:
gql`
  {
    queryPageContents(search: "Home") {
      ...PagesFragmentsPageId
      data {
        ...PagesFragmentsPage
        ...PagesFragmentsHome
      }
    }
  }
  ${Pages.fragments.pageId}
  ${Pages.fragments.page}
  ${Pages.fragments.home}
`
Basically I'm asking for various data about a webpage, using fragments and so on.
On localhost the response contains the expected data under a top-level queryPageContents node, but on Heroku I receive an introspection result instead.
Because of this, my app fails to render: my code looks for a JSON node called queryPageContents but receives __schema as the query result, which causes a 500 error on the front end.
I googled around and found graphql-disable-introspection, which has to be installed server-side. I don't know whether it would solve this issue, and I don't understand how this can happen.
Any advice about this?
Thanks everyone in advance.
As Sebastian Stehle from Squidex pointed out, the issue is solved by deferring the page data initialization until after the loading/error checks. In a Next.js scenario it looks like this:
[...]
const { loading, error, data } = useQuery(PAGE_QUERY.pages.home);

// Exception check
if (error) {
    return <ErrorDb error={error} />;
}

// DB fetching check
if (loading) {
    return null;
}

// Data initialization happens only after the checks above,
// so `data` is guaranteed to be defined here.
const pages = data.queryPageContents;

return (
    // Page
    [...]
);
[...]

Deleting Apollo Client cache for a given query and every set of variables

I have a filtered list of items based on a getAllItems query, which takes a filter and an order-by option as arguments.
After creating a new item, I want to delete the cache for this query, no matter what variables were passed, and I don't know how to do this.
I don't think updating the cache is an option. The methods mentioned in the Apollo Client documentation (updating the cache after a mutation, refetchQueries, and update) all seem to need a given set of variables, but since the filter is a complex object (with some text information), I would need to update the cache for every set of variables that was previously submitted, and I don't know how to do that. Plus, only the server knows how the new item impacts pagination and ordering.
I don't think fetch-policy (for instance, setting it to cache-and-network) is what I'm looking for either: accessing the network is what I want after creating a new item, but when I'm just filtering the list (typing in a string to search), I want to keep the default cache-first behavior.
client.resetStore would reset the store for all types of queries (not only the getAllItems query), so it isn't what I'm looking for either.
I'm pretty sure I'm missing something here.
There's no officially supported way of doing this in the current version of Apollo but there is a workaround.
In your update function, after creating an item, you can iterate through the cache and delete every node whose key starts with the typename you are trying to remove from the cache, e.g.
// Loop through all the data in our cache
// and delete any items whose key starts with "Item".
// This empties the cache of all of our items and
// forces a refetch of the data when it is next requested.
Object.keys(cache.data.data).forEach(key =>
    key.match(/^Item/) && cache.data.delete(key)
);
This works for queries that exist multiple times in the cache with different variables, e.g. paginated queries.
I wrote an article on Medium that goes into much more detail on how this works, with an implementation example and an alternative solution that is more complicated but works better in a small number of use cases. Since the article expands on a concept I have already explained in this answer, I believe it is OK to share here: https://medium.com/@martinseanhunt/how-to-invalidate-cached-data-in-apollo-and-handle-updating-paginated-queries-379e4b9e4698
This worked for me (it requires Apollo Client 3 for the cache eviction feature) - it clears every query matched by the regexp from the cache.
After clearing the cache, the query will be refetched automatically without a manual trigger (if you are using Angular, gql.watch().valueChanges will perform the XHR request and emit the new value).
export const deleteQueryFromCache = (cache: any, matcher: string | RegExp): void => {
    const rootQuery = cache.data.data.ROOT_QUERY;
    Object.keys(rootQuery).forEach(key => {
        if (key.match(matcher)) {
            cache.evict({ id: "ROOT_QUERY", fieldName: key });
        }
    });
};
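A usage sketch for the asker's setup (the getAllItems field name comes from the question; cache.gc() is Apollo Client 3's garbage collector for unreachable entries):

// Remove every cached getAllItems entry, whatever variables it was called with,
// then let the garbage collector drop anything left unreachable.
deleteQueryFromCache(cache, /^getAllItems/);
cache.gc();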
Or, in an ngrx-like style, with a local resolver:
resolvers = {
    removeTask(
        parent,
        { id },
        { cache, getCacheKey }: { cache: InMemoryCache | any; getCacheKey: any }
    ) {
        const key = getCacheKey({ __typename: "Task", id });
        const { [key]: deleted, ...data } = cache.data.data;
        cache.data.data = { ...data };
        return id;
    }
}
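For reference, on Apollo Client 3 the same single-object removal can be written with cache.identify instead of the older getCacheKey helper (a sketch, reusing the Task typename from the resolver above):

// Evict the normalized Task object, then drop now-unreachable entries.
cache.evict({ id: cache.identify({ __typename: "Task", id }) });
cache.gc();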

Why are connections to Azure Redis Cache so high?

I am using Azure Redis Cache in a high-load scenario with a single machine querying the cache. The machine gets and sets roughly 20 items per second, more during the day and less at night.
So far, things have been working fine. Today I realized that the "Connected Clients" metric is extremely high, although I only have one client that constantly gets and sets items.
My code looks like this:
public class RedisCache<TValue> : ICache<TValue>
{
    private IDatabase cache;
    private ConnectionMultiplexer connectionMultiplexer;

    public RedisCache()
    {
        ConfigurationOptions config = new ConfigurationOptions();
        config.EndPoints.Add(GlobalConfig.Instance.GetConfig("RedisCacheUrl"));
        config.Password = GlobalConfig.Instance.GetConfig("RedisCachePassword");
        config.ConnectRetry = int.MaxValue; // retry connection if broken
        config.KeepAlive = 60;              // keep connection alive (ping every minute)
        config.Ssl = true;
        config.SyncTimeout = 8000;          // 8 second timeout for each get/set/remove operation
        config.ConnectTimeout = 20000;      // 20 seconds to connect to the cache

        connectionMultiplexer = ConnectionMultiplexer.Connect(config);
        cache = connectionMultiplexer.GetDatabase();
    }

    public virtual bool Add(string key, TValue item)
    {
        return cache.StringSet(key, RawSerializationHelper.Serialize(item));
    }
}
I am not creating more than one instance of this class, so that is not the problem. Maybe I misunderstand the connections metric and it really means the number of times I access the cache; however, that would not make much sense in my opinion. Any ideas, or has anyone seen a similar problem?
StackExchange.Redis had a race condition that could lead to leaked connections under some conditions. This has been fixed in build 1.0.333 or newer.
If you want to confirm this is the issue you are hitting, get a crash dump of your client application and look at the objects on the heap in a debugger. Look for a large number of StackExchange.Redis.ServerEndPoint objects.
Also, several users have had bugs in their code that resulted in leaked connection objects, often because the code tries to re-create the ConnectionMultiplexer when it sees failures or a disconnected state. There is really no need to re-create the ConnectionMultiplexer, as it has internal logic to re-create the connection as necessary; just make sure to set abortConnect to false in your connection string.
If you do decide to re-create the connection object, make sure to dispose of the old object before releasing all references to it.
The following is the pattern we are recommending:
private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    return ConnectionMultiplexer.Connect(
        "contoso5.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
});

public static ConnectionMultiplexer Connection
{
    get { return lazyConnection.Value; }
}
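All Redis calls then go through the single shared multiplexer; a minimal usage sketch (key and value are illustrative):

IDatabase db = Connection.GetDatabase();
db.StringSet("greeting", "hello");
string value = db.StringGet("greeting"); // RedisValue converts implicitly to string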

Is it a good practice to build a collection cache?

I'm experimenting with MongoDB from Node.js using the node-mongodb-native driver. A problem I'm running into is the number of nested callbacks, and I'm trying to simplify a few things by reducing the code required for a query.
Instead of this ...
db.collection("test", function(err, collection) {
collection.find(...).toArray(function(err, results) {
// ...
});
});
... I was thinking of building an object which acts as a cache of collections so that the first callback is not necessary. I'm using the following code for building the object:
var collections = {};
["test", "foo"].forEach(function(name) {
    db.collection(name, function(err, coll) {
        collections[name] = coll;
    });
});
With it, I'm able to clean up the first code snippet to:
collections.test.find(...).toArray(function(err, results) {
    // ...
});
I was wondering whether this is a good practice. It works just fine, but I guess the callback of getting a collection is there for a reason. Does it make sense to build a collection cache as I'm doing now?
That completely depends on what a collection object is.
- Is it live?
- Is it connected to the database?
- Does it do any internal caching?
- Does it reflect new data?
Without knowing those details I recommend you create a lazy evaluation proxy.
Mongo.collection("test").find(...).toArray(function(err, results) {
// ...
});
The idea here is that you internally store the find command, and when toArray is called you get the collection, invoke the stored find command on it, and then invoke toArray.
This means you're getting a fresh collection every time, which avoids the "is caching safe?" problem while still giving you a nice API.
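A minimal sketch of such a proxy, assuming the same db handle as in the question (illustrative only, not a full cursor wrapper):

var Mongo = {
    collection: function(name) {
        return {
            find: function() {
                var findArgs = arguments;
                return {
                    toArray: function(callback) {
                        // Defer the collection lookup until the query actually runs,
                        // so we always operate on a live collection object.
                        db.collection(name, function(err, coll) {
                            if (err) return callback(err);
                            coll.find.apply(coll, findArgs).toArray(callback);
                        });
                    }
                };
            }
        };
    }
};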

Separation of Concerns: Returning Projected Data Between Layers from a LINQ Query

I'm using LINQ and having trouble doing something that I believe should be trivial: I want to return data from one layer so it can be used independently of LINQ in another layer.
Suppose I have a Data Access Layer. It knows about the Entity Framework and how to interact with it, but it doesn't care who accesses it. One interesting requirement I have is that the queries in the Entity Framework return projected data that is not part of the entity model itself. Please don't ask me to change this part of the requirement and make POCOs for each return type, as it is not the best design given the problem I am trying to solve. Below is an example.
public class ChartData
{
    public <<returnType??>> GetData()
    {
        MyEntities context = new MyEntities();
        var results = from v in context.vManyColumnsOfData
                      where v.CompanyName == "acme"
                      select new { Year = v.SalesYear, Income = v.Income };
        return ??;
    }
}
Then, I would like an ASP.NET UI layer to be able to call into the Data Access Layer to get the data and bind it to a control. The UI layer should have no notion of where the data came from; it should only know that it has the data it needs to bind. Below is an example.
protected void chart_Load(object sender, EventArgs e)
{
    // Set some chart properties
    chart.Skin = "Default";
    ...

    // Set the data source
    ChartData dataMgr = new ChartData();
    <<returnType?>> data = dataMgr.GetData();
    chart.DataSource = data;
    chart.DataBind();
}
What is the best way to send LINQ-projected data back to another layer?
If you don't need to use the projected type statically, just return IEnumerable<object>.
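A minimal sketch of that approach, reusing the query from the question (AsEnumerable runs the query before the upcast so EF doesn't have to translate the Cast):

public IEnumerable<object> GetData()
{
    MyEntities context = new MyEntities();
    var results = from v in context.vManyColumnsOfData
                  where v.CompanyName == "acme"
                  select new { Year = v.SalesYear, Income = v.Income };
    // Anonymous instances are still objects, so an upcast is all that's needed.
    return results.AsEnumerable().Cast<object>().ToList();
}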
"Please don't ask me to change this part of the requirement and make POCOs for each return type, as it is not the best design given the problem I am trying to solve."
I feel I could rightly ignore this, as the best thing to do is to return a defined type. Anonymous types are useful when they are wholly contained within the method that creates them; once you start passing them around, it is time to give them proper class treatment.
However, to live within your imposed limitations, you can return IEnumerable<object> from the method and use that (or var) at the call site, relying on the dynamic binding of the control to get at the data. It won't help if you need to work with the objects programmatically, but it serves fine for data binding.
You cannot return an anonymous type, so basically you will need POCOs for this even though you don't want them.
"not the best design given the problem I am trying to solve"
Could you explain what you are trying to achieve a little more? It might be possible to return some type of list containing a dictionary of items (i.e. rows and columns) - think something like an untyped dataset (yuck).
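A rough sketch of that dictionary-per-row idea, reusing the question's query (column names are illustrative):

public List<Dictionary<string, object>> GetData()
{
    MyEntities context = new MyEntities();
    return (from v in context.vManyColumnsOfData
            where v.CompanyName == "acme"
            select new { Year = v.SalesYear, Income = v.Income })
        .AsEnumerable()
        .Select(r => new Dictionary<string, object>
        {
            { "Year", r.Year },
            { "Income", r.Income }
        })
        .ToList();
}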
Your GetData method can use IEnumerable (the "old" non-generic interface) as its return type.
Any dynamic resolution (e.g. ASP.NET or XAML bindings) should work as expected, which seems to be what you want to do.
However, if you want to use the results in your code, you will probably have to resort to .NET 4's dynamic keyword.
The following example can be run in LINQPad (in "C# Program" mode) and illustrates this:
void Main()
{
    var v = GetData();
    foreach (dynamic element in v)
    {
        ((string)element.Name).Dump();
    }
}

public IEnumerable GetData()
{
    return from i in Enumerable.Range(1, 10)
           select new
           {
               Name = "Item " + i,
               Value = i
           };
}
Keep in mind that, design-wise, coding like this will make most people frown and can affect performance.
