After deleting some items from my DB, I get this: Realms.RealmInvalidObjectException: This object is detached. Was it deleted from the realm?
In Realm Xamarin, you have to use RealmResults notifications to be notified when there is a change to your database.
Because Realm is zero-copy and the objects you obtain from it are just proxies to the underlying database, deleting an object on any thread deletes it on every thread once that thread advances to Realm's latest snapshot.
So it's best to make sure you're always notified of changes in your result set, update the UI accordingly, and handle the case where your objects may have been deleted by some operation (by checking that they're still valid).
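A minimal sketch of that pattern (Item and cachedItem are hypothetical; the exact callback signature varies between Realm .NET versions):

using Realms;

var realm = Realm.GetInstance();
var items = realm.All<Item>();   // Item is a hypothetical RealmObject

// Keep the returned token alive for as long as you want notifications;
// disposing it unsubscribes.
var token = items.SubscribeForNotifications((sender, changes) =>
{
    if (changes == null)
    {
        // Initial callback: bind the UI to 'sender'.
        return;
    }

    // Update the UI from changes.DeletedIndices, changes.InsertedIndices and
    // changes.ModifiedIndices instead of touching stale object references.
});

// Before reading an object you cached earlier, confirm it still exists:
if (cachedItem.IsValid)
{
    var name = cachedItem.Name;
}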
Do I need a garbage collector in lakeFS when I delete an object from a branch via the API?
Using the appropriate method, of course.
Do I understand correctly that the garbage collector is used only for objects that are deleted by a commit, and that those objects are soft deleted (by the commit)? And that if I use the delete API method, the object is hard deleted and I don't need to invoke the garbage collector?
lakeFS manages versions of your data, so deletions only affect later versions. The object itself remains and can still be read by accessing an older version.
Garbage collection removes the underlying files. Once the file is gone, its key is still visible in older versions, but if you try to access the file itself you will receive HTTP status code 410 Gone.
For full information, please see the Garbage collection docs.
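As a rough illustration of those semantics (a hedged sketch: the endpoint shapes are paraphrased from the lakeFS OpenAPI spec, and the host, repository, ref and credentials are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

var http = new HttpClient { BaseAddress = new Uri("http://localhost:8000/api/v1/") };
var creds = Convert.ToBase64String(Encoding.ASCII.GetBytes("ACCESS_KEY_ID:SECRET_ACCESS_KEY"));
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", creds);

var path = Uri.EscapeDataString("data/file.parquet");

// Removes the object from the branch head only; older versions still see it.
await http.DeleteAsync($"repositories/my-repo/branches/main/objects?path={path}");

// The same key remains readable through an older commit or tag...
var old = await http.GetAsync($"repositories/my-repo/refs/abc123/objects?path={path}");

// ...until garbage collection removes the underlying file, after which this
// request returns HTTP 410 Gone.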
A Realm holds a read lock on the version of the data accessed by it, so that changes made to the Realm on different threads do not modify or delete the data seen by this Realm. Calling this method releases the read lock, allowing the space used on disk to be reused by later write transactions rather than growing the file.
Is there a matching function in Xamarin.Realm like Obj-C/Swift's RLMRealm invalidate?
If not, is this a backlog item, or is it not required with the C# wrapper?
I think calling Realm.Refresh() would be a workaround - it will cause the Realm instance to relinquish the read lock it currently holds and move it to the latest version, which would free up the old version for compaction.
Ordinarily moving the read lock to the latest version would happen automatically if the thread you run on has a running CFRunLoop or ALooper, but on a dedicated worker thread you'd be responsible for calling Refresh() on your own to advance the read lock.
Please open an issue on https://github.com/realm/realm-dotnet for Invalidate() if Refresh() doesn't work for you.
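On a dedicated worker thread, that might look something like this (a sketch; DoSomeWork is a placeholder for the thread's real work):

using System.Threading;
using Realms;

void WorkerLoop(CancellationToken token)
{
    using (var realm = Realm.GetInstance())
    {
        while (!token.IsCancellationRequested)
        {
            DoSomeWork(realm);   // placeholder for your own logic
            realm.Refresh();     // advance the read lock to the latest version
                                 // so the versions left behind can be compacted
        }
    }
}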
I think you would use Realm.Close(). See:
https://realm.io/docs/xamarin/latest/api/class_realms_1_1_realm.html#a7f7a3199c392465d0767c6506c1af5b4
Closes the Realm if not already closed. Safe to call repeatedly. Note that this will close the file. Other references to the same database on the same thread will be invalidated.
I cannot find any hint in the docs regarding object lifecycle management.
In the XPC service, do I have to keep a strong reference to the NSXPCListener, or does the resume call take care of this effectively?
I'm using Swift and a connection creation object to get most of the stuff out of the main.swift file:
// main.swift
if let dependencies = Dependencies().setUp() {
    // Actually run the service code (and never return)
    NSRunLoop.currentRunLoop().run()
}
I have the hunch that the dependencies object (which creates the NSXPCListener during set-up) should keep a strong reference to the listener object. But the resume method is said to work like it does for operation queues.
Conversely, does the client need to keep the NSXPCConnection around?
In the XPC service, upon incoming connection, does setting exportedObject retain that object for the duration of the connection, or do I have to keep a strong ref to it myself?
Consequently: When multiple connections come in, should I maintain a list of exportedObjects?
In either the client or the service, should I obtain a remoteObjectProxy once and keep it around, or should I obtain a proxy object anew for every call?
My particular XPC service is a launchd process running all the time, not a one-off thing, and the client app itself might run for a few hours in the background, too. I worry whether it's safe to keep a proxy object to the background service for a potentially long-running communication.
If background services crash, launchd is said to restart them. Now, if my service were a "launch on demand" service instead, will message calls to proxy objects trigger a relaunch if necessary, will obtaining a proxy object do so, or will only reconnecting achieve that?
Thanks for helping me sort this out!
We're investigating moving to a distributed cache using Windows AppFabric. Our ASP.NET 4.0 application currently has a cache implementation that uses MemoryCache.
One key feature is that when items are added to the cache, a CacheItemPolicy is included that contains a ChangeMonitor:
CacheItemPolicy policy = new CacheItemPolicy();
policy.Priority = CacheItemPriority.Default;
policy.ChangeMonitors.Add(new LastPublishDateChangeMonitor(key, item, GetLastPublishDateCallBack));
The change monitor internally uses a timer to periodically trigger the delegate passed into it - which is usually a method to get a value from a DB for comparison.
The policy and its change monitor are then included when an item is added to the cache:
Cache.Add(key, item, policy);
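For reference, a monitor along those lines might look roughly like this (a sketch of the pattern only; the real LastPublishDateChangeMonitor internals may differ):

using System;
using System.Runtime.Caching;
using System.Threading;

public class PollingChangeMonitor : ChangeMonitor
{
    private readonly Func<DateTime> _getLastPublishDate;
    private readonly Timer _timer;
    private readonly string _uniqueId = Guid.NewGuid().ToString();
    private DateTime _lastSeen;

    public override string UniqueId
    {
        get { return _uniqueId; }
    }

    public PollingChangeMonitor(Func<DateTime> getLastPublishDate, TimeSpan interval)
    {
        _getLastPublishDate = getLastPublishDate;
        _lastSeen = getLastPublishDate();
        _timer = new Timer(_ => Poll(), null, interval, interval);
        // The ChangeMonitor contract requires this before notifications can fire.
        InitializationComplete();
    }

    private void Poll()
    {
        // Compare the current DB value with the last one seen; if it moved,
        // OnChanged invalidates the cache entry this monitor is attached to.
        if (_getLastPublishDate() != _lastSeen)
        {
            OnChanged(null);
        }
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _timer.Dispose();
        }
    }
}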
An early look at AppFabric's DataCache class seems to indicate that whilst a TimeSpan can be included when adding items to the cache, a CacheItemPolicy itself can't be.
Is there another way to implement the same ChangeMonitor-type functionality in AppFabric? Notifications, perhaps?
Cheers
Neil
There are only two hard problems in computer science: cache invalidation, naming things and off-by-one errors.
Phil Karlton
Unfortunately AppFabric has no support for this sort of monitoring to invalidate a cached item, and similarly no support for things like SqlCacheDependency.
However, AppFabric 1.1 brought in support for read-through and write-behind. Write-behind means that your application updates the cached data first rather than the underlying database, so that the cache always holds the latest version (and therefore the underlying data doesn't need to be monitored); the cache then updates the underlying database asynchronously. To implement read-through/write-behind, you'll need to create an object that inherits from DataCacheStoreProvider (MSDN) and write Read, Write and Delete methods that understand the structure of your database and how to update it.
According to http://nhibernate.info/doc/nh/en/index.html#manipulatingdata-exceptions, after a database exception the Session should be discarded.
Now, in our web app, in some cases it's normal to throw and catch ADOExceptions, for instance for constraint violations.
According to the document linked to we should then abandon the session. However, we still want to do some work with the database if we get a constraint violation so I need a new session.
In our tests we do this by calling
CurrentSessionContext.Unbind(SessionFactory).Close();
CurrentSessionContext.Bind(SessionFactory.OpenSession());
but in the web app we don't use CurrentSessionContext, we use LazySessionContext. So we can't directly reference the CurrentSessionContext in our business classes since it isn't used from the web and we can't reference the LazySessionContext since the HttpContext is not available during integration testing.
Is there a way to dispose and recreate a session and connect it to the current context, without directly referencing the context class? I have the SessionFactory object and the Session object.
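One possible shape for this (illustrative only; ISessionRebinder and the class names below are hypothetical, not NHibernate API) is to hide the rebinding behind a small interface and register the context-appropriate implementation in each environment:

using NHibernate;
using NHibernate.Context;

public interface ISessionRebinder
{
    ISession Recreate();
}

// Implementation for integration tests, which use CurrentSessionContext.
public class CurrentContextSessionRebinder : ISessionRebinder
{
    private readonly ISessionFactory _factory;

    public CurrentContextSessionRebinder(ISessionFactory factory)
    {
        _factory = factory;
    }

    public ISession Recreate()
    {
        CurrentSessionContext.Unbind(_factory).Close();
        var session = _factory.OpenSession();
        CurrentSessionContext.Bind(session);
        return session;
    }
}

// The web app would register an equivalent implementation built around
// LazySessionContext; business classes then depend only on ISessionRebinder.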
Without wanting to sound critical, I would suggest that you need to rethink the design of your application. You should implement an interface, through the use of combo boxes for instance, or validation, that would prevent users from entering data that would cause ADOExceptions such as constraint violations. If these do still occur, they are then exceptional circumstances that you can report to your users as an internal error, and maybe log through a separate mechanism such as the health monitoring built into ASP.NET.
I would also add that your entities may need another look, as constraint violations are not something you normally need to worry about when you are using NHibernate.