How long will it take to persist the latest key-value after I put the entry? - chronicle

I created a chronicle map with the createOrRecoverPersistedTo method. But after restarting the JVM process, some entries disappeared. How long does it take to persist the latest key-value after I put the entry? Is there any way to force the latest data to be persisted?

In theory, unless a "failure" event happens, such as the JVM process crashing or being killed, or the server shutting down, you shouldn't need to wait for entries to be persisted at all; the OS memory subsystem takes care of that.
As was explained here in detail, using createOrRecoverPersistedTo() after a "normal" server restart (let alone just a JVM process restart) is a harmful antipattern. You should simply use createPersistedTo().
Moreover, in a recent issue entry disappearance was also reported when ChronicleMap was recovered without any catastrophic event, so this could be a bug in the recovery procedure in Chronicle Map.
If you restart your server, mapped memory may not yet have been fully flushed to the files; e.g. on Linux the "safe" interval after which dirty memory should be flushed is defined by the vm.dirty_expire_centisecs setting (google it), which is 30 seconds by default.

Related

Dotnet Core In Memory Cache - what is the default expiration

I'm using MemoryCache in a dotnet core C# project. I'm using it to store a list of enums that I read out of a collection (I want to cache it since it uses reflection to load the enums, which takes time).
However, since my collection isn't going to change, I don't want it to expire at all. How do I do that? If I don't set any expiration (SlidingExpiration or AbsoluteExpiration), will my cache never expire?
If you do not specify an absolute and/or sliding expiration, then the item will theoretically remain cached indefinitely. In practical terms, persistence depends on two factors:
Memory pressure. If the system is resource-constrained and a running app needs additional memory, cached items are eligible to be removed from memory to free up RAM. You can, however, disable this by setting the cache priority for the entry to CacheItemPriority.NeverRemove.
Memory cache is process-bound. That means if you restart the server, or your application restarts for whatever reason, anything stored in memory cache is gone. Additionally, this means that in web-farm scenarios each instance of your application will have its own memory cache, since each is a separate process (even if you're simply running multiple instances on the same server).
If you truly need the operation to run only once, and the result to persist past app shutdown and even across multiple instances of your app, you need to employ distributed caching with a backing store such as SQL Server or Redis.

Is a hard restart of redis required to free memory?

I recently came upon an SO question where the OP asked in which scenarios Redis frees up memory. It seems they were told a hard restart is a potential way; however, this is untested in the case of Redis. Can anyone let me know for sure whether this works?
I have a live environment and I don't want to have to restart redis-server, but its memory footprint is debilitating now and I'm on the verge of a server migration. So it's important for me to remove as much bloat as possible (and there's a ton of bloat).
I'm not sure what you mean by "bloat", but attaching your server's INFO ALL output may be helpful.
By default, Redis uses jemalloc as a memory allocator. The allocator is in charge of actually freeing RAM for the OS to reclaim, after Redis frees it. Redis v4 and above include the ability to force the allocator to purge the freed RAM (MEMORY PURGE, see https://github.com/antirez/redis-doc/pull/851).
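For illustration, here is a minimal sketch of issuing that command from a client, assuming the hiredis C library and a Redis 4+ instance on localhost (running MEMORY PURGE from redis-cli does the same thing):

    #include <stdio.h>
    #include <hiredis/hiredis.h>

    int main(void) {
        /* Connect to a local Redis 4+ instance; adjust host/port as needed. */
        redisContext *ctx = redisConnect("127.0.0.1", 6379);
        if (ctx == NULL || ctx->err) {
            fprintf(stderr, "connection error: %s\n", ctx ? ctx->errstr : "out of memory");
            return 1;
        }

        /* Ask the allocator (jemalloc) to release freed pages back to the OS. */
        redisReply *reply = redisCommand(ctx, "MEMORY PURGE");
        if (reply != NULL) {
            printf("MEMORY PURGE -> %s\n", reply->str ? reply->str : "(no status)");
            freeReplyObject(reply);
        }

        redisFree(ctx);
        return 0;
    }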
Regardless of purge, there's also the matter of memory fragmentation. While v4 has the experimental active defrag feature, a restart is the way to "fix" that in prior versions.
To mitigate the downtime involved in a restart, use Redis' replication to create a slave and fail your apps over to it before restarting the original master.

SQLite DB locking on Windows like a mutex and without polling

I have the following situation:
One process is reading from a SQLite database.
Another process is updating the database. Updates do not happen very often and all transactions are short (less than 0.1 ms on average).
The reading process should have low latencies for its queries (around 0.1 ms).
If SQLite's locking worked like a mutex or readers-writer lock, everything would be OK.
From reading http://www.sqlite.org/lockingv3.html this should be possible. SQLite uses LockFileEx(), sometimes without LOCKFILE_FAIL_IMMEDIATELY, which would block the calling process as desired.
However, I could not figure out how to use or configure SQLite to achieve this behavior. Using a busy handler would involve polling, which is not acceptable because the minimal sleep time is usually 15 ms on Windows.
I would like the query to be executed as soon as the update transaction ends.
Is this possible without changing the source code of SQLite? If not, is there such a patch available somewhere?
SQLite does not use a synchronization mechanism that waits just until a lock is released.
SQLite never uses a blocking locking call; when it finds that the database is locked, it waits for some time and tries again.
(You could install your own busy handler to wait for a shorter time.)
The easiest way to prevent readers and a writer from blocking each other is to use WAL mode.
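As a minimal sketch (using the SQLite C API; the file name and error handling are illustrative), enabling WAL mode looks roughly like this:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db = NULL;
        char *errmsg = NULL;

        /* Open (or create) the shared database file; the path is illustrative. */
        if (sqlite3_open("shared.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        /* Switch the database to write-ahead logging. The setting is stored in
           the database file, so it only needs to be done once per database. */
        if (sqlite3_exec(db, "PRAGMA journal_mode=WAL;", NULL, NULL, &errmsg) != SQLITE_OK) {
            fprintf(stderr, "could not enable WAL: %s\n", errmsg);
            sqlite3_free(errmsg);
        }

        /* Optional: the built-in busy handler still polls internally, but in WAL
           mode readers and the writer no longer block each other at all. */
        sqlite3_busy_timeout(db, 50);

        /* ... run queries / updates here ... */

        sqlite3_close(db);
        return 0;
    }

In WAL mode readers see a consistent snapshot and do not wait on the writer, which sidesteps the busy-handler polling problem entirely.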
If you cannot use WAL mode, you can synchronize your transactions by implementing your own synchronization mechanism: use a common named mutex in all processes, and lock it around all your transactions.
(This would reduce concurrency if you had multiple readers.)
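If you do need the mutex approach, a sketch of what it might look like on Windows is below (the mutex name and helper functions are made up for illustration; use the Global\\ namespace instead of Local\\ if the processes run in different sessions):

    #include <windows.h>
    #include <sqlite3.h>

    /* One handle per process; every cooperating process opens the same name. */
    static HANDLE g_dbMutex;

    void init_db_lock(void) {
        /* Creates the named mutex if it does not exist yet, otherwise opens it. */
        g_dbMutex = CreateMutexW(NULL, FALSE, L"Local\\my_sqlite_db_lock");
    }

    int run_locked(sqlite3 *db, const char *sql) {
        int rc;

        /* Block until the other process has finished its transaction. */
        WaitForSingleObject(g_dbMutex, INFINITE);

        rc = sqlite3_exec(db, sql, NULL, NULL, NULL);

        ReleaseMutex(g_dbMutex);
        return rc;
    }

Because the mutex serializes the transactions themselves, SQLite's own busy handling should never kick in, and the waiting reader is woken as soon as the writer releases the mutex.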

Azure in role cache exceptions when service scales

I am using Windows Azure SDK 2.2 and have created an Azure cloud service that uses an in-role cache.
I have 2 instances of the service running under normal conditions.
When the service scales (up to 3 instances, or back down to 2 instances), I get lots of DataCacheExceptions. These are often accompanied by Azure DB connection failures from the processing going on around the cache. (If I don't find the entry I want in the cache, I get it from the DB and put it into the cache. All standard stuff.)
I have implemented retry logic on the cache gets and puts, and use the ReliableSqlConnection object with a retry policy for DB connections using the Transient Fault Handling application block.
The retry policy uses a fixed interval, retrying every second for 5 tries.
The failures are typically:
Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode:SubStatus:There is a temporary failure. Please retry later
Any idea why the scaling might cause these exceptions?
Should I try a less aggressive retry policy?
Any help appreciated.
I have also noticed that I am getting a high cache miss rate (> 70%), and when the system is struggling there is high CPU utilisation (> 80%).
Well, I haven't been able to find out any reason for the errors I am seeing, but I have 'fixed' the problem, sort of!
Looking at the last few days' processing stats, it is clear that the high CPU usage corresponds with the cloud service having 'problems'. I have changed the service to use two medium instances instead of two small instances.
This seems to have solved the problem, and the service has been running quite happily: low CPU usage, low memory usage, no exceptions.
So, whilst still not discovering what the source of the problems were, I seem to have overcome them by providing a bigger environment for the service to run in.
--Late news!!! I noticed this morning that from about 06:30 the CPU usage started to climb, along with the time taken for the service to process as it should. Errors started appearing and I had to restart the service at 10:30 to get things back to 'normal'. Also, when restarting the service, the OnRoleRun process threw loads of DataCacheExceptions before it started running again, 45 minutes later.
Now all seems well again, and I will monitor for the next hours/days...
There seems to be no explanation for this, remote desktop to the instances show no exceptions in the event log, other logging is not showing application problems, so I am still stumped.

How to emulate shm_open on Windows?

My service needs to store a few bits of information (at minimum, at least 20 bits or so, but I can easily make use of more) such that
it persists across service restarts, even if the service crashed or was otherwise terminated abnormally
it does not persist across a reboot
it can be read and updated with very little overhead
If I store this information in the registry or in a file, it will not get automatically emptied when the system reboots.
Now, if I were on a modern POSIX system, I would use shm_open, which would create a shared memory segment which persists across process restarts but not system reboots, and I could use shm_unlink to clean it up if the persistent data somehow got corrupted.
I found MSDN: Creating Named Shared Memory and started reimplementing pieces of it within my service; this basically uses CreateFileMapping(INVALID_HANDLE_VALUE, ..., PAGE_READWRITE, ..., "Global\\my_service") instead of shm_open("/my_service", O_RDWR, O_CREAT).
However, I have a few concerns, especially centered around the lifetime of this pagefile-backed mapping. I haven't found answers to these questions in the MSDN documentation:
Does the mapping persist across reboots?
If not, does the mapping disappear when all open handles to it are closed?
If not, is there a way to remove or clear the mapping? Doesn't need to be while it's in use.
If it does persist across reboots, or disappears when unreferenced, or cannot be cleared manually, this method is useless to me.
Can you verify or find faults in these points, and/or recommend a different approach?
If there were a directory that were guaranteed to be cleaned out upon reboot, I could save data in a temporary file there, but it still wouldn't be ideal: under certain system loads, we are encountering file open/write failures (rare, under 0.01% of the time, but still happening), and this functionality is to be used in the logging path. I would like not to introduce any more file operations here.
The shared memory mapping will not persist across reboots, and it will disappear when all of its handles are closed. A memory mapping object is a kernel object; kernel objects are always deleted when the last reference to them goes away, either explicitly via CloseHandle or when the process holding the reference exits.
Try creating a registry key with RegCreateKeyEx and the REG_OPTION_VOLATILE flag; the data will not be preserved when the corresponding hive is unloaded, which happens at system shutdown for HKLM and at user logoff for HKCU.
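A minimal sketch of that approach, assuming the state lives under HKLM and using a made-up key path and value name for illustration:

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        HKEY key;
        DWORD disposition;
        DWORD flags = 0, size = sizeof(flags);

        /* REG_OPTION_VOLATILE: the key is kept in memory only and disappears
           when the hive is unloaded, i.e. at system shutdown for HKLM. */
        LONG rc = RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                                  L"SOFTWARE\\MyService\\VolatileState", /* illustrative path */
                                  0, NULL, REG_OPTION_VOLATILE,
                                  KEY_READ | KEY_WRITE, NULL, &key, &disposition);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegCreateKeyExW failed: %ld\n", rc);
            return 1;
        }

        /* If the key already existed, the previous bits survived a service
           restart; after a reboot we start from zero again. */
        if (disposition == REG_OPENED_EXISTING_KEY) {
            RegQueryValueExW(key, L"Flags", NULL, NULL, (LPBYTE)&flags, &size);
        }

        flags |= 0x1; /* update whatever bits the service needs to remember */
        RegSetValueExW(key, L"Flags", 0, REG_DWORD, (const BYTE *)&flags, sizeof(flags));

        RegCloseKey(key);
        return 0;
    }

Note that creating keys under HKLM requires the process to run with sufficient privileges, which a Windows service typically has.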
Sounds like maybe you want serialization instead of shared memory? If that is indeed appropriate for your application, the way you serialize will depend on your language. If you're using C++, check out Boost.Serialization. C# undoubtedly has lots of serialization options (like Java), if that's what you're using.
