Enterprise Library cached object size - caching

I've set up the Enterprise Library caching counters using Perfmon. However, all I can see is the number of entries in the cache.
Could someone please tell me if there's a way to find out the size of the cached objects, so that I can specify correct values for the maximum number of items to be cached and removed, etc.?
Also, what does the "Missed Caches" counter really mean? I see quite a large number of misses although my web application is working as expected. Do I need to worry about this counter?

Enterprise Library Caching does not provide the size of the cache or the size of objects in the cache.
There are various approaches to estimating the object size that you could use to try to tune the cache size; a rough sketch follows the links below. See:
Find out the size of a .net object
How to get object size in memory?
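One rough way to estimate an object's footprint is to serialize it and measure the resulting byte count; the linked questions discuss .NET-specific options (binary serialization, memory profilers), but the idea is the same everywhere. A minimal sketch of the technique in Java, purely for illustration:

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RoughSizeEstimator {
    // Serializes the object and returns the byte count. This is only a rough
    // proxy for in-memory size: the serialized form omits object headers and
    // padding but adds class metadata, so treat it as an order-of-magnitude guide.
    public static int roughSizeOf(Serializable obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.size();
    }
}

Measuring a few representative cached objects this way gives you a per-item estimate, and from that a sensible value for the maximum number of items to keep in the cache.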
A cache miss occurs when an attempt is made to retrieve an item from the cache but the key is not found. Usually when this happens you would then add the item to the cache. This is not normally alarming: a cache with no backing store is empty at first, so initially you will see cache misses, but misses should decrease as the cache is loaded (unless, of course, items expire and are removed from the cache).
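To make the hit/miss distinction concrete, below is the usual cache-aside pattern: try the cache first, and on a miss load the item and add it so the next request hits. This is a Java sketch with a plain map standing in for the cache manager, and loadUserFromDb is a hypothetical loader; the Enterprise Library cache manager's get/add calls follow the same shape.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideExample {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    // Hypothetical expensive load, e.g. a database query.
    private Object loadUserFromDb(String userId) {
        return "user-" + userId;
    }

    public Object getUser(String userId) {
        Object value = cache.get(userId);   // non-null means a cache hit
        if (value == null) {                // cache miss: the miss counter increments here
            value = loadUserFromDb(userId); // load from the backing source
            cache.put(userId, value);       // add it so the next request is a hit
        }
        return value;
    }
}

The first request for each key is always a miss, which is why a freshly started application shows a burst of misses that tails off as the cache warms up.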

Related

What happens if a Guava cache is full and has no evictable element?

I am using the Google Guava cache with reference-based eviction.
I wonder what happens if the cache is full and no element in it is evictable? Is an out-of-memory error thrown?
Reference-based eviction is essentially no different from Java's standard GC behavior - the GC simply ignores the reference's presence in the cache. If an object falls out of scope everywhere except the cache, it will be evicted from the cache during GC. If all elements of the cache are still in scope somewhere else and therefore cannot be GCed, you will run into memory problems exactly as you would if you weren't using a cache. You cannot have more data in memory than the JVM is configured to permit, and using a reference-evicting cache doesn't change this.
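For illustration, a minimal sketch of a Guava cache configured for reference-based eviction (weakValues here; softValues behaves similarly but is only collected under memory pressure):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class GuavaReferenceEviction {
    public static void main(String[] args) {
        // Values are held via weak references: once nothing outside the cache
        // references a value, the GC may collect it and the entry disappears.
        Cache<String, byte[]> cache = CacheBuilder.newBuilder()
                .weakValues()
                .build();

        cache.put("report-1", new byte[1024 * 1024]);

        // While other code holds a strong reference to the value, it cannot be
        // collected - exactly the situation described in the answer above.
        byte[] pinned = cache.getIfPresent("report-1");
        System.out.println(pinned != null ? "still cached" : "already collected");
    }
}

If every value in the cache is strongly referenced elsewhere, the cache can evict nothing, and filling the heap will end in an OutOfMemoryError just as it would without the cache.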

Redis memory management - clear based on key, database or instance

I am very new to Redis. I've implemented caching in our application and it works nicely. I want to store two main data types: a directory listing and file content. It's not really relevant, but this will cache files served up via WebDAV.
I want the file structure to remain almost forever. The file content needs to be cached for a short time only. I have set up my expiry/TTL to reflect this.
When the server reaches memory capacity, is it possible to prioritise certain cached items over others, i.e. flush a key, flush a whole database, or flush a whole instance of Redis?
I want to keep my directory listing and flush the file content when memory begins to be an issue.
EDIT: This article seems to be what I need. I think I will need to use volatile-ttl. My file content will have a much shorter TTL set, so in theory this should be cleared first. If anyone has any other helpful advice I would love to hear it, but for now I am going to implement this.
The article linked above describes what I needed. I have set volatile-ttl as my maxmemory eviction policy.
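As a sketch of how that looks from application code (assuming the Jedis client; the policy itself normally lives in redis.conf as maxmemory-policy volatile-ttl): keys written with a TTL are eviction candidates, while keys written without one are never touched by volatile-ttl.

import redis.clients.jedis.Jedis;

public class RedisVolatileTtlExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Normally configured in redis.conf; set here only for illustration.
            jedis.configSet("maxmemory-policy", "volatile-ttl");

            // Directory listing: no TTL, so volatile-ttl will not evict it.
            jedis.set("dir:/docs", "[directory listing]");

            // File content: short TTL, so it expires quickly and is also the
            // first candidate for eviction when Redis reaches maxmemory.
            jedis.setex("file:/docs/report.txt", 300, "[file content]");
        }
    }
}

Note that volatile-ttl evicts only among keys that have a TTL, preferring those closest to expiry; keys without a TTL are at risk only under an allkeys-* policy.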

OLAP Saiku Cache expires

I'm using Saiku and PHPAnalytics to run MDX queries on my cube.
It seems that if I run queries, everything is fine and caching works. But if I come back after 2 hours and run those queries again, the cache is not used. Why? I need the cache to be kept for a long time. What should I do? I tried adding this to mondrian.properties:
mondrian.rolap.CachePool.costLimit = 2147483647
But it didn't help. What should I do?
The default in-memory cache of Mondrian stores things in a WeakHashMap. This means that it can be cleared at the discretion of the JVM's garbage collector. Most application servers are set up to do a periodic garbage collection sweep (usually every hour or so). You can either tweak your JVM's configuration so it doesn't do this, for example:
-Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
Or you can implement your own cache through the SegmentCache SPI. If your implementation uses hard references, they will never be collected. This is trickier to do and will require quite a bit of studying to get it right. You can start by taking a look at the default implementation and work from there.
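The essential point of a hard-reference implementation is that segments are held in an ordinary map, so the garbage collector can never reclaim them behind your back; entries leave only when you remove them. A conceptual Java sketch of that difference (not the actual Mondrian SegmentCache interface, which you would still need to implement around such a store):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HardReferenceSegmentStore {
    // Unlike the default WeakHashMap-backed cache, a ConcurrentHashMap holds
    // hard references: nothing here is reclaimed by the GC, only by remove().
    private final Map<String, Object> segments = new ConcurrentHashMap<>();

    public void put(String key, Object segment) {
        segments.put(key, segment);
    }

    public Object get(String key) {
        return segments.get(key);
    }

    public void evict(String key) {
        segments.remove(key);
    }
}

The trade-off is that you take over responsibility for memory: with hard references nothing is freed until you flush it, so an unbounded cache can eventually exhaust the heap.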
The Mondrian cache should keep data cached until the cache is deliberately flushed. That said, it uses an aging system to determine what should stay cached: should it run out of memory to store the data, the oldest query gets pushed out of the cache and replaced.
I've not tried the PHPAnalytics stuff, but maybe it makes some call into the Saiku server to flush the cache on a regular basis; otherwise this shouldn't happen.

Why do my ATG repository item caches end up with usedRatios of well over 100%?

I am running ATG 9 with a bunch of different objects configured in repository.xml to have specific cache sizes, TTLs, etc.
For example:
<item-descriptor name="USER"
                 query-expire-timeout="300000"
                 item-expire-timeout="300000"
                 item-cache-timeout="300000"
                 item-cache-size="20000"
                 query-cache-size="50">
...
I am expecting that the cache would not grow above that size and would expire old items to keep the cache size at or under the item-cache-size. However, when I look at the cache stats in the Dynamo admin console, I see that several of our items have usedRatios of 500-1000%. This hogs all of the memory in the JVM over time, as more and more items are cached and apparently never released. If I invoke the invalidateCaches method on the repository in the admin console, the free memory jumps back up, and then the slow march down begins again.
How can I ensure that the caches do not grow over their configured size and take over all the memory? Is there some configuration setting I am missing? Are there code tricks one must employ to keep the cache from growing out of control? The ATG docs aren't the most informative and googling around hasn't yielded much info either.
After starting your ATG instance, I suggest navigating to the ProfileAdapterRepository in the Dynamo Admin page (/dyn/admin/nucleus/atg/userprofiling/ProfileAdapterRepository/?propertyName=definitionFiles) and viewing the combined view of the repository definition files. It's the best way to be sure what the final file looks like, since it can be built up of many files.
You should see the attributes you've configured on the "user" repository item through this interface (note that it is all lower case).
If you don't see your attributes here, then your repository definition file is probably not loaded: either the module you're working on isn't started, or the file is not on the configuration path.

What is the Oracle KGL SIMULATOR?

What is this thing called a KGL SIMULATOR and how can its memory utilisation be managed by application developers?
The background to the question is that I'm occasionally getting errors like the following and would like to get a general understanding of what is using this heap space.
ORA-04031: unable to allocate 4032 bytes of shared memory ("shared pool","select text from view$ where...","sga heap(3,0)","kglsim heap")
I've read forum posts through Google suggesting that the kglsim is related to the KGL SIMULATOR, but there is no definition of that component, or any tips for developers.
KGL = Kernel General Library cache manager; as the name says, it deals with library cache objects such as cursors and cached stored object definitions (PL/SQL stored procedures, table definitions, etc.).
The KGL simulator is used for estimating the benefit of caching if the cache were larger than it currently is. The general idea is that when a library cache object is flushed out, its hash value (and a few other bits of info) is still kept in the KGL simulator hash table. This stores a history of objects which were in memory but have been flushed out.
When loading a library cache object (which means that no such object already exists in the library cache), Oracle checks the KGL simulator hash table to see whether an object with a matching hash value is in there. If a matching object is found, that means the required object had been in the cache in the past but was flushed out due to space pressure.
Using that information about how many library cache object (re)loads could have been avoided if the cache had been bigger (thanks to the KGL simulator history), and knowing how much time the object reloads took, Oracle can predict how much response time would have been saved instance-wide if the shared pool were bigger. This is seen in v$library_cache_advice.
Anyway, this error was probably raised in a victim session due to running out of shared pool space. In other words, someone else may have used up all the memory (or all the large-enough chunks), and this allocation for the KGL simulator failed because of that.
v$sgastat would be the first place to look when troubleshooting ORA-4031 errors; you need to identify how much free memory you have in the shared pool (and who's using up most of the memory).
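For example, to see the largest shared pool allocations and how much free memory remains, you can query v$sgastat. A plain SQL*Plus query works just as well; this JDBC sketch only assumes you can connect as a user with SELECT access on the v$ views (the connection details below are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SharedPoolUsage {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // hypothetical host/service
        try (Connection conn = DriverManager.getConnection(url, "monitoring_user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                     "select name, bytes from v$sgastat "
                     + "where pool = 'shared pool' order by bytes desc")) {
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // The 'free memory' row shows how much of the shared pool is
                    // unused; the other rows show which components consume it.
                    System.out.printf("%-35s %,d bytes%n", rs.getString(1), rs.getLong(2));
                }
            }
        }
    }
}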
--
Tanel Poder
http://blog.tanelpoder.com
I've found that KGL stands for "Kernel Generic Library".
Your issue could be a memory leak within Oracle. You probably should open a case with Oracle support.
