I'm using Varnish with
-s malloc,1G
It's currently 98% full. What will happen once it's completely full?
Will it purge?
Maybe purge old images/pages?
Or, better yet, purge the files with the fewest hits?
It looks like Varnish uses an LRU (least recently used) strategy to remove items when the cache becomes full of objects whose TTL (time to live) has not yet expired: it first removes objects whose TTL has expired, and if the cache is still full it removes the objects least recently accessed.
See
https://www.varnish-cache.org/trac/wiki/ArchitectureLRU
Note that you can watch the n_lru_nuked counter to see the rate at which objects are being evicted from the cache due to LRU.
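For example, on Varnish 4 and later (where counters carry the MAIN. prefix; older versions use the bare name) you can print that counter once with:

varnishstat -1 -f MAIN.n_lru_nuked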
I just want to flush the cache by variation, for example flush only the cache entries with variation id 5.
I didn't find any reference about flushing by params.
Thanks in advance.
There is no way to flush the cache by variation, at least not in any standardized way (the implementation would differ between cache storages, and for some of them it could be impossible). However, you can invalidate caches using TagDependency: after calling TagDependency::invalidate(), the old entry is still stored in the cache storage, but it will be discarded on the next Cache::get() call.
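For illustration only, here is a minimal Python sketch of the mechanism behind this (the names mirror Yii's API, but this is not Yii code): each tag carries a version, entries remember the tag versions they were written under, and a read discards the entry if any of its tags has been bumped since.

import itertools

class TagDependency:
    versions = {}                  # tag -> version stamp
    _stamp = itertools.count(1)    # monotonically increasing counter

    @classmethod
    def invalidate(cls, tag):
        # Bump the tag's version; entries written under the old
        # version become stale without being deleted.
        cls.versions[tag] = next(cls._stamp)

class Cache:
    def __init__(self):
        self.store = {}

    def set(self, key, value, tags):
        snapshot = {t: TagDependency.versions.get(t) for t in tags}
        self.store[key] = (value, snapshot)

    def get(self, key):
        value, snapshot = self.store.get(key, (None, {}))
        if all(TagDependency.versions.get(t) == v for t, v in snapshot.items()):
            return value
        return None  # stale: a tag was invalidated after the write

After TagDependency.invalidate("variation-5"), every entry written with that tag returns None on the next get(), which matches the behavior described above: the stale data stays in storage but is discarded on read.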
Using APCu with TYPO3 6.2 extensively, I always get high fragmentation of the cache over time. I have already seen values of 99% with a smaller shm_size.
In case you are a TYPO3 admin: I also switched the caches cache_pagesection, cache_hash, cache_pages (currently moved back to DB for testing purposes), cache_rootline, extbase_reflection, extbase_object, as well as some other extension caches, to the APC backend. Mainly, switching cache_hash away from DB sped up menu rendering times dramatically (https://forge.typo3.org/issues/57953)
1) Does APC fragmentation matter at all, or should I simply make sure it never runs out of memory?
2) To TYPO3 admins: do you happen to know which tables cause the most fragmentation, and which parts of the apcu.ini configuration are relevant for use with TYPO3?
I already tried apc.stat = 0, apc.user_ttl = 0, apc.ttl = 0 (as in the TYPO3 caching guide, http://docs.typo3.org/typo3cms/CoreApiReference/CachingFramework/FrontendsBackends/Index.html#caching-backend-apc) and increasing shm_size (currently at 512M, where normally around 100M would be used). shm_size does a good job of reducing fragmentation, but I'd rather have a smaller but full cache than a large, mostly unused one.
3) To APC(u) admins: could it be that frequently updated cache entries that also change in size cause most of the fragmentation? Or is there some other misconfiguration I'm unaware of?
I know there are a lot of entries in the cache (mainly JSON data from remote servers), some of which update every 5 minutes and are normally a different size each time. If that is indeed a cause, how can I avoid it? Btw: the APCu info shows a lot of entries taking up only 2 kB, but each with fragmented spacing of about 200 bytes.
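For what it's worth, one generic mitigation for fragmentation caused by values that vary slightly in size, not specific to APCu or TYPO3 and based only on the assumption that equally sized slots can be reused, is to pad payloads up to fixed-size buckets. A Python sketch:

import json

def padded(payload, min_size=2048):
    # Round up to the next power-of-two bucket so entries that vary
    # a little in size still occupy identically sized slots.
    size = max(min_size, 1 << max(len(payload) - 1, 0).bit_length())
    return payload.ljust(size, b"\0")

remote_data = {"status": "ok", "items": list(range(10))}   # e.g. JSON from a remote server
entry = padded(json.dumps(remote_data).encode("utf-8"))
restored = json.loads(entry.rstrip(b"\0"))                 # JSON never ends in NUL bytes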
4) To TYPO3 and APC admins: APC has great integration in TYPO3, but for many small, frequently updated entries, would you advise a different cache backend than APC?
This is no longer relevant for us; I found a different solution by reverting back to the MySQL cache. But if anyone comes here via search, this is how we did it in the end:
Leave the APC cache alone and use it only for the preconfigured extbase_object cache. That cache is less than 1 MB, has only a few inserts at the beginning, and yields a very high hit/miss ratio afterwards. As stated in the Install Tool in the section "Configuration Presets", this is what the APC cache backend was designed for.
I discovered this bug, https://forge.typo3.org/issues/59587, in the process and reviewed our cache usage again. It turned out we had huge cache entries used only for tag-to-ident mappings. My conclusion, even after trying out the fixed cache, is that APCu is great for storing frequently accessed key-value mappings, but falls behind when there are many frequently inserted or tagged entries (such as cache_hash or cache_pages).
Right now, the MySQL cache tables perform better for us, with extended use of the MySQL server's memory cache (but, in contrast to APCu, with disk backup). This was the magic setup for our my.cnf (found here: http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/):
innodb_buffer_pool_size = 512M
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
innodb_flush_method=O_DIRECT
innodb_file_per_table
With this additional MySQL server setup, the default TYPO3 cache tables do their job best.
I have a specific cache system in Redis.
The content of this system is quite volatile, and values get added and removed all the time. I want to keep the "used" keys in memory as long as possible, while letting the old ones expire.
Each request can require hundreds of keys from the cache.
I'm aware that I could set a "long enough" expiry time and just deal with the cache misses, but I'd like to have as few misses as possible.
Currently I'm doing something like this when I'm writing to / reading from the cache (sketched here as runnable Python with the redis-py client rather than pseudocode):

import redis

r = redis.Redis()
key, value, ttl = "user:42", "payload", 300   # example data

# write: store the value and (re)set its TTL in one call
r.set(key, value, ex=ttl)

# read: fetch the value, then refresh the TTL so hot keys stay alive
value = r.get(key)
r.expire(key, ttl)
I can optimise the reads by using pipelining.
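For example, assuming the redis-py client from the snippet above, the read-and-refresh for a batch of keys could be pipelined like this; the replies alternate between GET results and EXPIRE acknowledgements, so the values sit at the even indices:

keys = ["user:42", "user:43"]   # example batch

pipe = r.pipeline()
for key in keys:
    pipe.get(key)
    pipe.expire(key, ttl)
replies = pipe.execute()
values = replies[::2]   # GET replies; replies[1::2] are the EXPIRE acks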
Still, this doesn't seem like the best way of doing it.
Could someone give me a better strategy?
If you can live with the (current) resolution of 10 seconds, then the OBJECT IDLETIME command lets you get a sense of what has not been used for a while (in blocks of 10 seconds):
> SET X 10
OK
> OBJECT IDLETIME X
10
I would create a script (https://redis.io/commands/script-load) that does this atomically and faster directly on the server side, and then call it with EVALSHA (https://redis.io/commands/evalsha).
This saves the extra round trip for each of the commands.
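As a sketch, using redis-py's register_script helper (which loads the script and then calls EVALSHA under the hood), the read-plus-refresh could look like this:

import redis

r = redis.Redis()

# GET and EXPIRE run atomically on the server, in one round trip
read_and_touch = r.register_script("""
    local value = redis.call('GET', KEYS[1])
    if value then
        redis.call('EXPIRE', KEYS[1], ARGV[1])
    end
    return value
""")

value = read_and_touch(keys=["user:42"], args=[300])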
Alternatively, you can implement an algorithm similar to the LRU eviction Redis runs when it is out of space (https://redis.io/topics/lru-cache): every once in a while, sample random keys and remove the ones that are too old for you, optionally looping until you get a long run of fresh keys.
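A rough sketch of that sampling loop (OBJECT IDLETIME is the real command; the threshold and sample count below are made-up examples):

import redis

r = redis.Redis()

def evict_idle(max_idle_seconds=600, samples=100):
    # Approximate Redis' own LRU eviction: sample random keys and
    # delete the ones that have been idle for too long.
    for _ in range(samples):
        key = r.randomkey()
        if key is None:          # database is empty
            return
        if r.object("idletime", key) > max_idle_seconds:
            r.delete(key)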
If what you are trying to achieve is a perfect LRU (least recently used) cache, you can tune Redis to behave like this globally; here is a post about using Redis as an LRU cache:
http://oldblog.antirez.com/post/redis-as-LRU-cache.html
Note that this uses the maxmemory property of Redis, and the eviction rule is global unless you use a volatile-lru policy; see: How to make Redis choose LRU eviction policy for only some of the keys?
You are using a manual eviction solution with custom expiration/TTLs, which is the most flexible approach, but maybe you can simplify your configuration and get a more predictable in-memory cache size with this solution.
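For reference, the global setup boils down to two redis.conf directives (the 1gb limit is just an example); with volatile-lru instead of allkeys-lru, only keys that have a TTL set are candidates for eviction:

maxmemory 1gb
maxmemory-policy allkeys-lru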
I'm using Saiku and PHPAnalytics to run MDX queries on my cube.
If I run queries, everything is fine and caching works. But if I come back after 2 hours and run those queries again, the cache is not used. Why? I need the cache to be kept for a long time! What can I do? I tried adding this to mondrian.properties: mondrian.rolap.CachePool.costLimit = 2147483647
But it didn't help. What to do?
The default in-memory cache of Mondrian stores things in a WeakHashMap. This means it can be cleared at the discretion of the JVM's garbage collector. Most application servers are set up to do a periodic garbage collection sweep (usually every hour or so). One option is to tweak your JVM's configuration so that this doesn't happen, e.g.:
-Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
You can also write your own cache implementation of the SegmentCache SPI. If your implementation uses hard references, they will never be collected. This is trickier to do and will require quite a bit of studying to get it right. You can start by taking a look at the default implementation and work from there.
The Mondrian cache should keep entries until the cache is deliberately flushed. That said, it uses an aging system: should it run out of memory to store the data, the oldest query gets pushed out of the cache and replaced.
I've not tried the PHPAnalytics stuff, but maybe it makes some call into the Saiku server that flushes the cache on a regular basis; otherwise this shouldn't happen.
I've set up the Enterprise Library caching counters using Perfmon. However, all I can see is the number of entries in the cache.
Could someone tell me whether there's a way to find out the size of a cached object, so that I can specify correct values for the maximum number of items to be cached, removal thresholds, and so on?
Also, what does "Cache Misses" really mean? I see quite a large number of misses, although my web application is working as expected. Do I need to worry about this counter?
Enterprise Library Caching does not provide the size of the cache or the size of objects in the cache.
There are various approaches to finding the object size that you could use to try to tune the cache size. See:
Find out the size of a .net object
How to get object size in memory?
A cache miss occurs when an item is requested from the cache but the key is not found. Usually when this happens you would then add the item to the cache. This is not normally alarming: a cache with no backing store starts out empty, so initially you will see many misses, but they should decrease as the cache is populated (unless, of course, items expire and are removed from the cache).
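To illustrate why early misses are expected, the typical read-through pattern looks something like this (a generic Python sketch, not Enterprise Library code):

def get_or_load(cache, key, loader, ttl):
    value = cache.get(key)
    if value is None:
        # Cache miss: every first access to a key lands here, so a
        # cold cache produces many misses by design.
        value = loader(key)
        cache.set(key, value, ttl)
    return value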