Drupal cache not working - tables remain empty

My problem is similar to these, but since I am not allowed to comment, I have to ask again:
a) Drupal cache not working (tables empty)
b) Drupal cache tables are empty, not receiving data
I noticed that after I turned off the Boost module and activated the "normal" Drupal cache, the cache tables (those starting with cache_ in the database) remain empty. I read that this might be related to the Memcache module, which I had also used, but disabling that module doesn't change anything either.
I also suspected the Elysia Cron module of clearing the cache every minute, but a) the cache tables are always completely empty and b) system_cron only runs every hour.
Any more ideas about what could be wrong?

It seems it did have to do with memcache: I had forgotten to remove the following line from my settings.php:
$conf['cache_inc'] = 'sites/all/modules/memcache/memcache.inc';
After removing (or commenting out) that line, the cache tables are being filled again.
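For reference, this is roughly what the relevant part of settings.php looks like once the override is disabled (just a sketch of the lines in question, not a complete settings.php):

    // sites/default/settings.php (excerpt)
    // The Memcache override that kept the cache_* tables empty, now commented out:
    // $conf['cache_inc'] = 'sites/all/modules/memcache/memcache.inc';
    // Without a cache_inc override, Drupal falls back to its default database
    // cache backend and the cache_* tables are populated again.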

Related

Yii2 delete cache by variations

I just want to flush the cache by variation, for example flush only the cache entries with variation id 5.
I didn't find any reference about flush parameters.
Thanks in advance.
There is no way to flush cache by variation, at least not in any standardized way (implementation would differ for different cache storages, and for some of them this could be impossible). However you can invalidate caches using TagDependency - after calling TagDependency::invalidate() old cache still will be stored in cache storage, but it will be discarded on Cache::get() call.
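A minimal sketch of the TagDependency approach (the cache key, tag name and payload are illustrative):

    use yii\caching\TagDependency;

    $data = ['price' => 100];                              // illustrative payload
    // Store an entry and tag it, e.g. with the variation id:
    Yii::$app->cache->set(
        ['someData', 'variation' => 5],                    // illustrative cache key
        $data,
        3600,
        new TagDependency(['tags' => 'variation-5'])
    );

    // Later: mark everything tagged 'variation-5' as invalid. The old entries
    // stay in storage, but Cache::get() will treat them as misses.
    TagDependency::invalidate(Yii::$app->cache, 'variation-5');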

Possibility of stale data in cache-aside pattern

Just to recap, the cache-aside pattern defines the following steps when fetching and updating data.
Fetching Item
Return the item from cache if found in it.
If not found in cache, read from data store.
Put the read item in cache and return it.
Updating Item
Write the item in data store.
Remove the corresponding entry from cache.
This works perfectly in almost all cases, but it seems to fail in one theoretical scenario.
What if steps 1 and 2 of updating the item happen between steps 2 and 3 of fetching the item? In other words, suppose the data store initially held the value 'A' and it was not in the cache. When fetching the item we read 'A' from the data store, but before we put it into the cache, the item was updated to 'B' in another thread (so 'B' was written to the data store, and the update then tried to remove the cache entry, which did not exist at that time). Now the fetching thread puts the item it read (i.e. 'A') into the cache. So 'A' will stay cached, and further fetches will return stale data until the item expires or is updated again.
So am I missing something here? Is my understanding of the pattern wrong? Or is the scenario just practically impossible, so there is no need to worry about it?
Also I would like to know if some changes can be made in the pattern to avoid this problem.
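For concreteness, here is the flow above as a minimal PHP-style sketch ($cache and $store are hypothetical clients); the problematic window sits between the data-store read and the cache put:

    // Fetching an item (cache-aside), hypothetical $cache / $store clients
    function fetchItem($key, $cache, $store)
    {
        $value = $cache->get($key);
        if ($value !== null) {
            return $value;                  // 1. found in cache
        }
        $value = $store->read($key);        // 2. read 'A' from the data store
        // <-- race window: another thread may write 'B' to the store and
        //     delete the (still missing) cache entry right here
        $cache->put($key, $value);          // 3. stale 'A' gets cached
        return $value;
    }

    // Updating an item
    function updateItem($key, $value, $cache, $store)
    {
        $store->write($key, $value);        // 1. write 'B' to the data store
        $cache->delete($key);               // 2. remove the cache entry (no-op if missing)
    }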
Your understanding of the pattern appears perfectly correct, according to the MSDN definition. In fact, it mentions the same failure scenario that you describe.
The order of the steps in this sequence is important. If the item is removed before the cache is updated, there is a small window of opportunity for a client application to fetch the data (because it is not found in the cache) before the item in the data store has been changed, resulting in the cache containing stale data.
The MSDN article does note that, "it is usually impractical to expect that cached data will always be completely consistent with the data in the data store." Expiration and eviction are two strategies mentioned for dealing with this problem.
An old computer science joke goes like this.
There are only two hard problems in computer science: cache invalidation, naming things, and off-by-one errors.
You've stumbled upon the first of these problems.
Also I would like to know if some changes can be made in the pattern
to avoid this problem.
There is no way to avoid this situation in general. Memcached protocol introduces a special command:
"cas" is a check and set operation which means "store this data but
only if no one else has updated since I last fetched it."
The scenario should be modified:
Fetching Item
Return the item from cache if found in it.
If not found in cache, read from data store.
Check and swap the corresponding entry in cache and return it.
Updating Item
Check and swap the corresponding entry in cache.
Write the item in data store.
This scenario also does not guarantee full consistency.
Imagine the following situation:
Writing the item to the data store fails while updating the item in the cache succeeds. The latest item value will then be kept only in the cache.
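A sketch of the check-and-set variant, again with a hypothetical client rather than a specific memcached binding: gets() returns the value together with a CAS token, cas() stores only if the token still matches the entry's current state, and add() stores only if the key does not exist yet.

    function fetchItem($key, $cache, $store)
    {
        list($value, $token) = $cache->gets($key);
        if ($value !== null) {
            return $value;                  // served from cache
        }
        $value = $store->read($key);        // read from the data store
        // add() is rejected if a concurrent updater created the entry in the
        // meantime, so the possibly stale value read above never overwrites it.
        $cache->add($key, $value);
        return $value;
    }

    function updateItem($key, $value, $cache, $store)
    {
        list(, $token) = $cache->gets($key);
        // cas() is rejected if another thread changed the entry after gets();
        // the cache update can then be retried or simply skipped.
        $cache->cas($token, $key, $value);
        $store->write($key, $value);        // then write to the data store
    }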

APC with TYPO3: high fragmentation over time

Using APCu with TYPO3 6.2 extensively, I always get a high fragmentation of the cache over time. I already had values of 99% with a smaller shm_size.
In case you are a TYPO3 admin: I also switched the caches cache_pagesection, cache_hash, cache_pages (currently moved back to the DB for testing purposes), cache_rootline, extbase_reflection, extbase_object as well as some other extension caches to the apc backend. Mainly switching cache_hash away from the DB sped up menu rendering times dramatically (https://forge.typo3.org/issues/57953).
1) Does APC fragmentation matter at all or should I simply watch out that it just never runs out of memory?
2) To TYPO3 admins: do you happen to have an idea which tables cause fragmentation most and what bit in the apcu.ini configuration is relevant for usage with TYPO3?
I already tried using apc.stat = 0, apc.user_ttl = 0, apc.ttl = 0 (as in the T3 caching guide http://docs.typo3.org/typo3cms/CoreApiReference/CachingFramework/FrontendsBackends/Index.html#caching-backend-apc) and to increase the shm_size (currently at 512M where normally around 100M would be used). Shm_size does a good job at reducing fragmentation, but I'd rather have a smaller but full cache than a large one unused.
3) To APC(u) admins: could it be that frequently updating cache entries that change in size as well cause most of the fragmentation? Or is there any other misconfiguration that I'm unaware of?
I know there is a lot of entries in cache (mainly JSON data from remote servers) where some of them update every 5 minutes and normally are a different size each time. If that is indeed a cause, how can I avoid it? Btw: APCU Info shows there are a lot of entries taking up only 2kB but each with a fragmented spacing of about 200 Bytes.
4) To TYPO3 and APC admins: apc has a great integration in TYPO3, but for more frequently updating and many small entries, would you advise a different cache backend than apc?
This is no longer relevant for us; I found a different solution by reverting back to the MySQL cache. But if anyone comes here via search, this is how we did it in the end:
Leave the APC cache alone and only use it for the preconfigured extbase_object cache. This one is less than 1MB, has only a few inserts at the beginning and yields a very high hit / miss ratio after. As stated in the install tool in the section "Configuration Presets", this is what the cache backend has been designed for.
I discovered this bug https://forge.typo3.org/issues/59587 in the process and reviewed our cache usage again. It turned out we had huge cache entries used only for tag-to-ident mappings. My conclusion, even after trying out the fixed cache, is that APCu is great for storing frequently accessed key-value mappings but falls short when a lot of frequently inserted or tagged entries are around (such as cache_hash or cache_pages).
Right now, the MySQL cache tables perform better with extended usage of the MySQL server memory cache (but, in contrast to APCu, with disk backup). This was the magic setup for our my.cnf (found here: http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/):
innodb_buffer_pool_size = 512M
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
innodb_flush_method=O_DIRECT
innodb_file_per_table
With this additional MySQL server setup, the default typo3 cache tables do their job best.

Key based caching

I'm reading this article:
http://37signals.com/svn/posts/3113-how-key-based-cache-expiration-works
I'm not using rails so I don't really understand their example.
It says in #3:
When the key changes, you simply write the new content to this new
key. So if you update the todo, the key changes from
todos/5-20110218104500 to todos/5-20110218105545, and thus the new
content is written based on the updated object.
How does the view know to read from the new todos/5-20110218105545 instead of the old one?
I was confused about that too at first -- how does this save a trip to the database if you have to read from the database anyway to see if the cache is valid? However, see Jesse's comments (1, 2) from Feb 12th:
How do you know what the cache key is? You would have to fetch it from the database to know the mtime right? If you’re pulling the record from the database already, I would expect that to be the greatest hit, no?
Am I missing something?
and then
Please remove my brain-dead comment. I just realized why this doesn’t matter: the caching is cascaded, so yes a full depth regeneration incurs a DB hit. The next cache hit will incur one DB query for the top-level object—all the descendant objects are not queried because the cache for the parent object includes cached versions for the children (thus, no query necessary).
And Paul Leader's comment 2 below that:
Bingo. That’s why it works soooo well. If you do it right it doesn’t just eliminate the need to generate the HTML but any need to hit the db. With this caching system in place, our data-vis app is almost instantaneous, it’s actually useable and the code is much nicer.
So given the models that DHH lists in step 5 of the article and the views he lists in step 6, and given that you've properly setup your relationships to touch the parent objects on update, and given that your partials access your child data as parent.children, or even child.children in nested partials, then this caching system should have a net gain because as long as the parent's cache-key is still valid then the parent.children lookup will never happen and will also be pulled from cache, etc.
However, this method may be pointless if your partials reference lots of instance variables from the controller since those queries will already have been performed by the time Rails sees the calls to cache in the view templates. In that case you would probably be better off using other caching patterns.
Or at least this is my understanding of how it works. HTH
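Outside of Rails the same idea fits in a few lines. Here is a hypothetical PHP-flavored sketch where the key embeds the object's id and last-updated timestamp, so an update automatically moves readers to a fresh key and stale keys simply stop being requested (and are eventually evicted by the cache's LRU policy):

    function cachedTodoHtml($todo, $cache)
    {
        // e.g. "todos/5-20110218105545" after the update from the article
        $key = 'todos/' . $todo->getId() . '-' . $todo->getUpdatedAt();
        $html = $cache->get($key);
        if ($html === false) {
            $html = renderTodoPartial($todo);   // hypothetical render step
            $cache->set($key, $html);
        }
        return $html;
    }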

Tweaking magento for performance

I'm looking at the performance (server load time) of a Magento site and I'm trying to tune the search result pages. I realized that even when I disabled all the heavy things like top navigation, layered navigation and product listing, and cleared all caches, the Magento core still runs something like 60 SQL queries against the database. Does anyone have a procedure for getting rid of them, or reducing them to some acceptable amount?
Also, can I somehow reduce the time spent creating blocks?
Thank you very much,
Jaro.
Magento is an extremely flexible ecommerce framework, but that flexibility comes with a price: performance. This answer is a collection of pointers and some details on caching (especially for blocks).
One thing to consider is the Magento environment, e.g. tuning PHP, the web server (favor nginx over Apache), and MySQL. Also, set up a good caching backend for Magento. All of this is covered e.g. in the Magento Performance Whitepaper, which applies to the CE as well.
After the environment is set up, the other side of things is the code.
Reducing the number of queries is possible for some pages by enabling the flat table catalog (System > Configuration > Catalog > Frontend), but you will always have a high number of queries.
You also can't really reduce the time spent creating the blocks except by tuning the environment (APC, memory, CPU). So as the other commenters said, your best choice is utilizing the caching functionality that Magento has built in.
Magento Block Caching
Because you specifically mentioned blocks in the question, I'll elaborate a bit more on block caching. Block caching is governed by three properties:
cache_lifetime
cache_key
cache_tags
All these properties can be set in the _construct() method of a block using setData() or magic setters, or by implementing the associated getter methods (getCacheLifetime(), getCacheKey(), getCacheTags()).
The cache_lifetime is specified in (integer) seconds. If it is set to false (boolean), the block will be cached forever (no expiry). If it is set to null, the block will not be cached (this is the default in Mage_Core_Block_Abstract).
The cache_key is the unique string that is used to identify the cache record in the cache pool. By default it is constructed from the array returned by the method getCacheKeyInfo().
    // Mage_Core_Block_Abstract
    public function getCacheKeyInfo()
    {
        return array(
            $this->getNameInLayout()
        );
    }

    public function getCacheKey()
    {
        if ($this->hasData('cache_key')) {
            return $this->getData('cache_key');
        }
        /**
         * don't prevent recalculation by saving generated cache key
         * because of ability to render single block instance with different data
         */
        $key = $this->getCacheKeyInfo();
        //ksort($key); // ignore order
        $key = array_values($key);  // ignore array keys
        $key = implode('|', $key);
        $key = sha1($key);
        return $key;
    }
The best way to customize the cache key in custom blocks is to override the getCacheKeyInfo() method and add the data that you need to uniquely identify the cached block as needed.
For example, in order to cache a different version of a block depending on the customer group you could do:
    public function getCacheKeyInfo()
    {
        $info = parent::getCacheKeyInfo();
        $info[] = Mage::getSingleton('customer/session')->getCustomerGroupId();
        return $info;
    }
The cache_tags are an array that enable cache segmentation. You can delete sections of the cache matching one or more tags only.
In the admin interface under System > Cache Management you can see a couple of the default cache tags that are available (e.g. BLOCK_HTML, CONFIG, ...). You can use custom cache tags, too, simply by specifying them.
This is part of the Zend_Cache implementation, and needs to be customized far less frequently compared to the cache_lifetime and the cache_key.
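Putting the three properties together, a custom block could initialize its caching roughly like this (a sketch only; the class name, lifetime and tag are illustrative examples):

    class My_Module_Block_Example extends Mage_Core_Block_Template
    {
        protected function _construct()
        {
            parent::_construct();
            $this->addData(array(
                // seconds; false would mean "cache forever", null disables caching
                'cache_lifetime' => 3600,
                // a custom tag (see above), so this block can be cleaned selectively
                'cache_tags'     => array('MY_MODULE'),
            ));
            // cache_key could be set here as well, but overriding
            // getCacheKeyInfo() as shown above is usually cleaner.
        }
    }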
Other Caching
Besides blocks Magento caches many other things (collection data, configuration, ...).
You can cache your own data using Mage::app()->saveCache(), Mage::app()->loadCache(), Mage::app()->cleanCache() and Mage::app()->removeCache(). Please look in Mage_Core_Model_App for details on these methods; they are rather straightforward.
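For example, a small sketch of caching your own data through these methods (the cache id, tag, lifetime and URL are illustrative):

    $cacheId = 'mymodule_remote_feed';                     // illustrative id
    $data = Mage::app()->loadCache($cacheId);
    if (!$data) {
        $data = file_get_contents('https://example.com/feed.json');
        // save for 300 seconds, tagged so it can be cleaned selectively later
        Mage::app()->saveCache($data, $cacheId, array('MYMODULE'), 300);
    }
    // drop everything carrying that tag:
    Mage::app()->cleanCache(array('MYMODULE'));
    // or remove just this one entry:
    Mage::app()->removeCache($cacheId);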
You will also want to use a full page cache module. If you are using the Magento EE, you already have one. Otherwise search Magento Connect - there are many options (commercial).
Some of those modules also tune various parts of Magento for you beyond the full page caching aspect, e.g. Nitrogento (commercial).
Using a reverse proxy like Varnish is also very beneficial.
There are quite a number of blog posts on this subject. Here is one post by the publishers of the Nitrogento extension.
If you are running Magento on a more low-scale environment, check out my post on the optimization of the file cache backend on magebase.com.
I am adding additional comments for speed:
Instead of using Apache use nginx or litespeed.
Make sure flat catalog is used.
If possible use FPC.
Set compiler mode on.
Merge CSS and JS (e.g. with Fooman Speedster).
Use image sprites to reduce the number of requests.
Use the MySQL query cache, but avoid a size greater than 64 MB.
Remove all modules not in use by removing their XML; just disabling them will not do.
Store sessions in RAM.
Use of APC is recommended.
Run your cron during off-peak hours.
Delete additional stores if not in use.
Delete cart rules if not in use.
Optimize images for size.
Use Ajax wherever possible.
CMS blocks take more time than a Magento block, so unless you need a block to be editable, do not use CMS blocks.
Do not use count() on a collection; use the collection's getSize() to get the collection size (see the sketch after this list).
Minimize the number of searchable attributes, as these result in columns in the flat catalog table and will slow down your search.
Use of Solr search is recommended. It comes with the EE version, but it can be installed with CE as well.
Minimize the number of customer groups, as suggested in the comments.
Enable compression in .htaccess (mod_gzip for Apache 1.3, mod_deflate for Apache 2)
Remove staging stores if on EE.
Use Apache mod_expires and be sure to set how long files should be cached (in case you are on an Apache server).
Use a Content Delivery Network (CDN).
Enable Apache KeepAlives.
Make your output W3C compliant
Use getChildHtml('childName') rather than placing block code directly in the .phtml file, as getChildHtml() allows the child block to be cached.
Make sure cron is run so the logs stored in the database get cleaned.
Keep the number of days logs are kept to the minimum your requirements allow.
Keep the cache in RAM if memory permits.
Reduce hard disk reads in favor of reads from RAM, as RAM is faster.
Upgrade PHP to a version above 5.3.
If on EE, make sure that most pages are delivered without application initialization. Even if only one container needs application initialization it will affect execution speed, because apart from URL rewrites most of the other code gets executed.
Check the layout XML for blocks placed in the default handle; if those blocks are not on a specific page, move them from the default handle to specific handles. It has been observed that lots of blocks are executed that are never displayed.
If using FPC, make sure your containers are cached and repeat requests for a container are delivered via the cache. Improper placeholder definition results in the container cache not being used and new container content being generated each time.
Analyze page blocks and variables and, if possible, add those variables/blocks to the cache.
Switch off log writing in Magento.
Remove Admin notification module.
Use of image sprites.
Use some web test tool to analyse number of requests and other html related parameters responsible for download time and act accordingly.
Remove attributes if not needed. With proper care we can even remove system attributes that are not in use.
If on Enterprise, make sure partial indexing is used effectively.
Write your own Solr search population to bypass Magento's search indexing.
Clean _cl tables or reduce _cl table rows.
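Regarding the collection count tip above, a quick sketch of the difference:

    // getSize() runs a separate COUNT(*) query without loading the items:
    $collection = Mage::getModel('catalog/product')->getCollection();
    $total = $collection->getSize();

    // count($collection) (or iterating the collection) loads the whole
    // collection into memory first, which is much heavier for large result sets:
    // $total = count($collection);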
I would add to the list: try to avoid the file cache if possible and replace it with APC / Redis / Memcache (as suggested by Jaro).
Remove system attributes that are not in use (be careful, do a thorough check before removing).
There are some crontab jobs that are not applicable to all stores, so depending on your store's features those can be removed.
Optimize attribute management, e.g. only set attributes as required, searchable or used in listing where actually needed.
Some observers are not required for all stores, so if an observer is not applicable to a specific Magento site it should be removed.
Make sure FPC is applicable to most of the site's pages, especially when you have added new controllers that deliver a page.
Magento has lots of features, and for this it has many events and associated observers. There are always a few features a store does not use, so any observer related to such a feature should be removed. E.g.: the Enterprise version has the category permission concept; if it is not used, it is recommended to remove the permission-related observers from the save-after events.
If only a specific attribute needs to be saved for a product, then instead of calling $product->save() call a function that saves just that attribute (see the sketch after this list).
In the EE version, which has partial indexing and triggers, modify the triggers to avoid multiple entries in the _cl tables.
No .phtml file should bypass blocks and use modules or resources directly, as this results in no caching, which in turn means more work for Magento.
Deliver images depending on the device in use.
Some of the FPCs recommended for Community: Lesti (free as of this writing), Amasty (commercial), Extender (commercial) and Bolt (commercial).
Warm the cache.
Control bots via .htaccess during peak hours.
Pre-populate values in a custom table for layered navigation via a custom script that executes daily by cron.
Make sure to avoid unwanted keys to reduce cache size.
Use a higher PHP version (5.4+).
Use a higher MySQL version (5.5+).
Reduce the number of DOM elements.
Move all JS out of the HTML pages.
Remove commented-out HTML.
Modify triggers if on the Enterprise version (1.13 or higher) so as to reduce _cl table entries, as these entries result in cache flushing, which in turn results in a lower cache hit rate and hence more TTFB time.
Use Magmi to import products.
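Regarding saving a single attribute (mentioned above), a sketch of the usual approach instead of a full $product->save() (the attribute code is illustrative):

    // Update one attribute without running the full product save process.
    $product->setData('news_from_date', now());            // illustrative attribute
    $product->getResource()->saveAttribute($product, 'news_from_date');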
As Vinai said, Magento is all about extensibility; raw performance is secondary but is remedied by things like indexing and caching. Significantly improving performance without caching is going to be very difficult. Short of full-page caching, enabling block caching is a good method of improving performance, but proper cache invalidation is key. Many blocks are cacheable but not configured to be cached by default, so identify the slowest ones using profiling and use Vinai's guide for enabling caching. Here are a few additional things to keep in mind with block caching:
Any block that lists product info should have the product's tag, which is 'catalog_product_'.$productId. Similarly, use 'catalog_category_'.$categoryId for categories. This will ensure the cache is invalidated when the product or category is saved (edited in the backend). Don't set these in the constructor; set them in an overridden getCacheTags() so that they are only collected when the block is saved to the cache and not when it is loaded from cache (since that would defeat the purpose of caching it). See the sketch after these tips.
If you use https and the block can appear on an https page and includes static resources, make sure the cache key includes Mage::app()->getRequest()->isSecure() or else you'll end up with http urls on https pages and vice versa.
Make sure your cache backend has plenty of capacity and avoid needless cache flushes.
Don't cache child blocks of a block that is itself cached unless the parent changes much more frequently than the child blocks or else you're just cluttering your cache backend.
If you do cache tagging properly you should be able to use a very long default cache lifetime safely. I believe setting "false" as the lifetime actually uses the default, not infinite. The default is 7200 seconds but can be configured in local.xml.
Using the redis backend in most cases will give you the best and most consistent performance. When using Redis you can monitor used memory size using this munin plugin.
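Regarding the first point above about product tags, a sketch of such a getCacheTags() override for a hypothetical product listing block (assuming it exposes its loaded collection via getLoadedProductCollection(), as Mage_Catalog_Block_Product_List does):

    public function getCacheTags()
    {
        $tags = parent::getCacheTags();
        // Entity tags are collected only when the block is rendered and saved
        // to the cache, not in the constructor.
        foreach ($this->getLoadedProductCollection() as $product) {
            $tags[] = 'catalog_product_' . $product->getId();
        }
        return $tags;
    }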
Just to follow on from Mark... most of the tables in the Magento database are InnoDB. Whilst the query cache can be used in a few specific places, the following are more directly relevant...
innodb_buffer_pool_size
innodb_thread_concurrency
innodb_flush_method
innodb_flush_log_at_trx_commit
I also use
innodb_file_per_table
as this can be beneficial in reorganising specific tables.
If you give the database enough resources (within reason), the amount of traffic really doesn't load the server up at all, as the majority of queries are repeats anyway and are delivered out of the database cache.
In other words, you're probably worrying about nothing...
Make sure the MySQL query cache is turned on, and set these variables in MySQL (they may need tweaking depending on your setup):
query_cache_type=1
query_cache_size=64M
I found a very interesting blog post about Magento performance optimization; it covers many configuration settings for your server and your Magento store and was very helpful for me.
http://www.mgt-commerce.com/blog/magento-on-steroids-best-practice-for-highest-performance/
First you need to audit and optimize time to first byte (TTFB).
Magento has profiler built-in that will help you identify unoptimized code blocks.
Examine your template files and make sure you DO NOT load product models inside a loop (common performance hog):
    foreach ($collection as $_product) {
        $_product = Mage::getModel('catalog/product')->load($_product->getId());
        // ... template output ...
    }
I see this code often in product/list.phtml
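A sketch of the alternative: select what you need on the collection up front, so the loop can use the items it already has instead of re-loading each model (the attribute list is illustrative).

    $collection = Mage::getModel('catalog/product')->getCollection()
        ->addAttributeToSelect(array('name', 'price', 'small_image'));
    foreach ($collection as $_product) {
        // $_product already carries the selected attributes; no extra load()
        echo $this->escapeHtml($_product->getName());
    }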
I wrote a step-by-step article on how to optimize TTFB
Disclaimer: the link points to my own website
