Is there a maximum number of children in VirtueMart? - joomla

Joomla 2.5 / VirtueMart
I'm using custom fields / plug-ins of type "Stockable variants" to create child products.
I have a parent with 106 children, and the problem is that any change I now try to save on the parent is not saved.
It just redirects me to the VirtueMart dashboard.
What is the most likely cause of this problem?
Is there any limit?
Or is more Apache performance needed?
I thought post_max_size was too low, so I changed it to 64M.

I had the same problem with many custom fields, and it was hard to find the solution. In my case it was the POST request limit on the maximum number of input variables.
You could try changing the php.ini directive "max_input_vars" (it may not be there; just add it). Set it to 2000 (the default is 1000).
max_input_vars = 2000
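To confirm the new value is actually picked up (a quick sanity check; run it through the web server, since the CLI may use a different php.ini):

<?php
// temporary check script - delete after use
var_dump(ini_get('max_input_vars')); // expect string(4) "2000"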

Related

The Laravel registration page takes a long time to load

I just started using Laravel and have hardly written any code yet. The problem on my site is that the register page takes about 120 seconds to load. There is no problem on other pages, but when I go to the register area, the page takes around 120 seconds. What can I do?
I haven't changed anything; I'm just experiencing this slow loading on the register page. Can anyone help?
Solution 1: Check the browser console first. If required JS and CSS assets fail to load, the browser keeps retrying them, which can stall the page.
Also try setting max_execution_time to 120, max_input_time to -1, and memory_limit to at least 512M. Laravel Debugbar can help you profile the page and find the actual bottleneck. Finally, the delay may also come from DNS resolution or server location.
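A sketch of the suggested php.ini values (whether they help depends on what is actually slow; profile the page first):

; php.ini (values suggested in the answer above)
max_execution_time = 120
max_input_time = -1
memory_limit = 512M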

Crawl small homepage with metadata.transfer and N:M relationships

Hi folks,
We use StormCrawler with Elasticsearch to build an index of our homepage, which consists of "old pages" and "new pages".
My question in short:
If two pages A (old) and B (new) link to page X, how can we pass metadata from B to X?
My question in long:
We relaunched our homepage step by step, so at the moment we have PDF files that are reachable only via the old HTML pages, only via the new HTML pages, or both ways.
For "order by" purposes we must mark all PDF files that are reachable via the new HTML pages.
So we added "newHomepage=true" to seeds.txt and "metadata.transfer/-newHomepage" to crawler-conf.yaml: fine :-)
But for the PDF files that are reachable from old !and! new HTML pages, we now have a race condition: if a PDF file is DISCOVERED from an old page first, this information (newHomepage=false) lands in the status index and cannot be overridden.
(StatusUpdaterBolt does not override documents; IndexerBolt does override by default.)
To make things more complicated: in our case a URL (on an HTML page) to a PDF is redirected two times before the file is delivered.
So from my point of view we have two possibilities:
Start the crawler two times: first index only the new pages (and all reachable PDF files), then index the old pages.
--> problems with new pages that change after the crawler has started
Store "outbound_links" and use them to set "newHomepage" independently of the crawler
--> short periods with wrong metadata in the index
Any advice or other ideas?
Best regards
Karsten
Thanks for sharing your problem, and great to hear that you are using SC. This is an interesting and unusual use case.
Your analysis of the problem is correct. An intuitive approach would be to extend the default StatusUpdaterBolt so that it updates the metadata if a document already exists. You'd need to remove the part that checks whether the doc has a status of DISCOVERED.
This would slow things down, but since you are dealing with a single website, this should not have a massive impact.
You could push the logic even further by setting a new nextFetchDate if the document had been fetched so that it gets refetched and updated quicker in the doc index (as opposed to the status one).

Tweaking Magento for performance

I'm looking at the performance (server load time) of a Magento site and trying to tune the search result pages. I realized that when I disabled all the heavy things like top navigation, layered navigation and product listing, and cleared all caches, the Magento core still runs about 60 SQL queries against the database. Does anyone have a procedure for getting rid of them, or for reducing them to an acceptable amount?
Also, can I somehow reduce the time spent creating blocks?
Thank you very much,
Jaro.
Magento is an extremely flexible ecommerce framework, but that flexibility comes with a price: performance. This answer is a collection of pointers and some details on caching (especially for blocks).
One thing to consider is the Magento environment, e.g. tuning PHP, the web server (favor nginx over Apache), and MySQL. Also, set up a good caching backend for Magento. All of this is covered e.g. in the Magento Performance Whitepaper, which also applies to the CE.
After the environment is set up, the other side of things is the code.
Reducing the number of queries is possible for some pages by enabling the flat table catalog (System > Configuration > Catalog > Frontend), but you will always have a high number of queries.
You also can't really reduce the time spent creating the blocks except by tuning the environment (APC, memory, CPU). So as the other commenters said, your best choice is utilizing the caching functionality that Magento has built in.
Magento Block Caching
Because you specifically mentioned blocks in the question, I'll elaborate a bit more on block caching. Block caching is governed by three properties:
cache_lifetime
cache_key
cache_tags
All these properties can be set in the _construct() method of a block using setData() or magic setters, or by implementing the associated getter methods (getCacheLifetime(), getCacheKey(), getCacheTags()).
The cache_lifetime is specified in (integer) seconds. If it is set to false (boolean), the block will be cached forever (no expiry). If it is set to null, the block will not be cached (this is the default in Mage_Core_Block_Abstract).
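For illustration, a minimal sketch of setting the lifetime in a custom block's _construct() (the block class and the one-hour value are hypothetical examples):

class My_Module_Block_Example extends Mage_Core_Block_Template
{
    protected function _construct()
    {
        parent::_construct();
        // cache this block's HTML output for one hour
        $this->addData(array('cache_lifetime' => 3600));
    }
}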
The cache_key is the unique string that is used to identify the cache record in the cache pool. By default it is constructed from the array returned by the method getCacheKeyInfo().
// Mage_Core_Block_Abstract
public function getCacheKeyInfo()
{
    return array(
        $this->getNameInLayout()
    );
}

public function getCacheKey()
{
    if ($this->hasData('cache_key')) {
        return $this->getData('cache_key');
    }
    /**
     * don't prevent recalculation by saving generated cache key
     * because of ability to render single block instance with different data
     */
    $key = $this->getCacheKeyInfo();
    //ksort($key); // ignore order
    $key = array_values($key); // ignore array keys
    $key = implode('|', $key);
    $key = sha1($key);
    return $key;
}
The best way to customize the cache key in custom blocks is to override the getCacheKeyInfo() method and add the data that you need to uniquely identify the cached block as needed.
For example, in order to cache a different version of a block depending on the customer group you could do:
public function getCacheKeyInfo()
{
    $info = parent::getCacheKeyInfo();
    $info[] = Mage::getSingleton('customer/session')->getCustomerGroupId();
    return $info;
}
The cache_tags are an array that enables cache segmentation. You can delete the sections of the cache matching one or more tags only.
In the admin interface under System > Cache Management you can see a couple of the default cache tags that are available (e.g. BLOCK_HTML, CONFIG, ...). You can use custom cache tags, too, simply by specifying them.
This is part of the Zend_Cache implementation, and needs to be customized far less frequently compared to the cache_lifetime and the cache_key.
Other Caching
Besides blocks Magento caches many other things (collection data, configuration, ...).
You can cache your own data using Mage::app()->saveCache(), Mage::app()->loadCache(), Mage::app()->cleanCache() and Mage::app()->removeCache(). Please look in Mage_Core_Model_App for details on these methods; they are rather straightforward.
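A minimal sketch of that pattern (the cache id, the tag and the computeExpensiveData() helper are hypothetical):

$cacheId = 'my_custom_data'; // hypothetical cache id
$data = Mage::app()->loadCache($cacheId); // returns false on a cache miss
if ($data === false) {
    $data = serialize(computeExpensiveData()); // hypothetical expensive computation
    // save for 7200 seconds, tagged so it can be cleaned selectively
    Mage::app()->saveCache($data, $cacheId, array('MY_CUSTOM_TAG'), 7200);
}
$result = unserialize($data);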
You will also want to use a full page cache module. If you are using the Magento EE, you already have one. Otherwise search Magento Connect - there are many options (commercial).
Some of those modules also tune various parts of Magento for you beyond the full page caching aspect, e.g. Nitrogento (commercial).
Using a reverse proxy like Varnish is also very beneficial.
There are quite a number of blog posts on this subject. Here is one post by the publishers of the Nitrogento extension.
If you are running Magento on a more low-scale environment, check out my post on the optimization of the file cache backend on magebase.com.
I am adding additional comments for speed:
Instead of using Apache, use nginx or LiteSpeed.
Make sure the flat catalog is used.
If possible use FPC (full page cache).
Compiler mode should be set on.
Merge CSS and JS (Fooman Speedster).
Use image sprites to reduce the number of requests.
Use the query cache but avoid a size greater than 64 MB.
Remove all modules not in use by removing their XML. Just disabling will not do.
Sessions should be on RAM.
Use of APC is recommended.
Your cron should run in off-peak hours.
Delete additional stores if not in use.
Delete cart rules if not in use.
Optimize images for size.
Use Ajax wherever possible.
CMS blocks take more time than a Magento block, so unless you want a block to be modifiable do not use CMS blocks.
Do not use count() on a collection; use getSize() to get the collection size (see the sketch after this list).
Minimize the number of searchable attributes, as these result in columns in the flat catalog table and will slow down your search.
Use of Solr search is recommended. It comes with the EE version, but it can be installed with CE as well.
Minimize customer groups as suggested in the comments.
Enable compression in .htaccess (mod_gzip for Apache 1.3, mod_deflate for Apache 2).
Remove staging stores if on EE.
Use Apache mod_expires and be sure to set how long files should be cached, in case you are on an Apache server.
Use a Content Delivery Network (CDN).
Enable Apache KeepAlive.
Make your output W3C compliant.
Use of getChildHtml('childName') is recommended, as this will cache the block, as opposed to direct use of block code in a .phtml file.
Make sure cron is run so as to clean logs stored in the database.
The number of days logs are kept should be minimized as per requirements.
Load the cache into RAM if memory permits.
Reduce hard disk file reads and try reads from RAM, as this is faster.
Upgrade the PHP version to above 5.3.
If on EE, make sure that most pages are delivered without application initialization. Even if one container needs application initialization, it is going to affect execution speed, as apart from URL rewrites most of the other code will get executed.
Check the XML for blocks placed in the default handle, and if those blocks are not on a specific page, move those XML values from the default handle to specific handles. It has been observed that lots of blocks are executed that are not displayed.
If using FPC, make sure your containers are cached and repeat requests for a container are delivered via the cache. Improper placeholder definition results in the container cache not being used and new container content being generated each time.
Analyze page blocks and variables, and if possible add those variables/blocks to the cache.
Switch off log writing in Magento.
Remove the admin notification module.
Use image sprites.
Use a web test tool to analyse the number of requests and other HTML-related parameters responsible for download time, and act accordingly.
Remove attributes if not needed. With proper care we can even remove system attributes if not in use.
If on Enterprise, make sure partial indexing is used effectively.
Write your own Solr search population to bypass Magento search indexing.
Clean _cl tables or reduce _cl table rows.
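As referenced in the list above, a short sketch of the count() vs. getSize() point (the status filter is just an example):

// getSize() runs a single SELECT COUNT(*) without loading the items;
// count($collection) would load every product into memory first.
$collection = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToFilter('status', 1);
$total = $collection->getSize();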
I would add to the list: try to avoid the file cache if possible; replace it with APC / Redis / Memcache (as suggested by Jaro).
Remove system attributes not in use (be careful, do a thorough check before removing).
There are some crontab jobs that are not applicable to all stores, so depending on your store's features those can be removed.
Optimize through proper attribute management, e.g. setting an attribute to required, searchable, or used in listing only where needed.
Some observers are not required for all stores, so if those observers are not applicable to a specific Magento site, they should be removed.
Make sure FPC is applicable to most of the site's pages, especially when you have added new controllers that deliver a page.
Magento has lots of features, and for this it has many events and associated observers. There are a few features that are not used by a store, so any observer related to such a feature should be removed. E.g.: the Enterprise version has a category permission concept; if it is not used, it is recommended that the permission-related save-after observers be removed.
If a specific attribute is to be saved for a product, then instead of calling $product->save(), call a function that saves only that specific attribute (see the sketch after this list).
In the EE version that has partial indexing and triggers, modify the triggers to avoid multiple entries to _cl tables.
In .phtml files, do not bypass blocks and use models or resources directly, as this will result in no caching, which in turn means more work for Magento.
Deliver images depending on the device in use.
Some of the FPCs recommended for Community: Lesti (free as of this date), Amasty (commercial), Extender (commercial) and Bolt (commercial).
Warm the cache.
Control bots via .htaccess during peak hours.
Pre-populate values in a custom table for layered navigation via a custom script executed daily by cron.
Make sure to avoid unwanted keys to reduce cache size.
Use a higher PHP version (5.4+).
Use a higher MySQL version (5.5+).
Reduce the number of DOM elements.
Move all JS out of HTML pages.
Remove commented-out HTML.
Modify triggers if on the Enterprise version (1.13 or higher) so as to reduce _cl table entries, as these entries result in cache flushing, which in turn results in a lower cache hit rate and hence more TTFB time.
Use Magmi to import products.
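For the single-attribute save mentioned above, a sketch of the usual Magento 1 idiom (the attribute code and value are examples):

// saves only one attribute instead of triggering a full product save
$product->setData('special_price', 9.99); // example attribute and value
$product->getResource()->saveAttribute($product, 'special_price');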
As Vinai said, Magento is all about extensibility; raw performance is secondary, but remedied by things like indexing and caching. Significantly improving performance without caching is going to be very difficult. Short of full-page caching, enabling block caching is a good method of improving performance, but proper cache invalidation is key. Many blocks are cacheable but not configured to be cached by default, so identify the slowest ones using profiling and use Vinai's guide for enabling caching. Here are a few additional things to keep in mind with block caching:
Any block that lists product info should have the product's tag which is 'catalog_product_'.$productId. Similarly, use 'catalog_category_'.$categoryId for categories. This will ensure the cache is invalidated when the product or category is saved (edited in backend). Don't set these in the constructor, set them in an overridden getCacheTags() so that they are only collected when the block is saved and not when it is loaded from cache (since that would defeat the purpose of caching it).
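A sketch of that advice (the getProductCollection() getter is hypothetical; use whatever supplies your block's products):

public function getCacheTags()
{
    $tags = parent::getCacheTags();
    // collect entity tags lazily, only when the block is actually saved to cache
    foreach ($this->getProductCollection() as $product) { // hypothetical getter
        $tags[] = Mage_Catalog_Model_Product::CACHE_TAG . '_' . $product->getId();
    }
    return $tags;
}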
If you use https and the block can appear on an https page and includes static resources, make sure the cache key includes Mage::app()->getRequest()->isSecure() or else you'll end up with http urls on https pages and vice versa.
Make sure your cache backend has plenty of capacity and avoid needless cache flushes.
Don't cache child blocks of a block that is itself cached unless the parent changes much more frequently than the child blocks or else you're just cluttering your cache backend.
If you do cache tagging properly you should be able to use a very long default cache lifetime safely. I believe setting "false" as the lifetime actually uses the default, not infinite. The default is 7200 seconds but can be configured in local.xml.
Using the redis backend in most cases will give you the best and most consistent performance. When using Redis you can monitor used memory size using this munin plugin.
Just to follow on from Mark... most of the tables in the Magento database are InnoDB. Whilst the query cache can be used in a few specific places, the following are more directly relevant...
innodb_buffer_pool_size
innodb_thread_concurrency
innodb_flush_method
innodb_flush_log_at_trx_commit
I also use
innodb_file_per_table
as this can be beneficial in reorganising specific tables.
If you give the database enough resources (within reason), the amount of traffic really doesn't load the server up at all, as the majority of queries are repeats anyway and are delivered out of the database cache.
In other words, you're probably worrying about nothing...
Make sure the MySQL query cache is turned on, and set these variables in MySQL (they may need tweaking depending on your setup).
query_cache_type=1
query_cache_size=64M
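Pulling together the variables from this answer and the InnoDB ones above, a hedged my.cnf sketch (the values are illustrative starting points, not recommendations; tune them to your RAM and workload):

# my.cnf (illustrative values only)
[mysqld]
innodb_buffer_pool_size = 1G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
query_cache_type = 1
query_cache_size = 64M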
I found a very interesting blog post about Magento performance optimization; it contains many configuration settings for your server and your Magento store, and was very helpful for me.
http://www.mgt-commerce.com/blog/magento-on-steroids-best-practice-for-highest-performance/
First you need to audit and optimize time to first byte (TTFB).
Magento has profiler built-in that will help you identify unoptimized code blocks.
Examine your template files and make sure you DO NOT load product models inside a loop (a common performance hog):
foreach ($collection as $_product) {
    $_product = Mage::getModel('catalog/product')->load($_product->getId());
}
I see this code often in product/list.phtml
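A sketch of the better pattern, assuming the needed attributes are known up front (the attribute names are examples): select them on the collection so no per-item load() is needed.

// in a block or helper: load the needed attributes in one query
$collection = Mage::getResourceModel('catalog/product_collection')
    ->addAttributeToSelect(array('name', 'price', 'small_image'));
// in the template: the attribute is already in memory, no load() required
foreach ($collection as $_product) {
    echo $this->htmlEscape($_product->getName());
}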
I wrote a step-by-step article on how to optimize TTFB
Disclaimer: the link points to my own website

Magento Configurable Products + Simple Products

Current Magento setup:
Configurable Products + Simple Products
Problem:
A configurable product built from 2000 simple products is causing an issue: Magento stops responding if I make a configurable product from more than 500 simple products.
Error Message: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 2186999 bytes) in /homepages/3/d347795961/htdocs/magento/lib/Zend/Db/Statement/Pdo.php on line 228
product name = test
Type= configurable
Associated products = 696 simple products
Error Message: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 311296 bytes) in /homepages/3/d347795961/htdocs/magento/app/design/frontend/default/flp/template/catalog/product/price.phtml on line 290
Operating System: 1&1 Dual Core – L Dedicated Server
CPU: Opteron1216
Clock Rate: 2 x 2.4 GHz
Ram: 2GB (RAID 1 Software)
Memory limit: 40M
Comments:
I have been told by my developer that if I obtain a server with 312M memory or more – he recommended a company called www.bluehost.com – this should solve the problem?
I have 3 products with numerous size variants each, adding up to 2000 possibilities between the 3 different products. I also have 2 more products with approx. 1500 size variants or possible combinations, which we have not even attempted to load yet due to the aforementioned.
My developer is now offering "simple products with custom options" as a solution (http://s347795977.websitehome.co.uk/magento/index.php/test-product-4158.html). However, doing it this way will apparently not allow me to check stock, which has a direct bearing on trying to integrate Magento with the Sage accounting package. I am also not sure what other problems or restrictions it will raise, i.e. changing pricing easily.
Questions:
- Is it possible to have numerous variations (size combinations) on a specific product using the Configurable Products + Simple Products setup?
- This being the case, if I swap to another server package offering 312M or more, will this solve the problem, and more importantly allow me to load the additional 1500 products over and above the 2000 I am currently having a problem with?
Any advice or assistance regarding this matter would greatly be appreciated.
Yes, theoretically it's entirely possible to have large combinations of simple products for a configurable product. In practice, performance problems (such as the one you are experiencing) will limit you.
Maybe? More memory will allow you to bump your PHP memory limit higher (something you may be able to do even now), but the essential problem of loading that many products is still present.
The first thing to try is to bump PHP's memory limit. 40M is pretty low, and upping it may resolve your problem.
Otherwise, would it be possible to make slightly different permutations for the products? e.g. if the current configurable attributes are foo, bar and baz, make a product for each value of baz to limit the number of configurations?
Hope that helps!
Thanks,
Joe
You should raise memory_limit a bit. You can do this in php.ini on a VPS or dedicated server, or on hosting that allows a custom php.ini. Alternatively, you can change the value in .htaccess.
For example:
php_value memory_limit 128M
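The php.ini equivalent would be (pick a value that fits your hosting plan):

; php.ini
memory_limit = 128M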

Apache Redirects/Rewrite Maximum

I have a migration project from a legacy system to a new system. The move to the new system will create new unique ids for the objects being migrated; however, my users and search indexes will have the URLs with the old ids. I would like to set up an Apache redirect or rewrite to handle this, but am concerned about performance with that large a number of objects (I expect to have approximately 500K old-id-to-new-id mappings).
Has anyone implemented this at this scale? Or does anyone know if Apache can stand up to this big a redirect mapping?
If you have a fixed set of mappings, you should give a mod_rewrite rewrite map of the type "Hash File" a try.
I had the very same question recently. As I found no practical answer, we implemented an .htaccess with 6 rules, of which 3 had 200,000 conditions.
That means an .htaccess file with a size of 150 MB. It was actually fine for half a day, while no one was using this particular website, even though page load times were in the seconds. The next day, however, our whole server got hammered, with loads well above 400 (the machine is 8 cores, 16 GB RAM, SAS RAID5, so usually no problem with resources).
If you need to implement anything like this, design your rules so they don't need conditions, and put them in a DBM rewrite map. This easily solved the performance issues for us.
http://httpd.apache.org/docs/current/rewrite/rewritemap.html#dbm
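For reference, a hedged sketch of what such a DBM-backed map looks like in the server (or vhost) config; the paths and URL pattern are hypothetical, and the source file is plain "key value" lines compiled with Apache's httxt2dbm tool:

# build the map once:  httxt2dbm -i idmap.txt -o idmap.map
RewriteEngine On
RewriteMap idmap "dbm:/etc/apache2/idmap.map"
# look up the old id; fall back to /notfound if the key is missing
RewriteRule ^/old/(\d+)$ ${idmap:$1|/notfound} [R=301,L]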
Can you phrase the rewrites using a smaller number of rules? Is there a pattern which links the old URLs to the new ones?
If not, I'd be concerned about Apache with 500K+ rewrite mappings; that's just way past its comfort zone. Still, it might surprise you.
It sounds to me like you need to write a database-backed application just to handle the redirects, with the mapping itself stored in the database. That would scale much better.
I see this is an old topic, but did you ever find a solution?
I have a case where the developers are redirecting more than 30,000 URLs using RedirectMatch in an .htaccess file.
I am concerned about performance and management errors given the size of this file.
What I recommended is that, since all of the old URLs have the form
/sub/####
they move the mapping to the database and create
/sub/index.php
Redirect all requests for
www.domain.com/sub/###
to
www.domain.com/sub/index.php
Then have index.php send the redirect, since the new URLs and old ids can be looked up in the database.
This way, only HTTP requests for the old URLs hit the rewrite processing, instead of every single HTTP request.
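A minimal sketch of such an index.php, assuming an internal rewrite (so REQUEST_URI still contains the old id) and a hypothetical id_map table:

<?php
// /sub/index.php - look up the new URL for an old id and send a 301
$oldId = basename(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));
$pdo   = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass'); // hypothetical credentials
$stmt  = $pdo->prepare('SELECT new_url FROM id_map WHERE old_id = ?');  // hypothetical table
$stmt->execute(array($oldId));
if ($newUrl = $stmt->fetchColumn()) {
    header('Location: ' . $newUrl, true, 301); // permanent redirect, search-engine friendly
} else {
    header('HTTP/1.1 404 Not Found');
}
exit;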
