Magento Configurable Products + Simple Products

Current Magento setup:
Configurable Products + Simple Products
Problem:
A configurable product built from 2000 simple products is causing an issue: Magento stops responding if I make a configurable product from more than 500 simple products.
Error Message: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 2186999 bytes) in /homepages/3/d347795961/htdocs/magento/lib/Zend/Db/Statement/Pdo.php on line 228
product name = test
Type= configurable
Associated products = 696 simple products
Error Message: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 311296 bytes) in /homepages/3/d347795961/htdocs/magento/app/design/frontend/default/flp/template/catalog/product/price.phtml on line 290
Operating System: 1&1 Dual Core – L Dedicated Server
CPU: Opteron1216
Clock Rate: 2 x 2.4 GHz
Ram: 2GB (RAID 1 Software)
PHP memory limit: 40M
Comments:
I have been told by my developer that obtaining a server with 312M of memory or more – he recommended a company called www.bluehost.com – should solve the problem. Is that right?
I have 3 products with numerous size variants each, adding up to 2000 possibilities between the 3 different products. I also have 2 more products with approx. 1500 size variants or possible combinations, which we have not even attempted to load yet due to the problem above.
My developer is now offering "simple products with custom options" as a solution: http://s347795977.websitehome.co.uk/magento/index.php/test-product-4158.html. However, doing it this way will apparently not allow me to check stock, which will have a direct bearing on trying to integrate Magento with the Sage accounting package. I am also not sure what other problems or restrictions it will raise, e.g. changing pricing easily.
Questions:
- Is it possible to have numerous variations (size combinations) on a specific product using the Configurable Products + Simple Products setup?
- If so, will switching to another server package offering 312M or more solve the problem, and more importantly allow me to load the additional 1500 products over and above the 2000 I am currently having a problem with?
Any advice or assistance regarding this matter would be greatly appreciated.

Yes, theoretically it's entirely possible to have large combinations of simple products for a configurable product. In practice, performance problems (such as the one you are experiencing) will limit you.
Maybe? A bigger server will let you set your PHP memory limit higher (something you may be able to do even now), but the essential problem of loading that many products is still present.
The first thing to try is to bump PHP's memory limit. 40M is pretty low, and upping it may resolve your problem.
Otherwise, would it be possible to make slightly different permutations for the products? E.g. if the current configurable attributes are foo, bar and baz, make a separate product for each value of baz to limit the number of configurations per product.
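To make that concrete, here is a quick sketch of the arithmetic (the attribute names and option counts are illustrative, not taken from the question):

```python
# Hypothetical option counts per configurable attribute:
options = {"foo": 10, "bar": 10, "baz": 20}

# One configurable product covering every combination:
total = 1
for count in options.values():
    total *= count                      # 10 * 10 * 20 = 2000 simple products

# One configurable product per value of "baz" instead:
per_product = total // options["baz"]   # 100 simple products each
products_needed = options["baz"]        # 20 smaller configurable products
```

Each product then loads only its own slice of simple products, which is what keeps the memory use per page manageable.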
Hope that helps!
Thanks,
Joe

You should raise memory_limit a bit. You can do that in php.ini on a VPS or dedicated server, or on shared hosting that allows a custom php.ini. Alternatively, you can change the value in .htaccess.
For example:
php_value memory_limit 128M


Magento Loading Time (ttfb over 1.6 Minutes)

We increased our products from 12k to 50k.
Now we're facing a huge loading-time issue: pages take over 1-2 minutes to load. It seems like the TTFB (time to first byte) takes much too long, and our developers can't find a solution or reason for it. Does anyone know a solution, or someone who could fix it? (paid)
A slow example URL would be: https://gear2game.ch/unterhaltungselektronik/tv-home-cinema/tv.html
Also good to know: the homepage is fast, but all categories and products are slow :(
Best regards
Sandro
When the homepage is fast but categories and products are slow, that is frequently an indication that way too much data is being loaded somewhere in a custom module, or that you're not making use of indexers (which are a necessity with 50k products).
A slow TTFB also indicates the issue is on your backend (and given you just added a bunch of products, I'd say it's because some of the settings Magento has for handling large amounts of data are not enabled).
These are the things I'd start with:
Have you reindexed?
Are you using flat tables?
Are your caches enabled?
Are you using the Magento swatch module? This is notoriously bad with many products.
With 50k products you should really think about Varnish.
Here are some useful links:
https://varnish-cache.org/
https://www.magentocommerce.com/magento-connect/turpentine-varnish-cache.html
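Independent of the fixes above, it helps to measure TTFB in a reproducible way rather than eyeballing it in the browser. A minimal sketch using only the Python standard library (point it at your own category URL; nothing here is Magento-specific):

```python
import http.client
import time
from urllib.parse import urlsplit

def ttfb(url: str) -> float:
    """Seconds from sending the request until the first body byte arrives."""
    parts = urlsplit(url)
    Conn = (http.client.HTTPSConnection if parts.scheme == "https"
            else http.client.HTTPConnection)
    conn = Conn(parts.netloc, timeout=120)
    start = time.monotonic()
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()   # returns once headers are in
    resp.read(1)                # block until the first byte of the body
    elapsed = time.monotonic() - start
    conn.close()
    return elapsed
```

Run it a few times before and after each change (reindex, flat tables, caches) so you can see which step actually moves the number.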

When 777ms are still not good enough (MariaDB tuning)

Over the past couple of months, I've been on a rampage optimising a Joomla website that I'm managing. When I first started, the homepage used to open in around 30-40 seconds, in spite of repeatedly upgrading my dedicated server, as suggested by the hosting firm.
I was able to bring the page speed down to around 800ms by religiously following all the recommendations of the likes of GTmetrix and Pingdom Tools (such as using JCH Optimize, .htaccess caching and compression settings, and MaxCDN), but now I'm stuck optimising my my.cnf settings, trying various settings suggested in a number of related articles. The fastest I'm getting the homepage to open with the current settings is 777ms after a refresh, which might not sound too bad, but look at the configuration of my dedicated server:
2 Quads, 128GB, 2x480GB SSD RAID
CloudLinux/Cpanel/WHM
Apache/suEXEC/PHP5/FastCGI
MariaDB 10.0.17 (all tables converted to XtraDB/InnoDB)
The site traffic is moderate: between 10,000 and 20,000 visitors per day, with around 200,000 pageviews.
These are the current my.cnf settings. My goal is to bring the pagespeed down to under 600ms, which should be possible with this kind of hardware, provided it is tuned the right way.
[mysqld]
local-infile=0
max_connections=10000
max_user_connections=1000
max_connect_errors=20
key_buffer_size=1G
join_buffer_size=1G
bulk_insert_buffer_size=1G
max_allowed_packet=1G
slow_query_log=1
slow_query_log_file="diskar/mysql-slow.log"
long_query_time=40
connect_timeout=120
wait_timeout=20
interfactive_timeout=25
back_log=500
query_cache_type=1
query_cache_size=512M
query_cache_limit=512K
query_cache_min_res_unit=2K
sort_buffer_size=1G
thread_cache_size=16
open_files_limit=10000
tmp_table_size=8G
thread_handling=pool-of-threads
thread_stack=512M
thread_pool_size=12
thread_pool_idle_timeout=500
thread_cache_size=1000
table_open_cache=52428
table_definition_cache=8192
default-storage-engine=InnoDB
[innodb]
memlock
innodb_buffer_pool_size=96G
innodb_buffer_pool_instances=12
innodb_additional_mem_pool_size=4G
innodb_log_bugger_size=1G
innodb_open_files=300
innodb_data_file_path=ibdata1:400M:autoextend
innodb_use_native_aio=1
innodb_doublewrite=0
innodb_user_atomic_writes=1
innodb_flus_log_at_trx_commit=2
innodb_compression_level=6
innodb_compression_algorithm=2
innodb_flus_method=O_DIRECT
innodb_log_file_size=4G
innodb_log_files_in_group=3
innodb_buffer_pool_instances=16
innodb_adaptive_hash_index_partitions=16
innodb_thread_concurrency
innodb_thread_concurrency=24
innodb_write_io_threads=24
innodb_read_io_threads=32
innodb_adaptive_flushing=1
innodb_flush_neighbors=0
innodb_io_capacity=20000
innodb_io_capacity_max=40000
innodb_lru_scan_depth=20000
innodb_purge_threads=1
innodb_randmon_read_ahead=1
innodb_read_io_threads=64
innodb_write_io_threads=64
innodb_use_fallocate=1
innodb_use_atomic_writes=1
inndb_use_trim=1
innodb_mtflush_threads=16
innodb_use_mfflush=1
innodb_file_per_table=1
innodb_file_format=Barracuda
innodb_fast_shutdown=1
I tried Memcached and APCu, but they didn't work; the site actually runs 2-3 times faster with 'Files' as the caching handler in Joomla's Global Configuration. And yes, I ran MySQLTuner, but that was of no help.
I am a newbie as far as Linux is concerned and suspect that the above settings could be improved. Any comments and/or suggestions?
long_query_time=40
Set that to 1 so you can find out what the slow queries are.
max_connections=10000
That is unreasonably high. If you come anywhere near it, you will have more problems than failure to connect. Let's say only 3000.
query_cache_type=1
query_cache_size=512M
The Query cache is hurting performance by being so large. This is because any write causes all QC entries for the table to be purged. Recommend no more than 50M. If you have heavy writes, it might be better to change the type to DEMAND and pepper your SELECTs with SQL_CACHE (for relatively static tables) or SQL_NO_CACHE (for busy tables).
What OS?
Are the entries in [innodb] making it into the system? I thought these needed to be in [mysqld]. Check by doing SHOW VARIABLES LIKE 'innodb%';.
Ah, buggers; spelling errors:
innodb_log_bugger_size=1G
innodb_flus_log_at_trx_commit=2
inndb_use_trim=1
and more??
After you get some data in the slowlog, run pt-query-digest, and let's discuss the top couple of queries.
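Pulling those points together, the flagged lines would become something like the following (values are the suggestions above, not measurements; the misspelled innodb_* names are corrected to the settings the question apparently intended, and InnoDB options belong under [mysqld], not a separate [innodb] section):

```ini
[mysqld]
long_query_time  = 1      # log anything slower than 1s so slow queries surface
max_connections  = 3000   # 10000 was unreasonably high
query_cache_size = 50M    # a 512M query cache hurts on write-heavy workloads
query_cache_type = 1      # or DEMAND, with SQL_CACHE / SQL_NO_CACHE hints

# Misspelled names, corrected (as written, these settings were never applied):
innodb_log_buffer_size         = 1G   # was "innodb_log_bugger_size"
innodb_flush_log_at_trx_commit = 2    # was "innodb_flus_log_at_trx_commit"
innodb_flush_method            = O_DIRECT  # was "innodb_flus_method"
innodb_use_trim                = 1    # was "inndb_use_trim"
```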

Magento Disabling Flat rate when free shipping activated

Hi, when the cart total is above 25 euros I need to activate free shipping.
When it is above 25 euros I do get free shipping, but the flat rate also still appears on screen.
Can anyone help me disable the flat rate when free shipping is activated?
I'd prefer to do this through the Magento backend rather than changing code.
Magento ver. 1.7.0.2
Thanks
First go to Admin > System > Configuration > Sales > Shipping Methods.
There, set the free shipping minimum order amount to 25.
After that, try the link below, which covers disabling the other shipping methods:
https://magento.stackexchange.com/questions/13796/hide-other-shipping-methods-when-free-shipping-is-enabled/
Why use flat rates for your case?
I suggest using table rates instead: define 0 for cart totals above 25 and your flat charge for the range between 0 and 25.
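A minimal table-rates CSV for that setup might look like this (the 4.95 flat charge is a placeholder, and the header row should match what Magento 1 exports for the "Price vs. Destination" condition; verify against your own export before importing):

```csv
Country,Region/State,Zip/Postal Code,Order Subtotal (and above),Shipping Price
*,*,*,0.00,4.95
*,*,*,25.00,0.00
```

With this, carts from 0 to 25 euros get the flat charge and carts of 25 euros or more get a 0.00 rate, so no separate flat-rate method needs to be shown.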

Is there a maximum number of child products in VirtueMart?

Joomla 2.5 / VirtueMart
I'm using custom fields / plug-ins of type "Stockable variants" for creating child products.
I have a parent with 106 children, and the problem is that any change I now try to save on the parent is not saved.
It just redirects me to the VirtueMart dashboard.
What is the most likely cause of this problem?
Is there a limit?
Or is more Apache performance needed?
I was thinking that post_max_size was too low, so I changed it to 64M.
I had the same problem with many custom fields, and it was hard to find the solution. In my case it was the limit on the number of variables in a POST request.
You could try changing the php.ini variable max_input_vars (it may not be there; just add it). Set it to 2000 (the default is 1000):
max_input_vars = 2000
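As a rough sanity check, you can estimate how quickly a 106-child form blows past PHP's default limit (the per-child and parent field counts below are assumptions for illustration, not VirtueMart's actual form layout):

```python
children = 106
fields_per_child = 12    # assumed inputs per child row (SKU, price, stock, ...)
parent_fields = 60       # assumed inputs for the rest of the product form

total_inputs = parent_fields + children * fields_per_child
default_limit = 1000     # PHP's max_input_vars default

# With these assumptions the form exceeds the default limit; PHP silently
# drops the inputs past the cutoff, so the save appears to do nothing.
exceeds = total_inputs > default_limit
```

That silent truncation matches the symptom here: no error, just a save that never sticks.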

Huge page buffer vs. multiple simultaneous processes

One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table.
They have now bought a new server: 4 × 6 cores = 24 CPU cores, 48 GB RAM, and 2 RAID controllers with 256 MB cache and 8 SAS 15K HDDs each.
64-bit OS.
I'm wondering which would be the fastest configuration:
1) FB 2.5 SuperServer with a huge buffer: 8192 bytes × 3,500,000 pages = 29 GB
or
2) FB 2.5 Classic with a small buffer of 1000 pages.
Maybe someone has tested such a case before and will save me days of work :)
Thanks in advance.
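For reference, the memory arithmetic behind the two options (page size and page counts taken from the question; 80 connections from the stated average):

```python
page_size = 8192                 # bytes per database page

# Option 1: SuperServer -- a single shared page cache for all connections
super_pages = 3_500_000
super_cache = page_size * super_pages            # ~28.7 GB, shared

# Option 2: Classic -- a separate small cache in each connection's process
classic_pages = 1000
connections = 80
classic_cache = page_size * classic_pages * connections  # ~0.66 GB total
```

So the trade-off is one huge shared cache on a single process versus many tiny caches that leave most RAM free for the OS filesystem cache.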
Because there are many processors, I would start with Classic.
But try all of them.
Perhaps soon 2.5 with SuperClassic could be great for you.
Just to dig out the old thread for anyone who may need this:
we use FB Classic 2.5 on a 75 GB DB, on a machine almost the same as the one described.
SuperServer was inefficient during tests; buffer and page size changes only made performance a little bit less miserable.
Currently we use Classic with xinetd, page size = 16384, page buffers = 5000.
SuperServer will use ONLY ONE processor.
Since you have 24 cores, your best option is to use Classic.
SuperClassic is not yet ready to scale well in a multi-processor environment.
Definitely go with one of the 'classic' architectures.
If you're using Firebird 2.5, check out SuperClassic.
I currently have a client with similar requirements.
The best solution in that case was to install Firebird 2.5 SuperClassic and just leave the default small caching settings, because if you have free memory (RAM), both Windows and Linux do better caching of the database than Firebird does. Firebird's own caching is not really fast, so let the OS do it.
Also, depending on what backup software you use: if it creates full backups of the Firebird database often, you can deactivate forced writes on the databases. (Only do this if you know what you are doing and know what can happen when forced writes are deactivated.)
