Today I dealt with the task of loading a module's configuration into a running Magento site under heavy load. I copied the new module's config.xml file and everything else needed to fix an issue.
Our Magento runs with memcached caching backend.
To get the module running I had to clear the cache completely, and that had an impact on the site's performance; we had 500 concurrent users. So I'm looking for a solution for deploying configuration changes without clearing the cache.
Is there any?
Thanks for any thoughts and ideas.
Jaro.
Here is a method of updating the config cache rather than clearing it, thus avoiding race conditions.
https://gist.github.com/2715268
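If it helps to see the shape of the approach, here is a minimal sketch of a milder variant that only cleans the CONFIG cache tag and rebuilds it, rather than flushing every cache type (this is not the gist's code; the gist goes further and updates the cache entry in place):

    <?php
    // Sketch: refresh only the configuration cache instead of flushing
    // every cache type. Run as a one-off script from the Magento root.
    require_once 'app/Mage.php';

    Mage::app('admin');

    // Remove only the cache entries tagged CONFIG...
    Mage::app()->cleanCache(array(Mage_Core_Model_Config::CACHE_TAG));

    // ...then force a re-read of all config.xml files, which repopulates
    // the config cache with the new module's configuration.
    Mage::getConfig()->reinit();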
You don't have to clear the entire cache to load a module's configuration. You can install the module by using the Flush Magento Cache* option. Eventually you'll need to clear the cache to see your front-end changes if any were made. The best thing to do to minimize performance impact is to clear it during off-peak or low-usage times.
*edited - Thanks Fiasco Labs
You will always have to flush the cache when installing a module or changing its configuration. This is necessary to force a re-read of the configuration, to empty out incompatible opcode, and to make Magento re-read application code and templates for the changes you have just made.
Yes, it has a momentary impact on your site's performance, but skipping it can cause some really interesting issues.
I've had situations where using the button in Admin wasn't enough. For module installs, it's probably best practice to put the system in maintenance mode, make sure all admin sessions are logged out, check that everyone's out, and then manually delete the var/cache/mage--? folders. You then log back in on one admin session, let it run until you see an admin session has started, log back out, and then log back into Admin to start checking the site for full function of the freshly installed module.
This is of course overkill for simple config changes where a cache flush is sufficient.
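For reference, the manual deletion from the Magento root is just the following (this assumes the file cache backend writes to var/cache; with memcached as the backend there may be little or nothing on disk):

    # Magento 1 shards its file cache into var/cache/mage--0 ... mage--f.
    rm -rf var/cache/mage--*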
More info on clearing the cache in Magento
So, I just want to ask: how do you handle caching properly?
I'm having a problem when I deploy the application/site to our client's server.
Once I've successfully deployed it, the end users on the other side don't seem to get the latest changes on their end. We did some investigation and found that this happens because the cache in their browsers was still up and running...
The quick solution is to have them (the client's end users) manually clear the cache on their respective machines...
Unfortunately, this kind of solution is a little inconvenient, given that the cache has to be cleared manually on each machine...
So, is there any way to have the cache cleared automatically, or something like that?
How do deployments happen at your own company or for your clients? Do you do the same thing?
Happy to hear your thoughts about this, thanks!
I have built a new site for a customer, taken over managing their domain, and moved them to new hosting. The previous site and hosting have been completely taken down.
I am running into a major issue that I am not sure how to fix. The previous developer used a service worker to cache and load the previous site. The problem is that users who had previously visited the site keep seeing the old one, since it all loads from the cache. The old site no longer even exists, so I have no way of adding any JavaScript to remove the service worker from their browsers unless they hit the new site.
Has anyone ever had this issue and know of a way to resolve it? Note, asking the users to delete the service worker from their browser won't work.
You can use cache busting to achieve this. As per Keycdn:
Cache busting solves the browser caching issue by using a unique file version identifier to tell the browser that a new version of the file is available. Therefore the browser doesn't retrieve the old file from cache but rather makes a request to the origin server for the new file.
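In practice this just means giving each asset URL a version identifier that changes on deploy; the file name and version below are only illustrative:

    <!-- Before: the browser may keep serving its stale cached copy -->
    <script src="/js/app.js"></script>

    <!-- After: bumping the version string on each deploy forces a re-fetch -->
    <script src="/js/app.js?v=2.0.1"></script>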
In case you want to update the service worker itself, you should know that an update is triggered if any of the following happens:
A navigation to an in-scope page.
A functional event such as push or sync, unless there's been an update check within the previous 24 hours.
Calling .register() only if the service worker URL has changed. However, you should avoid changing the worker URL.
Updating the service worker
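If the goal is simply to get rid of the old site's worker once users reach the new site, a page on the new site can also unregister everything in scope using the standard browser API:

    // Unregister any service workers left behind by the old site.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.getRegistrations().then(function (registrations) {
        registrations.forEach(function (registration) {
          registration.unregister();
        });
      });
    }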
Maybe using the Clear-Site-Data header would be the most thorough solution.
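Sending it from the new site could look like this (a PHP sketch; in browsers that support the header, the "storage" directive also removes service worker registrations):

    <?php
    // Ask the browser to drop its cached resources and storage for
    // this origin, including service worker registrations.
    header('Clear-Site-Data: "cache", "storage"');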
Considering installing Varnish Cache on a VPS web server, but wondering what issues that might cause if, e.g., PHP code needs debugging. In the past I've found that caching systems make debugging more difficult because the cached version of a web page does not change immediately after a code change. Ideally debugging should all be done on a test site, but sometimes it's necessary to do it on the production version.
Can Varnish Cache be temporarily turned off either for individual domains or for the whole server whilst debugging?
None or very little development should be done on a production box, but indeed, sometimes you need to troubleshoot things on the live site.
Varnish makes it a bit troublesome to see why a particular request to a page failed: it will mask fatal PHP errors with its own "Backend Fetch Failed" error. This makes it less obvious that there's a problem with your PHP code and makes you immediately blame Varnish.
You can temporarily make Varnish pass through its cache by piping all requests directly to the configured backend. In this way it behaves exactly the same with regard to debugging PHP code (as if Varnish weren't actually there!). My steps for this are:
Open your VCL file and, immediately after sub vcl_recv {, place the line return (pipe); (see the sketch after these steps).
Reload your Varnish configuration with service varnish reload or systemctl reload varnish (depends on your Linux distro).
To go back to caching (production setting), remove the line and reload Varnish again. No downtime when doing these steps.
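For reference, the edit from step 1 looks like this (file path assumed; adjust for your install):

    # /etc/varnish/default.vcl
    sub vcl_recv {
        # Temporary debugging switch: pipe every request straight to the
        # backend so Varnish neither caches nor rewrites anything.
        return (pipe);

        # ... the normal vcl_recv logic below is now unreachable ...
    }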
We have Magento EE 1.14. The admin was working fine until two days ago, when its speed dropped dramatically. The frontend is not affected, and there have been no changes to the code or server configuration. Here are my attempts to fix the problem, none of which worked:
Log cleaning is properly configured
Removed two unused extensions, but no improvement.
Tried disabling non-critical extensions to see if speed would improve, but also no luck.
I can NOT use a Redis cache at this time, but I have configured a new server that uses Redis and will move to it next month.
Sometimes the backend gains speed for a few minutes.
I enabled the profiler; the source of the delay is mage (screenshot attached).
Here are my questions:
Is there any way to know the exact reason for the mage delay?
Are there other tests I can use to identify the cause of the delay?
Thanks in advance,
It could be a delay on an external resource connection. Do you have New Relic or similar software? Check there for slow connections. If you don't have NR, profile the admin with blackfire.io. The Magento profiler is really unhelpful :)
Follow the steps below:
Delete unused extensions
It is best to remove unused extensions rather than just disabling them. If you disable an extension, it will still exist in the database. That not only increases the size of your database (DB) but also adds to the DB read time. So keep your approach clear: if you don't need it, DELETE it.
Keep your store clean by deleting unused and outdated products
Keep in mind that a clean store is a fast store. You can keep the front-end fast by caching and displaying only a limited set of products even if there are more than 10,000 items in the back-end, but the back-end cannot escape their weight. If the number of products keeps increasing, the backend may get slower, so it is best to remove unused products. Repeat this activity every few months to keep the store fast.
Reindexing
One of the most common reasons website administrators experience slow performance when saving a product is reindexing. Whenever you save a product, the Magento backend starts to reindex, and since you have a lot of products, this takes some time to complete and causes unnecessary delays.
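If reindex-on-save is part of the problem, the stock Magento 1 shell indexer lets you check status, switch indexers to manual mode, and rebuild during off-peak hours (catalog_product_price below is just an example indexer code):

    # Show available indexers and their current status
    php -f shell/indexer.php -- info
    php -f shell/indexer.php -- --status

    # Switch a heavy indexer to "Manual Update" so product saves stay fast...
    php -f shell/indexer.php -- --mode-manual catalog_product_price

    # ...and rebuild everything off-peak, e.g. from a nightly cron job
    php -f shell/indexer.php -- reindexall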
Clear the Cache
Caching is very important for any web application, as it saves the web server from having to process the same request again and again.
I am having performance problems with my website, and after profiling I noticed that the cache did not appear to be loading. So I went to the admin and looked at the Cache Management page, and all caches were disabled. I re-enabled them; sometimes one will show as enabled, sometimes none will.
When I am able to get Configuration cache to show as enabled, I can view the profiler on the front end and see the line:
mage::app::init::config::load_cache
This line was not showing in the profiler before I enabled the cache. However, after a short period of time (30 seconds or so), any caches that were enabled show as disabled again and the front end profiler no longer has this line (the cache isn't being used).
So far, I cannot get the cache to stay on. Apache owns var/cache and it has 777 permissions. The files are created there initially, but I am also using the APC cache.
Configuration is:
<cache>
    <backend>apc</backend>
    <prefix>SH_</prefix>
</cache>
Does anyone have any ideas?
Try changing the <prefix>SH_</prefix> setting to something that is guaranteed to be unique, e.g. your database name.
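For example, in app/etc/local.xml (the prefix value here is purely illustrative):

    <cache>
        <backend>apc</backend>
        <!-- any string unique to this Magento instance, e.g. your DB name -->
        <prefix>mystore_db_</prefix>
    </cache>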
Background
Given the information you provided, I suspect another Magento instance is running on the same machine with the same cache prefix.
Whenever you change the settings, Magento writes them to the database and then also saves them to the cache (see Mage_Core_Model_Cache::_initOptions()). Because the Magento instances share the same fast backend cache pool (due to the identical prefix), the settings are also used by the other host. Once the cache is cleared by the other host, their (disabled) settings are written to the cache, and your instance then also sees the caches as disabled.
I'm unable to provide evidence without the option to test, but, well, this is my best guess.