Magento error prevents me from flushing the cache to fix the error

I am getting the error "Mage registry key "_singleton/my_observer" already exists", which prevents me from clearing the cache and getting the site working again. The initial problem was caused by my accidentally adding a duplicate my_observer class to my config.xml; I have since removed both instances completely, but I still get the same error. I have removed every mention of my_observer from the site (I used PhpStorm to search the entire project, and it found none), yet the error keeps popping up.
I have tried flushing the cache through a shell command, but I only get the error 'php_network_getaddresses: getaddrinfo failed: Name or service not known'.
I have emptied the var/cache and var/session folders as well, to no avail.
I have cleared the cache in my browser, used another browser, and used incognito mode, all of which did not work either.
I know I need to flush the cache to make the site work again, but I basically can't flush the cache until I flush the cache.

Thank goodness I found an answer to my question. My cache is a Redis cache, and I used the following commands to flush it through the CLI (note: I had to install redis-tools for this to work):
Pick one of the options below:
redis-cli FLUSHDB
redis-cli -n DB_NUMBER FLUSHDB
redis-cli -n DB_NUMBER FLUSHDB ASYNC
redis-cli FLUSHALL
redis-cli FLUSHALL ASYNC
You can find your DB number in app/etc/local.xml.
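For anyone else hunting for the DB number, something like this works (host and port shown are the Redis defaults; the DB number 0 is just an example):
grep -A 10 '<cache>' app/etc/local.xml         # look for the <db> value in the Redis backend block
redis-cli -h 127.0.0.1 -p 6379 -n 0 FLUSHDB    # swap 0 for the <db> value you found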
Here is my source:
https://www.cyberciti.biz/faq/how-to-flush-redis-cache-and-delete-everything-using-the-cli/

Related

Phoenix Caches Indefinitely

I'm writing a project with a lot of static files.
Whenever I change a custom.js file, it gets changed in both
web/static/assets/custom.js and priv/static/assets/custom.js,
but when I try to reach the resource I get the old version, which is also corrupted most of the time.
I tried:
restarting the server
running brunch build
removing the whole _build directory
changing the files further
clearing browsers cache
using curl localhost:4000/assets/js/custom.js -H 'Pragma: no-cache'
Still the server serves the old file.
Edit
It seems to be an issue with an mtime difference between the Vagrant VM and the host.
So the real question is:
How do I eliminate that issue?
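A quick way to confirm the skew, assuming a standard Vagrant setup:
date -u                      # clock on the host
vagrant ssh -c 'date -u'     # clock inside the VM
If the two timestamps differ, the guest clock is drifting; re-syncing it (for example with vagrant reload, or by enabling NTP in the guest) should make the mtimes, and therefore the cache checks, consistent again.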

Manually Purging Nginx Cache Causes Errors in Log File

I am attempting to clear the nginx cache when the CMS (ExpressionEngine) publishes new content. I have just been purging the entire cache folder and letting the cache rebuild itself. That seems to be working fine, but it is filling up the error logs with entries like these:
2014/12/15 12:35:09 [crit] 21686#0: unlink() "/var/nginx/cache/default/6197dda0a6cadcec5563533cb6027580" failed (2: No such file or directory)
2014/12/15 12:35:10 [crit] 21686#0: unlink() "/var/nginx/cache/default/bb8eca6b51c655989bd717a9708b244e" failed (2: No such file or directory)
2014/12/15 12:35:10 [crit] 21686#0: unlink() "/var/nginx/cache/default/6f9b9aea38c5761a87cffd365e51e7a4" failed (2: No such file or directory)
It seems that nginx keeps track of the cache files and gets confused when it goes to purge them after I already did.
Is there a better way to be purging the cache that doesn't cause these errors?
Off the top of my head, one way of doing this is to specify a secret header in nginx that bypasses the cache, thus theoretically refreshing the existing files.
That said, there is nothing wrong with your way of doing it. The only ugliness is these log entries, which invariably show up as [crit] even though, for a manual purge, they are not actually critical. :)
"It appears that these errors occur when NGINX itself tries to delete cache entries after the time specified by the inactive parameter of the fastcgi_cache_path directive. The default for this is only 10 minutes, but you can set it to whatever value you want. I’ve set it to 7 days myself, which seems to work well as I haven’t seen this error at all after changing it."
Source: https://www.miklix.com/nginx/deleting-nginx-cache-puts-critical-unlink-errors-in-error-log/
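For context, the inactive parameter mentioned in the quote sits on the fastcgi_cache_path directive, so a rough way to check and raise it (paths and zone name below are illustrative):
grep -r fastcgi_cache_path /etc/nginx/        # find the directive in your config
# raise the value in the matching directive, for example:
#   fastcgi_cache_path /var/nginx/cache/default levels=1:2 keys_zone=default:10m inactive=7d;
sudo nginx -t && sudo service nginx reload    # validate the config, then reload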

OpenShift DIY, 503 error after deleting and re-adding testrubyserver.ruby file

I am trying the OpenShift DIY cartridge. I use a Windows system to manage the server from the command line. I managed to run a simple HTML5 website. I deleted the testrubyserver.ruby file from the webpage folder for test purposes and then added it again to my web folder. Now I have a 503 error. No restart, no stop, no start helps. I am stuck at 503. Does anyone know what to do? How can I make testrubyserver.ruby run again?
Solved my problem. I checked the log file in the app-root/logs folder. There I found out that
nohup: failed to run command `/..//testrubyserver.rb': Permission denied
I changed the file's permissions in FileZilla from rw to rwx so it could be executed, restarted the server, and then it worked.
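For reference, the same change from an SSH session would be something like this (run from the directory that contains the file):
chmod +x testrubyserver.rb    # add the execute bit so nohup can run the script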
I do not know if this is the right approach, but at least it got my app running again.

Changing Base URL in Magento

I am moving a Magento store from mydomaintest.com to mydomain.com.
When I say move, in this instance we simply used cPanel's Modify Account to change the domain name from mydomaintest.com to mydomain.com.
Then, following advice found in forums, I used phpMyAdmin to update the Magento core_config_data table with the new base URL for both the secure and unsecure entries.
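The update was along these lines (shown here via the mysql CLI; the database name and credentials are placeholders):
mysql -u DB_USER -p DB_NAME -e "UPDATE core_config_data SET value = 'http://mydomain.com/' WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');"
(If the site has SSL, web/secure/base_url would get https://mydomain.com/ instead.)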
After doing this I deleted all files in /var/cache.
Trying to access the site by domain name or IP is providing the following error:
Fatal error: require_once() [function.require]: Failed opening required '/home/mydomain/public_html/errors/report.php' (include_path='/home/mydomain/public_html/app/code/local:/home/mydomain/public_html/app/code/community:/home/mydomain/public_html/app/code/core:/home/mydomain/public_html/lib:.:/usr/lib/php:/usr/local/lib/php') in /home/mydomain/public_html/app/Mage.php on line 847
Please help, we are trying to go live today and can't seem to figure this one out.
Thanks!
John
Go to System > Index Management and reindex the data, as the indexes also contain the URL rewrites. Also be sure to check System > Cache Management (some versions still have that) and flush all caches, because var/cache is not the only caching location: the Zend components save their cache in the tmp folder.
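The same steps from the command line look roughly like this on Magento 1 (the /tmp prefix is the Zend_Cache file-backend default and may differ on your setup):
php shell/indexer.php reindexall             # rebuilds all indexes, including URL rewrites
rm -rf var/cache/* var/full_page_cache/*     # clear the file caches
rm -f /tmp/zend_cache---*                    # Zend components may cache under /tmp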
I had this issue with Magento running on Apache2 on Ubuntu 14.10.
Make sure that the MySQL module for PHP is installed:
dpkg --list | grep php5-mysql
If it is not listed, you need to install it:
sudo apt-get install php5-mysql
Then restart Apache:
sudo service apache2 restart
In our case we got this message because someone had deleted the "errors" folder; the site works fine until an error happens.
Once we restored the folder (and made sure PHP could access it), we saw the normal Magento error page.
If you don't have the folder, you can download Magento and extract it from the archive.

Download large files in Heroku

I am facing some issues when downloading large files on Heroku. I have to download and parse files larger than 1 GB. What I am trying right now is to use curl to download them into the /tmp folder (of a Rails application).
The curl command is "curl --retry 999 -o #{destination} #{uri} 2> /dev/null", where destination is Rails.root.join("tmp", "file.example").
The problem is that after a few minutes of downloading, the curl process gets killed, well before the download is finished. Before it dies, the logs show lots of "Memory exceeded" messages. This led me to think that when I save to the /tmp folder, the downloaded content is actually being stored in memory, and the process is killed once memory hits its limit.
I would like to know if any of you have experienced a similar issue on Heroku, and whether saving to the /tmp folder really works like this. If so, do you have any suggestions for getting this working on Heroku?
thanks,
Elvio
You are probably better off saving the file to an external cloud provider like S3, using the fog gem. In any case, Heroku has a read-only filesystem, so they won't allow you to curl, much less write to it.
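One way to sidestep local storage entirely is to stream the download straight through to S3 (bucket and key here are hypothetical; this assumes the AWS CLI is available on the dyno):
curl --retry 999 -s "$URI" | aws s3 cp - s3://my-bucket/file.example    # "-" uploads from stdin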
