I've set up Magento 1.8.1 with PHP 5.4 and APC 3.1.15 (all running on EC2 - AWS Linux).
Magento crashes intermittently (and frequently) with APC active. I can't reproduce the issue with any specific area of the site; but going to various pages in the admin, I can get it to crash within five minutes.
According to the logs...
PHP Fatal error: Cannot override final method Mage_Core_Block_Abstract::toHtml() in /var/www/includes/src/Mage_Adminhtml_Block_Widget.php on line 35
With APC off, the problem goes away (and performance sucks). I've read everything I could find from Google searches, but nothing specific to this issue. All the issues seem to be about memory consumption (adjusting shared memory, etc). It's not a segfault or out of memory error.
Anyone have any insight into this error?
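For anyone hitting this: the usual suspects with APC and Magento's compiled includes are the opcode-cache consistency settings. A hedged starting point (these are real APC directives, but whether they cure this specific fatal is an assumption, not a confirmed fix):

```ini
; /etc/php.d/apc.ini -- values to try while diagnosing, not a confirmed fix
apc.include_once_override = 0   ; known to cause odd include/class issues when enabled
apc.canonicalize = 1            ; resolve symlinked paths so one file isn't cached twice
apc.stat = 1                    ; keep stat'ing files; safer while diagnosing
apc.shm_size = 256M             ; headroom so entries aren't evicted mid-request
```

It may also be worth disabling the Magento compiler (System > Tools > Compilation) while testing, since the error points into includes/src/.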
My queue jobs all run fairly seamlessly on our production server, but about every 2-3 months I start getting a lot of timeout exceeded/too many attempts exceptions.
Our app is running with event sourcing and many events are queued, so needless to say we have a lot of jobs passing through the system (100-200k per day generally).
I have not found the root cause of the issues yet, but a simple re-deploy through Laravel Envoyer fixes the issue. This is most likely due to the cache:clear command being run.
Currently, the cache is handled by Redis and is on the same server as the app. I was considering moving the cache to its own server/instance but this still does not help me with the root cause.
Does anyone have any ideas what might be going on here and how I can diagnose/fix it? I am guessing the cache is just getting overloaded/running out of space/leaking etc. over time but not really sure where to go from here.
Check:
- the version of your Redis, and update the predis package
- the version of your Laravel
- your server's resources
I hope this gives you some leads.
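One thing worth ruling out for the "timeout exceeded / too many attempts" pattern (an assumption about the cause, not a diagnosis): if a worker's --timeout is greater than or equal to the Redis connection's retry_after, jobs can be handed out twice and burn through their allowed attempts. A sketch of the relevant block in config/queue.php (600 is an illustrative value, not a recommendation):

```php
// config/queue.php -- redis connection sketch
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 600, // must exceed your slowest job, or it gets re-dispatched mid-run
    'block_for' => null,
],
```

Keep `php artisan queue:work --timeout=...` strictly below retry_after.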
I have a server with Magento 2 installed, and the website goes down randomly for no apparent reason. When it goes down, I restart the server and the website comes back up. I've been watching the top command while it happens, and there's no sign of unusual resource consumption either.
It is an instance on AWS with 8 core and 32 GB of ram using Provisioned IOPS SSD storage.
I'm completely dumbfounded by this. I've already cleared the Magento logs and database log tables to free up disk space, since I was running low.
Considering installing Varnish Cache on a VPS web server but wondering what issues that might cause if eg php code needs debugging. In the past I've found that caching systems make debugging more difficult because the cached version of a web page does not change immediately following a code change. Ideally debugging needs to all be done on a test site, but sometimes it's necessary to do on a production version.
Can Varnish Cache be temporarily turned off either for individual domains or for the whole server whilst debugging?
Little to no development should be done on a production box, but indeed, sometimes you need to troubleshoot things on a live site.
Varnish makes it a bit troublesome to see why a particular request to a page failed: it will mask fatal PHP errors with its own "Backend Fetch Failed" error. This makes it less obvious that there's a problem with your PHP code and makes you immediately blame Varnish.
You can temporarily make Varnish pass through its cache by piping all requests directly to the configured backend. This way it behaves exactly the same with regard to debugging PHP code (as if Varnish weren't there at all). My steps for this are:
Open your VCL file and immediately after sub vcl_recv {, place a line return (pipe);
Reload your Varnish configuration with service varnish reload or systemctl reload varnish (depends on your Linux distro).
To go back to caching (production setting), remove the line and reload Varnish again. No downtime when doing these steps.
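The steps above, as a minimal VCL sketch (Varnish 4+ syntax assumed):

```vcl
# default.vcl -- temporary debug bypass: pipe everything straight to the backend
sub vcl_recv {
    return (pipe);   # remove this line and reload to restore caching
    # ... the rest of your normal vcl_recv logic stays below ...
}
```

Then reload with `systemctl reload varnish` (or `service varnish reload`, depending on your distro).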
I have a Drupal 6 site that is frequently (about once a day) going down. The hosting provider is reporting that something in our site code is occupying all Apache threads but keeping them idle, making the server run out of threads to respond to new requests. A simple restart of Apache frees the threads and fixes the issue, though it reoccurs within a few hours or a day.
I have no idea how to troubleshoot this issue and have never come across PHP code doing this. Is there some kind of Apache settings change I can make to capture more information about what might be keeping a thread occupied but idle? What typical PHP routines can cause this behavior? I looked for code that connects to external resources, but didn't see any issues there.
Any hints for what to look at, capture more information, or PHP code that can cause this would be most useful.
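One concrete way to capture more information (assuming you can edit the Apache config and that mod_status is available): enable the server-status page, which shows each worker's state and the last request it handled, so you can see what the occupied-but-idle threads were doing:

```apache
# httpd.conf -- enable the status page for local diagnosis
ExtendedStatus On
<Location "/server-status">
    SetHandler server-status
    Require ip 127.0.0.1      # Apache 2.4 syntax; use Order/Allow from on 2.2
</Location>
```

Then hit http://127.0.0.1/server-status from the box while the site is wedged, before restarting Apache.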
With Drupal 6 you could have the poormanscron module running, or a classic cron job (wget from crontab, or similar).
One heavy cron operation can put your database under heavy load. If database response time becomes very slow, every HTTP request becomes very slow as well (sessions are stored in the database, for example, and a Drupal page can require several hundred queries). With all requests slowing down, every available PHP process can end up in an 'occupied' state.
Restarting Apache stops all current processes. If you run cron via wget rather than drush, cron tasks are a good thing to check (cron run via drush uses php-cli, so restarting Apache would not kill it). You can try a module like Elysia Cron to get more detail on cron tasks and maybe isolate the long-running ones (it reports task durations).
This effect (one request hammering the database, all requests slowing down, no more processes available) could also be caused by one bad piece of code in any of your installed modules. That would be harder to detect.
So I would make sure slow queries are logged in MySQL (see the my.cnf options), then analyse those queries with tools like mysqlsla. The problem is that sometimes one query is so heavy that all queries become slow, so use the time of the crash to identify the first ones. Also enable the MySQL option to log queries not using indexes.
Another way to get all Apache processes stalled on a PHP operation in Drupal is a lock problem. Drupal uses its own lock implementation on top of MySQL. You could add some watchdog calls (Drupal's internal debug messages) in those files to try to detect lock problems.
You could also have external HTTP requests made by Drupal: calls to external sites like Facebook, Google, URL-shortening tools, or drupal.org's module update checks (which always query all modules, even ones you wrote yourself). If the remote site is down or throttling your traffic, you'll have problems (though an Apache restart would not help in that case, so it may not be that).
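The slow-query logging suggested above can be enabled with a few my.cnf lines (MySQL 5.1+ variable names; the threshold is illustrative):

```ini
# my.cnf, [mysqld] section
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2              # seconds; lower it while hunting
log_queries_not_using_indexes = 1
```

Then feed the log to mysqlsla (or mysqldumpslow, which ships with MySQL) to rank the worst offenders.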
I am using Magento 1.5.0.1 and am getting the occasional 'Call to a member function getId() on a non-object' error on checkout.
The customer will try several times to checkout with the same details and the error 'Call to a member function getId() on a non-object' will keep coming up, but then, after a few seconds or a few minutes, the error will stop and the checkout will go through.
This does not happen 100% of the time.
I have checked:
1) Apache error logs are clean, apache has lots of system resources free. Optimised for Magento according to official guide.
2) MySQL error logs are clean, mysql has lots of system resources free. Optimised for Magento according to official guide.
3) PHP error logs will only show 'Call to a member function getId() on a non-object', there is no indication that PHP ran out of ram, i.e. a failed to allocated memory error typical of RAM running out.
4) All other Magento optimisations have been performed: caching, compilation, APC, PHP limit of 256mb.
5) APC has lots of system resources free.
6) CPU is never maxed out (25-50% utilisation); RAM only 40-50% used, over 50% free!
Can also get 'Call to a member function getStoreId() on a non-object' error message.
I am tearing my hair out wondering what else I can do! Out of 50 orders, about 2-3 will fail this way, i.e. the customer tries to check out 5-10 times in the space of 5-10 minutes.
What can be locking up?
When I analyzed similar errors on my Magento install (1.4.2), I was able to put together that it was some kind of hack attempt against the site based on whois & nslookup searches. People (bots?) are placing orders without their session properly initialized, thus there isn't a store (or any other) object established for an Id to come from. This should be flagged as a bug so they can properly handle this case and re-initialize the session or perform some other action to better harden against people poking around the forms' logic.
Additionally, there is some kind of bug in the code where occasionally, a customer can place an order for an item that's $0. In every case, it was a normal customer, who when called, was surprised and gave us the CC details for manual payment processing. Clearing cache and everything first thing every day helps with this.