I have a Drupal 7 site with NGINX & Boost. Almost everything is working perfectly, and cached pages are served very fast.
The problem is that some pages randomly won't load. Once a page gets "locked", it simply won't load until PHP-FPM is restarted. After I restart PHP-FPM, different pages won't load, and those pages in turn only work after PHP-FPM is restarted again... and so on.
I don't think the error is in Drupal or NGINX, because locked pages load correctly after restarting PHP-FPM. My guess is that something is randomly causing PHP-FPM to "lock" certain processes (sorry if that doesn't make sense).
Thanks for your help!
PS: I'm using perusio's Drupal 7 config for Nginx, with drupal_boost.conf.
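For reference, PHP-FPM's slow log can show what a stuck worker is actually doing, and a status page can reveal pool exhaustion. A pool-config sketch (file path and values are examples, not from the original setup):

```ini
; /etc/php-fpm.d/www.conf (path varies by distro) - illustrative values

; Log a stack trace for any request running longer than 10s,
; to see where a "locked" worker is hanging
slowlog = /var/log/php-fpm/www-slow.log
request_slowlog_timeout = 10s

; Kill runaway requests instead of letting them hold a worker forever
request_terminate_timeout = 60s

; Expose a status page to watch for pool exhaustion (all workers busy)
pm.status_path = /fpm-status
```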
I'm using Drupal 8, Nginx as the web server, and a Redis cache (deployed in a container) in one of our web applications, and I'm facing a slowness issue. After a few days, response times degrade from about 5 seconds to 30 seconds; after clearing the Drupal and Redis caches, they return to normal.
Clearing the Drupal cache manually => issue not resolved.
Clearing the Drupal + Redis caches => problem resolved.
I'd like to know if anyone else is experiencing a similar problem, and what the root cause and solution might be.
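For context, the two cache clears described above can be scripted. A sketch, assuming Drush is installed and the Redis container is named redis (both are assumptions, not details from the original post):

```shell
# Rebuild all Drupal 8 caches via Drush
drush cache:rebuild

# Flush every key in the Redis container ("redis" is a placeholder name)
docker exec redis redis-cli FLUSHALL
```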
I have a Laravel app that loads fine on localhost, but when deployed on a shared 1&1 host server (for demo purposes) the first page load is very slow (up to 12s!). It only happens on the first page load; after that it works perfectly fine, as if the site had gone into a "sleep" mode when not used for a while.
It does sound like a cache issue, though I've enabled all of Laravel's caches (views, config, routes..).
Someone mentioned a similar problem on a GoDaddy shared host; their solution was a cron job pinging the site every minute to keep it alive. That probably works, but it's not a very satisfactory solution.
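The cron workaround mentioned above would look something like this as a crontab entry (the URL is a placeholder for the real site):

```
# Ping the site every minute to keep it "awake" on the shared host
* * * * * curl -fsS -o /dev/null https://example.com/
```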
The debugger/console aren't showing much:
On first page load:
Queries: 343 ms
Route request: 12.46 s
Console TTFB (waiting): 12.69 s
After reload:
Queries: 39.47 ms
Route request: 238 ms
Console: 334 ms
Has anyone come across a similar issue before ?
The issue seems to be related to the host: the same install on a different host works perfectly, so no real answer here..
Considering installing Varnish Cache on a VPS web server, but wondering what issues that might cause if, e.g., PHP code needs debugging. In the past I've found that caching systems make debugging more difficult, because the cached version of a web page doesn't change immediately after a code change. Ideally debugging should all be done on a test site, but sometimes it's necessary on a production version.
Can Varnish Cache be temporarily turned off either for individual domains or for the whole server whilst debugging?
Little or no development should be done on a production box, but indeed, sometimes you need to troubleshoot things on a live site.
Varnish makes it a bit troublesome to see why a particular request to a page failed: it will mask fatal PHP errors with its own "Backend Fetch Failed" error. This makes it less obvious that there's a problem with your PHP code and makes you immediately blame Varnish.
You can temporarily make Varnish pass through its cache by piping all requests directly to the configured backend. That way it behaves exactly the same as far as debugging PHP code is concerned (as if Varnish weren't there at all!). My steps for this are:
Open your VCL file and, immediately after sub vcl_recv {, add the line return (pipe);
Reload your Varnish configuration with service varnish reload or systemctl reload varnish (depends on your Linux distro).
To go back to caching (production setting), remove the line and reload Varnish again. No downtime when doing these steps.
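Step 1 above might look like this in the VCL file (a sketch against a typical default VCL; your existing vcl_recv body will differ):

```vcl
sub vcl_recv {
    # Temporary, for debugging only: hand every request straight to the
    # backend, bypassing the cache entirely.
    return (pipe);

    # ... the rest of your existing vcl_recv logic is now unreachable ...
}
```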
Help me find the reason why, before any page starts loading, there is a delay of about 2 minutes, after which the site loads all at once. I haven't made any changes to the site all year. Everything used to be fine; this started about 2 weeks ago. Site: www.proudandcurvy.co.uk
It's probably some kind of DNS timeout, is the web server configured to do DNS lookups? You really want to turn that off.
You're using Apache HTTPD; the configuration option you should be looking for is
HostnameLookups, and it should be set to Off.
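The change would look like this in the Apache configuration (file location varies by distro; httpd.conf is an example):

```apache
# httpd.conf (or the relevant vhost) - stop per-request reverse DNS lookups
HostnameLookups Off
```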
Can anyone see a reason not to enable the WSDL Cache in Magento?
I have an EPOS system that periodically talks to Magento from outside the network. When it does, the site suffers a huge dip in speed, as it appears to struggle with the SOAP API. Even with a plain HTTPS request like this:
https://[site-url]/api/v2_soap?wsdl=1
The response can take up to 10 seconds. Sometimes, when many of these requests are made, the server grinds to a halt, leaving many sleeping connections in the MySQL database.
On checking whether Magento is configured for WSDL caching, I notice that it isn't. I didn't develop the site, however, and I'm wondering if there are any legitimate reasons not to enable this feature?
Maybe this is obvious: for debugging.
I've experienced an issue (running Magento and PHP-FPM) where the WSDL cache became corrupted during a huge spike in traffic, which resulted in 503 errors whenever a SoapClient was constructed. That cache didn't clear through restarts of PHP-FPM, Apache, and the machine. Clearing the SOAP cache solved the problem, but it took some time to debug, and cache issues tend to be extra maddening.
I should say that I have no idea if this is a common issue, but the WSDL cache is a component that, like any component, can break.
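For anyone who hits the same thing: PHP's SOAP extension stores cached WSDLs on disk as wsdl-* files, in the directory set by soap.wsdl_cache_dir in php.ini. A sketch of clearing it, assuming the default /tmp location:

```shell
# Remove PHP's on-disk WSDL cache files (default location /tmp, controlled
# by soap.wsdl_cache_dir); PHP re-fetches the WSDL on the next SOAP call
rm -f /tmp/wsdl-*
```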