Debugging code on a server running Varnish Cache

I'm considering installing Varnish Cache on a VPS web server, but I'm wondering what issues that might cause if, for example, PHP code needs debugging. In the past I've found that caching systems make debugging more difficult because the cached version of a web page does not change immediately after a code change. Ideally all debugging would be done on a test site, but sometimes it's necessary to do it on the production version.
Can Varnish Cache be temporarily turned off either for individual domains or for the whole server whilst debugging?

Little or no development should be done on a production box, but indeed, sometimes you need to troubleshoot things on a live site.
Varnish makes it a bit troublesome to see why a particular request to a page failed: it will mask fatal PHP errors with its own "Backend Fetch Failed" error. This makes it less obvious that there's a problem with your PHP code and makes you immediately blame Varnish.
You can temporarily make Varnish pass through its cache by piping all requests directly to the configured backend. That way it behaves exactly the same with regard to debugging PHP code (as if Varnish weren't there at all!). My steps for this are:
Open your VCL file and, immediately after sub vcl_recv {, add the line return (pipe); (see the sketch after these steps).
Reload your Varnish configuration with service varnish reload or systemctl reload varnish (depends on your Linux distro).
To go back to caching (production setting), remove the line and reload Varnish again. No downtime when doing these steps.
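For reference, here is a minimal sketch of what that temporary change looks like in the VCL file (assuming VCL 4.x syntax and an already configured backend; adjust for your Varnish version):

    sub vcl_recv {
        # Temporary, for debugging only: bypass caching entirely by piping
        # every request straight to the backend. Remove this line to restore
        # normal caching; nothing below it in vcl_recv will be reached.
        return (pipe);
    }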

Related

Why are my localhost HTTP response times so slow?

Using localhost and Tomcat 7, I'm seeing 600-800 ms per request in Chrome Developer Tools for a specific webapp. Requests are JS files, CSS files, images, or the initial server response. Some responses are less than 1 KB, others are over 100 KB.
As a result, it's taking around 10 seconds to load one page of the webapp. When I load the same webapp on our production server, it's taking less than 1 second to load an entire page.
I'm not sure where to continue debugging the issue...
I've ruled out it being a browser issue by testing in Safari too.
I've turned it off and on again, which reduced responses to 500-600 ms overall
I've cleared out my log files
I've ruled out the webapp's frontend entirely by hitting a resource directly, ex: http://ts.xyz.com:9091/1.0/toolsList/javascript/toolsList.js or http://ts.xyz.com:9091/awake
I've tested another webapp and that performs lightning-quick
So, it has to be this particular app and it has to be locally.
I've seen such behaviour a long time ago, when the web server (Apache httpd back then) was configured to do DNS lookups for its logs - these took an awfully long time, especially when an IP could not be resolved. Since it doesn't make sense for a localhost app to be orders of magnitude slower (especially when you're talking about serving static resources), I'd check for any network-related issues: database connections, logging configuration, DNS lookups, TLS server trust issues (with backends, database, LDAP or others).
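For what it's worth, the Apache httpd setting in question is the HostnameLookups directive; a minimal illustration (with Tomcat's rough equivalent noted, since the question uses Tomcat 7):

    # Apache httpd: don't reverse-resolve client IPs for access-log entries
    HostnameLookups Off
    # Tomcat's rough analogue is enableLookups="false" on the HTTP <Connector>
    # in server.xml (false is already the default in Tomcat 7).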
I can't decide if I add this as "if everything else fails" or rather add this as "but first try this:"... you decide:
Compare the setup of your production server with your development server (localhost) and make extra extra extra sure that there's no meaningful difference.

Any reason NOT to enable WSDL Caching for SOAP in Magento?

Can anyone see a reason not to enable the WSDL Cache in Magento?
I have an EPOS system that periodically talks to Magento from outside the network. When it does, the site suffers a huge dip in speed, as it appears to struggle with the SOAP API. Even for an HTTPS request like this:
https://[site-url]/api/v2_soap?wsdl=1
the response can take up to 10 seconds. Sometimes, when many of these requests are made, the server grinds to a halt and many sleeping connections are left in the MySQL database.
On checking whether Magento is configured for WSDL caching, I notice that it isn't. I didn't develop the site, however, and I'm wondering if there are any legitimate reasons not to enable this feature?
Maybe this is obvious: for debugging.
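As an illustration (a hedged sketch, reusing the WSDL URL placeholder from the question): PHP lets you bypass the WSDL cache per client while debugging, and the global switches live in php.ini:

    <?php
    // Bypass the WSDL cache for this one client while debugging;
    // WSDL_CACHE_DISK / WSDL_CACHE_MEMORY / WSDL_CACHE_BOTH re-enable it.
    $client = new SoapClient('https://[site-url]/api/v2_soap?wsdl=1', array(
        'cache_wsdl' => WSDL_CACHE_NONE,
    ));

    // Global defaults in php.ini:
    //   soap.wsdl_cache_enabled = 1
    //   soap.wsdl_cache_dir     = "/tmp"
    //   soap.wsdl_cache_ttl     = 86400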
I've experienced an issue (running Magento and PHP-FPM) where the WSDL cache became corrupted during a huge spike in traffic, which resulted in 503 errors whenever a SoapClient was constructed. That cache didn't clear through restarts of PHP-FPM, Apache, and the machine. Clearing the SOAP cache solved the problem, but it took some time to debug, and cache issues tend to be extra maddening.
I should say that I have no idea if this is a common issue, but the WSDL cache is a component that, like any component, can break.

Ability to reload change in Magento site's configuration without clearing cache

Today I dealt with the task of loading a module's configuration into a running Magento site under heavy load. I copied the new module's config.xml file and everything else needed to fix an issue.
Our Magento runs with a memcached caching backend.
To get the module running I had to clear the cache completely, and that had an impact on the site's performance - we had 500 concurrent users. So I'm looking for a way to deploy configuration changes without clearing the cache.
Is there any?
Thanks for any thoughts and ideas.
Jaro.
Here is a method of updating the config cache rather than clearing it, thus avoiding race-conditions.
https://gist.github.com/2715268
You don't have to clear the entire cache to load a module's configuration. You can install the module by using the Flush Magento Cache* option. Eventually you'll need to clear the cache to see your front-end changes if any were made. The best thing to do to minimize performance impact is to clear it during off-peak or low-usage times.
*edited - Thanks Fiasco Labs
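For what it's worth, here is a rough sketch (Magento 1.x assumed; the file name is hypothetical) of refreshing only the configuration cache type from a shell script, so the other cache types stay warm:

    <?php
    // shell/refresh_config_cache.php (hypothetical) - run from the Magento root.
    require_once dirname(__FILE__) . '/../app/Mage.php';
    Mage::app('admin');

    // Clean only the 'config' cache type instead of flushing everything,
    // forcing Magento to rebuild the merged configuration on the next request.
    Mage::app()->getCacheInstance()->cleanType('config');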
You will always have to flush the cache when installing a module or changing its configuration. This is necessary to force a reread of the configuration, to empty out incompatible opcode, and to force Magento to re-read application code and templates for the changes you have just made.
Yes, it has a momentary impact on your site's performance, but skipping it can cause some really interesting issues.
I've had situations where using the button in Admin wasn't enough. For module installs, it's probably best practice to put the system in maintenance mode, make sure all admin sessions are logged out, check that everyone's out, and then manually delete the var/cache/mage--? folders. You then log back in with one admin session, let it run until you see that an admin session has started, log back out, and then log back into Admin to start checking the site for full function of the freshly installed module.
This is of course overkill for simple config changes where a cache flush is sufficient.
More info on clearing the cache in Magento

AppFabric velocity as a state server

Is anybody using Windows AppFabric Server for out of process state management?
Any feedback, advice would be appreciated.
Using AppFabric Caching: we tried this and it appeared to work, and it was easy to set up. There are some very strange settings about persistence when setting up the cache which need to be read carefully.
Our issue: on two servers we installed IIS and AppFabric Caching and told the app to try the local one first. When we went into production it just started to fail. It appears that with only two servers there is a lead server, and if it goes down things stop working; we read that we needed to scale to 3 or more servers to get the behaviour we wanted. That wasn't an option when we had just gone live and things weren't working, so we switched to SQL Server for now while we look at NCache, ScaleOut and Memcached.
The other issue is that caching and session state are not the same animal: if you lose your cache it should not be the end of the world - just put it back together - whereas we need to keep session state for the allotted time period at all costs.
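For context (not from the answer above), pointing ASP.NET session state at an AppFabric cache is done with the AppFabric session-state provider in web.config. A hedged sketch with placeholder host and cache names - verify the type names against the AppFabric documentation for your version:

    <!-- web.config sketch: keep ASP.NET session state in an AppFabric named cache. -->
    <configSections>
      <section name="dataCacheClient"
               type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core" />
    </configSections>
    <dataCacheClient>
      <hosts>
        <!-- Placeholder cache host; 22233 is the default AppFabric cache port. -->
        <host name="cachehost1.example.com" cachePort="22233" />
      </hosts>
    </dataCacheClient>
    <system.web>
      <sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
        <providers>
          <add name="AppFabricCacheSessionStoreProvider"
               type="Microsoft.ApplicationServer.Caching.DataCacheSessionStoreProvider"
               cacheName="SessionCache" />
        </providers>
      </sessionState>
    </system.web>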

PHP5.3 with FastCGI caching problem accross different requests

I have designed a stylesheet/JavaScript bundler and minifier that uses a simple cache mechanism. It writes the timestamp of each bundled file into a cache file and compares those timestamps to avoid rewriting the "master file" again. That way, after an application update (here, my website) where CSS or JS files were modified, a single request would trigger the re-caching only once; that request, and all others, would then see a compiled file such as master.css?v=1234567.
The thing is, under my development environment every test passes, integration works great, and everything works as expected. However, on my staging environment, on a server with PHP 5.3 compiled with FastCGI, my cached files seem to get rewritten with invalid data, but only when not requested from the same browser.
Use case:
I make the first request from Firefox, under Linux. Everything works as expected for every other request in that browser.
As soon as I make a request from Windows/Linux (IE7, IE8, Chrome, etc.), my cache file gets invalid data, but only on the staging server running under FastCGI, not under development!
Running another request from Firefox re-caches the file correctly.
I was then wondering: does FastCGI have anything to do with it? I thought browser clients or even operating systems had nothing to do with server-side code.
I know this problem is described abstractly, but pasting any concrete code would be too heavy IMO; I will do it if it can clear up my question.
I have tried remote debugging my code and found that everything was still working as expected; even the cached file gets written correctly. I saw that when the bug occurs, the file gets written with the expected data, but then gets rewritten with invalid data two seconds later - after PHP has finished its execution!
Is there a way to disable that FastCGI caching for specific requests through a PHP function maybe?
Depending on your environment, you could look at working something out using .htaccess in Apache to serve those requests in regular cgi mode. This could probably be done with just a simple AddHandler, and Action that points to the cgi directly. This kind of assumes that you are deploying to some kind of shared hosting environment where you don't have direct access to Apache's config.
Since fastcgi persists the process for a certain amount of time, it makes sense that it could be clobbering the file at a later point after initial execution, although what the particular bug might be is beyond me.
Not much help, I know, but might give you a few ideas...
EDIT:
Here is the .htaccess code from my comment below
Options -Indexes +FollowSymLinks +ExecCGI
AddHandler php-cgi .php
Action php-cgi /cgi-bin/php5.cgi
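Not from the original answer, but if the root cause turns out to be concurrent FastCGI workers clobbering the cache file, one common guard is an atomic write (write to a temp file, then rename). A rough PHP sketch with an illustrative function name:

    <?php
    // Illustrative sketch: write the bundler's timestamp cache atomically so a
    // second FastCGI worker can never read or overwrite a half-written file.
    function write_cache_atomically($cacheFile, $contents)
    {
        $tmp = $cacheFile . '.' . getmypid() . '.tmp';
        if (file_put_contents($tmp, $contents, LOCK_EX) === false) {
            return false;
        }
        // rename() is atomic on the same filesystem: readers see either the
        // old cache file or the new one, never a partial write.
        return rename($tmp, $cacheFile);
    }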

Resources