PHP 5.3 with FastCGI caching problem across different requests - caching

I have designed a stylesheet/javascript file bundler and minifier that uses a simple cache mechanism. It simply writes the timestamp of each bundled file into a file and compares those timestamps to avoid rewriting the "master file" unnecessarily. That way, after an application update (here, my website) in which CSS or JS files were modified, a single request triggers the caching again only once. That request, and every other one, then sees a compiled file such as master.css?v=1234567.
The thing is, under my development environment every test passes, integration works great and everything works as expected. However, on my staging environment, on a server with PHP 5.3 compiled with FastCGI, my cached files seem to get rewritten with invalid data, but only when the request does not come from the same browser.
Use case:
I make the first request from Firefox, under Linux. Everything works as expected for every other request in that browser.
As soon as I make a request from another browser on Windows or Linux (IE7, IE8, Chrome, etc.), my cache file gets invalid data, but only on the staging server running under FastCGI, not under development!
Making another request from Firefox re-caches the file correctly.
I was then wondering: does FastCGI have anything to do with it? I thought client browsers, or even operating systems, had nothing to do with server-side code.
I know this problem is described abstractly; pasting any concrete code would be too heavy IMO, but I will do it if it can clear up my question.
I have tried remote debugging my code and found that everything was still working as expected; even the cached file gets written correctly. I saw that when the bug occurs, the file first gets written with the expected data, but then gets rewritten with invalid data about two seconds later (after PHP has finished its execution!).
Is there a way to disable that FastCGI caching for specific requests through a PHP function maybe?

Depending on your environment, you could look at working something out using .htaccess in Apache to serve those requests in regular CGI mode. This could probably be done with just a simple AddHandler and an Action that points to the CGI binary directly. This kind of assumes that you are deploying to some kind of shared hosting environment where you don't have direct access to Apache's config.
Since FastCGI keeps the process alive for a certain amount of time, it makes sense that it could be clobbering the file at a later point after the initial execution, although what the particular bug might be is beyond me.
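If the problem really is a lingering process overwriting the cache, one mitigation worth trying is to write the compiled file atomically. The following is only a sketch (the function name, variable names and file layout are assumptions, since none of the original code was posted):

<?php
// Write the bundled output via a temp file plus an atomic rename, so a
// concurrent or lingering FastCGI process can never leave a partial or
// stale copy of the master file behind.
function write_master_file($path, $contents)
{
    // create the temp file in the same directory so rename() stays on one filesystem
    $tmp = tempnam(dirname($path), 'master_');
    if ($tmp === false) {
        return false;
    }
    if (file_put_contents($tmp, $contents, LOCK_EX) === false) {
        unlink($tmp);
        return false;
    }
    // tempnam() creates the file with restrictive permissions; relax them
    chmod($tmp, 0644);
    // rename() is atomic on the same filesystem: readers see either the old
    // file or the new one, never a half-written version.
    return rename($tmp, $path);
}

Comparing the stored timestamps before calling something like this would keep the original "rewrite only once after an update" behaviour intact.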
Not much help, I know, but might give you a few ideas...
EDIT:
Here is the .htaccess code from my comment below
Options -Indexes +FollowSymLinks +ExecCGI
AddHandler php-cgi .php
Action php-cgi /cgi-bin/php5.cgi

Related

Debugging code on a server running Varnish Cache

Considering installing Varnish Cache on a VPS web server, but wondering what issues that might cause if, e.g., PHP code needs debugging. In the past I've found that caching systems make debugging more difficult because the cached version of a web page does not change immediately following a code change. Ideally, debugging should all be done on a test site, but sometimes it's necessary to do it on a production version.
Can Varnish Cache be temporarily turned off either for individual domains or for the whole server whilst debugging?
None or very little development should be done on a production box, but indeed, sometimes you need to troubleshoot things on the live site.
Varnish makes it a bit troublesome to see why a particular request to a page failed: it will mask fatal PHP errors with its own "Backend Fetch Failed" error. This makes it less obvious that there's a problem with your PHP code and makes you immediately blame Varnish.
You can temporarily make Varnish pass through its cache by piping all requests directly to the configured backend. In this way it will behave exactly the same with regard to debugging PHP code (as if Varnish weren't actually there!). My steps for this are:
Open your VCL file and, immediately after sub vcl_recv {, place the line return (pipe); (see the snippet after these steps).
Reload your Varnish configuration with service varnish reload or systemctl reload varnish (depends on your Linux distro).
To go back to caching (production setting), remove the line and reload Varnish again. No downtime when doing these steps.
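For reference, this is roughly what the temporary edit looks like in the VCL (syntax shown for VCL 4; adjust for your Varnish version):

sub vcl_recv {
    # temporary, debugging only: hand every request straight to the backend,
    # bypassing the cache entirely
    return (pipe);

    # ...the rest of your existing vcl_recv logic stays below and is skipped
}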

Why are my localhost HTTP response times so slow?

Using localhost and Tomcat 7, I'm seeing between 600-800ms per request in Chrome Developer tools for a specific webapp. Requests are JS files, CSS files, images or the initial server response. Some responses are less than 1KB, others are over 100KB.
As a result, it's taking around 10 seconds to load one page of the webapp. When I load the same webapp on our production server, it's taking less than 1 second to load an entire page.
I'm not sure where to continue debugging the issue...
I've ruled out it being a browser issue by testing in Safari too.
I've turned it off and on again
That reduced responses to 500-600ms overall
I've cleared out my log files
I've ruled out the webapp's frontend entirely by hitting a resource directly, ex: http://ts.xyz.com:9091/1.0/toolsList/javascript/toolsList.js or http://ts.xyz.com:9091/awake
I've tested another webapp and that performs lightning-quick
So, it has to be this particular app and it has to be local.
I've seen such behaviour a long time ago when the webserver (Apache httpd back then) was configured to do DNS lookups for its logs; these took an awfully long time, especially when an IP could not be resolved. As it doesn't make sense for a localhost app to be orders of magnitude slower (especially when you're talking about serving static resources), I'd check for any network-related issues: database connections, logging configurations, DNS lookups, TLS server trust issues (with backends, database, LDAP or others).
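If reverse DNS lookups do turn out to be the culprit here, one Tomcat-specific place to check (an assumption on my part, the question never shows its server.xml; the port is taken from the URLs in the question) is that host name lookups are disabled on the connector:

<!-- conf/server.xml: keep Tomcat from resolving client IPs to host names -->
<Connector port="9091" protocol="HTTP/1.1"
           connectionTimeout="20000"
           enableLookups="false" />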
I can't decide whether to add this as "if everything else fails" or rather as "but first, try this"... you decide:
Compare the setup of your production server with your development server (localhost) and make extra extra extra sure that there's no meaningful difference.

What can be reason for slow content downloading from webserver?

I'm trying to increase the performance of a webpage. I'm using ReactJS + webpack, which compiles my JSX files into one file, search.bundle.js. The server takes 2-3 seconds to return this file. Its size is ~200KB. Is file size the only reason?
On the local server it works pretty well, but on the remote webserver it is really slow.
There is a Google Map and a listing of items on the page, which I get using an AJAX request. This is a recursive request (repeated while there is not enough data, or until a timeout) which is called in componentDidMount, but as I understand it, it can only start requesting items after the script has loaded on the page.
So is there any way to make this script download faster? Or should I just try to reduce its size?
And some data from headers tab:
on local:
on remote:
The answer to this question is that the script has a source map available. With Chrome's Developer Tools panel open, the browser makes transparent requests for any available source map files (which it does not show you).
These files are often absolutely massive (mine are 3MB+) and take considerable time to pull on every refresh. Nginx specifically also handles them very poorly, at least in my setup. I think setting the maximum sendfile chunk size to be lower helps.
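If you want to experiment with that, the relevant Nginx directive is sendfile_max_chunk; the value below is only an example to start from, not a recommendation:

# nginx.conf, inside an http, server or location block
sendfile            on;
sendfile_max_chunk  512k;

Capping the chunk size stops a single large file (such as a multi-megabyte source map) from monopolizing a worker process for the whole transfer.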

How do you troubleshoot all Apache threads becoming occupied and idle?

I have a Drupal 6 site that is frequently (about once a day) going down. The hosting provider is reporting that something in our site code is occupying all Apache threads but keeping them idle, making the server run out of threads to respond to new requests. A simple restart of Apache frees the threads and fixes the issue, though it reoccurs within a few hours or a day.
I have no idea how to troubleshoot this issue and have never come across PHP code doing this. Is there some kind of Apache settings change I can make to capture more information about what might be keeping a thread occupied but idle? What typical PHP routines can cause this behavior? I looked for code that connects to external resources, but didn't see any issues there.
Any hints about what to look at, how to capture more information, or what PHP code can cause this would be most useful.
With Drupal 6 you could have the poormanscron module running sometimes, or even the classic cron (from a crontab wget or whatever).
Then one heavy cron operation could put your database under heavy load. If your database response time becomes very slow, every HTTP request will become very slow (for example, sessions are stored in the database, and several hundred queries are required for a Drupal page). Having all requests slow down may put all the available PHP processes in an 'occupied' state.
Restarting Apache stops all current processes. If you run the cron via wget and not via drush, cron tasks are a nice thing to check (running cron via drush would make it run via php-cli, so restarting Apache would not kill the cron). You can try a module like Elysia Cron to get more details on cron tasks and maybe isolate the long ones (it gives you a report on task durations).
This effect (one request hurting the database badly, all requests slowing down, no more processes available) could also be caused by one bad piece of code coming from any of your installed modules. That would be harder to detect.
So I would ensure slow queries are tracked in MySQL (see the my.cnf options), then analyse these queries with tools like mysqlsla. The problem is that sometimes one query is so big that all queries become slow, so use the time of the crash to detect the first ones. Also use the MySQL option to track queries not using indexes.
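A minimal my.cnf sketch for that kind of tracking (paths and thresholds are examples only, and the exact variable names differ slightly between MySQL versions):

[mysqld]
# log statements slower than 2 seconds
slow_query_log                 = 1
slow_query_log_file            = /var/log/mysql/mysql-slow.log
long_query_time                = 2
# also log statements that do not use an index
log_queries_not_using_indexes  = 1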
Another way to get all Apache processes stalled on a PHP operation with Drupal is a lock problem. Drupal uses its own lock implementation on top of MySQL. You could maybe add some watchdog() calls (Drupal's internal debug messages) in those files to try to detect lock problems.
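Such a call could look like the line below (the category, message and $name variable are made up for illustration; it would go next to the suspected locking code):

// hypothetical instrumentation around Drupal's lock handling
watchdog('lock_debug', 'Trying to acquire lock @name', array('@name' => $name), WATCHDOG_DEBUG);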
Then you could also have some external HTTP requests made by Drupal: calls to external websites like Facebook, Google, some tiny-URL tools, or drupal.org's module update checks (which always try to find all modules, even the ones you write). If the remote website is down or filtering your traffic you'll have problems (but then the Apache restart would not help you, so it may not be that).

How do I serve a binary file through rack?

I think I'm being a bit silly here, but I keep getting errors complaining about the SERVER_NAME key missing from the env hash, and I can't find any substantial documentation on Rack::SendFile.
So, how do I serve up files?
If you're serving large files for download, I'd recommend letting the webserver serve the large data. This way, you don't waste precious resources on running your Rack app just to let the user do a lengthy download.
If you respond with a special header (X-Sendfile for Apache, X-Accel-Redirect for Nginx), the webserver will use the content of the named file as the body of the response. This way, your Rack app becomes ready for the next request while the webserver takes care of the lengthy process of sending data to the user. You may need to enable this feature for your webserver first.
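As a rough sketch of the Nginx side (the location and paths are assumptions for illustration), you declare an internal location and point X-Accel-Redirect at it:

# nginx.conf: only reachable via X-Accel-Redirect, never directly from clients
location /protected/ {
    internal;
    alias /var/www/files/;
}

The Rack response then just returns an empty body with the header X-Accel-Redirect: /protected/report.pdf (the file name is a placeholder), and Nginx streams the actual file from /var/www/files/report.pdf.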
