Serving dynamically generated images using Nginx and FastCGI

I am using Nginx, and need to be able to generate images on the fly. When the client sends a request for an image, I need to run an external program to generate the image. The external program leaves the generated image in the filesystem.
It seems that the easiest approach would be to write a FastCGI script that runs the external program and then reads the image from the filesystem, transferring it via FastCGI to nginx.
However, this seems inefficient, since I would need to write my own file copy routine, and the file is copied from the disk into a local buffer, then into packets for FastCGI transfer to nginx, then into nginx's buffer, and then finally into packets to send to the client. It seems that it would be more efficient to leverage nginx's ability to efficiently serve static content.
Ideally, I'd like some way to make nginx wait until the image has been generated, and then serve it from the disk. Another thought is that maybe the FastCGI response could use some kind of header to indicate that nginx should actually go and serve a file, instead of the response from the FastCGI script. Are either of these approaches possible?

X-Accel-Redirect is exactly what you are looking for.
A usage example can be found here: http://kovyrin.net/2006/11/01/nginx-x-accel-redirect-php-rails/
Nginx is asynchronous, so it will keep serving all other connections without waiting for data from your FastCGI script.
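A minimal sketch of how this could look, assuming a PHP FastCGI script, a hypothetical /usr/local/bin/make-image generator, and /var/tmp/generated as the output directory (all of these names are placeholders): the script runs the generator, returns only an X-Accel-Redirect header pointing at an internal location, and nginx then streams the file from disk itself.

# nginx: internal location mapped to the directory the generator writes to
location /generated-images/ {
    internal;                      # not reachable directly by clients
    alias /var/tmp/generated/;
}

<?php
// FastCGI script: generate the image if needed, then hand off to nginx.
$name = basename($_GET['name']);                    // avoid path traversal
$out  = '/var/tmp/generated/' . $name . '.png';
if (!file_exists($out)) {
    exec('/usr/local/bin/make-image ' . escapeshellarg($name) .
         ' ' . escapeshellarg($out));
}
header('Content-Type: image/png');
header('X-Accel-Redirect: /generated-images/' . $name . '.png');
// No body needed; nginx replaces the response with the file contents.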

Related

Transfer file takes too much time

I have an empty API endpoint in a Laravel application, served with Nginx and Apache. The problem is that the API takes a long time when I send it files, but responds quickly when I send blank data.
Case 1: I call the API with a blank request; the response time is only 228 ms.
Case 2: I call the API with a 5 MB file; the transfer takes too long and the overall response time climbs to 15.58 s.
How can I reduce the transfer time on the Apache or Nginx side? Is there a server configuration option, or anything else I have missed?
Searching on Google suggested keeping all versions up to date and using PHP-FPM, but when I configured PHP-FPM and the HTTP/2 protocol on my server, it took even longer than above. All server software is already on the current version.
This has more to do with the fact that one request has nothing to process, so the response is prompt, whereas the other request requires actual processing, so the response takes as long as the server needs to process the content of your request.
Depending on the size of the file and your server configuration, you might hit a limit which will result in a timeout response.
A solution to the issue you're encountering is to chunk your file upload. There are a few packages available so that you don't have to write that functionality yourself, an example of such a package is the Pionl Laravel Chunk Upload.
An alternative solution would be to offload the file processing to a Queue.
Update
When I searched on Google about chunking, it is not the best solution for small files like 5-10 MB; it is better suited to big files like 50-100 MB. So is there any server-side chunking configuration or anything else, or can I use this library to chunk small files?
According to the library's documentation it is a web library. What should I use if my API is called from Android and iOS apps?
True, chunking might not be the best solution for smaller files, but it is worth knowing about. My recommendation would be to use some client-side logic to determine whether sending the file in chunks is required. On the server, use a Queue to process the file upload in the background, allowing the request to finish without waiting on the processing, so a response can be sent back to the client (iOS/Android app) in a timely manner; a rough sketch of this follows below.
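An untested sketch of the queue approach in Laravel, assuming the standard queue scaffolding; the ProcessUploadedFile job name and the 'uploads' storage path are made up for illustration. The controller stores the upload and dispatches a queued job, so the HTTP response returns quickly while a worker does the heavy lifting.

<?php
// app/Jobs/ProcessUploadedFile.php (hypothetical job class)
namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessUploadedFile implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $path;

    public function __construct($path)
    {
        $this->path = $path;
    }

    public function handle()
    {
        // Heavy work (validation, parsing, image processing, ...)
        // happens here in the queue worker, not in the HTTP request.
    }
}

// In the controller: store the file, queue the work, respond immediately.
$path = $request->file('upload')->store('uploads');
ProcessUploadedFile::dispatch($path);
return response()->json(['status' => 'queued'], 202);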

Downloading file from ftp server to local machine

How should I download a file from an FTP server to my local machine using PHP? Is cURL good for this?
You can use wget or curl from PHP. Be aware that the PHP script will wait for the download to finish, so if the download takes longer than your PHP max_execution_time, your PHP script will be killed at runtime.
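A minimal sketch of the synchronous cURL variant (host, credentials, and paths are placeholders); it streams the remote file straight to disk instead of buffering it in memory:

<?php
// Download ftp://ftp.example.com/remote/file.zip to /tmp/file.zip via cURL.
$local = fopen('/tmp/file.zip', 'wb');
$ch = curl_init('ftp://user:password@ftp.example.com/remote/file.zip');
curl_setopt($ch, CURLOPT_FILE, $local);            // write straight to the file
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
if (curl_exec($ch) === false) {
    // Handle the failure; curl_error($ch) contains the details.
}
curl_close($ch);
fclose($local);

This still blocks for the whole download, which is exactly why the asynchronous approaches below are preferable for large files or slow FTP servers.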
The best way to implement something like this is to do it asynchronously, so that you don't slow down the execution of the PHP script, which is probably supposed to serve a page later.
There are many ways to implement it asynchronously. The cleanest one is probably to use a queue like RabbitMQ or ZeroMQ over AMQP. A less clean one, which works as well, is to write the URLs to download into a file and then implement a cron job that checks the file every minute for new URLs and executes the downloads.
just some ideas...

Does Apache cache the gzipped version of a static file?

If you configure Apache to enable gzip compression for your static HTML/CSS/JS/etc. files, it automatically outputs a gzipped version to any client that sends an appropriate Accept-Encoding request header. (And for other clients, it just sends the raw uncompressed file.)
My question is: does Apache recompress the raw file every time it is requested by a gzip-accepting client? Or does it cache the gzipped copy, and only recompress it if it notices the last-modified time on the file has changed?
And if it does cache a gzipped copy of your files, where is this cache stored?
No, it doesn't cache the gzipped file.
However, the cost of compressing the file is less than the cost of squirting the extra packets across the network, hence even without caching you will see lower overall CPU usage (and lower memory usage, and fewer context switches) on your server, and a faster response at the client.
Note that the compressed file is NOT stored in the temp folder: mod_deflate reads input into a fixed-size buffer in memory, and when the buffer is full (or the stream ends) the content is compressed and handed back to the webserver.
It will use even less CPU (although speed won't improve noticeably) if the content is pre-compressed or cached server-side. There are multiple ways of doing this: mod_rewrite can test for the presence of filename.gz and serve it up in place of filename, or you can use a reverse proxy (assuming the content is also served up with caching instructions).
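An untested sketch of the mod_rewrite variant for CSS files; it assumes mod_headers is enabled and that each file is pre-compressed next to the original as filename.css.gz:

# Serve style.css.gz in place of style.css when the client accepts gzip
RewriteEngine On
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -f
RewriteRule ^(.*\.css)$ $1.gz [QSA,L]

<FilesMatch "\.css\.gz$">
    ForceType text/css
    Header set Content-Encoding gzip
    Header append Vary Accept-Encoding
</FilesMatch>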
No, it does not. This is described in the mod_deflate documentation now:
Since mod_deflate re-compresses content each time a request is made, some performance benefit can be derived by pre-compressing the content and telling mod_deflate to serve them without re-compressing them.
Apache does not keep any cached files; it only keeps the files you tell it to keep. Here is how compression works:
The browser requests a page and states that it accepts compression.
The server finds the page and reads the headers of the request.
The server sends the page to the browser, compressing it if the request headers said compression is accepted (the compressed output is held in memory/a temp folder while it is sent).
The browser receives the response and displays it (after decompression, if it was compressed). The browser then caches the page and images.
The server removes any trace of the compressed output from memory/the temp folder to free space for the next request, and logs the transaction in the access_log.
When the browser requests the same file or page again, it tells the server that it accepts compression and sends the current files' Modified Date. The server then either responds that they are the same and sends no additional data, or sends only the changed files, based on the Modified Date.
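That last exchange is a standard HTTP conditional request; a simplified example (URL and date are placeholders) looks roughly like this:

GET /css/style.css HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip
If-Modified-Since: Tue, 01 Mar 2011 10:00:00 GMT

HTTP/1.1 304 Not Modified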

PHP5.3 with FastCGI caching problem across different requests

I have designed a stylesheet/JavaScript file bundler and minifier that uses a simple cache mechanism. It simply writes the timestamp of each bundled file into a file and compares those timestamps to avoid rewriting the "master file" again. That way, after an application update (here my website) where CSS or JS files were modified, a single request triggers the caching again only once; that request and all subsequent ones then see a compiled file such as master.css?v=1234567.
The thing is, under my development environment every test passes, integration works great, and everything works as expected. However, on my staging environment, on a server with PHP 5.3 compiled with FastCGI, my cached files seem to get rewritten with invalid data, but only when they are not requested from the same browser.
Use case:
I make the first request on Firefox, under Linux. Everything works as expected for every other request in that browser.
As soon as I make a request from Windows/Linux (IE7, IE8, Chrome, etc.), my cache file gets filled with invalid data, but only on the staging server running under FastCGI, not under development!
Running another request on Firefox re-caches the file correctly.
I was then wondering: does FastCGI have anything to do with it? I thought browser clients, or even operating systems, had nothing to do with server-side code.
I know this problem is abstractly described, but pasting any concrete code would be too heavy IMO, but I will do it if it can clear up my question.
I have tried remote debugging my code and found that everything was still working as expected; even the cached file gets written correctly. I saw that when the bug occurs, the file gets written with the expected data, but then gets rewritten with invalid data two seconds later, after PHP has finished its execution!
Is there a way to disable that FastCGI caching for specific requests through a PHP function maybe?
Depending on your environment, you could look at working something out using .htaccess in Apache to serve those requests in regular CGI mode. This could probably be done with just a simple AddHandler and an Action that points to the CGI binary directly. This assumes you are deploying to some kind of shared hosting environment where you don't have direct access to Apache's config.
Since FastCGI persists the process for a certain amount of time, it makes sense that it could be clobbering the file at a later point after initial execution, although what the particular bug might be is beyond me.
Not much help, I know, but might give you a few ideas...
EDIT:
Here is the .htaccess code from my comment below
Options -Indexes +FollowSymLinks +ExecCGI
# Route .php files to the plain CGI binary instead of the FastCGI handler
AddHandler php-cgi .php
Action php-cgi /cgi-bin/php5.cgi

How do I serve a binary file through Rack?

I think I'm being a bit silly here, but I keep getting errors complaining about the SERVER_NAME key missing from the env hash, and I can't find any substantial documentation on Rack::SendFile.
So, how do I serve up files?
If you're serving large files for download, I'd recommend letting the webserver serve the data. This way, you don't waste precious resources running your Rack app just to let the user do a lengthy download.
If you respond with a special header (X-Sendfile for Apache, X-Accel-Redirect for Nginx), the webserver will use the contents of the file named in the header as the body of the response. This way, your Rack app becomes ready for the next request while the webserver takes care of the lengthy process of sending the data to the user. You may need to enable this feature in your webserver first.
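On the Apache side, for example, the third-party mod_xsendfile module has to be enabled and told which paths it may serve; a rough sketch of the vhost configuration (the path is a placeholder):

# Apache vhost configuration, with mod_xsendfile installed
XSendFile On
XSendFilePath /var/www/private/files

Your app then responds with a header such as X-Sendfile: /var/www/private/files/report.pdf and an empty body; Rack also ships with a Rack::Sendfile middleware that can translate file-backed response bodies into this kind of header for you.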
