How do I serve a binary file through rack? - ruby

I think I'm being a bit silly here, but I keep getting errors complaining about the SERVER_NAME key missing from the env hash, and I can't find any substantial documentation on Rack::Sendfile.
So, how do I serve up files?

If you're serving large files for download, I'd recommend letting the web server itself deliver the data. That way you don't tie up a precious Rack app process just so the user can complete a lengthy download.
If you respond with a special header (X-Sendfile for Apache, X-Accel-Redirect for Nginx), the web server will read the named file and use its contents as the response body. Your Rack app is then free to handle the next request while the web server takes care of the lengthy process of sending the data to the user. You may need to enable this feature in your web server first.
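A minimal sketch of that in a bare Rack app (config.ru), assuming Apache with mod_xsendfile in front; the file path is a placeholder, and for Nginx you would send an X-Accel-Redirect header with a URI that maps to an internal location instead:

# config.ru - hand the file off to the front-end web server
run lambda { |env|
  [200,
   { "Content-Type" => "application/octet-stream",
     "X-Sendfile"   => "/var/files/archive.bin" },  # placeholder path to the file on disk
   []]                                              # empty body: the web server streams the file
}

Rack also ships with the Rack::Sendfile middleware, which rewrites a file-backed response body into whichever of these headers your front-end server expects, so you rarely need to set the header by hand.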

Related

File transfer takes too much time

I have an empty API written in Laravel, running behind an Nginx & Apache server. The problem is that the API takes a long time when I try it with different files, but responds quickly when I try it with blank data.
Case 1: I call the API with a blank request; the response time is only 228 ms.
Case 2: I call the API with a 5 MB file; the file transfer takes so long that the response time climbs to 15.58 s.
So how can I reduce the transfer time on the Apache or Nginx server? Is there any server configuration or anything else that I have missed?
When I searched Google, the advice was to keep all versions up to date and use PHP-FPM, but when I configured PHP-FPM and HTTP/2 on my server, it took even more time than above. All server software is already on current versions.
This has more to do with the fact that one request has nothing to process, so the response is prompt, whereas the other request requires actual processing, so the response takes as long as the server needs to handle the content of your request.
Depending on the size of the file and your server configuration, you might hit a limit which will result in a timeout response.
A solution to the issue you're encountering is to chunk your file upload. There are a few packages available so that you don't have to write that functionality yourself; one example is the Pionl Laravel Chunk Upload package.
An alternative solution would be to offload the file processing to a Queue.
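Independently of chunking, the size and time limits mentioned above are worth checking. A hedged sketch of the usual knobs (the values are illustrative, not recommendations):

# nginx.conf, inside the http or server block
client_max_body_size 20m;        # default is 1m; larger request bodies are rejected with a 413

; php.ini
upload_max_filesize = 20M
post_max_size = 21M              ; must be at least upload_max_filesize
max_execution_time = 60          ; seconds a script may run before being killed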
Update
When I searched Google about chunking, it was described as not being the best solution for small files of 5-10 MB; it is a better fit for big files of 50-100 MB. So is there any server-side chunking configuration or anything else, or can I use this library to chunk small files?
According to the library's documentation it is a web library. What should I use if my API is called from Android and iOS apps?
True, chunking might not be the best solution for smaller files, but it is worth knowing about. My recommendation would be to use some client-side logic to determine whether sending the file in chunks is required. On the server, use a Queue to process the file upload in the background, so the request isn't held up waiting on that work and a response can be sent back to the client (the iOS/Android app) in a timely manner.

Cache a static file in memory forever on Nginx?

I have Nginx running in a Docker container, and it serves some static files. The files will never change at runtime - if they actually do change, the container will be stopped, the image will be rebuilt, and a new container will be started.
So, to improve performance, it would be perfect if Nginx read the static files from disk only once and then served them from memory forever. I have found some configuration options for caching, but at least from what I have seen, none of them provides the "forever" behavior that I'm looking for.
Is this possible at all? If so, how do I need to configure Nginx to achieve this?
Nginx as an HTTP server cannot do memory-caching of static files or pages.
Nginx is a capable and mature HTTP and proxy server, but there seems to be some confusion about its capabilities with respect to caching: when running as a pure web server, Nginx does not memory-cache files or pages itself.
Possible Workaround
The Nginx community's answer is: no problem, let the OS do memory caching for you! The OS is written by smart people (true) and knows the what, when, where, and how of caching (a mere opinion). So, they say, cat your static files to /dev/null periodically and just trust it to cache your stuff for you! For those wondering what the cat /dev/null reference has to do with caching: read on to find out (hint: don't do it!).
How does it work?
It turns out that Linux is a fine-tuned beast that's hawk-eyed about what goes in and out of its cache thingy. That cache thingy is called the Page Cache: the memory store where frequently accessed files are partially or entirely kept so they're quickly accessible. The kernel keeps track of which files are cached in memory, when they need to be updated, and when they need to be evicted. The more free RAM is available, the larger the page cache and the "better" the caching.
The operating system does in-memory caching by default; it's called the page cache. In addition, you can enable sendfile so that Nginx doesn't copy data between kernel space and user space.
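A hedged sketch of the relevant Nginx configuration (paths are placeholders): there is nothing to configure for the page cache itself; you just turn on sendfile and let the kernel serve hot files from RAM:

# inside the http block
sendfile   on;       # kernel copies file data straight from the page cache to the socket
tcp_nopush on;       # with sendfile, coalesce headers and file data into full packets

server {
    listen 80;
    root /usr/share/nginx/html;   # placeholder for your static files

    location / {
        expires 1y;               # optionally let browsers cache aggressively as well
    }
}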

Serving dynamically generated images using Nginx and FastCGI

I am using Nginx, and need to be able to generate images on the fly. When the client sends a request for an image, I need to run an external program to generate the image. The external program leaves the generated image in the filesystem.
It seems that the easiest approach would be to write a FastCGI script that runs the external program and then reads the image from the filesystem, transferring it via FastCGI to nginx.
However, this seems inefficient, since I would need to write my own file copy routine, and the file is copied from the disk into a local buffer, then into packets for FastCGI transfer to nginx, then into nginx's buffer, and then finally into packets to send to the client. It seems that it would be more efficient to leverage nginx's ability to efficiently serve static content.
Ideally, I'd like some way to make nginx wait until the image has been generated, and then serve it from the disk. Another thought is that maybe the FastCGI response could use some kind of header indicate that nginx should actually go and serve a file, instead of the response from the FastCGI script. Are either of these approaches possible?
X-Accel-Redirect - exactly what you are looking for.
A usage example can be found here: http://kovyrin.net/2006/11/01/nginx-x-accel-redirect-php-rails/
Nginx is asynchronous, so it will keep serving all other connections without waiting for data from your FastCGI script.
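A hedged sketch of the Nginx side (location names, paths, and the FastCGI address are placeholders): the generator script answers with an X-Accel-Redirect header, and Nginx then serves the already-generated file from disk itself:

# internal location: not reachable by clients directly, only via X-Accel-Redirect
location /generated/ {
    internal;
    alias /var/cache/generated/;              # where the external program writes the images
}

# public endpoint handled by the FastCGI script that triggers image generation
location /render {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/imagegen.sock; # your FastCGI image generator
}

The FastCGI script runs the external program, then replies with an empty body and a header such as X-Accel-Redirect: /generated/foo.png; Nginx substitutes the file /var/cache/generated/foo.png as the response.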

PHP5.3 with FastCGI caching problem accross different requests

I have designed a stylesheet/JavaScript bundler and minifier that uses a simple cache mechanism. It simply writes the timestamp of each bundled file into a file and compares those timestamps to avoid rewriting the "master file" unnecessarily. That way, after an application update (here, my website) where CSS or JS files were modified, a single request would trigger the caching again only once; that request and all subsequent ones would then see a compiled file such as master.css?v=1234567.
The thing is, in my development environment every test passes, integration works great, and everything behaves as expected. However, in my staging environment, on a server with PHP 5.3 compiled with FastCGI, my cached file seems to get rewritten with invalid data, but only when the requests don't all come from the same browser.
Use case:
I make the first request in Firefox under Linux. Everything works as expected for every other request in that browser.
As soon as I make a request from Windows/Linux (IE7, IE8, Chrome, etc.), my cache file gets filled with invalid data, but only on the staging server running under FastCGI, not in development!
Running another request from Firefox re-caches the file correctly.
I was then wondering: does FastCGI have anything to do with it? I thought browser clients, or even operating systems, had nothing to do with server-side code.
I know this problem is described abstractly, but pasting any concrete code would be too heavy IMO; I will do it if it can clear up my question.
I have tried remote-debugging my code and found that everything was still working as expected; even the cached file gets written correctly. I saw that when the bug occurs, the file is first written with the expected data, but then gets rewritten with invalid data about two seconds later, after PHP has finished executing!
Is there a way to disable that FastCGI caching for specific requests through a PHP function maybe?
Depending on your environment, you could look at working something out using .htaccess in Apache to serve those requests in regular CGI mode. This could probably be done with just a simple AddHandler and an Action that points to the CGI binary directly. This assumes you are deploying to some kind of shared hosting environment where you don't have direct access to Apache's config.
Since FastCGI keeps the process alive for a certain amount of time, it makes sense that it could be clobbering the file at some point after the initial execution, although what the particular bug might be is beyond me.
Not much help, I know, but might give you a few ideas...
EDIT:
Here is the .htaccess code from my comment below
# Serve .php files through the plain CGI binary instead of FastCGI
Options -Indexes +FollowSymLinks +ExecCGI
AddHandler php-cgi .php
Action php-cgi /cgi-bin/php5.cgi

How to ensure SWF is cached by the browser the first time it is downloaded when serving via HTTP and Mongrel cluster?

We have a Rails app that instantiates a SWF object 16 times (it has to do this, it's just the nature of the application). Instead of being downloaded once, it is being downloaded 16 times.
How can we ensure this SWF is cached by the browser the first time it is downloaded? It's being served directly from Apache - can we modify the HTTP headers to accomplish this?
Some information:
The browser is caching the resources with code 304.
The domain points to a cluster, and traffic is forwarded to two servers (.3 and .4) in the cluster.
Each server has its own copy of the code, with different timestamps on the files.
If there are any subsequent requests for the SWF, chances are that .3 or .4 may serve it, and the browser treats the file as modified on the server, since the file's timestamp is different.
Any help would be appreciated, as it would greatly improve the app's performance after the initial load.
Try setting the Expires header to a value far enough in the future. Adding Cache-Control: max-age=<some high number> will also help.
Apache's mod_expires will help with all of that.
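As a hedged sketch (the one-year lifetime is only an illustration, and it assumes mod_expires and mod_headers are enabled):

ExpiresActive On
ExpiresByType application/x-shockwave-flash "access plus 1 year"
# or set the header explicitly with mod_headers
Header set Cache-Control "public, max-age=31536000"

With a far-future Expires/max-age the browser won't even revalidate the SWF during that window, which also sidesteps the differing file timestamps on the .3 and .4 servers.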
