I have a script that generates and outputs images on the fly (it uses http://glide.thephpleague.com/).
All images are served from /img/.
How can I configure nginx to cache them, bypass the script, and serve the images directly?
In other words, it should capture the script's response, store the image somewhere (ideally on a separate server), and serve it directly on subsequent requests.
You'll need to provide your nginx config if you'd like a more complete answer.
Following the standard cache setup found here should do the trick. If you're always serving files out of /img/, you could do the following:
location ^~ /img/ {
    alias /absolute/path/to/img/folder;
    expires 31d; # or whatever you prefer
    add_header Vary Accept-Encoding;
    add_header Pragma public;
    add_header Cache-Control public;
    error_page 404 = #your_upstream_generating_the_files;
}
This first checks whether the file exists in the /img/ folder. If it doesn't, the request is passed to your application so that it can generate the image for you. The next time the resource is requested, nginx will serve it straight out of the /img/ folder.
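For the upstream fallback, a named location is the usual pattern with error_page. A sketch, assuming the app listens locally on port 8080 (the name @image_backend and the address are placeholders, not from the original answer):

```nginx
location ^~ /img/ {
    alias /absolute/path/to/img/folder;
    expires 31d;
    add_header Cache-Control public;
    # If the file is missing on disk, hand the request to the app
    error_page 404 = @image_backend;
}

location @image_backend {
    # Named location: only reachable via internal redirects such as error_page
    proxy_pass http://127.0.0.1:8080;
}
```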
I have a Laravel app where X-Frame-Options is currently SAMEORIGIN. I want to remove or change it for one specific static HTML file. This file is not generated by Blade; it's just a static HTML file.
I know that in Laravel I can do:
$response->headers->remove('X-Frame-Options');
And in nginx:
add_header X-Frame-Options "ALLOWALL";
or
proxy_hide_header X-Frame-Options;
I've tried adding the following lines in /etc/nginx/nginx.conf, but that seems to break my whole website...
location /folder/file.html {
    proxy_hide_header X-Frame-Options;
}
I think doing this in nginx is faster, since the request doesn't have to go through Laravel, but if that's not true and it's easier to set in Laravel, I'm open to that.
Update
First of all, I needed to put the location snippet inside the server block in the conf file...
Second, after adding the location snippet correctly and restarting the server, the header was still there, so it looked like Laravel was still adding X-Frame-Options. But that wasn't the case: the proxy_hide_header approach simply didn't work here. In the end this did:
server {
    location /folder/file.html {
        # an empty value makes nginx omit the header entirely
        add_header X-Frame-Options "";
    }
}
I use WordPress for the home page (mysite.com) and Django for the rest (mysite.com/pages). I have a single nginx config file for both, and it's working fine. Moreover, they reside in a single server block.
Here are some parts of my configuration:
server {
    ...
    root /var/www/wordpress;
    ...
    location /static/ {
        alias /home/username/Env/virtenv/mysite/static/;
    }
}
With this, both WordPress's and Django's static files are served.
Now I want to cache all static files in the browser for 7 days. If I add this as well:
location ~* \.(jpg|jpeg|png|svg|gif|ico|css|js)$ {
    expires 7d;
}
then WordPress's static files are properly cached, but Django's static files are no longer served at all.
I know this is because the /static/ location is never reached: the regex location wins, and nginx then looks for Django's files under the root directive, which is set at server block level and points to the WordPress directory.
I could remove the regex location and add expires 7d; to the /static/ location instead, but that would conversely cache only Django's static files.
How can I get both sets of static resources cached for a week?
Finally I was able to cache both sets of static files, WordPress's and Django's.
For requests starting with /static/, I found a way to make nginx use the /static/ location rather than the general static-files location, whose only purpose is to cache WordPress's static files.
It's very simple: just use the ^~ location modifier.
As this excellent DigitalOcean article explains:
The block modifier ^~ means that if this block is selected as the best non-regular expression match, regular expression matching will not take place.
Here is the working version of the two location blocks:
location ~* \.(?:jpg|jpeg|png|svg|gif|ico|css|js)$ {
    # Cache WordPress (or any other public) static files
    # of these types for a week in the browser
    expires 7d;
}

location ^~ /static/ {
    alias /home/username/Env/virtenv/mysite/static/;
    # Cache Django's static files for a week in the browser
    expires 7d;
}
All requests to mysite.com/static/... match the /static/ location and never continue on to any regex location.
A request such as mysite.com/wp-content/themes/a-theme/style.css?ver=4.7.2, on the other hand, matches the regex location.
I want to use nginx as a proxy between clients and a set of my apps. The problem I've run into:
I have an app (on a different machine than nginx) that serves static content (images, CSS, etc.)
That static content is gzipped
I configured nginx as follows:
location ^~ /static/ {
    etag on;
    expires 1d;
    more_set_headers "Vary: Accept-Encoding";

    location ~* \.(css|js)$ {
        gzip_static on;
        expires -1;
        proxy_pass http://my_upstream;
    }

    proxy_pass http://my_upstream;
}
and I was expecting ETags to work for things like JS and CSS, but they don't. I suspect it's because the JS and CSS files are not on the same machine as nginx, which is the problem I thought gzip_static on was supposed to fix.
So basically my question is, is it possible to have it working that way? And if it is how to do it:)
According to this forum post, etag on is not compatible with filters that may modify the response body (as gzip compression does):
The ETag header used in its strong form means that it must be
changed whenever bits of an entity are changed. This basically
means that ETag headers have to be removed by filters which change
a response content.
As suggested in that same post, passing through Last-Modified will usually be enough for clients to make conditional requests.
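A sketch of that workaround, assuming the upstream sends a usable Last-Modified header (the upstream name my_upstream is taken from the question; the rest is illustrative):

```nginx
location ^~ /static/ {
    etag off;   # strong ETags can't survive content-changing filters
    expires 1d;
    proxy_pass http://my_upstream;
    # Last-Modified is passed through by default, so clients can still
    # revalidate with If-Modified-Since conditional requests.
}
```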
I'm curious whether nginx can be configured so the cache is written out in some manner that makes the data user-friendly. While all my options may fall short of anything anyone would consider "human friendly", I'm generally interested in how people configure it to meet their specific needs. The documentation may be complete, but I'm very much a learn-by-example kind of guy.
My current configuration is from an example I ran across, and as it stands it's not much more than proof to me that nginx correctly proxy-caches the data:
http {
    # unrelated stuff...
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        server_name g.sente.cc;

        location /stu/ {
            proxy_pass http://sente.cc;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
Nginx has two mechanisms for caching content:
proxy_store is when nginx builds a mirror: it stores the file, preserving the same path, while proxying it from the upstream, and then serves the mirrored file for all subsequent requests to the same URI. The downside is that nginx does not manage expiration; on the other hand, you can remove (and add) files at will.
proxy_cache is when nginx manages a proper cache, handling expiration, cache size, and so on.
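A minimal proxy_store sketch of the mirror approach (the mirror path and upstream address are illustrative, not from the answer):

```nginx
location /img/ {
    root /var/www/mirror;
    # Serve the mirrored copy if we have it; otherwise fetch it
    error_page 404 = @fetch;
}

location @fetch {
    proxy_pass http://backend.example.com;
    # Save a copy of the proxied response under the same path
    proxy_store /var/www/mirror$uri;
    proxy_store_access user:rw group:rw all:r;
}
```

Subsequent requests for the same URI are then served straight from /var/www/mirror, and the mirrored files can be deleted by hand whenever they should be regenerated.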
I have a dynamic content controller in CodeIgniter that pulls images from GridFS. The server is running nginx, and I am trying to set cache-control headers in my nginx config to cache the images served by this dynamic content controller for 7 days. I thought I had the config set correctly, but nginx returns 404 responses because the files do not physically exist on the server.
My cache control directive is as follows:
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 7d;
    log_not_found off;
}
log_not_found off; keeps nginx from logging the 404 error, but the status sent to the browser is still 404. I tried setting the headers manually via PHP's header function, but because nginx is using php-fpm, it was doing some weird stuff.
Can anyone point me in the correct direction on how to get my cache control headers set up properly for this situation? Thanks everyone =)
UPDATE:
I changed my nginx conf with a special location for all my static files and my dynamic controller.
location ~ ^/(dres|js|css|art)/ {
    access_log off;
    expires 7d;
    add_header Cache-Control public;
    try_files $uri $uri/ /index.php?$args;
}
Nginx is setting the correct Expires headers on the static files, but I cannot for the life of me get fastcgi and nginx to output Expires headers for the dynamically generated images. I must be missing something in my fastcgi config that would allow expiration headers when serving PHP files.
Aren't you supposed to set fastcgi_cache for that?
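For reference, a minimal fastcgi_cache setup might look like this (the zone name phpcache, the cache path, and the php-fpm socket path are all hypothetical, not from the thread):

```nginx
http {
    fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=phpcache:10m inactive=60m;

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/run/php-fpm.sock;

            # Cache successful PHP responses for a week
            fastcgi_cache phpcache;
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 7d;
        }
    }
}
```

Note this caches responses on the server side; the Expires/Cache-Control headers the browser sees still come from the PHP response itself unless you add them in nginx.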
Solved, for the most part. I realized that PHP's header function was in fact working; other issues were making me think it wasn't. I just added this to my dynamic image controller:
// 60 s * 60 min * 24 h * 7 days
$expires = (60 * 60 * 24 * 7);
header("Pragma: public");
header("Cache-Control: max-age=" . $expires);
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $expires) . ' GMT');
Now at least expiration works the way I want for the dynamic images. I haven't figured out how to set expiration for static files without getting a 404 on these dynamic images, but this is better for now.