In my nginx config I have some URL rewrite rules for images, as seen below:
location / {
    rewrite ^/custom/path/(.*)/(.*)-(.*).jpg$ /media/images/products/$1/$3.jpg last;
}
They work just fine. I'm also trying to set Expires headers for all static resources (images, css, js). I've added the following block for that:
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 1y;
}
This works fine for everything except the images covered by the rewrite rules, which return 404 Not Found. Anyone know what I'm doing wrong here?
Moving the rewrite rule inside the same block as the expires rule did the trick:
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 1y;
    rewrite ^/custom/path/(.*)/(.*)-(.*).jpg$ /media/images/products/$1/$3.jpg last;
}
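An alternative sketch, assuming the rewrite can be hoisted to server level: rewrite directives placed directly in the server block run before a location is selected, so the expires location then matches the already-rewritten URI.
server {
    # runs before location matching, so the .jpg location below sees
    # /media/images/products/... instead of /custom/path/...
    rewrite ^/custom/path/(.*)/(.*)-(.*)\.jpg$ /media/images/products/$1/$3.jpg last;

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1y;
    }
}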
Today, I have a problem with NGINX.
In my NGINX config I have this:
location ~ / {
    try_files $uri $uri/ /index.php?$query_string;
}
For example, I want to access https://my.domain/hello with a route defined in Laravel. It works,
but if I access https://my.domain/hello/ it returns a 404 NGINX error page. I should also point out that I use Plesk.
Do you have any ideas on how to fix that?
Thank you for your time ;)
Add a 301 redirect if the path is not a directory and ends with a /:
if (!-d $request_filename) {
    rewrite ^/(.*)/$ /$1 permanent;
}

location / {
    try_files $uri $uri/ /index.php?$query_string;
}
If you check the Plesk nginx config template, you will find this:
/usr/local/psa/admin/conf/templates/default/domain/nginxDomainVirtualHost.php
<?php if ($VAR->domain->physicalHosting->directoryIndex && !$VAR->domain->physicalHosting->proxySettings['nginxProxyMode']): ?>
location ~ /$ {
    index <?=$VAR->quote($VAR->domain->physicalHosting->directoryIndex)?>;
}
<?php endif ?>
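For reference, that template renders to something like this in the generated vhost config (the index value here is illustrative):
location ~ /$ {
    index index.html index.php;
}
Because this regex location matches every URI ending in /, a request for /hello/ that lands here makes nginx look for a directory index that doesn't exist, hence the 404.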
You either need to re-enable proxy mode (this re-enables Apache, but nginx will still receive the request first) or create a custom template for your domain and delete this line.
This may not apply to you, but if you were not using PHP (e.g. Node.js), you could also disable PHP support, which would likewise get rid of this line.
I am running into a conflict between two location blocks with nginx 1.8.0.
The first block sets up static caching for certain file types:
location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf|swf)$ {
    add_header "Access-Control-Allow-Origin" "*";
    access_log off;
    log_not_found off;
    expires max;
}
The second block is a series of rewrites defined by filetype:
location /files {
    rewrite ^/files/master\.([0-9]+)?\.css$ /min/?g=css&456 break;
    rewrite ^/files/master\.([0-9]+)?\.js$ /min/?g=js&456 break;
    rewrite ^/files/second\.([0-9]+)?\.js$ /min/?g=jsa&456 break;
}
The rewrites result in a 404. Any rewrite whose file type is matched by the static cache rule returns a 404 error. If I change the rewrite rule to a different file type, or comment out the static cache block, it works.
What am I missing in the static cache block that prevents a rewrite defined later in the config from being applied?
After much gnashing of teeth, I ended up changing the rewrites into try_files directives. These location blocks must appear earlier in the conf file than the static cache block.
location ~ ^/files/master\.([0-9]+)?\.css$ {
    try_files $uri /min/?g=css&456;
}
location ~ ^/files/master\.([0-9]+)?\.js$ {
    try_files $uri /min/?g=js&456;
}
location ~ ^/files/second\.([0-9]+)?\.js$ {
    try_files $uri /min/?g=jsa&456;
}
This will allow me to run the minify toolset.
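Another option that keeps the original rewrites is the ^~ modifier: when the longest matching prefix location carries ^~, the regex locations (including the static cache block) are not consulted at all. A sketch, assuming a /min/ location exists to handle the rewritten URIs (note last instead of break, so the rewritten request goes through location matching again):
location ^~ /files {
    rewrite ^/files/master\.([0-9]+)?\.css$ /min/?g=css&456 last;
    rewrite ^/files/master\.([0-9]+)?\.js$ /min/?g=js&456 last;
    rewrite ^/files/second\.([0-9]+)?\.js$ /min/?g=jsa&456 last;
}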
I am trying to set up at least two servers with nginx (origin + edge), both compiled with the mp4 module. The origin holds all my mp4 files. The edge is configured with all the caching stuff (see below), which works as expected: each mp4 file requested a second time is served by the edge cache without origin traffic.
But I also want to be able to seek in the file. That functionality comes from the mp4 module: appending the query param "?start=120" tells nginx to serve the mp4 content starting at timestamp 120s. This works fine when the origin is requested directly. But as soon as I enable the mp4 module in the caching location of the edge nginx, the request returns a 404.
nginx.conf # origin:
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/www;

    location ~ \.mp4$ {
        mp4;
        expires max;
    }
}
nginx.conf # edge:
proxy_cache_path /usr/share/nginx/cache levels=2:2 keys_zone=icdn_cache:10m inactive=7d max_size=2g;
proxy_temp_path /usr/share/nginx/temp;
proxy_ignore_headers X-Accel-Expires Cache-Control Set-Cookie;

log_format cache '[$time_local] Cache: $upstream_cache_status $upstream_addr $upstream_response_time $status $bytes_sent $proxy_add_x_forwarded_for $request_uri';
access_log /usr/local/nginx/logs/cache.log cache;

upstream origin {
    server <origin-domain>;
}

server {
    listen 80;
    server_name localhost;

    location ~ \.mp4$ {
        mp4;
        proxy_cache icdn_cache;
        proxy_pass http://origin;
        proxy_cache_key $uri;
    }
}
I also tried:
location / {
    location ~ \.mp4$ { mp4; }
    proxy_cache icdn_cache;
    proxy_pass http://origin;
    proxy_cache_key $uri;
}
Is there a way to make cached mp4-files work with the seeking-function of mp4-module?
You must use proxy_store. proxy_cache would create a lot of files, one for every ?start=xxxx request.
To let the mp4 module seek in a file, you need the full movie on disk; proxy_store keeps a complete mirror of it on the cache server.
proxy_cache is part of the proxy module. Currently you can't combine the nginx mp4 module with proxying; it only works on local static files, that's it.
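A minimal sketch of what the proxy_store approach could look like on the edge, assuming the mirror lives under /usr/share/nginx/cache (the paths are illustrative, not from the original config): the first request for a not-yet-mirrored file pulls and stores the complete movie from the origin (that first response is the whole file, not the seeked slice); once the file is local, the mp4 module handles ?start=... itself.
location ~ \.mp4$ {
    root /usr/share/nginx/cache;
    mp4;                          # seeks work because the full file is local
    error_page 404 = @mirror;     # not mirrored yet -> fetch from origin
}

location @mirror {
    proxy_pass http://origin$uri;           # drop ?start=... so the full file is stored
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    proxy_temp_path /usr/share/nginx/temp;
    root /usr/share/nginx/cache;
}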
How can I 301 rewrite mysite.com/page/ to mysite.com/page/index.html using nginx?
In Apache I had:
RewriteRule page/ http://mysite.com/page/index.html [R=301,L]
Thanks for help,
hydn
Try these settings:
location / {
    # anchored so /page/index.html itself is not matched again (which would loop)
    rewrite ^/page/$ http://mysite.com/page/index.html permanent;
    ...
}
I see from your comment to Sergiei that the '/page/' directory and '/page/index.html' do not actually exist and are rewritten elsewhere. So it's not surprising that Nginx gives a 404 Not Found.
What exactly should get served if a visitor requests '/page/index.html'? I.e., what does that get rewritten to?
If it is index.php?q=/page/index.html, then your config should be:
server {
    # index directive should be in the server or http block,
    # so no need to repeat it in every location block
    # unless specifically overriding it
    index index.php index.html;

    location / {
        rewrite ^/page(/?)$ /page/index.html break;
        try_files $uri $uri/ /index.php?q=$uri;
    }
}
You could also use
server {
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?q=$request_uri;
    }
}
But there may be some issues with that. All depends on the detail of your application.
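If /page/ itself is the only URI that needs redirecting, an exact-match location with return is a close translation of the Apache rule and avoids accidentally matching /page/index.html again:
location = /page/ {
    return 301 /page/index.html;
}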
So I'm moving my site away from Apache and onto Nginx, and I'm having trouble with this scenario:
User uploads a photo. This photo is resized, and then copied to S3. If there's suitable room on disk (or the file cannot be transferred to S3), a local version is kept.
I want requests for these images (such as http://www.mysite.com/p/1_1.jpg) to first look in the p/ directory. If no local file exists, I want to proxy the request out to S3 and render the image (but not redirect).
In Apache, I did this like so:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^p/([0-9]+_[0-9]+\.jpg)$ http://my_bucket.s3.amazonaws.com/$1 [P,L]
My attempt to replicate this behavior in Nginx is this:
location /p/ {
    if (-e $request_filename) {
        break;
    }
    proxy_pass http://my_bucket.s3.amazonaws.com/;
}
What happens is that every request attempts to hit Amazon S3, even if the file exists on disk (and if it doesn't exist on Amazon, I get errors.) If I remove the proxy_pass line, then requests for files on disk DO work.
Any ideas on how to fix this?
Shouldn't this be an example of using try_files?
location /p/ {
    try_files $uri @s3;
}

location @s3 {
    proxy_pass http://my_bucket.s3.amazonaws.com;
}
Make sure there isn't a trailing slash on the S3 URL: proxy_pass inside a named location can't carry a URI part, and without one the original request URI is passed upstream unchanged.
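If the bucket keys do not include the p/ prefix (the Apache rule in the question proxied only $1, i.e. the bare filename), the prefix can be stripped inside the named location before proxying; a sketch, assuming the objects sit at the bucket root:
location @s3 {
    rewrite ^/p/(.*)$ /$1 break;
    proxy_pass http://my_bucket.s3.amazonaws.com;
}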
You could improve your S3 proxy config like this (adapted from https://stackoverflow.com/a/44749584):
location /p/ {
    try_files $uri @s3;
}

location @s3 {
    set $s3_bucket 'your_bucket.s3.amazonaws.com';
    # $1 is not available in a named location reached via try_files,
    # so use the original request URI instead
    set $url_full $uri;
    proxy_http_version 1.1;
    proxy_set_header Host $s3_bucket;
    proxy_set_header Authorization '';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-meta-server-side-encryption;
    proxy_hide_header x-amz-server-side-encryption;
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers Set-Cookie;
    proxy_intercept_errors on;
    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 10s;
    proxy_pass http://$s3_bucket$url_full;
}
Thanks for keeping my coderwall post :) For caching purposes you can improve it a bit:
http {
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=S3_CACHE:10m inactive=24h max_size=500m;
    proxy_temp_path /tmp/cache/temp;

    server {
        location ~* ^/cache/(.*) {
            proxy_buffering on;
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers Set-Cookie;
            ...
            proxy_cache S3_CACHE;
            proxy_cache_valid 24h;
            proxy_pass http://$s3_bucket/$url_full;
        }
    }
}
One more recommendation is to extend the resolver cache up to 5 minutes (the resolver is needed here because proxy_pass uses a variable, so the S3 hostname is resolved at request time):
resolver 8.8.4.4 8.8.8.8 valid=300s;
resolver_timeout 10s;
break isn't doing quite what you expect; nginx will do the last thing you ask of it, which makes sense once you start digging around writing modules... but basically, protect your proxy_pass with the does-not-exist check:
# instead of:
if (-f $request_filename) {
    break;
}

# protect the proxy_pass like this:
if (!-f $request_filename) {
    proxy_pass http://s3;
}
I ended up solving this by checking whether the file doesn't exist and, if so, rewriting the request. I then handle the rewritten request and do the proxy_pass there, like so:
location /p/ {
    if (!-f $request_filename) {
        rewrite ^/p/(.*)$ /ps3/$1 last;
        break;
    }
}

location /ps3/ {
    # the trailing slash replaces the /ps3/ prefix, so /ps3/1_1.jpg is
    # requested from the bucket as /1_1.jpg
    proxy_pass http://my_bucket.s3.amazonaws.com/;
}