Distributed & Cached MP4 PseudoStreaming (seeking) with Nginx - caching

I am trying to set up at least two servers with nginx (origin + edge), both compiled with the mp4 module. The origin holds all my mp4 files. The edge is configured with all the caching settings (see below), and that part works as expected: the second request for any mp4 file is served from the edge cache without any traffic to the origin.
But I also want to be able to seek within the file. That functionality comes from the mp4 module: appending the query parameter "?start=120" tells nginx to serve the mp4 content starting at the 120-second mark. This works fine when the origin is requested directly, but as soon as I enable the mp4 module in the caching location of the edge, the request returns a 404.
nginx.conf # origin:
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/www;

    location ~ \.mp4$ {
        mp4;
        expires max;
    }
}
nginx.conf # edge:
proxy_cache_path /usr/share/nginx/cache levels=2:2 keys_zone=icdn_cache:10m inactive=7d max_size=2g;
proxy_temp_path /usr/share/nginx/temp;
proxy_ignore_headers X-Accel-Expires Cache-Control Set-Cookie;
log_format cache '[$time_local] Cache: $upstream_cache_status $upstream_addr $upstream_response_time $status $bytes_sent $proxy_add_x_forwarded_for $request_uri';
access_log /usr/local/nginx/logs/cache.log cache;

upstream origin {
    server <origin-domain>;
}

server {
    listen 80;
    server_name localhost;

    location ~ \.mp4$ {
        mp4;
        proxy_cache icdn_cache;
        proxy_pass http://origin;
        proxy_cache_key $uri;
    }
}
I also tried:
location / {
    location ~ \.mp4$ { mp4; }
    proxy_cache icdn_cache;
    proxy_pass http://origin;
    proxy_cache_key $uri;
}
Is there a way to make cached mp4 files work with the seeking function of the mp4 module?

You must use proxy_store. proxy_cache would end up creating a separate cached file for every ?start=xxxx request.
For the mp4 module to seek within a file it needs the full movie, so proxy_store is used to keep a complete mirror of each file on the cache server.
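A rough sketch of what that can look like on the edge (the mirror path and the @fetch location name are placeholders, not from the thread; note that the very first request for a file returns the whole movie while it is being mirrored, and proxy_store never expires anything on its own, so the mirror directory has to be cleaned up manually):
location ~ \.mp4$ {
    root /usr/share/nginx/mirror;              # local full-file mirror on the edge (placeholder path)
    mp4;                                       # ?start= seeking works against the local copy
    try_files $uri @fetch;                     # fall through to the origin on a miss
}

location @fetch {
    proxy_pass http://origin$uri;              # fetch the whole file, without the ?start= argument
    proxy_store on;                            # keep the response on disk under "root"
    proxy_store_access user:rw group:r all:r;
    root /usr/share/nginx/mirror;
}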

proxy_cache is part of the proxy module. Currently you can't combine the nginx mp4 module with proxying; it only works for static files served from the local filesystem, that's it.

Related

Redirect all HTTP traffic to HTTPS seems to be impossible

I am posting this question because my previous attempts have proved futile.
I have a Rails server behind nginx, and I am trying to redirect all HTTP traffic to HTTPS.
Here is my nginx.conf file:
upstream backend {
    server unix:PROJECT_PATH/tmp/thin1.sock;
    server unix:PROJECT_PATH/tmp/thin2.sock;
    server unix:PROJECT_PATH/tmp/thin3.sock;
    server unix:PROJECT_PATH/tmp/thin4.sock;
    server unix:PROJECT_PATH/tmp/thin5.sock;
    server unix:PROJECT_PATH/tmp/thin6.sock;
    server unix:PROJECT_PATH/tmp/thin7.sock;
    server unix:PROJECT_PATH/tmp/thin8.sock;
}
server {
    listen 80 default_server;
    listen 443 default_server ssl;
    server_name app_name;

    ssl_certificate path_to_certificate_file.crt;
    ssl_certificate_key path_to_certificatefile.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    root PATH_TO_PUBLIC_FOLDER;
    access_log path_to_project/log/access.log;
    error_log path_to_project/log/error.log;
    client_max_body_size 10m;
    large_client_header_buffers 4 16k;

    location /ping {
        echo "pong";   # requires the third-party echo module
        return 200;
    }

    # Cache static content
    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|swf|wav)$ {
        expires max;
        log_not_found off;
    }

    # Status, local only (accessed via ssh+wget)
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    # double slash removal
    set $test_uri $host$request_uri;
    if ($test_uri != $host$uri$is_args$args) {
        rewrite ^/(.*)$ /$1 break;
    }

    location / {
        if ($http_x_forwarded_proto = 'http') {
            return 301 https://$server_name$request_uri;
        }
        try_files $uri @proxy;
    }

    location @proxy {
        proxy_redirect off;
        # Inform the backend we are on SSL
        proxy_set_header X-Forwarded-Proto https;
        # Force timeouts if one of the backends dies
        proxy_next_upstream error timeout invalid_header http_502 http_503;
        # Set headers
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
    }

    error_page 500 502 503 504 /500.html;
}
The current configuration causes:
400 Bad Request The plain HTTP request was sent to HTTPS port
You may notice the /ping location. That's because the servers sit behind a GCE load balancer that performs a health check, and that is THE ONLY path I do not want to redirect. Everything else should be redirected to HTTPS.
Previous attempts:
server {
    listen 80;
    server_name app_name;

    location /ping {
        echo "pong";
        return 200;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
With the HTTPS server block kept the same as in the current config (and listen 80 default_server commented out). This causes a "too many redirects" error.
I tried to simply redirect ALL traffic to HTTPS, including the health check. GCE expects a 200 response but instead gets a 301, thus marking the machine as unhealthy and rendering the application useless.
I also tried ssl on; in the HTTPS server config, with the same result (400).
I also tried toggling config.force_ssl = true in the Rails project, to no avail. Every other solution I try fails too.
Has anyone else stumbled on this?
It turned out the problem was not the nginx config but the certificates.
Installing a valid certificate let me create an HTTPS backend and health check in GCE. Everything is working fine now.
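For reference, a minimal sketch of the split-server layout the question was aiming for: /ping keeps answering 200 on plain HTTP for the load balancer, everything else is redirected, and the SSL server no longer listens on port 80, so a plain HTTP request can never reach the HTTPS port. Names and paths are the placeholders from the question.
server {
    listen 80 default_server;
    server_name app_name;

    # Health check stays on plain HTTP and always answers 200
    location /ping {
        return 200 "pong";    # plain return, no echo module needed
    }

    # Everything else goes to HTTPS
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl default_server;
    server_name app_name;
    ssl_certificate path_to_certificate_file.crt;
    ssl_certificate_key path_to_certificatefile.key;
    # ... the rest of the original server block (root, locations, @proxy) ...
}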

How to make nginx cache like Varnish?

I'm hosting a website on Heroku and using nginx on a cloud host as a proxy.
On my cloud host I have defined this:
## /etc/nginx/sites-available/default
server {
    charset utf-8;
    listen 80;
    server_name mywebsite.com www.mywebsite.com;

    location /api {
        proxy_pass http://mywebsite-api.herokuapp.com;
    }

    location /auth {
        proxy_pass http://mywebsite-api.herokuapp.com;
    }

    location / {
        fastcgi_cache CACHE_KEY;
        fastcgi_cache_valid 200 60m;
        proxy_pass http://mywebsite-fe.herokuapp.com;
    }
}
## in /etc/nginx/nginx.conf
.......
http {
    fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=CACHE_KEY:100m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
.....
I want nginx to cache and serve this static content itself, like Varnish would. How can I do this with nginx proxying to Heroku?
Thanks.
Change the fastcgi_* directives to their proxy_* equivalents (proxy_cache_path, proxy_cache_key, proxy_cache, proxy_cache_valid). The fastcgi_* versions are for PHP-FPM/FastCGI backends; since you are using proxy_pass, the proxy_* cache directives are the ones that apply.
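Roughly, that turns the configuration above into the following sketch (same zone name and Heroku upstreams as in the question; the X-Cache header is only there to make cache HITs and MISSes visible while testing):
## in /etc/nginx/nginx.conf, inside http { }
proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=CACHE_KEY:100m inactive=60m;
proxy_cache_key "$scheme$request_method$host$request_uri";

## /etc/nginx/sites-available/default
server {
    charset utf-8;
    listen 80;
    server_name mywebsite.com www.mywebsite.com;

    location /api {
        proxy_pass http://mywebsite-api.herokuapp.com;
    }

    location /auth {
        proxy_pass http://mywebsite-api.herokuapp.com;
    }

    location / {
        proxy_cache CACHE_KEY;
        proxy_cache_valid 200 60m;
        add_header X-Cache $upstream_cache_status;
        proxy_pass http://mywebsite-fe.herokuapp.com;
    }
}
Keep in mind that nginx honours Cache-Control and Set-Cookie headers coming back from Heroku by default, so if the app sends them you may also need proxy_ignore_headers Cache-Control Set-Cookie before responses are actually cached.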

Caching images on all folder levels of nginx reverse proxy

I'm trying to get my head around caching images for my open source image hosting service PictShare.
PictShare has a smart query system where an uploaded image can sit in a "virtual subdirectory" that changes the image. For example, this is the link to the uploaded Stack Overflow logo: https://www.pictshare.net/6cb55fe938.png
I can resize it to a width of 300 by adding /300/ to the URL before the image name: https://www.pictshare.net/300/6cb55fe938.png
Since I'm dealing with a lot of traffic lately, I want my nginx proxy to be able to cache all images from all virtual subfolders, but it's not working. I've read many articles and many Stack Overflow posts, but no solution has worked for me.
So far this is my production vhost file:
proxy_cache_path /etc/nginx/cache/pictshare levels=1:2 keys_zone=pictshare:50m max_size=1000m inactive=30d;
proxy_temp_path /etc/nginx/tmp 1 2;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_ignore_headers "Set-Cookie";
proxy_hide_header "Set-Cookie";
proxy_buffering on;

server {
    ...

    location / {
        proxy_pass http://otherserver/pictshare/;
        include /etc/nginx/proxy_params;

        location ~* \.(?:jpg|jpeg|gif|png|ico)$ {
            expires max;
            proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
            proxy_cache_valid 200 301 302 1y;
            proxy_cache pictshare;
            proxy_pass http://otherserver/pictshare$request_uri;
        }
    }
}
The problem is that no files are cached and I see every image request arrive at the proxy destination.
The only way I got caching to work was by adding a special location to the vhost file that has caching explicitly enabled:
location /cached {
    proxy_cache_valid 200 1y;
    proxy_cache pictshare;
    expires 1y;
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    include /etc/nginx/proxy_params;
    proxy_pass http://otherserver/pictshare/thumbs/;
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
}
The obvious problem with this solution is that images are only cached when the request starts with /cached, e.g. https://www.pictshare.net/cached/6cb55fe938.png
Adding the caching directives to the root location is not an option for me, since I don't want the forms and pages to be cached, just the images.
Where is my mistake?
proxy_cache_path /etc/nginx/cache/pictshare levels=1:2 keys_zone=pictshare:50m max_size=3g inactive=180m;
proxy_temp_path /etc/nginx/tmp 1 2;
proxy_cache_key "$scheme$request_method$host$request_uri";

location ~* ^.+\.(jpe?g|gif|png|ico|pdf)$ {
    access_log off;
    include /etc/nginx/proxy.conf;
    proxy_pass http://backend;
    proxy_cache pictshare;
    proxy_cache_valid any 12h;
    add_header X-Proxy-Cache $upstream_cache_status;
    root /var/www/public_html/cached;
}

location / {
    include /etc/nginx/proxy.conf;
    proxy_pass http://backend;
    root /var/www/public_html;
}
nginx first searches for the most specific prefix location given by literal strings regardless of the listed order. In the configuration above the only prefix location is “/” and since it matches any request it will be used as a last resort. Then nginx checks locations given by regular expression in the order listed in the configuration file. The first matching expression stops the search and nginx will use this location. If no regular expression matches a request, then nginx uses the most specific prefix location found earlier.
http://nginx.org/en/docs/http/request_processing.html
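To make that matching order concrete with the names from the question: in a layout like the sketch below, a request for /300/6cb55fe938.png is handled (and cached) by the regex location because it ends in .png, while an upload form or any other page falls through to the / prefix location and is never cached.
location ~* \.(?:jpg|jpeg|gif|png|ico)$ {
    proxy_cache pictshare;
    proxy_cache_valid 200 301 302 1y;
    include /etc/nginx/proxy_params;
    proxy_pass http://otherserver/pictshare$request_uri;
}

location / {
    include /etc/nginx/proxy_params;
    proxy_pass http://otherserver/pictshare/;
}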

Nginx serving non-secure resources on https domain

I'm having some issues setting up a server with an SSL certificate. I was able to install the certificate just fine and restarted the nginx service. However, when I attempt to load my website, I see that all img, css and js files are being retrieved with http instead of https. This is a Magento website. Is there something wrong with my conf file?
server {
    listen 80;
    server_name www.my-domain.com;
    return 301 $scheme://my-domain.com$request_uri;
}

server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/ssl/my-domain/my-domain_com.pem;
    ssl_certificate_key /etc/ssl/my-domain/my-domain_com.key;
    access_log /var/log/nginx/magento.local-access.log;
    error_log /var/log/nginx/magento.local-error.log;
    server_name my-domain.com;
    root /var/www/my-domain;

    include conf/magento_rewrites.conf;
    include conf/magento_security.conf;

    # PHP handler
    location ~ \.php {
        ## Catch 404s that try_files miss
        if (!-e $request_filename) { rewrite / /index.php last; }

        ## Store code is defined in administration > Configuration > Manage Stores
        fastcgi_param MAGE_RUN_CODE default;
        fastcgi_param MAGE_RUN_TYPE store;

        # By default, only handle fcgi without caching
        include conf/magento_fcgi.conf;
    }

    # 404s are handled by front controller
    location @magefc {
        rewrite / /index.php;
    }

    # Last path match hands to magento or sets global cache-control
    location / {
        ## Maintenance page overrides front controller
        index index.html index.php;
        try_files $uri $uri/ @magefc;
        expires 24h;
    }

    rewrite ^/minify/([0-9]+)(/.*.(js|css))$ /lib/minify/m.php?f=$2&d=$1 last;
    rewrite ^/skin/m/([0-9]+)(/.*.(js|css))$ /lib/minify/m.php?f=$2&d=$1 last;

    location /lib/minify/ {
        allow all;
    }
}
Maybe it's because that is how they are referenced in the markup. If you need to serve everything as HTTPS, I would create a separate server block that listens on 80 and redirects to HTTPS:
server {
    # listen 80;   (delete this part)
    listen 443 ssl;
    # the rest of the config
}

# add this server
server {
    listen 80;
    server_name example.com;

    # or a more specific location such as:  location ~ \.(jpg|css|js|jpeg|png|gif)$ {
    location / {
        return 301 https://example.com$request_uri;
    }
}
Or just fix how the CSS and other assets are referenced; they may be generated as absolute URLs with http://.
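If the assets really are being emitted as absolute http:// URLs, one common approach with a PHP-FPM backend, which is an assumption here since the thread only shows include conf/magento_fcgi.conf, is to pass the HTTPS flag through to PHP so Magento sees the request as secure and uses its secure base URL:
# inside the PHP handler location of the SSL server block above
location ~ \.php {
    # ... existing fastcgi settings and includes ...
    fastcgi_param HTTPS $https if_not_empty;   # sets $_SERVER['HTTPS'] = "on" only for SSL requests
}
Magento's unsecure/secure base URLs in the admin also have to use the right scheme, otherwise it will keep generating http:// links regardless of how the request arrived.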

Nginx Proxy to Files on Local Disk or S3

So I'm moving my site away from Apache and onto Nginx, and I'm having trouble with this scenario:
User uploads a photo. This photo is resized, and then copied to S3. If there's suitable room on disk (or the file cannot be transferred to S3), a local version is kept.
I want requests for these images (such as http://www.mysite.com/p/1_1.jpg) to first look in the p/ directory. If no local file exists, I want to proxy the request out to S3 and render the image (but not redirect).
In Apache, I did this like so:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^p/([0-9]+_[0-9]+\.jpg)$ http://my_bucket.s3.amazonaws.com/$1 [P,L]
My attempt to replicate this behavior in Nginx is this:
location /p/ {
    if (-e $request_filename) {
        break;
    }
    proxy_pass http://my_bucket.s3.amazonaws.com/;
}
What happens is that every request attempts to hit Amazon S3, even if the file exists on disk (and if it doesn't exist on Amazon, I get errors.) If I remove the proxy_pass line, then requests for files on disk DO work.
Any ideas on how to fix this?
Shouldn't this be an example of using try_files?
location /p/ {
    try_files $uri @s3;
}

location @s3 {
    proxy_pass http://my_bucket.s3.amazonaws.com;
}
Make sure there isn't a trailing slash on the S3 URL: inside a named location, proxy_pass has to be specified without a URI part.
You could improve your S3 proxy config like this, adapted from https://stackoverflow.com/a/44749584 (since $1 is not set inside a named location reached via try_files, the object key is captured from $uri instead):
location /p/ {
    try_files $uri @s3;
}

location @s3 {
    set $s3_bucket 'your_bucket.s3.amazonaws.com';
    # capture the object key from the request URI, dropping the local /p/ prefix
    if ($uri ~ ^/p/(.*)$) {
        set $url_full /$1;
    }
    proxy_http_version 1.1;
    proxy_set_header Host $s3_bucket;
    proxy_set_header Authorization '';
    proxy_hide_header x-amz-id-2;
    proxy_hide_header x-amz-request-id;
    proxy_hide_header x-amz-meta-server-side-encryption;
    proxy_hide_header x-amz-server-side-encryption;
    proxy_hide_header Set-Cookie;
    proxy_ignore_headers Set-Cookie;
    proxy_intercept_errors on;
    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 10s;
    proxy_pass http://$s3_bucket$url_full;
}
Thanks for keeping my coderwall post alive :) For caching purposes you can improve it a bit:
http {
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=S3_CACHE:10m inactive=24h max_size=500m;
    proxy_temp_path /tmp/cache/temp;

    server {
        location ~* ^/cache/(.*) {
            proxy_buffering on;
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers Set-Cookie;
            ...
            proxy_cache S3_CACHE;
            proxy_cache_valid 24h;
            proxy_pass http://$s3_bucket/$url_full;
        }
    }
}
One more recommendation is to extend the resolver cache to up to 5 minutes:
resolver 8.8.4.4 8.8.8.8 valid=300s;
resolver_timeout 10s;
break isn't doing quite what you expect: nginx will do the last thing you ask of it, which makes sense once you start digging around writing modules... but basically, guard your proxy_pass with the does-not-exist check instead:
# instead of this:
if (-f $request_filename) {
    break;
}

# do this:
if (!-f $request_filename) {
    proxy_pass http://s3;
}
I ended up solving this by checking to see if the file doesn't exist, and if so, rewriting that request. I then handle the re-written request and do the proxy_pass there, like so:
location /p/ {
    if (!-f $request_filename) {
        rewrite ^/p/(.*)$ /ps3/$1 last;
        break;
    }
}

location /ps3/ {
    proxy_pass http://my_bucket.s3.amazonaws.com/;
}
