Exclude directory from nginx cache headers via "Additional nginx directives" on Plesk VPS - caching

I'm running a Joomla site on a Plesk VPS with Apache + nginx. In "Web Server Settings" in Plesk for that domain, under "Additional nginx directives" (which override the server-wide nginx configuration), I have specified:
add_header Cache-Control "private, max-age=604800, must-revalidate";
All works well: the site is now served with proper cache headers and Google PageSpeed has stopped complaining. However, I'm now getting errors with some back-end functions, such as uploading batches of images to my website's gallery. This seems to be related to the directive above, as everything works normally again when it is removed.
How can I rewrite the additional nginx directive above so that the /administrator/ directory of my site is excluded from having any cache headers added by nginx?

Use a location block whose regex negates the path you don't want the header added to, and move the add_header inside it:
location ~ ^/(?!administrator/) {
add_header Cache-Control "private, max-age=604800, must-revalidate";
}

If nginx runs in front of Apache (as it does on Plesk), keep the site-wide header and give /administrator/ its own location that overrides it and proxies back to Apache:
add_header Cache-Control "private, max-age=604800, must-revalidate";
location ^~ /administrator/ {
add_header Cache-Control "no-cache, max-age=1";
proxy_pass http://[....IP address of domain ....]:7080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
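For what it's worth, the override works because nginx only inherits add_header directives from the surrounding level when the location defines no add_header of its own. So if the goal is for /administrator/ to carry no Cache-Control header at all (rather than no-cache), a sketch along these lines should also work; the X-Robots-Tag header is just an arbitrary placeholder used to block the inheritance:
add_header Cache-Control "private, max-age=604800, must-revalidate";

location ^~ /administrator/ {
    # Defining any add_header here stops the server-level Cache-Control
    # from being inherited; this one is only a placeholder.
    add_header X-Robots-Tag "noindex";
    proxy_pass http://[....IP address of domain ....]:7080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}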

Related

Creating proxy_pass from CloudFront to EC2 instance

I want to move our front-end setup to CloudFront and S3.
I have an application with a static web site and an nginx that proxy_passes the Host header to another port on the same machine (as seen in the code below).
location /api {
proxy_pass http://my_app:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
client_max_body_size 1000M;
server_tokens off;
After I uploaded my frontend to S3 and configured the CloudFront origin to point to the S3 static web server, how do I configure the Origin Settings & Cache Behavior Settings to send the header to the EC2 instance, as in the current nginx config?

Django media files not serving in production but serving in development (localhost)?

Django media files are served in development but not in production. Whatever image I upload through the Django admin is displayed on the website on localhost, but when I put my site live on DigitalOcean it is not displayed. How can I solve this issue, can anyone tell me? My website URL is http://139.59.56.161 (click on the "book test" menu).
Resurrecting a long-dead question, which was all I could find to help me out here; recording my answer for posterity. My "production" environment uses nginx as a reverse proxy in front of uwsgi hosting my Django application. The solution is that Django simply does not serve files in production; instead you should configure your web server to do that.
Django is slightly unhelpful in talking about static files and then saying 'media files: same.'
So I believe it's best to catch file requests up front, in my case in the nginx server, to reduce double-handling; your front-end web server is also the most optimised for the job.
To do this:
Within a server definition block in your /etc/nginx/sites-available/[site.conf], define the webroot, the directory on your server's file system that covers everything, with the declaration 'root [dir]':
server {
listen 80;
server_name example.com www.example.com;
root /srv/;
This next block tells nginx to send all the traffic to the uwsgi service running django - I lifted it holus bolus from an example, probably on digitalocean.com.
location / {
# Hand the traffic to the uwsgi socket that serves Django.
include uwsgi_params;
uwsgi_pass unix:/run/uwsgi/mysite6.sock;
# The proxy_* directives below apply only when proxying to an HTTP backend
# with proxy_pass; with uwsgi_pass handling this location they are unused.
# proxy_pass http://localhost:8000;
# proxy_set_header Host $host;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Client-Verify SUCCESS;
# proxy_set_header X-Client-DN $ssl_client_s_dn;
# proxy_set_header X-SSL-Subject $ssl_client_s_dn;
# proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
# proxy_read_timeout 1800;
# proxy_connect_timeout 1800;
}
Now, here are the bits we need to serve files when they are requested. try_files attempts to serve $uri and then $uri/; it would be a good idea to put a file like 'resource_not_found.html' in /srv and set it as the last fallback for try_files, so the user knows that this part has been unintentionally left blank (see the sketch after the server block below).
location /static/ {
try_files $uri $uri/ ;
}
location /media/ {
try_files $uri $uri/ ;
}
}
That concludes our server block for http, hence the extra close "}".
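A minimal sketch of that fallback idea, assuming the placeholder page is saved as /srv/resource_not_found.html (the filename is only an example). Note that the last try_files parameter triggers an internal redirect, which is matched against the locations again, so the fallback file needs its own location or it would be passed to Django as well:
location /static/ {
    try_files $uri $uri/ /resource_not_found.html;
}
location /media/ {
    try_files $uri $uri/ /resource_not_found.html;
}
location = /resource_not_found.html {
    # Nothing needed here: it is served statically from the root (/srv/) set above.
}
If you would rather return a plain error than a friendly page, try_files also accepts =404 as the last parameter.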
Alternatively, you can get uwsgi doing it by setting 'static-map' or 'static-map2'. 'static-map' "eats" the mapped url part, whereas static-map2 adds it.
static-map /files=/srv/files
means a request for /files/funny.gif will serve /srv/files/funny.gif.
static-map2 /files=/srv
will do the same thing, because it will take a request for /files/funny.gif and look for /srv/files/funny.gif. As per the uwsgi docs, you can create as many of these mappings as you want, even to the same uri, and they will be checked in order of appearance. Damnit, I've just now finally found the docs for nginx open source.
uwsgi docs

How can I configure Magento 2 with Varnish Cache and HTTPS

Thank you for looking into this.
I have a Magento 2.1.8 website and it will run on Amazon EC2 with this Amazon AMI: https://aws.amazon.com/marketplace/pp/B007OUYR4Y
I have optimized everything on the Magento 2 website but did not get the proper result.
I have tried to use the Varnish cache, but it is not working with HTTPS.
Does anyone have an idea how I can use Varnish with HTTPS to optimize the website speed?
Varnish Cache does not speak HTTPS natively. You'll need an SSL terminator such as Hitch, HAProxy, etc. deployed in front of Varnish, ideally using the PROXY protocol.
In my setups I use NGINX as a proxy to handle both HTTP and HTTPS requests and then use Varnish as the backend, so NGINX handles all the SSL certificates.
Here's an example of my NGINX ssl template:
server {
listen server-ip:443 ssl;
server_name example.com www.example.com;
ssl_certificate /home/user/conf/web/ssl.example.com.pem;
ssl_certificate_key /home/user/conf/web/ssl.example.com.key;
location / {
proxy_pass http://varnish-ip:6081;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Nginx on;
proxy_redirect off;
}
location @fallback {
proxy_pass http://varnish-ip:6081;
}
}
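The plain-HTTP side mentioned above isn't shown in the template; a minimal sketch of a companion port-80 server (reusing the server-ip placeholder and example names) that simply redirects everything to HTTPS:
server {
    listen server-ip:80;
    server_name example.com www.example.com;

    # Send plain-HTTP traffic to the HTTPS server above; some setups
    # instead proxy port 80 straight to Varnish on 6081.
    return 301 https://$host$request_uri;
}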

Nginx fails to serve static files after adding caching configuration

I am trying to implement leverage browser caching for my Flask project's nginx. But when I insert the code into the conf file, static files are no longer served by nginx, which instead returns a 403 permission denied error.
This is my conf file for the site in sites-enabled:
server {
listen 80;
server_name site.in;
root /root/site-demo/;
access_log /var/log/site/access_log;
error_log /var/log/site/error_log;
location / {
proxy_pass http://127.0.0.1:4000/;
proxy_redirect http://127.0.0.1:4000 http://site.in;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires 30d;
add_header Pragma public;
add_header Cache-Control "public";
}
}
When I remove the cache-expires part, everything works fine. I tried similar questions' answers and added a root specification, but the error remains the same. The user specified in nginx.conf is www-data. Do I have to change the user?
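A likely explanation, though not confirmed here: the regex location for static extensions takes precedence over the location / prefix, so once it is added those requests are no longer proxied to Flask; nginx tries to serve them itself from root /root/site-demo/, and www-data normally cannot traverse /root, hence the 403. Either move the project somewhere www-data can read (e.g. under /var/www/) and keep the regex location, or keep proxying everything and set the caching headers conditionally. A sketch of the second option using the map + expires pattern (expires accepts a variable since nginx 1.7.9); the $expires name and the chosen types are only examples:
# http context, e.g. at the top of the site file outside the server block:
# pick an expiry based on the Content-Type that Flask sends back.
map $sent_http_content_type $expires {
    default                   off;
    ~image/                   30d;
    ~text/css                 30d;
    ~application/javascript   30d;
}

server {
    listen 80;
    server_name site.in;

    location / {
        proxy_pass http://127.0.0.1:4000/;
        proxy_set_header Host $host;
        # Emits Expires/Cache-Control only for the mapped types.
        expires $expires;
    }
}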

Nginx/Django Admin POST https only

I've got an Nginx/Gunicorn/Django server deployed on a Centos 6 machine with only the SSL port (443) visible to the outside world. So unless the server is called with the https://, you won't get any response. If you call it with an http://domain:443, you'll merely get a 400 Bad Request message. Port 443 is the only way to hit the server.
I'm using Nginx to serve my static files (CSS, etc.) and all other requests are handled by Gunicorn, which is running Django at http://localhost:8000. So, navigating to https://domain.com works just fine, as do links within the admin site, but when I submit a form in the Django admin, the https is lost on the redirect and I'm sent to http://domain.com/request_uri which fails to reach the server. The POST action does work properly even so and the database is updated.
My configuration file is listed below. The location / section is where I feel like the solution should be found, but it doesn't seem like the proxy_set_header X-* directives have any effect. Am I missing a module or something? I'm running nginx/1.0.15.
Everything I can find on the internet points to the X-Forwarded-Protocol https like it should do something, but I get no change. I'm also unable to get the debugging working on the remote server, though my next step may have to be compiling locally with debugging enabled to get some more clues. The last resort is to expose port 80 and redirect everything...but that requires some paperwork.
[My nginx configure arguments](http://pastebin.com/Rcg3p6vQ)
server {
listen 443 ssl;
ssl on;
ssl_certificate /path/to/cert.crt;
ssl_certificate_key /path/to/key.key;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
server_name example.com;
root /home/gunicorn/project/app;
access_log /home/gunicorn/logs/access.log;
error_log /home/gunicorn/logs/error.log debug;
location /static/ {
autoindex on;
root /home/gunicorn;
}
location / {
proxy_pass http://localhost:8000/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Protocol https;
}
}
Haven't had time yet to understand exactly what these two lines do, but removing them solved my problems:
proxy_redirect off;
proxy_set_header Host $host;
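A hedged explanation of why removing them helps: with proxy_pass http://localhost:8000/, nginx's default proxy_redirect rewrites Location: http://localhost:8000/... headers from the backend into relative ones, so the browser stays on https. proxy_redirect off disables that rewrite, and Host $host makes Django build its redirects against the public domain over plain http, which then leak through unchanged. If you want to keep both directives, the more conventional route is to tell Django the original scheme instead; a sketch, assuming Django's SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') setting is also configured on the application side:
location / {
    proxy_pass http://localhost:8000/;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # The header name Django's SECURE_PROXY_SSL_HEADER convention expects:
    proxy_set_header X-Forwarded-Proto $scheme;
}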
