Nginx always gets index.html from cache - caching

Hello, I'm trying to configure an nginx cache system.
This is my configuration file:
proxy_cache_valid 200 1m;
proxy_ignore_headers Set-Cookie;
add_header Cache-Control public;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache my_cache;
proxy_pass http://myurl;
Everything works well. If I look at the response headers:
X-Cache-Status: HIT
But if, for example, I change my index.html and make another request, the old one is shown.
I think I'm missing something in this configuration. Is there some other configuration for getting the best performance from the nginx cache?
Can someone help me?
Thanks
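A minimal sketch of one way to tackle this, assuming the my_cache zone from the configuration above (the X-No-Cache bypass header is only an illustrative name, not something nginx defines):
location / {
    proxy_cache my_cache;
    proxy_cache_valid 200 1m;             # cached 200 responses expire after one minute
    proxy_cache_revalidate on;            # refresh expired entries with If-Modified-Since
    proxy_cache_bypass $http_x_no_cache;  # send "X-No-Cache: 1" to fetch a fresh copy
    proxy_ignore_headers Set-Cookie;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://myurl;
}
With proxy_cache_valid 200 1m, up to one minute of stale content is expected behaviour; the bypass variable and proxy_cache_revalidate only shorten how long an updated index.html stays hidden.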

Related

Downtime when deploying Laravel to Azure

I'm deploying a Laravel site to an Azure Web App (running Linux).
After upgrading to PHP 8 and nginx, I experience a lot more downtime after deployment: several minutes of nginx's Bad Gateway error.
In order to get Laravel working with nginx, I need to copy an nginx conf file from my project into nginx's config on the server.
I'm running startup.sh after deploy, which has the following commands as its first lines:
cp /home/site/wwwroot/devops/nginx.conf /etc/nginx/sites-available/default;
service nginx reload
Content of my nginx.conf:
server {
# adjusted nginx.conf to make Laravel 8 apps with PHP 8.0 features runnable on Azure App Service
# #see https://laravel.com/docs/8.x/deployment
listen 8080;
listen [::]:8080;
root /home/site/wwwroot/public;
index index.php;
client_max_body_size 100M;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
gzip on;
gzip_proxied any;
gzip_min_length 256;
gzip_types
application/atom+xml
application/geo+json
application/javascript
application/x-javascript
application/json
application/ld+json
application/manifest+json
application/rdf+xml
application/rss+xml
application/xhtml+xml
application/xml
font/eot
font/otf
font/ttf
image/svg+xml
text/css
text/javascript
text/plain
text/xml;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ [^/]\.php(/|$) {
fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
fastcgi_pass 127.0.0.1:9000;
include fastcgi_params;
fastcgi_param HTTP_PROXY "";
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param QUERY_STRING $query_string;
fastcgi_intercept_errors on;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 3600;
fastcgi_read_timeout 3600;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
}
}
I've also tried using Azure Deployment Slots, but the swap happens before the Bad Gateway error has gone away.
Is there something else I can do to minimize the downtime/time for the project to get up and running again?
The "Bad Gateway" error suggests that Nginx is unable to connect to the backend, which in this case is PHP-FPM.
There are a few things you can try to minimize the downtime:
Increase the fastcgi_connect_timeout, fastcgi_send_timeout, and fastcgi_read_timeout values in your nginx configuration file. This will give PHP-FPM more time to start up and respond to requests.
Optimize your PHP code. Make sure your code is optimized for performance, as this will help reduce the time it takes for the site to start up.
Use Azure Deployment Slots for testing. Deployment slots allow you to test your code in a staging environment before deploying it to production. This can help reduce the risk of downtime in your production environment.
Try to make sure that your PHP-FPM and nginx services are always running, and that they are started automatically when the server boots up.
Try to reduce the number of restarts needed during deployment by using a deployment process that utilizes rolling upgrades.
Finally, you can try deploying a simple HTML file first, and then deploy the Laravel codebase. This will ensure that the web server and PHP are working before deploying the Laravel codebase.
Use trial and error to find out the best solution for your use case.
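As a rough illustration of the timeout/readiness points above, here is a sketch of what startup.sh could look like (the nc probe against 127.0.0.1:9000 and the 60-second limit are assumptions, not Azure specifics):
#!/bin/sh
# Copy the project's nginx config into place as before.
cp /home/site/wwwroot/devops/nginx.conf /etc/nginx/sites-available/default

# Refuse to reload on an invalid config instead of taking the site down.
nginx -t || exit 1

# Wait up to ~60 seconds for PHP-FPM to accept connections on 127.0.0.1:9000
# before reloading nginx, so requests are not answered with Bad Gateway.
i=0
while ! nc -z 127.0.0.1 9000 && [ "$i" -lt 60 ]; do
    i=$((i + 1))
    sleep 1
done

service nginx reload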

Redirect from HTTP to HTTPS issue in NGINX on Google Compute Engine

We already tried other solutions on Stack Overflow but they didn't work for us.
We are having issues redirecting our domain URL from HTTP to HTTPS.
When we hit http://example.com, it does not get redirected to https://example.com. We have also set up a Google-managed SSL certificate on the load balancer in our Google Cloud network services.
We are using Google Cloud Compute Engine to host the website and Google Domains for the URL. Apart from that, we are using NGINX as our web server and Laravel as our framework. We also contacted the Google support team, but that didn't resolve it.
Front and Backend Load Balancer Configuration:
PHP Framework - Laravel V8
Compute Engine - Debian 10 Buster
Below is the code for NGINX config file.
NGINX Default Config file
server
{
listen 80;
server_name example.in www.example.in;
root /var/www/html/test;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
So the configuration below really solved my issue.
I just added a new port 80 (HTTP) frontend configuration to my load balancer along with the port 443 (HTTPS) one.
Now the domain URL gets redirected from HTTP to HTTPS over a secure connection.
Please refer to the screenshot of my load balancer frontend configuration.
Thank you @JohnHanley for your answer ;)
I think your NGINX configuration needs to be adjusted to listen on port 443, and you need to obtain an SSL certificate accordingly.
Please refer to: https://cloud.google.com/community/tutorials/https-load-balancing-nginx.
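A minimal sketch of that idea, reusing the server_name and root from the question (the certificate paths are placeholders, not real files):
server {
    listen 80;
    server_name example.in www.example.in;
    # Redirect every plain-HTTP request to HTTPS.
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    server_name example.in www.example.in;
    root /var/www/html/test;
    # Placeholder paths - replace with the actual certificate and key.
    ssl_certificate     /etc/ssl/certs/example.in.crt;
    ssl_certificate_key /etc/ssl/private/example.in.key;
    # ... keep the existing locations, fastcgi block, etc. here ...
}
Note that if TLS actually terminates at the Google load balancer, as in the accepted fix above, the redirect is better handled at the load balancer (or keyed off the X-Forwarded-Proto header) so it does not loop.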

HTTP/2 server pushed assets fail to load (HTTP2_CLIENT_REFUSED_STREAM)

I get the following error, ERR_HTTP2_CLIENT_REFUSED_STREAM, in the Chrome DevTools console for all or some of the assets pushed via HTTP/2.
Refreshing the page and clearing the cache randomly fixes this issue partially and sometimes completely.
I am using nginx with HTTP/2 enabled (SSL via Certbot) and Cloudflare.
server {
server_name $HOST;
root /var/www/$HOST/current/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
set $auth "dev-Server.";
if ($request_uri ~ ^/opcache-api/.*$){
set $auth off;
}
auth_basic $auth;
auth_basic_user_file /etc/nginx/.htpasswd;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
location ~* \.(?:css|js)$ {
access_log off;
log_not_found off;
# Let Laravel query strings bust the cache
expires 1M;
add_header Cache-Control "public";
# Or force cache revalidation.
# add_header Cache-Control "public, no-cache, must-revalidate";
}
location ~* \.(?:jpg|jpeg|gif|png|ico|xml|svg|webp)$ {
access_log off;
log_not_found off;
expires 6M;
add_header Cache-Control "public";
}
location ~* \.(?:woff|ttf|otf|woff2|eot)$ {
access_log off;
log_not_found off;
expires max;
add_header Cache-Control "public";
types {font/opentype otf;}
types {application/vnd.ms-fontobject eot;}
types {font/truetype ttf;}
types {application/font-woff woff;}
types {font/x-woff woff2;}
}
listen 443 ssl http2;
ssl_certificate /etc/letsencrypt/live/$HOST/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/$HOST/privkey.pem; # managed by Certbot
# include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = $HOST) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name $HOST;
listen 80;
return 404; # managed by Certbot
}
Googling this error doesn't return many results. If it helps, it's a Laravel 6 app that pushes those assets; if I disable asset pushing in Laravel, then all assets load correctly.
I don't even know where to start looking.
Update 1
I enabled Chrome logging and inspected the logs using Sawbuck, following the instructions provided here, and found that the actual error is related to a 414 HTTP response, which implies some caching problem.
Update 2
I found this great article, The browser can abort pushed items if it already has them, which states the following:
Chrome will reject pushes if it already has the item in the push cache. It rejects with PROTOCOL_ERROR rather than CANCEL or REFUSED_STREAM.
It also links to some Chrome and Mozilla bugs.
This led me to disable Cloudflare completely and test directly against the server. I tried various Cache-Control directives and also tried disabling the header, but the same error occurs upon a refresh after a cache clear.
Apparently, Chrome cancels the HTTP/2 pushed assets even when they are not present in the push cache, leaving the page broken.
For now, I'm disabling HTTP/2 server push in the Laravel app as a temporary fix.
We just got the exact same problem you're describing. We got "net::ERR_HTTP2_CLIENT_REFUSED_STREAM" on one of our JavaScript files. Reloading and clearing the cache worked, but then the problem came back, seemingly at random. The same issue appeared in Chrome and Edge (Chromium-based). Then I tried Firefox and got the same behavior, but Firefox complained that the response for that URL was "text/html". My guess is that for some reason we had gotten a "text/html" response cached for that URL in Cloudflare. When I opened that URL directly in Firefox I got "application/javascript", and then the problem went away. Still not quite sure how this all happened though.
EDIT:
In our case it turned out that the response for a .js file was blocked by the server with a 401, and we didn't send out any cache headers. Cloudflare tried to be helpful: because the browser was expecting a .js file, the response was cached even though the status was 401. This later failed for others because we then tried to HTTP/2-push a text/html response with status 401 as a .js file. Luckily, Firefox gave us a better, actionable error message.
EDIT2:
It turns out it wasn't an HTTP header cache issue. We had cookie authentication on .js files, and it seems that HTTP/2 push requests don't always include cookies. The fix was to allow cookieless requests for the .js files.
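That answer doesn't include its configuration, but as a hedged nginx-style sketch of what "allow cookieless requests on the .js files" could look like (assuming basic auth similar to the config in the question; the Cache-Control value is illustrative):
# Serve JavaScript without requiring credentials, so HTTP/2 pushed streams
# that arrive without cookies or auth are not answered with a 401.
location ~* \.js$ {
    auth_basic off;
    add_header Cache-Control "public, max-age=2592000";
    try_files $uri =404;
}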
For anyone who comes here using Symfony's local development server (symfony server:start): I could not fix it, but this setting (which is the default) stops the server from trying to push preloaded assets, and the error goes away:
config/packages/dev/webpack_encore.yaml
webpack_encore:
    preload: false

Laravel Glide - 404 Accessing Images: NGINX Cache Issue

Within my Laravel 5.1 app, I'm using https://github.com/thephpleague/glide to resize images and serve them via a .cache directory once resized. I am running into the exact same issue as Laravel Glide not finding images at urls with extension.
I have had to disable caching in NGINX:
# cache.appcache, your document html and data
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
expires -1;
access_log logs/static.log;
}
# Feed
location ~* \.(?:rss|atom)$ {
expires 1h;
add_header Cache-Control "public";
}
# Media: images, icons, video, audio, HTC
location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
# CSS and Javascript
location ~* \.(?:css|js)$ {
expires 1y;
access_log off;
add_header Cache-Control "public";
}
# WebFonts
# If you are NOT using cross-domain-fonts.conf, uncomment the following directive
location ~* \.(?:ttf|ttc|otf|eot|woff|woff2)$ {
expires 1M;
access_log off;
add_header Cache-Control "public";
}
After disabling these rules I can access the images again. Any tips? I can still "see" the images when this is enabled, but my server cannot read them. I'm a little lost on what could be going on here. Thanks for any help!
Take 'jpg|jpeg|gif|png|' out of your matches, restart nginx, and clear the browser cache. Glide will sort out the caching for your images.
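With those extensions removed, the Media block from the question would look something like this (only the remaining extensions from the original rule are kept):
# Media: icons, video, audio, HTC - images handled by Glide are left out
location ~* \.(?:ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
    expires 1M;
    access_log off;
    add_header Cache-Control "public";
}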

Exclude directory from nginx cache headers via "Additional nginx directives" on Plesk VPS

I'm running a Joomla site on a Plesk VPS with Apache + nginx. In the "Web Server Settings" in Plesk for that domain, under "Additional nginx directives" (which will override the server-wide nginx configuration), I have specified:
add_header Cache-Control "private, max-age=604800, must-revalidate";
Everything works well, the site now gets served with proper cache headers, and Google PageSpeed stopped complaining - but I'm now getting some errors with back-end functions, such as when uploading batches of images to my website's gallery. This seems to be related to the above, as it works normally again when the directive is removed.
How can I rewrite the additional nginx directive above so that the /administrator/ directory of my site is excluded from having any cache headers added by nginx?
Use a separate location block for the path you want to exclude. add_header directives defined inside a location override the ones inherited from the surrounding level, so /administrator/ no longer gets the long-lived cache header:
location ^~ /administrator/ {
add_header Cache-Control "no-cache";
}
If you have nginx+apache:
add_header Cache-Control "private, max-age=604800, must-revalidate";
location ^~ /administrator/ {
add_header Cache-Control "no-cache, max-age=1";
proxy_pass http://[....IP address of domain ....]:7080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
