How to configure Nginx to serve cached content only when the backend is down (5xx response codes)?

I've configured my system with Nginx listening on port 80, serving static content and proxying dynamic requests to a backend server.
I can configure Nginx to cache content generated by the backend, but I want this cached content to be served only when the backend responds with an HTTP 5xx error, or when it's completely down.
We tried the proxy_cache_use_stale option with a max-age of 1 second. It worked, but it has one drawback: during that 1-second window, dozens of requests are served from the cache, and those requests miss further backend processing (statistics, for example).
We can only afford to live with that drawback if the backend is down; the cache would then act as a backup or failover solution. As long as the backend is up and responding, no requests should be served from the cache.
I would appreciate any hints.

Take a look at proxy_cache_use_stale

proxy_intercept_errors might be what you're looking for.
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors
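Putting the two suggestions together, here is a minimal sketch (the cache zone name, cache path, and backend address are placeholders, not from the question): proxy_cache_use_stale falls back to the stale cached copy only when the backend errors out, times out, or answers with a 5xx, while proxy_intercept_errors plus error_page covers the case where nothing usable is in the cache yet.

proxy_cache_path /var/cache/nginx keys_zone=failover:10m max_size=1g inactive=7d;

upstream backend {
    server 127.0.0.1:8080;    # placeholder backend address
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_cache failover;

        # Entries go stale after 1 second, so a healthy backend still
        # handles (almost) every request itself.
        proxy_cache_valid 200 1s;

        # Serve the stale copy only when the backend fails, times out,
        # or returns a 5xx response.
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;

        # If there is no cached copy to fall back on, let error_page
        # serve a static page instead of the backend's error.
        proxy_intercept_errors on;
        error_page 500 502 503 504 /50x.html;
    }

    location = /50x.html {
        root /usr/share/nginx/html;    # placeholder location of a static error page
    }
}

Note that this still has the 1-second window the question mentions: a few requests may be answered from a freshly stored cache entry, and the backend will not see those.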

Related

Can I use CloudFlare's Proxy with an AWS Application Load Balancer?

I have created a listener on the LB (load balancer) with rules so that requests to different subdomains are routed accordingly. I have set a CNAME for each subdomain in Cloudflare.
The problem is with Cloudflare's Proxy feature: when I turn it off, my page works without a problem, but when I turn it on, requests time out.
Is there a way to use the Proxy feature with an LB?
The very simple answer is yes, this is very possible to do.
The longer answer is that a timeout suggests that some change Cloudflare makes to the request is causing your AWS setup to hang. A couple of examples:
A firewall rule that drops traffic from Cloudflare's IP ranges, so connections simply time out
Servers not respecting the X-Forwarded-For header, so that all requests appear to come from a small group of Cloudflare IPs and confuse the application logic (see the sketch below)
It would help to know whether the requests are reaching your load balancer and, furthermore, whether the servers behind the LB are receiving them.
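On the second point, if the servers behind the LB happen to be running nginx, restoring the real client address from the forwarded headers is a small config change; a sketch using the realip module (the IP ranges below are placeholders; use your VPC CIDR and Cloudflare's published ranges):

# Trust the ALB and Cloudflare as forwarding proxies (placeholder ranges;
# substitute your VPC CIDR and the ranges listed at https://www.cloudflare.com/ips/).
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 173.245.48.0/20;

# Take the client address from X-Forwarded-For, skipping over the
# trusted proxies listed above.
real_ip_header X-Forwarded-For;
real_ip_recursive on;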

Nginx cache : return cached response AND forward request to backend server also

I want to use nginx cache (or varnish) in front of a web server (nginx).
The contents of the database are modified once a day, so the response time can be significantly improved by serving a cached result. However, the backend server still needs to receive and track each and every request in real time.
As a result, the cache won't reduce the load on the backend server, because it will still process and track every request, but the response to the client would be much faster.
Is it possible to do this in nginx or Varnish?
(i.e. return the cached response to the client instantly AND forward the request to the backend server at the same time)
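One way to get this behavior in nginx (1.13.4 or later) is the mirror module: the client is answered from the primary location, which can be cached, while a copy of every request is replayed to a second, internal location whose response is discarded. A sketch, with placeholder names and addresses:

proxy_cache_path /var/cache/nginx keys_zone=daily:10m inactive=24h;

upstream backend {
    server 127.0.0.1:8080;    # placeholder backend address
}

server {
    listen 80;

    location / {
        # Answer from the cache, refreshed once a day.
        proxy_pass http://backend;
        proxy_cache daily;
        proxy_cache_valid 200 24h;

        # Additionally replay every request to the backend so it can
        # still track it; the mirrored response is thrown away.
        mirror /tracking;
        mirror_request_body on;
    }

    location = /tracking {
        internal;
        proxy_pass http://backend$request_uri;
    }
}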

Can I use https with Varnish Cache

Can I use varnish cache with https or will this have little to no performance gain? What are the pros and cons? I've set up my vcl for http only. I want to try this with https now.
I've read this but it's from 2011:
https://www.varnish-cache.org/docs/trunk/phk/ssl.html
Varnish in itself does not support SSL and is very unlikely to do so in the foreseeable future.
To use SSL and still be able to cache with Varnish, you have to terminate the SSL before the request is sent to Varnish. This can be done efficiently by, for instance, HAProxy or Nginx.
To find out exactly how to configure this, a simple Google search for "ssl termination haproxy/nginx" will yield more than enough results.
You set the X-Forwarded-For header in HAProxy. If an X-Forwarded-For header is already set, other reverse proxies will just append their own address to it; the left-most (first) address is the original client address. You don't have to think about that: anything that reads and uses X-Forwarded-For headers will sort it out automagically.
You also want to set X-Forwarded-Proto so you can do all sorts of magic in Varnish, like redirecting traffic that isn't using TLS without hitting your backend servers, and keeping the HTTP and HTTPS caches separate. Since Varnish doesn't talk TLS, mixing the two can lead to some interesting results, like images not being served because they are requested over HTTP while the page is served over HTTPS.
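For the Nginx route, a minimal termination block in front of Varnish might look like this (the hostname, certificate paths, and the assumption that Varnish listens locally on port 6081 are placeholders):

server {
    listen 443 ssl;
    server_name example.com;                        # placeholder hostname

    ssl_certificate     /etc/ssl/example.com.crt;   # placeholder certificate paths
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # Hand the decrypted request to Varnish.
        proxy_pass http://127.0.0.1:6081;

        # Preserve the client address and the original scheme so Varnish
        # and the backend can tell HTTPS traffic apart.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}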
Side question, are you using HAProxy to actually load balance between multiple backends? If not, why not just terminate the TLS connection in Apache, send that to Varnish and then back to Apache again?

Is it better to use CORS or nginx proxy_pass for a RESTful client-server app?

I have a client-server app where the server is a Ruby on Rails app that renders JSON and understands RESTful requests. It's served by nginx+passenger and its address is api.whatever.com.
The client is an AngularJS application that consumes these services. It is served by a second nginx server and its address is whatever.com.
I can either use CORS for cross-subdomain ajax calls or configure the client's nginx to proxy_pass requests to the Rails application.
Which one is better in terms of performance and less trouble for developers and server admins?
Unless you're Facebook, you are not going to notice any performance hit from having an extra reverse proxy. The overhead is tiny: it's basically parsing a bunch of bytes and then sending them over a local socket to another process. A reverse proxy in Nginx is easy enough to set up, and it's unlikely to be an administrative burden.
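To illustrate how small that setup is, a sketch of the client's nginx config proxying a same-origin path to the Rails app (the /api/ prefix and the docroot are assumptions, not from the question):

server {
    listen 80;
    server_name whatever.com;

    # Static files of the AngularJS client (placeholder docroot).
    root /var/www/client;

    # Same-origin path that is quietly proxied to the Rails API,
    # so the browser never has to make a cross-origin request.
    location /api/ {
        proxy_pass http://api.whatever.com/;
        proxy_set_header Host api.whatever.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}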
You should worry more about browser support. CORS is supported by almost every browser, with the usual exceptions of older versions of Internet Explorer and some mobile browsers.
Juvia uses CORS but falls back to JSONP. No reverse proxy setup.

Load balancing with nginx

I want to stop serving requests to my backend servers if the load on those servers goes above a certain level. Anyone who is already surfing the site will still get routed through, but new connections will be sent to a static "server busy" page until the load drops below a predetermined level.
I can use cookies to let the current customers in, but I can't find information on how to do routing based on a custom load metric.
Can anyone point me in the right direction?
Nginx has an HTTP Upstream module for load balancing. Checking the responsiveness of the backend servers is done with the max_fails and fail_timeout options. Routing to an alternate page when no backends are available is done with the backup option. I recommend translating your load metrics into the options that Nginx supplies.
Let's say, though, that Nginx still sees the backend as being "up" when the load is higher than you want. You may be able to adjust that further by tuning the maximum connections of the backend servers. So, maybe the backend servers can only handle 5 connections before the load is too high, so you tune them to allow only 5 connections. Then on the front end, Nginx will time out immediately when trying to send a sixth connection, and mark that server as inoperative.
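A sketch of what the nginx side of that could look like, using the upstream parameters mentioned above (the addresses, the limit of 5, and the local port for the busy page are placeholders; max_conns enforces the connection cap on the nginx side and is available in open-source nginx since 1.11.5):

upstream app_servers {
    # max_fails/fail_timeout decide when a backend counts as down,
    # max_conns caps concurrent connections per backend.
    server 10.0.0.11:8080 max_conns=5 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_conns=5 max_fails=3 fail_timeout=30s;

    # Used when the regular servers are unavailable.
    server 127.0.0.1:8081 backup;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        proxy_next_upstream error timeout http_502 http_503;
    }
}

# Minimal local server for the static "server busy" page.
server {
    listen 127.0.0.1:8081;
    root /var/www/busy;                  # placeholder docroot containing busy.html
    location / {
        try_files /busy.html =404;       # serve the busy page for every URI
    }
}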
Another option is to handle this outside of Nginx. Software like Nagios can not only monitor load but can also proactively trigger actions based on the monitoring it does.
You can generate your Nginx configs from a template that has options to mark each upstream node as up or down. When a monitor detects that the upstream load is too high, it could re-generate the Nginx config from the template as appropriate and then reload Nginx.
A lightweight version of the same idea could be done with a script that runs on the same machine as your Nagios server and performs simple monitoring as well as the config file updates.
