Nginx cache: return cached response AND also forward the request to the backend server

I want to use the nginx cache (or Varnish) in front of a web server (nginx).
The contents of the database are modified only once a day, so response times can be significantly improved by serving a cached result. However, the backend server still needs to receive and track each and every request in real time.
As a result, the cache won't reduce the load on the backend server, because it will still process and track every request, but the response to the client would be much faster.
Is it possible to do this in nginx or Varnish?
(i.e. return the cached response to the client instantly AND forward the request to the backend server at the same time).
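One possible approach, as a minimal sketch rather than a tested config: nginx's ngx_http_mirror_module (available since nginx 1.13.4) creates a fire-and-forget copy of every incoming request while the client is answered from the proxy cache. The mirror subrequest is made in the precontent phase, so it should fire even on cache hits, but verify this against your nginx version. The backend address, cache path, and zone name below are assumptions.

```nginx
# assumed cache path and zone name
proxy_cache_path /var/cache/nginx keys_zone=daily:10m max_size=1g;

upstream backend {
    server 127.0.0.1:8080;            # assumed backend address
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_cache daily;
        proxy_cache_valid 200 24h;    # content only changes once a day

        # fire-and-forget copy of every request; the mirrored
        # response is discarded, so the client is not slowed down
        mirror /mirror;
        mirror_request_body on;
    }

    location = /mirror {
        internal;
        proxy_pass http://backend$request_uri;
    }
}
```

One caveat: on a cache miss the backend receives the request twice (once via proxy_pass, once via the mirror), so the tracking logic may need to recognize mirrored requests, e.g. by adding a marker header with proxy_set_header inside the mirror location.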

Related

High response time for HTTPS requests on Elastic Beanstalk

I am currently hosting a Laravel project on Elastic Beanstalk. The issue is that requests made over HTTPS have much slower response times (5 seconds on average). I have ruled out network issues, and the server's CPU and RAM are not fully utilized. Additionally, php-fpm (with nginx) is correctly configured with 16 pools on each instance (t3.small).
The problem shows up mainly with Axios (XHR) requests, but sometimes plain HTML pages experience the same issue. You can test this yourself by visiting https://laafisoft.bf (open the developer tools to check the response time). The configuration that I am using for the Load Balancer can be found in the image below. The certificate that I am using for HTTPS is issued by AWS Certificate Manager (RSA 2048).
When testing, I also noticed that requests over HTTP (port 80) were much faster (200 ms on average), but after some time the response time for HTTP requests increased to the same level as for HTTPS requests. I am confident that the issue is not related to my Laravel application or to a database problem. For comparison, I have the same version of the website hosted on DigitalOcean without a load balancer, and it has much faster response times (https://demo.laafisoft.bf).
Any help is welcome; I'm new to AWS, so maybe I'm missing something.

Will Cloudflare Cache bridge Heroku free-tier idle time?

I have a simple static website on the Heroku free tier. Heroku puts dynos to sleep after some time without traffic, and the first user who comes back has to wait ~30 seconds until the server boots again.
My question is whether having the website cached by Cloudflare would bridge this 30-second waiting time.
By default, Cloudflare respects the origin web server’s cache headers unless overridden via an Edge Cache TTL Page Rule.
You might be able to instruct Cloudflare to cache all of your static assets by making your server respond with the right headers. You can clear Cloudflare's cache from their dashboard every time you update your website, but the website might still be cached forever in your past visitors' browsers.
You can avoid this problem by making your server respond with reasonable cache headers and instructing Cloudflare to cache it forever using their Edge Cache TTL page rule (and clearing the edge cache manually every time you update your website).
Note that if your website uses non-static requests (e.g. login, signup, form submissions), this won't work at all.
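For illustration, assuming an nginx origin (the site in the question sits on Heroku, so translate accordingly): a short browser TTL keeps past visitors from caching stale pages forever, while a Cloudflare Edge Cache TTL page rule can still hold the same responses at the edge for much longer.

```nginx
# hypothetical origin config: browsers revalidate after 5 minutes,
# while a Cloudflare "Edge Cache TTL" page rule can cache the same
# responses at the edge for as long as you configure there
location / {
    add_header Cache-Control "public, max-age=300";
}
```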

IE11 is very slow on HTTPS sites

We have an internal web application, and recently it has become very slow in IE11. Chrome and Firefox do not have this problem.
We have an HTTP version of the same website, and this one is fast, with up to 6 concurrent HTTP connections to the server and session persistence.
However, when we change to HTTPS, all of this changes.
We still have session persistence, but only 2 simultaneous connections to the server, and each request seems to take ages. It is not the server response that is slow, but the "start" time before IE11 sends the request (and the next one). The connection diagram turns into a staircase, issuing one request at a time, one after the other, each taking 100-200 ms even if the request itself returns only a couple of bytes.
The workstation has Symantec Endpoint Protection (12.1) installed, as well as Digital Guardian software (4.7).

Measuring response time for a Vaadin/Atmosphere based application

I have a Vaadin application that uses WebSockets (Atmosphere) to push data to the browser. This means there is no normal HTTP request/response cycle: from the looks of it, I do get HTTP GET requests, but the responses are pushed out over the WebSocket asynchronously.
Because of this, the response time (and message size) metrics I can get from the web server logs are useless.
How can I get information logged as to how long every request processing took?
Chrome Developer Tools lets you see timestamps of messages sent over WebSockets (in the Network tab).
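If you need the numbers in your own logs rather than in DevTools, one option is to instrument the push itself. A minimal sketch, assuming Vaadin 8 with manual push mode and SLF4J (the helper and its names are hypothetical): measure from the moment the update starts until the changes have been flushed over the WebSocket.

```java
import java.util.concurrent.TimeUnit;

import com.vaadin.ui.UI;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical helper: logs how long a UI update takes, from the start
// of the work until the changes are flushed over the websocket, since
// the normal HTTP request/response cycle no longer reflects this.
public final class TimedPush {
    private static final Logger LOG = LoggerFactory.getLogger(TimedPush.class);

    public static void run(UI ui, String label, Runnable update) {
        final long start = System.nanoTime();
        ui.access(() -> {
            update.run();  // mutate components here
            ui.push();     // flush over the websocket (manual push mode)
            LOG.info("{} pushed in {} ms", label,
                    TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));
        });
    }
}
```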

How to configure nginx to serve cached content only when the backend is down (5xx response codes)?

I've configured my system with nginx listening on port 80, serving static content and proxying dynamic requests to a backend server.
I can configure nginx to cache content generated by the backend, but I want this cached content to be served only when the backend responds with an HTTP 5xx error, or when it's completely down.
We tried the proxy_cache_use_stale option with a max-age of 1 second. It worked, but it has one downside: dozens of requests are served from cache during that 1-second freshness window, and those requests miss further backend processing (stats, for example).
We can only afford to live with this downside IF the backend is down; the cache would then act as a backup or failover solution. But as long as the backend is up and responding, no requests should be served from cache.
I would appreciate any hints.
Take a look at proxy_cache_use_stale
proxy_intercept_errors might be what you're looking for.
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors
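For reference, a minimal sketch of the setup discussed in the question and answers above (cache path, zone name, and backend address are assumptions). A 1-second proxy_cache_valid keeps entries almost always stale, so nearly all traffic reaches the backend, and proxy_cache_use_stale serves the stale copy when the backend errors, times out, or returns a 5xx; as the question notes, requests inside the 1-second freshness window are still plain cache hits.

```nginx
proxy_cache_path /var/cache/nginx keys_zone=failover:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;    # assumed backend address
        proxy_cache failover;

        # entries are fresh for only 1 second, so almost every
        # request is passed through to the backend
        proxy_cache_valid 200 1s;

        # serve the stale copy only when the backend fails
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    }
}
```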
