Will Cloudflare Cache bridge Heroku free-tier idle time? - heroku

I have a simple static website on the Heroku free tier. Heroku puts the dyno to sleep after a period of no traffic, and the first visitor to arrive afterwards has to wait roughly 30 seconds for the server to boot again.
My question is: would having the website cached by Cloudflare bridge this 30-second wait?

By default, Cloudflare respects the origin web server’s cache headers unless overridden via an Edge Cache TTL Page Rule.
You might be able to instruct Cloudflare to cache all of your static assets by making your server respond with the right headers. You can then clear Cloudflare's cache from their dashboard every time you update your website, but the site might still be cached indefinitely in your past visitors' browsers.
You can avoid this problem by making your server respond with reasonable browser cache headers and instructing Cloudflare to cache the content for a long time via an Edge Cache TTL page rule (clearing Cloudflare's cache manually every time you update your website).
Note that if your website handles non-static requests (e.g. login, signup, form submissions), this won't work for those at all.
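
As a rough sketch of that "short browser cache, long edge cache" split (the values are placeholders, not recommendations), the origin could send something like:

    Cache-Control: public, max-age=300, s-maxage=31536000

Shared caches such as Cloudflare's edge honour s-maxage, while max-age limits how long visitors' browsers keep their copy. A "Cache Everything" page rule with an Edge Cache TTL achieves a similar effect without touching the headers, and is needed anyway if you want Cloudflare to cache the HTML pages themselves rather than just static assets.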

Related

IE11 is very slow on HTTPS sites

We have an internal web application that has recently become very slow in IE11. Chrome and Firefox do not have this problem.
We have an HTTP version of the same website, and it is fast, with up to 6 concurrent HTTP connections to the server and session persistence.
However, when we switch to HTTPS, all of this changes.
We still have session persistence, but only 2 simultaneous connections to the servers, and each request seems to take ages. It is not the server response that is slow, but the "start" time before IE11 sends the request (and the next one). The connection diagram becomes a staircase, issuing one request at a time, one after the other, each taking 100-200 ms even if the response is only a couple of bytes.
The workstation has Symantec Endpoint Protection installed (12.1), and also Digital Guardian software (4.7).

Can any caching DNS servers refresh their cache asynchronously?

We run a latency-sensitive system. We found one significant cause of latency: some processes were making blocking DNS lookups to remote nameservers. To mitigate this, we have installed a local caching DNS resolver, specifically dnsmasq.
But we still see occasional significant pauses where queries to the local DNS cache (dnsmasq) can take a long time. These are caused by TTL expiry; in these cases dnsmasq queries its upstream server before responding to the local process.
We would like to eliminate these pauses, too. I would like our local DNS cache to always respond immediately, even if the response is stale. The cache should query its upstream server asynchronously. For example, if the cache serves a stale response, it could refresh this asynchronously. Or a more sophisticated policy would be to refresh the cache asynchronously shortly before the TTL expires.
But I can't find any such setting for dnsmasq, or for any other caching DNS servers I've looked at. Are any DNS servers designed to run in this configuration?
Knot Resolver with the configuration modules = { 'predict' } will start an asynchronous refresh of records that are put into an answer when their TTL is close to expiration.
Note that version 2.0.0 has a bug that defeats this refresh for records without DNSSEC signatures (will be fixed in the next release).
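A minimal kresd.conf sketch, assuming Knot Resolver's Lua-based configuration (the cache size is an arbitrary placeholder):

    -- kresd.conf: load the prediction/prefetch module with its defaults
    modules = { 'predict' }
    cache.size = 100 * MB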
The Unbound DNS server also does this, via its prefetch: yes/no option.
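In unbound.conf that is a single setting in the server clause:

    server:
        # Answer from cache and refresh entries that are close to expiring.
        prefetch: yes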

Nginx cache : return cached response AND forward request to backend server also

I want to use nginx cache (or varnish) in front of a web server (nginx).
The contents of the database are modified once a day, so the response time can be improved significantly by serving a cached result. However, the backend server still needs to receive and track each and every request in real time.
As a result, the cache won't reduce the load on the backend server, because it will still process and track every request, but the response to the client would be much faster.
Is it possible to do this in nginx or Varnish?
(i.e. return the cached response to the client instantly AND forward the request to the backend server at the same time)
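On the nginx side, one approach that might fit is the mirror module (available since nginx 1.13.4): serve the main request from the cache while a mirror subrequest copies it to the backend and has its response ignored. A rough sketch, with the upstream and cache settings as placeholders; note that slow mirror subrequests are known to delay keepalive handling, so it is worth benchmarking whether the client response stays as fast as you need:

    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

    upstream backend {
        server 127.0.0.1:8080;            # placeholder
    }

    server {
        listen 80;

        location / {
            mirror /track;                # copy every request to the backend
            proxy_pass http://backend;
            proxy_cache app_cache;
            proxy_cache_valid 200 24h;    # content only changes once a day
        }

        location = /track {
            internal;
            proxy_pass http://backend$request_uri;
        }
    }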

How to configure NginX to serve Cached Content only when Backend is down (5xx Resp. Codes)?

I've configured my system with Nginx listening on port 80, serving static content and proxying dynamic requests to a backend server.
I can configure Nginx to cache content generated by the backend, but I want this cached content to be served only when the backend responds with an HTTP 5xx error, or when it's completely down.
We tried the proxy_cache_use_stale option with a max-age of 1 second. It worked, but it has one downside: dozens of requests get served from the cache during that 1-second window, and those requests miss further backend processing (stats, for example).
We can only afford to live with that if the backend is down; the cache would then act as a backup or failover solution. As long as the backend is up and responding, no requests should be served from the cache.
I would appreciate any hints.
Take a look at proxy_cache_use_stale
proxy_intercept_errors might be what you're looking for.
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors
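As a concrete illustration of the proxy_cache_use_stale suggestion (zone name, upstream, and timings are placeholders; this keeps the short freshness window described in the question, so by itself it does not remove that trade-off):

    proxy_cache_path /var/cache/nginx keys_zone=failover_cache:10m;

    server {
        listen 80;

        location / {
            proxy_pass http://backend;    # placeholder upstream
            proxy_cache failover_cache;
            proxy_cache_valid 200 1s;     # entries stay "fresh" only briefly
            # Fall back to the stale cached copy only when the backend
            # errors out, times out, or returns a 5xx status.
            proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        }
    }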

Load balancing with nginx

I want to stop serving requests to my back-end servers if the load on those servers goes above a certain level. Anyone who is already surfing the site will still get routed, but new connections will be sent to a static "server busy" page until the load drops below a predetermined level.
I can use cookies to let current customers in, but I can't find information on how to do routing based on a custom load metric.
Can anyone point me in the right direction?
Nginx has an HTTP Upstream module for load balancing. Checking the responsiveness of the backend servers is done with the max_fails and fail_timeout options. Routing to an alternate page when no backends are available is done with the backup option. I recommend translating your load metrics into the options that Nginx supplies.
Let's say, though, that Nginx still sees the backend as "up" when the load is higher than you want. You may be able to adjust that further by tuning the maximum connections of the backend servers. So, if the backend servers can only handle 5 connections before the load is too high, you tune them to allow only 5 connections. Then, on the front end, Nginx will time out immediately when trying to open a sixth connection, and mark that server as inoperative.
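An upstream block along those lines might look like this (addresses, ports, and thresholds are placeholders; the backup server would host the static "server busy" page):

    upstream app_servers {
        server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
        server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
        # Only used once all of the servers above are marked unavailable.
        server 127.0.0.1:8081 backup;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }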
Another option is to handle this outside of Nginx. Software like Nagios can not only monitor load, but also proactively trigger actions based on the monitoring it does.
You can generate your Nginx configs from a template that has options to mark each upstream node as up or down. When a monitor detects that the upstream load is too high, it could re-generate the Nginx config from the template as appropriate and then reload Nginx.
A lightweight version of the same idea could be done with a script that runs on the same machine as your Nagios server and performs simple monitoring as well as the config file updates.
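A hypothetical sketch of such a script (host name, file paths, and the load threshold are all placeholders):

    #!/bin/sh
    # Fetch the 1-minute load average from a backend host (placeholder name).
    LOAD=$(ssh backend1 "cut -d' ' -f1 /proc/loadavg")

    # Choose an upstream config based on an arbitrary threshold of 5.
    if [ "$(echo "$LOAD > 5" | bc)" -eq 1 ]; then
        SRC=/etc/nginx/upstreams.busy.conf
    else
        SRC=/etc/nginx/upstreams.normal.conf
    fi

    # Only rewrite the config and reload Nginx when something actually changed.
    if ! cmp -s "$SRC" /etc/nginx/upstreams.conf; then
        cp "$SRC" /etc/nginx/upstreams.conf
        nginx -s reload
    fi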
