How to serve from the Cloudflare cache without requesting the Cloudflare Worker?

I have a Cloudflare Worker whose responses can be cached for a long time. I know I can use the Cache API inside the Worker, but I want the request to never reach the Worker at all while the cache TTL has not expired.
There will be more than 10 million requests to this URL, and I don't see the point of paying for a Worker that, most of the time, will just fetch a response from the Cache API.
I know a workaround: host the Worker's code on a regular server and use Page Rules to cache everything from that origin. But I'm wondering whether I could use the Worker as the origin and somehow make Page Rules work with it. Setting a Page Rule to Cache Everything with a cache TTL of 1 month still routes every request to the Worker and doesn't cache anything.

There's currently no way to do this.
It's important to understand that this is really a pricing question, not a technical one. Cloudflare has chosen to price Workers based on the traffic level of the site they serve. That pricing isn't necessarily based on Cloudflare's costs, and those costs wouldn't necessarily be lower if your Worker ran less often (the cost of deployment would not change, and the cost of executing a Worker is quite low), so it doesn't necessarily make sense for Cloudflare to offer a discount to Worker-based sites that manage to serve most responses from cache.
With that said, Cloudflare could very well decide to offer this discount in the future for competitive or other reasons. But, at this time, there are no plans for this.
There's a longer explanation on the Cloudflare forums: https://community.cloudflare.com/t/cache-in-front-of-worker/171258/8
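For reference, the pattern the question already mentions (using the Cache API inside the Worker, so the Worker still runs but does almost no work on a cache hit) looks roughly like the sketch below. It assumes the Cloudflare Workers runtime and its type definitions; handleExpensiveLogic is a placeholder for whatever the Worker really does:

    addEventListener("fetch", (event: FetchEvent) => {
      event.respondWith(handle(event));
    });

    async function handle(event: FetchEvent): Promise<Response> {
      const cache = caches.default;                 // Cloudflare's per-data-center edge cache
      const hit = await cache.match(event.request);
      if (hit) {
        return hit;                                 // cache hit: minimal work, but the Worker still ran (and is billed)
      }
      const response = await handleExpensiveLogic(event.request);
      // Store a copy asynchronously; the response needs cacheable headers for cache.put to keep it.
      event.waitUntil(cache.put(event.request, response.clone()));
      return response;
    }

    // Placeholder for the Worker's real logic.
    async function handleExpensiveLogic(_request: Request): Promise<Response> {
      return new Response("hello", { headers: { "Cache-Control": "public, max-age=2592000" } });
    }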

Related

Is it necessary to cache bust in HTTP/2?

In HTTP/1, to avoid extra network requests that determine whether resources should remain cached, we would set a high max-age or Expires value on static assets and give them a unique URL for each revision. But in HTTP/2, requests are cheap, so can we get by without cache busting and just rely on ETags, Last-Modified, and the like?
The only advantage I can see in continuing to bust the cache (besides serving both HTTP/1 and HTTP/2 clients) would be saving the bandwidth spent checking whether resources are out of date. And even that is probably going to be insignificant with HPACK. So am I missing something, or can I stop cache busting now?
The "necessary" part depends on how extreme do you feel about performance. In short, if you can live with three or four round-trips cache busting is not required. Otherwise cache busting is still the only way to remove those.
Here are some arguments related to HTTP/2 vs HTTP/1.1, the issue of latency, and the use of HTTP/2 Push.
HTTP/2 requests are not instantaneous
HTTP/2 requests are cheaper than HTTP/1.1 requests, but not by much. In HTTP/1.1, once the browser has opened its six to eight TCP connections to the server, it has six to eight channels on which to do revalidations. In some scenarios with high TCP packet loss, high latency, and especially at the beginning of a connection where TCP slow start dominates, the many TCP sockets of HTTP/1.1 actually work better than a single HTTP/2 TCP connection. HTTP/2 is good, but not a silver bullet.
HTTP/2 connections still have network latency. We have been averaging the round-trip time (RTT) of visitors to our site (it can be measured using HTTP/2 Ping), and because not everybody is on the same block as our server, our mean RTT is between 200 and 280 ms. A 304 revalidation costs 1 RTT. On a site that doesn't use asset concatenation, each new level of the asset tree costs a further RTT.
HTTP/2 Push can save you as many RTTs as you want while still working decently with the cache. But there are some issues; keep reading!
HTTP/2 Push works best with cache busting
The ideal scenario is that the server doesn't push resources the browser already considers fresh, but does push everything that has changed since the client's last visit.
If a browser considers a resource fresh (e.g. because of max-age), it rejects or simply doesn't use any push for that resource. That makes it impossible to refresh an asset the browser considers fresh via HTTP/2 Push.
Pushing 304 revalidations doesn't work either, due to a widespread bug in browsers, and those would be required if you relied on a small max-age.
Therefore, the only way to keep RTTs to a minimum, avoid pushing anything the browser already has, and still be able to push a new version of an asset is to use cache busting, i.e., a new name or query parameter for each new version of an asset.
See also
Url query parameters are still needed to update assets at clients
Interactions with the browser's cache
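As a concrete illustration of the cache busting described above, a build step can derive each asset's published name from a hash of its contents, so every new revision automatically gets a new URL. A minimal Node/TypeScript sketch (the file name app.js is just an example):

    import { createHash } from "node:crypto";
    import { readFileSync, copyFileSync } from "node:fs";

    // Hash the asset's contents and embed the hash in the published file name,
    // so the HTML can reference e.g. app.3f2a9c1b.js and cache it essentially forever.
    const contents = readFileSync("app.js");
    const hash = createHash("sha256").update(contents).digest("hex").slice(0, 8);
    const bustedName = `app.${hash}.js`;
    copyFileSync("app.js", bustedName);
    console.log(`publish ${bustedName} with Cache-Control: max-age=31536000, immutable`);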

Shouldn't CloudFlare with "Cache Everything" cache everything?

I have a CloudFlare account and found out that if I use Page Rules, I can enable a more aggressive cache setting called "Cache Everything". From what I read, it should basically cache everything. I tested it on a site that is completely static, and I set the expiration time to 1 day.
Now, after a few days of watching how many requests were served from the cache versus not, there has been no change: still about 25% of the requests are not served from the cache.
The two rules I've added for Cache Everything are:
http://www.example.com/
and
http://www.example.com/*
Both with Cache Everything and 1 day expiration time.
So my questions are: have I misinterpreted what Cache Everything does (I thought I should only get one request per page/file each day with this setting), or is something wrong with my rules? Or do I maybe need to wait a few days for the cache to kick in?
Thanks in advance
"Or maybe do I need to wait a few days for the cache to kick in?"
Our caching is really designed to work based on the number of requests for a resource (a minimum of three requests). It operates on the "hot files" of your site (those requested frequently) and is also very much data-center specific: if we get a lot of requests for a resource in one data center, for example, then we cache it there.
Also keep in mind that our caching will not cache third-party resources that are on your site (calls to ad platforms, etc.).
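One way to verify what Cloudflare is actually doing is to look at the CF-Cache-Status response header it adds (MISS, EXPIRED, HIT, etc.). A quick check, assuming Node 18+ for the global fetch:

    // Request the page a few times and print Cloudflare's cache verdict for each response.
    // The URL is the example domain from the question.
    async function check(url: string): Promise<void> {
      for (let i = 0; i < 4; i++) {
        const res = await fetch(url);
        // Expect MISS/EXPIRED until the file becomes "hot", then HIT.
        console.log(i, res.headers.get("cf-cache-status"));
      }
    }

    check("http://www.example.com/");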

How to slow down a web server at the nameserver level?

For scientific purposes, I would like to know how to slow down a web server at the DNS level.
Is it possible via the TTL setting?
Thank You
Ralph
It should not be possible to slow down the speed of a website (HTTP) solely by modifying the DNS response.
However, you could easily slow down the initial page load time via DNS by making the DNS server take an abnormally long time before returning its results. The problem is that this really only affects the initial load of the website, because after that, web browsers, computers, and ISPs will cache the results.
The TTL you mention only affects how long the DNS result should be cached, which generally has minimal effect on the speed of the website. That said, it would theoretically be possible to set the DNS TTL to a value close to 0, requiring the client to look up the IP via DNS again on nearly every page load. That would make nearly every new page from the website load very slowly.
The problem with this attack, however, is that in the real world vendors and ISPs often don't follow the rules exactly. There are numerous ISPs, and even some consumer devices, that don't honor low TTL values in DNS replies and will cache the DNS result for a decent period of time regardless of what the DNS server asked for.
So, from my experience lowering TTLs to very low values while moving services to new IPs, and still seeing ridiculously long caching times, I would say that while an attack like this may work, it would depend hugely on which DNS resolver each victim uses, and in most cases it would add close to no delay after the initial page load.
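If you want to see what TTL a resolver is actually handing back (and whether a low value survives the caches in between), Node's dns module can report it. A small sketch, assuming Node with TypeScript:

    import { promises as dns } from "node:dns";

    // resolve4 with { ttl: true } reports the remaining TTL the answering resolver returns,
    // so you can compare it with the value configured in the zone.
    async function showTtl(host: string): Promise<void> {
      const records = await dns.resolve4(host, { ttl: true });
      for (const { address, ttl } of records) {
        console.log(`${host} -> ${address} (ttl ${ttl}s)`);
      }
    }

    showTtl("example.com");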

Performance of memcache on a shared server

Lately I've been experimenting with improving the performance of my blog: not just one-click fixes, but also looking at the code, in addition to other things like a CDN, caching, etc.
I talked to my host about installing memcache so I can enable it in W3 Total Cache, and he seems to think it will actually hinder my site because it will instantly max out my RAM usage (which is 1 GB).
Do you think he is right, and should I try it anyway? My blog and forum (MyBB) get a combined 200,000 pageviews a month.
In fact, with 200,000 pageviews a month, I would move away from a shared host and buy a VPS or dedicated server. Memcache(d) is a good tool indeed, but there are lots of other ways to get better performance.
Memcached is good if you know how to use it correctly (the W3 Total Cache memcached integration doesn't really do the job).
As a performance engineer, I think a lot about speed, but also about server load. I work a lot with WordPress sites, and the way I push performance to the maximum on my servers is to generate static HTML pages of my WordPress sites; this results in zero or minimal access to the PHP handler itself, which increases performance a lot.
What you can then do is add a caching proxy in front of the web server, e.g. Varnish, which caches the results, so you never touch the web server either.
When a client requests your page, the proxy serves the already-processed page directly from memory, which is very fast. You then have a TTL on your files, which can be as low as 50 seconds (the default). 50 seconds doesn't sound like a lot, but with 200k pageviews a month that averages out to about 4.5 pageviews per minute if traffic were spread evenly, and that average says nothing about peak hours, which will be far busier.
When you do one page view, there is a lot of processing going on:
making the first request to the web server, starting the PHP process, fetching data from the DB, processing the data, rendering the PHP page, and so on. If we only have to do all this for a few requests instead of every request, performance improves a lot.
Often you can generate HTML files for your forum too; these would then be regenerated every 1-2 minutes when the file is requested, so only one request has to be fully processed instead of 4-9 (if not more).
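To make the idea concrete, here is a toy in-memory page cache with a short TTL, the same principle that Varnish or the generated-HTML approach applies at scale. It is only a sketch in TypeScript; renderPage stands in for the PHP/DB pipeline described above:

    // Serve a pre-rendered page from memory while it is younger than the TTL,
    // and only fall back to the expensive render path when the entry has expired.
    type Entry = { html: string; expiresAt: number };

    const cache = new Map<string, Entry>();
    const TTL_MS = 50_000; // the 50-second default mentioned above

    async function servePage(
      url: string,
      renderPage: (url: string) => Promise<string> // placeholder for the PHP/DB pipeline
    ): Promise<string> {
      const hit = cache.get(url);
      if (hit && hit.expiresAt > Date.now()) {
        return hit.html;                                        // served straight from memory
      }
      const html = await renderPage(url);                       // the expensive path
      cache.set(url, { html, expiresAt: Date.now() + TTL_MS }); // cache for the next requests
      return html;
    }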
You can limit the amount of memory memcached uses; if the memory is maxed out, the oldest entries are pruned. On CentOS/Debian there is /etc/default/memcached, and you can set the maximum memory with the -m flag.
In my experience, 64 MB or even 32 MB of memcached memory is enough for WordPress and makes a huge difference. Be sure not to cache whole pages (that fills the cache pretty fast); instead, use memcache for the WordPress object cache.
For general performance: make sure you have a recent PHP version (5.3+) and have APC installed. For database queries I would skip W3TC and go directly for the MySQL query cache.

How many dynos are required to host a static website on Heroku?

I want to host a static website on Heroku, but I'm not sure how many dynos to start off with.
This page, https://devcenter.heroku.com/articles/dyno-requests, says the number of requests a dyno can serve depends on the language and framework used. But I have also read somewhere that one dyno only handles one request at a time.
I'm a little confused: should one web dyno be enough to host a static website with very small traffic (<1000 views/month, <10/hour)? And how would you go about estimating the need for additional dynos as traffic starts to increase?
Hope I worded my question correctly. Would really appreciate your input, thanks in advance!
A little miffed since I had a perfectly valid answer deleted, but here's another attempt.
Heroku dynos are single-threaded, so they handle a single request at a time. If you had a dynamic page (PHP, Ruby, etc.), you would look at how long a page takes to respond at the server; say it took 250 ms, then a single dyno could deal with 4 requests per second. Adding more dynos increases concurrency, NOT performance. So with 2 dynos, in this scenario, you would be able to deal with 8 requests per second.
Since you're only talking about static pages, their response time should be much faster than that. The best way to tell whether you need more dynos is to look at your heroku log output and see whether you have sustained levels of the 'queue' value; that means the dynos are unable to keep up and requests are being queued for processing.
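For context, serving a static site from a single dyno needs nothing more than a minimal server bound to the port Heroku provides. A sketch, assuming Node with Express (neither of which the question requires):

    import express from "express";

    const app = express();
    app.use(express.static("public"));            // serve the site's files from ./public

    // Heroku injects the port to listen on via the PORT environment variable.
    const port = Number(process.env.PORT) || 3000;
    app.listen(port, () => console.log(`listening on ${port}`));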
Since most HTTP/1.1 clients will open two TCP connections to the web server when requesting resources, I have a hunch you'll see better performance for a single client if you start two dynos, so the client's concurrent resource requests can be handled concurrently as well.
You'll have to decide if it is worth the extra money for the (potentially slight) improved performance of a single client.
If you ever anticipate having multiple clients requesting information at once, then you'll probably want more than two dynos, just to make sure at least one is readily available for additional clients.
In this situation I would stay with one dyno: the first one is free, while the second puts you over the monthly minimum and starts to incur costs.
But you should also realize that with one dyno on Heroku, the app will go to sleep if it hasn't been accessed recently (I think this is around 30 minutes). In that case it can take 5-10 seconds to wake up again, which can give your users a very slow initial experience.
There are web services that will ping your site, testing its response and keeping it awake; http://www.wekkars.com/ for example.
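If you would rather not depend on a third-party service, the same keep-awake ping is a few lines of code run from any always-on machine. A sketch, assuming Node 18+ for the global fetch; the URL is a placeholder:

    // Request the site every 20 minutes so it is never idle long enough to sleep
    // (the threshold mentioned above is around 30 minutes).
    const SITE = "https://your-app.example.com/";   // placeholder URL
    setInterval(() => {
      fetch(SITE).catch((err) => console.error("ping failed", err));
    }, 20 * 60 * 1000);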
