Shouldn't CloudFlare with "Cache Everything" cache everything?

I have a CloudFlare account and found that with page rules I can use a more aggressive cache setting called "Cache Everything". From what I read, it should basically cache everything. I tested it on a completely static site and set the expiration time to 1 day.
Now, after a few days of watching how many requests are served from the cache versus not, there is no change: about 25% of requests are still not served from the cache.
The two rules I've added for Cache Everything are:
http://www.example.com/
and
http://www.example.com/*
Both with Cache Everything and 1 day expiration time.
So my questions are: have I misinterpreted the use of Cache Everything (I thought I should get only one request per page/file per day with this setting), or is something wrong with my rules? Or do I just need to wait a few days for the cache to kick in?
Thanks in advance

"Or maybe do I need to wait a few days for the cache to kick in?"
Our caching is designed to work based on the number of requests for a resource (a minimum of three requests). It basically works off the "hot files" on your site (those requested frequently), and it is also very much per data center: if one data center gets a lot of requests for a resource, for example, then we would cache it there.
Also keep in mind that our caching will not cache third-party resources that are on your site (calls to ad platforms, etc.).

Related

How to serve from Cloudflare cache without requesting the Cloudflare Worker?

I have a Cloudflare Worker whose responses can be cached for a long time. I know I can use the Cache API inside the worker, but I want requests to never reach the worker at all while the cache TTL has not expired.
There will be more than 10 million requests to this URL, and I don't see the point of paying for a Worker that, most of the time, will just fetch a response from the Cache API.
I know a workaround: host the worker code on a server and use Page Rules to cache everything from that origin. But I'm wondering if I could use the Worker as the origin and somehow make Page Rules work with it. Just setting a Page Rule to cache everything with a cache TTL of 1 month still routes all requests to the Worker and doesn't cache anything.
There's currently no way to do this.
It's important to understand that this is really a pricing question, not a technical question. Cloudflare has chosen to price Workers based on the traffic level of a site that is served using Workers. This pricing decision isn't necessarily based on Cloudflare's costs, and Cloudflare's costs wouldn't necessarily be lower if your Worker runs less often (since the cost of deployment would not change, and the cost of executing a worker is quite low), so it doesn't necessarily make sense for Cloudflare to offer a discount for Worker-based sites that manage to serve most responses from cache.
With that said, Cloudflare could very well decide to offer this discount in the future for competitive or other reasons. But, at this time, there are no plans for this.
There's a longer explanation on the Cloudflare forums: https://community.cloudflare.com/t/cache-in-front-of-worker/171258/8
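In the meantime, the in-Worker Cache API approach the question already mentions follows a simple match-then-put pattern. The sketch below is not the real Workers runtime: `caches.default` and `ctx.waitUntil` are replaced by an injected cache object, and the calls are kept synchronous (they are asynchronous in a real Worker) so the logic is visible on its own.

```javascript
// Sketch of the cache-wrapped Worker pattern. In a real Worker, `cache` would
// be caches.default, match/put would be awaited, and the put would be wrapped
// in ctx.waitUntil() so it can finish after the response is returned.
function serveWithCache(key, cache, buildResponse) {
  const hit = cache.match(key);
  if (hit !== undefined) return hit;  // cache hit: the Worker still ran (and billed), but the expensive work is skipped
  const fresh = buildResponse(key);   // the expensive part the Worker normally does
  cache.put(key, fresh);
  return fresh;
}

// Map-backed stand-in for caches.default, with a counter showing that the
// expensive part runs only once per key.
const mockCache = {
  store: new Map(),
  match(key) { return this.store.get(key); },
  put(key, value) { this.store.set(key, value); },
};
let builds = 0;
const build = key => { builds += 1; return `response for ${key}`; };

serveWithCache('/page', mockCache, build);
serveWithCache('/page', mockCache, build);
console.log(builds); // 1: the second request was served from the cache
```

Even so, as the answer says, every request still invokes (and bills) the Worker; the cache only saves the work done inside it.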

How to do cache warmup in TYPO3

In another question, there is a recommendation to set up cache_clearAtMidnight via TypoScript and do a subsequent cache warmup.
I would like to know how to do this cache warmup because I did not find a scheduler task to do it.
(Clearing the entire cache once a day seems excessive, but the cache warmup seems like a good idea to me in any case.)
As I don't know whether there is an internal mechanism in TYPO3 for cache warming, I built my own little cache warmer based around a simple PHP script (can actually be anything – Python, PHP, Bash,...). The script reads the sitemap.xml and requests each page via cURL.
I use a custom user agent to exclude these requests from statistics.
curl_setopt($ch, CURLOPT_USERAGENT, 'cache warming - TYPO3');
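The same idea can be sketched in Node.js (as noted above, the script can be anything). The sitemap URL below is a placeholder, and `warm()` is not invoked here since it performs real network requests.

```javascript
// Sitemap-driven cache warmer, sketched in Node.js. The sitemap URL is a
// placeholder for your own site.
const SITEMAP_URL = 'https://www.example.com/sitemap.xml';

// Extract all <loc> entries from a sitemap.xml document.
function extractLocs(xml) {
  return [...xml.matchAll(/<loc>\s*([^<]*?)\s*<\/loc>/g)].map(m => m[1]);
}

// Request every page once so it lands in the TYPO3 page cache. The custom
// user agent lets these hits be excluded from statistics.
async function warm() {
  const xml = await (await fetch(SITEMAP_URL)).text();
  for (const url of extractLocs(xml)) {
    await fetch(url, { headers: { 'User-Agent': 'cache warming - TYPO3' } });
  }
}
```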
You can use this extension. It's a simple wget wrapper, but you can add it as a Scheduler task.
https://github.com/visuellverstehen/t3fetch
There are extensions available to do cache warmup:
crawler
b13/warmup
See also this relatively new blog post (part 1) on caching by Benni Mack:
Caching in TYPO3 - Part 1
In general, there are a number of things to consider as well, e.g. changing cache duration, optimizing for pages to load faster without being cached etc.
By the way, cache_clearAtMidnight does not actively clear the cache at midnight; it sets the cache expiry time to midnight. Once the cache has expired, the page is regenerated on the next hit. The effect is the same, but it's good to know.

From a purely caching point of view, is there any advantage using the new Cache API instead of regular http cache?

The arrival of service workers has led to a great number of improvements to the web. There are many use cases for service workers.
However, from a purely caching point of view, does it make sense to use the Cache API?
Many approaches make assumptions of how resources will be handled.
Often only the URL is used to determine how the resource should be handled with strategies such as Network first, Network Only, Stale-while-revalidate, Cache first and Cache only. This can be tedious work, because you have to define a specific handler for many URLs. It's not scalable.
Instead, I was thinking of using the regular HTTP cache in combination with the Cache API. Response headers contain useful information that can be used to cache and to verify whether the cached copy can still be used or a new version is available. Together with best-practice caching (for example https://jakearchibald.com/2016/caching-best-practices/), this could create a generic service worker that does not have to be updated when resources change.
Based on the response headers, a resource could be handled by a custom handler. If the headers would ever be updated, it would be possible to handle the resource with a different handler if necessary.
But then I realised, I was just reimplementing browser cache with the Cache API. This would mean that the resources would be cached double (take this with a grain of salt), by storing it in both the browser and the service worker cache. Additionally, while the Cache API provides more control, most handlers can be (sort of) simulated with http cache:
Network only: Cache-Control: no-store
Cache only: Cache-Control: immutable
Cache first: Cache-Control: max-age with validation (Etag, Last Modified, ...)
Stale-while-revalidate: Cache-Control: stale-while-revalidate
I don't immediately see how to simulate network first, but then again this would imply support for offline usage (or bad connection). (Keep in mind, this is not the use case I'm looking for).
While it's always useful to provide a fallback (using service workers & Cache API), is it worth having the resources possibly cached double and having copied the browser's caching logic? I'm aware that the Cache API can be used to precache resources, but I think these could also be precached by requesting them in advance.
Lastly, I know the browser is in charge of managing the browser cache and a developer has limited control over it (using HTTP Cache headers).
But the browser could also choose to remove the whole service worker cache to clear disk space. There are ways to make sure the cache persists, but that's not the point here.
My questions are:
What advantages has the Cache API that can't be simulated with regular browser cache?
What could be cached with the Cache API, but not with regular browser cache?
Is there another way to create a service worker that does not need to be updated
What advantages has the Cache API that can't be simulated with regular browser cache?
The Cache API was created to be manipulated by a service worker, so you can do nearly anything you want with it, but you can't interfere with or do anything to the HTTP cache; that is all internal browser mechanics. I'm not sure, but I believe the HTTP cache is completely wrecked when you're offline, while the Cache API is not.
What could be cached with the Cache API, but not with regular browser cache?
Before caching a request, you can alter it to fit your needs, or even cache a response served with Cache-Control: 0 if you want. You can even store custom data that you will need later.
Is there another way to create a service worker that does not need to be updated
It needs a bit of work, but two solutions to achieve that are:
On each page load, communicate with the service worker using postMessage to compare a version (it can be an id, a hash, or even the whole list of assets). If it differs, you can load the list of resources from a given location and add it to the cache. (Because this relies on JavaScript, it won't work from AMP.)
Each time a user loads a page, or every 10-20 minutes (or both, whatever you want), fetch a file to learn your asset version; if it differs, do the same as in the first solution.
Hope this helps.
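The header-driven idea from the question (pick a strategy from the response's Cache-Control header instead of per-URL handlers) can be sketched outside a real service worker. The strategy names follow the question's list, and the mapping below is an assumption based on it, not a standard.

```javascript
// Map a Cache-Control header value to a caching strategy, roughly following
// the correspondence listed in the question. This mapping is an assumption.
function strategyFor(cacheControl = '') {
  const cc = cacheControl.toLowerCase();
  if (cc.includes('no-store')) return 'network-only';
  if (cc.includes('stale-while-revalidate')) return 'stale-while-revalidate';
  if (cc.includes('immutable') || cc.includes('max-age')) return 'cache-first';
  return 'network-first';
}

console.log(strategyFor('public, max-age=3600')); // "cache-first"
console.log(strategyFor('no-store'));             // "network-only"
```

A generic fetch handler would then call `strategyFor` on the cached response's headers and dispatch to the matching logic; the drawback the question identifies, duplicating what the browser's HTTP cache already does, remains.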

Magento Admin suddenly slowed down

We have Magento EE 1.14. The admin was working fine until two days ago, when its speed dropped dramatically. The frontend is not affected, and there have been no changes in code or server configuration. Here are my attempts to fix the problem, none of which worked:
Log cleaning is properly configured
Removed two unused extensions, but no improvement
Tried disabling non-critical extensions to see if speed would improve, but no luck
I cannot use a Redis cache at this time, but I have configured a new server that uses Redis and will move to it next month
Sometimes the backend regains speed for a few minutes
I enabled profilers; the source of the delay is "mage" (screenshot attached).
Here are my questions:
Is there any way to know the exact reason for the "mage" delay?
Are there other tests I can use to identify the cause of the delay?
Thanks in advance,
It could be a delay connecting to external resources. Do you have New Relic or similar software? Check there for slow connections. If you don't have New Relic, profile the admin with blackfire.io; the built-in Magento profiler is really unhelpful :)
Follow below steps:
Delete unused extensions
It is best to remove unused extensions rather than just disabling them. If you disable an extension, it still exists in the database, which not only increases the size of your database (DB) but also adds to its read time. So keep your approach clear: if you don't need it, DELETE it.
Keep your store clean by deleting unused and outdated products
Keep in mind that a clean store is a fast store. The front end can be kept fast by caching and displaying only a limited set of products, even with more than 10,000 items in the back end, but the back end still has to carry all of them: as the number of products keeps increasing, it may get slower, so it is best to remove unused products. Repeat this activity every few months to keep the store fast.
Reindexing
One of the basic reasons why website administrators experience slow performance while saving a product is because of reindexing. Whenever you save a product, the Magento backend starts to reindex, and since you have a lot of products, it will take some time to complete. This causes unnecessary delays.
Clear the Cache
Cache is very important as far as any web application is concerned so that a web server does not have to process the same request again and again.

Performance of memcache on a shared server

Lately I've been experimenting with increasing performance on my blog, and not just one-click fixes but also looking at code in addition to other things like CDN, cache, etc.
I talked to my host about installing memcache so I can enable it in W3 Total Cache and he seems to think it will actually hinder my site as it will instantaneously max out my RAM usage (which is 1GB).
Do you think he is accurate, and should I try it anyway? My blog and forum (MyBB) get a combined 200,000 pageviews a month.
In fact, with 200,000 pageviews a month, I would move away from a shared host and buy a VPS or a dedicated server. Memcache(d) is a good tool indeed, but there are lots of other ways to get better performance.
Memcached is good if you know how to use it correctly (the W3 Total Cache memcached integration doesn't do the job).
As a performance engineer, I think a lot about speed, but also about server load. I work a lot with WordPress sites, and the way I push performance to the maximum on my servers is to generate static HTML pages of my WordPress sites. This results in zero or minimal access to the PHP handler itself, which increases performance a lot.
What you can then do in addition is add another caching proxy, e.g. Varnish, in front of the web server. It caches results, which means you never touch the web server either.
What it does: when a client requests your page, it serves the already-processed page directly from memory, which is very fast. You then have a TTL on your files, which can be as low as 50 seconds (the default). 50 seconds doesn't sound like a lot, but 200k pageviews a month averages out to only about 4.6 pageviews per minute if the traffic were spread evenly; peak hours will of course be higher.
When you do 1 page view, there will be a lot of processing going on:
Making the first request to the web server, starting the PHP process, processing data, grabbing things from the DB, rendering the PHP page, and so on. If we only have to do this for a few requests, performance improves a lot.
Often you should be able to generate HTML files of your forum too, renewed every 1-2 minutes if there is a request for the file. That means 1 request gets fully processed instead of 4-9 (if not more).
You can limit the amount of memory that memcached uses. If the memory is maxed out the oldest entries are pruned. In CentOS/Debian there is /etc/default/memcached and you can set the maximum memory with the -m flag.
In my experience, 64 MB or even 32 MB of memcached memory is enough for WordPress and makes a huge difference. Be sure not to cache whole pages (that fills the cache pretty fast); instead, use memcached for the WordPress object cache.
For general performance: make sure you have a recent PHP version (5.3+) and APC installed. For database queries I would skip W3TC and go directly to the MySQL query cache.
