Cache purge does not work (not even Purge Everything) if I add a page rule like the following (Image#1), which sets both Browser Cache TTL and Edge Cache TTL to 4 hours.
By "does not work" I mean the URL is still cached, as shown in Image#3. On the first load after adding the rule above (before the response is cached), the URL takes over 500ms; once cached it takes 2ms, and it still takes 2ms even after a purge (shown in Image#2). I also tried Purge Everything.
Image#1
Image#2
Image#3
So I am stuck: purge does not work at all once such a rule (Image#1) is in place.
I believe the issue here is that Cloudflare doesn't support wildcard purge on the lower plan levels. You must enter the exact URL(s) you want to purge (one by one) or clear the cache completely for the entire site.
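If you need to purge individual URLs regularly, the Cloudflare purge API can be scripted. Below is a minimal sketch in PHP; the zone ID, API token, and URL are placeholders you would replace with your own values:

    <?php
    // Purge specific URLs from the Cloudflare cache via the API (sketch).
    $zoneId   = 'YOUR_ZONE_ID';    // placeholder
    $apiToken = 'YOUR_API_TOKEN';  // placeholder

    $payload = json_encode([
        'files' => [
            'https://www.example.com/some/cached/url', // exact URL(s) to purge
        ],
    ]);

    $ch = curl_init("https://api.cloudflare.com/client/v4/zones/{$zoneId}/purge_cache");
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => [
            "Authorization: Bearer {$apiToken}",
            'Content-Type: application/json',
        ],
    ]);
    echo curl_exec($ch); // the JSON response reports "success": true when the purge was accepted
    curl_close($ch);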
I'm implementing an amp-live-list for our site and I have everything set up. Everything looks good when I go to the AMP version of my live blog pages (where the element is implemented); however, when I run the URL through Google, i.e. https://www.google.com/amp/www.example.com/test-live-blog/amp, it can take up to 3-4 minutes for an update to come through, even though polling is set to the minimum 15 seconds.
The delay directly on the AMP URL, i.e. https://www.example.com/test-live-blog/amp, is around the expected 15 second mark. Does Google AMP have a separate cache or request header it uses? What response header can I set to try and reduce this time to live for the AMP version of my document? I can't find any suitable documentation for these kinds of caching questions. Thanks.
The Google AMP Cache respects the max-age header, as specified in the docs:
The cache follows a "stale-while-revalidate" model. It uses the origin's caching headers, such as Max-Age, as hints in deciding whether a particular document or resource is stale. When a user makes a request for something that is stale, that request causes a new copy to be fetched, so that the next user gets fresh content.
The Google AMP Cache, including the case where the cache ping is used, has some latency that is on the order of minutes; the lowest I have seen is about a minute.
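As a rough sketch, assuming the AMP document is served by a PHP front controller (the file path and TTL values below are illustrative, not from the original post), you can send a short max-age from the origin so the AMP Cache treats its copy as stale sooner:

    <?php
    // Serve the AMP page with a short freshness lifetime so the Google AMP Cache
    // considers it stale quickly (sketch; values are illustrative).
    header('Content-Type: text/html; charset=utf-8');
    header('Cache-Control: max-age=15, stale-while-revalidate=60');
    readfile(__DIR__ . '/test-live-blog.amp.html'); // hypothetical pre-rendered AMP document

Even with a short max-age, the stale-while-revalidate model quoted above means the first visitor after expiry may still see the old copy while a fresh one is fetched in the background.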
I'm running APC mainly to cache objects and query data as user cache entries. Each item is set up with a TTL relevant to how long it's required in the cache; some items are 48 hours, but most are 2-5 minutes.
It's my understanding that when the timeout is reached, i.e. the current time passes the creation time plus the TTL, the item should be automatically removed from the user cache entries?
This doesn't seem to be happening, though, and the items are instead staying in memory. I thought the garbage collector would remove these items, but it doesn't seem to have, even though it's currently running once an hour.
The only other thing I can think of is that the default apc.user_ttl = 0 overrides the individual timeout values, so entries are never removed even after their individual timeouts expire?
In general, a cache manager SHOULD keep your entries for as long as possible, and MAY delete them if/when necessary.
The Time-To-Live (TTL) mechanism exists to flag entries as "expired", but expired entries are not automatically deleted, nor should they be, because APC is configured with a fixed memory size (via the apc.shm_size configuration item) and there is no advantage in deleting an entry when you don't have to. There is a relevant blurb in the APC documentation:
If APC is working, the Cache full count number (on the left) will display the number of times the cache has reached maximum capacity and has had to forcefully clean any entries that haven't been accessed in the last apc.ttl seconds.
I take this to mean that if the cache never "reached maximum capacity", no garbage collection will take place at all, and that this is the right thing to do.
More specifically, I'm assuming you are using the apc_add/apc_store functions to add your entries; their TTL parameter has a similar effect to apc.user_ttl, which the documentation explains as:
The number of seconds a cache entry is allowed to idle in a slot in case this cache entry slot is needed by another entry
Note the "in case" statement. Again I take this to mean that the cache manager does not guarantee a precise time to delete your entry, but instead try to guarantee that your entries stays valid before it is expired. In other words, the cache manager puts more effort on KEEPING the entries instead of DELETING them.
apc.ttl doesn't do anything unless there is insufficient allocated memory to store newly incoming variables; if there is sufficient memory, the cache will never expire. So you have to specify a TTL for every variable you store using apc_store() or apc_add() to force APC to regenerate it after the TTL passed to the function has elapsed. If you use opcode caching, it will also never expire unless the file is modified (when apc.stat=1) or there is no memory left. So apc.user_ttl and apc.ttl actually have nothing to do with it.
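A minimal sketch of per-entry TTLs with the user cache (the keys, values, and the fetch_recent_posts_from_db() helper are illustrative):

    <?php
    // Store user-cache entries with explicit per-entry TTLs (sketch).
    $siteConfig = ['theme' => 'default'];                 // rarely-changing data
    apc_store('config:site', $siteConfig, 48 * 3600);     // keep for 48 hours

    $recentPosts = fetch_recent_posts_from_db();          // hypothetical query helper
    apc_store('query:recent_posts', $recentPosts, 300);   // keep for 5 minutes

    // On read, treat a miss (expired or evicted) as the signal to rebuild the entry;
    // expired entries may still occupy memory until APC needs the slot.
    $recentPosts = apc_fetch('query:recent_posts', $hit);
    if (!$hit) {
        $recentPosts = fetch_recent_posts_from_db();
        apc_store('query:recent_posts', $recentPosts, 300);
    }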
I'd like to set browser caching for some Amazon S3 files. I plan to use this metadata:
Cache-Control: max-age=86400, must-revalidate
that's equal to one day.
Many of the examples I see look like this:
Cache-Control: max-age=3600
Why only 3600 and why not use must-revalidate?
For a file that I rarely change, how long should it be cached?
What happens if I update the file and need that update to be seen immediately, but its cache doesn't expire for another 5 days?
Why only 3600?
Presumably because the author of that particular example decided that one hour was an appropriate cache timeout for that page.
Why not use must-revalidate?
If the response does not contain information that is strictly required to follow the cache rules you set, omitting must-revalidate could in theory allow a few more requests to be served from the cache. See this answer for details; the most relevant part is this quote from the HTTP spec:
When a cache has a stale entry that it would like to use as a response to a client's request, it first has to check with the origin server (or possibly an intermediate cache with a fresh response) to see if its cached entry is still usable.
For a file that I rarely change, how long should it be cached?
Much web performance advice says to set the cache expiration very far into the future, such as a few years. This way, the client browser only downloads the data once, and subsequent visits are served from the cache. This works well for "truly static" files, such as JavaScript or CSS.
On the other hand, if the data is dynamic but does not change too often, you should set an expiration time that is reasonable for your specific scenario. Do you need to get the newest version to the customer as soon as it's available, or is it okay to serve a stale version? Do you know when the data changes? An hour or a day is often an appropriate trade-off between server load, client performance, and data freshness, but it depends on your requirements.
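For S3 specifically, the Cache-Control value is set as object metadata at upload time. A sketch using the AWS SDK for PHP (the bucket name, key, region, and file path are placeholders):

    <?php
    // Upload an object with a Cache-Control header of one day (sketch).
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    $s3 = new S3Client([
        'region'  => 'us-east-1',   // placeholder region
        'version' => 'latest',
    ]);

    $s3->putObject([
        'Bucket'       => 'my-bucket',                        // placeholder
        'Key'          => 'assets/logo.png',                  // placeholder
        'SourceFile'   => '/path/to/logo.png',                // placeholder
        'CacheControl' => 'max-age=86400, must-revalidate',   // one day, as in the question
    ]);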
What happens if I update the file and need that update to be seen immediately, but its cache doesn't expire for another 5 days?
Give the file a new name, or append a value to the querystring. You will of course need to update all links. This is the general approach when static resources need to change.
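For example, a small cache-busting helper could look like the sketch below; the bucket URL and version constant are hypothetical, and the version would be bumped on each deployment:

    <?php
    // Append a version token to asset URLs so a changed file gets a new URL (sketch).
    const ASSET_VERSION = '20240601';  // hypothetical; bump on every deployment

    function asset_url(string $path): string
    {
        return 'https://my-bucket.s3.amazonaws.com/' . ltrim($path, '/')
            . '?v=' . ASSET_VERSION;
    }

    // In a template, echo asset_url('images/logo.png') inside the img src attribute,
    // so browsers request the new URL instead of reusing the old cached copy.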
Also, here is a nice overview of the cache control attributes available to you.
I am using memcache to store larger session data that is running up against MySQL/PHP limitations.
One thing, however, is that these items may not be requested again, and hence the 'lazy' memcache purging may not work.
I'm trying to determine if there is a function that will purge all expired caches without having to walk through all of them.
OR ... if memcache will only consume so much memory, and then purge the expired as needed to make room ..
Just looking for the best way to handle these caches.
Thanks
One thing, however, is that these items may not be requested again, and hence the 'lazy' memcache purging may not work.
I'm trying to determine if there is a function that will purge all expired caches without having to walk through all of them.
I'm not really sure what you mean by this. There is no need to purge stuff that is expired. If it is expired, it doesn't really count anymore. You won't get these results because they aren't valid anymore (expired).
OR ... if memcache will only consume so much memory, and then purge the expired as needed to make room ..
Well, if you have expired items, they will be overwritten, so I don't see what the issue is.
Let's say you have room for 2 key/value pairs. If you have one that is expired and one that is in use but not expired, and you need to save a third, you can. You don't need to actually purge anything, because memcache will remove the expired one.
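For completeness, a minimal sketch of storing an oversized session payload in memcached with an explicit expiration (assumes the Memcached extension; the server address, key, and rebuild_session_data() helper are placeholders):

    <?php
    // Store oversized session data in memcached with an explicit expiration (sketch).
    session_start();

    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);                        // placeholder server address

    $key = 'session_data:' . session_id();
    $mc->set($key, $_SESSION['large_payload'] ?? [], 3600);    // expire after 1 hour

    // On a later request, a miss simply means the item expired (or was evicted): rebuild it.
    $data = $mc->get($key);
    if ($data === false && $mc->getResultCode() === Memcached::RES_NOTFOUND) {
        $data = rebuild_session_data(session_id());            // hypothetical rebuild step
        $mc->set($key, $data, 3600);
    }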
I run a Symfony 1.4 project with a very large amount of data. The main page and category pages use pagers, which need to know how many rows are available. I'm passing a query that contains joins to the pager, which leads to a loading time of 1 minute on these pages.
I configured cache.yml for the respective actions, but I think this workaround is insufficient, and here are my assumptions:
Symfony rebuilds the cache within a single request made by a user. Let's call this user the "cache-victim" to simplify things.
In our case, the data needs to be up to date - a lifetime of 10 minutes would be sufficient. Obviously, the cache won't be rebuilt if no user is willing to be the "cache-victim" and just cancels the request instead. Are these assumptions correct?
So, I came up with this idea:
Symfony should fake the HTTP request when rebuilding the cache. The new cache entries should be written to a temporary file/directory and swapped with the previous cache entries as soon as cache rebuilding has finished.
Is this possible?
In my opinion, this is similar to the concept of double buffering.
Wouldn't it be silly if there were a single "gpu-victim" in a multiplayer game who sees the screen building up line by line? (This is a lopsided comparison, I know ... ;) )
Edit
There is no "cache-victim" - Every 10 minutes page reloading takes 1 minute for every user.
I think your problem is due to some missing or wrong indexes. I have a sf1.4 project for a large soccer site (i.e. 2M pages/day), and the pagers aren't that slow even though our database has more than 1M rows these days. Take a look at your query with EXPLAIN and check where it goes bad...
Sorry for necromancing (is there a badge for that?).
By configuring cache.yml you are just caching the view layer of your app (that is, css, js and html) for REQUESTS WITHOUT PARAMETERS. Navigating the pager obviously has a ?page=X on the GET request.
Taken from symfony 1.4 config.yml documentation:
An incoming request with GET parameters in the query string or submitted with the POST, PUT, or DELETE method will never be cached by symfony, regardless of the configuration.
http://www.symfony-project.org/reference/1_4/en/09-Cache
What might help you is to cache the database results, but it's a painful process in symfony/doctrine. Refer to:
http://www.symfony-project.org/more-with-symfony/1_4/en/08-Advanced-Doctrine-Usage#chapter_08_using_doctrine_result_caching
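A rough sketch of what that looks like with Doctrine 1.2's result cache backed by APC; the model names, TTL, cache key, and pager setup below are illustrative, not taken from the original question:

    <?php
    // During bootstrap: register an APC-backed result cache driver (sketch).
    Doctrine_Manager::getInstance()
        ->setAttribute(Doctrine_Core::ATTR_RESULT_CACHE, new Doctrine_Cache_Apc());

    // In the action: cache the expensive joined query that feeds the pager.
    $query = Doctrine_Query::create()
        ->from('Article a')
        ->leftJoin('a.Category c')
        ->useResultCache(true, 600, 'category_pager');  // keep results for 10 minutes

    $pager = new sfDoctrinePager('Article', 20);
    $pager->setQuery($query);
    $pager->setPage($request->getParameter('page', 1));
    $pager->init();

With the result cache in place, only the first request after the 10-minute lifespan re-runs the joined query; subsequent requests are served from the cached row set.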
Edit:
This might help you as well:
http://www.zalas.eu/symfony-meets-apc-alternative-php-cache