Cache invalidate operation and cache flush operation in cache memory - caching

What is meant by a cache invalidate operation and a cache flush operation in cache memory?
Please explain it in layman's terms, as I'm very new to cache-related topics.
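In rough terms, invalidating a cache line throws its contents away without saving them, while flushing writes any modified (dirty) contents back to main memory first (and is often combined with an invalidate). The toy Java sketch below is made up purely for illustration; real caches do this in hardware, not in code.

```java
// Toy model of a single cache line, used only to illustrate the difference
// between "invalidate" and "flush". All names here are made up for the example.
public class CacheLineDemo {
    static byte[] mainMemory = new byte[64];   // pretend main memory
    static byte[] lineData   = new byte[64];   // the cached copy of that memory
    static boolean valid = false;              // does the line hold usable data?
    static boolean dirty = false;              // was the cached copy modified?

    // Invalidate: discard the cached copy. Any un-written changes are lost.
    static void invalidate() {
        valid = false;
        dirty = false;
    }

    // Flush: write modified data back to memory so memory is up to date.
    // Many CPUs combine this with an invalidate ("flush and invalidate").
    static void flush() {
        if (valid && dirty) {
            System.arraycopy(lineData, 0, mainMemory, 0, lineData.length);
            dirty = false;
        }
    }

    public static void main(String[] args) {
        // The CPU writes into the cache line; memory is not updated yet (write-back).
        System.arraycopy(mainMemory, 0, lineData, 0, lineData.length);
        valid = true;
        lineData[0] = 42;
        dirty = true;

        flush();        // now mainMemory[0] == 42
        invalidate();   // the line is empty; the data survives only in memory
        System.out.println("memory[0] after flush = " + mainMemory[0]);
    }
}
```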

Related

Write allocation policy with caches [duplicate]

I was wondering about the write-allocate policy of caches: on a write miss, we first fetch the data from main memory into the cache and then update it in the cache. If we also use a write-back policy, then after updating the data in the cache we don't have to update main memory at the same time. What I don't understand is why, with write-allocate plus write-back, we fetch the old data from memory into the cache first and then update it, instead of just putting the new data directly into the cache.
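As a rough sketch of the flow described above (made-up names, a single 64-byte line, not real hardware code): on a write miss with write-allocate, the whole line is fetched from memory first, because the store typically changes only a few bytes of the line and the remaining bytes must still be correct; with write-back, main memory is only updated later, when the dirty line is written back.

```java
import java.util.Arrays;

// Simplified write-allocate + write-back flow for a single 64-byte line.
// All names are illustrative; this is a sketch, not how real hardware works.
public class WriteAllocateDemo {
    static final int LINE_SIZE = 64;
    static byte[] memory = new byte[1024];     // pretend main memory

    static byte[] cachedLine = new byte[LINE_SIZE];
    static int cachedLineAddr = -1;            // which memory block the line holds
    static boolean dirty = false;

    // A one-byte store with write-allocate + write-back.
    static void storeByte(int address, byte value) {
        int lineAddr = address - (address % LINE_SIZE);
        if (lineAddr != cachedLineAddr) {                  // write miss
            writeBackIfDirty();
            // Write-allocate: fetch the whole line from memory first, because the
            // store only changes one byte and the other 63 bytes must stay correct.
            System.arraycopy(memory, lineAddr, cachedLine, 0, LINE_SIZE);
            cachedLineAddr = lineAddr;
        }
        cachedLine[address - lineAddr] = value;            // update only the cache
        dirty = true;                                      // memory updated later
    }

    // Write-back: memory is updated only when the dirty line is evicted or flushed.
    static void writeBackIfDirty() {
        if (cachedLineAddr >= 0 && dirty) {
            System.arraycopy(cachedLine, 0, memory, cachedLineAddr, LINE_SIZE);
            dirty = false;
        }
    }

    public static void main(String[] args) {
        Arrays.fill(memory, (byte) 7);
        storeByte(130, (byte) 99);   // miss: line 128..191 is fetched, then byte 130 is updated
        System.out.println("memory[130] before write-back = " + memory[130]); // still 7
        writeBackIfDirty();
        System.out.println("memory[130] after write-back  = " + memory[130]); // now 99
    }
}
```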

Yii2 delete cache by variations

I just want to flush the cache by variations, for example flush only the cache entries with variation id 5.
I didn't find any reference about flush parameters.
Thanks in advance.
There is no way to flush the cache by variation, at least not in any standardized way (the implementation would differ between cache storages, and for some of them it could be impossible). However, you can invalidate caches using TagDependency - after calling TagDependency::invalidate(), the old entry will still be stored in the cache storage, but it will be discarded on the next Cache::get() call.
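As a rough illustration of that pattern (a Java sketch with made-up names, not Yii2's actual PHP implementation): entries remember the tag version they were written under, invalidating a tag just bumps the version, and stale entries are rejected on get() even though they are still physically stored.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of tag-based invalidation (not Yii2's implementation).
// Entries remember the tag version they were stored under; bumping the version
// "invalidates" them without physically removing them from storage.
public class TagInvalidationDemo {
    static class Entry {
        final String value;
        final long tagVersion;
        Entry(String value, long tagVersion) { this.value = value; this.tagVersion = tagVersion; }
    }

    final Map<String, Entry> storage = new HashMap<>();
    final Map<String, Long> tagVersions = new HashMap<>();

    void set(String key, String value, String tag) {
        storage.put(key, new Entry(value, tagVersions.getOrDefault(tag, 0L)));
    }

    String get(String key, String tag) {
        Entry e = storage.get(key);
        if (e == null || e.tagVersion != tagVersions.getOrDefault(tag, 0L)) {
            return null;   // stale entry is still in storage, but it is discarded here
        }
        return e.value;
    }

    // The analogue of TagDependency::invalidate(): bump the tag's version.
    void invalidateTag(String tag) {
        tagVersions.merge(tag, 1L, Long::sum);
    }

    public static void main(String[] args) {
        TagInvalidationDemo cache = new TagInvalidationDemo();
        cache.set("post-5", "hello", "variation-5");
        System.out.println(cache.get("post-5", "variation-5")); // hello
        cache.invalidateTag("variation-5");
        System.out.println(cache.get("post-5", "variation-5")); // null, but still stored
    }
}
```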

What happens if the Guava cache is full and there is no evictable element?

I am using the Google Guava cache with reference-based eviction.
I wonder what happens if the cache is full and none of its elements can be evicted? Is an out-of-memory exception thrown?
Reference-based eviction is essentially no different than Java's standard GC behavior - the GC just ignores the reference's presence in the cache. If an object falls out of scope (everywhere but the cache) it will be evicted from the cache during GC. If all elements of the cache are in scope somewhere else and therefore cannot be GCed you will run into memory problems exactly like you would if you weren't using a cache. You cannot have more data in memory than the JVM is configured to permit. Using a reference-evicting cache doesn't change this.
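For example, with Guava's weakValues() the GC can only reclaim entries whose values are no longer strongly referenced anywhere else; the sketch below shows that kind of setup (whether a particular System.gc() call actually collects anything is up to the JVM).

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Reference-based eviction with Guava: values held only weakly can be reclaimed
// by the GC; values still strongly referenced elsewhere cannot be evicted this way.
public class GuavaWeakValuesDemo {
    public static void main(String[] args) {
        Cache<String, byte[]> cache = CacheBuilder.newBuilder()
                .weakValues()   // entries become GC-eligible once nothing else references the value
                .build();

        byte[] pinned = new byte[1024 * 1024];
        cache.put("pinned", pinned);               // still strongly referenced by 'pinned'
        cache.put("loose", new byte[1024 * 1024]); // only the cache (weakly) references this

        System.gc();   // only a hint; the "loose" value may be collected, "pinned" won't be

        System.out.println("pinned present: " + (cache.getIfPresent("pinned") != null));
        System.out.println("loose present:  " + (cache.getIfPresent("loose") != null));
        // If every value in the cache were still referenced elsewhere (like 'pinned'),
        // nothing could be reclaimed and the heap could fill up, exactly as described above.
    }
}
```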

What happens to the cache on page fault?

In a processor, what happens to the cache when the operating system replaces a page, if there is not enough space to hold all running processes' pages in memory? Does it need to flush the cache on every page replacement?
Thanks in advance for your replies.
When a page is swapped in, the contents are read off the disk and into memory. Typically this is done using DMA. So the real question is, "How is the cache kept coherent with DMA?". You can either have DMA talk to the cache controller on each access, or make the OS manage the cache manually. See http://en.wikipedia.org/wiki/Direct_memory_access#Cache_coherency.
I am not 100% sure of the details, but caches and virtual memory with paging are similar: both are divided into "pages".
Just as only one page needs to be replaced on a page fault, only one line of the cache needs to be replaced when a cache miss occurs. The cache has several "pages" (lines), but only the problematic one is replaced.
There are other factors that may also play a part in such replacements: cache size, cache coherency (write-through vs. write-back), and so on. I hope someone else can give you a more detailed answer.

What does "cache write" mean?

The task is to count "cache writes" during some algorithm's work.
So what is a cache write?
Is it when data is written to the cache, or something else?
It should be in your lecture notes or textbook.
But yes, a cache write is when data is written into the cache. One thing to note is that a cache write often happens precisely because the data you wanted was not already in the cache: the miss causes the line to be brought in, and that fill is itself a write to the cache.
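If the exercise is just to count them, a toy simulation like the sketch below (made up for illustration: a tiny direct-mapped cache with arbitrary sizes and a write-allocate assumption) makes the definition concrete: the counter goes up every time something is written into the cache, whether by a store or by a line fill after a miss.

```java
// Toy direct-mapped cache used only to make "cache write" concrete: the counter
// increments every time something is written into the cache, whether the program
// stores a value or a line is filled after a miss. Sizes and policy are arbitrary.
public class CacheWriteCounter {
    static final int NUM_LINES = 4;
    static final long[] tags = new long[NUM_LINES];
    static final boolean[] valid = new boolean[NUM_LINES];
    static long cacheWrites = 0;

    // A load: on a miss, the line is filled, and that fill is a write to the cache.
    static void load(long blockAddress) {
        int line = (int) (blockAddress % NUM_LINES);
        if (!valid[line] || tags[line] != blockAddress) {
            valid[line] = true;
            tags[line] = blockAddress;
            cacheWrites++;           // filling the line writes into the cache
        }
    }

    // A store: the data itself is written into the cache (write-allocate assumed).
    static void store(long blockAddress) {
        load(blockAddress);          // allocate/fill on a miss
        cacheWrites++;               // the store updates the cached line
    }

    public static void main(String[] args) {
        // "Some algorithm": touch a few blocks and count the cache writes.
        for (long a = 0; a < 8; a++) load(a);
        for (long a = 0; a < 4; a++) store(a);
        System.out.println("cache writes: " + cacheWrites);
    }
}
```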
