Is the BlueField SmartNIC L2 cache also divided into slices?

From Sandy Bridge onward, Intel's last-level cache is divided into slices. On a SmartNIC such as BlueField, the L2 cache is typically shared across all cores. I wonder whether the SmartNIC's L2 cache is also divided into slices?
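(For context, "sliced" means the cache is physically split into per-core banks, with a hash of the physical address selecting which slice holds a given line. A rough sketch of the idea below; the XOR hash is made up for illustration and is not Intel's or BlueField's actual, undocumented function.)

# Illustrative model of a sliced last-level cache: a hash of the physical
# address selects which slice (bank) a cache line lives in. The hash used
# here is a made-up XOR fold, not any vendor's real slice-selection function.
NUM_SLICES = 8          # assumed slice count, for illustration only
LINE_BYTES = 64

def slice_for_address(paddr: int) -> int:
    line_addr = paddr // LINE_BYTES
    h = 0
    while line_addr:
        h ^= line_addr & (NUM_SLICES - 1)   # XOR-fold the line address
        line_addr >>= 3
    return h

# Consecutive lines spread across slices, so bandwidth scales with slice count.
for a in range(0, 8 * LINE_BYTES, LINE_BYTES):
    print(hex(a), "-> slice", slice_for_address(a))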

Related

Is there such a thing as a semi-shared cache?

I'm doing a little research on the caching hierarchy and have come across the concept of shared and private caches. I can see examples where caches are private to a specific core (at the levels closer to the core) and where the cache is shared amongst all of the cores.
Are there any examples of a cache being shared across a certain subset of cores at an intermediate hierarchy level, and if not, why not? My impression is that this would act as a middle ground in the trade-off between latency and hit rate, although I'm unable to find an example of such a structure.
Sharing an intermediate level cache among multiple cores — but fewer than share the last level cache — is not a common design point. There are, however, a few designs that share the L2 cache among as many cores as share the L3 cache.
POWER4 and POWER5 both shared L2 cache among two cores, with L3 also shared among two cores. Since L3 cache data was stored off-chip (tags were on-chip) and each chip only had two cores, this is more similar to just sharing last level cache. Total L2 capacity was strongly constrained by chip size and L3 (having off-chip data) had somewhat high latency, so sharing to increase effective capacity was more attractive than for more recent designs with on-chip L3.
SPARC M7 is a more interesting example. M7 had a 256 KiB L2 data cache shared among two cores and an L2 instruction cache shared among four cores with L3 shared among four cores (the documentation I have seen is not entirely clear that L3 is not unified, but the evidence generally points to L3 being private to each cluster of four cores). Since data L2 is shared only among two cores, this might count as sharing L2 among fewer cores than L3 even though instruction L2 is shared with the same number of cores as L3.
Since M7 cores are 8-way threaded (as well as having only two-wide, out-of-order execution), L2 latency is less important (both thread-level parallelism and instruction level parallelism extracted from out-of-order execution can hide latency and a narrower core reduces the execution potential loss from a given number of stall cycles). Since the processor targets commercial workloads with high thread-level parallelism and low instruction-level parallelism, increasing the core and thread count were primary goals; sharing L2 caches can exploit common instruction and data use — the former is especially significant, but data sharing is not rare — facilitating lower total capacity, leaving room for more cores.
SPARC M8 was similar, but the L2 data cache was made private and the issue width doubled to four-wide. The increase in issue width increases the importance of L2 latency, especially with modest sized (16 KiB) L1 caches. Instruction cache is somewhat more latency tolerant given an ability to fetch ahead in an instruction stream.
Some considerations of the tradeoffs of intermediate level cache sharing
Increasing the size of an L2 cache via sharing would reduce the capacity miss rate when capacity demand is imbalanced (not only when one core is inactive but even when different phases of the same program are active on different cores), but sharing L2 among multiple cores increases conflict misses. Increasing associativity can eliminate this effect at the cost of higher energy per access.
When two cores access the same memory locations within a shortish period of time, a shared cache can increase effective capacity by reducing replication as well as potentially improving replacement decisions and providing limited prefetch. Sharing can also reduce cache block ping-pong if the writer and reader share L2 cache; however, explicitly taking advantage of such increases the complexity of software core allocation. If sharing of a frequently written value is unavoidably common, even a random reduction in ping-ponging may be attractive, but the benefit diminishes rapidly as the number of cores involved increases.
When L2 is an intermediate level cache, access latency has significant importance since capacity misses from a smaller L2 will generally hit in L3. Doubling the capacity will increase access latency by more than 40% (latency is roughly proportional to the square root of capacity). Arbitration among multiple requesters also tends to increase latency. (A non-uniform cache architecture, where different cache blocks have different latencies, can compensate for such. E.g., in the context of sharing among two cores, a quarter of the capacity could be located closest to each core and the remaining half at an intermediate distance from both cores. However, NUCA introduces complexity in allocation.)
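As a quick sanity check of that square-root rule of thumb (the baseline capacity and cycle count below are arbitrary assumptions; only the scaling matters):

from math import sqrt

# Rule of thumb from the text: access latency scales roughly with sqrt(capacity).
# Baseline numbers are illustrative assumptions, not measurements of any real core.
base_capacity_kib = 256
base_latency_cycles = 12.0

for factor in (1, 2, 4):
    latency = base_latency_cycles * sqrt(factor)
    print(f"{base_capacity_kib * factor:>4} KiB -> ~{latency:.1f} cycles "
          f"(+{(sqrt(factor) - 1) * 100:.0f}%)")
# Doubling capacity costs ~41% more latency; quadrupling costs ~100%.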
While increasing L2 capacity would use area that could otherwise be used by L3 cache (or more cores or other features), the size of L3 slices is typically so much larger than L2 capacity that this effect is not a primary consideration.
Sharing L2 among two cores also means that the provided bandwidth must be suitable for two highly active cores. While banking can be used to facilitate such (and extra bandwidth might be exploitable by a single active core), such increased bandwidth is not entirely free.
Sharing L2 would also motivate increasing the complexity of cache allocation and replacement. One would prefer to avoid one core wasting capacity (or even associativity). Such moderating mechanisms are sometimes provided for last level cache (e.g., Intel's Cache Allocation Technology), so this is not a hard barrier. Some of the moderating mechanisms could also facilitate better replacement in general, and L2 mechanisms could exploit metadata associated with L3 cache (reducing the tagging overhead for metadata tracking) to adjust behavior.
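To make the way-partitioning idea concrete, here is a toy model of mask-based allocation control in the spirit of Intel's Cache Allocation Technology; it is not the real CAT MSR interface, just the allocation policy it implies, with made-up masks:

# Toy model of way-mask based cache partitioning (the idea behind Intel CAT):
# each core is restricted to allocating into a subset of the ways of a set,
# so a streaming core cannot evict everything a latency-sensitive core cached.
WAYS = 8
way_mask = {              # assumed assignment, for illustration only
    "core0": 0b11110000,  # core0 may allocate into ways 4-7
    "core1": 0b00001111,  # core1 may allocate into ways 0-3
}

def allowed_ways(core: str) -> list[int]:
    mask = way_mask[core]
    return [w for w in range(WAYS) if mask & (1 << w)]

def pick_victim(core: str, lru_order: list[int]) -> int:
    """Evict the least-recently-used way among those the core may allocate into."""
    permitted = set(allowed_ways(core))
    return next(w for w in lru_order if w in permitted)

print(pick_victim("core0", lru_order=[2, 5, 7, 0, 4, 6, 1, 3]))  # -> 5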
Sharing L2 cache also introduces complexity with respect to frequency adjustment. If one core supports a lower frequency, the interface between the core and L2 becomes more complex, increasing access latency. (In theory, a NUCA design like that mentioned above could have a small close portion running at the local frequency and only pay the clock boundary crossing penalty when accessing the more distant portion.)
Power gating is also simplified when L2 cache is dedicated to a single core. Rather than having three power domains (two cores and L2), a private L2 can be turned off with its core so only two power domains are needed. (Note that adding power domains is not extremely expensive and has been proposed for reducing power by dynamically reducing cache capacity.)
A shared L2 cache can also provide a convenient merging point for the on-chip network, reducing the number of nodes in the broader network. (This merging could alternatively be done behind the L2 cache, providing lower latency and potentially higher bandwidth communication between two cores while also providing isolation.)
Conclusion
Fundamentally, sharing increases utilization — which is good for throughput (roughly speaking, efficiency) but bad for latency (local performance) — but hinders optimization by specialization. For L2 caches with a backing L3 cache, the specialization benefit (lower latency) tends to outweigh the utilization benefit for general designs (which generally trade throughput and efficiency for lower latency). The on-chip L3 cache reduces the cost of L2 capacity misses, so a higher L2 miss rate with a faster L2 hit time can reduce average memory access time.
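That last point is easy to see with a back-of-the-envelope average-memory-access-time comparison; the hit rates and latencies below are illustrative assumptions, not measurements:

# Average memory access time (AMAT) for two hypothetical L2 designs backed by
# the same L3. All latencies (in cycles) and hit rates are assumed values.
def amat(l2_hit_cycles, l2_hit_rate, l3_cycles):
    return l2_hit_cycles + (1 - l2_hit_rate) * l3_cycles

small_fast = amat(l2_hit_cycles=12, l2_hit_rate=0.80, l3_cycles=40)
large_slow = amat(l2_hit_cycles=18, l2_hit_rate=0.88, l3_cycles=40)

print(f"small, fast, private L2: {small_fast:.1f} cycles")  # 12 + 0.20*40 = 20.0
print(f"large, slow, shared  L2: {large_slow:.1f} cycles")  # 18 + 0.12*40 = 22.8
# The higher miss rate of the small L2 is more than paid for by its lower hit latency.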
At the cost of design complexity and some overheads, sharing can be made more flexible or the costs of sharing can be reduced. Increasing complexity increases development risk and marketing risk (not just time to market; feature complexity increases the difficulty of the buyer's choice, yet marketing simplifications can seem deceptive). For L2 caches, the costs of more nuanced sharing seem generally not to have been considered worth the potential benefits.

Why do L1 and L2 Cache waste space saving the same data?

I don't understand why the L1 and L2 caches store the same data.
For example, let's say we want to access Memory[x] for the first time. Memory[x] is brought into the L2 cache first, then the same data is copied into the L1 cache, from which the CPU registers are loaded.
But now we have the same data stored in both the L1 and L2 caches. Isn't that a problem, or at least a waste of storage space?
I edited your question to ask about why CPUs waste cache space storing the same data in multiple levels of cache, because I think that's what you're asking.
Not all caches are like that. The Cache Inclusion Policy for an outer cache can be Inclusive, Exclusive, or Not-Inclusive / Not-Exclusive (NINE).
NINE is the "normal" case, not maintaining either special property, but L2 does tend to have copies of most lines in L1 for the reason you describe in the question. If L2 is less associative than L1 (like in Skylake-client) and the access pattern creates a lot of conflict misses in L2 (unlikely), you could get a decent amount of data that's only in L1. And maybe in other ways, e.g. via hardware prefetch, or from L2 evictions of data due to code-fetch, because real CPUs use split L1i / L1d caches.
For the outer caches to be useful, you need some way for data to enter them so you can get an L2 hit sometime after the line was evicted from the smaller L1. Having inner caches like L1d fetch through outer caches gives you that for free, and has some advantages. You can put hardware prefetch logic in an outer or middle level of cache, which doesn't have to be as high-performance as L1. (e.g. Intel CPUs have most of their prefetch logic in the private per-core L2, but also some prefetch logic in L1d).
The other main option is for the outer cache to be a victim cache, i.e. lines enter it only when they're evicted from L1. So you can loop over an array of L1 + L2 size and probably still get L2 hits. The extra logic to implement this is useful if you want a relatively large L1 compared to L2, so the total size is more than a little larger than L2 alone.
With an exclusive L2, an L1 miss / L2 hit can just exchange lines between L1d and L2 if L1d needs to evict something from that set.
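Here is a toy model of that victim-cache / swap behaviour, with tiny fully-associative LRU caches standing in for the real set-associative ones (sizes and policies are purely illustrative):

from collections import OrderedDict

# Toy model of an exclusive (victim) L2: lines enter L2 only when evicted from
# L1, and an L2 hit moves the line back into L1, swapping with L1's victim.
class LruCache:
    def __init__(self, capacity):
        self.capacity, self.lines = capacity, OrderedDict()
    def hit(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)
            return True
        return False
    def insert(self, addr):
        """Insert a line; return the evicted victim address, if any."""
        self.lines[addr] = True
        self.lines.move_to_end(addr)
        if len(self.lines) > self.capacity:
            return self.lines.popitem(last=False)[0]
        return None

l1, l2 = LruCache(2), LruCache(4)   # toy sizes, not any real CPU's

def access(addr):
    if l1.hit(addr):
        return "L1 hit"
    if addr in l2.lines:
        del l2.lines[addr]          # line leaves L2 (exclusion)
        victim = l1.insert(addr)
        if victim is not None:
            l2.insert(victim)       # swap: L1's victim drops into L2
        return "L2 hit (swapped into L1)"
    victim = l1.insert(addr)        # miss: fill straight into L1 only
    if victim is not None:
        l2.insert(victim)           # evicted L1 line becomes an L2 victim
    return "miss"

for a in [0, 1, 2, 0, 3, 1]:
    print(a, access(a))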
Some CPUs do in fact use an L2 that's exclusive of L1d (e.g. AMD K10 / Barcelona). Both of those caches are private per-core caches, not shared, so it's like the simple L1 / L2 situation for a single core CPU you're talking about.
Things get more complicated with multi-core CPUs and shared caches!
Barcelona's shared L3 cache is also mostly exclusive of the inner caches, but not strictly. David Kanter explains:
First, it is mostly exclusive, but not entirely so. When a line is sent from the L3 cache to an L1D cache, if the cache line is shared, or is likely to be shared, then it will remain in the L3 – leading to duplication which would never happen in a totally exclusive hierarchy. A fetched cache line is likely to be shared if it contains code, or if the data has been previously shared (sharing history is tracked). Second, the eviction policy for the L3 has been changed. In the K8, when a cache line is brought in from memory, a pseudo-least recently used algorithm would evict the oldest line in the cache. However, in Barcelona’s L3, the replacement algorithm has been changed to also take into account sharing, and it prefers evicting unshared lines.
AMD's successor to K10/Barcelona is Bulldozer. https://www.realworldtech.com/bulldozer/3/ points out that Bulldozer's shared L3 is also a victim cache, and thus mostly exclusive of L2. It's probably like Barcelona's L3.
But Bulldozer's L1d is a small write-through cache with an even smaller (4k) write-combining buffer, so it's mostly inclusive of L2. Bulldozer's write-through L1d is generally considered a mistake in the CPU design world, and Ryzen went back to a normal 32kiB write-back L1d like Intel has been using all along (with great results). A pair of weak integer cores form a "cluster" that shares an FPU/SIMD unit, and shares a big L2 that's "mostly inclusive". (i.e. probably a standard NINE). This cluster thing is Bulldozer's alternative to SMT / Hyperthreading, which AMD also ditched for Ryzen in favour of normal SMT with a massively wide out-of-order core.
Ryzen also has some exclusivity between core clusters (CCX), apparently, but I haven't looked into the details.
I've been talking about AMD first because they have used exclusive caches in recent designs, and seem to have a preference for victim caches. Intel hasn't tried as many different things, because they hit on a good design with Nehalem and stuck with it until Skylake-AVX512.
Intel Nehalem and later use a large shared tag-inclusive L3 cache. For lines that are modified / exclusive (MESI) in a private per-core L1d or L2 (NINE) cache, the L3 tags still indicate which cores (might) have a copy of a line, so requests from one core for exclusive access to a line don't have to be broadcast to all cores, only to cores that might still have it cached. (i.e. it's a snoop filter for coherency traffic, which lets CPUs scale up to dozens of cores per chip without flooding each other with requests when they're not even sharing memory.)
i.e. L3 tags hold info about where a line is (or might be) cached in an L2 or L1 somewhere, so it knows where to send invalidation messages instead of broadcasting messages from every core to all other cores.
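Conceptually, that bookkeeping looks something like the sketch below: a per-line set of possible sharers, consulted on requests for ownership. This is an illustrative model, not Intel's actual L3/directory implementation:

from collections import defaultdict

# Conceptual snoop-filter model: for each line address, track which cores'
# private caches might hold a copy, so a request for exclusive ownership only
# sends invalidations to those cores instead of broadcasting to every core.
NUM_CORES = 8
presence = defaultdict(set)   # line address -> set of cores that may cache it

def read(core, addr):
    presence[addr].add(core)                 # core now (possibly) holds a copy

def request_exclusive(core, addr):
    sharers = presence[addr] - {core}
    for other in sharers:                    # targeted invalidations only
        print(f"invalidate line {addr:#x} in core {other}")
    presence[addr] = {core}

read(0, 0x1000)
read(3, 0x1000)
request_exclusive(5, 0x1000)   # invalidates only cores 0 and 3, not all 8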
With Skylake-X (Skylake-server / SKX / SKL-SP), Intel dropped that and made L3 NINE and only a bit bigger than the total per-core L2 size. But there's still a snoop filter, it just doesn't have data. I don't know what Intel's planning to do for future (dual?)/quad/hex-core laptop / desktop chips (e.g. Cannonlake / Icelake). That's small enough that their classic ring bus would still be great, so they could keep doing that in mobile/desktop parts and only use a mesh in high-end / server parts, like they are in Skylake.
Realworldtech forum discussions of inclusive vs. exclusive vs. non-inclusive:
CPU architecture experts spend time discussing what makes for a good design on that forum. While searching for stuff about exclusive caches, I found this thread, where some disadvantages of strictly inclusive last-level caches are presented. e.g. they force private per-core L2 caches to be small (otherwise you waste too much space with duplication between L3 and L2).
Also, L2 caches filter requests to L3, so when its LRU algorithm needs to drop a line, the one it's seen least-recently can easily be one that stays permanently hot in L2 / L1 of a core. But when an inclusive L3 decides to drop a line, it has to evict it from all inner caches that have it, too!
David Kanter replied with an interesting list of advantages for inclusive outer caches. I think he's comparing to exclusive caches, rather than to NINE. e.g. his point about data sharing being easier only applies vs. exclusive caches, where I think he's suggesting that a strictly exclusive cache hierarchy might cause evictions when multiple cores want the same line even in a shared/read-only manner.

How multilevel CPU caches having the same cache line size work?

Note: I'm not sure if StackOverflow is the correct place for that question or if there is a more suitable StackExchange sub for this
I've read in a book that for multilevel CPU caches, the cache line size increases with each level's total memory size. I can totally understand how this works (or at least I think so) for quite simple architectures. Then I came across this question. The question is: how can cache memories with the same cache line size cooperate?
This is how I perceive the way cache memories with different cache line sizes work. For simplicity, let's suppose there are no separate caches for data and instructions, and we only have L1 and L2 caches (no L3 or L4).
If L1 has a cache line size of 64 bytes and L2 of 128 bytes, when we have a cache miss on L2 and we need to fetch the desired byte or word from main memory, we also bring in its closest bytes or words in order to fill the 128 bytes of the L2 cache line. Then, because of the locality of the references to memory locations produced by the processor, we have a higher probability of getting a hit on L2 when missing on L1. But if we had equal cache line sizes, this of course wouldn't happen with the previous algorithm. Can you explain some sort of simple algorithm or implementation of how modern CPUs take advantage of caches having the same line size?
Thanks in advance.
I've read in a book that for multilevel CPU caches, the cache line size increases with each level's total memory size.
That's not true for most CPUs. Usually the line size is the same in all caches, but the total size increases. Often also the associativity, but usually not by as much as the total size, so the number of sets typically increases.
The point of multi-level caches is to get low latency and large size without needing a single cache that's both large and low latency (because that's physically impossible).
HW prefetch into L2 and/or L1 is what makes sequential read work well, not larger line sizes in outer levels of cache. (And in multi-core CPUs, private L1/L2 + shared L3 provide private latency and bandwidth filters before the memory workload hits the shared domain, but then you have L3 as a coherency backstop instead of hitting DRAM for data that's shared between cores.)
Having different line sizes in different caches is more complicated, especially in a multi-core system where caches have to maintain coherency with each other using MESI. Transferring whole cache lines between caches works well.
But if L1D lines are 64B and private L2 / shared L3 lines are 128B, then a load on one core might force the L2 cache to request both halves separately in case separate cores had each of the two halves of the 128B line modified. Sounds really complicated, and puts more logic into the outer-level cache.
(Paul Clayton's answer on the question you linked points out that a possible solution to that problem is separate validity bits for the two halves of a larger cache line, or even separate MESI coherency state. But still sharing the same tag, so if they are both valid then they have to be caching two halves of the same 128B block.)
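A minimal sketch of that sectored-line idea: one tag covering a 128B block, with independent per-64B-half state (the state names and sizes here are just illustrative):

from dataclasses import dataclass, field

# Sketch of a "sectored" 128B outer-cache line: one tag, but separate coherency
# state per 64B half, so each half can be fetched/invalidated independently.
@dataclass
class SectoredLine:
    tag: int
    sector_state: list = field(default_factory=lambda: ["Invalid", "Invalid"])

    def fill_sector(self, which: int, state: str = "Shared"):
        self.sector_state[which] = state

    def hit(self, addr_offset: int) -> bool:
        return self.sector_state[addr_offset // 64] != "Invalid"

line = SectoredLine(tag=0x12345)
line.fill_sector(0, "Modified")        # lower 64B half owned by this cache
print(line.hit(0x10), line.hit(0x50))  # True for first half, False for second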

Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?

Why is the size of L1 cache smaller than that of the L2 cache in most of the processors ?
L1 is very tightly coupled to the CPU core, and is accessed on every memory access (very frequent). Thus, it needs to return the data really fast (usually within a few clock cycles). Latency and throughput (bandwidth) are both performance-critical for L1 data cache. (e.g. four cycle latency, and supporting two reads and one write by the CPU core every clock cycle). It needs lots of read/write ports to support this high access bandwidth. Building a large cache with these properties is impossible. Thus, designers keep it small, e.g. 32KB in most processors today.
L2 is accessed only on L1 misses, so accesses are less frequent (usually 1/20th of the L1). Thus, L2 can have higher latency (e.g. from 10 to 20 cycles) and have fewer ports. This allows designers to make it bigger.
L1 and L2 play very different roles. If L1 is made bigger, it will increase L1 access latency which will drastically reduce performance because it will make all dependent loads slower and harder for out-of-order execution to hide. L1 size is barely debatable.
If we removed L2, L1 misses would have to go to the next level, say memory. This means that a lot of accesses would be going to memory, which would imply we need more memory bandwidth, which is already a bottleneck. Thus, keeping the L2 around is favorable.
Experts often refer to L1 as a latency filter (as it makes the common case of L1 hits faster) and L2 as a bandwidth filter as it reduces memory bandwidth usage.
Note: I have assumed a 2-level cache hierarchy in my argument to make it simpler. In many of today's multicore chips, there's an L3 cache shared between all the cores, while each core has its own private L1 and maybe L2. In these chips, the shared last-level cache (L3) plays the role of memory bandwidth filter. L2 plays the role of on-chip bandwidth filter, i.e. it reduces access to the on-chip interconnect and the L3. This allows designers to use a lower-bandwidth interconnect like a ring, and a slow single-port L3, which allows them to make L3 bigger.
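The filtering effect compounds multiplicatively, which is easy to see with some assumed hit rates (the numbers below are made up; only the shape of the result matters):

# Each cache level multiplies the request stream by its miss rate before the
# next level sees it. Hit rates below are illustrative assumptions.
requests_per_sec = 1_000_000_000   # core-issued loads/stores
l1_hit, l2_hit, l3_hit = 0.95, 0.80, 0.60

to_l2 = requests_per_sec * (1 - l1_hit)
to_l3 = to_l2 * (1 - l2_hit)
to_dram = to_l3 * (1 - l3_hit)

print(f"to L2:   {to_l2:,.0f}/s")     # 50,000,000
print(f"to L3:   {to_l3:,.0f}/s")     # 10,000,000
print(f"to DRAM: {to_dram:,.0f}/s")   # 4,000,000 -- DRAM sees 0.4% of requests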
Perhaps worth mentioning that the number of ports is a very important design point because it affects how much chip area the cache consumes. Ports add wires to the cache which consumes a lot of chip area and power.
There are different reasons for that.
L2 exists in the system to speed up the case where there is an L1 cache miss. If the size of L1 were the same as or bigger than the size of L2, then L2 could not accommodate more cache lines than L1, and would not be able to deal with L1 cache misses. From the design/cost perspective, L1 cache is bound to the processor and faster than L2. The whole idea of caches is that you speed up access to the slower hardware by adding intermediate hardware that is higher-performing (and more expensive) than the slowest hardware, yet cheaper than the fastest hardware you have. Even if you decided to double the L1 cache, you would also increase L2, to speed up L1-cache misses.
So why is there L2 cache at all? Well, L1 cache is usually higher-performing and more expensive to build, and it is bound to a single core. This means that increasing the L1 size by a fixed quantity will have that cost multiplied by the number of cores in the processor. L2 is usually shared by different cores (depending on the architecture it can be shared across a couple of cores or all cores in the processor), so the cost of increasing L2 would be smaller even if the price of L1 and L2 were the same (which it is not).
@Aater's answer explains some of the basics. I'll add some more details + examples of the real cache organization on Intel Haswell and AMD Piledriver, with latencies and other properties, not just size.
For some details on IvyBridge, see my answer on "How can cache be that fast?", with some discussion of the overall load-use latency including address-calculation time, and widths of the data busses between different levels of cache.
L1 needs to be very fast (latency and throughput), even if that means a limited hit-rate. L1d also needs to support single-byte stores on almost all architectures, and (in some designs) unaligned accesses. This makes it hard to use ECC (error correction codes) to protect the data, and in fact some L1d designs (Intel) just use parity, with better ECC only in outer levels of cache (L2/L3) where the ECC can be done on larger chunks for lower overhead.
It's impossible to design a single level of cache that could provide the low average request latency (averaged over all hits and misses) of a modern multi-level cache. Since modern systems have multiple very hungry cores all sharing a connection to the same relatively-high latency DRAM, this is essential.
Every core needs its own private L1 for speed, but at least the last level of cache is typically shared, so a multi-threaded program that reads the same data from multiple threads doesn't have to go to DRAM for it on each core. (And to act as a backstop for data written by one core and read by another). This requires at least two levels of cache for a sane multi-core system, and is part of the motivation for more than 2 levels in current designs. Modern multi-core x86 CPUs have a fast 2-level cache in each core, and a larger slower cache shared by all cores.
L1 hit-rate is still very important, so L1 caches are not as small / simple / fast as they could be, because that would reduce hit rates. Achieving the same overall performance would thus require higher levels of cache to be faster. If higher levels handle more traffic, their latency is a bigger component of the average latency, and they bottleneck on their throughput more often (or need higher throughput).
High throughput often means being able to handle multiple reads and writes every cycle, i.e. multiple ports. This takes more area and power for the same capacity as a lower-throughput cache, so that's another reason for L1 to stay small.
L1 also uses speed tricks that wouldn't work if it was larger. i.e. most designs use Virtually-Indexed, Physically Tagged (VIPT) L1, but with all the index bits coming from below the page offset so they behave like PIPT (because the low bits of a virtual address are the same as in the physical address). This avoids synonyms / homonyms (false hits or the same data being in the cache twice, and see Paul Clayton's detailed answer on the linked question), but still lets part of the hit/miss check happen in parallel with the TLB lookup. A VIVT cache doesn't have to wait for the TLB, but it has to be invalidated on every change to the page tables.
On x86 (which uses 4kiB virtual memory pages), 32kiB 8-way associative L1 caches are common in modern designs. The 8 tags can be fetched based on the low 12 bits of the virtual address, because those bits are the same in virtual and physical addresses (they're below the page offset for 4kiB pages). This speed-hack for L1 caches only works if they're small enough and associative enough that the index doesn't depend on the TLB result. 32kiB / 64B lines / 8-way associativity = 64 (2^6) sets. So the lowest 6 bits of an address select bytes within a line, and the next 6 bits index a set of 8 tags. This set of 8 tags is fetched in parallel with the TLB lookup, so the tags can be checked in parallel against the physical-page selection bits of the TLB result to determine which (if any) of the 8 ways of the cache hold the data. (Minimum associativity for a PIPT L1 cache to also be VIPT, accessing a set without translating the index to physical)
Making a larger L1 cache would mean it had to either wait for the TLB result before it could even start fetching tags and loading them into the parallel comparators, or it would have to increase in associativity to keep log2(sets) + log2(line_size) <= 12. (More associativity means more ways per set => fewer total sets = fewer index bits). So e.g. a 64kiB cache would need to be 16-way associative: still 64 sets, but each set has twice as many ways. This makes increasing L1 size beyond the current size prohibitively expensive in terms of power, and probably even latency.
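Spelling out that arithmetic with the numbers quoted above (the small helper below is just an illustration of the constraint):

from math import log2

# VIPT-that-behaves-like-PIPT requires index + offset bits to fit within the
# page offset: log2(sets) + log2(line_size) <= log2(page_size).
def check(size_kib, ways, line=64, page=4096):
    sets = size_kib * 1024 // (ways * line)
    index_plus_offset = log2(sets) + log2(line)
    ok = index_plus_offset <= log2(page)
    print(f"{size_kib:>3} KiB {ways:>2}-way: {sets} sets, "
          f"{index_plus_offset:.0f} index+offset bits -> "
          f"{'OK' if ok else 'index depends on TLB result'}")

check(32, 8)     # 64 sets, 12 bits: works (the common 32 KiB 8-way L1)
check(64, 8)     # 128 sets, 13 bits: index would depend on translated bits
check(64, 16)    # 64 sets, 12 bits: works, but 16-way is expensive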
Spending more of your power budget on L1D cache logic would leave less power available for out-of-order execution, decoding, and of course L2 cache and so on. Getting the whole core to run at 4GHz and sustain ~4 instructions per clock (on high-ILP code) without melting requires a balanced design. See this article: Modern Microprocessors: A 90-Minute Guide!.
The larger a cache is, the more you lose by flushing it, so a large VIVT L1 cache would be worse than the current VIPT-that-works-like-PIPT. And a larger but higher-latency L1D would probably also be worse.
According to @PaulClayton, L1 caches often fetch all the data in a set in parallel with the tags, so it's there ready to be selected once the right tag is detected. The power cost of doing this scales with associativity, so a large highly-associative L1 would be really bad for power-use as well as die-area (and latency). (Compared to L2 and L3, it wouldn't be a lot of area, but physical proximity is important for latency. Speed-of-light propagation delays matter when clock cycles are 1/4 of a nanosecond.)
Slower caches (like L3) can run at a lower voltage / clock speed to make less heat. They can even use different arrangements of transistors for each storage cell, to make memory that's more optimized for power than for high speed.
There are a lot of power-use related reasons for multi-level caches. Power / heat is one of the most important constraints in modern CPU design, because cooling a tiny chip is hard. Everything is a tradeoff between speed and power (and/or die area). Also, many CPUs are powered by batteries or are in data-centres that need extra cooling.
L1 is almost always split into separate instruction and data caches. Instead of an extra read port in a unified L1 to support code-fetch, we can have a separate L1I cache tied to a separate I-TLB. (Modern CPUs often have an L2-TLB, which is a second level of cache for translations that's shared by the L1 I-TLB and D-TLB, NOT a TLB used by the regular L2 cache). This gives us 64kiB total of L1 cache, statically partitioned into code and data caches, for much cheaper (and probably lower latency) than a monster 64k L1 unified cache with the same total throughput. Since there is usually very little overlap between code and data, this is a big win.
L1I can be placed physically close to the code-fetch logic, while L1D can be physically close to the load/store units. Speed-of-light transmission-line delays are a big deal when a clock cycle lasts only 1/3rd of a nanosecond. Routing the wiring is also a big deal: e.g. Intel Broadwell has 13 layers of copper above the silicon.
Split L1 helps a lot with speed, but unified L2 is the best choice.
Some workloads have very small code but touch lots of data. It makes sense for higher-level caches to be unified to adapt to different workloads, instead of statically partitioning into code vs. data. (e.g. almost all of L2 will be caching data, not code, while running a big matrix multiply, vs. having a lot of code hot while running a bloated C++ program, or even an efficient implementation of a complicated algorithm (e.g. running gcc)). Code can be copied around as data, not always just loaded from disk into memory with DMA.
Caches also need logic to track outstanding misses (since out-of-order execution means that new requests can keep being generated before the first miss is resolved). Having many misses outstanding means you overlap the latency of the misses, achieving higher throughput. Duplicating the logic and/or statically partitioning between code and data in L2 would not be good.
Larger lower-traffic caches are also a good place to put pre-fetching logic. Hardware pre-fetching enables good performance for things like looping over an array without every piece of code needing software-prefetch instructions. (SW prefetch was important for a while, but HW prefetchers are smarter than they used to be, so that advice in Ulrich Drepper's otherwise excellent What Every Programmer Should Know About Memory is out-of-date for many use cases.)
Low-traffic higher level caches can afford the latency to do clever things like use an adaptive replacement policy instead of the usual LRU. Intel IvyBridge and later CPUs do this, to resist access patterns that get no cache hits for a working set just slightly too large to fit in cache. (e.g. looping over some data in the same direction twice means it probably gets evicted just before it would be reused.)
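Here is a toy simulation of that failure mode: a working set of 10 lines cycled through an 8-line cache. Plain LRU thrashes, while a simple thrash-resistant insertion policy (inserting new lines at the LRU end — one known alternative, not necessarily Intel's actual adaptive policy) keeps most of the set resident:

# Toy demo: a working set slightly larger than a fully associative 8-line cache.
def simulate(capacity, trace, insert_at_mru):
    cache, hits = [], 0          # cache[0] is MRU, cache[-1] is LRU
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.remove(addr)
            cache.insert(0, addr)            # promote to MRU on hit
        else:
            if len(cache) >= capacity:
                cache.pop()                  # evict LRU
            if insert_at_mru:
                cache.insert(0, addr)        # classic LRU insertion
            else:
                cache.append(addr)           # insert new lines at the LRU end
    return hits / len(trace)

trace = list(range(10)) * 100                # working set of 10 lines, cycled
print("LRU insertion:    ", simulate(8, trace, insert_at_mru=True))   # ~0.0
print("LRU-end insertion:", simulate(8, trace, insert_at_mru=False))  # ~0.7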
A real example: Intel Haswell. Sources: David Kanter's microarchitecture analysis and Agner Fog's testing results (microarch pdf). See also Intel's optimization manuals (links in the x86 tag wiki).
Also, I wrote up a separate answer on: Which cache mapping technique is used in intel core i7 processor?
Modern Intel designs use a large inclusive L3 cache shared by all cores as a backstop for cache-coherence traffic. It's physically distributed between the cores, with 2048 sets * 16-way (2MiB) per core (with an adaptive replacement policy in IvyBridge and later).
The lower levels of cache are per-core.
L1: per-core 32kiB each instruction and data (split), 8-way associative. Latency = 4 cycles. At least 2 read ports + 1 write port. (Maybe even more ports to handle traffic between L1 and L2, or maybe receiving a cache line from L2 conflicts with retiring a store.) Can track 10 outstanding cache misses (10 fill buffers).
L2: unified per-core 256kiB, 8-way associative. Latency = 11 or 12 cycles. Read bandwidth: 64 bytes / cycle. The main prefetching logic prefetches into L2. Can track 16 outstanding misses. Can supply 64B per cycle to the L1I or L1D. Actual port counts unknown.
L3: unified, shared (by all cores) 8MiB (for a quad-core i7). Inclusive (of all the L2 and L1 per-core caches). 12 or 16 way associative. Latency = 34 cycles. Acts as a backstop for cache-coherency, so modified shared data doesn't have to go out to main memory and back.
Another real example: AMD Piledriver: (e.g. Opteron and desktop FX CPUs.) Cache-line size is still 64B, like Intel and AMD have used for several years now. Text mostly copied from Agner Fog's microarch pdf, with additional info from some slides I found, and more details on the write-through L1 + 4k write-combining cache on Agner's blog, with a comment that only L1 is WT, not L2.
L1I: 64 kB, 2-way, shared between a pair of cores (AMD's version of SMT has more static partitioning than Hyperthreading, and they call each one a core. Each pair shares a vector / FPU unit, and other pipeline resources.)
L1D: 16 kB, 4-way, per core. Latency = 3-4 cycles. (Notice that the index + offset bits still fit within the 12-bit page offset, so the usual VIPT trick works.) (Throughput: two operations per clock, up to one of them being a store.) Policy = Write-Through, with a 4k write-combining cache.
L2: 2 MB, 16-way, shared between two cores. Latency = 20 clocks. Read throughput 1 per 4 clocks. Write throughput 1 per 12 clocks.
L3: 0 - 8 MB, 64-way, shared between all cores. Latency = 87 clocks. Read throughput 1 per 15 clocks. Write throughput 1 per 21 clocks.
Agner Fog reports that with both cores of a pair active, L1 throughput is lower than when the other half of a pair is idle. It's not known what's going on, since the L1 caches are supposed to be separate for each core.
The other answers here give specific and technical reasons why L1 and L2 are sized as they are, and while many of them are motivating considerations for particular architectures, they aren't really necessary: the underlying architectural pressure leading to increasing (private) cache sizes as you move away from the core is fairly universal and is the same as the reasoning for multiple caches in the first place.
The three basic facts are:
The memory accesses for most applications exhibit a high degree of temporal locality, with a non-uniform distribution.
Across a large variety of processes and designs, cache size and cache speed (latency and throughput) can be traded off against each other [1].
Each distinct level of cache involves incremental design and performance cost.
So at a basic level, you might be able to, say, double the size of the cache, but incur a latency penalty of 1.4x compared to the smaller cache.
So it becomes an optimization problem: how many caches should you have and how large should they be? If memory access were totally uniform within the working set size, you'd probably end up with a single fairly large cache, or no cache at all. However, access is strongly non-uniform, so a small-and-fast cache can capture a large number of accesses, disproportionate to its size.
If fact 2 didn't exist, you'd just create a very big, very fast L1 cache within the other constraints of your chip and not need any other cache levels.
If fact 3 didn't exist, you'd end up with a huge number of fine-grained "caches", faster and smaller at the center, and slower and larger outside, or perhaps a single cache with variable access times: faster for the parts closest to the core. In practice, rule 3 means that each level of cache has an additional cost, so you usually end up with a few quantized levels of cache [2].
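A toy version of that optimization problem, with an assumed diminishing-returns locality curve and the square-root latency model (all constants are made up for illustration):

from math import sqrt

# Toy optimization from the three facts above: hit rate grows sub-linearly
# with capacity (non-uniform locality), while latency grows ~sqrt(capacity).
def hit_rate(size_kib):          # assumed locality curve (diminishing returns)
    return 1 - 1 / sqrt(size_kib)

def latency(size_kib):           # assumed latency model, in cycles
    return 2 * sqrt(size_kib / 32)

DRAM = 200                       # assumed miss penalty, in cycles

def amat_one_level(size):
    return latency(size) + (1 - hit_rate(size)) * DRAM

def amat_two_level(l1, l2):
    l2_amat = latency(l2) + (1 - hit_rate(l2)) * DRAM
    return latency(l1) + (1 - hit_rate(l1)) * l2_amat

print(f"single 4 MiB cache:   {amat_one_level(4096):.1f} cycles")
print(f"32 KiB L1 + 4 MiB L2: {amat_two_level(32, 4096):.1f} cycles")
# The small L1 captures most accesses cheaply; the L2 covers the rest,
# so two quantized levels beat one large cache under these assumptions.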
Other Constraints
This gives a basic framework to understand cache count and cache sizing decisions, but there are secondary factors at work as well. For example, Intel x86 has 4K page sizes and their L1 caches use a VIPT architecture. VIPT means that the size of the cache divided by the number of ways cannot be larger than 4 KiB [3]. So an 8-way L1 cache as used on the last half-dozen Intel designs can be at most 4 KiB * 8 = 32 KiB. It is probably no coincidence that that's exactly the size of the L1 cache on those designs! If it weren't for this constraint, it is entirely possible you'd have seen lower-associativity and/or larger L1 caches (e.g., 64 KiB, 4-way).
[1] Of course, there are other factors involved in the tradeoff as well, such as area and power, but holding those factors constant the size-speed tradeoff applies, and even if not held constant the basic behavior is the same.
[2] In addition to this pressure, there is a scheduling benefit to known-latency caches, like most L1 designs: an out-of-order scheduler can optimistically submit operations that depend on a memory load on the cycle that the L1 cache would return data, reading the result off the bypass network. This reduces contention and perhaps shaves a cycle of latency off the critical path. This puts some pressure on the innermost cache level to have uniform/predictable latency and probably results in fewer cache levels.
[3] In principle, you can use VIPT caches without this restriction, but only by requiring OS support (e.g., page coloring) or with other constraints. The x86 arch hasn't done that and probably can't start now.
For those interested in this type of questions, my university recommends Computer Architecture: A Quantitative Approach and Computer Organization and Design: The Hardware/Software Interface. Of course, if you don't have time for this, a quick overview is available on Wikipedia.
I think the main reason for this is that L1 cache is faster, and so it's more expensive.
https://en.wikichip.org/wiki/amd/microarchitectures/zen#Die
Compare the physical sizes of the L1, L2, and L3 caches for an AMD Zen core, for example. The density increases dramatically with the cache level.
Logically, the question answers itself.
If L1 were bigger than L2 (combined), then there would be no need for an L2 cache.
Why would you store your stuff on a tape drive if you could store all of it on an HDD?

What is the difference between L1 cache and L2 cache?

I know that L1 and L2 caches are levels in a multi-level cache.
I would like to know where each cache level is placed, and what the maximum number of cache levels is.
Both of these depend on the CPU. There are CPUs which have no cache at all, there are CPUs which have the L1 cache on die and the L2 cache on a separate die on the same chip or even on a separate chip, or there are CPUs which have both L1 and L2 cache on the same die as the CPU core.
There are multi-core, multi-chip CPUs where each core has its own L1 cache on die, the 4 cores of one multi-core chip share an L2 cache that is on chip, but on a separate die, and the 2 chips share an L3 cache that is on a separate chip, but in the same package. Sometimes, there are also so-called CPU books which contain multiple chip packages, which might or might not have their own shared cache, which would then be an L4 cache.
Of course, multi-core chips don't have to share their L2 cache, they can also have private L2 caches.
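On Linux you can inspect exactly this placement and sharing via sysfs; a small sketch, assuming the standard /sys/devices/system/cpu cache layout is present (on systems without it, it simply prints nothing):

import glob, os

# Print the cache topology Linux exposes: for each cache instance of cpu0,
# its level, type, size, and which logical CPUs share it.
for index in sorted(glob.glob("/sys/devices/system/cpu/cpu0/cache/index*")):
    def read(name):
        try:
            with open(os.path.join(index, name)) as f:
                return f.read().strip()
        except OSError:
            return "?"
    print(f"L{read('level')} {read('type'):<12} {read('size'):>8} "
          f"shared by CPUs {read('shared_cpu_list')}")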
And it's not always obvious what level a certain cache is, or even whether or not a piece of RAM is a cache at all.
For example, on later Intel 80486 processors, there was an L1 cache on the chip and an L2 cache on the motherboard. But then AMD came out with a socket-compatible CPU that had both an L1 and L2 cache on the chip. So, the exact same cache chip on the motherboard was either an L2 or L3 cache, depending on what kind of CPU you used.
On the Cell BE CPU, the SPEs have 256 KiByte of RAM each. Except that this RAM has about the same size and the same speed as a typical L2 cache, and since the SPEs don't have any other caches, you could also view this as a cache. However, caches are normally managed automatically by the CPU, whereas RAM is typically managed by the user program, the language runtime or the OS, not the CPU. So, is this RAM or a cache? It turns out that, in order to achieve best performance, you should really not view this as RAM, but more as a software-controlled cache.
The difference between L1 and L2 cache
Although both L1 and L2 are cache memories, they have key differences. L1 and L2 are the first and second caches in the hierarchy of cache levels.
L1 has a smaller memory capacity than L2.
Also, L1 can be accessed faster than L2.
L2 is accessed only if the requested data is not found in L1.
L1 is usually built into the chip, while L2 is soldered onto the motherboard very close to the chip. Therefore, L1 has very little delay compared to L2.
Because L1 is implemented using SRAM and L2 is implemented using DRAM, L1 does not need refreshing, while L2 needs to be refreshed.
If the caches are strictly inclusive, all data in L1 can be found in L2 as well. However, if the caches are exclusive, the same data will not be available in both L1 and L2.
Taken from this link -
L1 and L2 are levels of cache memory in a computer. If the computer processor can find the data it needs for its next operation in cache memory, it will save time compared to having to get it from random access memory. L1 is "level-1" cache memory, usually built onto the microprocessor chip itself. For example, the Intel MMX microprocessor comes with 32 thousand bytes of L1.
L2 (that is, level-2) cache memory is on a separate chip (possibly on an expansion card) that can be accessed more quickly than the larger "main" memory. A popular L2 cache memory size is 1,024 kilobytes (one megabyte).
A complete overview of cache architecture is available on Wikipedia.
