Cache Configuration for MIPS Processor

I am having trouble figuring out the tag, set, and byte offset fields for a 32-bit MIPS processor with the following cache configurations:
(a) A direct mapped cache with capacity 8192 bytes and block size 32 bytes.
(b) An 8-way set-associative cache with capacity 2048 bytes and block size 16 bytes.
I would greatly appreciate any help or guidance on this matter. Thank you!
I attempted to use the formula for calculating the number of bits for each field in a cache memory address:
```
Tag = 32 - (log2(cache capacity / block size) + log2(block size))
Set/Index = log2(cache capacity / block size)
Byte offset = log2(block size)
```
I expected to be able to use this formula to find the number of bits for each field, but I am unsure if this is the correct formula for this specific MIPS processor.
As a result, I am seeking assistance to verify my understanding and solution.
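As a sanity check, here is a minimal Python sketch of that calculation (the function name and parameters are my own, not from any textbook); it uses the standard field breakdown, where the set-associative index comes from dividing the number of blocks by the associativity:

```python
from math import log2

def cache_fields(address_bits, capacity, block_size, ways=1):
    """Return (tag, index, offset) widths in bits; capacity and block_size in bytes."""
    offset = int(log2(block_size))          # byte offset within a block
    sets = capacity // (block_size * ways)  # number of sets (ways=1 -> direct mapped)
    index = int(log2(sets))                 # set/index field
    tag = address_bits - index - offset     # remaining bits form the tag
    return tag, index, offset

# (a) direct mapped, 8192-byte capacity, 32-byte blocks
print(cache_fields(32, 8192, 32))          # (19, 8, 5)

# (b) 8-way set associative, 2048-byte capacity, 16-byte blocks
print(cache_fields(32, 2048, 16, ways=8))  # (24, 4, 4)
```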

Related

Relation between size of address bus and memory size; memory Segmentation in 8086

My question is related to memory segmentation in the 8086. I learnt that
the 8086 has a 20-bit address bus, and so it can address 2^20 different addresses, which means it has a memory size of 2^20 bytes, i.e., 1 MB.
I have a few doubts:
What I understand from the fact that the 8086 has a 20-bit address bus is that it can have 2^20 different combinations of 0s and 1s, each of which represents one physical address. What I don't understand is how 2^20 different address locations mean 1 MB of addressable memory. How is the total number of different address locations related to memory size (in megabytes)?
Also, correct me if I'm wrong: the 16-bit segment registers in the 8086 hold the starting addresses of the different segments in memory (Code, Stack, Data, Extra). My question is, aren't memory addresses 20 bits wide? Then how can a 16-bit register hold a 20-bit address? If it contains the upper 16 bits of the 20-bit address, how does the processor work out which exact address location it has to point to?
P.S.: I am a beginner in microprocessors and totally reliant on self-study, so kindly excuse me if my questions seem a bit silly.
Thanks in advance.
For this question, it's important to remember there is a difference between the number of possible memory addresses and the amount of actual memory (RAM) installed in the system. For the 8086, memory addresses are 20 bits long as you note, so there are 2^20 possible memory addresses, which is exactly 1 MiB of address space (since 1 MiB is 1024, or 2^10, KiB and 1 KiB is 1024, or 2^10, bytes). This does NOT necessarily mean the system has 1 MiB of RAM; it very likely has less, but the most the 8086 could possibly address is 1 MiB, so if nothing but RAM were in the address space, the most RAM it could possibly have is 1 MiB. Frequently there are gaps in the address space not filled with anything, and some of the address space is used for ROM or other peripherals. So the size of the address space is 1 MiB, but that does not mean there is 1 MiB of RAM/memory in the system.
Correct, the segment registers are all 16 bits on the 8086. A memory address is formed by combining the appropriate segment register with the offset (the offset being the result of whatever addressing mode the instruction uses): the offset is added to the segment register's value shifted left by 4 bits. So if, for example, ss is 0x1111 and sp is 0x2222, the 20-bit physical address of the top of the stack (ss:sp) is (ss << 4) + sp, or 0x11110 + 0x02222 = 0x13332. More information can be found on Wikipedia under the Real mode section: https://en.wikipedia.org/wiki/X86_memory_segmentation
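For illustration, the same calculation in Python (using the values from the example above):

```python
# 8086 real-mode address: (segment << 4) + offset, truncated to 20 bits
def physical_address(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF

ss, sp = 0x1111, 0x2222
print(hex(physical_address(ss, sp)))  # 0x13332
```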

Armv8 address fields for cache?

I am reading the ARM Cortex-A Series Programmer’s Guide for ARMv8-A.
In 11.1.2 Cache tags and Physical Addresses, there is an example of cache address fields.
Example:
Cache is 4-way, 32 KB
Cache line = 16 words (64 bytes)
And the address fields stated in the document:
Set(index) = 8 bits, Offset = 6 bits, Tag = 30 bits
From my understanding, an 8-bit index corresponds to 256 cache lines in each way (which is illustrated correctly in the example), and the offset is 6 bits (2^6 = 64), which is used to address the bytes inside a line (64 bytes).
However, the cache is 4-way, which means the cache size would be 4*256*64 = 64 KB, not 32 KB.
Is my analysis correct, or am I missing something?
Someone asked the same question on the Arm Community website: https://community.arm.com/developer/ip-products/processors/f/cortex-a-forum/8159/how-to-compute-a-cache-size
Here is the reply to that question:
" Got reply from ARM. It is a document error. It should be 2-way set-associative cache. 16KB * 2 = 32 KB "

How to calculate number of virtual pages

virtual address size: 32 bits
page size = 4K =2^12 bytes
what is the number of pages?
I know the answer is (2^32)/(2^12) = 2^20, but why?
I think it should be (2^32)/(2^15) because of the byte-to-bit conversion (2^12)*(8) = 2^15.
Every byte in memory has a numeric address, starting from 0. The CPU has one or more registers which hold the address of the byte being worked on. A register is a physical device and has a limit on how large a number it can store.
virtual address size: 32 bits
This means the address register can store one address (number), which could be anything between 0 and 2^32 - 1.
As the largest address the register can store is 2^32 - 1, there is no point in having more memory bytes, because the CPU would never be able to reach them. So in general we assume the total memory to be 2^32 bytes.
page size = 4K =2^12 bytes
The total memory of millions of bytes is organized in chunks called pages. Here the total memory of 2^32 bytes is divided into pages of 2^12 bytes each.
what is the number of pages?
the answer is (2^32)/(2^12) = 2^20. Good job!
but why? I think it should be (2^32)/(2^15) because of the byte-to-bit conversion (2^12)*(8) = 2^15
Here 2^32 is the total number of bytes in memory and 2^12 is the total number of bytes in a page. Both numerator and denominator must be in the same units - bytes - so you need not convert the denominator to bits.
Note:
I have oversimplified terms like memory, address, register, etc. Many of the statements above are not strictly valid for a real machine, but they are useful for initial learning.
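In code, keeping both numerator and denominator in bytes:

```python
virtual_address_bits = 32
page_size = 2 ** 12   # 4 KiB page, in bytes

num_pages = (2 ** virtual_address_bits) // page_size
print(num_pages)      # 1048576 == 2**20 pages
```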

Calculating number of bits in a cache

Preface: There are many different design patterns that are important to a cache's overall performance. Listed below are parameters for different direct-mapped cache designs.
Cache data size: 32 KiB
Cache block Size: 2 words
Cache access time: 1-cycle
Question: Calculate the number of bits required for the cache listed above, assuming a 32-bit address. Given that total size, find the
total size of the closest direct-mapped cache with 16-word blocks of
equal size or greater. Explain why the second cache, despite its
larger data size, might provide slower performance than the first
cache.
Here's the formula:
Number of bits in a cache = 2^n x (block size + tag size + valid field size), where n is the number of index bits
Here's what I got: 65536(1+14X(32X2)..
is this correct?
using: (2^index bits) * (valid bits + tag bits + (data bits * 2^offset bits))
For the first one I get:
total bits = 2^15 (1+14+(32*2^1)) = 2588672 bits
For the cache with 16-word blocks I get:
total bits = 2^13 (1+13+(32*2^4)) = 4308992 bits
The next smallest cache with 16-word blocks and a 32-bit address works out to be 2158592 bits, smaller than the first cache.
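If it helps, the arithmetic above can be checked mechanically; this just re-evaluates the answer's own formula (it is not an independent derivation of the exercise):

```python
def total_bits(index_bits, tag_bits, valid_bits, word_bits, offset_bits):
    # (2^index) blocks, each storing: valid + tag + (word_bits * 2^offset) data bits
    return (2 ** index_bits) * (valid_bits + tag_bits + word_bits * 2 ** offset_bits)

print(total_bits(15, 14, 1, 32, 1))  # 2588672 (2-word blocks)
print(total_bits(13, 13, 1, 32, 4))  # 4308992 (16-word blocks)
```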
I'm stuck on the same problem too but I have the answer to the first part.
To calculate the total number of bits required
You need to convert the KB to words and get the index bits.
Use the answer from part 1 to get your tag bits.
Plug them into this formula.
(2^(index bits)) * ((tag bits)+(valid bits)+(data size))
Hint: the data size is 64 bits in this case and the valid field is 1 bit. So you just need to find the index and tag bits.
And I don't think your answer is right. I didn't check but I can see you are multiplying 1+14 and (32x2) instead of adding them.
I think the formula you were using is correct. According to my textbook, "Computer Organization and Design: The Hardware/Software Interface, 5th edition", the total number of bits in a direct-mapped cache is:
2^index bits * (block size + tag size + valid field size).
block size was given by the question: 2 words = 64 bits
tag size: 32 - offset in bits - index in bits
valid field size is usually 1 valid bit

I don't understand something in memory addressing

I have a very simple (n00b) question.
A 20-bit external address bus gave a 1 MB physical address space (2^20
= 1,048,576). (Wikipedia)
Why 1 MByte?
2^20 = 1,048,576 bit = 1Mbit = 128KByte not 1MB
I misunderstood something.
When you have 20 bits you can address 2^20 locations. That 2^20 is the number of addresses, not a number of bits.
I.e. if you have 8 bits your range is 0 to 255 (unsigned), i.e. 2^8 = 256 values, not 2^8 bits.
So with 20 bits you can address up to 2^20 bytes, i.e. 1 MB.
I.e. with 20 bits you can represent addresses from 0 up to 2^20 - 1 = 1,048,575, which is 2^20 distinct byte addresses, so you can reference up to 1 MB of memory.
1 << 20 addresses, that is 1,048,576 bytes addressable. Hence, 1 MB physical address space.
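As a quick check of that arithmetic (assuming byte-addressable memory):

```python
addresses = 1 << 20
print(addresses)                    # 1048576 distinct byte addresses
print(addresses // 1024)            # 1024 KiB
print(addresses // (1024 * 1024))   # 1 MiB
```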
Because the smallest addressable unit of memory (in general - some architectures have small bit-addressable pieces of memory) is the byte, not the bit. That is, each address refers to a byte, rather than to a bit.
Why, you ask? Direct access to individual bits is almost never needed - and if you need it, you can still load the surrounding byte and get the bit with bit masks and shifts. Increasing the bits per address allows you to address more memory with the same address range.
Note that a byte doesn't have to be 8 bits, strictly speaking, though that size is ubiquitous by now. But regardless of the byte size, you're grouping bits together to be able to handle larger quantities of them.
