Calculating number of bits in a cache - caching

Preface: There are many different design parameters that are important to a cache's overall performance. Listed below are the parameters for different direct-mapped cache designs.
Cache data size: 32 KiB
Cache block size: 2 words
Cache access time: 1 cycle
Question: Calculate the number of bits required for the cache listed above, assuming a 32-bit address. Given that total size, find the total size of the closest direct-mapped cache with 16-word blocks of equal size or greater. Explain why the second cache, despite its larger data size, might provide slower performance than the first cache.
Here's the formula:
Number of bits in a cache = 2^n * (block size + tag size + valid field size)
Here's what I got: 65536 * (1 + 14 * (32 * 2)) ...
is this correct?

Using: (2^index bits) * (valid bits + tag bits + (bits per word * 2^offset bits))
For the first one I get:
total bits = 2^15 * (1 + 14 + (32 * 2^1)) = 2,588,672 bits
For the cache with 16-word blocks I get:
total bits = 2^13 * (1 + 13 + (32 * 2^4)) = 4,308,992 bits
The next smallest cache with 16-word blocks and a 32-bit address works out to be 2^12 * (1 + 14 + (32 * 2^4)) = 2,158,592 bits, which is smaller than the first cache.
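For anyone who wants to sanity-check that arithmetic, here is a quick Python check (my own, just plugging in the numbers from the reply above):

    # total bits = 2^index_bits * (valid + tag + data bits per block)
    first  = 2**15 * (1 + 14 + 32 * 2**1)   # 2-word blocks
    second = 2**13 * (1 + 13 + 32 * 2**4)   # 16-word blocks
    third  = 2**12 * (1 + 14 + 32 * 2**4)   # next smaller 16-word-block cache
    print(first, second, third)             # 2588672 4308992 2158592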

I'm stuck on the same problem too but I have the answer to the first part.
To calculate the total number of bits required:
You need to convert the KiB to words to get the number of index bits.
Use the answer from part 1 to get your tag bits.
Plug them into this formula:
(2^index bits) * (tag bits + valid bits + data size)
Hint: the data size is 64 bits in this case and there is 1 valid bit, so just find the index and tag bits.
And I don't think your answer is right. I didn't check the whole thing, but I can see you are multiplying (1 + 14) and (32 * 2) instead of adding them.

I think the formula you were using is correct. According to my textbook ("Computer Organization and Design: The Hardware/Software Interface", 5th edition), the total number of bits in a direct-mapped cache is:
2^index bits * (block size + tag size + valid field size)
The block size was given by the question: 2 words = 64 bits.
Tag size: 32 - offset bits - index bits.
The valid field size is usually 1 valid bit.
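To make that formula concrete, here is a small Python helper (my own sketch, not from the textbook), assuming 32-bit words and byte addressing, and taking the number of index bits as an input:

    import math

    def cache_total_bits(index_bits, block_size_words, address_bits=32,
                         word_bits=32, valid_bits=1):
        """Total bits = 2^index_bits * (valid + tag + data bits per block)."""
        offset_bits = int(math.log2(block_size_words * (word_bits // 8)))  # byte offset within a block
        tag_bits = address_bits - index_bits - offset_bits
        data_bits = block_size_words * word_bits
        return 2**index_bits * (valid_bits + tag_bits + data_bits)

    print(cache_total_bits(15, 2))    # 2588672, the 2-word-block figure above
    print(cache_total_bits(13, 16))   # 4308992, the 16-word-block figure above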

Related

How to calculate cache block size from its overhead?

I've been looking at this for a long time now (over 3 days) without luck. Maybe one of you can tell me how to solve it.
Suppose you have a computer with a 16-bit address and byte-addressable memory. The cache is 2-way set-associative, with a write-back policy and a perfect LRU replacement strategy. The cache has an overhead of 4352 bits. What is the size of the block?
Very few resources talk about overhead, and the ones I've found only relate it to total cache size. The problem is that I only know how to calculate cache size from the number of blocks, or at least with the fields of the address properly defined (which I have not been able to do for this problem, since I can't calculate the size of the tag).
Any help would be appreciated.
So, here's how I read this question:
Overhead bits are the bits that don't count toward the actual data being cached.  They are bits that track the maintenance state of the cache and help it implement hits, write-back, and the eviction policy.  One way of looking at it: if one byte is being cached (8 bits), how many non-data bits are in the cache to help manage it (or, for all the actual data bits, how many non-data/overhead bits are there)?
This is mathematical, so I hope I haven't made an error, but even if I have maybe you can see your way through the reasoning.
Let's derive some additional information:
A write-back policy means the cache needs to store "dirty" information for each data block: dirty is 1 bit (yes, dirty -or- no, clean).
For a 2-way set-associative cache, a "perfect" LRU algorithm also needs only 1 bit (yes: first block -or- no: second block), but this 1 bit costs per index position (i.e. per set), not per block, since there are two blocks per index.
What we don't know is whether there is a valid bit, which would also be per data block, but most caches I see in coursework have valid bits, so we might assume this one does too.
And lastly, there are the tag bits: however many bits are left over in the address after accounting for the index bits and the block offset bits.
So, a formula for overhead might be:
overhead in bits = index positions * (1 x LRU bit + block overhead bits)
where block overhead bits = 2 [ways] * (1 x Dirty bit + 1 x Valid bit + tag bits)
We also know that tag bits = address space bits - index bits - block bits
So, we have:
4352 [overhead in bits] = index positions * (1 + 2 * (2 + tag bits))
-and-
tag bits = address space bits - index bits - block offset bits
-and-
index positions = 2^index bits
-and-
We also know that the number of tag, index, and block offset bits has to be an integer (no fractions of bits).
So, we can begin to reduce those two formulas by substituting:
4352 = index positions * (1 + 2 * (2 + address space bits - index bits - block bits))
by reduction also then:
4352 = 2^index bits * (1 + 2 * (2 + 16 - index bits - block bits))
Solving for block bits we have:
block bits = 18 - index bits - (4352 / 2^index bits - 1) / 2
I don't know how to solve this directly mathematically, given the constraint that the variables must be integers, so, instead of solving directly, simply try/search different values:
If index bits is 7 then by this formula, block bits is fractional, so that doesn't work.
If index bits is 9 then by this formula, block bits is fractional, so that doesn't work.
No other values between 0 and 16 result in an integer number of bits, except:
If index bits is 8 then by this formula, block bits is 2, so:
16 = tag bits + 8 + 2, meaning tag bits is 6, index bits is 8, and block offset is 2.
Since the block offset is 2 bits, the block size is 2^2 = 4 bytes.
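Since the try/search step is easy to automate, here is a short Python sketch (mine, following the same assumptions as above: one LRU bit per set, plus a dirty bit, a valid bit, and the tag per block) that tries every possible index width:

    # Overhead equation: 4352 = 2^index_bits * (1 LRU + 2 * (dirty + valid + tag_bits))
    # with tag_bits = 16 - index_bits - block_offset_bits
    ADDRESS_BITS = 16
    OVERHEAD = 4352

    for index_bits in range(ADDRESS_BITS + 1):
        sets = 2 ** index_bits
        if OVERHEAD % sets:
            continue                       # per-set overhead must be a whole number of bits
        per_set = OVERHEAD // sets         # 1 LRU bit + 2 * (2 + tag_bits)
        twice_tag = per_set - 1 - 4        # this is 2 * tag_bits
        if twice_tag < 0 or twice_tag % 2:
            continue                       # tag bits must be a non-negative integer
        tag_bits = twice_tag // 2
        block_offset_bits = ADDRESS_BITS - index_bits - tag_bits
        if block_offset_bits >= 0:
            print(index_bits, tag_bits, block_offset_bits, 2 ** block_offset_bits)
    # prints: 8 6 2 4  ->  8 index bits, 6 tag bits, 2 offset bits, 4-byte blocks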

What is overhead percentage?

Consider a 2KB direct mapped cache with blocks of size 1 word. As
always, addresses are 32 bits.
How many blocks does the cache contain? 2^7
How many bits long is each tag? (Tags are shown in pink in the class notes.) 23
How many bits long is each cache index? (These are green in the notes.) 7
What is the total size of the cache? (32 + 1+ 23) x 2^7
What percentage of the total size is the overhead?
What is "overhead", and what is the percentage of overhead?
Overhead is the tag storage, plus any other bits the cache needs to store other than the data itself.
(e.g. for an associative cache with LRU replacement, it would need to store some bits that record the LRU state to track which member of the set is next in line for eviction.)
Overhead percentage is simply overhead / total size, as the assignment says (not overhead / data).
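Plugging in the numbers from the question above, a quick Python check (my own arithmetic, assuming 2^7 blocks, a 23-bit tag, and 1 valid bit per block):

    blocks        = 2**7
    data_bits     = 32 * blocks               # one 32-bit word of data per block
    overhead_bits = (23 + 1) * blocks         # tag + valid bit per block
    total_bits    = data_bits + overhead_bits
    print(total_bits)                          # 7168, i.e. (32 + 1 + 23) * 2^7
    print(100 * overhead_bits / total_bits)    # ~42.9% of the total is overhead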

Calculating memory address sizes for paging, the offset, and the page table size

This question is mostly just to clarify my understanding.
Say I have a 32-bit computer with a virtual memory space of 2^32 bytes.
Memory paging is used, and each page is 2^8 bytes.
So the virtual page number is 24 bits, since 2^32 / 2^8 = 2^24 pages.
And the offset would be 8 bits? This I do not quite understand, other than that the total address is 32 bits, 24 bits are already taken by the page number, and so the remaining 8 bits are the offset.
Lastly, the page table size. If each physical memory address is stored in 32 bits (4 bytes), the table size would be 2^26 bytes (2^24 * 2^2). Is this correct?
Page table size = number of entries * size of each entry.
In your case each page is 2^8 bytes, that is, you need an 8-bit offset. You got that one right.
This leaves us with 24 bits for the page number, i.e. 2^24 different pages.
The size of the page table for a process is 2^24 * entry size, and the entry size is not provided here.
Let's assume it needs 32 bits per entry. Then 2^24 * 32 = 2^24 * 2^5 = 2^29 bits.
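The same calculation as a quick Python sketch (mine, assuming a flat single-level page table with 4-byte entries):

    page_size   = 2**8                 # bytes per page
    va_bits     = 32
    offset_bits = 8                    # log2(page_size)
    vpn_bits    = va_bits - offset_bits            # 24-bit virtual page number
    entries     = 2**vpn_bits                      # 2^24 page table entries
    entry_bytes = 4                                # assumed 32-bit entries
    table_bytes = entries * entry_bytes
    print(table_bytes, table_bytes * 8)            # 67108864 bytes (2^26), 536870912 bits (2^29)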

Word size in bits to bytes conversion confusion

I have a pretty elementary question which is somewhat confusing me. It would be great to get a refresher on this.
Every computer has a word size. The word size is the maximum size of the virtual address space. So if we have, let's say, a 32-bit word size, we have a virtual address space that ranges over a maximum of 2^32 values. In references it says 2^32 bytes? Why is the range in bytes?
Also, what I am failing to understand is how 2^32 possible values can be an address range of 4 GB. My confusion stems from turning the 32-bit word size into a 4-byte word size, and then how 4 bytes, multiplied 2^32 times, result in 4 GB.
One way I tried to rationalize it is as follows:
2^32 bytes = 2^2 (GB) x 2^10 (MB per GB) x 2^10 (KB per MB) x 2^10 (bytes per KB)
So successive division of 2^32 by 2^10 results in 2^2 GB or 4 GB.
Can somebody point out how a 32-bit word size gets you a 4 GB address range?
Thanks
The argument in my head goes like this: we have 32 bits available to us, and each bit can be at most 1. So the largest number we can accommodate is when all 32 bits (bit 0 through bit 31, that is) are filled with 1s. So the trick is to find that largest number in decimal form; converting from binary to decimal we get:
11111111111111111111111111111111 (binary) = 4294967295 (decimal)
But what is 4294967295? It's actually one less than 2^32. Now there's another important thing to keep in mind:
4GB = 4294967296 bytes
But why is it 1 greater than our result? Because our first byte is byte 0 while the last is byte 4294967295 for a total of 4294967296 bytes.
So now we're in a position where the smallest number that can exist in a 32-bit register is 0 and the largest number that can exist in a 32-bit register is 4294967295.
0 (binary) - 11111111111111111111111111111111 (binary)
0 (decimal) - 4294967295 (decimal)
0x00000000 (hex) - 0xFFFFFFFF (hex)
So there is 4GB of addressable space because anything above 4GB will have an address that is too big of a number to fit inside a 32-bit number and thus inside a 32-bit register.
I did all of this in Excel, and seeing it laid out helped me a lot.
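The same check in a few lines of Python (my own sketch), assuming byte addressing, i.e. each of the 2^32 addresses names one byte:

    max_addr = 2**32 - 1
    print(max_addr)                 # 4294967295
    print(hex(max_addr))            # 0xffffffff
    total_bytes = 2**32             # addresses 0 .. 2^32 - 1, one byte each
    print(total_bytes / 2**30)      # 4.0 -> 4 GB of addressable memory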

I don't understand something in memory addressing

I have a very simple (n00b) question.
A 20-bit external address bus gave a 1 MB physical address space (2^20
= 1,048,576).(Wikipedia)
Why 1 MByte?
2^20 = 1,048,576 bit = 1Mbit = 128KByte not 1MB
I misunderstood something.
When you have 20 bits you can address up to 2^20. This is your range, not the number of bits.
I.e. if you have 8 bits your range is up to 255 (unsigned) not 2^8 bits.
So with 20 bits you can address up to 2^20 bytes, i.e. 1 MB.
That is, with 20 bits you can represent 2^20 = 1,048,576 distinct addresses (0 through 2^20 - 1), so you can reference up to 1 MB of memory.
1 << 20 addresses, that is 1,048,576 bytes addressable. Hence, 1 MB physical address space.
Because the smallest addressable unit of memory (in general - some architectures have small bit-addressable pieces of memory) is the byte, not the bit. That is, each address refers to a byte, rather than to a bit.
Why, you ask? Direct access to individual bits is almost never needed, and if you do need it, you can still load the surrounding byte and get the bit with bit masks and shifts. Making each address refer to more bits allows you to cover more memory with the same number of addresses.
Note that a byte doesn't have to be 8 bits, strictly speaking, though that is ubiquitous by now. But regardless of the byte size, you're grouping bits together to be able to handle larger quantities of them.
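To see both interpretations side by side, a tiny Python check (my own):

    addresses = 1 << 20                   # 2^20 distinct addresses
    print(addresses)                      # 1048576
    print(addresses / 2**20, "MB")        # 1.0 MB if each address names a byte
    print(addresses / 8 / 2**10, "KB")    # 128.0 KB if each address named a single bit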
