Direct Table & Lookup Table - image

How do you measure the memory size of an image in the direct-coding 24-bit RGB color model and in a 24-bit 256-entry look-up table representation? For example: given an image of resolution 800*600, how much space is required to save the image using direct coding, and how much using a look-up table?

For a regular 24-bit RGB representation, you most probably just have to multiply the number of pixels by the number of bytes per pixel. 24 bits = 3 bytes, so the size is 800 * 600 * 3 bytes = 1,440,000 bytes ≈ 1.37 MiB. In some cases the rows of an image may be aligned on some boundary in memory, usually 4, 8 or 32 bytes. But since each row of 800 * 3 = 2400 bytes is already divisible by 32, this changes nothing: still 1.37 MiB.
Now, for a look-up table you have 1 byte per pixel, since a pixel only has to address one entry in the table. This yields 800 * 600 * 1 = 480,000 bytes ≈ 0.46 MiB. Plus the table itself: 256 colors, 24 bits (3 bytes) each, so 256 * 3 = 768 bytes. Negligible compared to the size of the image.
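The arithmetic is simple enough to check in a few lines. A minimal MATLAB sketch (the variable names are mine; the values come from the question):
width = 800; height = 600; % image resolution from the question
bytes_per_pixel = 3; % 24-bit RGB = 3 bytes per pixel
direct_bytes = width * height * bytes_per_pixel; % direct coding: 1,440,000 bytes
lut_bytes = width * height * 1 + 256 * 3; % 1 index byte per pixel + 256-entry, 3-byte-per-entry table
fprintf('direct coding: %d bytes (%.2f MiB)\n', direct_bytes, direct_bytes / 2^20);
fprintf('look-up table: %d bytes (%.2f MiB)\n', lut_bytes, lut_bytes / 2^20);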

Related

Debayering Bayer-encoded Raw Images

I have an image which I need to write a debayer for, but I can't figure out how the data is packed.
The information I have about the image:
original bpp: 64;
PNG bpp: 8;
columns: 242;
rows: 3944;
data size: 7635584 bytes.
PNG https://drive.google.com/file/d/1fr8Tg3OvhavsgYTwjJnUG3vz-kZcRpi9/view?usp=sharing
SRC data: https://drive.google.com/file/d/1O_3tfeln76faqgewAknYKJKCbDq8UjEz/view?usp=sharing
I was told that it should be BGGR, but it doesn't look like any ordinary Bayer BGGR image to me. Also I got the image with a txt file which contains this text:
Camera resolution: 1280x944
Camera type: LVDS
Could the image be compressed somehow?
I'm completely lost here, I would appreciate any help.
Bayer pattern of the image in 8bpp representation
Looks like there are 4 images, and the pixels are stored in some kind of "packed 12" format.
Please note that "reverse engineering" the format is challenging, and the solution probably has a few mistakes.
The 4 images are stored in steps of 4 rows:
aaaaaaaaaaaaa
bbbbbbbbbbbbb
ccccccccccccc
ddddddddddddd
aaaaaaaaaaaaa
bbbbbbbbbbbbb
ccccccccccccc
ddddddddddddd
...
aaa... marks the first image.
bbb... marks the second image.
ccc... marks the third image.
ddd... marks the fourth image.
There are about 168 rows at the top that we have to ignore.
Getting 1280 pixels out of 1936 bytes in each row:
Each row has 16 bytes we have to ignore.
Out of 1936 bytes, only 1920 bytes are relevant (assume we have to remove 8 bytes from each side).
These 1920 bytes represent 1280 pixels.
Every 2 pixels are stored in 3 bytes (every pixel is 12 bits).
The two 12-bit elements in 3 bytes are packed as follows:
8 MSB bits | 8 MSB bits | 4 LSB bits and 4 LSB bits
########     ########     ####  ####
It's hard to tell how the LSB bits are divided between the two pixels (the LSB is mainly "noise").
After unpacking the pixels and extracting one image out of the 4, the format looks like a GRBG Bayer pattern (by changing the size of the margins we may get BGGR).
MATLAB code sample for extracting one image:
f = fopen('test.img', 'r'); % Open file (as binary file) for reading
T = fread(f, [1936, 168], 'uint8')'; % Read (and discard) the first 168 rows
I = fread(f, [1936, 944*4], 'uint8')'; % Read 944*4 rows
fclose(f);
% Convert from packed 12 to uint16 (also skip rows in steps of 4, and ignore 8 bytes from each side):
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A = uint16(I(1:4:end, 8+1:3:end-8)); % MSB of even pixels (convert to uint16)
B = uint16(I(1:4:end, 8+2:3:end-8)); % MSB of odd pixels (convert to uint16)
C = uint16(I(1:4:end, 8+3:3:end-8)); % 4 bits are LSB of even pixels and 4 bits are LSB of odd pixels
I1 = A*16 + bitshift(C, -4); % Add 4 LSB bits to the even pixels (the split may be wrong)
I2 = B*16 + bitand(C, 15); % Add the other 4 LSB bits to the odd pixels (the split may be wrong)
I = zeros(size(I1, 1), size(I1, 2)*2, 'uint16'); % Allocate 944x1280 uint16 elements.
I(:, 1:2:end) = I1; % Copy even pixels
I(:, 2:2:end) = I2; % Copy odd pixels
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
J = demosaic(I*16, 'grbg'); % Apply demosaic (multiply by 16, because MATLAB assumes the 12 bits are in the upper bits).
figure;imshow(lin2rgb(J));impixelinfo % Show the output image (lin2rgb applies gamma correction).
Result (converted to 8 bit):

Calculating the total data+overhead of a set associative cache

This is a question from a Computer Architecture exam and I don't understand how to get to the correct answer.
Here is the question:
This question deals with main and cache memory only.
Address size: 32 bits
Block size: 128 items
Item size: 8 bits
Cache Layout: 6-way set associative
Cache Size: 192 KB (data only)
Write policy: Write Back
What is the total number of cache bits?
In order to get the number of tag bits, I find that 7 bits of the address are used for the byte offset (0-127) and 8 bits are used for the set index (0-249) (250 = 192000/128/6), therefore 17 bits of the address are left for the tag.
To find the total number of bits in the cache, I would take (valid bit + tag size + bits per block) * number of sets * number of blocks per set = (1 + 17 + 1024) * 250 * 6 = 1,563,000. This is not the correct answer though.
The correct answer is 1,602,048 total bits in the cache and part of the answer is that there are 17 tag bits. After trying to reverse engineer the answer, I found that 1,602,048 = 1043 * 256 * 6 but I don't know if that is relevant to the solution because I don't know why those numbers would be used.
I'd like if someone could explain what I did wrong in my calculation to get a different answer.
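For what it's worth, the given answer decomposes cleanly under two assumptions: that 192 KB means 192 * 1024 = 196,608 bytes (so there are 196608/128/6 = 256 sets rather than 250), and that the write-back policy adds a dirty bit next to the valid bit, making 1043 = 1 valid + 1 dirty + 17 tag + 1024 data bits per block. A MATLAB sketch of that interpretation (a sanity check, not an official solution):
cache_bytes = 192 * 1024; % assume 192 KB = 192 KiB = 196608 bytes
block_bytes = 128; ways = 6; % 128 one-byte items per block, 6-way
sets = cache_bytes / block_bytes / ways; % 256 sets
index_bits = log2(sets); % 8
offset_bits = log2(block_bytes); % 7
tag_bits = 32 - index_bits - offset_bits; % 17
bits_per_block = 1 + 1 + tag_bits + block_bytes*8; % valid + dirty + tag + data = 1043
total_bits = bits_per_block * sets * ways; % 1,602,048
fprintf('%d sets, %d tag bits, %d total bits\n', sets, tag_bits, total_bits);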

Miss rate calculation

I have this problem:
A program calculates the sum of a 128x128 matrix of 32-bit integers (by rows). I have a one-way (direct-mapped) cache with 8 sets and a block size of 64 bytes, considering only the accesses to the matrix, not the instructions.
I should calculate its miss rate.
And also the miss rate when reading the matrix by columns. Sorry if there are grammar mistakes; I translated this to English.
What I've done so far is that (correct me if I'm wrong):
Integer size = 4B
64/4 = 16 (integers inside a block)
128/16 = 8 (blocks per row)
15 hits and 1 miss (per block)
120 hits and 8 misses (per row)
15360 hits and 1024 misses (the whole matrix, 128 rows)
miss rate = 1024/16384 = 0.0625 = 6.25%
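One way to verify such numbers is to simulate the cache directly. Below is a minimal MATLAB sketch under my reading of the problem (a direct-mapped cache with 8 sets of 64-byte blocks, and a row-major 128x128 int32 matrix); swapping the two loops gives the by-column miss rate, which comes out as 100% since each matrix row is exactly 512 bytes, so all elements of a column map to the same set:
nsets = 8; block_bytes = 64; n = 128; int_bytes = 4;
tags = -ones(1, nsets); % one tag per set (direct-mapped = one-way)
misses = 0;
for i = 0:n-1 % outer loop over rows = row-major traversal
    for j = 0:n-1
        addr = (i*n + j) * int_bytes; % byte address of element (i,j)
        blk = floor(addr / block_bytes); % block number
        set_idx = mod(blk, nsets) + 1; % set index (1-based for MATLAB)
        tag = floor(blk / nsets);
        if tags(set_idx) ~= tag % miss: fetch the block into this set
            tags(set_idx) = tag;
            misses = misses + 1;
        end
    end
end
fprintf('miss rate: %.2f%%\n', 100 * misses / n^2); % prints 6.25 for the by-row order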

Why are kilo-, mega- and gigabytes named after "bytes", if they all have 10 or more bits when bytes have 8 bits?

I get why we have the number 1024 instead of 1000 with the prefix "kilo" in computing (computers use base 2, so 2^10, blah blah blah). So I get the kilo part, but why is it called a kilo-"byte"? To make a kilo-"byte", we need to use binary numbers with 10 digits, from 0000000000 to 1111111111. That is not 8 digits, so shouldn't it be called something else?
I.e. a kilobyte is not 1024 groupings of 8-bit binary digits, it is 1024 groups of 10-bit binary digits, and a megabyte has even more than 10 binary digits, not 8. If asked how many bits are in 1 kilobyte, people calculate it as 1*1024*8. But that's wrong! It should be 1*1024*10.
I.e. a kilobyte is not 1024 groupings of 8 bit binary digits, it is
1024 groups of 10 bit binary digits
You are confusing the size of a byte with the size of the value needed to address those bytes.
On most systems a byte is 8 bits, which means 1000 bytes is exactly 1000*8 bits and 2000 bytes is exactly 2000*8 bits (i.e. exactly double, which makes sense).
To address or index those bytes you need 10 bits in the first example (2^10 = 1024) and 11 bits in the second (2^11 covers up to 2048 bytes). It wouldn't make a lot of sense if the size of a byte changed whenever there were more bytes in a data structure.
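A one-line MATLAB check of that addressing claim (nextpow2 returns the exponent of the next power of two, i.e. the number of address bits needed):
fprintf('bits to address 1000 bytes: %d; 2000 bytes: %d\n', nextpow2(1000), nextpow2(2000));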
As for the 1000 (kilobyte) vs 1024 (kibibyte):
1 kB (kilobyte) = 10^3 = 1000
1 KiB (kibibyte) = 2^10 = 1024
A kilobyte used to be generally accepted as being 1024 bytes. However, at some point hard disk manufacturers started to count 1 kB as 1000 bytes (kilo meaning 1000, which is actually correct):
1 GB = 1000^3 = 1000000000
1 GiB = 1024^3 = 1073741824
Windows still used 1 kB = 1024 bytes to show the hard disk size, i.e. it showed 954 MB for 1 GB of hard disk space. I remember a lot of customers complaining about that when checking, for example, the size of their 250 GB drive, which only showed up as 233 GB in Windows.
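The discrepancy is easy to reproduce. A small MATLAB sketch using the 250 GB drive from above:
drive_bytes = 250e9; % a "250 GB" drive as sold (decimal prefix)
gb = drive_bytes / 1e9; % decimal gigabytes: 250
gib = drive_bytes / 2^30; % binary gibibytes: about 232.8
fprintf('%.0f GB = %.1f GiB (the figure older Windows labels "GB")\n', gb, gib);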

Calculating Page Table Size

I'm reading through an example of page tables and just found this:
Consider a system with a 32-bit logical address space. If the page size in such a system is 4 KB (2^12), then a page table may consist of up to 1 million entries (2^32/2^12). Assuming that each entry consists of 4 bytes, each process may need up to 4 MB of physical address space for the page table alone.
I don't really understand what this 4MB result represents. Does it represent the space the actual page table takes up?
Since we have a virtual address space of 2^32 bytes and each page is 2^12 bytes, the address space holds (2^32/2^12) = 2^20 pages. Since each entry in this page table is 4 bytes, we have 2^20*4 bytes = 4 MB. So the page table takes up 4 MB in memory.
My explanation uses elementary building blocks that helped me to understand. Note I am leveraging Deepak Goyal's answer above since he provided clarity:
We were given a logical 32-bit address space (i.e. We have a 32 bit computer)
Consider a system with a 32-bit logical address space
This means that every memory address can be 32 bits long.
"A 32-bit entry can point to one of 2^32 physical page frames"[2], stated differently,
"A 32-bit register can store 2^32 different values"
We were also told that
each page size is 4 KB
1 KB (kilobyte) = 1 x 1024 bytes = 2^10 bytes
4 x 1024 bytes = 2^2 x 2^10 bytes => 4 KB (i.e. 2^12 bytes)
The size of each page is thus 4 KB (Kilobytes NOT kilobits).
As Deepak said, we calculate the number of pages in the page table with this formula:
Num_Pages_in_PgTable = Total_Possible_Logical_Address_Entries / page size
Num_Pages_in_PgTable = 2^32 / 2^12
Num_Pages_in_PgTable = 2^20 (i.e. 1 million)
The authors go on to give the case where each entry in the page table takes 4 bytes. That means that the total size of the page table in physical memory will be 4MB:
Size_of_PgTable = Size_of_Page_Entry_in_bytes x Num_Pages_in_PgTable
Size_of_PgTable = 4 x 2^20
Size_of_PgTable = 4 MB (Megabytes)
So yes, each process may need up to 4 MB of physical memory for its page table alone.
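The same 32-bit arithmetic as a quick MATLAB check (the variable names are mine):
addr_bits = 32; page_bytes = 4 * 2^10; entry_bytes = 4;
num_entries = 2^addr_bits / page_bytes; % 2^20 = 1048576 entries
pgtable_bytes = num_entries * entry_bytes; % 4 MiB
fprintf('%d entries -> %.0f MB page table\n', num_entries, pgtable_bytes / 2^20);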
Example
Now if a professor wanted to make the question a bit more challenging than the explanation from the book, they might ask about a 64-bit computer. Let's say they want the memory in bits. To solve the question, we'd follow the same process, only being sure to convert bytes to bits.
Let's step through this example.
Givens:
Logical address space: 64-bit
Page Size: 4KB
Entry_Size_Per_Page: 4 bytes
Recall: A 64-bit entry can point to one of 2^64 physical page frames
Since the page size is 4 KB, we still have 2^12-byte pages:
1 KB (kilobyte) = 1 x 1024 bytes = 2^10 bytes
Size of each page = 4 x 1024 bytes = 2^2 x 2^10 bytes = 2^12 bytes
How Many pages In Page Table?
Num_Pages_in_PgTable = Total_Possible_Logical_Address_Entries / page size
Num_Pages_in_PgTable = 2^64 / 2^12
Num_Pages_in_PgTable = 2^52
Num_Pages_in_PgTable = 2^2 x 2^50
Num_Pages_in_PgTable = 4 x 2^50
How Much Memory in BITS for the Whole Page Table?
Size_of_PgTable = Size_of_Page_Entry_in_bytes x 8 bits/byte x Num_Pages_in_PgTable
Size_of_PgTable = 4 bytes x 8 bits/byte x 2^52
Size_of_PgTable = 32 bits x 2^2 x 2^50
Size_of_PgTable = 32 bits x 4 x 2^50
Size_of_PgTable = 128 Petabits
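And the 64-bit variant of the same MATLAB check (strictly speaking, 128 x 2^50 bits is 128 pebibits; "Petabits" is used loosely above for multiples of 2^50):
addr_bits = 64; page_bytes = 2^12; entry_bytes = 4;
num_entries = 2^addr_bits / page_bytes; % 2^52 entries
pgtable_bits = num_entries * entry_bytes * 8; % 2^57 bits
fprintf('page table: 2^%d bits = %g x 2^50 bits\n', log2(pgtable_bits), pgtable_bits / 2^50);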
[2]: Silberschatz, Galvin, and Gagne, Operating System Concepts (9th ed.)
In a 32-bit virtual address system we can have 2^32 unique addresses. Since the given page size is 4 KB = 2^12 bytes, we will need (2^32/2^12 = 2^20) entries in the page table; if each entry is 4 bytes, then the total size of the page table = 4 * 2^20 bytes = 4 MB.
Suppose the logical address space is 32-bit, so the total number of possible logical addresses is 2^32. On the other hand, suppose each page is 4 kilobytes; then the size of one page is 2^2 x 2^10 = 2^12 bytes.
Now we know that the number of pages in the page table is
pages = total possible logical address entries / page size
so pages = 2^32 / 2^12 = 2^20
Now suppose that each entry in the page table takes 4 bytes; then the total size of the page table in physical memory will be 2^2 x 2^20 = 2^22 bytes = 4 MB.
Since the logical address space is 32 bits long, the program size can be up to 2^32 bytes, i.e. 4 GB.
Now we have a page size of 4 KB, i.e. 2^12 bytes. Thus the number of pages in the program is 2^20 (number of pages in program = program size / page size). Since the size of a page table entry is 4 bytes, the size of the page table is 2^20 * 4 bytes = 4 MB (size of page table = number of pages in program * page table entry size). Hence 4 MB of space is required in memory to store the page table.
Yes, it represents the space the actual page table takes for one process.
If each page is 4 KB -> 12 bits for the offset (how?)
1 KB is 2^10 bytes => 4 KB is 4*2^10 bytes, which is 2^12 => hence 12 bits for the offset and the remaining 20 bits for the VPN => 2^20 translations, which means there are 2^20 pages, which means 2^20 entries in the page table.
Hence, size of page table = number of entries in the page table * size of one entry
=> size of page table = 2^20 * 4 bytes = 2^22 bytes, and 1 MB is 2^20 bytes => 4 MB
