Calculating Bytes - byte

Suppose each pixel in a digital image is represented by a 24-bit color value. How much memory does it take to store an uncompressed image of 2048 pixels by 1024 pixels?
For this, I said that 24 bits is 3 bytes. Then 2048 pixels is 6 KB (2048 * 3 / 1024) and 1024 pixels is 3 KB (1024 * 3 / 1024), and I multiplied those to get 18 KB^2.
But the answer says 6 MB. How is that possible, and how do 1024 and 2048 play into it? The answer just states 6 MB and doesn't explain.

24 bits => 24 bits / 8 bits per byte = 3 bytes per pixel
1) 2048 pixels * 1024 pixels = 2097152 pixels (total pixel count)
1.1) 2097152 pixels * 3 bytes per pixel = 6291456 bytes
2) 6291456 bytes / 1024 = 6144 kilobytes
3) 6144 kilobytes / 1024 = 6 megabytes
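
The same arithmetic as a quick C check (a minimal sketch, using the 1024-based units from the steps above):

#include <stdio.h>

int main(void) {
    long width = 2048, height = 1024;
    long bytes_per_pixel = 24 / 8;                  /* 24-bit color = 3 bytes */
    long total = width * height * bytes_per_pixel;  /* uncompressed image size */
    printf("%ld bytes = %ld KB = %ld MB\n",
           total, total / 1024, total / (1024 * 1024));
    return 0;
}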

Related

How to compute image size in bytes from width and height in pixels

[screenshot of the image's properties dialog]
I am trying to compute an image's size in bytes using this equation:
width * height * bit depth / 8
with the values taken from the image's properties:
697 * 923 * 24 = 15439944; 15439944 / 8 = 1929993 bytes
But the size in bytes is not identical to the size shown in the properties. Why is that?
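
For reference, a minimal C version of that formula (it gives only the raw pixel-data size; an actual file on disk also contains format headers and, for compressed formats such as JPEG, compressed data, so the two numbers usually differ):

#include <stdio.h>

int main(void) {
    long width = 697, height = 923, bit_depth = 24;   /* values from the question */
    long raw_bytes = width * height * bit_depth / 8;  /* raw pixel data only */
    printf("raw pixel data: %ld bytes\n", raw_bytes); /* prints 1929993 */
    return 0;
}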

CGDisplayModeGetWidth/Height() sometimes returns pixels, sometimes points

According to Apple, both CGDisplayModeGetWidth() and CGDisplayModeGetHeight() should return points instead of pixels starting in macOS 10.8. But Apple's word on those APIs isn't consistent because here they say that the functions return pixels and not points.
This confusion is also reflected in practice: both functions seem to return points only some of the time, and pixels at other times. Consider this example:
CGDirectDisplayID id = CGMainDisplayID();
CFArrayRef modes = CGDisplayCopyAllDisplayModes(id, NULL);
CGDisplayModeRef mode;
CFIndex k, n;
n = CFArrayGetCount(modes);
for (k = 0; k < n; k++) {
    mode = (CGDisplayModeRef) CFArrayGetValueAtIndex(modes, k);
    printf("GOT SIZE: %d %d\n", (int) CGDisplayModeGetWidth(mode), (int) CGDisplayModeGetHeight(mode));
}
CFRelease(modes);
The code iterates over all available screen modes. In this example, the output is in pixels.
When using this code, however, the output is in points:
CGDirectDisplayID id = CGMainDisplayID();
CGDisplayModeRef mode = CGDisplayCopyDisplayMode(id);
printf("NEW GOT SIZE: %d %d\n", (int) CGDisplayModeGetWidth(mode), (int) CGDisplayModeGetHeight(mode));
CGDisplayModeRelease(mode);
But why? Why do CGDisplayModeGetWidth() and CGDisplayModeGetHeight() return pixels in the first code snippet and points in the second? This is confusing me.
To make things even more complicated, starting with macOS 10.8 there are two new APIs, namely CGDisplayModeGetPixelWidth() and CGDisplayModeGetPixelHeight(). These always return pixels, but I still don't understand why CGDisplayModeGetWidth() and CGDisplayModeGetHeight() return pixels in the first code snippet above... is this a bug?
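For comparison, here is a minimal sketch (assuming macOS 10.8 or later) that prints both values for the current display mode, which makes the point/pixel distinction and the Retina scale factor visible:

#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

int main(void) {
    CGDirectDisplayID display = CGMainDisplayID();
    CGDisplayModeRef mode = CGDisplayCopyDisplayMode(display);
    /* What GetWidth/GetHeight report for the current mode (points in the second snippet above) */
    size_t pointW = CGDisplayModeGetWidth(mode);
    size_t pointH = CGDisplayModeGetHeight(mode);
    /* Backing-store pixels, always (available since 10.8) */
    size_t pixelW = CGDisplayModeGetPixelWidth(mode);
    size_t pixelH = CGDisplayModeGetPixelHeight(mode);
    printf("points: %zux%zu, pixels: %zux%zu\n", pointW, pointH, pixelW, pixelH);
    CGDisplayModeRelease(mode);
    return 0;
}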
EDIT
Here is the output for my 1680x1050 monitor. I am using Quartz Debug to put the monitor into 840x525 screen mode for Retina tests. You can see that the output of the first code snippet must be in pixels, because it returns modes such as 1680x1050, which would correspond to 3360x2100 pixels if they were points. Further proof that the first code snippet returns pixels, not points, is that the screen mode the monitor is currently in (i.e. 840x525) isn't returned at all. Only the second code snippet returns this mode.
GOT SIZE: 1680 1050
GOT SIZE: 1152 870
GOT SIZE: 1280 1024
GOT SIZE: 1024 768
GOT SIZE: 1024 768
GOT SIZE: 1024 768
GOT SIZE: 832 624
GOT SIZE: 800 600
GOT SIZE: 800 600
GOT SIZE: 800 600
GOT SIZE: 800 600
GOT SIZE: 640 480
GOT SIZE: 640 480
GOT SIZE: 640 480
GOT SIZE: 640 480
GOT SIZE: 1280 1024
GOT SIZE: 1280 960
GOT SIZE: 848 480
GOT SIZE: 1280 960
GOT SIZE: 1360 768
GOT SIZE: 800 500
GOT SIZE: 1024 640
GOT SIZE: 1280 800
GOT SIZE: 1344 1008
GOT SIZE: 1344 840
GOT SIZE: 1600 1000
--------------------------
NEW GOT SIZE: 840 525

Matlab: Reorder 5 bytes to 4 times 10 bits RAW10

I would like to import a RAW10 file into Matlab. The raw data is attached directly to the JPEG file produced by the Raspberry Pi camera.
Four pixels are stored in five bytes.
Each of the first four bytes contains bits 9-2 of one pixel.
The last byte contains the missing two LSBs of all four pixels.
sizeRAW = 6404096;
sizeHeader = 32768;
I = ones(1944,2592);
fin = fopen('0.jpeg','r');
off1 = dir('0.jpeg');
offset = off1.bytes - sizeRAW + sizeHeader;
fseek(fin, offset, 'bof');
pixel = ones(1944,2592);
I = fread(fin,1944,'ubit10','l');
for col = 1:2592
    I(:,col) = fread(fin,1944,'ubit8','l');
    col = col+4;
end
fclose(fin);
This is as far as I came yet, but it's not right.
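
For what it's worth, the 5-byte group itself can be unpacked as follows (a minimal C sketch of the packing described above, assuming the MIPI-style layout in which pixel i's two LSBs occupy bits 2*i+1..2*i of the fifth byte):

#include <stdint.h>
#include <stdio.h>

/* Unpack one RAW10 group: 4 pixels packed into 5 bytes.
   Bytes 0..3 hold bits 9..2 of pixels 0..3; byte 4 holds the two LSBs
   of each pixel (assumed here: pixel i in bits 2*i+1..2*i). */
static void unpack_raw10(const uint8_t in[5], uint16_t out[4]) {
    for (int i = 0; i < 4; i++) {
        uint16_t msb = in[i];                    /* bits 9..2 */
        uint16_t lsb = (in[4] >> (2 * i)) & 0x3; /* bits 1..0 */
        out[i] = (uint16_t)((msb << 2) | lsb);
    }
}

int main(void) {
    const uint8_t group[5] = {0xFF, 0x80, 0x01, 0x00, 0x1B};
    uint16_t px[4];
    unpack_raw10(group, px);
    printf("%d %d %d %d\n", px[0], px[1], px[2], px[3]);  /* prints 1023 514 5 0 */
    return 0;
}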

Monochrome Bitmap (1 bpp) Padding and extra 0xF0 byte

I am working with a Monochrome Bitmap image, 1 bit per pixel.
When I examine the file with a hexadecimal editor, I notice that each row ends with the following hexadecimal sequence: f0 00 00 00.
Having studied the problem a little bit, I concluded that the three last bytes 00 00 00 correspond to the row padding.
Question 1:
I would like to know if the following algorithm for determining the number of padding bytes in the case of a 1 bpp BMP image is correct:
if (((n_width % 32) == 0) || ((n_width % 32) > 24))
{
    n_nbPaddingBytes = 0;
}
else if ((n_width % 32) <= 8)
{
    n_nbPaddingBytes = 3;
}
else if ((n_width % 32) <= 16)
{
    n_nbPaddingBytes = 2;
}
else
{
    n_nbPaddingBytes = 1;
}
n_width is the width in pixels of the BMP image.
For example, if n_width = 100 px then n_nbPaddingBytes = 3.
Question 2:
Apart from the padding (00 00 00), I have this F0 byte preceding the three padding bytes on every row. It results in a black vertical line of 4 pixels on the right side of the image.
Note 1: I am manipulating the image prior to printing it on a Zebra printer (I am flipping the image vertically and inverting the colors: basically, a black pixel becomes a white one and vice versa).
Note 2: When I open the original BMP image with Paint, it has no such black vertical line on its right side.
Is there any reason why this byte 0xF0 is present at the end of each row?
Thank you for helping.
Best regards.
The bits representing the bitmap pixels are packed in rows. The size of each row is rounded up to a multiple of 4 bytes (a 32-bit DWORD) by padding.
RowSize = [(BitsPerPixel * ImageWidth + 31) / 32] * 4 (division is integer)
(BMP file format)
A monochrome image with width = 100 has a row size of 16 bytes (128 bits), so 3.5 bytes serve as padding (the second nibble of F0 plus 00 00 00). The F nibble represents the rightmost 4 columns of the image (white for the usual 0/1 palette).
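That formula is easy to check in code; here is a minimal C sketch for the 100-pixel-wide, 1 bpp example from the question:

#include <stdio.h>

/* BMP row size, rounded up to a multiple of 4 bytes (32-bit DWORD). */
static int bmp_row_size(int bits_per_pixel, int width_px) {
    return ((bits_per_pixel * width_px + 31) / 32) * 4;  /* integer division */
}

int main(void) {
    int width = 100;                   /* example from the question */
    int row = bmp_row_size(1, width);  /* 1 bpp monochrome */
    int padding_bits = row * 8 - 1 * width;
    printf("row = %d bytes, padding = %d bits (%.1f bytes)\n",
           row, padding_bits, padding_bits / 8.0);  /* 16 bytes, 28 bits (3.5 bytes) */
    return 0;
}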

How "bytesPerRow" is calculated from an NSBitmapImageRep

I would like to understand how "bytesPerRow" is calculated when building up an NSBitmapImageRep (in my case from mapping an array of floats to a grayscale bitmap).
Clarifying this detail will help me understand how memory is being mapped from an array of floats to a byte array (0-255, unsigned char; neither of these arrays is shown in the code below).
The Apple documentation says that this number is calculated "from the width of the image, the number of bits per sample, and, if the data is in a meshed configuration, the number of samples per pixel."
I had trouble following this "calculation", so I set up a simple loop to find the results empirically. The following code runs just fine:
int Ny = 1; // Ny is arbitrary; note that bytesPerPlane is calculated as we would expect: Ny * bytesPerRow
for (int Nx = 0; Nx < 320; Nx += 64) {
    // greyscale image representation:
    NSBitmapImageRep *dataBitMapRep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes: nil // allocate the pixel buffer for us
                      pixelsWide: Nx
                      pixelsHigh: Ny
                   bitsPerSample: 8
                 samplesPerPixel: 1
                        hasAlpha: NO
                        isPlanar: NO
                  colorSpaceName: NSCalibratedWhiteColorSpace // 0 = black, 1 = white
                     bytesPerRow: 0 // 0 means "you figure it out"
                    bitsPerPixel: 8]; // must equal bitsPerSample * samplesPerPixel
    long rowBytes = [dataBitMapRep bytesPerRow];
    printf("Nx = %d; bytes per row = %ld\n", Nx, rowBytes);
}
and produces the result:
Nx = 0; bytes per row = 0
Nx = 64; bytes per row = 64
Nx = 128; bytes per row = 128
Nx = 192; bytes per row = 192
Nx = 256; bytes per row = 256
So we see that bytes per row jumps in 64-byte increments, even when Nx increases by 1 all the way up to 320 (I didn't show all of those Nx values). Note also that Nx = 320 (max) is arbitrary for this discussion.
So, from the perspective of allocating and mapping memory for a byte array, how are the "bytes per row" calculated from first principles? Is the result above there so that the data from a single scan line can be aligned on a "word"-length boundary (64-bit on my MacBook Pro)?
Thanks for any insights, having trouble picturing how this works.
Passing 0 for bytesPerRow: means more than you said in your comment. From the documentation:
If you pass in a rowBytes value of 0, the bitmap data allocated may be padded to fall on long word or larger boundaries for performance. … Passing in a non-zero value allows you to specify exact row advances.
So you're seeing it increase by 64 bytes at a time because that's how AppKit decided to round it up.
The minimum requirement for bytes per row is much simpler. It's bytes per pixel times pixels per row. That's all.
For a bitmap image rep backed by floats, you'd pass sizeof(float) * 8 for bitsPerSample, and bytes-per-pixel would be sizeof(float) * samplesPerPixel. Bytes-per-row follows from that; you multiply bytes-per-pixel by the width in pixels.
Likewise, if it's backed by unsigned bytes, you'd pass sizeof(unsigned char) * 8 for bitsPerSample, and bytes-per-pixel would be sizeof(unsigned char) * samplesPerPixel.
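
To make the answer concrete, here is a minimal C sketch of the two quantities involved: the minimum bytes per row (bytes per pixel times pixels per row) and a row length padded up to a 64-byte boundary, which matches the increments observed above (the exact alignment AppKit chooses is not documented, so 64 here is only an assumption drawn from the question's output):

#include <stdio.h>
#include <stddef.h>

/* Minimum bytes per row: bytes per pixel times pixels per row. */
static size_t min_bytes_per_row(size_t bits_per_sample, size_t samples_per_pixel, size_t pixels_wide) {
    size_t bytes_per_pixel = (bits_per_sample / 8) * samples_per_pixel;
    return bytes_per_pixel * pixels_wide;
}

/* Round up to a multiple of `alignment`. */
static size_t padded_bytes_per_row(size_t min_bytes, size_t alignment) {
    return ((min_bytes + alignment - 1) / alignment) * alignment;
}

int main(void) {
    size_t widths[] = {1, 64, 100, 256, 320};
    for (size_t i = 0; i < sizeof widths / sizeof widths[0]; i++) {
        /* 8-bit grayscale, 1 sample per pixel, as in the loop above */
        size_t minimum = min_bytes_per_row(8, 1, widths[i]);
        printf("Nx = %zu; min = %zu; padded to 64 = %zu\n",
               widths[i], minimum, padded_bytes_per_row(minimum, 64));
    }
    return 0;
}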
