I would like to import a RAW10 file into MATLAB. The raw data is appended directly to the JPEG file produced by the Raspberry Pi camera.
Four pixels are stored in 5 bytes:
each of the first four bytes contains bits 9 through 2 of one pixel,
and the last byte contains the missing LSBs of all four pixels.
sizeRAW = 6404096;
sizeHeader = 32768;
I = ones(1944,2592);
fin = fopen('0.jpeg','r');
off1 = dir('0.jpeg');
offset = off1.bytes - sizeRAW + sizeHeader;
fseek(fin, offset, 'bof');
pixel = ones(1944,2592);
I = fread(fin, 1944, 'ubit10', 'l');
for col = 1:2592
    I(:,col) = fread(fin, 1944, 'ubit8', 'l');
    col = col + 4;
end
fclose(fin);
This is as far as I have gotten, but it's not right.
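For reference, the 4-pixels-in-5-bytes scheme described above can be sketched outside MATLAB as well; here is a minimal NumPy version. The assignment of the fifth byte's bit pairs to the four pixels is an assumption: swap the shift direction if your sensor packs them the other way around.

```python
import numpy as np

def unpack_raw10(packed):
    """Unpack RAW10 data where every 5 bytes hold 4 pixels: the first four
    bytes carry bits 9..2 of four pixels, the fifth byte their two LSBs."""
    groups = np.frombuffer(packed, dtype=np.uint8).reshape(-1, 5)
    msbs = groups[:, :4].astype(np.uint16) << 2   # bits 9..2 of each pixel
    lsb_byte = groups[:, 4].astype(np.uint16)
    # Assumption: pixel i's two LSBs sit at bits (2*i+1, 2*i) of the 5th byte.
    lsbs = np.stack([(lsb_byte >> (2 * i)) & 0x3 for i in range(4)], axis=1)
    return (msbs | lsbs).ravel()
```

After seeking past the header as in the MATLAB attempt, the remaining bytes of each row (reshaped to a multiple of 5) would go through this function.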
I have an image of arbitrary dimensions ROWS and COLS. I want to tile this image into patches of arbitrary, but constant size blockSize = [blockSizeR, blockSizeC], given an arbitrary, but constant stride stride = [strideR, strideC]. When the number of patches in row or column direction times the respective block size doesn't equal the number of rows or columns, respectively (i.e. if there were spare rows or columns), I don't care about them (i.e. they can be ignored). It's sufficient if the image is tiled into all possible patches that fit completely into the image starting from the top left pixel.
There are a number of possible solutions floating around the web, but some don't allow overlap, some don't produce output if there are spare rows or columns, and some make inefficient use of for loops.
The closest thing to what I need is probably the solution posted on https://de.mathworks.com/matlabcentral/answers/330357-how-do-i-store-a-series-of-rgb-images-in-a-2d-array:
%img: source image
stride = [5, 5];      %height, width
blocksize = [11, 11]; %height, width
tilescount = (size(img(:, :, 1)) - blocksize - 1) / stride + 1;
assert(all(mod(tilescount, 1) == 0), 'cannot divide image into tile evenly')
tiles = cell(tilescount);
tileidx = 1;
for col = 1 : stride(2) : size(img, 2) - blocksize(2)
    for row = 1 : stride(1) : size(img, 1) - blocksize(1)
        tiles{tileidx} = img(row:row+stride(1)-1, col:col+stride(2)-1, :);
        tileidx = tileidx + 1;
    end
end
However, it also seems to work only if there are no spare rows or columns. How can I adapt that to an efficient solution for images with an arbitrary number of channels (I seek to apply it on both single-channel images and RGB images)?
The code above did not fully work, so I came up with the following solution based on it. Variable names are chosen such that they are self-explanatory.
tilesCountR = floor((ROWS - rowBlockSize - 1) / rowStride + 1);
tilesCountC = floor((COLS - colBlockSize - 1) / colStride + 1);
tiles = cell(tilesCountR * tilesCountC, 1);
tileidx = 1;
for col = 1 : colStride : COLS - colBlockSize
    for row = 1 : rowStride : ROWS - rowBlockSize
        tiles{tileidx} = img(row:row+rowBlockSize-1, col:col+colBlockSize-1, :);
        tileidx = tileidx + 1;
    end
end
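Since the question asks for an efficient, loop-free solution for images with an arbitrary number of channels, here is a hedged NumPy sketch of the same stride-based tiling (requires NumPy 1.20+ for sliding_window_view); the function name and the output layout are my own choices:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # NumPy >= 1.20

def tile_image(img, block_size, stride):
    """Return all block_size patches that fit completely into img (H x W or
    H x W x C), stepping by stride; spare rows/columns are ignored."""
    if img.ndim == 2:
        img = img[:, :, None]                     # unify single-channel and RGB
    windows = sliding_window_view(img, block_size, axis=(0, 1))
    windows = windows[::stride[0], ::stride[1]]   # apply the stride
    # resulting shape: (tilesCountR, tilesCountC, blockR, blockC, channels)
    return np.moveaxis(windows, 2, -1)
```

Because sliding_window_view returns a view, no pixel data is copied until you actually use the patches, which keeps this efficient even for large images.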
I am sampling some pixels from a reference image Ir and transferring them to a secondary image In. The first function I have written is as follows:
[r, c, d] = size(Ir);
rSample = fix(r * 0.4); % sample 40 percent of pixels
cSample = fix(c * 0.4); % sample 40 percent of pixels
rIdx = randi(r, rSample, 1); % uniformly sample indices for rows
cIdx = randi(c, cSample, 1); % uniformly sample indices for columns
kk = 1;
for ii = 1:length(rIdx)
    for jj = 1:length(cIdx)
        In(rIdx(ii), cIdx(jj), :) = Ir(rIdx(ii), cIdx(jj), :) * fcn(rIdx(ii), cIdx(jj));
        kk = kk + 1;
    end
end
Another method to increase the performance (speed) of the code that I came across is as follows:
nSample = fix(r * c * 0.4);
Idx = randi(r * c, nSample, 1);
for ii = 1:nSample
    [I, J] = ind2sub([r, c], Idx(ii, 1));
    In(I, J, :) = Ir(I, J, :) * fcn(I, J);
end
In both codes, fcn(I,J) is a function that performs some computation on the pixel at [I,J] and the process can be different depending on the indices of the pixel.
Although I have removed one for-loop, I guess there is a better technique to increase the performance of the code even more.
Update:
As suggested by @Daniel, the following line of code does the job.
In(rIdx,cIdx,:)=Ir(rIdx,cIdx,:);
But the point is, I prefer to have only the sampled pixels so that I can process them faster, for instance having the samples in vector format with 3 layers for RGB.
Io = Ir(rIdx,cIdx,:);
Io1 = Io(:,:,1);
Io1v = Io1(:);
Ir=ones(30,30,3);
In=Ir*.5;
[r,c,d] = size(Ir);
rSamples = fix(r * 0.4); % sample 40 percent of pixels
cSamples = fix(c * 0.4); % sample 40 percent of pixels
rIdx = randi(r,rSamples,1); % uniformly sample indices for rows
cIdx = randi(c,cSamples,1); % uniformly sample indices for columns
In(rIdx,cIdx,:)=Ir(rIdx,cIdx,:);
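If the goal is just the sampled pixels as per-channel vectors (rather than writing them back into In), the same open-mesh indexing idea can be sketched in NumPy; the variable names and RNG seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)                 # seed chosen for reproducibility
Ir = np.ones((30, 30, 3))
r, c, d = Ir.shape
rIdx = rng.integers(0, r, size=int(r * 0.4))   # uniformly sampled row indices
cIdx = rng.integers(0, c, size=int(c * 0.4))   # uniformly sampled column indices

# Pull the whole sampled grid at once (the NumPy analogue of Ir(rIdx,cIdx,:))
Io = Ir[np.ix_(rIdx, cIdx)]                    # shape (len(rIdx), len(cIdx), 3)

# Flatten to one row per sample, one column per RGB channel
Iov = Io.reshape(-1, d)                        # shape (n_samples, 3)
```

From Iov, each channel is a contiguous column, so any per-channel computation can run vectorized over the samples without loops.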
I am working with a Monochrome Bitmap image, 1 bit per pixel.
When I examine the file with an hexadecimal editor, I notice that each row ends up with the following hexadecimal sequence: f0 00 00 00.
Having studied the problem a little bit, I concluded that the three last bytes 00 00 00 correspond to the row padding.
Question 1:
I would like to know if the following algorithm to determine the number of padding bytes in the case of a 1 bpp BMP image is correct:
if (((n_width % 32) == 0) || ((n_width % 32) > 24))
{
    n_nbPaddingBytes = 0;
}
else if ((n_width % 32) <= 8)
{
    n_nbPaddingBytes = 3;
}
else if ((n_width % 32) <= 16)
{
    n_nbPaddingBytes = 2;
}
else
{
    n_nbPaddingBytes = 1;
}
n_width is the width in pixels of the BMP image.
For example, if n_width = 100 px then n_nbPaddingBytes = 3.
Question 2:
Apart from the padding (00 00 00), I have this F0 byte preceding the three padding bytes on every row. It results in a black vertical line 4 pixels wide on the right side of the image.
Note 1: I am manipulating the image prior to printing it on a Zebra printer (I am flipping the image vertically and reverting the colors: basically a black pixel becomes a white one and vice versa).
Note 2: When I open the original BMP image with Paint, it has no such black vertical line on its right side.
Is there any reason why this byte 0xF0 is present at the end of each row?
The bits representing the bitmap pixels are packed in rows. The size of each row is rounded up to a multiple of 4 bytes (a 32-bit DWORD) by padding.
RowSize = [(BitsPerPixel * ImageWidth + 31) / 32] * 4 (division is integer)
(BMP file format)
A monochrome image with width = 100 has a row size of 16 bytes (128 bits), so 3.5 bytes serve as padding (the second nibble of F0 plus 00 00 00). The F nibble represents the rightmost 4 columns of the image (white for the usual 0/1 palette).
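The row-size formula above can be put into code; for 1 bpp and width 100 it reproduces both the 16-byte row and the 3 whole padding bytes:

```python
def bmp_row_bytes(bits_per_pixel, width_px):
    """Row size in bytes, rounded up to a 4-byte (DWORD) boundary."""
    return ((bits_per_pixel * width_px + 31) // 32) * 4

def bmp_padding_bytes(bits_per_pixel, width_px):
    """Whole padding bytes after the last byte containing pixel data."""
    used = (bits_per_pixel * width_px + 7) // 8  # bytes holding pixel bits
    return bmp_row_bytes(bits_per_pixel, width_px) - used
```

Note that bmp_padding_bytes counts only whole padding bytes; the leftover half of a partially used byte (like the low nibble of F0) is not included.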
I want to translate a monochrome FreeType glyph to an RGBA unsigned byte OpenGL texture. The colour of the texture at pixel (x, y) would be (255, 255, alpha), where
alpha = glyph->bitmap.buffer[pixelIndex(x, y)] * 255
I load my glyph using
FT_Load_Char(face, glyphChar, FT_LOAD_RENDER | FT_LOAD_MONOCHROME | FT_LOAD_TARGET_MONO)
The target texture has dimensions of glyph->bitmap.width * glyph->bitmap.rows. I've been able to index a greyscale glyph (loaded using just FT_Load_Char(face, glyphChar, FT_LOAD_RENDER)) with
glyph->bitmap.buffer[(glyph->bitmap.width * y) + x]
This does not appear to work on a monochrome buffer, though, and the characters in my final texture are scrambled.
What is the correct way to get the value of pixel (x, y) in a monochrome glyph buffer?
Based on this thread I started on Gamedev.net, I've come up with the following function to get the filled/empty state of the pixel at (x, y):
bool glyphBit(const FT_GlyphSlot &glyph, const int x, const int y)
{
    int pitch = abs(glyph->bitmap.pitch);
    unsigned char *row = &glyph->bitmap.buffer[pitch * y];
    char cValue = row[x >> 3];
    return (cValue & (128 >> (x & 7))) != 0;
}
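For illustration, the same MSB-first bit extraction can be sketched in Python (a direct transliteration of glyphBit above, with the packed buffer passed in as bytes):

```python
def glyph_bit(buffer, pitch, x, y):
    """True if pixel (x, y) is set in a 1 bpp, MSB-first packed bitmap
    with `pitch` bytes per row (the Python analogue of glyphBit)."""
    byte_value = buffer[pitch * y + (x >> 3)]    # byte containing pixel x
    return (byte_value & (0x80 >> (x & 7))) != 0
```

The key point in both versions is that pixel x lives in byte x >> 3 of its row, at bit position 7 - (x & 7), i.e. the leftmost pixel is the most significant bit.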
I had a similar question some time ago, so I'll try to help you.
The target texture has dimensions of glyph->bitmap.width * glyph->bitmap.rows
That is a very specific dimension for OpenGL; it would be better to round it up to a power of two.
The common approach is a loop over every glyph, then over every row from 0 to glyph->bitmap.rows, then over every byte (unsigned char) in the row from 0 to glyph->pitch, reading each byte as glyph->bitmap.buffer[pitch * row + i] (where i is the inner loop index and row the outer one). For example:
if (s[i] == ' ') left += 20; else
for (int row = 0; row < g->bitmap.rows; ++row) {
    if (kerning)
        for (int b = 0; b < pitch; b++) {
            if (data[left + 64*(strSize*(row + 64 - g->bitmap_top)) + b] + g->bitmap.buffer[pitch * row + b] < UCHAR_MAX)
                data[left + 64*(strSize*(row + 64 - g->bitmap_top)) + b] += g->bitmap.buffer[pitch * row + b];
            else
                data[left + 64*(strSize*(row + 64 - g->bitmap_top)) + b] = UCHAR_MAX;
        }
    else
        std::memcpy(data + left + 64*(strSize*(row + 64 - g->bitmap_top)), g->bitmap.buffer + pitch * row, pitch);
}
left += g->advance.x >> 6;
This code is relevant to an 8-bit bitmap (the standard FT_Load_Char(face, glyphChar, FT_LOAD_RENDER)).
I have since tried the monochrome flag myself and it caused me trouble, so my answer is not a solution to your problem. If you just want to display the letter, you should see my question.
The following Python function unpacks a FT_LOAD_TARGET_MONO glyph bitmap into a more convenient representation where each byte in the buffer maps to one pixel.
I've got some more info on monochrome font rendering with Python and FreeType plus additional example code on my blog: http://dbader.org/blog/monochrome-font-rendering-with-freetype-and-python
def unpack_mono_bitmap(bitmap):
    """
    Unpack a freetype FT_LOAD_TARGET_MONO glyph bitmap into a bytearray
    where each pixel is represented by a single byte.
    """
    # Allocate a bytearray of sufficient size to hold the glyph bitmap.
    data = bytearray(bitmap.rows * bitmap.width)
    # Iterate over every byte in the glyph bitmap. Note that we're not
    # iterating over every pixel in the resulting unpacked bitmap --
    # we're iterating over the packed bytes in the input bitmap.
    for y in range(bitmap.rows):
        for byte_index in range(bitmap.pitch):
            # Read the byte that contains the packed pixel data.
            byte_value = bitmap.buffer[y * bitmap.pitch + byte_index]
            # We've processed this many bits (=pixels) so far. This determines
            # where we'll read the next batch of pixels from.
            num_bits_done = byte_index * 8
            # Pre-compute where to write the pixels that we're going
            # to unpack from the current byte in the glyph bitmap.
            rowstart = y * bitmap.width + byte_index * 8
            # Iterate over every bit (=pixel) that's still a part of the
            # output bitmap. Sometimes we're only unpacking a fraction of a
            # byte because glyphs may not always fit on a byte boundary. So
            # we make sure to stop if we unpack past the current row of pixels.
            for bit_index in range(min(8, bitmap.width - num_bits_done)):
                # Unpack the next pixel from the current glyph byte.
                bit = byte_value & (1 << (7 - bit_index))
                # Write the pixel to the output bytearray. We ensure that
                # `off` pixels have a value of 0 and `on` pixels a value of 1.
                data[rowstart + bit_index] = 1 if bit else 0
    return data
I would like to understand how "bytesPerRow" is calculated when building up an NSBitmapImageRep (in my case from mapping an array of floats to a grayscale bitmap).
Clarifying this detail will help me to understand how memory is being mapped from an array of floats to a byte array (0-255, unsigned char; neither of these arrays are shown in the code below).
The Apple documentation says that this number is calculated "from the width of the image, the number of bits per sample, and, if the data is in a meshed configuration, the number of samples per pixel."
I had trouble following this "calculation" so I setup a simple loop to find the results empirically. The following code runs just fine:
int Ny = 1; // Ny is arbitrary; note that BytesPerPlane is calculated as we would expect = Ny*BytesPerRow
for (int Nx = 0; Nx < 320; Nx += 64) {
    // greyscale image representation:
    NSBitmapImageRep *dataBitMapRep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes: nil // allocate the pixel buffer for us
                      pixelsWide: Nx
                      pixelsHigh: Ny
                   bitsPerSample: 8
                 samplesPerPixel: 1
                        hasAlpha: NO
                        isPlanar: NO
                  colorSpaceName: NSCalibratedWhiteColorSpace // 0 = black, 1 = white
                     bytesPerRow: 0 // 0 means "you figure it out"
                    bitsPerPixel: 8]; // bitsPerSample must agree with samplesPerPixel
    long rowBytes = [dataBitMapRep bytesPerRow];
    printf("Nx = %d; bytes per row = %lu \n", Nx, rowBytes);
}
and produces the result:
Nx = 0; bytes per row = 0
Nx = 64; bytes per row = 64
Nx = 128; bytes per row = 128
Nx = 192; bytes per row = 192
Nx = 256; bytes per row = 256
So we see that the bytes/row jumps in 64 byte increments, even when Nx incrementally increases by 1 all the way to 320 (I didn't show all of those Nx values). Note also that Nx = 320 (max) is arbitrary for this discussion.
So from the perspective of allocating and mapping memory for a byte array, how are the "bytes per row" calculated from first principles? Is the result above so that the data from a single scan line can be aligned on a "word"-length boundary (64-bit on my MacBook Pro)?
Thanks for any insights; I'm having trouble picturing how this works.
Passing 0 for bytesPerRow: means more than you said in your comment. From the documentation:
If you pass in a rowBytes value of 0, the bitmap data allocated may be padded to fall on long word or larger boundaries for performance. … Passing in a non-zero value allows you to specify exact row advances.
So you're seeing it increase by 64 bytes at a time because that's how AppKit decided to round it up.
The minimum requirement for bytes per row is much simpler. It's bytes per pixel times pixels per row. That's all.
For a bitmap image rep backed by floats, you'd pass sizeof(float) * 8 for bitsPerSample, and bytes-per-pixel would be sizeof(float) * samplesPerPixel. Bytes-per-row follows from that; you multiply bytes-per-pixel by the width in pixels.
Likewise, if it's backed by unsigned bytes, you'd pass sizeof(unsigned char) * 8 for bitsPerSample, and bytes-per-pixel would be sizeof(unsigned char) * samplesPerPixel.
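Putting the answer's rule into code: the minimum bytes-per-row is bytes-per-pixel times width, and the padded value AppKit chose in the question appears to round that up to a 64-byte boundary (the 64 here is an observation from the question's output, not a documented constant). A Python sketch:

```python
def min_bytes_per_row(bytes_per_pixel, width_px):
    """The hard minimum: bytes per pixel times pixels per row."""
    return bytes_per_pixel * width_px

def padded_bytes_per_row(bytes_per_pixel, width_px, alignment=64):
    """Round the minimum up to an alignment boundary, as AppKit appears to
    do for bytesPerRow: 0 (64 is observed in the question, not documented)."""
    minimum = min_bytes_per_row(bytes_per_pixel, width_px)
    return ((minimum + alignment - 1) // alignment) * alignment
```

This reproduces the measured sequence (64, 128, 192, ...) for 1 byte per pixel, and makes clear why passing a non-zero bytesPerRow: is the only way to get exact row advances.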