After stripping off the header bytes and decompressing the pixel data, a PNG file leaves us with a set of rows (each a horizontal strip of the image, one pixel high).
Each row starts with a single byte specifying the filter used, followed by RGB values:
+-----+-----+-----+-----+-----+-----+-----+
| 0:F | 1:R | 2:G | 3:B | 4:R | 5:G | 6:B | // end of first row in image
+-----+-----+-----+-----+-----+-----+-----+
| 7:F | 8:R | 9:G |10:B |11:R |12:G |13:B | // end of second row
+-----+-----+-----+-----+-----+-----+-----+
In an image without the filter bytes, I could just divide the byte index by 3 (since there are three bytes per RGB pixel), then use these formulas on the resulting pixel index to get the x/y position of that pixel:
x = index % width
y = index / width
But the filter bytes are throwing me off! How do I get the x/y position of a pixel, given the byte index of its red value? (Say byte 4 or byte 11, as shown above.)
I've tried all kinds of permutations but I think there must be an elegant solution!
Based on comments from @usr2564301, I think this works correctly:
y = ((index-1) / 3) / width
x = ((index-y) / 3) % width
Where width is the width of the image in pixels, not the width of the row of bytes.
We subtract y from the index because each row has a single filter byte and we need to remove them all to get the x position.
Alternatively, y can be calculated using:
y = index / row_width
Where row_width is the number of bytes per row: three bytes per RGB pixel times the image width, plus one filter byte (row_width = 3 * width + 1).
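As a quick sanity check, here is a minimal MATLAB sketch of these formulas applied to the diagram above (floor and mod stand in for the integer division and modulo in the formulas; the width of 2 pixels and byte index 11 come from the example rows):
width = 2;                                % image width in pixels, as in the diagram
row_width = 3*width + 1;                  % 7 bytes per row: 3 per pixel + 1 filter byte
index = 11;                               % byte index of a red value in the second row
y = floor(index / row_width);             % floor(11/7) = 1  -> second row
x = mod(floor((index - y) / 3), width);   % mod(floor(10/3), 2) = 1 -> second pixel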
I am trying to calculate the offset-x value by doing some simple math. I want offset-x to be calculated by taking the width value, subtracting it from 100, stripping off the % (the main issue), and dividing by 2. This will make sure the bar is always centered no matter the width value.
This is what I have so far:
[bar/top]
; Dimension defined as pixel value (e.g. 35) or percentage (e.g. 50%),
; the percentage can optionally be extended with a pixel offset like so:
; 50%:-10, this will result in a width or height of 50% minus 10 pixels
width = 70%
height = 28pt
; divide negative space (30%) evenly and set the offset-x to that value
; 70% width = 30% negative space / 2 = 15%
; test variable calculation:
barXoffset="$((echo 100-70)/2 | bc))"
; Offset defined as pixel value (e.g. 35) or percentage (e.g. 50%)
; the percentage can optionally be extended with a pixel offset like so:
; 50%:-10, this will result in an offset in the x or y direction
; of 50% minus 10 pixels
; offset-x = 15%
offset-x = ${barXoffset}%
offset-y = 2
How would you try to solve this?
Currently trying to pipe into bc but still need to strip off the % at the end.
Does anybody know how to crop an image with a sliding window in MATLAB?
e.g. I have an image of 1000x500 pixels and I would like to crop blocks of 50x50 pixels from it... Of course I have to handle uneven divisions, but it is not necessary for the blocks to all be the same size.
Here are some details that have helped me in the past, related to (i) ways to divide an image during block processing and (ii) the "uneven division" mentioned by the OP.
(i) Ways to divide/process an image:
(i)-1: Process non-overlapping blocks:
Using default parameter {'BorderSize',[0 0]}, this can be handled with blockproc as below.
Example for (i)-1: Note the blocked nature of the output. Here each non-overlapping block of size 32 x 32 is used to calculate the std2() and the output std2 value is used to fill that particular block. The input and output are of size 32 x 32.
fun = @(block_struct) std2(block_struct.data) * ones(size(block_struct.data));
I2 = blockproc('moon.tif',[32 32],fun);
figure; subplot(1, 2, 1);
imshow('moon.tif'); title('input');
subplot(1,2, 2)
imshow(I2,[]); title('output');
Input and Output Image:
(i)-2: Process overlapping blocks:
Using parameter {'BorderSize',[V H]}: V rows are added above and below the block and H columns are added to the left and right of the block. The block that is processed has (N + 2*V) rows and (M + 2*H) columns. Using default parameter {'TrimBorder',true}, the border of the output is trimmed to the original input block size of N rows and M columns.
Example for (i)-2: The code below uses blockproc with {'BorderSize',[15 15]} and [N M] = [1 1]. This is similar to filtering each pixel of the image with a custom kernel. So the input to the processing unit is a block of size (1 + 2*15) rows and (1 + 2*15) columns. And since {'TrimBorder',true} by default, the std2 of the 31 rows by 31 columns block is provided as output for each pixel; the output is of size 1 by 1 after trimming the border. Consequently, note that this example output is 'non-blocked', in contrast to the previous example. This code takes a much longer time to process all the pixels.
fun = @(block_struct) std2(block_struct.data) * ones(size(block_struct.data));
I2 = blockproc('moon.tif',[1 1],fun,'BorderSize',[15 15]);
figure; subplot(1, 2, 1);
imshow('moon.tif'); title('input');
subplot(1,2, 2)
imshow(I2,[]); title('output');
Input and Output Image:
(ii) "Uneven division":
1. Zero/replicate/symmetric padding:
Zero padding so that an integer multiple of the blocks (N rows by M cols sized) can cover the [image + bounding zeros] in the uneven dimension. This can be achieved by using the default parameter {'PadMethod', 0} along with {'PadPartialBlocks' , true} ( which is false by default ). If a bounding region of zeros causes a high discontinuity in values computed from the bounding blocks, {'PadMethod', 'replicate'} or {'PadMethod', 'symmetric'} can be used.
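For instance, the non-overlapping example from (i)-1 can be padded this way (the choice of 'replicate' here is only illustrative):
%// Pad partial edge blocks so 32 x 32 blocks cover the whole image; replicate
%// the border instead of zero-padding to avoid discontinuities at the edges.
fun = @(block_struct) std2(block_struct.data) * ones(size(block_struct.data));
I2 = blockproc('moon.tif', [32 32], fun, ...
               'PadPartialBlocks', true, ...
               'PadMethod', 'replicate');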
2. Assume an "Active Region" within the image for block processing
For the case of processing each pixel, as in case (i)-2, we could assume a bounding region of floor(block_size/2) pixels on all sides along the periphery of the image that is used as a "Dummy" region. The Active region for block processing is contained within the Dummy region.
Something similar is used in imaging sensors, where Dummy Pixels located at the periphery of an imaging array of Active Pixels allow for an operation like the color interpolation of all active area pixels. As color interpolation usually needs a 5x5 pixel mask to interpolate the color values of a pixel, a bounding Dummy periphery of 2 pixels can be used.
Assuming MATLAB indexing, the region from row ( floor(block_size/2) + 1 ) to ( Input_Image_Rows - floor(block_size/2) ) and column ( floor(block_size/2) + 1 ) to ( Input_Image_Cols - floor(block_size/2) ) is considered the Active region (assuming a square block of side block_size), which undergoes block processing for each pixel as in (i)-2.
Assuming a square block size of 5 by 5, this is shown by the following:
block_size = 5;
buffer_size = floor(block_size/2);
for i = (buffer_size+1):(image_rows-buffer_size)
    for j = (buffer_size+1):(image_cols-buffer_size)
        ... % block processing for each pixel Image(i,j)
    end
end
MATLAB version: R2013a
I would first look into the function blockproc to see if it can do what you want.
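For reference, a minimal blockproc sketch along those lines (the image name and the per-block statistic are just placeholders) could look like:
%// Sketch only: fill each 50 x 50 block with its own mean; partial blocks at
%// the right/bottom edges are zero-padded so every block is a full 50 x 50.
I = imread('cameraman.tif');                       %// replace with your image
fun = @(bs) mean2(bs.data) * ones(size(bs.data));
J = blockproc(I, [50 50], fun, 'PadPartialBlocks', true);
figure; imshow(J, []);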
If you're sure you want to manually crop the image into blocks, you can use this script. It both writes the cropped images to .png files and saves the cropped images in the pages of a 3D array. You can modify it as you need.
If the image isn't evenly divisible by the block size, the script pads it with zeros.
[rowstmp, colstmp] = size(myImage);
block_height = 50;
block_width = 50;
%// pad image with zeros if needed (ceil leaves the size unchanged when it is
%// already an exact multiple of the block size)
rows = ceil(rowstmp/block_height)*block_height;
cols = ceil(colstmp/block_width)*block_width;
Im = uint8(zeros(rows,cols));
Im(1:rowstmp,1:colstmp) = myImage;
blocks_per_row = rows/block_height;
blocks_per_col = cols/block_width;
number_of_blocks = blocks_per_row*blocks_per_col;
%// make sure these images have type uint8 so they save properly
cropped_image = uint8(zeros(rows,cols));
img_stack = uint8(zeros(rows,cols,number_of_blocks));
%// loop over the image blocks
for i = 1:blocks_per_row
    for j = 1:blocks_per_col
        %// get the cropped image from the original image
        idxI = 1+(i-1)*block_height:i*block_height;
        idxJ = 1+(j-1)*block_width :j*block_width;
        cropped_image(idxI,idxJ) = Im(idxI,idxJ);
        %//imshow(cropped_image)
        %// write the cropped image to the current folder
        filename = sprintf('block_row%d_col%d.png',i,j);
        imwrite(cropped_image,filename);
        %// keep all the blocks in a 3D array if we want to use them later
        img_stack(:,:,(i-1)*blocks_per_col+j) = cropped_image;
        cropped_image(idxI,idxJ) = 0;
    end
end
I am trying to do some image processing for which I am given an 8-bit grayscale image. I am supposed to change the contrast of the image by generating a lookup table that increases the contrast for pixel values between 50 and 205. I have generated a look up table using the following MATLAB code.
a = 2;
x = 0:255;
lut = 255 ./ (1+exp(-a*(x-127)/32));
When I plot lut, I get a graph shown below:
So far so good, but how do I go about increasing the contrast for pixel values between 50 and 205? Final plot of the transform mapping should be something like:
Judging from your comments, you simply want a linear map where intensities that are < 50 get mapped to 0, intensities that are > 205 get mapped to 255, and everything else is a linear mapping in between. You can simply do this by:
slope = 255 / (205 - 50); % // Generate equation of the line -
% // y = mx + b - Solve for m
intercept = -50*slope; %// Solve for b --> b = y - m*x, y = 0, x = 50
LUT = uint8(slope*(0:255) + intercept); %// Generate points
LUT(1:51) = 0; %// Anything < intensity 50 set to 0
LUT(206:end) = 255; %// Anything > intensity 205 set to 255
The LUT now looks like:
plot(0:255, LUT);
axis tight;
grid;
Take note of how I truncated the intensities when they're < 50 and > 205. MATLAB starts indexing at 1, so we need to offset the intensities by 1 so that they correctly map to pixel intensities, which start at 0.
To finally apply this to your image, all you have to do is:
out = LUT(img + 1);
This is assuming that img is your input image. Again, take note that we had to offset the input by +1 as MATLAB starts indexing at location 1, while intensities start at 0.
Minor Note
You can easily do this by using imadjust, which basically does this for you under the hood. You call it like so:
outAdjust = imadjust(in, [low_in; high_in], [low_out; high_out]);
low_in and high_in represent the minimum and maximum input intensities that exist in your image. Note that these are normalized between [0,1]. low_out and high_out adjust the intensities of your image so that low_in maps to low_out, high_in maps to high_out, and everything else is contrast stretched in between. For your case, you would do:
outAdjust = imadjust(img, [50/255; 205/255], [0; 1]);
This should stretch the contrast such that the input intensity 50 maps to the output intensity 0 and the input intensity 205 maps to the output intensity 255. Any intensities < 50 and > 205 get automatically saturated to 0 and 255 respectively.
You need to take each pixel in your image and replace it with the corresponding value in the lookup table. This can be done with some nested for loops, but it is not the most idiomatic way to do it. I would recommend using arrayfun with a function that replaces a pixel.
new_image = arrayfun(@(pixel) lut(double(pixel) + 1), image);
It might be more efficient to use the code that generates lut directly on the image. If performance is a concern and you don't need to use a lookup table, try comparing both methods.
new_image = 255 ./ (1 + exp(-a*(double(image) - 127) / 32));
Note that the new_image variable will no longer be of type uint8. If you need to display it again (say, with imshow) you will need to convert it back by writing uint8(new_image).
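For example, continuing from the question's lut (built from x = 0:255) and assuming img is the uint8 input image:
new_image = lut(double(img) + 1);   % +1 because MATLAB indexing starts at 1
imshow(uint8(new_image));           % cast back to uint8 for display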
I am interested in adding a single Gaussian shaped object to an existing image, something like in the attached image. The base image that I would like to add the object to is 8-bit unsigned with values ranging from 0-255. The bright object in the attached image is actually a tree represented by normalized difference vegetation index (NDVI) data. The attached script is what I have so far. How can I add a Gaussian shaped object (i.e. a tree) with values ranging from 110-155 to an existing NDVI image?
Sample data is available here, which can be used with this script to calculate NDVI.
file = 'F:\path\to\fourband\image.tif';
[I R] = geotiffread(file);
outputdir = 'F:\path\to\output\directory\'
%% Make NDVI calculations
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi);
%% Stretch NDVI to 0-255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
%% Need to add a random tree in the image here
%% Write to geotiff
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
Your post is asking how to do three things:
How do we generate a Gaussian shaped object?
How can we do this so that the values range between 110 - 155?
How do we place this in our image?
Let's answer each one separately, where the order of each question builds on the knowledge from the previous questions.
How do we generate a Gaussian shaped object?
You can use fspecial from the Image Processing Toolbox to generate a Gaussian for you:
mask = fspecial('gaussian', hsize, sigma);
hsize specifies the size of your Gaussian. You have not specified it here in your question, so I'm assuming you will want to play around with this yourself. This will produce an hsize x hsize Gaussian matrix. sigma is the standard deviation of your Gaussian distribution. Again, you have also not specified what this is. sigma and hsize go hand-in-hand. Referring to my previous post on how to determine sigma, it is generally good practice to set the standard deviation of your mask according to the 3-sigma rule. As such, once you set hsize, you can calculate sigma to be:
sigma = (hsize-1) / 6;
As such, figure out what hsize is, then calculate your sigma. After, invoke fspecial like I did above. It's generally a good idea to make hsize an odd integer. The reason why is because when we finally place this in your image, the syntax to do this will allow your mask to be symmetrically placed. I'll talk about this when we get to the last question.
How can we do this so that the values range between 110 - 155?
We can do this by adjusting the values within mask so that the minimum is 110 while the maximum is 155. This can be done by:
%// Adjust so that values are between 0 and 1
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
%//Scale by 45 so the range goes between 0 and 45
%//Cast to uint8 to make this compatible for your image
maskAdjust = uint8(45*maskAdjust);
%// Add 110 to every value so the range goes between 110 - 155
maskAdjust = maskAdjust + 110;
In general, if you want to adjust the values within your Gaussian mask so that it goes from [a,b], you would normalize between 0 and 1 first, then do:
maskAdjust = uint8((b-a)*maskAdjust) + a;
You'll notice that we cast this mask to uint8. The reason we do this is to make the mask compatible to be placed in your image.
How do we place this in our image?
All you have to do is figure out the row and column you would like the centre of the Gaussian mask to be placed. Let's assume these variables are stored in row and col. As such, assuming you want to place this in ndvi, all you have to do is the following:
hsizeHalf = floor(hsize/2); %// hsize being odd is important
%// Place Gaussian shape in our image
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;
The reason why hsize should be odd is to allow an even placement of the shape in the image. For example, if the mask size is 5 x 5, then the above syntax for ndvi simplifies to:
ndvi(row-2:row+2, col-2:col+2) = maskAdjust;
From the centre of the mask, it stretches 2 rows above and 2 rows below. The columns stretch from 2 columns to the left to 2 columns to the right. If the mask size was even, then we would have an ambiguous choice on how we should place the mask. If the mask size was 4 x 4 as an example, should we choose the second row, or third row as the centre axis? As such, to simplify things, make sure that the size of your mask is odd, or mod(hsize,2) == 1.
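Putting the three steps together, a sketch might look like this (hsize, row, and col are placeholder values you would choose yourself):
hsize = 25;                                   %// odd, so the mask centres on a pixel
sigma = (hsize - 1) / 6;                      %// 3-sigma rule from above
mask  = fspecial('gaussian', hsize, sigma);
%// Scale the mask so its values range from 110 to 155
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
maskAdjust = uint8(45*maskAdjust) + 110;
%// Place the shape in the NDVI image, centred on (row, col)
row = 200; col = 300;
hsizeHalf = floor(hsize/2);
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;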
This should hopefully and adequately answer your questions. Good luck!
I have an image and I want to calculate the average gray value of different patches of the image.
I started with defining a patch using a row and column index. This is how I specify where my subimage is located.
for x = 10 : 1 : 74
    for y = 30 : 1 : 94
        .........
    end
end
Now how do I calculate the average gray value of this subimage? I know that all this means is finding mean(mean(image)), but since I have only the row and column positions, how can I apply the same concept?
Try this:
mean(mean(im(10:74,30:94)))
Assuming your image is some MxN matrix, why don't you create a submatrix and calculate the mean over that?
eg:
subimage = image(10:74, 30:94);
mean_grey = mean(mean(subimage))
An alternative solution: convolve the image (I) with a flat kernel (h) (size of your 'sub-image') and take the value of the result at any index.
h = ones(a,b); % sub-image is size a x b
h = h / sum(h(:));
J = imfilter(I, h);
% J(x,y) will give you the average of a sub-image centered on (x,y)
Edge cases may cause strange behavior (sub-image out of image range), but you can supply a third argument to imfilter to address this.
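For example, with border replication (a standard imfilter padding option), the averages near the edges are computed from replicated border pixels instead of zeros:
J = imfilter(I, h, 'replicate');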