Image processing: storage of an image (algorithm)

I am unable to understand the following question; could anyone explain to me what exactly it is asking?
Suppose that an image of dimension 5*6 inches has detail to the
frequency of 600 dots per inch in each direction. How many samples are
required to preserve the information in the image? If the dynamic
range of the pixel values is between 0 and 200, how many megabytes do
we need to store the whole image without compression?
I tried to solve it this way, but I have only done the first part and I am not sure it is correct:
5*6*600 = 18000
I think 18,000 is the total number of pixels required to preserve the information in the image, but I am not sure that is correct. And how do I find the megabytes needed for storage?

Well, it's 600 dpi in both the horizontal and the vertical direction, so you need (inches * dpi) samples along each axis.
Next you have to consider the bit depth. It looks like 200 values, which fits in a single byte; I'm going to assume that's per channel, so 3 bytes per pixel for a three-channel image.
( 5 * 600 ) * ( 6 * 600 ) = 10,800,000 pixels
10,800,000 * 3 = 32,400,000 bytes
32,400,000 / 1024 = 31,640.625 kilobytes
31,640.625 / 1024 = 30.899047852 megabytes
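The arithmetic above can be checked with a short script; the 3-bytes-per-pixel figure is the answer's assumption of three 8-bit channels, not something stated in the question:

```python
# Worked calculation for the sampling/storage question above.
# Assumes 600 dpi in each direction and 3 bytes per pixel
# (one byte per channel for three channels, since 0-200 fits in 8 bits).

width_in, height_in = 5, 6   # image dimensions in inches
dpi = 600                    # samples per inch in each direction
bytes_per_pixel = 3          # assumption: 3 channels, 1 byte each

total_pixels = (width_in * dpi) * (height_in * dpi)
total_bytes = total_pixels * bytes_per_pixel
megabytes = total_bytes / 1024 / 1024

print(total_pixels)          # 10800000
print(total_bytes)           # 32400000
print(round(megabytes, 3))   # 30.899
```

If the image were single-channel grayscale (one byte per pixel), the same calculation would give roughly 10.3 MB instead.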

Related

Algorithm for scaling image based on another image size and also preserve its aspect ratio

I have 2 images, Image 1 and Image 2.
Image 1 has a size of 512(width) x 515(height).
Then Image 2 with size of 256(width) x 256(height).
Image 2 will be used as a watermark and will be placed on top of Image 1.
I want Image 2 size to be dependent on Image 1 size. Image 2 can resize up or down depending on the size of Image 1.
The new size (width & height) of Image 2 should be 20 percent of the size of Image 1 and at the same time preserve its aspect ratio.
What's the algorithm to find the new size(width & height) of Image 2?
Right now I use (20 / 100) * 512 to resize it, but this does not preserve Image 2's aspect ratio.
If the two images don't have the same aspect ratio then it's mathematically impossible to scale both width and height by 20% and preserve the aspect ratio.
So, choose an axis that you will scale by, and scale the other one to the size that preserves the aspect ratio.
e.g., using width (note these are the new dimensions of Image 2, computed from Image 1's size):
new_image2_width = 512 * (20 / 100) = 102.4
Then compute the new height to preserve the aspect ratio:
original_aspect_ratio = image2_width / image2_height = 256 / 256 = 1
new_image2_height = new_image2_width / original_aspect_ratio = 102.4
Or do it the other way (this time multiplying by the ratio):
new_image2_height = 515 * (20 / 100) = 103
new_image2_width = new_image2_height * original_aspect_ratio = 103
If you have to handle arbitrary image sizes and arbitrary scale factors, you will need to switch between the two approaches depending on what you want the rule to be. E.g. you could always go with the smaller of the two results, or use the ratio-adjusted height unless it comes out larger than 20% of Image 1's height, in which case fall back to the second approach, or vice versa.
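Putting the two variants together, a small sketch (function and variable names are illustrative):

```python
# Scale Image 2 to a fraction of Image 1 along one chosen axis, then
# derive the other axis from Image 2's own aspect ratio.

def scale_to_fraction(base_w, base_h, img_w, img_h, fraction=0.20, by="width"):
    """Return (new_w, new_h) for the image being scaled (the watermark)."""
    aspect = img_w / img_h          # aspect ratio of the image being scaled
    if by == "width":
        new_w = base_w * fraction   # 20% of the base image's width
        new_h = new_w / aspect      # height follows from the aspect ratio
    else:
        new_h = base_h * fraction   # 20% of the base image's height
        new_w = new_h * aspect      # width follows from the aspect ratio
    return new_w, new_h

# The example from the question: Image 1 is 512x515, Image 2 is 256x256.
print(scale_to_fraction(512, 515, 256, 256, by="width"))
print(scale_to_fraction(512, 515, 256, 256, by="height"))
```

With a square watermark the two variants give 102.4 x 102.4 and 103 x 103 respectively; round to integers before actually resizing.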

MATLAB Code Efficiency to improve run-time

I have a three dimensional cell that holds images (i.e. images = cell(10,4,5)) and each cell block holds images of different sizes. The sizes are not too important in terms of what I’m trying to achieve. I would like to know if there is an efficient way to compute the sharpness of each of these cell blocks (total cell blocks = 10*4*5 = 200). I need to compute the sharpness of each block using the following function:
If it matters:
40 cell blocks contain images of size 240 X 320
40 cell blocks contain images of size 120 X 160
40 cell blocks contain images of size 60 X 80
40 cell blocks contain images of size 30 X 40
40 cell blocks contain images of size 15 X 20
which totals to 200 cells.
%% Sharpness Estimation From Image Gradients
% Estimate sharpness using the gradient magnitude:
% the sum of all gradient norms divided by the number of pixels
% gives us the sharpness metric.
function [sharpness] = get_sharpness(G)
[Gx, Gy] = gradient(double(G));
S = sqrt(Gx.*Gx + Gy.*Gy);
sharpness = sum(S(:)) / numel(G);  % divide by this image's own pixel
                                   % count, since the sizes vary
end
Currently I am doing the following:
for i = 1 : 10
    for j = 1 : 4
        for k = 1 : 5
            % store each result instead of overwriting a scalar
            sharpness(i,j,k) = get_sharpness(images{i,j,k});
        end
    end
end
The sharpness function isn’t anything fancy. I just have a lot of data hence it takes a long time to compute everything.
Currently I am using a nested for loop that iterates through each cell block. Hope someone can help me find a better solution.
(P.S. This is my first time asking a question hence if anything is unclear please ask further questions. THANK YOU)
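In MATLAB itself the nested loops can be collapsed into a single cellfun call (sharpness = cellfun(@get_sharpness, images)), though that mainly tidies the code; the per-image gradient work dominates the runtime either way. As a language-neutral sketch of the same computation, dividing by each image's own pixel count, here is the idea in Python/NumPy:

```python
import numpy as np

# Mean gradient magnitude per pixel, the same metric as the MATLAB
# get_sharpness function above.

def sharpness(img):
    # np.gradient on a 2-D array returns (d/drow, d/dcol)
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx**2 + gy**2).sum() / img.size

# The triple loop becomes a single map over the collection of images.
images = [np.zeros((15, 20)),                 # flat image: sharpness 0
          np.tile(np.arange(5.0), (4, 1))]    # linear ramp: gradient 1 everywhere
scores = [sharpness(im) for im in images]
```

For a real speedup on 200 images, parallelizing the map (parfor in MATLAB, multiprocessing in Python) helps more than restructuring the loop.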

Choose best image size from a list for given screen dimensions

Given the following possible screen sizes:
720x480
1280x720
1920x1080
3840x2160
And a range of image sizes similar to the following (which may vary to some degree, and the maximum size can be anything up to the limit of available memory):
Square        75 x 75
Large Square  150 x 150
Thumbnail     100 x 75
Small         240 x 180
Small 320     320 x 240
Medium        500 x 375
Medium 640    640 x 480
Medium 800    800 x 600
Large         1024 x 768
Large 1600    1600 x 1200
Large 2048    2048 x 1536
Original      3264 x 2448
And note that some images may not be available in the "Original" size, and may not be larger than 1024x768.
I need to choose the best image for the current screen dimension.
I'm unsure how to approach this. The language will be Brightscript, but I'm really looking for a selection algorithm, or at least some suggestions on how to write the selection algorithm.
I need to choose the best image for the current screen dimension
It depends on what one means by "the best". One could optimize for width, for height, or for both at the same time (i.e., minimize the remaining, uncovered screen area). Let's minimize the remaining area. Here's pseudocode:
given screen s0
initialize best_image := None
initialize best_remaining_area := INF
for image in image_list:
    if s0.height < image.height or s0.width < image.width:
        continue
    remaining_area = s0.height * s0.width - image.height * image.width
    if remaining_area < best_remaining_area:
        best_remaining_area = remaining_area
        best_image = image
return best_image
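A direct translation of the pseudocode into Python (Brightscript would follow the same structure); the size list below is the one from the question with the name column dropped:

```python
# Pick the largest image that fits entirely on the screen, i.e. the one
# that minimizes the uncovered screen area.

def best_image(screen, image_list):
    sw, sh = screen
    best, best_remaining = None, float("inf")
    for w, h in image_list:
        if w > sw or h > sh:
            continue  # image would not fit on the screen
        remaining = sw * sh - w * h
        if remaining < best_remaining:
            best_remaining, best = remaining, (w, h)
    return best  # None if nothing fits

sizes = [(75, 75), (150, 150), (100, 75), (240, 180), (320, 240),
         (500, 375), (640, 480), (800, 600), (1024, 768),
         (1600, 1200), (2048, 1536), (3264, 2448)]
print(best_image((1280, 720), sizes))   # -> (800, 600)
```

Note that 1024x768 loses on a 1280x720 screen because its height does not fit; if you would rather allow slight overflow and crop or letterbox, relax the fit test and minimize the absolute area difference instead.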

What is benefit when convert signed 16 bit to unsigned 8 bit in medical image?

I have a DICOM image whose type is 16-bit signed. In some papers the authors convert it to unsigned 8-bit, but they do not explain why. Could you explain the benefit of this conversion, and how to implement it in MATLAB?
Unsigned 8-bit images take up less memory, and some operations, for example median filtering, can be performed much faster on them.
However, you risk losing information when the dynamic range of the original image spans more than 256 gray values.
If you do want to convert images, you can use convertedImage = uint8(image - min(image(:)));. However, if you are not limited in terms of RAM, you may want to convert the image to double instead, convertedImage = double(image);, since that way more mathematical operations, such as many filtering approaches, will be available to you.
The simple answer is, it makes the image take less memory. This not only helps when it comes to preserving storage space but also will most probably speed up processing times.
Also an algorithm which works on uint8 has a good chance of working for other types, too.
To convert from int16 to uint8 in MATLAB you have to consider what exactly you want.
Does your image contain values that span only 256 levels? Then you can do convertedImage = uint8(image - min(image(:))) as jonas said, but this will clip any values that end up out of range:
>> uint8([-1 0 1 200 255 256 257])
ans =
0 0 1 200 255 255 255
>> uint8([-1 0 1 200 255 256 257] - min([-1 0 1 200 255 256 257]))
ans =
0 1 2 201 255 255 255
If your image uses the full span of possible int16 values, you will want to scale it first so that its values range from 0 to 255.
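A sketch of that full-range scaling in Python/NumPy (in MATLAB the same effect comes from the subtract-and-divide arithmetic, or from im2uint8(mat2gray(image)) with the Image Processing Toolbox); the int16 extremes below are just for illustration:

```python
import numpy as np

# Shift a signed image to zero, scale its full range into 0..255,
# then cast to uint8.

def to_uint8(img):
    img = img.astype(np.float64)
    mn, mx = img.min(), img.max()
    if mx == mn:                 # flat image: avoid division by zero
        return np.zeros(img.shape, dtype=np.uint8)
    return np.round((img - mn) / (mx - mn) * 255).astype(np.uint8)

pixels = np.array([-32768, 0, 32767], dtype=np.int16)
out = to_uint8(pixels)           # extremes map to 0 and 255
```

This rescaling compresses rather than clips, so relative contrast survives, but distinct 16-bit values that land in the same 8-bit bin are merged, which is exactly the information loss mentioned above.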

MATLAB imresize increases image size on disc on downscaling to half

I was trying to downscale a .png image, originally 25 kB, using imresize with a scale factor of 0.5. However, on saving the scaled image with imwrite, the size of the saved file becomes 52 kB.
The following is the image and the commands:
image=imread('image0001.png');
B = imresize(image, 0.5);
imwrite(B,'img0001.png','png');
This also happens if the resolution is specified as follows:
B = imresize(image, [400 300]);
What is the reason for this? It seems to work fine when scaling by 0.15.
The reason is that imresize uses bicubic interpolation by default, thus producing additional pixel values. Your original image file is small because the image has a small number of unique pixel values, which PNG compresses very well. After interpolation that number increases, thus increasing the file size.
To preserve the number of unique values you can use: B = imresize(image, 0.5, 'nearest');. You can check it as follows:
image=imread('image0001.png');
B = imresize(image, 0.5);
numel(unique(image)) % gives 18
numel(unique(B)) % gives 256
with new interpolation:
image=imread('image0001.png');
B = imresize(image, 0.5, 'nearest');
numel(unique(image)) % gives 18
numel(unique(B)) % gives 18
Saving B now should produce smaller size.
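To see why interpolation inflates the number of distinct pixel values, here is a small NumPy sketch in which 2x2 block averaging stands in for bicubic's neighbour blending, and array striding stands in for nearest-neighbour:

```python
import numpy as np

# A two-valued image: only the values 0 and 255 appear, with the
# boundary placed so that some 2x2 blocks straddle it.
img = np.zeros((20, 20))
img[:, 11:] = 255

nearest = img[::2, ::2]                                  # pick existing pixels
averaged = img.reshape(10, 2, 10, 2).mean(axis=(1, 3))   # blend 2x2 neighbours

print(np.unique(img).size)       # 2
print(np.unique(nearest).size)   # 2 -- still only the original values
print(np.unique(averaged).size)  # 3 -- a blended in-between value appears
```

Nearest-neighbour only ever copies existing pixels, so the value set (and hence PNG's ability to compress it) is preserved; any blending method manufactures intermediate values at edges, and a real bicubic kernel blends far more aggressively than this averaging stand-in.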
