How can I change the pixel value range in an image? - image

I have grayscale images with values in the range [0-65533]. I've never seen this range before — what is it?
I want to scale the values to the range [0-1200]. I tried the imadjust function, but it does not work because this function requires values between 0.0 and 1.0 only.
How can I use imadjust to scale these values properly?

That range of values suggests that your grayscale image contains unsigned 16-bit integers, i.e. it is of type uint16 (integer values from 0 to 65535). The documentation for imadjust states that it supports images of this type, but it's still a little tricky to get the results you want.
Regardless of the image type, the contrast limits are always expected to be in the range [0 1]. This will require you to rescale them yourself by dividing by 65535:
scaledImage = imadjust(uint16(inputImage), [0 65533]./65535, [0 1200]./65535);
Note that I also added the conversion uint16(...) just to make absolutely sure the input image is that type when passed to imadjust. If your input image happened to be converted to type double first, imadjust would expect the values to be in the range [0 1] for the image as well, which would give you an incorrect output in this case.

If I understand correctly, you can just do something like this:
newimage = 1200 .* oldimage ./ 65533;
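For what it's worth, the same linear rescaling can be sketched outside MATLAB too. This is a minimal NumPy version (the array contents here are illustrative, not from the question):

```python
import numpy as np

# A minimal sketch of the same linear rescaling in NumPy, assuming
# `old` is a uint16 array with values in [0, 65533].
old = np.array([0, 32766, 65533], dtype=np.uint16)

# Work in float to avoid integer overflow/truncation, then round back.
new = np.round(old.astype(np.float64) * 1200.0 / 65533.0).astype(np.uint16)

print(new)  # endpoints map to 0 and 1200
```

The cast to float before multiplying is the important part; doing the arithmetic in uint16 directly would overflow.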

Related

A proper way to convert 2D Array into RGB or GrayScale image for precision difference

I have a 2D CNN model where I perform a classification task. My images all come from sensor data after conversion.
So, normally, I convert them into images using the following approach:
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.uint8(pic*255), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.jpeg")
This is what I obtain:
However, their precision is somewhat important. For instance, some of the numerical values are:
117.79348187327987 or 117.76568758022673.
As you can see, they differ only after the decimal point. When I use uint8, both are truncated to 117 when converted to image pixels, so they look the same. But I'd like them to remain distinguishable. In some cases, the difference only appears at the 8th or 10th digit.
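The truncation described above can be reproduced with a quick check (using the two example readings from the question):

```python
import numpy as np

# Nearby float readings collapse to the same uint8 pixel value:
# the fractional part is simply discarded on conversion.
a = 117.79348187327987
b = 117.76568758022673

pa, pb = np.uint8(a), np.uint8(b)
print(pa, pb)  # both become 117 — the sub-integer difference is lost
```

This is exactly why an 8-bit image format cannot carry differences at the 8th or 10th decimal digit.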
So, when I try to use mode F and save them as .jpeg in the Image.fromarray line, it gives me an error saying that PIL cannot write mode F to JPEG.
Then, I tried first converting them to RGB as follows:
img = Image.fromarray(pic, 'RGB')
Here I do not cast pic with np.float32 or multiply it by 255; I use it as is. Then, I convert this image to grayscale. This is what I got for the RGB image;
After converting RGB into grayscale:
As you can see, there is a critical difference between the first picture and the last one. So, what would be the proper way to use them in 2D CNN classification? Or should I convert them to RGB and use grayscale in the CNN implementation with a channel of 1? My image dimensions are 1000x9; I can also change this to 250x36 or 100x90, it doesn't matter much. By the way, I am able to get more than 90% test accuracy in the CNN when I use the first type of image.
The main problem here is which conversion method will let me preserve those precision differences across the pixels. Could you give me some ideas?
---- EDIT -----
Using .tiff format I made some quick comparisons.
First of all, my data looks like the following;
So, if I convert this first reading into an image using the following code, where I use np.float64 and mode L to get a grayscale image;
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.float64(pic), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.tiff")
It gives me this image;
Then, the first 15x9 block looks like the following image. The contradiction is that if you take a closer look at the numerical array, the (1,4) element is completely black although the numerical value there is 0.4326132099074307. In a grayscale image, black means close to 0 and white means close to 1. If the conversion worked row-wise, there is another value in that row closer to 0, and I would expect (1,5) to be black instead; if it worked column-wise, something is wrong there too. As I said, this data has already been normalized and varies between 0 and 1. So, by what logic does it convert the array into an image? What kind of operation does it perform?
Secondly, if I first get an RGB image of the data and then convert it to grayscale, why do I not get exactly the same image as the one I obtained first? Shouldn't the image from direct grayscale conversion (L mode, np.float64) and the one from the RGB-based route (first RGB, then grayscale) be the same? There is a difference in the black-white pixels between those images, and I do not know why.
---- EDIT 2 ----
.tiff image with F mode and np.float32 gives the following;
I don't really understand your question, but you seem to want to store image differences that are less than 1, i.e. less than the resolution of integer values.
To do so, you need to use an image format that can store floats. JPEG, PNG, GIF, TGA and BMP cannot store floats. Instead, use TIFF, EXR or PFM formats which can handle floats.
Alternatively, you can create 16-bit PNG images, wherein each pixel can store values in the range 0..65535. So, say the maximum difference you wanted to store was 60: you could calculate the difference, multiply it by 1000, and round it to make an integer in the range 0..60000, then store that as a 16-bit PNG.
You could record the scale factor as a comment within the image if it is variable.
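The scale-and-round idea above can be sketched in NumPy alone (actually writing the 16-bit PNG is left to an imaging library; the input values here are made up):

```python
import numpy as np

# Differences in [0, 60] are scaled by 1000 into the uint16 range,
# then divided back out on read. The scale factor must be recorded
# somewhere (e.g. as an image comment) to invert the mapping.
diff = np.array([0.0, 0.001234, 59.999], dtype=np.float64)

scale = 1000.0
stored = np.round(diff * scale).astype(np.uint16)   # fits in 0..60000
recovered = stored.astype(np.float64) / scale

# Round-tripping loses at most half a quantization step (0.0005 here).
print(np.max(np.abs(recovered - diff)))
```

With a larger scale factor you keep more digits, at the cost of a smaller representable range.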

Matlab imshow doesn't plot correctly but imshowpair does

I have imported an image, converted it to double precision, and performed some filtering on it.
When I plot the result with imshow, the double image is too dark. But when I use imshowpair to plot the original and the final image, both images are displayed correctly.
I have tried using uint8, im2uint8, and multiplying by 255 before calling those functions, but the only way to obtain the correct image is with imshowpair.
What can I do?
It sounds like the majority of your intensities / colour data lie outside the dynamic range that imshow accepts when showing double data.
I also see that you're using im2double, but im2double simply converts the image to double, and if the image is already double, nothing happens. The problem is probably the way you are filtering the images. Are you doing some sort of edge detection? The reason you're getting dark images is probably that the majority of your intensities are negative, or hovering around 0. When displaying images of type double, imshow assumes the dynamic range of intensities is [0,1].
Therefore, one way to resolve your problem is to do:
imshow(im,[]);
This rescales the display range so that the smallest value is mapped to 0 and the largest to 1.
If you'd like a more permanent solution, consider creating a new output variable that does this for you:
out = (im - min(im(:))) / (max(im(:)) - min(im(:)));
This will perform the same shifting that imshow does when displaying data for you. You can now just do:
imshow(out);
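The same min-max rescaling can be sketched in NumPy, for anyone following along outside MATLAB (the sample matrix is illustrative):

```python
import numpy as np

# Equivalent of out = (im - min(im(:))) / (max(im(:)) - min(im(:))):
# filtered output with negative values is mapped into [0, 1].
im = np.array([[-3.0, 0.0],
               [ 1.5, 6.0]])

out = (im - im.min()) / (im.max() - im.min())
print(out.min(), out.max())  # 0.0 1.0
```

This is also what `imshow(im,[])` effectively does for display purposes.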

Quantization Error in Lossless JPEG2000 (Matlab)

I have the following matrix:
A = [0.01 0.02; 1.02 1.80];
I want to compress this using JPEG 2000 and then recover the data. I used imwrite and imread in MATLAB as follows:
imwrite(A,'newA.jpg','jp2','Mode','lossless');
Ahat = imread('newA.jpg');
MATLAB give me the result in uint8. After converting data to double I get:
Ahat_double = im2double(Ahat)
Ahat_double =
0.0118 0.0196
1.0000 1.0000
I know this is because of the quantization, but I don't know how to resolve it and get the exact input data, which is what lossless compression is supposed to do.
Converting data to uint8 at the beginning did not help.
The reason you are not getting the correct results is that A is a double precision matrix. When you write an image to file in double precision, the values are assumed to vary between [0,1]. Your matrix has two values that are > 1; when you write it to file, these values saturate to 1 before being saved. In fact, before writing, the intensities are scaled to uint8 so that they vary between [0,255]. When you re-read the values, those entries come back as intensity 255, i.e. a double intensity of 1.0.
The other two values make sense when you read them back in: 0.01 in double form becomes 255*0.01 = 2.55, which is rounded to 3, and 3 / 255 = 0.0118. For 0.02, 255*0.02 = 5.1, which is rounded to 5, and 5 / 255 = 0.0196.
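That arithmetic can be reproduced directly (a plain-Python sketch of the quantization, using the values from A):

```python
# Writing doubles as an 8-bit image quantizes each value to one of
# 256 levels, and values > 1 saturate at the top level.
A = [0.01, 0.02, 1.02, 1.80]

read_back = []
for v in A:
    level = min(round(v * 255), 255)   # scale to uint8 and saturate
    read_back.append(level / 255)

print(read_back)  # approximately [0.0118, 0.0196, 1.0, 1.0]
```

The first two entries land on the nearest of 256 representable levels; the last two are clipped.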
The only way you can possibly get around this is to renormalize your data before you write the image so that it conforms to [0,1]. To get the original data back, you would have to know the minimum and maximum values you had before you normalized this. Even when you do this, there are only 256 possible double precision values that can be encoded in your image (assuming grayscale), and so you will not be able to capture all possible floating point values this way.
As such, there is basically no way around your problem, so you're SOL!
If you want to encode arbitrary data using the JPEG 2000 standard, perhaps you should download this library from MATLAB's File Exchange. I haven't taken a closer look at it, but it may be able to compress arbitrary data using the JPEG 2000 algorithm.

Printing the pixel values of YUV image

When I convert an image from RGB to YUV, I can't get the value of Y when I try printing the pixels. I get a value of 377, and when I cast it to an integer I get 255, which I presume is incorrect. Is there a better, or rather correct, way to print the pixel values of a YUV image?
Actually I am printing the values (int)src.at<Vec3b>(j,i).val[0] = 255
and src.at<Vec3b>(j,i).val[0] = 377
Also, on that note, Y is a combination of R, G, and B weighted by some constants, according to my notes. I am confused about how to get the value of Y.
This is a problem with OpenCV. OpenCV does not gracefully handle (scale) the YUV or HSV color spaces in uchar format. With Vec3b you effectively have 3 uchar channels, each ranging over [0, 255].
The solution is to use another matrix type. With cv::Mat3f you have a 3-channel floating point image. Then the values will be correctly converted by cvtColor function. You can get a Mat3f from a Mat3b by assignment.
Another solution that uses less memory may be Mat3s and Mat3w types, if supported by cvtColor.
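As a side note on the "combination of RGB with some constants" part of the question, this is a sketch of the standard BT.601 luma formula that full-range RGB-to-YUV conversions use (the pixel values here are hypothetical):

```python
# Y = 0.299*R + 0.587*G + 0.114*B for one example pixel.
r, g, b = 200, 150, 100

y = 0.299 * r + 0.587 * g + 0.114 * b
print(y)  # the weights sum to 1, so 8-bit RGB keeps Y within [0, 255]
```

Since the three weights sum to 1.0, a correctly computed Y can never exceed 255 for 8-bit input, which is why a printed value of 377 points at a type/handling problem rather than the formula itself.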

Save an imagesc output in Matlab

I am using imagesc to display an integral image. However, I only manage to display it and then have to save it by hand; I can't find a way to save the image from the script with imwrite or imsave. Is it possible at all?
The code:
image='C:\image.jpg';
in1= imread((image));
in=rgb2gray(in1);
in_in= cumsum(cumsum(double(in)), 2);
figure, imagesc(in_in);
You can also use the print command. For instance if you are running over multiple images and want to serialize them and save them, you can do something like:
% Create a new figure
figure (fig_ct)
% Plot your figure
% save the figure to your working directory
eval(['print -djpeg99 ' num2str(fig_ct)]);
% increment the counter for the next figure
fig_ct = fig_ct+1;
where fig_ct is just a counter. If you are interested in saving it in another format different than jpeg take a look at the documentation, you can do tiff, eps, and many more.
Hope this helps
I believe your problem may be that you are saving a double matrix whose values are not in the range [0 1]. If you read the documentation, you'll see that
If the input array is of class double, and the image is a grayscale or
RGB color image, imwrite assumes the dynamic range is [0,1] and
automatically scales the data by 255 before writing it to the file as
8-bit values.
You can convert it yourself to a supported type (that's logical, uint8, uint16, or double) or get it in the range [0 1] by, for example, dividing it by the max:
imwrite (in_in / max (in_in(:)), 'out.jpg');
You may still want to further increase the dynamic range of the saved image. For example, subtract the minimum before dividing by the max.
in_in = in_in - min (in_in(:));
in_in = in_in / max (in_in(:));
imwrite (in_in, 'out.jpg');
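The same pipeline (integral image via two cumulative sums, then normalization before writing) can be sketched in NumPy, with a tiny illustrative matrix:

```python
import numpy as np

# Integral image: cumulative sum along rows, then along columns,
# mirroring cumsum(cumsum(double(in)), 2) in the question's code.
gray = np.array([[1.0, 2.0],
                 [3.0, 4.0]])

integral = np.cumsum(np.cumsum(gray, axis=0), axis=1)

# Rescale to [0, 1] so an 8-bit image writer won't clip the values.
integral -= integral.min()
integral /= integral.max()
print(integral)
```

Each entry of the integral image is the sum of all pixels above and to the left, which is why the raw values quickly grow far beyond [0, 1] and must be rescaled before saving.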
If you want exactly what imagesc displays:
The imagesc function scales image data to the full range of the current colormap.
I don't know precisely what that entails, but call imagesc with one output argument, inspect the returned image handle to see the colormap, and pass that data to imwrite().
I'm a very new programmer, so apologies in advance if this isn't very helpful, but I just had the same problem and managed to figure it out. I used uint8 to convert it like this:
imwrite(uint8(in_in), 'in_in.jpg', 'jpg');
