Quantization Error in Lossless JPEG2000 (Matlab)

I have the following matrix:
A = [0.01 0.02; 1.02 1.80];
I want to compress this using JPEG 2000 and then recover the data. I used imwrite and imread in MATLAB as follows:
imwrite(A,'newA.jpg','jp2','Mode','lossless');
Ahat = imread('newA.jpg');
MATLAB gives me the result in uint8. After converting the data to double I get:
Ahat_double = im2double(Ahat)
Ahat_double =
0.0118 0.0196
1.0000 1.0000
I know this is because of the quantization, but I don't know how to resolve it and get the exact input data, which is what lossless compression is supposed to do.
Converting data to uint8 at the beginning did not help.

The reason you are not getting the correct results is that A is a double precision matrix. When you write images to file in double precision, imwrite assumes that the values vary between [0,1]. In your matrix, you have 2 values that are > 1. When you write this to file, these values saturate to 1 and are then saved. Actually, before even writing, the intensities are scaled to uint8 so that they vary between [0,255]. When you re-read the values, the saturated entries come back as intensity 255, or a double intensity of 1.0.
The other two values make sense when you read them back in: 0.01 in double form is actually 255*0.01 = 2.55, which rounds to 3, and 3 / 255 = 0.0118. For 0.02, this is 255*0.02 = 5.1, which rounds to 5, and 5 / 255 = 0.0196.
The only way you can possibly get around this is to renormalize your data before you write the image so that it conforms to [0,1]. To get the original data back, you would have to know the minimum and maximum values you had before you normalized this. Even when you do this, there are only 256 possible double precision values that can be encoded in your image (assuming grayscale), and so you will not be able to capture all possible floating point values this way.
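A minimal sketch of that renormalization workaround (the lo/hi bookkeeping and the .jp2 filename are illustrative; you must carry lo and hi alongside the file to undo the scaling):
% Normalize to [0,1] before writing, remembering the original range
lo = min(A(:));
hi = max(A(:));
imwrite((A - lo) / (hi - lo), 'newA.jp2', 'Mode', 'lossless');
% Read back and undo the normalization; the values are still limited
% to 256 grey levels, so this only approximates the original A
Ahat = im2double(imread('newA.jp2')) * (hi - lo) + lo;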
As such, if you need the exact double-precision values back, there is basically no way around your problem, so you're out of luck!
If you want to encode arbitrary data using the JPEG 2000 standard, perhaps you should download this library from MATLAB's File Exchange. I haven't taken a closer look at it, but it may be able to compress arbitrary data using the JPEG 2000 algorithm.

Related

A proper way to convert 2D Array into RGB or GrayScale image for precision difference

I have a 2D CNN model where I perform a classification task. My images all come from sensor data after conversion.
So, normally, my way is to convert them into images using the following approach:
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.uint8(pic*255), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.jpeg")
This is what I obtain:
However, their precision is sort of important. For instance, some of the numerical values are like:
117.79348187327987 or 117.76568758022673.
As you see in the above line, they differ only in the later digits. When I use uint8, both become 117 when converted into image pixels, so they look the same, right? But I'd like to make them different. In some cases, the difference is even at the 8th or 10th digit.
So, when I try to use mode F in the Image.fromarray line and save as .jpeg, it gives me an error saying that PIL cannot write mode F to JPEG.
Then, I tried to first convert them to RGB like the following:
img = Image.fromarray(pic, 'RGB')
Here I am not putting np.float32 just before pic and not multiplying it by 255, leaving it as it is. Then, I convert this image to grayscale. This is what I got for the RGB image:
After converting RGB into grayscale:
As you see, there seems to be a critical difference between the first pic and the last pic. So, what should be the proper way to use them in 2D CNN classification? Or should I convert them into RGB, then choose grayscale in the CNN implementation with a channel count of 1? My image dimensions are 1000x9. I can even change this dimension to 250x36 or 100x90; it doesn't matter too much. By the way, in the CNN network, I am able to get more than 90% test accuracy when I use the first type of image.
The main problem here is: with which image conversion method will I be able to take those precision differences across the pixels into account? Would you give me some ideas?
---- EDIT -----
Using .tiff format I made some quick comparisons.
First of all, my data looks like the following;
So, if I convert this first reading into an image using the following code, where I use np.float64 with mode L to get a grayscale image;
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.float64(pic), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.tiff")
It gives me this image;
Then, the first 15x9 matrix looks like the following image. The contradiction is this: if you take a closer look at the numerical array, the (1,4) member is completely black, yet the numerical value there is 0.4326132099074307. For grayscale images, black means close to 0, since values close to 1 come out white. If it were doing a row-wise operation, there is another value closer to 0, and I would have expected to see black at the (1,5) location instead. If it does a column-wise operation, there is again something wrong. As I said, this data has already been normalized and varies between 0 and 1. So, what is the logic by which it converts the array into an image? What kind of operation does it do?
Secondly, if I first get an RGB image of the data and then convert it into a grayscale image, why am I not getting exactly the same image as what I obtained first? Shouldn't the image coming from direct grayscale conversion (L mode, np.float64) and the one coming from the RGB-based route (first RGB, then converted to grayscale) be the same? There is a difference in the black and white pixels of those images, and I do not know why.
---- EDIT 2 ----
.tiff image with F mode and np.float32 gives the following;
I don't really understand your question, but you seem to want to store image differences that are less than 1, i.e. less than the resolution of integer values.
To do so, you need to use an image format that can store floats. JPEG, PNG, GIF, TGA and BMP cannot store floats. Instead, use TIFF, EXR or PFM formats which can handle floats.
Alternatively, you can create 16-bit PNG images wherein each pixel can store values in the range 0..65535. So, say the maximum difference you wanted to store was 60: you could calculate the difference, multiply it by 1000, round it to make an integer in the range 0..60000, and store that as a 16-bit PNG.
You could record the scale factor as a comment within the image if it is variable.
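A minimal MATLAB sketch of that scale-and-round round trip (data stands in for your float matrix; the scale factor of 1000 follows the example above):
scale = 1000;                            % chosen so a maximum of 60 maps to 60000 <= 65535
data16 = uint16(round(data * scale));    % quantize the floats to 16-bit integers
imwrite(data16, 'data16.png');           % PNG stores 16-bit samples natively
% Read back and undo the scaling; precision is now limited to 1/scale = 0.001
recovered = double(imread('data16.png')) / scale;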

How to determine the number of bytes necessary to store an uncompressed grayscale image of size 8000 × 3400 pixels?

This is all of the information I was provided in the practice question. I am trying to figure out how to calculate it when prompted to do so on an exam...
How to determine the number of bytes necessary to store an uncompressed grayscale image of size 8000 × 3400 pixels?
I am also curious how the calculation changes if the image is a compressed binary image.
"I am trying to figure out how to calculate it when prompted to do so on an exam."
There are 8 bits to make 1 byte, so once you know how many bits-per-pixel (bpp) you have, this is a very simple calculation.
For 8 bits per pixel greyscale, just multiply the width by the height.
8000 * 3400 = 27200000 bytes.
For 1 bit per pixel black&white, multiply the width by the height and then divide by 8.
(8000 * 3400) / 8 = 3400000 bytes.
It's critical that the image is uncompressed, and that there's no padding at the end of each raster line. Otherwise the count will be off.
The first thing to work out is how many pixels you have. That is easy, it is just the width of the image multiplied by the height:
N = w * h
So, in your case:
N = 8000 * 3400 = 27200000 pixels
Next, in general you need to work out how many samples (S) you have at each of those 27200000 pixel locations in the image. That depends on the type of the image:
if the image is greyscale, you will have a single grey value at each location, so S=1
if the image is greyscale and has transparency as well, you will have a grey value plus a transparency (alpha) value at each location, so S=2
if the image is colour, you will have three samples for each pixel - one Red sample, one Green sample and one Blue sample, so S=3
if the image is colour and has transparency as well, you will get the 3 RGB values plus a transparency (alpha) value for each pixel, so S=4
there are others, but let's not get too complicated
The final piece of the jigsaw is how big each sample is, or how much storage it takes, i.e. the bytes per sample (B).
8-bit data takes 1 byte per sample, so B=1
16-bit data takes 2 bytes per sample, so B=2
32-bit floating point or integer data take 4 bytes per sample, so B=4
there are others, but let's not get too complicated
So, the actual answer for an uncompressed greyscale image is:
storage required = w * h * S * B
and in your specific case:
storage required = 8000 * 3400 * 1 * 1 = 27200000 bytes
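The formula is trivial to script as a sanity check; a minimal MATLAB sketch (the function name imageStorageBytes is just illustrative):
function bytes = imageStorageBytes(w, h, S, B)
% Uncompressed size: width * height * samples per pixel * bytes per sample
bytes = w * h * S * B;
end
For the 8000 × 3400 8-bit greyscale case, imageStorageBytes(8000, 3400, 1, 1) returns 27200000, matching the hand calculation.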
If the image were compressed, the only thing you should hope and expect is that it takes less storage. The actual amount required will depend on:
how repetitive/predictable the image is - the more predictable the image is, in general, the better it will compress
how many colours the image contains - fewer colours generally means better compression
which image file format you require (PNG, JPEG, TIFF, GIF)
which compression algorithm you use (RLE, LZW, DCT)
how long you are prepared to wait for compression and decompression - the longer you can wait, the better you can compress in general
what losses/inaccuracies you are prepared to tolerate to save space - if you are prepared to accept a lower quality version of your image, you can get a smaller file

read disparity map using png file

I calculate a disparity map
d = disparity(imgL,imgR, 'Method', 'SemiGlobal', 'BlockSize', 7);
If I want to save the disparity map in image file
dis1 = d/63; imwrite(dis1,'dis.png');
How to read this disparity map in Matlab?
I tried:
disparityMap= single(imread('dis.png')/63);
But it doesn't give the same matrix. Thanks
The problem with saving PNG files with imwrite is that for floating-point images such as your disparity map, the function multiplies the data by 255 and quantizes it to 8-bit unsigned integers before saving. Therefore, if you try to re-read this image, you need to divide by 255 to get it back to what it was before, but due to that quantization you will definitely get precision loss. You can approximate what you had before by first dividing by 255 to get your scaled disparity map, then multiplying by 63 to undo your previous division by 63. Also, you need to convert the datatype before doing the division, otherwise the integer datatype truncates the result, and that is also where you are going wrong:
disparityMap = single(imread('dis.png'))*(63/255);
Be wary that you will not get it exactly the same as you had it before due to the precision loss when dividing by 63 and also when writing to file. The division by 63 will make small disparities even smaller so that when you actually scale by 255, truncate and save to file, these small disparities will inevitably get mapped to a smaller number when you read the file back into memory. Therefore, you need to make absolutely sure that this is what you actually want to do.
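Putting the write and read halves together, a minimal round-trip sketch (assuming d is the floating-point disparity map from disparity and 63 is its maximum expected value):
maxDisp = 63;                              % scale so disparities land in [0,1]
imwrite(d / maxDisp, 'dis.png');           % stored as 8-bit: 255 * d/63, quantized
d2 = single(imread('dis.png')) * (maxDisp / 255);   % convert to float BEFORE scaling back
% d2 matches d only up to the 8-bit quantization step of 63/255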

How to ignore certain values in a histogram without using NaN in Matlab?

Say I have a greyscale image S and I'm looking to ignore all values above 250. How do I do it without using NaN? The reason I don't want to use NaN is that I'm looking to take statistical information from the resultant image, such as the average, etc.
You can collect all image pixel intensities that are less than 250. That's effectively performing the same thing. If your image was stored in A, you can simply do:
pix = A(A < 250);
pix will be a single vector of all image pixels in A that have intensity of 249 or less. From there, you can perform whatever operations you want, such as the average, standard deviation, calculating the histogram of the above, etc.
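For instance, picking up the statistics mentioned above (a small sketch using the pix vector from the snippet):
avgVal = mean(double(pix));   % average intensity of the kept pixels
stdVal = std(double(pix));    % their standard deviation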
Going with your post title, we can calculate the histogram of an image very easily using imhist that's part of the image processing toolbox, and so:
out = imhist(pix);
This will give you a 256 element vector where each value denotes the count for a particular intensity. If we did this properly, you should only see bin counts up to intensity 249 (location 250 in the vector). If you don't have the image processing toolbox, you can repeat the same thing using histc and manually specifying the bin cutoffs to go from 0 up to 249:
out = histc(pix, 0:249);
The difference here is that we will get a histogram of exactly 250 bins, whereas imhist gives you 256 bins by default. However, histc is deprecated and histcounts is what is recommended instead. The syntax is nearly the same, though note that histcounts treats the second argument as bin edges, so 0:249 yields 249 bins:
out = histcounts(pix, 0:249);
You can use logical indexing to build a histogram only using values in your specified range. For example you might do something like:
histogram(imgData(imgData < 250))

Save an imagesc output in Matlab

I am using imagesc to get an integral image. However, I only manage to display it and then I have to save it by hand; I can't find a way to save the image from the script with imwrite or imsave. Is it possible at all?
The code:
image='C:\image.jpg';
in1= imread((image));
in=rgb2gray(in1);
in_in= cumsum(cumsum(double(in)), 2);
figure, imagesc(in_in);
You can also use the print command. For instance if you are running over multiple images and want to serialize them and save them, you can do something like:
% Create a new figure
figure (fig_ct)
% Plot your figure
% save the figure to your working directory
print('-djpeg99', num2str(fig_ct));
% increment the counter for the next figure
fig_ct = fig_ct+1;
where fig_ct is just a counter. If you are interested in saving it in a format different from JPEG, take a look at the documentation; you can do TIFF, EPS, and many more.
Hope this helps
I believe your problem may be with the fact that you are saving a double matrix that is not in the range [0 1]. If you read the documentation, you'll see that
If the input array is of class double, and the image is a grayscale or
RGB color image, imwrite assumes the dynamic range is [0,1] and
automatically scales the data by 255 before writing it to the file as
8-bit values.
You can convert it yourself to a supported type (that's logical, uint8, uint16, or double) or get it in the range [0 1] by, for example, dividing it by the max:
imwrite (in_in / max (in_in(:)), 'out.jpg');
You may still want to further increase the dynamic range of the image you saved. For example, subtract the minimum before dividing by the max.
in_in = in_in - min (in_in(:));
in_in = in_in / max (in_in(:));
imwrite (in_in, 'out.jpg');
If you want exactly what imagesc displays:
The imagesc function scales image data to the full range of the current colormap.
I don't know exactly what that means internally, but you can call imagesc with one output variable, inspect the returned image handle to see the scaled data and the colormap, and pass those to imwrite().
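A minimal sketch of that idea (assuming in_in from the question; gray(256) is an illustrative stand-in for the figure's actual colormap, and gray2ind needs the Image Processing Toolbox):
lo = min(in_in(:));
hi = max(in_in(:));
scaled = (in_in - lo) / (hi - lo);         % stretch to [0,1] like imagesc does
cmap = gray(256);                          % or the figure's actual colormap
idx = gray2ind(scaled, size(cmap, 1));     % colormap index for each pixel
imwrite(ind2rgb(idx, cmap), 'out.png');    % saves what imagesc would display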
I'm a very new programmer, so apologies in advance if this isn't very helpful, but I just had the same problem and managed to figure it out. I used uint8 to convert it like this:
imwrite(uint8(in_in), 'in_in.jpg', 'jpg');
