Image resizing without quality loss?

Suppose I have, for example, an image of size 400 x 600. I know how to resize it to 80 x 80 using the code below:
original_image = imread(my_image);
original_image_gray = rgb2gray(original_image);
Image_resized = imresize(original_image_gray, [80 80]);
But I think that imresize will resize the image with some loss of quality. So how can I resize it without any loss of quality?

Image resizing itself will lose part of the image information, i.e. image quality.
What you can do is choose the resizing method that fits your purpose by setting the corresponding parameter:
[...] = imresize(..., method)
where method selects the interpolation algorithm, e.g. 'nearest', 'bilinear' or 'bicubic'.
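For illustration, here is a minimal Python/Pillow sketch of the same trade-off (Pillow is used in later answers; the input file photo.png is an assumption). Each resampling filter balances sharpness against aliasing differently, so it is worth saving one copy per filter and comparing them:

from PIL import Image

im = Image.open("photo.png")  # assumed input file
for name, resample in [("nearest", Image.NEAREST),
                       ("bilinear", Image.BILINEAR),
                       ("bicubic", Image.BICUBIC),
                       ("lanczos", Image.LANCZOS)]:
    # each filter makes a different sharpness/aliasing trade-off
    im.resize((80, 80), resample).save("resized_" + name + ".png")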

Matlab stores images as pixel arrays. It is impossible to store all the information contained in a 400x600-element matrix in an 80x80 matrix, so quality loss is unavoidable when resizing the pixel array, which is what imresize does.
If you want to reduce the physical size of your output, you should look at the imwrite documentation, in particular at the XResolution and YResolution parameters in the case of creating png images.
original_image = imread(my_image);
% For png output, ResolutionUnit must be 'unknown' or 'meter';
% 40000 px/m is the same as 400 px/cm.
imwrite(original_image, 'image.png', 'png', 'ResolutionUnit', 'meter', 'XResolution', 40000)
The above code will create a png of the original image with a resolution of 400 px/cm, so it prints about 1 cm wide. The png itself will still be a 400x600 px bitmap.
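The same separation between pixel data and print size exists in other toolchains; as a hedged Pillow sketch (file names and the dpi value are assumptions; 400 px/cm is roughly 1016 px/inch):

from PIL import Image

im = Image.open("image.png")
# re-save the same pixel grid, tagged to print at 1016 dpi (~400 px/cm);
# the pixel data itself is untouched
im.save("small_print.png", dpi=(1016, 1016))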

Related

Should I resize images for Mask RCNN?

I am training a custom object detector using Mask RCNN. I have custom images of different sizes, so I am wondering whether I need to resize them so that they are all the same size or not.
And if so, which method should I use to resize them?
Also, I guess that I have to resize before labeling the images, right?
You don't necessarily have to resize them beforehand. You can use this option in the model config file to set the size limits for your training:
image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 600
    max_dimension: 1024
  }
}
Please make sure all the bounding boxes are in range with the image dimensions, i.e. within the width and height of the image. The boxes and the images will then be auto-resized according to the parameters set here.
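As a quick sanity check before training, a small sketch (assuming boxes are (xmin, ymin, xmax, ymax) tuples in pixels; this helper is not part of the TensorFlow Object Detection API):

def boxes_in_range(boxes, width, height):
    # keep only the boxes that lie fully inside a width x height image
    return [(xmin, ymin, xmax, ymax)
            for xmin, ymin, xmax, ymax in boxes
            if 0 <= xmin < xmax <= width and 0 <= ymin < ymax <= height]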
In Matterport's Mask RCNN you can find documentation in the config file:
# Input image resizing
# Generally, use the "square" resizing mode for training and predicting
# and it should work well in most cases. In this mode, images are scaled
# up such that the small side is = IMAGE_MIN_DIM, but ensuring that the
# scaling doesn't make the long side > IMAGE_MAX_DIM. Then the image is
# padded with zeros to make it a square so multiple images can be put
# in one batch.
# Available resizing modes:
# none: No resizing or padding. Return the image unchanged.
# square: Resize and pad with zeros to get a square image
# of size [max_dim, max_dim].
# pad64: Pads width and height with zeros to make them multiples of 64.
# If IMAGE_MIN_DIM or IMAGE_MIN_SCALE are not None, then it scales
# up before padding. IMAGE_MAX_DIM is ignored in this mode.
# The multiple of 64 is needed to ensure smooth scaling of feature
# maps up and down the 6 levels of the FPN pyramid (2**6=64).
# crop: Picks random crops from the image. First, scales the image based
# on IMAGE_MIN_DIM and IMAGE_MIN_SCALE, then picks a random crop of
# size IMAGE_MIN_DIM x IMAGE_MIN_DIM. Can be used in training only.
# IMAGE_MAX_DIM is not used in this mode.
IMAGE_RESIZE_MODE = "square"
IMAGE_MIN_DIM = 800
IMAGE_MAX_DIM = 1024
As I understand it: when you train or predict, this configuration is applied without you having to resize anything manually. Of course, if you have images of different sizes and aspect ratios this can be a problem.
512x512: ratio = 1, so this will be upscaled and zero-padded to 1024x1024.
2054x2456: ratio ≈ 0.836, so this will be downscaled to fit within 1024x1024, keeping the 0.836 aspect ratio and using zero-padding to get the square shape.
Where it could go wrong is if an object is relatively smaller or bigger compared to the differing image dimensions, which can leave it stretched or compressed after resizing. In that case you should preprocess the images manually, so that your object has the same size and shape after the Mask RCNN pipeline has molded the images into the right shape.
The Matterport function is found in "utils.py" and is called "resize_image".
In "model.py" it is used during training when loading the data, and during inference (detect) to reshape the given numpy array.

Using PIL (Pillow) and Image but keeping a good resolution

I'm using PIL to resize my images. Most of them are 640x480, some of them are bigger. Most of them are in png, but I have jpeg extension too.
I want to resize all my images to be 32x32 pixels, but I noticed that the resolution seems to change after using PIL.
I found that this is a typical question and that the problem often turns out to lie in how the image is saved.
I tried different values of "quality", and, reading the documentation, different parameters such as "subsampling", with both the jpeg and png formats.
Here is my code:
import os
from PIL import Image

im = Image.open(os.path.join(my_path, file_name))
img = im.resize((32, 32))
if grey_scale is True:
    img = img.convert('L')  # to resize image in gray scale
img.save(os.path.join(my_path, file_name[:file_name.index('.')] + '.jpg'),
         "JPEG", quality=100)
Here I have my input image
Here I have the grainy output obtained with my code
How can I resize my images to be smaller while keeping a very good resolution?
The sort of image transformation you want is not possible. When you resize a raster image to smaller pixel dimensions, you have to either downsample it or simply drop pixels (a plain resize). A pixel can represent only one color at a time (at least on a sub-pixel-based display like a monitor), and your final image has only 1024 of them, while the detail in the original is far more than that number of pixels can represent. The result is therefore always a considerably lower-quality, pixelated image with artifacts.
But this is not the general case, as it depends a lot on what sort of detail the image represents. If the image is not complex (does not contain a lot of color changes), it can be resized to a considerably smaller version without losing detail.
746x338 dimension image
32x32 dimension version of previous image
There is almost no difference between the two images (except for their physical size), even though their dimensions are very different. The reason is that these are non-complex images containing the same pixel value over large areas, which makes it easier to resize them without loss of detail.
Now if the same process is tried on a complex image, like the one you gave in the question, the result is a pixelated image.
SOLUTION:
- Choose a larger dimension for the final image (a lot more than 32x32) if you want to preserve the image quality.
- Create a vector equivalent of your image, which is resolution independent and can be resized to a larger or smaller physical size without affecting image quality.
P.S.: Don't save a .png image with a .jpg extension, as jpg is a lossy compression technique (for the most part), which results in a final image of lower quality than the original even if no manipulations are made to it.
When reducing the size you can't keep the image crisp, because crisp detail needs pixels and you are giving those up; you can't have both. There are different filters you can use in this case. See the code below:
import PIL
from PIL import Image

# note: ANTIALIAS is the LANCZOS filter; recent Pillow versions spell it Image.LANCZOS
filters = [PIL.Image.NEAREST, PIL.Image.BILINEAR, PIL.Image.BICUBIC, PIL.Image.ANTIALIAS]
grey_scale = False
i = 0
for filter in filters:
    im = Image.open("./image.png")
    img = im.resize((32, 32), filter)
    if grey_scale is True:
        img = img.convert('L')  # to resize image in gray scale
    i = i + 1
    img.save("./" + str(i) + '.jpg', "JPEG", quality=100)
Results:
Next: with resize you don't maintain the aspect ratio. So instead of resize, use the thumbnail method, which keeps the aspect ratio as well:
import PIL
from PIL import Image

filters = [PIL.Image.NEAREST, PIL.Image.BILINEAR, PIL.Image.BICUBIC, PIL.Image.ANTIALIAS]
grey_scale = False
i = 5
for filter in filters:
    im = Image.open("./image.png")
    im.thumbnail((32, 32), filter)  # thumbnail resizes in place and returns None
    img = im
    if grey_scale is True:
        img = img.convert('L')  # to resize image in gray scale
    i = i + 1
    img.save("./" + str(i) + '.jpg', "JPEG", quality=100)
Results:

Reshaping greyscale images for neural network training - how to do this correctly

I have a general question about convolutional neural networks and image processing for training if your images are grey scale.
Take this image for example:
It's a grey scale image, but when I do
image = cv2.imread("image.jpg")
print(image.shape)
I get
(1024, 1024, 3)
I know that opencv automatically creates 3 channels for jpg images. But when it comes to network training, it would be much more computationally efficient if I could use images of shape (1024, 1024, 1), just like many of the MNIST tutorials demonstrate. However, if I reshape this:
reshaped_image = image.reshape(1024, 1024, 1)
And then try for example to show the image
plt.axis("off")
plt.imshow(reshaped_image)
plt.show()
I get
raise TypeError("Invalid dimensions for image data")
Does that mean that reshaping my images this way before network training is incorrect? I want to keep as much information in the image as possible but I don't want to have those extra channels if they aren't needed.
The reason that you're getting the error is that the output of your reshape does not have the same number of elements as the input. From the documentation for reshape:
No extra elements are included into the new matrix and no elements are excluded. Consequently, the product rows*cols*channels() must stay the same after the transformation.
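The element-count constraint is easy to reproduce with a stand-in array (a minimal numpy sketch; cv2.imread returns a numpy array of the same shape):

import numpy as np

image = np.zeros((1024, 1024, 3), dtype=np.uint8)  # stand-in for the loaded BGR image
try:
    image.reshape(1024, 1024, 1)  # 3,145,728 elements cannot fill 1,048,576 slots
except ValueError as e:
    print(e)  # cannot reshape array of size 3145728 into shape (1024,1024,1)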
Instead, use cvtColor to convert your 3-channel BGR image to a 1-channel grayscale image:
In Python:
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
Or in C++:
cv::cvtColor(image, image, cv::COLOR_BGR2GRAY);
You could also avoid conversion altogether by reading the image using the IMREAD_GRAYSCALE flag:
image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
or
image = cv2.imread(image_path, 0)
(Thanks to @Alexander Reynolds for the Python code.)
This worked for me:
import cv2
import numpy as np

X = []
for image_path in dir:  # dir: your iterable of image file paths
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    X.append(img)
X = np.array(X)
X = np.expand_dims(X, axis=3)  # (N, H, W) -> (N, H, W, 1)
Set axis to the position where the new dimension should be inserted: axis=3 here appends the channel dimension at the end, while axis=1 would insert a new dimension right after the first one.

how image size and resolution correlate in JPEG format

I have a photo with the size and pixel dimensions shown below:
I opened and saved it using Matlab, and the size of the photo became much smaller, with a smaller dpi value as well, but the pixel dimensions stayed the same.
Then I converted the two to .bmp format, and the bmp images are the same size! Does the dpi value correlate with image size, or are there other reasons behind this?
When an image is described as 7952 x 5304 pixels, the dpi value is of no consequence. It means nothing.
Where the dpi value comes in, is in describing how large the image will be printed.
You can always resize the image to the dimensions you want with imresize.
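A small Pillow sketch of the distinction (the file names and the 300 dpi value are assumptions): rewriting the dpi tag changes how large the image prints, not its pixel dimensions.

from PIL import Image

im = Image.open("photo.jpg")
print(im.size, im.info.get("dpi"))           # pixel dimensions and the stored dpi tag
im.save("photo_300dpi.jpg", dpi=(300, 300))  # same pixels, different print size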

Concept of Image pixels, resolution and magnification of image

I am trying to understand the relation between image pixels, their size, resolution, and how changing the resolution of an image results in the same image size but a slight amount of blurriness. I referred to these links but some doubts still remain:
1) So, what is the "commonly" accepted definition of "resolution"?
2) How(and Why) does changing the resolution of an image (say, a Desktop wallpaper) result in the same image size but a slight amount of blurriness? In this case, what definition of "resolution", "pixels" and "image size" are we talking about?
Thanks in advance!
The image that is displayed on the screen is composed of thousands (or millions) of small dots; these are called pixels.
The number of pixels that can be displayed on the screen is referred to as the resolution of the image.
Image resolution is the detail an image holds.
As the resolution goes up, the image becomes more clear. It becomes sharper, more defined, and more detailed as well.
In addition to image size, the quality of the image can also be manipulated. Here we use the word "compression". An uncompressed image is saved in a file format that doesn't compress the pixels in the image at all. Formats such as BMP or TIF files do not compress the image.
If you want to reduce the "file size" (number of megabytes required to save the image), you can choose to store your image as a JPG file and choose the amount of compression you want before saving the image.
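To see the effect on file size, a hedged Pillow sketch (the input file is an assumption; convert("RGB") is needed because JPEG has no alpha channel):

import os
from PIL import Image

im = Image.open("wallpaper.png").convert("RGB")
im.save("copy.bmp")                  # uncompressed
im.save("copy_q90.jpg", quality=90)  # mild JPEG compression
im.save("copy_q30.jpg", quality=30)  # strong compression, visible artifacts
for name in ("copy.bmp", "copy_q90.jpg", "copy_q30.jpg"):
    print(name, os.path.getsize(name), "bytes")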
These terms are explained in a video
Put in simple words, resolution is simply the number of pixels (picture cells) along the horizontal/vertical axes. The more pixels along an axis, the better the image; the formal definition is easily found on the web.
Changing the resolution simply means changing the number of pixels in the image. Higher resolution implies more pixels and hence a more detailed image. Reducing the resolution of an image decreases the number of pixels and therefore the amount of information in the image, leading to blurriness or jagged edges. File size additionally varies with the degree of compression. 'Pixels' and 'resolution' keep the meanings explained above.
