from PIL import Image
im = Image.open('abc.jpg')
print(im.format)  # output: JPEG
How can I write the above code using skimage?
I tried:
import skimage.io
from PIL import Image
im = skimage.io.imread('abc.jpg')
print(Image.fromarray(im).format)
Did not work for me.
I don't think you can do that with skimage.
When you load an image with PIL like this:
im = Image.open(path)
it returns a PIL Image object, which stores the width, height, colourspace, palette and EXIF data alongside the pixels.
When you load an image with skimage, you just get back a plain NumPy array of pixel values, so there is nowhere to store the format or any other metadata.
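A minimal sketch of the difference, assuming Pillow and NumPy are installed (the JPEG is fabricated in memory so the snippet is self-contained):

```python
import io
import numpy as np
from PIL import Image

# Create a small JPEG in memory so the example needs no file on disk.
buf = io.BytesIO()
Image.new("RGB", (8, 8), (255, 0, 0)).save(buf, format="JPEG")
buf.seek(0)

pil_im = Image.open(buf)
print(pil_im.format)           # JPEG -- PIL remembers the source format
print(pil_im.size)             # (8, 8)

arr = np.array(pil_im)         # comparable to what skimage.io.imread returns
print(arr.shape)               # (8, 8, 3) -- just pixels
print(hasattr(arr, "format"))  # False -- no place for metadata
```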
The screenshot below compares the same image plotted with matplotlib on the left and Mac Preview on the right.
The code for plotting the image with matplotlib is fairly simple:
import matplotlib.pyplot as plt
import argparse
import skimage.io
parser = argparse.ArgumentParser(description='Color check.')
parser.add_argument('--image', required=False, metavar='path or URL to image')
args = parser.parse_args()
image = skimage.io.imread(args.image)
plt.imshow(image)
plt.show()
As you can see the colors are visibly different in the two images. Why is this happening and which one should I trust to be the correct color representation?
EDIT:
I plotted the image using opencv's imshow and it looks fine.
and here is the code:
import argparse
import cv2
windowName = "image"
cv2.namedWindow(windowName, cv2.WINDOW_NORMAL)
cv2.resizeWindow(windowName, 600, 600)
parser = argparse.ArgumentParser(description='Color check.')
parser.add_argument('--image', required=False, metavar='path or URL to image')
args = parser.parse_args()
image = cv2.imread(args.image)
cv2.imshow(windowName, image)
cv2.waitKey(0)
The disagreement probably comes from the different colorspaces used by the two programs. Mac Preview is colorspace-aware and displays images in the colorspace they were created or tagged with, whereas matplotlib assumes sRGB.
I would say the color representation shown by Mac Preview is most likely the one intended by the creator of the image, so that is the one I would trust.
How do I load a picture (file) into a multidimensional NumPy array and show it? I need an example for PyPy.
In python I have source code:
from scipy import misc
import matplotlib.pyplot as plt
picture = misc.imread('parrot.jpg')
plt.imshow(picture)
plt.show()
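As an aside, scipy.misc.imread was removed in SciPy 1.2, so the load step above no longer works on current SciPy. One self-contained sketch of the same load-and-convert step using Pillow (assuming it is installed; the JPEG is created in memory here, but with a real file you would just pass its path to Image.open):

```python
import io
import numpy as np
from PIL import Image

# Fabricate a tiny JPEG in memory so the sketch runs without a file on disk.
buf = io.BytesIO()
Image.new("RGB", (8, 8)).save(buf, format="JPEG")
buf.seek(0)

picture = np.array(Image.open(buf))  # multidimensional NumPy array
print(picture.shape)                 # (8, 8, 3)
```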
Sharing a way to create an OpenCV image object from an in-memory buffer or from a URL, to improve performance.
Sometimes we get the image binary from a URL; to avoid the extra file I/O we want to read the image straight from an in-memory buffer or from the URL, but imread only supports reading from a file-system path.
To create an OpenCV image object from an in-memory buffer (StringIO), we can use the OpenCV API imdecode; see the code below:
import cv2
import numpy as np
from urllib2 import urlopen
from cStringIO import StringIO

def create_opencv_image_from_stringio(img_stream, cv2_img_flag=0):
    img_stream.seek(0)
    img_array = np.asarray(bytearray(img_stream.read()), dtype=np.uint8)
    return cv2.imdecode(img_array, cv2_img_flag)

def create_opencv_image_from_url(url, cv2_img_flag=0):
    request = urlopen(url)
    img_array = np.asarray(bytearray(request.read()), dtype=np.uint8)
    return cv2.imdecode(img_array, cv2_img_flag)
As pointed out in the comments to the accepted answer, it is outdated and no longer functional.
Luckily, I had to solve this very problem using Python 3.7 with OpenCV 4.0 recently.
To handle image loading from an URL or an in-memory buffer, I defined the following two functions:
import urllib.request
import cv2
import numpy as np

def get_opencv_img_from_buffer(buffer, flags):
    bytes_as_np_array = np.frombuffer(buffer.read(), dtype=np.uint8)
    return cv2.imdecode(bytes_as_np_array, flags)

def get_opencv_img_from_url(url, flags):
    req = urllib.request.Request(url)
    return get_opencv_img_from_buffer(urllib.request.urlopen(req), flags)
As you can see, one depends on the other.
The first, get_opencv_img_from_buffer, can be used to get an image object from an in-memory buffer. It assumes the buffer has a read method that returns an object implementing the buffer protocol.
The second, get_opencv_img_from_url, loads an image directly from a URL.
The flags argument is passed on to cv2.imdecode,
which has the following constants predefined in cv2:
cv2.IMREAD_ANYCOLOR - If set, the image is read in any possible color format.
cv2.IMREAD_ANYDEPTH - If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit.
cv2.IMREAD_COLOR - If set, always convert image to the 3 channel BGR color image.
cv2.IMREAD_GRAYSCALE - If set, always convert image to the single channel grayscale image (codec internal conversion).
cv2.IMREAD_IGNORE_ORIENTATION - If set, do not rotate the image according to EXIF's orientation flag.
cv2.IMREAD_LOAD_GDAL - If set, use the gdal driver for loading the image.
cv2.IMREAD_REDUCED_COLOR_2 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/2.
cv2.IMREAD_REDUCED_COLOR_4 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/4.
cv2.IMREAD_REDUCED_COLOR_8 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/8.
cv2.IMREAD_REDUCED_GRAYSCALE_2 - If set, always convert image to the single channel grayscale image and the image size reduced 1/2.
cv2.IMREAD_REDUCED_GRAYSCALE_4 - If set, always convert image to the single channel grayscale image and the image size reduced 1/4.
cv2.IMREAD_REDUCED_GRAYSCALE_8 - If set, always convert image to the single channel grayscale image and the image size reduced 1/8.
cv2.IMREAD_UNCHANGED - If set, return the loaded image as is (with alpha channel, otherwise it gets cropped).
I am studying some simple image processing techniques and came across a question. A task that I want to carry out is to find circles of fixed size in a given image. I'd like to write my own code for learning purposes.
The image I have is a JPEG file, i.e. binary data. How can I turn it into a numeric form so that I can do quantitative analysis on it? I want to compute the cross-correlation between a template and the image to match the circle.
Thanks a lot for your help.
Can you try converting the JPEG binary into a matrix data structure?
For example, in Java you could use the javax.imageio package to get a matrix out of an image:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

File file = new File("file.jpg");
BufferedImage img = ImageIO.read(file);  // throws IOException if the file cannot be read
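The same idea works in Python: once the image is a NumPy array, the cross-correlation the question asks about can be written by hand. A sketch with fabricated data (a bright disc in a synthetic image, matched against a disc template; all names here are illustrative):

```python
import numpy as np

# Fabricate a 32x32 'image' containing a disc of radius 5 centred at (x=20, y=12).
yy, xx = np.mgrid[0:32, 0:32]
image = ((xx - 20) ** 2 + (yy - 12) ** 2 <= 5 ** 2).astype(float)

# An 11x11 template holding the same disc shape, centred in the patch.
ty, tx = np.mgrid[0:11, 0:11]
template = ((tx - 5) ** 2 + (ty - 5) ** 2 <= 5 ** 2).astype(float)

# Brute-force cross-correlation: slide the template and take dot products.
th, tw = template.shape
scores = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
for i in range(scores.shape[0]):
    for j in range(scores.shape[1]):
        scores[i, j] = np.sum(image[i:i + th, j:j + tw] * template)

# The peak of the score map marks the best alignment.
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(i + th // 2, j + tw // 2)  # centre of the best match: 12 20
```

For real images you would normalise both patches first (normalized cross-correlation), or use a library routine such as cv2.matchTemplate, which implements the same sliding-window comparison efficiently.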