How to read an image from an in-memory buffer (StringIO) or from a URL with the OpenCV Python library

Just sharing a way to create an OpenCV image object from an in-memory buffer or from a URL to improve performance.
Sometimes we get image binary data from a URL. To avoid additional file I/O we want to imread this image from an in-memory buffer or from the URL directly, but imread only supports reading an image from the file system by path.

To create an OpenCV image object from an in-memory buffer (StringIO), we can use the OpenCV API imdecode; see the code below:
import cv2
import numpy as np
from urllib2 import urlopen
from cStringIO import StringIO

def create_opencv_image_from_stringio(img_stream, cv2_img_flag=0):
    img_stream.seek(0)  # rewind, in case the stream has already been read
    img_array = np.asarray(bytearray(img_stream.read()), dtype=np.uint8)
    return cv2.imdecode(img_array, cv2_img_flag)

def create_opencv_image_from_url(url, cv2_img_flag=0):
    request = urlopen(url)
    img_array = np.asarray(bytearray(request.read()), dtype=np.uint8)
    return cv2.imdecode(img_array, cv2_img_flag)

As pointed out in the comments, the accepted answer is outdated and no longer functional.
Luckily, I recently had to solve this very problem using Python 3.7 with OpenCV 4.0.
To handle image loading from a URL or an in-memory buffer, I defined the following two functions:
import urllib.request
import cv2
import numpy as np

def get_opencv_img_from_buffer(buffer, flags):
    # wrap the raw bytes in a NumPy array and let OpenCV decode them
    bytes_as_np_array = np.frombuffer(buffer.read(), dtype=np.uint8)
    return cv2.imdecode(bytes_as_np_array, flags)

def get_opencv_img_from_url(url, flags):
    req = urllib.request.Request(url)
    return get_opencv_img_from_buffer(urllib.request.urlopen(req), flags)
As you can see, one depends on the other.
The first one, get_opencv_img_from_buffer, can be used to get an image object from an in-memory buffer. It assumes that the buffer has a read method and that it returns an instance of an object that implements the buffer protocol.
The second one, get_opencv_img_from_url, generates an image directly from a URL.
The flags argument is passed on to cv2.imdecode, which has the following constants predefined in cv2 (a short usage sketch follows the list):
cv2.IMREAD_ANYCOLOR - If set, the image is read in any possible color format.
cv2.IMREAD_ANYDEPTH - If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit.
cv2.IMREAD_COLOR - If set, always convert image to the 3 channel BGR color image.
cv2.IMREAD_GRAYSCALE - If set, always convert image to the single channel grayscale image (codec internal conversion).
cv2.IMREAD_IGNORE_ORIENTATION - If set, do not rotate the image according to EXIF's orientation flag.
cv2.IMREAD_LOAD_GDAL - If set, use the gdal driver for loading the image.
cv2.IMREAD_REDUCED_COLOR_2 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/2.
cv2.IMREAD_REDUCED_COLOR_4 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/4.
cv2.IMREAD_REDUCED_COLOR_8 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/8.
cv2.IMREAD_REDUCED_GRAYSCALE_2 - If set, always convert image to the single channel grayscale image and the image size reduced 1/2.
cv2.IMREAD_REDUCED_GRAYSCALE_4 - If set, always convert image to the single channel grayscale image and the image size reduced 1/4.
cv2.IMREAD_REDUCED_GRAYSCALE_8 - If set, always convert image to the single channel grayscale image and the image size reduced 1/8.
cv2.IMREAD_UNCHANGED - If set, return the loaded image as is (with alpha channel, otherwise it gets cropped).
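
As a quick usage sketch (the URL and file name below are hypothetical placeholders, not from the original answer; note that cv2.imdecode returns None if decoding fails):

import io

img = get_opencv_img_from_url("https://example.com/photo.jpg", cv2.IMREAD_COLOR)
print(img.shape)  # (height, width, 3) for a BGR image

# the buffer variant works with any object exposing read(), e.g. io.BytesIO
with open("photo.jpg", "rb") as f:
    img_gray = get_opencv_img_from_buffer(io.BytesIO(f.read()), cv2.IMREAD_GRAYSCALE)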

Related

How do you save an array to a PNG image in RAM using Julia?

Saving an image to a PNG file is not a problem; the following code works fine (using Julia 1.5, FileIO 1.4.1 and ImageIO 0.3.0):
using FileIO
image = rand(UInt8, 200, 150, 3)
save("test.png", image)
However, I cannot find how to save the PNG image to a RAM buffer instead. I tried this:
io = IOBuffer()
save(Stream(format"PNG", io), image)
data = take!(io)
There's no error, but the resulting data is much too small: just 809 bytes (instead of about 90kB for the test.png file).
What am I doing wrong?
Your I/O code is correct but you are incorrectly generating the random image.
It should be:
using Images
image = [RGB(rand(N0f8,3)...) for x in 1:200, y in 1:150]
Now both the PNG file and the buffer will have the same size in bytes (since PNG is compressed, the exact number will vary with each randomized run):
julia> save(Stream(format"PNG", io), image)
90415

Image plotted using matplotlib seems to have different colors than the original JPEG image

The screenshot below compares the same image plotted using matplotlib on the left and Mac Preview on the right.
Code for plotting the image using matplotlib is also fairly simple.
import matplotlib.pyplot as plt
import argparse
import skimage.io  # the io submodule must be imported explicitly

parser = argparse.ArgumentParser(description='Color check.')
parser.add_argument('--image', required=False, metavar="path or URL to image")
args = parser.parse_args()

image = skimage.io.imread(args.image)
plt.imshow(image)
plt.show()
As you can see, the colors are visibly different in the two images. Why is this happening, and which one should I trust to be the correct color representation?
EDIT:
I plotted the image using OpenCV's imshow and it looks fine.
and here is the code:
import argparse
import cv2

windowName = "image"
cv2.namedWindow(windowName, cv2.WINDOW_NORMAL)
cv2.resizeWindow(windowName, 600, 600)

parser = argparse.ArgumentParser(description='Color check.')
parser.add_argument('--image', required=False, metavar="path or URL to image")
args = parser.parse_args()

image = cv2.imread(args.image)
cv2.imshow(windowName, image)
cv2.waitKey(0)
The disagreement probably comes from the difference in colorspace handling between the two programs. While Mac Preview is colorspace aware, and therefore should display images using the proper colorspace in which they were created/tagged, matplotlib assumes the sRGB colorspace.
I would say that the color representation used by Mac Preview is most likely the one also used by the creator of the image, and therefore the one I would pick.
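
If you want matplotlib to show the same colors as Preview, one option is to convert the image to sRGB before plotting. Here is a minimal sketch of that idea using Pillow's ImageCms module (my own illustration, assuming the JPEG carries an embedded ICC profile; the file name is a placeholder):

import io
from PIL import Image, ImageCms
import matplotlib.pyplot as plt

img = Image.open("photo.jpg")  # hypothetical file name
icc_bytes = img.info.get("icc_profile")
if icc_bytes:  # only convert when an embedded profile is present
    src = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    dst = ImageCms.createProfile("sRGB")
    img = ImageCms.profileToProfile(img, src, dst)  # map pixels into sRGB
plt.imshow(img)
plt.show()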

Matlab imread returns a 4-channel matrix

When I imread an image in Matlab, it returns a 4-channel image:
im: 1012x972x4 uint8.
What format is this image? How can I check its color format (RGB, CMYK, etc.)? I opened it in Gimp and the color profile is simply 'sRGB built-in'.
From the imread() documentation:
The return value A is an array containing the image data. If the file contains a grayscale image, A is an M-by-N array. If the file contains a truecolor image, A is an M-by-N-by-3 array. For TIFF files containing color images that use the CMYK color space, A is an M-by-N-by-4 array. See TIFF in the Format-Specific Information section for more information.
So the answer is apparently that this image's color space is CMYK.
If you want to check a general input, then, from the same page:
To determine which color space is used, use imfinfo to get information about the graphics file and look at the value of the PhotometricInterpretation field.
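
Outside Matlab, the same TIFF tag can be inspected with other tools; as a quick cross-check (my own sketch, not part of the original answer), Pillow exposes it directly:

from PIL import Image

im = Image.open("image.tif")  # hypothetical file name
print(im.mode)                # 'CMYK' for a CMYK TIFF
# TIFF tag 262 is PhotometricInterpretation; the value 5 means 'Separated' (CMYK)
print(im.tag_v2.get(262))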

matplotlib.pyplot.imsave backend

I'm working in Spyder with matplotlib.pyplot and want to save numpy arrays to images.
The documentation of imsave() says that the format to which I can save depends on the backend. So what exactly is the backend? I seem to be able to save .tiff images, but e.g. I want them saved as 8-bit TIFFs instead of RGB TIFFs. Any idea where I can change that?
If you are trying to save an array as a TIFF (with no axis markers etc.) from matplotlib, you might be better off using PIL.
(Imports are spelled out explicitly below, so this does not require ipython --pylab.)
write:
import numpy as np
import PIL.Image as Image

im = Image.new('L', (100, 100))  # 'L' = 8-bit grayscale
im.putdata(np.floor(np.random.rand(100, 100) * 256).astype('uint8').ravel())
im.save('test.tif')
The ravel() is important: putdata expects a sequence (i.e. 1D), not a 2D array.
read:
import matplotlib.pyplot as plt

im2 = Image.open('test.tif')
plt.figure()
plt.imshow(im2)
and the output file:
$ tiffinfo test.tif
TIFF Directory at offset 0x8 (8)
Image Width: 100 Image Length: 100
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: min-is-black
Rows/Strip: 100
Planar Configuration: single image plane

How to put a region of a Pyglet image into a PIL Image?

My app reads frames from video (using pyglet), extracts a 'strip' (i.e. a region: full width by 1 to 6 video lines), then pastes (appends) each strip into an image that can later be written out as a *.png file.
The problem is that Pyglet GL images are stored in graphics memory, so they are very limited in size.
My current clunky workaround is to build up the to-be-png image as small tiles (i.e. within the pyglet GL size limit); when a tile is full, I write it out to a *.png file, read that back in as a PIL Image, then paste/append the tile image into the final PIL Image *.png file.
I reckon it should be possible to convert the pyglet-GL strip to a PIL Image-friendly format and paste/append it straight into the final PIL Image, removing the need for the tile business, which greatly complicates things and hurts performance.
But I can't find how to get the strip data into a form that I can paste into a PIL Image.
I have seen many requests about converting the other way, but I've never run with the herd.
I know it's bad form to reply to one's own question, but I worked out an answer which may be useful to others, so here it is (source, travo_image, travo_filled_to, strip_w, strip_h and vid_frame_h are set up elsewhere in my app):

nextframe = source.get_next_video_frame()  # get first video frame
while nextframe is not None:  # get_next_video_frame() returns None at EOF
    # extract the strip from this frame, convert it to a PIL Image-friendly
    # form, then paste it into travo_image
    strip = nextframe.get_region(0, (vid_frame_h//2) - (strip_h//2), strip_w, strip_h)
    strip_image_data = strip.get_image_data()
    strip_data = strip_image_data.get_data(strip_image_data.format, strip_image_data.pitch)
    strip_Image = Image.frombuffer("RGB", (strip_w, strip_h), strip_data)
    travo_image.paste(strip_Image, (0, travo_filled_to, strip_w, travo_filled_to + strip_h))
    travo_filled_to = travo_filled_to + strip_h  # update the fill pointer
    nextframe = source.get_next_video_frame()  # get next video frame
