How do you save an array to a PNG image in RAM using Julia?

Saving an image to a PNG file is not a problem; the following code works fine (using Julia 1.5, FileIO 1.4.1 and ImageIO 0.3.0):
using FileIO
image = rand(UInt8, 200, 150, 3)
save("test.png", image)
However, I cannot find how to save the PNG image to a RAM buffer instead. I tried this:
io = IOBuffer()
save(Stream(format"PNG", io), image)
data = take!(io)
There's no error, but the resulting data is much too small: just 809 bytes (instead of about 90kB for the test.png file).
What am I doing wrong?

Your I/O code is correct but you are incorrectly generating the random image.
It should be:
using Images
image = [RGB(rand(N0f8,3)...) for x in 1:200, y in 1:150]
Now both the PNG file and the buffer will have the same size in bytes (since PNG is compressed, the exact number varies with each randomized run):
julia> save(Stream(format"PNG", io), image)
90415
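For reference, here is the complete round trip in one place: a minimal sketch combining the corrected image construction with the question's buffer code (nothing new beyond the snippets above).
using FileIO, Images
image = [RGB(rand(N0f8, 3)...) for x in 1:200, y in 1:150]  # 200×150 array of RGB{N0f8} pixels
io = IOBuffer()
save(Stream(format"PNG", io), image)  # encode the PNG into the in-memory buffer
data = take!(io)                      # raw PNG bytes, comparable in size to test.png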

Related

MATLAB imread() can't read old images?

So I ran into a weird problem with the MATLAB imread() function where it can't read old images (in this case, a bmp from 2002). Basically, when I pass the image as an argument to imread(), it recognises the image as grayscale even though it is clearly RGB.
Image is part of a standard test set available to download from here. I am using the 'boy.bmp' image.
% In Downloads folder
I_dl = imread('boy.bmp');
whos %to show current variables
OUTPUT:
Name      Size         Bytes     Class    Attributes
I_dl      512x768      393216    uint8
The file size is actually around 390KB, so it's not that only one channel is somehow getting loaded into the workspace. It looks like it has to do with some older encoding scheme.
I ran an imshow() to check the image and this was the result.
TEMPORARY WORK-AROUND: I imported the image into GIMP, saved it as an xcf (GIMP's native format) and then exported it as a bmp. Then I did the imread() and then whos. It works.
Name      Size           Bytes      Class    Attributes
I         512x768x3      1179648    uint8
The file size expanded to 1.2MB too. Strange.
Anyone else faced the same issue?
Regards.
boy.bmp contains an indexed image. You should load and use the colormap matrix as well:
[I_dl,cmap] = imread('boy.bmp');
imshow(I_dl,cmap);
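If you also want a plain truecolor array (similar to what the GIMP re-export produced), a small sketch using ind2rgb, which expands an indexed image through its colormap:
[I_dl, cmap] = imread('boy.bmp');
I_rgb = ind2rgb(I_dl, cmap);   % 512x768x3 double array with values in [0,1]
I_rgb8 = uint8(255 * I_rgb);   % optional: 8 bits per channel, like a 24-bit BMP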

Convert image to Blob

I want to upload image data to a php script on the server. I have a URL for an image source (PNG; the image might be located on a different server). I load this into a Javascript image, draw it into a canvas and use the canvas.toBlob() method (or a polyfill, as it is not widely supported yet) to generate a blob holding the image data. This works fine, but I noticed that the resulting blob is much bigger than the original image data.
In contrast, if I use an HTML file input and let the user select an image on the client, the resulting blob has the same size as the original image. Can I get image data from a canvas that is equal to the original image size?
I guess the reason is that I lose the PNG compression (or any image compression) when using the canvas.toBlob() polyfill:
value: function (callback, type, quality) {
    var binStr = atob(this.toDataURL(type, quality).split(',')[1]),
        len = binStr.length,
        arr = new Uint8Array(len);
    for (var i = 0; i < len; i++) {
        arr[i] = binStr.charCodeAt(i);
    }
    callback(new Blob([arr], {type: type || 'image/png'}));
}
I am confused by so many conversion steps (image, canvas, blob), so maybe there is an alternative way to get the image data from a given URL and append it to FormData to send it to the server?
When saving as PNG, toDataURL uses only one of the many layouts the PNG format supports: 8 bits per channel RGBA (32 bits per pixel), compressed. There is no option to use any of the other layouts, so you are forced to include redundant data when you save as a PNG, even though PNG also has 24-bit and 8-bit variants. PNG also offers several compression settings, though I am unsure which is used by each browser.
In most cases it is best to send the original image. If you need to modify the image and do not use the alpha channel (no transparency) but still want high quality, send it as a JPEG with quality set to 1 (the maximum).
You may also consider using a custom PNG encoder that gives you access to more of the PNG encoding options, or try one of the many other formats available, or even make up your own format, though you will be hard pressed to improve on JPEG and WebP.
You could also consider compressing the data on the server when you store it; even JPEG and WebP have a little room for more compression. For transport you should not worry, as most data these days is compressed as it leaves the page and most definitely compressed by the time it leaves the client's ISP.
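If the image URL is accessible to the page (same origin, or served with CORS headers), one way to avoid the canvas round trip entirely is to fetch the original, still-compressed bytes and append them to the FormData as a Blob. A rough sketch, assuming a fetch-capable browser; upload.php and the field name are placeholders:
fetch(imageUrl)
  .then(function (response) { return response.blob(); })   // original PNG bytes, untouched
  .then(function (blob) {
    var formData = new FormData();
    formData.append('image', blob, 'image.png');
    return fetch('upload.php', {method: 'POST', body: formData});
  });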

matplotlib.pyplot.imsave backend

I'm working in Spyder with matplotlib.pyplot and want to save numpy arrays as images.
The documentation of imsave() says that the format to which I can save depends on the backend. So what exactly is the backend? I seem to be able to save .tiff images, but, for example, I want them to be saved as 8-bit TIFFs instead of RGB TIFFs. Any idea where I can change that?
If you are trying to save an array as a TIFF (with no axis markers etc.) from matplotlib, you might be better off using PIL.
(This assumes you are using ipython --pylab, so rand is defined.)
write:
import PIL.Image as Image
im = Image.new('L',(100,100))
im.putdata(np.floor(rand(100,100) * 256).astype('uint8').ravel())
im.save('test.tif')
The ravel() is important: putdata expects a sequence (i.e., 1-D), not an array.
read:
im2 = Image.open('test.tif')
figure()
imshow(im2)
and the output file:
$ tiffinfo test.tif
TIFF Directory at offset 0x8 (8)
Image Width: 100 Image Length: 100
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: min-is-black
Rows/Strip: 100
Planar Configuration: single image plane
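With a current Pillow, the same write step can be expressed more directly with Image.fromarray, which infers the 8-bit grayscale ('L') mode from a 2-D uint8 array; a sketch:
import numpy as np
from PIL import Image

arr = (np.random.rand(100, 100) * 255).astype(np.uint8)  # 2-D uint8 array
Image.fromarray(arr).save('test.tif')  # written as a single-channel, 8-bit TIFF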

How to read image from in memory buffer (StringIO) or from url with opencv python library

Just sharing a way to create an OpenCV image object from an in-memory buffer or from a URL, to improve performance.
Sometimes we get image binary data from a URL; to avoid additional file I/O, we want to read this image from an in-memory buffer or from the URL directly, but imread only supports reading an image from the file system via a path.
To create an OpenCV image object from an in-memory buffer (StringIO), we can use the OpenCV API imdecode; see the code below:
import cv2
import numpy as np
from urllib2 import urlopen
from cStringIO import StringIO

def create_opencv_image_from_stringio(img_stream, cv2_img_flag=0):
    img_stream.seek(0)
    img_array = np.asarray(bytearray(img_stream.read()), dtype=np.uint8)
    return cv2.imdecode(img_array, cv2_img_flag)

def create_opencv_image_from_url(url, cv2_img_flag=0):
    request = urlopen(url)
    img_array = np.asarray(bytearray(request.read()), dtype=np.uint8)
    return cv2.imdecode(img_array, cv2_img_flag)
As pointed out in the comments to the accepted answer, it is outdated and no longer functional.
Luckily, I had to solve this very problem using Python 3.7 with OpenCV 4.0 recently.
To handle image loading from a URL or an in-memory buffer, I defined the following two functions:
import urllib.request
import cv2
import numpy as np

def get_opencv_img_from_buffer(buffer, flags):
    bytes_as_np_array = np.frombuffer(buffer.read(), dtype=np.uint8)
    return cv2.imdecode(bytes_as_np_array, flags)

def get_opencv_img_from_url(url, flags):
    req = urllib.request.Request(url)
    return get_opencv_img_from_buffer(urllib.request.urlopen(req), flags)
As you can see, one depends on the other.
The first one, get_opencv_img_from_buffer, can be used to get an image object from an in-memory buffer. It assumes that the buffer has a read method and that read returns an object implementing the buffer protocol.
The second one, get_opencv_img_from_url, generates an image directly from a URL.
The flags argument is passed on to cv2.imdecode,
which has the following constants predefined in cv2:
cv2.IMREAD_ANYCOLOR - If set, the image is read in any possible color format.
cv2.IMREAD_ANYDEPTH - If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit.
cv2.IMREAD_COLOR - If set, always convert image to the 3 channel BGR color image.
cv2.IMREAD_GRAYSCALE - If set, always convert image to the single channel grayscale image (codec internal conversion).
cv2.IMREAD_IGNORE_ORIENTATION - If set, do not rotate the image according to EXIF's orientation flag.
cv2.IMREAD_LOAD_GDAL - If set, use the gdal driver for loading the image.
cv2.IMREAD_REDUCED_COLOR_2 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/2.
cv2.IMREAD_REDUCED_COLOR_4 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/4.
cv2.IMREAD_REDUCED_COLOR_8 - If set, always convert image to the 3 channel BGR color image and the image size reduced 1/8.
cv2.IMREAD_REDUCED_GRAYSCALE_2 - If set, always convert image to the single channel grayscale image and the image size reduced 1/2.
cv2.IMREAD_REDUCED_GRAYSCALE_4 - If set, always convert image to the single channel grayscale image and the image size reduced 1/4.
cv2.IMREAD_REDUCED_GRAYSCALE_8 - If set, always convert image to the single channel grayscale image and the image size reduced 1/8.
cv2.IMREAD_UNCHANGED - If set, return the loaded image as is (with alpha channel, otherwise it gets cropped).
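A short usage sketch of the two functions above (the URL is a placeholder, and the local file read is just one way to obtain a buffer object with a read method):
import io

img_color = get_opencv_img_from_url('https://example.com/some_image.png', cv2.IMREAD_COLOR)
with open('some_local_image.png', 'rb') as f:
    img_gray = get_opencv_img_from_buffer(io.BytesIO(f.read()), cv2.IMREAD_GRAYSCALE)
print(img_color.shape, img_gray.shape)  # e.g. (height, width, 3) and (height, width)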

(crystal reports) How to decrease RPT file size

I'm trying to decrease the size of my RPT file. The problem is the image: when I remove the image, the file is only about 50KB; when I add it, the size increases to 600KB.
I reduced the resolution of my image, so it is only 12KB, and I tried JPG, PNG and TIFF. With a bigger image, the RPT file grows even more (for a 200KB image it is about a 2MB RPT).
Do you have any solution to this problem?
Regards,
-- Marcin Krupowicz
Hi,
Try inserting the image as an OLE object instead of as a picture object.
