I am sending an image by curl to a Flask server, using this command:
curl -F "file=@image.jpg" http://localhost:8000/home
and I am trying to read the file using OpenCV on the server side.
On the server side I handle the image with this code:
@app.route('/home', methods=['POST'])
def home():
    data = request.files['file']
    img = cv.imread(data)
    fact_resp = model.predict(img)
    return jsonify(fact_resp)
I am getting this error:
img = cv.imread(data)
TypeError: expected string or Unicode object, FileStorage found
How do I read the file using OpenCV on the server side?
Thanks!
I had similar issues while using OpenCV with a Flask server. At first I saved the image to disk and then read it back with cv.imread() using the saved file path. Here is a sample:
import os
from werkzeug.utils import secure_filename

file = request.files['file']
filename = secure_filename(file.filename)  # sanitize the client-supplied filename
filepath = os.path.join(app.config['imgdir'], filename)
file.save(filepath)  # write the upload to disk
img = cv.imread(filepath)  # read it back with OpenCV
But I have since found an even more efficient approach: use cv.imdecode() to read the image straight from a NumPy array, as below:
# read the raw bytes of the uploaded file
filestr = request.files['file'].read()
# convert the byte string to a numpy array (numpy.fromstring is deprecated)
file_bytes = numpy.frombuffer(filestr, numpy.uint8)
# decode the numpy array into an image
img = cv.imdecode(file_bytes, cv.IMREAD_UNCHANGED)
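For completeness, here is a minimal sketch of the whole route using this approach; the model object and the response shape are carried over from the question, not verified here:

import numpy
import cv2 as cv
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/home', methods=['POST'])
def home():
    # read the upload's raw bytes and decode them in memory, no disk I/O
    filestr = request.files['file'].read()
    file_bytes = numpy.frombuffer(filestr, numpy.uint8)
    img = cv.imdecode(file_bytes, cv.IMREAD_UNCHANGED)
    fact_resp = model.predict(img)  # 'model' is assumed from the question
    return jsonify(fact_resp)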
After a bit of experimentation, I figured out a way to read the file with cv2 myself. First I read the image with PIL.Image.open(), then convert it to a NumPy array. This is my code:
@app.route('/home', methods=['POST'])
def home():
    data = request.files['file']
    img = Image.open(data)
    img = np.array(img)
    img = cv2.resize(img, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    fact_resp = model.predict(img)
    return jsonify(fact_resp)
I wonder if there is any straightforward way to do this without using PIL.
So in case you want to do something like:
file = request.files['file']
img = cv.imread(file)
then do it like this:
import numpy as np

file = request.files['file']
file_bytes = np.fromfile(file, np.uint8)  # read the file-like object into a byte array
img = cv.imdecode(file_bytes, cv.IMREAD_COLOR)
Now you don't need to call cv.imread() again; you can use img directly in the code that follows. This applies to OpenCV v3.x and onwards.
Two-line solution; change the grayscale flag to whatever you need:
file_bytes = numpy.fromfile(request.files['image'], numpy.uint8)
# convert numpy array to image
img = cv.imdecode(file_bytes, cv.IMREAD_GRAYSCALE)
Using Google Colab, I would like to read in a number of image files, each at a different URL, and then display each of them. I got the following code to work, but it only displays the first image (no output and no error message for the second). Also, if I add a print statement to the output, then no image displays at all. So what's the trick? Thanks.
!pip install pillow
import urllib.request
from PIL import Image

# First Image
imageURL1 = "https://www.example.com/dir/imagefile1.jpg"
imageName1 = "file1.jpg"
urllib.request.urlretrieve(imageURL1, imageName1)
img1 = Image.open(imageName1)
img1 # this works, but only if it is the only output

# Second Image
imageURL2 = "https://www.example.com/dir/imagefile2.jpg"
imageName2 = "file2.jpg"
urllib.request.urlretrieve(imageURL2, imageName2)
img2 = Image.open(imageName2)
img2 # does not display
#print("x") # a print kills the image display
Found an answer that works: use IPython's display() to show each image. In a notebook, a bare expression is rendered automatically only when it is the last statement in a cell; calling display() explicitly renders every image, and print() works as well.
!pip install pillow
import urllib.request
from PIL import Image
from IPython.display import display

# First Image
imageURL1 = "https://www.example.com/dir/imagefile1.jpg"
imageName1 = "file1.jpg"
urllib.request.urlretrieve(imageURL1, imageName1)
img1 = Image.open(imageName1)
display(img1) # renders even when it is not the last output
print("AND THE PRINT WORKS")

# Second Image
imageURL2 = "https://www.example.com/dir/imagefile2.jpg"
imageName2 = "file2.jpg"
urllib.request.urlretrieve(imageURL2, imageName2)
img2 = Image.open(imageName2)
display(img2)
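If you have more than a couple of images, the same pattern generalizes to a loop. A minimal sketch, assuming the placeholder URLs above:

import urllib.request
from PIL import Image
from IPython.display import display

# hypothetical list of URLs, mirroring the placeholders above
image_urls = [
    "https://www.example.com/dir/imagefile1.jpg",
    "https://www.example.com/dir/imagefile2.jpg",
]

for i, url in enumerate(image_urls, start=1):
    filename = "file{}.jpg".format(i)
    urllib.request.urlretrieve(url, filename)  # download to local disk
    display(Image.open(filename))  # display() renders even mid-loop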
I checked the old threads but didn't find anything helpful.
I am a JD Edwards developer and we have a requirement in JDE Orchestrator to process a base64 string.
Can anyone help me and share the full code for this? I am new to Groovy script.
def base64str = 'R0lGODlhAwADAHAAACwAAAAAAwADAIHsHCT97KYAAAAAAAACBIQRBwUAOw==' // < base64 string with 3x3 gif inside
def filename = System.properties['user.home']+'/documents/my.gif' // < filename with path
// save decoded base64 bytes into file
new File(filename).bytes = base64str.decodeBase64()
As a result, there should be a new file my.gif in the current user's Documents folder containing a 3x3 pixel image (a 43-byte file).
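For comparison, a sketch of the same decode-and-save in Python; the output path mirrors the Groovy example and is an assumption:

import base64
from pathlib import Path

base64_str = 'R0lGODlhAwADAHAAACwAAAAAAwADAIHsHCT97KYAAAAAAAACBIQRBwUAOw=='
out_path = Path.home() / 'documents' / 'my.gif'  # assumed output location
out_path.write_bytes(base64.b64decode(base64_str))  # decode and write the bytes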
I am sending a base64-encoded image via AJAX POST to a model stored in Google Cloud ML. I am getting an error telling me that my input_fn() is failing to decode the image and transform it into a JPEG.
Error:
Prediction failed: Error during model execution:
AbortionError(code=StatusCode.INVALID_ARGUMENT,
details="Expected image (JPEG, PNG, or GIF), got
unknown format starting with 'u\253Z\212f\240{\370
\351z\006\332\261\356\270\377' [[{{node map/while
/DecodeJpeg}} = DecodeJpeg[_output_shapes=
[[?,?,3]], acceptable_fraction=1, channels=3,
dct_method="", fancy_upscaling=true, ratio=1,
try_recover_truncated=false,
_device="/job:localhost/replica:0 /task:0
/device:CPU:0"](map/while/TensorArrayReadV3)]]")
The full serving_input_receiver_fn() is shown below. The first step, I believe, is to handle the incoming b64-encoded string and decode it. This is done with:
image = tensorflow.io.decode_base64(image_str_tensor)
The next step, I believe, is to open the bytes, but this is where I don't know how to handle the decoded b64 string with TensorFlow code and need help.
With a Python Flask app this can be done with:
image = Image.open(io.BytesIO(decoded))
Do I just pass the bytes through to be decoded by tf.image.decode_jpeg?
image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
Full input_fn() code:
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tensorflow.io.decode_base64(image_str_tensor)
        image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        image = tensorflow.expand_dims(image, 0)
        image = tensorflow.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
        image = tensorflow.squeeze(image, axis=[0])
        image = tensorflow.cast(image, dtype=tensorflow.uint8)
        return image
How do I decode my b64 string back into a JPEG and then convert the JPEG into a tensor?
This is a sample for processing b64-encoded images.
import os
import shutil
import tensorflow as tf

HEIGHT = 224
WIDTH = 224
CHANNELS = 3
IMAGE_SHAPE = (HEIGHT, WIDTH)
version = 'v1'

def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        return image_preprocessing(image)

    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {'input': images_tensor},
        {'image_bytes': input_ph})

# image_preprocessing and estimator are defined elsewhere in the training code
export_path = os.path.join('/tmp/models/json_b64', version)
if os.path.exists(export_path):  # clean up old exports with this version
    shutil.rmtree(export_path)
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
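On the client side, Cloud ML Engine's JSON interface base64-decodes any input alias whose value is wrapped as {"b64": ...} (the alias here ends in _bytes), so the graph's input_ph receives raw JPEG bytes. A sketch of building such a request body; the file name is a placeholder:

import base64
import json

# hypothetical local test image
with open('test.jpg', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('utf-8')

# the {"b64": ...} wrapper tells the service to base64-decode the value
# before it reaches the graph
request_body = json.dumps({'instances': [{'image_bytes': {'b64': encoded}}]})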
I've got code along the lines of the following, which generates a new image out of some existing images.
from PIL import Image as pyImage

def create_compound_image(back_image_path, fore_image_path, fore_x_position):
    # get_image_size and create_new_image_canvas are helpers defined elsewhere
    back_image_size = get_image_size(back_image_path)
    fore_image_size = get_image_size(fore_image_path)
    new_image_width = (fore_image_size[0] / 2) + back_image_size[0]
    new_image_height = fore_image_size[1] + back_image_size[1]
    new_image = create_new_image_canvas(new_image_width, new_image_height)
    back_image = pyImage.open(back_image_path)
    fore_image = pyImage.open(fore_image_path)
    new_image.paste(back_image, (0, 0), mask=None)
    new_image.paste(fore_image, (fore_x_position, back_image_size[1]), mask=None)
    return new_image
Later in the code, I've got something like this:
from kivy.uix.image import Image
img = Image(source = create_compound_image(...))
If I do the above, I get the message that Image.source only accepts string/unicode.
If I create a StringIO.StringIO() object from the new image and try to use that as the source, the error message is the same as above. If I use the output of the StringIO object's getvalue() method as the source, the message is that the source must be an encoded string without NULL bytes, not str.
What is the proper way to use the output of the create_compound_image() function as the source when creating a kivy Image object?
It seems you want to just combine two images into one. You can create a texture using Texture.create and blit each image's pixel data to a particular pos using Texture.blit_buffer.
from kivy.core.image import Image
from kivy.graphics.texture import Texture

bkimg = Image(bk_img_path)
frimg = Image(fr_img_path)

new_size = ((frimg.texture.size[0] / 2) + bkimg.texture.size[0],
            frimg.texture.size[1] + bkimg.texture.size[1])
tex = Texture.create(size=new_size)

# texture.pixels is RGBA data, so pass colorfmt='rgba' when blitting
tex.blit_buffer(pbuffer=bkimg.texture.pixels, pos=(0, 0),
                size=bkimg.texture.size, colorfmt='rgba')
tex.blit_buffer(pbuffer=frimg.texture.pixels,
                pos=(fore_x_position, bkimg.texture.size[1]),
                size=frimg.texture.size, colorfmt='rgba')
Now you can use this texture anywhere directly, like:
from kivy.uix.image import Image
image = Image()
image.texture = tex
source is a StringProperty and expects a path to a file. That's why you got errors when you tried to pass a PIL.Image object, a StringIO object, or a string representation of the image: it's not what the framework wants. As for getting an image from StringIO, that was discussed before here:
https://groups.google.com/forum/#!topic/kivy-users/l-3FJ2mA3qI
https://github.com/kivy/kivy/issues/684
You can also try a much simpler, quick-and-dirty method: just save your image to a temporary file and read it the normal way, as in the sketch below.
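A sketch of that quick-and-dirty route, reusing create_compound_image() from the question; the argument values are placeholders:

import tempfile
from kivy.uix.image import Image

pil_img = create_compound_image(back_path, fore_path, fore_x)  # placeholder args
tmp = tempfile.NamedTemporaryFile(suffix='.png', delete=False)
pil_img.save(tmp.name)  # write the PIL image to a temporary file
img_widget = Image(source=tmp.name)  # source now gets the file path it expects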
I open an image file as a blob with Python Wand, but the MD5 is incorrect:
import hashlib
from wand.image import Image

with Image(filename=picture) as img:
    blob = img.make_blob()
    print('blob md5', hashlib.md5(blob).hexdigest())

with open(picture, 'rb') as img:
    content = img.read()
    print('content md5', hashlib.md5(content).hexdigest())
.make_blob() does not write exactly the same binary as its source file, since ImageMagick re-encodes the image when producing the blob. Use the .signature property instead if you want a signature of the image pixels rather than of the file representation.
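For example, a minimal sketch, with picture being the path variable from the question:

import hashlib
from wand.image import Image

with Image(filename=picture) as img:
    # signature is a SHA-256 digest of the decoded pixel stream,
    # so it is stable across re-encodings of the same pixels
    print('pixel signature', img.signature)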