I am sending a base64-encoded image via AJAX POST to a model stored in Google CloudML. I am getting an error telling me that my input_fn() is failing to decode the image and transform it into a JPEG.
Error:
Prediction failed: Error during model execution:
AbortionError(code=StatusCode.INVALID_ARGUMENT,
details="Expected image (JPEG, PNG, or GIF), got
unknown format starting with 'u\253Z\212f\240{\370
\351z\006\332\261\356\270\377' [[{{node map/while
/DecodeJpeg}} = DecodeJpeg[_output_shapes=
[[?,?,3]], acceptable_fraction=1, channels=3,
dct_method="", fancy_upscaling=true, ratio=1,
try_recover_truncated=false,
_device="/job:localhost/replica:0 /task:0
/device:CPU:0"](map/while/TensorArrayReadV3)]]")
Below is the full serving_input_receiver_fn():
The first step, I believe, is to handle the incoming b64-encoded string and decode it. This is done with:
image = tensorflow.io.decode_base64(image_str_tensor)
The next step, I believe, is to open the bytes, but this is where I don't know how to handle the decoded b64 string with TensorFlow code and need help.
In a Python Flask app this can be done with:
image = Image.open(io.BytesIO(decoded))
Do I just pass the bytes through to be decoded by tf.image.decode_jpeg?
image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
Full input_fn() code:
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tensorflow.io.decode_base64(image_str_tensor)
        image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        image = tensorflow.expand_dims(image, 0)
        image = tensorflow.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
        image = tensorflow.squeeze(image, axis=[0])
        image = tensorflow.cast(image, dtype=tensorflow.uint8)
        return image
How do I decode my b64 string back into a JPEG and then convert the JPEG to a tensor?
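For what it's worth, a minimal sketch of that chaining, fixing the bug visible in the code above (decode_jpeg is handed image_str_tensor instead of the decoded bytes); note that tf.io.decode_base64 expects web-safe base64 ('-' and '_' rather than '+' and '/'):
def prepare_image(image_str_tensor):
    # tf.io.decode_base64 expects web-safe base64 ('-'/'_' rather than '+'/'/')
    image_bytes = tensorflow.io.decode_base64(image_str_tensor)
    # pass the *decoded* bytes on to decode_jpeg, not the original string tensor
    image = tensorflow.image.decode_jpeg(image_bytes, channels=CHANNELS)
    return image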
This is a sample for processing b64 images. Note that it calls tf.image.decode_jpeg on the input directly, with no tf.io.decode_base64 step: when each image is sent as a {"b64": "..."} JSON object and the input alias ends in _bytes, the CloudML prediction service base64-decodes the payload before it reaches the graph, so the tensor already holds raw JPEG bytes.
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
IMAGE_SHAPE = (HEIGHT, WIDTH)
version = 'v1'
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        return image_preprocessing(image)

    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {'input': images_tensor},
        {'image_bytes': input_ph})

export_path = os.path.join('/tmp/models/json_b64', version)
if os.path.exists(export_path):  # clean up old exports with this version
    shutil.rmtree(export_path)
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
Related
I have a Flask API that responds with a picture:
FORMAT = {'image/jpeg': 'JPEG', 'image/bmp': 'BMP', 'image/png': 'PNG', 'image/gif': 'GIF'}

@app.route('/api/image/<id>/<str_size>', methods=['get'])
def show_thumbnail(id, str_size):
    size = int(str_size)
    with get_db().cursor() as cur:
        cur.callproc('getimage', (id,))
        result = cur.fetchone()
    buf = BytesIO(result[1])
    if (size > 0):
        im = Image.open(buf)
        im.thumbnail((size, size))
        buf = BytesIO(b'')
        im.save(buf, format=FORMAT[result[0].lower()])
        fw = open('w03.jpg', 'wb')
        fw.write(buf.getbuffer())
        fw.close()
    resp = Response(buf)
    resp.headers.set('Content-Type', result[0].lower())
    return resp
PS:
result[0] = 'image/jpeg'
result[1] is the byte array of the JPEG picture.
If I set size (str_size) = 0, meaning the PIL Image thumbnail code path is skipped, I get the correct picture in the response.
If I set size (str_size) = 256, for instance, I find that 'w03.jpg' is correct and contains the properly resized image, but the response is broken because the image it carries contains errors.
im.save(buf) leaves the buffer position at the end. You need to rewind it before building resp; do this with buf.seek(0). I suspect buf.getbuffer() doesn't depend on the stream position in the same way, which would explain why w03.jpg is correct in the second test:
You can also use a with block to minimize some of the code (this auto closes the file):
# ...
with open('w03.jpg', 'wb') as fw:
    fw.write(buf.getbuffer())
buf.seek(0)
resp = Response(buf)
# ...
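A minimal standalone sketch (plain PIL, no Flask) showing why the rewind matters:
import io
from PIL import Image

im = Image.new('RGB', (64, 64), 'red')
buf = io.BytesIO()
im.save(buf, format='JPEG')  # stream position is now at the end

print(buf.read())        # b'' -- reading from the end yields nothing
buf.seek(0)              # rewind to the start
print(len(buf.read()))   # now the full JPEG byte count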
I encoded some images to TFRecords as an example and then tried to decode them. However, there is a bug during the decode process that I really cannot fix.
InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got unknown format starting with '\257\222\244\257\222\244\260\223\245\260\223\245\262\225\247\263'
[[{{node DecodeJpeg}}]] [Op:IteratorGetNextSync]
encode:
def _bytes_feature(value):
    """Returns a bytes_list from a string / byte."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_feature(value):
    """Returns a float_list from a float / double."""
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def _int64_feature(value):
    """Returns an int64_list from a bool / enum / int / uint."""
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

src_path = r"E:\data\example"
record_path = r"E:\data\data"
sum_per_file = 4
num = 0
key = 3

for img_name in os.listdir(src_path):
    recordFileName = "trainPrecipitate.tfrecords"
    writer = tf.io.TFRecordWriter(record_path + recordFileName)
    img_path = os.path.join(src_path, img_name)
    img = Image.open(img_path, "r")
    height = np.array(img).shape[0]
    width = np.array(img).shape[1]
    img_raw = img.tobytes()
    example = tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes_feature(img_raw),
        'image/class/label': _int64_feature(key),
        'image/height': _int64_feature(height),
        'image/width': _int64_feature(width)
    }))
    writer.write(example.SerializeToString())
writer.close()
decode:
import IPython.display as display

train_files = tf.data.Dataset.list_files(r"E:\data\datatrainPrecipitate.tfrecords")
train_files = train_files.interleave(tf.data.TFRecordDataset)

def decode_example(example_proto):
    image_feature_description = {
        'image/height': tf.io.FixedLenFeature([], tf.int64),
        'image/width': tf.io.FixedLenFeature([], tf.int64),
        'image/class/label': tf.io.FixedLenFeature([], tf.int64, default_value=3),
        'image/encoded': tf.io.FixedLenFeature([], tf.string)
    }
    parsed_features = tf.io.parse_single_example(example_proto, image_feature_description)
    height = tf.cast(parsed_features['image/height'], tf.int32)
    width = tf.cast(parsed_features['image/width'], tf.int32)
    label = tf.cast(parsed_features['image/class/label'], tf.int32)
    image_buffer = parsed_features['image/encoded']
    image = tf.io.decode_jpeg(image_buffer, channels=3)
    image = tf.cast(image, tf.float32)
    return image, label

def processed_dataset(dataset):
    dataset = dataset.repeat()
    dataset = dataset.batch(1)
    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    # print(dataset)
    return dataset

train_dataset = train_files.map(decode_example)
# train_dataset = processed_dataset(train_dataset)
print(train_dataset)

for (image, label) in train_dataset:
    print(repr(image))
I can use tf.io.decode_raw() to decode the TFRecords and then tf.reshape() to get the original image back, though I still don't know when to use tf.io.decode_raw() and when to use tf.io.decode_jpeg().
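To make the distinction concrete: Image.open(...).tobytes() stores raw pixel values, so the matching decode is decode_raw plus a reshape; decode_jpeg only applies when the feature holds an actual JPEG file's bytes. A sketch of both, based on decode_example above:
# Matches the encode above (img.tobytes() stored raw pixels):
image = tf.io.decode_raw(parsed_features['image/encoded'], tf.uint8)
image = tf.reshape(image, [height, width, 3])

# tf.io.decode_jpeg would only be correct if the compressed file itself
# had been stored at encode time, e.g.:
#     with open(img_path, 'rb') as f:
#         img_raw = f.read()  # JPEG file bytes, not pixel values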
I am sending an image via curl to a Flask server, using this curl command:
curl -F "file=@image.jpg" http://localhost:8000/home
and I am trying to read the file using OpenCV on the server side.
On the server side I handle the image with this code:
@app.route('/home', methods=['POST'])
def home():
    data = request.files['file']
    img = cv.imread(data)
    fact_resp = model.predict(img)
    return jsonify(fact_resp)
I am getting this error:
img = cv.imread(data)
TypeError: expected string or Unicode object, FileStorage found
How do I read the file using OpenCV on the server side?
Thanks!
I had similar issues while using OpenCV with a Flask server. For that, I first saved the image to disk and then read it back from the saved file path using cv.imread().
Here is some sample code:
data = request.files['file']
filename = secure_filename(data.filename)
filepath = os.path.join(app.config['imgdir'], filename)
data.save(filepath)  # save the upload to disk first
img = cv.imread(filepath)  # then read it back from the file path
But I have since found an even more efficient approach: use cv.imdecode() to read the image straight from a numpy array, as below:
# read the uploaded file's bytes
filestr = request.files['file'].read()
# convert the bytes to a numpy array (frombuffer replaces the deprecated fromstring)
file_bytes = numpy.frombuffer(filestr, numpy.uint8)
# decode the numpy array into an image
img = cv.imdecode(file_bytes, cv.IMREAD_UNCHANGED)
After a bit of experimentation, I figured out a way to read the file using cv2 myself. First I read the image with PIL.Image, then converted it.
This is my code:
@app.route('/home', methods=['POST'])
def home():
    data = request.files['file']
    img = Image.open(request.files['file'])
    img = np.array(img)
    img = cv2.resize(img, (224, 224))
    img = cv2.cvtColor(np.array(img), cv2.COLOR_BGR2RGB)
    fact_resp = model.predict(img)
    return jsonify(fact_resp)
I wonder if there is any straightforward way to do this without using PIL.
So in case you want to do something like:
file = request.files['file']
img = cv.imread(file)
then do it like this:
import numpy as np
file = request.files['file']
file_bytes = np.fromfile(file, np.uint8)
file = cv.imdecode(file_bytes, cv.IMREAD_COLOR)
Now you don't need to call cv.imread() again; you can use the decoded image directly in the following lines of code.
This applies to OpenCV v3.x and onwards
Two-line solution; change grayscale to whatever you need:
file_bytes = numpy.fromfile(request.files['image'], numpy.uint8)
# convert numpy array to image
img = cv.imdecode(file_bytes, cv.IMREAD_GRAYSCALE)
I'm trying to download a PNG image in Apps Script, convert it to JPEG, and generate a data URI for this new JPEG.
function test() {
  var blob = UrlFetchApp.fetch('https://what-if.xkcd.com/imgs/a/156/setup.png').getBlob();
  var jpeg = blob.getAs("image/jpeg");
  var uri = 'data:image/jpeg;base64,' + Utilities.base64Encode(jpeg.getBytes());
  Logger.log(uri);
}
When I run this, I get:
The image you are trying to use is invalid or corrupt.
Even something like:
function test() {
  var bytes = UrlFetchApp.fetch('https://what-if.xkcd.com/imgs/a/156/setup.png').getBlob().getBytes();
  var jpeg = Utilities.newBlob(bytes, MimeType.PNG).getAs(MimeType.JPEG);
  DriveApp.createFile(jpeg);
}
doesn't work.
Your code is correct. This may be a bug, but it's specific to the file you are using, so it may as well be a bug in the file (i.e., the file could indeed be corrupted somehow), or maybe it uses some feature of the PNG format that Google doesn't handle. Replacing the URL with another one, e.g.,
var blob = UrlFetchApp.fetch('https://cdn.sstatic.net/Sites/mathematica/img/logo@2.png').getBlob();
both functions work as expected.
I've got code along the lines of the following which generates a new image out of some existing images.
from PIL import Image as pyImage

def create_compound_image(back_image_path, fore_image_path, fore_x_position):
    back_image_size = get_image_size(back_image_path)
    fore_image_size = get_image_size(fore_image_path)
    new_image_width = (fore_image_size[0] / 2) + back_image_size[0]
    new_image_height = fore_image_size[1] + back_image_size[1]
    new_image = create_new_image_canvas(new_image_width, new_image_height)
    back_image = pyImage.open(back_image_path)
    fore_image = pyImage.open(fore_image_path)
    new_image.paste(back_image, (0, 0), mask=None)
    new_image.paste(fore_image, (fore_x_position, back_image_size[1]), mask=None)
    return new_image
Later in the code, I've got something like this:
from kivy.uix.image import Image
img = Image(source = create_compound_image(...))
If I do the above, I get the message that Image.source only accepts string/unicode.
If I create a StringIO.StringIO() object from the new image and try to use that as the source, the error message is the same as above. If I use the output of the StringIO object's getvalue() method as the source, the message is that the source must be an encoded string without NULL bytes, not str.
What is the proper way to use the output of the create_compound_image() function as the source when creating a kivy Image object?
It seems you just want to combine two images into one. You can create a texture using Texture.create and blit the data to a particular pos using Texture.blit_buffer.
from kivy.core.image import Image
from kivy.graphics import Texture

bkimg = Image(bk_img_path)
frimg = Image(fr_img_path)

new_size = ((frimg.texture.size[0] / 2) + bkimg.texture.size[0],
            frimg.texture.size[1] + bkimg.texture.size[1])
tex = Texture.create(size=new_size)
tex.blit_buffer(pbuffer=bkimg.texture.pixels, pos=(0, 0), size=bkimg.texture.size)
tex.blit_buffer(pbuffer=frimg.texture.pixels, pos=(fore_x_position, bkimg.texture.size[1]), size=frimg.texture.size)
Now you can use this texture anywhere directly, like:
from kivy.uix.image import Image
image = Image()
image.texture = tex
source is a StringProperty and expects a path to a file. That's why you got errors when you tried to pass a PIL.Image object, a StringIO object, or the string representation of the image; it's not what the framework wants. As for loading an image from StringIO, it was discussed before here:
https://groups.google.com/forum/#!topic/kivy-users/l-3FJ2mA3qI
https://github.com/kivy/kivy/issues/684
You can also try a much simpler, quick and dirty method: just save your image as a temporary file and read it the normal way.
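A minimal sketch of that temp-file route, reusing create_compound_image() from the question (the paths and position arguments are placeholders):
import tempfile
from kivy.uix.image import Image

new_image = create_compound_image(back_path, fore_path, fore_x)  # placeholder args
tmp = tempfile.NamedTemporaryFile(suffix='.png', delete=False)
new_image.save(tmp.name)      # PIL writes the compound image to disk
img = Image(source=tmp.name)  # Kivy gets the plain file path it expects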