I am new to AWS SageMaker. I have a custom CV PyTorch model locally and deployed it to a SageMaker endpoint. I used custom inference.py code to define the model_fn, input_fn, output_fn and predict_fn methods, so I'm able to generate predictions on JSON input that contains a URL to the image. The code is quite straightforward:
import json
import logging

import requests
from PIL import Image
from torchvision import transforms

def input_fn(request_body, content_type='application/json'):
    logging.info('Deserializing the input data...')
    image_transform = transforms.Compose([
        transforms.Resize(size=(224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    if content_type == 'application/json':
        input_data = json.loads(request_body)
        url = input_data['url']
        logging.info(f'Image url: {url}')
        image_data = Image.open(requests.get(url, stream=True).raw)
        return image_transform(image_data)
    raise Exception(f'Requested unsupported ContentType in content_type {content_type}')
Then I can invoke the endpoint with this code:
client = boto3.client('runtime.sagemaker')
inp = {"url": url}
response = client.invoke_endpoint(EndpointName='ENDPOINT_NAME',
                                  Body=json.dumps(inp),
                                  ContentType='application/json')
The problem is that, locally, the URL request returns a slightly different image array than the one on SageMaker, which is why I obtain slightly different predictions for the same URL. To check that at least the model weights are the same, I want to generate predictions on the image itself, downloaded both locally and on SageMaker. But I fail when trying to pass the image as input to the endpoint. E.g.:
def input_fn(request_body, content_type='application/json'):
    logging.info('Deserializing the input data...')
    image_transform = transforms.Compose([
        transforms.Resize(size=(224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    if content_type == 'application/x-image':
        image_data = request_body
        return image_transform(image_data)
    raise Exception(f'Requested unsupported ContentType in content_type {content_type}')
Invoking the endpoint, I get the error:
ParamValidationError: Parameter validation failed:
Invalid type for parameter Body, value: {'img': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=630x326 at 0x7F78A61461D0>}, type: <class 'dict'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object
Does anybody know how to generate SageMaker predictions from a PyTorch model on images?
As always, after asking I found a solution. Actually, as the error suggested, I had to convert the input to bytes or a bytearray. For those who may need the solution:
from io import BytesIO

img = Image.open(PATH)
img_byte_arr = BytesIO()
img.save(img_byte_arr, format=img.format)
img_byte_arr = img_byte_arr.getvalue()

client = boto3.client('runtime.sagemaker')
response = client.invoke_endpoint(EndpointName='ENDPOINT_NAME',
                                  Body=img_byte_arr,
                                  ContentType='application/x-image')
response_body = response['Body']
print(response_body.read())
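One caveat for completeness: with this client-side fix, the server-side input_fn still receives raw bytes, so it has to decode them back into a PIL image before the transforms are applied. A minimal sketch of a matching handler (same transforms as above, nothing else assumed):

import io
import logging

from PIL import Image
from torchvision import transforms

def input_fn(request_body, content_type='application/x-image'):
    logging.info('Deserializing the input data...')
    image_transform = transforms.Compose([
        transforms.Resize(size=(224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    if content_type == 'application/x-image':
        # The request body arrives as raw bytes; wrap it in BytesIO so PIL can open it.
        image_data = Image.open(io.BytesIO(request_body)).convert('RGB')
        return image_transform(image_data)
    raise Exception(f'Requested unsupported ContentType in content_type {content_type}')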
Related
I recently developed a program that connects to Twitter and performs some tasks automatically (like tweeting and liking) using only the account information: username;password;email_or_phone.
My problem is that I am now trying to add the ability to tweet with an image, but I can't get it to work.
Here is my code and my error:
async def tweet_success(self, msg: str, img_path: str):
    # Get the number of bytes of the image
    img_bytes = str(os.path.getsize(img_path))

    # Get the media_id needed to add an image to my tweet
    params = {'command': 'INIT', 'total_bytes': img_bytes, 'media_type': 'image/png', 'media_category': 'tweet_image'}
    response = requests.post('https://upload.twitter.com/i/media/upload.json', params=params, headers=self.get_headers())
    media_id = response.text.split('{"media_id":')[1].split(',')[0]

    params = {'command': 'APPEND', 'media_id': media_id, 'segment_index': '0'}
    # Try to get the raw binary of the image. My problem is here:
    data = open(img_path, "rb").read()
    response = requests.post('https://upload.twitter.com/i/media/upload.json', params=params, headers=self.get_headers(), data=data)
{"request":"\/i\/media\/upload.json","error":"media parameter is missing."}
Can someone help me?
I tried building the multipart body by hand:

data = open(img_path, "rb").read()
data = f'------WebKitFormBoundaryaf0mMLIS7kpsKwPv\r\nContent-Disposition: form-data; name="media"; filename="blob"\r\nContent-Type: application/octet-stream\r\n\r\n{data}\r\n------WebKitFormBoundaryaf0mMLIS7kpsKwPv--\r\n'

the same thing encoded to bytes:

data = open(img_path, "rb").read()
data = f'------WebKitFormBoundaryaf0mMLIS7kpsKwPv\r\nContent-Disposition: form-data; name="media"; filename="blob"\r\nContent-Type: application/octet-stream\r\n\r\n{data}\r\n------WebKitFormBoundaryaf0mMLIS7kpsKwPv--\r\n'.encode()

and base64-encoding the file:

data = open(img_path, "rb").read()
data = base64.b64encode(data)
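For what it's worth, the usual way to send that chunk with requests is to let the library build the multipart body itself via the files parameter, rather than hand-crafting the boundary. A sketch under that assumption, reusing media_id, img_path and the headers from the question (the headers must not set their own Content-Type, or they will override the multipart boundary that requests generates):

import requests

params = {'command': 'APPEND', 'media_id': media_id, 'segment_index': '0'}
with open(img_path, 'rb') as f:
    # files={'media': ...} makes requests encode the body as multipart/form-data
    # with a form field named 'media', which is the parameter the error says is missing.
    response = requests.post(
        'https://upload.twitter.com/i/media/upload.json',
        params=params,
        headers=headers,  # your auth headers, without an explicit Content-Type
        files={'media': f},
    )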
I am porting an application from PyQt5 to PyQt6. It displays multiple images in a QTextEdit. I need to add an image resource to the QTextEdit's QTextDocument, but am getting an error.
TypeError: addResource(self, int, QUrl, Any): argument 1 has unexpected type 'ResourceType'
Method variables are img: Dictionary, counter: Integer, text_edit: QTextEdit
path_ = self.app.project_path
if img['mediapath'][0] == "/":
    path_ = path_ + img['mediapath']
else:
    path_ = img['mediapath'][7:]
document = text_edit.document()
image = QtGui.QImageReader(path_).read()
image = image.copy(img['x1'], img['y1'], img['width'], img['height'])
# Need unique image names or the same image from the same path is reproduced
imagename = self.app.project_path + '/images/' + str(counter) + '-' + img['mediapath']
url = QtCore.QUrl(imagename)
document.addResource(QtGui.QTextDocument.ResourceType.ImageResource, url, QtCore.QVariant(image))
The Qt6 documentation at https://doc.qt.io/qt-6/qtextdocument.html#addResource says:
For example, you can add an image as a resource in order to reference it from within the document:
document->addResource(QTextDocument::ImageResource,
QUrl("mydata://image.png"), QVariant(image));
Note: I have tried the following, which matches the Qt6 documentation:
document.addResource(QtGui.QTextDocument.ImageResource, url, QtCore.QVariant(image))
This gives the error: AttributeError: type object 'QTextDocument' has no attribute 'ImageResource'
I found a solution that works. I think either the Qt6 documentation needs to be updated, or the PyQt6 implementation needs to catch up with it.
The required integer is stored in the enum member's value attribute:
document.addResource(QtGui.QTextDocument.ResourceType.ImageResource.value, url, QtCore.QVariant(image))
The QVariant wrapper is also not required, so the code can be simpler:
document.addResource(QtGui.QTextDocument.ResourceType.ImageResource.value, url, image)
I am trying to integrate a QnA Maker knowledge base with Azure Bot Service.
I am unable to find the knowledge base ID on the QnA Maker portal.
How do I find the kbid in the QnA Maker portal?
The knowledge base ID can be located in Settings under "Deployment details" in your knowledge base. It is the GUID nestled between "knowledgebases" and "generateAnswer" in the POST path.
Hope this helps!
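For illustration, the deployment details look roughly like this (all values below are placeholders); the knowledge base ID is the GUID in the POST path:

POST /knowledgebases/<knowledge-base-id>/generateAnswer
Host: https://<your-resource-name>.azurewebsites.net/qnamaker
Authorization: EndpointKey <endpoint-key>
Content-Type: application/json
{"question":"<your question>"}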
You can also use Python to get this; take a look at the following code. That is, if you want to write a program that fetches the KB IDs dynamically.
import http.client, json, sys

# Represents the various elements used to create the HTTP request path
# for QnA Maker operations.
host = '<your-resource-name>.cognitiveservices.azure.com'
# Replace this with a valid subscription key.
subscription_key = '<QnA-Key>'
get_kb_method = '/qnamaker/v4.0/knowledgebases/'

try:
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-Type': 'application/json'
    }
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", get_kb_method, None, headers)
    response = conn.getresponse()
    data = response.read().decode("UTF-8")
    result = None
    if len(data) > 0:
        result = json.loads(data)
        # print(json.dumps(result, sort_keys=True, indent=2))
    # Note: a 200 status code means the request succeeded.
    KB_id = result["knowledgebases"][0]["id"]
    print(response.status)
    print(KB_id)
except Exception:
    print("Unexpected error:", sys.exc_info()[0])
    print("Unexpected error:", sys.exc_info()[1])
I checked the whole azure-storage-blob gem and didn't find any way to get the URI for a blob. Is there some way to construct it correctly, in a generic way that will work for any blob in any region?
I used the S3 SDK before and I'm well grounded in S3, but I'm new to Azure.
There is a protected method called blob_uri that looks like this:
def blob_uri(container_name, blob_name, query = {}, options = {})
if container_name.nil? || container_name.empty?
path = blob_name
else
path = ::File.join(container_name, blob_name)
end
options = { encode: true }.merge(options)
generate_uri(path, query, options)
end
So you could take the shortcut of:
blob_client = Azure::Storage::Blob::BlobService.create(storage_account_name: 'XXX', storage_access_key: 'XXX')
blob_client.send(:blob_uri, container_name, blob_name)
However, the actual URI is simply:
https://[storage_account_name].blob.core.windows.net/[container]/[blob name]
So, since you already have to know the blob name and the container to access the blob,
File.join(blob_client.host, container, blob_name)
is the URI to the blob.
I have the following code, which serves up a public Google Cloud Storage URL for images I am uploading:
def to_internal_value(self, data):
    file_name = str(uuid.uuid4())
    # Get the file name extension:
    file_extension = self.get_file_extension(file_name, data)
    complete_file_name = "{}.{}".format(file_name, file_extension)

    uploaded = data.read()
    img = Image.open(io.BytesIO(uploaded))
    new_image_io = io.BytesIO()
    megapixels = img.width * img.height
    # Reduce size if image is bigger than MEGAPIXEL_LIMIT
    if megapixels > self.MEGAPIXEL_LIMIT:
        resize_factor = math.sqrt(megapixels / self.MEGAPIXEL_LIMIT)
        resized = resizeimage.resize_thumbnail(img, [img.width / resize_factor,
                                                     img.height / resize_factor])
        resized.save(new_image_io, format=file_extension.upper())
    else:
        img.save(new_image_io, format=file_extension.upper())

    content = ContentFile(new_image_io.getvalue(), name=complete_file_name)
    return super(Base64ImageField, self).to_internal_value(content)
def to_representation(self, value):
    try:
        blob = Blob(name=value.name, bucket=bucket)
        blob.make_public()
        return blob.public_url
    except ValueError as e:
        return value
The problem is that this doubles the time for the request. In other words, instead of making the blob public just once, when it is first uploaded, this code runs every time the object is serialized to the client. I have tried moving the make_public() call into to_internal_value, but so far haven't had success, probably because I don't know exactly how to get value there.
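One possible direction, sketched here on the assumption that every blob is made public exactly once when it is first uploaded: public GCS objects are served from a predictable URL, so to_representation can build the URL as a plain string without any API call (bucket and value.name as in the code above):

def to_representation(self, value):
    # Assumes the blob was already made public at upload time; public GCS
    # objects are reachable at a fixed URL, so no per-request API call is needed.
    try:
        return 'https://storage.googleapis.com/{}/{}'.format(bucket.name, value.name)
    except AttributeError:
        return value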