Send binary with sendDocument in python-telegram-bot

This works:
mybot.sendDocument(chat_id=chatid, document=open('bla.pdf', 'rb'))
But if I first do:
with open('bla.pdf', 'rb') as fp:
    b = fp.read()
I can't do:
mybot.sendDocument(chat_id=chatid, document=b)
The error is:
TypeError: Object of type 'bytes' is not JSON serializable
I am using Python 3.5.2 on Windows and Linux.
Thanks for any answer.

Sorry, I didn't see your answer.
My problem was that I wanted to send a downloaded document, not a document on disk.
I resolved it like this:
mybot.sendDocument(chat_id=chatid,document=io.BytesIO(self.downloaded_file))
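A minimal sketch of the same idea, assuming mybot is an initialized bot, chatid is known, and self.downloaded_file holds the raw bytes of the download; the buf.name line is only an assumption about how the library picks a display filename and can be dropped:
import io

data = self.downloaded_file      # raw bytes, e.g. from an HTTP download
buf = io.BytesIO(data)           # wrap the bytes in a file-like object
buf.name = 'bla.pdf'             # optional: suggests a filename for the document
mybot.sendDocument(chat_id=chatid, document=buf)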

Try to send just a file object:
mybot.sendDocument(chat_id=chatid, document=open('bla.pdf', 'rb'))

Related

AttributeError: 'Ghostscript' object has no attribute '_instance'

In my university semester project we are attempting to use Ghostscript on some PDF files, but when we try to run our code we get the error:
AttributeError: 'Ghostscript' object has no attribute '_instance'
We have tried various fixes but have not found a solution yet. The only part in which we are using Ghostscript is the following code:
ar = [
    "-sDEVICE=pdfwrite", "-dPDFSETTINGS=/prepress",
    "-dQUIET", "-dBATCH", "-dNOPAUSE", "-dPDFSETTINGS=/printer",
    "-sOutputFile=" + os.path.join(filepath, file),
    os.path.join(filepath, file),
]
gs = ghostscript.Ghostscript(*ar)
del gs
We are using Python 3.8 and the PyPI ghostscript package 0.7.
Has anyone else encountered this error, or does anyone know how to fix it?
Apparently the order in which you pass the arguments is important. So instead of having:
ar = [
    "-sDEVICE=pdfwrite", "-dPDFSETTINGS=/prepress",
    "-dQUIET", "-dBATCH", "-dNOPAUSE", "-dPDFSETTINGS=/printer",
    "-sOutputFile=" + os.path.join(filepath, file),
    os.path.join(filepath, file),
]
We now have:
ar = [
    "-dQUIET", "-dBATCH", "-dNOPAUSE",
    "-sDEVICE=pdfwrite", "-dPDFSETTINGS=/prepress", "-dPDFSETTINGS=/printer",
    "-sOutputFile=" + os.path.join(filepath, file),
    os.path.join(filepath, file),
]
This solved the issue for us.
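For reference, a self-contained sketch of the working call with the imports the snippets above omit; filepath and file are placeholders for your own paths, and as in the original code the output path is the same as the input path:
import os
import ghostscript  # PyPI package "ghostscript"

filepath = "/path/to/pdfs"  # placeholder directory
file = "input.pdf"          # placeholder file name

ar = [
    "-dQUIET", "-dBATCH", "-dNOPAUSE",
    "-sDEVICE=pdfwrite", "-dPDFSETTINGS=/prepress", "-dPDFSETTINGS=/printer",
    "-sOutputFile=" + os.path.join(filepath, file),
    os.path.join(filepath, file),
]
gs = ghostscript.Ghostscript(*ar)
del gs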

Google automl_v1beta1 error "the provided location ID is not valid"

I am trying to call a trained model from Google Colab using the provided example, but I get an error.
Does anyone know whether this is a beta error or whether I have not set something up properly?
Thanks in advance.
The code:
from google.cloud import automl_v1beta1 as automl

automl_client = automl.AutoMlClient()
# Create client for prediction service.
prediction_client = automl.PredictionServiceClient().from_service_account_json(
    'XXXXX.json')
# Get the full path of the model.
model_full_id = automl_client.model_path(
    project_id, compute_region, model_id
)
# Read the file content for prediction.
#with open(file_path, "rb") as content_file:
snippet = "fsfsf"  #content_file.read()
# Set the payload by giving the content and type of the file.
payload = {"text_snippet": {"content": snippet, "mime_type": "text/plain"}}
# params is additional domain-specific parameters.
# Currently there are no additional parameters supported.
params = {}
response = prediction_client.predict(model_full_id, payload, params)
print("Prediction results:")
for result in response.payload:
    print("Predicted class name: {}".format(result.display_name))
    print("Predicted class score: {}".format(result.classification.score))
The error message:
InvalidArgument: 400 List of found errors: 1.Field: name; Message: The provided location ID is not valid.
You have to use a region that supports AutoML beta. This works for me:
create_dataset("myproj-123456", "us-central1", "my_dataset_id", "en", "de")
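Applying the same idea to the prediction code from the question, a minimal sketch; the project and model IDs below are placeholders, and the point is passing a supported region such as "us-central1":
from google.cloud import automl_v1beta1 as automl

automl_client = automl.AutoMlClient()
model_full_id = automl_client.model_path(
    "myproj-123456",     # project_id (placeholder)
    "us-central1",       # compute_region: must be a region that supports AutoML beta
    "TCN0000000000000",  # model_id (placeholder)
)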
I cloned the repo "python-docs-samples":
$ git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
I navigated to the AutoML examples:
$ cd /home/MY_USER/python-docs-samples/language/automl/
I set the environment variables for [1]:
GOOGLE_APPLICATION_CREDENTIALS
PROJECT_ID
REGION_NAME
I typed:
$ python automl_natural_language_dataset.py create_dataset automltest1 False
I got this message:
Dataset name: projects/198768927566/locations/us-central1/datasets/TCN7889001684301386365
Dataset id: TCN7889001684301386365
Dataset display name: automltest1
Text classification dataset metadata:
classification_type: MULTICLASS
Dataset example count: 0
Dataset create time:
seconds: 1569367227
nanos: 873147000
I set the environment variable for:
DATASET_ID
Please note that I got this value from the create_dataset output above.
I typed:
python automl_natural_language_dataset.py import_data $DATASET_ID "gs://$PROJECT_ID-lcm/complaints_manual.csv"
I got this message:
Processing import...
Dataset imported.

Vision API: How to get JSON output

I'm having trouble saving the output given by the Google Vision API. I'm using Python and testing with a demo image. I get the following error:
TypeError: [mid:...] + is not JSON serializable
Code that I executed:
import io
import os
import json

# Imports the Google Cloud client library
from google.cloud import vision
from google.cloud.vision import types

# Instantiates a client
vision_client = vision.ImageAnnotatorClient()

# The name of the image file to annotate
file_name = os.path.join(
    os.path.dirname(__file__),
    'demo-image.jpg')  # Your image path from current directory

# Loads the image into memory
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

# Performs label detection on the image file
response = vision_client.label_detection(image=image)
labels = response.label_annotations

print('Labels:')
for label in labels:
    print(label.description, label.score, label.mid)

with open('labels.json', 'w') as fp:
    json.dump(labels, fp)
The output appears on the screen, but I do not know exactly how to save it. Does anyone have any suggestions?
FYI to anyone seeing this in the future, google-cloud-vision 2.0.0 has switched to using proto-plus which uses different serialization/deserialization code. A possible error you can get if upgrading to 2.0.0 without changing the code is:
object has no attribute 'DESCRIPTOR'
Using google-cloud-vision 2.0.0 and protobuf 3.13.0, here is an example of how to serialize and deserialize (the example covers both JSON and binary protobuf):
import io, json
from google.cloud import vision_v1
from google.cloud.vision_v1 import AnnotateImageResponse

with io.open('000048.jpg', 'rb') as image_file:
    content = image_file.read()

image = vision_v1.Image(content=content)
client = vision_v1.ImageAnnotatorClient()
response = client.document_text_detection(image=image)

# serialize / deserialize proto (binary)
serialized_proto_plus = AnnotateImageResponse.serialize(response)
response = AnnotateImageResponse.deserialize(serialized_proto_plus)
print(response.full_text_annotation.text)

# serialize / deserialize json
response_json = AnnotateImageResponse.to_json(response)
response = json.loads(response_json)
print(response['fullTextAnnotation']['text'])
Note 1: proto-plus doesn't support converting to snake_case names, which is supported in protobuf with preserving_proto_field_name=True. So currently there is no way around the field names being converted from response['full_text_annotation'] to response['fullTextAnnotation']
There is a (now closed) feature request for this: googleapis/proto-plus-python#109
Note 2: The Google Vision API doesn't return an x coordinate if x=0; if x doesn't exist, the protobuf defaults it to x=0. In python vision 1.0.0, using MessageToJson(), these x values weren't included in the JSON, but with python vision 2.0.0 and to_json() these values are included as x:0.
Maybe you were already able to find a solution to your issue (if that is the case, I invite you to share it as an answer to your own post too), but in any case, let me share some notes that may be useful for other users with a similar issue:
As you can check using the type() function in Python, response is an object of google.cloud.vision_v1.types.AnnotateImageResponse type, while labels[i] is an object of google.cloud.vision_v1.types.EntityAnnotation type. None of them seem to have any out-of-the-box implementation to transform them to JSON, as you are trying to do, so I believe the easiest way to transform each of the EntityAnnotation objects in labels is to turn them into Python dictionaries, then group them all into an array, and transform that into JSON.
To do so, I have added some simple lines of code to your snippet:
[...]
label_dicts = []  # Array that will contain all the EntityAnnotation dictionaries

print('Labels:')
for label in labels:
    # Write each label (EntityAnnotation) into a dictionary
    dict = {'description': label.description, 'score': label.score, 'mid': label.mid}
    # Populate the array
    label_dicts.append(dict)

with open('labels.json', 'w') as fp:
    json.dump(label_dicts, fp)
There is a library released by Google:
from google.protobuf.json_format import MessageToJson
webdetect = vision_client.web_detection(blob_source)
jsonObj = MessageToJson(webdetect)
I was able to save the output with the following function:
# Save output as JSON
def store_json(json_input):
    with open(json_file_name, 'a') as f:
        f.write(json_input + '\n')
And as @dsesto mentioned, I had to define a dictionary. In this dictionary I have defined what types of information I would like to save in my output.
with open(photo_file, 'rb') as image:
    image_content = base64.b64encode(image.read())
    service_request = service.images().annotate(
        body={
            'requests': [{
                'image': {
                    'content': image_content
                },
                'features': [{
                    'type': 'LABEL_DETECTION',
                    'maxResults': 20,
                }, {
                    'type': 'TEXT_DETECTION',
                    'maxResults': 20,
                }, {
                    'type': 'WEB_DETECTION',
                    'maxResults': 20,
                }]
            }]
        })
The objects in the current Vision library lack serialization functions (although this is a good idea).
It is worth noting that they are about to release a substantially different library for Vision (it is on master of vision's repo now, although not released to PyPI yet) where this will be possible. Note that it is a backwards-incompatible upgrade, so there will be some (hopefully not too much) conversion effort.
That library returns plain protobuf objects, which can be serialized to JSON using:
from google.protobuf.json_format import MessageToJson
serialized = MessageToJson(original)
You can also use something like protobuf3-to-dict
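If you prefer to stay within protobuf itself, google.protobuf.json_format also provides MessageToDict, which turns a plain protobuf message (such as original in the snippet above) into a regular Python dict that json.dump can handle:
from google.protobuf.json_format import MessageToDict

as_dict = MessageToDict(original)  # 'original' is the protobuf message from the snippet above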

Issue reading file containing Unicode chars using JavaProperties gem

I have mobile automation code in Ruby with locale property files. The code uses JavaProperties::Properties.new(filename with path), which returns a hash, and we read a property value by providing the property name.
Recently the fr_CA.properties file was updated with Unicode escapes, something like "Solde du dernier relev\u00E9". After the update, I'm getting an incorrectly decoded value instead of "Solde du dernier relevé".
I need some help with how/where to specify the UTF-8 conversion.
Quick help highly appreciated.
@filePaths = {
  :pathTo_some_JavaProperties => @resourcesPath + "/service_" + locale + "" + platform_fileName + ".properties",
  :pathTo_locale_other_JavaProperties => @resourcesPath + "/MoblClient_XmlService" + locale + ".properties"
  # more file paths
}
begin
  @someHash = JavaProperties::Properties.new(@filePaths.fetch(:pathTo_some_JavaProperties))
rescue Errno::ENOENT
  filesNotFound << @filePaths.fetch(:pathTo_some_JavaProperties)
end
# Reading the value as @someHash['propName'] gives the incorrectly decoded output
Ok, here's what I am getting:
In test.properties:
item1 = Solde du dernier relev\u00E9
Then in Ruby,
> JavaProperties.load('test.properties')[:item1]
# => "item1 Solde du dernier relevé"
You should try getting your problematic code as stripped down as possible, and then see if you keep getting the error.
BTW, I think you should use JavaProperties.load, not JavaProperties::Properties.new as in your sample.

Opa : give a name to a binary resource

I'm trying to download a file from an Opa database. I've used the following code:
case {path:[], query:[("download", filename)], ...} : Resource.binary(/myDatabase[filename], "application/txt")
It works fine, but the file I download is always named "download.txt". How can I change this name?
Thanks
case {path:[], query:[("download", filename)], ...} : Resource.binary(/myDatabase[filename], "application/txt") |> Resource.add_header(_, {content_disposition={attachment=filename}})
