How to upload media with Twitter's private API?

I recently developed a program that connects to Twitter and performs some tasks automatically (such as tweeting and liking) using only the account information: username;password;email_or_phone.
My problem is that I am now trying to add the ability to tweet with an image, but I can't.
Here is my code and my error:
async def tweet_success(self, msg: str, img_path: str):
    # Get the number of bytes of the image
    img_bytes = str(os.path.getsize(img_path))
    # INIT: get the media_id needed to attach an image to the tweet
    params = {'command': 'INIT', 'total_bytes': img_bytes, 'media_type': 'image/png', 'media_category': 'tweet_image'}
    response = requests.post('https://upload.twitter.com/i/media/upload.json', params=params, headers=self.get_headers())
    media_id = response.text.split('{"media_id":')[1].split(',')[0]
    # APPEND: try to send the raw binary of the image. My problem is here
    params = {'command': 'APPEND', 'media_id': media_id, 'segment_index': '0'}
    data = open(img_path, "rb").read()
    response = requests.post('https://upload.twitter.com/i/media/upload.json', params=params, headers=self.get_headers(), data=data)
{"request":"\/i\/media\/upload.json","error":"media parameter is missing."}
Can someone help me?
I tried the following (none of it worked):
data = open(img_path, "rb").read()
data = f'------WebKitFormBoundaryaf0mMLIS7kpsKwPv\r\nContent-Disposition: form-data; name="media"; filename="blob"\r\nContent-Type: application/octet-stream\r\n\r\n{data}\r\n------WebKitFormBoundaryaf0mMLIS7kpsKwPv--\r\n'
data = open(img_path, "rb").read()
data = f'------WebKitFormBoundaryaf0mMLIS7kpsKwPv\r\nContent-Disposition: form-data; name="media"; filename="blob"\r\nContent-Type: application/octet-stream\r\n\r\n{data}\r\n------WebKitFormBoundaryaf0mMLIS7kpsKwPv--\r\n'.encode()
data = open(img_path, "rb").read()
data = base64.b64encode(data)
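For what it's worth, the "media parameter is missing." error usually means the APPEND request was not encoded as multipart/form-data with a part named media. Rather than hand-rolling a WebKitFormBoundary string, requests can build the multipart body itself via its files= argument. A minimal sketch, untested against the live endpoint (get_headers and the upload URL are taken from the question); it only prepares the request so the encoding can be inspected:

```python
import requests

def build_append_request(media_id, segment_index, chunk, headers=None):
    # Prepare (without sending) the APPEND call. requests encodes files=
    # as multipart/form-data, so the image bytes travel in a form part
    # named "media" -- the parameter the error says is missing.
    return requests.Request(
        'POST',
        'https://upload.twitter.com/i/media/upload.json',
        params={'command': 'APPEND',
                'media_id': media_id,
                'segment_index': str(segment_index)},
        headers=headers or {},
        files={'media': chunk},
    ).prepare()
```

Sending it would then be `requests.Session().send(build_append_request(media_id, 0, data, self.get_headers()))`; note the chunked upload flow also expects a FINALIZE command after all segments are appended.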


Google Analytics Reporting v4 with streams instead of views

I've just created a new Google Analytics property, and it now defaults to data streams instead of views.
I had some code that fetched reports through the API, which I now need to update to work with those data streams, since there are no views anymore.
I've looked in the docs but I don't see anything related to data streams. Does anybody know how this is done now?
Here's my current code, which works with a view ID (I'm using the ruby google-api-client gem):
VIEW_ID = "XXXXXX"
SCOPE = 'https://www.googleapis.com/auth/analytics.readonly'
client = AnalyticsReportingService.new
# server-to-server auth mechanism using a service account
@creds = ServiceAccountCredentials.make_creds({:json_key_io => File.open('account.json'), :scope => SCOPE})
@creds.sub = "myserviceaccount@example.iam.gserviceaccount.com"
client.authorization = @creds
#metrics
metric_views = Metric.new
metric_views.expression = "ga:pageviews"
metric_unique_views = Metric.new
metric_unique_views.expression = "ga:uniquePageviews"
#dimensions
dimension = Dimension.new
dimension.name = "ga:hostname"
#range
range = DateRange.new
range.start_date = start_date
range.end_date = end_date
#sort
orderby = OrderBy.new
orderby.field_name = "ga:pageviews"
orderby.sort_order = 'DESCENDING'
rr = ReportRequest.new
rr.view_id = VIEW_ID
rr.metrics = [metric_views, metric_unique_views]
rr.dimensions = [dimension]
rr.date_ranges = [range]
rr.order_bys = [orderby]
grr = GetReportsRequest.new
grr.report_requests = [rr]
response = client.batch_get_reports(grr)
I would expect that there would be a stream_id property on the ReportRequest object that I could use instead of the view_id but that's not the case.
Your existing code uses the Google Analytics Reporting API to extract data from a Universal Analytics property.
Your new Google Analytics property is a GA4 property. To extract data from it you need to use the Google Analytics Data API. These are two completely different systems, so you will not be able to simply port your code.
You can find info on the new API and the new client library here: Ruby Client for the Google Analytics Data API
$ gem install google-analytics-data
Thanks to Linda's answer I was able to get it working. Here's the same code ported to the Data API, in case it ends up being useful to someone:
client = Google::Analytics::Data.analytics_data do |config|
  config.credentials = "account.json"
end
metric_views = Google::Analytics::Data::V1beta::Metric.new(name: "screenPageViews")
metric_unique_views = Google::Analytics::Data::V1beta::Metric.new(name: "totalUsers")
dimension = Google::Analytics::Data::V1beta::Dimension.new(name: "hostName")
range = Google::Analytics::Data::V1beta::DateRange.new(start_date: start_date, end_date: end_date)
order_dim = Google::Analytics::Data::V1beta::OrderBy::DimensionOrderBy.new(dimension_name: "screenPageViews")
orderby = Google::Analytics::Data::V1beta::OrderBy.new(desc: true, dimension: order_dim)
request = Google::Analytics::Data::V1beta::RunReportRequest.new(
  property: "properties/#{PROPERTY_ID}",
  metrics: [metric_views, metric_unique_views],
  dimensions: [dimension],
  date_ranges: [range],
  order_bys: [orderby]
)
response = client.run_report request

How to download file directly from blob URL?

I am looking to download a PDF directly from a blob URL using Ruby code. The URL appears like this:
blob:https://dev.myapp.com/ba853441-d1f7-4595-9227-1b0e445b188b
I am able to visit the link in a web browser and have the PDF appear in a new tab. On inspection, other than the GET request there are only some request headers related to the browser/user agent.
I've attempted to use OpenURI, but it rejects the URL as not being an HTTP URI. OpenURI works just fine with files from URLs that look like https://.../invoice.pdf.
I've also tried to go the JS route with this snippet, but it returns 0 for me, as others have also reported.
Any automated solution that requires a download on click and then navigating the local disk is not sufficient for my project. I am looking to retrieve files directly from the URL, in the same fashion that OpenURI works for a file on a server. Thanks in advance.
I was able to get the Javascript snippet to work. The piece that I was missing was that the blob URL needed to be opened/visited in the browser first (in this case, Chrome). Here's a code snippet that might work for others.
def get_file_content_in_base64(uri)
  result = page.evaluate_async_script("
    var uri = arguments[0];
    var callback = arguments[1];
    var toBase64 = function(buffer){for(var r,n=new Uint8Array(buffer),t=n.length,a=new Uint8Array(4*Math.ceil(t/3)),i=new Uint8Array(64),o=0,c=0;64>c;++c)i[c]='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'.charCodeAt(c);for(c=0;t-t%3>c;c+=3,o+=4)r=n[c]<<16|n[c+1]<<8|n[c+2],a[o]=i[r>>18],a[o+1]=i[r>>12&63],a[o+2]=i[r>>6&63],a[o+3]=i[63&r];return t%3===1?(r=n[t-1],a[o]=i[r>>2],a[o+1]=i[r<<4&63],a[o+2]=61,a[o+3]=61):t%3===2&&(r=(n[t-2]<<8)+n[t-1],a[o]=i[r>>10],a[o+1]=i[r>>4&63],a[o+2]=i[r<<2&63],a[o+3]=61),new TextDecoder('ascii').decode(a)};
    var xhr = new XMLHttpRequest();
    xhr.responseType = 'arraybuffer';
    xhr.onload = function(){ callback(toBase64(xhr.response)) };
    xhr.onerror = function(){ callback(xhr.status) };
    xhr.open('GET', uri);
    xhr.send();
  ", uri)
  if result.is_a? Integer
    fail 'Request failed with status %s' % result
  end
  return result
end

def get_pdf_from_blob
  yield # Yield to whatever click actions trigger the file download
  sleep 3 # Wait for the direct download to complete
  visit 'chrome://downloads'
  sleep 3
  file_name = page.text.split("\n")[3]
  blob_url = page.text.split("\n")[4]
  visit blob_url
  sleep 3 # Wait for the PDF to load
  base64_str = get_file_content_in_base64(blob_url)
  decoded_content = Base64.decode64(base64_str)
  file_path = "./tmp/#{file_name}"
  File.open(file_path, "wb") do |f|
    f.write(decoded_content)
  end
  return file_path
end
From here you can send file_path to S3, send to PDF Reader, etc.

AWS Sagemaker custom PyTorch model inference on raw image input

I am new to AWS SageMaker. I have a custom CV PyTorch model locally and have deployed it to a SageMaker endpoint. I used custom inference.py code to define the model_fn, input_fn, output_fn and predict_fn methods. So I'm able to generate predictions on JSON input that contains a URL to the image; the code is quite straightforward:
def input_fn(request_body, content_type='application/json'):
    logging.info('Deserializing the input data...')
    image_transform = transforms.Compose([
        transforms.Resize(size=(224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    if content_type:
        if content_type == 'application/json':
            input_data = json.loads(request_body)
            url = input_data['url']
            logging.info(f'Image url: {url}')
            image_data = Image.open(requests.get(url, stream=True).raw)
            return image_transform(image_data)
    raise Exception(f'Requested unsupported ContentType in content_type {content_type}')
Then I can invoke the endpoint with:
client = boto3.client('runtime.sagemaker')
inp = {"url": url}
response = client.invoke_endpoint(EndpointName='ENDPOINT_NAME',
                                  Body=json.dumps(inp),
                                  ContentType='application/json')
The problem is, I see that locally the URL request returns a slightly different image array compared to the one on SageMaker, which is why I obtain slightly different predictions for the same URL. To check that at least the model weights are the same, I want to generate predictions on the image itself, downloaded both locally and on SageMaker. But I fail when trying to pass the image as input to the endpoint. E.g.:
def input_fn(request_body, content_type='application/json'):
    logging.info('Deserializing the input data...')
    image_transform = transforms.Compose([
        transforms.Resize(size=(224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    if content_type == 'application/x-image':
        image_data = request_body
        return image_transform(image_data)
    raise Exception(f'Requested unsupported ContentType in content_type {content_type}')
Invoking endpoint I experience the error:
ParamValidationError: Parameter validation failed:
Invalid type for parameter Body, value: {'img': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=630x326 at 0x7F78A61461D0>}, type: <class 'dict'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object
Does anybody know how to generate SageMaker predictions from a PyTorch model on images?
As always, after asking I found a solution. As the error suggested, I had to convert the input to bytes or a bytearray. For those who may need it:
from io import BytesIO

img = Image.open(open(PATH, 'rb'))
img_byte_arr = BytesIO()
img.save(img_byte_arr, format=img.format)
img_byte_arr = img_byte_arr.getvalue()
client = boto3.client('runtime.sagemaker')
response = client.invoke_endpoint(EndpointName='ENDPOINT_NAME',
                                  Body=img_byte_arr,
                                  ContentType='application/x-image')
response_body = response['Body']
print(response_body.read())
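On the server side, the matching piece (a hedged sketch, not the exact inference.py from the question; decode_image_body is a hypothetical helper name) is to decode those raw bytes back into a PIL image before applying the torchvision transform:

```python
import io
from PIL import Image

def decode_image_body(request_body, content_type='application/x-image'):
    # With ContentType application/x-image, request_body arrives as raw
    # bytes; wrap them in BytesIO so PIL can decode the image, then hand
    # the PIL image to the torchvision transform pipeline (omitted here).
    if content_type == 'application/x-image':
        return Image.open(io.BytesIO(request_body)).convert('RGB')
    raise ValueError(f'Unsupported ContentType: {content_type}')
```

The failing input_fn above passes the raw bytes straight to the transform; this decode step is the part it was missing.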

How to make google blob public in DRF to_internal_value function?

I have the following code which serves up a public google cloud storage url for images I am uploading:
def to_internal_value(self, data):
    file_name = str(uuid.uuid4())
    # Get the file name extension:
    file_extension = self.get_file_extension(file_name, data)
    complete_file_name = "{}.{}".format(file_name, file_extension)
    uploaded = data.read()
    img = Image.open(io.BytesIO(uploaded))
    new_image_io = io.BytesIO()
    megapixels = img.width * img.height
    # Reduce size if the image is bigger than MEGAPIXEL_LIMIT
    if megapixels > self.MEGAPIXEL_LIMIT:
        resize_factor = math.sqrt(megapixels / self.MEGAPIXEL_LIMIT)
        resized = resizeimage.resize_thumbnail(img, [img.width / resize_factor,
                                                     img.height / resize_factor])
        resized.save(new_image_io, format=file_extension.upper())
    else:
        img.save(new_image_io, format=file_extension.upper())
    content = ContentFile(new_image_io.getvalue(), name=complete_file_name)
    return super(Base64ImageField, self).to_internal_value(content)

def to_representation(self, value):
    try:
        blob = Blob(name=value.name, bucket=bucket)
        blob.make_public()
        return blob.public_url
    except ValueError as e:
        return value
The problem is that this doubles the time of the request. In other words, instead of making the blob public just once, when it is first uploaded, this code runs every time the object is serialized to the client. I have tried moving the make_public() call into to_internal_value, but so far haven't had success, probably because I don't know exactly how to get value there.
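One way out, sketched here under the assumption that the object stays public once make_public() has run: call make_public() a single time right after the upload, and have to_representation build the canonical public URL without another API round trip. public_gcs_url below is a hypothetical helper, not part of DRF or google-cloud-storage:

```python
def public_gcs_url(bucket_name: str, blob_name: str) -> str:
    # Public GCS objects are served at this well-known URL, so no
    # Blob lookup or make_public() call is needed at serialization time.
    return f"https://storage.googleapis.com/{bucket_name}/{blob_name}"
```

to_representation would then reduce to something like `return public_gcs_url(bucket.name, value.name)`.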

Tweeting images programmatically

I have a business requirement for the project I'm working on to allow users to print, email, and share an image on Facebook and Twitter. The first three are simple, whereas I'm finding it impossible to locate a succinct example of how to post a tweet with an image using only client-side scripting. I've seen various solutions using the Twitter API, and almost all of them are PHP-based. Surely this can't be that difficult.
This example uses the TwitterAPI python library.
from TwitterAPI import TwitterAPI
TWEET_TEXT = 'some tweet text'
IMAGE_PATH = './some_image.png'
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_TOKEN_KEY = ''
ACCESS_TOKEN_SECRET = ''
api = TwitterAPI(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN_KEY, ACCESS_TOKEN_SECRET)
# STEP 1 - upload the image
file = open(IMAGE_PATH, 'rb')
data = file.read()
r = api.request('media/upload', None, {'media': data})
print('UPLOAD MEDIA SUCCESS' if r.status_code == 200 else 'UPLOAD MEDIA FAILURE')
# STEP 2 - post the tweet with a reference to the uploaded image
if r.status_code == 200:
    media_id = r.json()['media_id']
    r = api.request('statuses/update', {'status': TWEET_TEXT, 'media_ids': media_id})
    print('UPDATE STATUS SUCCESS' if r.status_code == 200 else 'UPDATE STATUS FAILURE')
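One detail worth knowing when reusing this flow: the upload response carries the media id both as a 64-bit integer (media_id) and as a string (media_id_string), and environments whose JSON parsers lose 64-bit integer precision should prefer the string form. A small illustration with a hypothetical sample payload (not a real API response):

```python
import json

# Hypothetical, truncated sample of a media/upload response body
sample = '{"media_id": 710511363345354753, "media_id_string": "710511363345354753"}'
parsed = json.loads(sample)

# Python's json module keeps full integer precision, so both forms agree here;
# JavaScript's JSON.parse, for example, would round the integer form.
media_id = parsed['media_id_string']
```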
