Tweeting images programmatically

I have a business requirement for the project I'm working on to allow users to print, email, and share an image on Facebook and Twitter. The first three are simple, whereas I'm finding it impossible to find a succinct example of how to post a tweet with an image using only client-side scripting. I've seen various solutions using the Twitter API, and almost all of them are PHP-based. Surely this can't be that difficult.

This example uses the TwitterAPI Python library.
from TwitterAPI import TwitterAPI
TWEET_TEXT = 'some tweet text'
IMAGE_PATH = './some_image.png'
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_TOKEN_KEY = ''
ACCESS_TOKEN_SECRET = ''
api = TwitterAPI(CONSUMER_KEY,CONSUMER_SECRET,ACCESS_TOKEN_KEY,ACCESS_TOKEN_SECRET)
# STEP 1 - upload image
file = open(IMAGE_PATH, 'rb')
data = file.read()
r = api.request('media/upload', None, {'media': data})
print('UPLOAD MEDIA SUCCESS' if r.status_code == 200 else 'UPLOAD MEDIA FAILURE')
# STEP 2 - post tweet with a reference to uploaded image
if r.status_code == 200:
    media_id = r.json()['media_id']
    r = api.request('statuses/update', {'status': TWEET_TEXT, 'media_ids': media_id})
    print('UPDATE STATUS SUCCESS' if r.status_code == 200 else 'UPDATE STATUS FAILURE')
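The same flow extends to multiple images if needed: statuses/update accepts media_ids as a comma-separated list (up to four photos), so a rough sketch reusing the api client above (the file names here are made up) would be:

# Sketch: upload several images, then attach all of their media_ids to one tweet.
image_paths = ['./first.png', './second.png']  # hypothetical paths
media_ids = []
for path in image_paths:
    with open(path, 'rb') as f:
        r = api.request('media/upload', None, {'media': f.read()})
    if r.status_code == 200:
        media_ids.append(str(r.json()['media_id']))

if media_ids:
    r = api.request('statuses/update', {'status': TWEET_TEXT, 'media_ids': ','.join(media_ids)})
    print('UPDATE STATUS SUCCESS' if r.status_code == 200 else 'UPDATE STATUS FAILURE')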

Related

How to upload media with the private Twitter API?

I recently developed a program that allows you to connect to Twitter and do some tasks automatically (like tweeting and liking) using only the account information: username;password;email_or_phone.
My problem is that I am now trying to add the functionality of tweeting with an image, but I can't get it to work.
Here is my code and my error:
async def tweet_success(self, msg: str, img_path: str):
    # Get the number of bytes of the image
    img_bytes = str(os.path.getsize(img_path))

    # Get the media_id to add an image to my tweet
    params = {'command': 'INIT', 'total_bytes': img_bytes, 'media_type': 'image/png', 'media_category': 'tweet_image'}
    response = requests.post('https://upload.twitter.com/i/media/upload.json', params=params, headers=self.get_headers())
    media_id = response.text.split('{"media_id":')[1].split(',')[0]

    params = {'command': 'APPEND', 'media_id': media_id, 'segment_index': '0'}
    # Try to get the raw binary of the image. My problem is here
    data = open(img_path, "rb").read()
    response = requests.post('https://upload.twitter.com/i/media/upload.json', params=params, headers=self.get_headers(), data=data)

And the error:

{"request":"\/i\/media\/upload.json","error":"media parameter is missing."}
Can someone help me?
I tried:

data = open(img_path, "rb").read()
data = f'------WebKitFormBoundaryaf0mMLIS7kpsKwPv\r\nContent-Disposition: form-data; name="media"; filename="blob"\r\nContent-Type: application/octet-stream\r\n\r\n{data}\r\n------WebKitFormBoundaryaf0mMLIS7kpsKwPv--\r\n'

and:

data = open(img_path, "rb").read()
data = f'------WebKitFormBoundaryaf0mMLIS7kpsKwPv\r\nContent-Disposition: form-data; name="media"; filename="blob"\r\nContent-Type: application/octet-stream\r\n\r\n{data}\r\n------WebKitFormBoundaryaf0mMLIS7kpsKwPv--\r\n'.encode()

and:

data = open(img_path, "rb").read()
data = base64.b64encode(data)
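One possible fix, sketched as an assumption rather than a tested answer: the APPEND step expects the raw bytes as a multipart/form-data field named media (not as the plain request body), and the chunked upload also needs a FINALIZE command before the media_id can be attached to a tweet. Continuing inside the same tweet_success method, and assuming get_headers() carries the auth headers but does not set its own Content-Type (so requests can add the multipart boundary itself), that could look like:

upload_url = 'https://upload.twitter.com/i/media/upload.json'

# APPEND: send the image bytes as a multipart form field named "media"
with open(img_path, 'rb') as f:
    data = f.read()
params = {'command': 'APPEND', 'media_id': media_id, 'segment_index': '0'}
response = requests.post(upload_url, params=params, headers=self.get_headers(), files={'media': data})

# FINALIZE: mark the upload as complete before referencing media_id in a tweet
params = {'command': 'FINALIZE', 'media_id': media_id}
response = requests.post(upload_url, params=params, headers=self.get_headers())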

How to download file directly from blob URL?

I am looking to download a PDF directly from a blob URL using Ruby code. The URL appears like this:
blob:https://dev.myapp.com/ba853441-d1f7-4595-9227-1b0e445b188b
I am able to visit the link in a web browser and have the PDF appear in a new tab. On inspection, other than the GET request there are some request headers related to browser/user agent.
I've attempted to use OpenURI, but it rejects the URL as not being an HTTP URI. OpenURI works just fine with files from URLs that look like https://.../invoice.pdf.
I've also tried to go the JS route with this snippet, but it returns 0 for me, as others have also reported.
Automated solutions that require a download on click and then navigating the local disk are not sufficient for my project. I am looking to retrieve files directly from the URL, in the same fashion that OpenURI works for a file on a server. Thanks in advance.
I was able to get the Javascript snippet to work. The piece that I was missing was that the blob URL needed to be opened/visited in the browser first (in this case, Chrome). Here's a code snippet that might work for others.
def get_file_content_in_base64(uri)
  result = page.evaluate_async_script("
    var uri = arguments[0];
    var callback = arguments[1];
    var toBase64 = function(buffer){for(var r,n=new Uint8Array(buffer),t=n.length,a=new Uint8Array(4*Math.ceil(t/3)),i=new Uint8Array(64),o=0,c=0;64>c;++c)i[c]='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'.charCodeAt(c);for(c=0;t-t%3>c;c+=3,o+=4)r=n[c]<<16|n[c+1]<<8|n[c+2],a[o]=i[r>>18],a[o+1]=i[r>>12&63],a[o+2]=i[r>>6&63],a[o+3]=i[63&r];return t%3===1?(r=n[t-1],a[o]=i[r>>2],a[o+1]=i[r<<4&63],a[o+2]=61,a[o+3]=61):t%3===2&&(r=(n[t-2]<<8)+n[t-1],a[o]=i[r>>10],a[o+1]=i[r>>4&63],a[o+2]=i[r<<2&63],a[o+3]=61),new TextDecoder('ascii').decode(a)};
    var xhr = new XMLHttpRequest();
    xhr.responseType = 'arraybuffer';
    xhr.onload = function(){ callback(toBase64(xhr.response)) };
    xhr.onerror = function(){ callback(xhr.status) };
    xhr.open('GET', uri);
    xhr.send();
  ", uri)
  if result.is_a? Integer
    fail 'Request failed with status %s' % result
  end
  return result
end

def get_pdf_from_blob
  yield    # Yield to whatever click actions trigger the file download
  sleep 3  # Wait for the direct download to complete
  visit 'chrome://downloads'
  sleep 3
  file_name = page.text.split("\n")[3]
  blob_url = page.text.split("\n")[4]
  visit blob_url
  sleep 3  # Wait for the PDF to load
  base64_str = get_file_content_in_base64(blob_url)
  decoded_content = Base64.decode64(base64_str)
  file_path = "./tmp/#{file_name}"
  File.open(file_path, "wb") do |f|
    f.write(decoded_content)
  end
  return file_path
end
From here you can send file_path to S3, send to PDF Reader, etc.

How to fetch a media URL through the API

How do I get a media instance through the REST API? I'm looking to either download the file or fetch the URL for that media. I'm using Ruby.
You can get the media SID from a message resource. For example:
account_sid = ENV['TWILIO_ACCOUNT_SID']
auth_token = ENV['TWILIO_AUTH_TOKEN']
@client = Twilio::REST::Client.new(account_sid, auth_token)

messages = @client.conversations
                  .conversations('CHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
                  .messages
                  .list(order: 'desc', limit: 20)

messages.each do |message|
  puts message.sid
  message.media.each do |media|
    puts "#{media.sid}: #{media.filename} #{media.content_type}"
  end
end
I've not actually tried the above; the media objects may just be plain hashes, in which case you would access the SID with media['sid'] instead.
Once you have the SID, you can fetch the media by constructing the following URL using the Chat service SID and the Media SID:
https://mcs.us1.twilio.com/v1/Services/<chat_service_sid>/Media/<Media SID>
For downloading files in Ruby, I like to use the Down gem. You can read about how to use Down to download images here. Briefly, here's how you would use Down and the URL above to download the image:
conversation_sid = "CHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
media_sid = "MEXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
account_sid = ENV["TWILIO_ACCOUNT_SID"]
auth_token = ENV["TWILIO_AUTH_TOKEN"]
url = "https://#{account_sid}:#{auth_token}#mcs.us1.twilio.com/v1/Services/#{conversation_sid}/Media/#{media_sid}"
tempfile = Down.download(url)

How to save user data to the database instead of a pickle or JSON file when posting videos to YouTube using Django and the Data v3 API

I'm trying to upload videos to YouTube using Django and MSSQL. I want to store the user data in the DB so that I can log in from multiple accounts and post videos.
The official documentation provided by YouTube uses the file system: after login, all the user data gets saved to a file. I don't want to store any data in a file, and dumping whole files into the DB would be a risk and not good practice. So how can I bypass this step, save the data directly to the DB, and retrieve it when I want to post videos to a specific account?
In short, I want to replace the pickle file implementation with storing the credentials in the database.
Here's my code:
def youtubeAuthenticate():
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"
    api_service_name = "youtube"
    api_version = "v3"
    client_secrets_file = "client_secrets.json"
    creds = None
    # the file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first time
    if os.path.exists("token.pickle"):
        with open("token.pickle", "rb") as token:
            creds = pickle.load(token)
    # if there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(client_secrets_file, SCOPES)
            creds = flow.run_local_server(port=0)
        # save the credentials for the next run
        with open("token.pickle", "wb") as token:
            pickle.dump(creds, token)
    return build(api_service_name, api_version, credentials=creds)


@api_view(['GET', 'POST'])
def postVideoYT(request):
    youtube = youtubeAuthenticate()
    print('yt', youtube)
    try:
        initialize_upload(youtube, request.data)
    except HttpError as e:
        print("An HTTP error %d occurred:\n%s" % (e.resp.status, e.content))
    return Response("Hello")


def initialize_upload(youtube, options):
    print('options', options)
    print("title", options['title'])
    # tags = None
    # if options.keywords:
    #     tags = options.keywords.split(",")
    body = dict(
        snippet=dict(
            title=options['title'],
            description=options['description'],
            tags=options['keywords'],
            categoryId=options['categoryId']
        ),
        status=dict(
            privacyStatus=options['privacyStatus']
        )
    )
    # Call the API's videos.insert method to create and upload the video.
    insert_request = youtube.videos().insert(
        part=",".join(body.keys()),
        body=body,
        media_body=MediaFileUpload(options['file'], chunksize=-1, resumable=True)
    )
    path = pathlib.Path(options['file'])
    ext = path.suffix
    getSize = os.path.getsize(options['file'])
    resumable_upload(insert_request, ext, getSize)


# This method implements an exponential backoff strategy to resume a
# failed upload.
def resumable_upload(insert_request, ext, getSize):
    response = None
    error = None
    retry = 0
    while response is None:
        try:
            print("Uploading file...")
            status, response = insert_request.next_chunk()
            if response is not None:
                respData = response
                if 'id' in response:
                    print("Video id '%s' was successfully uploaded." % response['id'])
                else:
                    exit("The upload failed with an unexpected response: %s" % response)
        except HttpError as e:
            if e.resp.status in RETRIABLE_STATUS_CODES:
                error = "A retriable HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
            else:
                raise
        except RETRIABLE_EXCEPTIONS as e:
            error = "A retriable error occurred: %s" % e
        if error is not None:
            print(error)
            retry += 1
            if retry > MAX_RETRIES:
                exit("No longer attempting to retry.")
            max_sleep = 2 ** retry
            sleep_seconds = random.random() * max_sleep
            print("Sleeping %f seconds and then retrying..." % sleep_seconds)
            time.sleep(sleep_seconds)
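One way to drop the pickle file, sketched as an assumption rather than anything from the YouTube docs: google-auth credentials serialize with creds.to_json() and can be rebuilt with Credentials.from_authorized_user_info(), so they can live in a Django model column keyed per account. The model name, field names, and the account key below are hypothetical; SCOPES is the same constant used in the code above.

import json

from django.db import models
from google.oauth2.credentials import Credentials
from google.auth.transport.requests import Request
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build


class YoutubeCredential(models.Model):
    # Hypothetical model: one row per YouTube account you post from
    account = models.CharField(max_length=255, unique=True)
    credentials_json = models.TextField()  # output of creds.to_json()


def load_creds(account):
    row = YoutubeCredential.objects.filter(account=account).first()
    if row is None:
        return None
    return Credentials.from_authorized_user_info(json.loads(row.credentials_json), SCOPES)


def save_creds(account, creds):
    YoutubeCredential.objects.update_or_create(
        account=account,
        defaults={'credentials_json': creds.to_json()},
    )


def youtube_for_account(account):
    # Same flow as youtubeAuthenticate(), with the DB row standing in for token.pickle
    creds = load_creds(account)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file("client_secrets.json", SCOPES)
            creds = flow.run_local_server(port=0)
        save_creds(account, creds)
    return build("youtube", "v3", credentials=creds)

postVideoYT() could then call youtube_for_account(account) instead of youtubeAuthenticate() to pick which stored account the video is posted from.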

Python Youtube ffmpeg Session Has Been Invalidated

I get the following error while playing YouTube audio with my bot:
[tls # 0000024ef8c4d480] Error in the pull function.
[matroska,webm # 0000024ef8c4a400] Read error
[tls # 0000024ef8c4d480] The specified session has been invalidated for some reason.
Last message repeated 1 times
It seems like YouTube links expire? I don't really know, but I need to fix this issue. This is my code:
class YTDLSource(discord.PCMVolumeTransformer):

    def __init__(self, source, *, data, requester):
        super().__init__(source)
        self.requester = requester
        self.title = data['title']
        self.description = data['description']
        self.uploader = data['uploader']
        self.duration = data['duration']
        self.web_url = data['webpage_url']
        self.thumbnail = data['thumbnail']

    def __getitem__(self, item: str):
        return self.__getattribute__(item)

    @classmethod
    async def create_source(cls, ctx, player, search: str, *, loop, download=True):
        async with ctx.typing():
            loop = loop or asyncio.get_event_loop()
            to_run = partial(ytdl.extract_info, url=search, download=download)
            raw_data = await loop.run_in_executor(None, to_run)

            if 'entries' in raw_data:
                # take first item from a playlist
                if len(raw_data['entries']) == 1:
                    data = raw_data['entries'][0]
                else:
                    data = raw_data['entries']
                    # loop entries to grab each video_url
                    total_duration = 0
                    try:
                        for i in data:
                            webpage = i['webpage_url']
                            title = i['title']
                            description = i['description']
                            uploader = i['uploader']
                            duration = i['duration']
                            thumbnail = i['thumbnail']
                            total_duration += duration
                            if download:
                                source = ytdl.prepare_filename(i)
                                source = cls(discord.FFmpegPCMAudio(source), data=i, requester=ctx.author)
                            else:
                                source = {'webpage_url': webpage, 'requester': ctx.author, 'title': title, 'uploader': uploader, 'description': description, 'duration': duration, 'thumbnail': thumbnail}
                            player.queue.append(source)
                    except Exception as e:
                        print(e)
                        return

                    embed = discord.Embed(title="Playlist", description="Queued", color=0x30a4fb, timestamp=datetime.now(timezone.utc))
                    embed.set_author(name=ctx.author.display_name, icon_url=ctx.author.avatar_url)
                    embed.set_thumbnail(url=data[0]['thumbnail'])
                    embed.add_field(name=raw_data['title'], value=f"{len(data)} videos queued.", inline=True)
                    embed.set_footer(text=raw_data["uploader"] + ' - ' + '{0[0]}m {0[1]}s'.format(divmod(total_duration, 60)))
                    await ctx.send(embed=embed)
                    return

            embed = discord.Embed(title="Playlist", description="Queued", color=0x30a4fb, timestamp=datetime.now(timezone.utc))
            embed.set_author(name=ctx.author.display_name, icon_url=ctx.author.avatar_url)
            embed.set_thumbnail(url=data['thumbnail'])
            embed.add_field(name=data['title'], value=(data["description"][:72] + (data["description"][72:] and '...')), inline=True)
            embed.set_footer(text=data["uploader"] + ' - ' + '{0[0]}m {0[1]}s'.format(divmod(data["duration"], 60)))
            await ctx.send(embed=embed)

            if download:
                source = ytdl.prepare_filename(data)
            else:
                source = {'webpage_url': data['webpage_url'], 'requester': ctx.author, 'title': data['title'], 'uploader': data['uploader'], 'description': data['description'], 'duration': data['duration'], 'thumbnail': data['thumbnail']}
                player.queue.append(source)
                return

            source = cls(discord.FFmpegPCMAudio(source), data=data, requester=ctx.author)
            player.queue.append(source)

    @classmethod
    async def regather_stream(cls, data, *, loop):
        loop = loop or asyncio.get_event_loop()
        requester = data['requester']
        to_run = partial(ytdl.extract_info, url=data['webpage_url'], download=True)
        data = await loop.run_in_executor(None, to_run)
        return cls(discord.FFmpegPCMAudio(data['url']), data=data, requester=requester)
I'm using the rewrite branch of discord.py for the bot.
I'm not sure if I need to provide more details? Please let me know; I really need to get this fixed.
In fact it isn't really a problem with your code (and many people run into this error).
It is just a possible issue when streaming a video. If you absolutely want to stream, you have to accept this as a potential issue; note how (almost) every music bot sets limitations on the video/music you can listen to.
If you need to make sure you never hit this issue, you have to fully download the music (which also makes the bot take longer before it starts playing).
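With the create_source classmethod from the question, fully downloading mostly comes down to passing download=True when the play command calls it. A rough, assumed sketch of such a command inside a music cog built on discord.ext.commands (get_player and the cog layout are hypothetical):

@commands.command(name='play')
async def play_(self, ctx, *, search: str):
    # download=True is the important part: the file is fetched up front,
    # so playback never depends on an expiring YouTube stream URL.
    player = self.get_player(ctx)  # hypothetical helper returning the guild's player
    await YTDLSource.create_source(ctx, player, search, loop=self.bot.loop, download=True)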
Would you be able to post all your code? I may have a solution for you if I can see the whole thing.
The solution I would recommend is to download the song and then delete it afterwards.
You could set your download to true and then add this in your player_loop:
try:
    # We are no longer playing this song... so let's delete it!
    with YoutubeDL(ytdlopts) as ydl:
        info = ydl.extract_info(source.web_url, download=False)
        filename = ydl.prepare_filename(info)
    try:
        if os.path.exists(filename):
            os.remove(filename)
        else:
            pass
    except Exception as E:
        print(E)
    await self.np.delete()
except discord.HTTPException:
    pass
It's a bit botched but could be cleaned up; this was the best solution I have found.
