I am trying to run this report:
import argparse
import httplib2

from googleapiclient.discovery import build
from oauth2client import client, file, tools

SCOPES = ['https://www.googleapis.com/auth/yt-analytics.readonly',
          'https://www.googleapis.com/auth/yt-analytics.force-ssl',
          'https://www.googleapis.com/auth/yt-analytics-monetary.readonly',
          'https://www.googleapis.com/auth/youtube.readonly']
API_SERVICE_NAME = 'youtubeAnalytics'
API_VERSION = 'v2'
CLIENT_SECRETS_FILE = '/Users/secret.json'
def initialize_analyticsreporting():
    parser = argparse.ArgumentParser(
        formatter_class=argparse.RawDescriptionHelpFormatter,
        parents=[tools.argparser])
    flags = parser.parse_args([])

    flow = client.flow_from_clientsecrets(
        CLIENT_SECRETS_FILE, scope=SCOPES,
        message=tools.message_if_missing(CLIENT_SECRETS_FILE))

    # Cache credentials on disk so the OAuth consent flow only runs once.
    storage = file.Storage('analyticsreporting.dat')
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = tools.run_flow(flow, storage, flags)

    http = credentials.authorize(http=httplib2.Http())
    analytics = build(API_SERVICE_NAME, API_VERSION, http=http)
    return analytics
def execute_api_request(client_library_function, **kwargs):
    response = client_library_function(**kwargs).execute()
    print(response)
youtubeAnalytics = initialize_analyticsreporting()
execute_api_request(
    youtubeAnalytics.reports().query,
    ids='channel==MINE',
    startDate='2020-05-01',
    endDate='2020-12-31',
    dimensions='video',
    metrics='views,likes,dislikes,shares,adImpressions',
    maxResults=200,
    sort='-views'
)
But I get:
"Insufficient permission to access this report."
I am authenticated with OAuth and I am the content owner, but I can't get the adImpressions metric to work.
My end goal is simply to get the impressions of a video using the YouTube Analytics API. I've seen multiple threads on this topic, but none of them answers the question.
As DalmTo mentioned in the comments, the fix was to request additional scopes. After changing

SCOPES = ['https://www.googleapis.com/auth/yt-analytics.readonly']

to

SCOPES = ['https://www.googleapis.com/auth/yt-analytics.readonly',
          'https://www.googleapis.com/auth/yt-analytics-monetary.readonly',
          'https://www.googleapis.com/auth/youtube.readonly']

and reauthenticating (deleting the .dat file and letting a new one be created), I am now able to receive the desired metrics.
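For anyone else hitting this, the snippet below is the one-time cleanup I mean (a minimal sketch; 'analyticsreporting.dat' matches the filename passed to file.Storage above). The cached token was issued for the old, narrower scopes, so removing it forces a fresh consent flow that includes the newly added ones:

import os

# Delete the cached OAuth token so the next run triggers the consent
# screen again and the new scopes are granted.
if os.path.exists('analyticsreporting.dat'):
    os.remove('analyticsreporting.dat')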
I've just created a new Google Analytics property, and it now defaults to data streams instead of views.
I had some code that was fetching reports through the API, and I now need to update it to work with those data streams, since there are no views anymore.
I've looked in the docs but I don't see anything related to data streams. Does anybody know how this is done now?
Here's my current code, which works with a view ID (I'm using the ruby google-api-client gem):
VIEW_ID = "XXXXXX"
SCOPE = 'https://www.googleapis.com/auth/analytics.readonly'
client = AnalyticsReportingService.new
#server to server auth mechanism using a service account
#creds = ServiceAccountCredentials.make_creds({:json_key_io => File.open('account.json'), :scope => SCOPE})
#creds.sub = "myserviceaccount#example.iam.gserviceaccount.com"
client.authorization = #creds
#metrics
metric_views = Metric.new
metric_views.expression = "ga:pageviews"
metric_unique_views = Metric.new
metric_unique_views.expression = "ga:uniquePageviews"
#dimensions
dimension = Dimension.new
dimension.name = "ga:hostname"
#range
range = DateRange.new
range.start_date = start_date
range.end_date = end_date
#sort
orderby = OrderBy.new
orderby.field_name = "ga:pageviews"
orderby.sort_order = 'DESCENDING'
rr = ReportRequest.new
rr.view_id = VIEW_ID
rr.metrics = [metric_views, metric_unique_views]
rr.dimensions = [dimension]
rr.date_ranges = [range]
rr.order_bys = [orderby]
grr = GetReportsRequest.new
grr.report_requests = [rr]
response = client.batch_get_reports(grr)
I would expect there to be a stream_id property on the ReportRequest object that I could use instead of the view_id, but that's not the case.
Your existing code uses the Google Analytics Reporting API to extract data from a Universal Analytics account.
Your new Google Analytics property is a GA4 property. To extract data from it, you need to use the Google Analytics Data API. These are two completely different systems; you will not be able to just port your code over.
You can find info on the new API and the new library here: Ruby Client for the Google Analytics Data API
$ gem install google-analytics-data
Thanks to Linda's answer I was able to get it working. Here's the same code ported to the Data API; it might end up being useful to someone:
require "google/analytics/data"

client = Google::Analytics::Data.analytics_data do |config|
  config.credentials = "account.json"
end

metric_views = Google::Analytics::Data::V1beta::Metric.new(name: "screenPageViews")
metric_unique_views = Google::Analytics::Data::V1beta::Metric.new(name: "totalUsers")
dimension = Google::Analytics::Data::V1beta::Dimension.new(name: "hostName")
range = Google::Analytics::Data::V1beta::DateRange.new(start_date: start_date, end_date: end_date)

# screenPageViews is a metric, so the ordering has to go through MetricOrderBy;
# a DimensionOrderBy on a metric name would be rejected by the API.
order_metric = Google::Analytics::Data::V1beta::OrderBy::MetricOrderBy.new(metric_name: "screenPageViews")
orderby = Google::Analytics::Data::V1beta::OrderBy.new(desc: true, metric: order_metric)

request = Google::Analytics::Data::V1beta::RunReportRequest.new(
  property: "properties/#{PROPERTY_ID}",
  metrics: [metric_views, metric_unique_views],
  dimensions: [dimension],
  date_ranges: [range],
  order_bys: [orderby]
)
response = client.run_report request
I'm trying to do a simple query from the Google Vault API using JSON credentials provided by the Google API console for a service account. I'm getting a 400 response (Invalid Request / Invalid Argument) with the message:
"The user does not belong to a G Suite customer."
Does anyone know what I might be doing wrong? Do I have to augment the JSON with anything to indicate our G Suite account?
Thanks in advance.
The code's fairly straightforward:
require 'google/apis/vault_v1'
require 'googleauth'

matter_id = 'xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx'
vault = Google::Apis::VaultV1::VaultService.new
scope = 'https://www.googleapis.com/auth/ediscovery.readonly'
vault.authorization = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: File.open('./xxxxxxxxxx.json'),
  scope: scope)
vault.authorization.fetch_access_token!
m = vault.get_matter(matter_id)
Update: this has been resolved. You have to set sub on the credentials after you create them, so that the service account impersonates a real user in your domain:
matter_id = 'xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx'
vault = Google::Apis::VaultV1::VaultService.new
scope = 'https://www.googleapis.com/auth/ediscovery.readonly'
credentials = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: File.open('./xxxxxxxxxx.json'),
  scope: scope)
# Impersonate a real G Suite user; the service account itself does not
# belong to the G Suite customer, which is what triggered the 400.
credentials.update!(sub: 'user@domain.com')
vault.authorization = credentials
vault.authorization.fetch_access_token!
m = vault.get_matter(matter_id)
How do I get a media instance through the Twilio REST API? I'm looking to either download the file or fetch the URL for that media. I'm using Ruby.
You can get the media SID from a message resource. For example:
require 'twilio-ruby'

account_sid = ENV['TWILIO_ACCOUNT_SID']
auth_token = ENV['TWILIO_AUTH_TOKEN']
@client = Twilio::REST::Client.new(account_sid, auth_token)

messages = @client.conversations
                  .conversations('CHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
                  .messages
                  .list(order: 'desc', limit: 20)

messages.each do |message|
  puts message.sid
  message.media.each do |media|
    puts "#{media.sid}: #{media.filename} #{media.content_type}"
  end
end
I've not actually tried the above; the media objects may just be plain hashes, in which case you would access the SID with media['sid'] instead.
Once you have the SID, you can fetch the media by constructing the following URL using the Chat service SID and the Media SID:
https://mcs.us1.twilio.com/v1/Services/<chat_service_sid>/Media/<Media SID>
For downloading files in Ruby, I like to use the Down gem. You can read about how to use Down to download images here. Briefly, here's how you would use Down and the URL above to download the image:
conversation_sid = "CHXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
media_sid = "MEXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
account_sid = ENV["TWILIO_ACCOUNT_SID"]
auth_token = ENV["TWILIO_AUTH_TOKEN"]
url = "https://#{account_sid}:#{auth_token}#mcs.us1.twilio.com/v1/Services/#{conversation_sid}/Media/#{media_sid}"
tempfile = Down.download(url)
I'm trying to upload videos to YouTube using Django and MSSQL. I want to store the user data in the DB so that I can log in from multiple accounts and post videos.
The official documentation provided by YouTube implements a file system: after login, all the user data gets saved to a file. I don't want to store any data in a file, since keeping credentials in files would be a huge risk and not good practice. So how can I bypass this step, save the data directly to the DB, and retrieve it when I want to post videos to a specific account?
In short, I want to replace the pickle file implementation with storing the credentials in the database (see the rough sketch after my code below).
Here's my code
import os
import pathlib
import pickle
import random
import time

from google.auth.transport.requests import Request
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from googleapiclient.http import MediaFileUpload
from rest_framework.decorators import api_view
from rest_framework.response import Response

# Retry constants as in the official YouTube upload sample (the sample also
# lists several httplib2/http.client errors as retriable).
MAX_RETRIES = 10
RETRIABLE_STATUS_CODES = [500, 502, 503, 504]
RETRIABLE_EXCEPTIONS = (IOError,)

def youtubeAuthenticate():
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"
    api_service_name = "youtube"
    api_version = "v3"
    client_secrets_file = "client_secrets.json"
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first time.
    if os.path.exists("token.pickle"):
        with open("token.pickle", "rb") as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    # SCOPES is assumed to be defined elsewhere in the module.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(client_secrets_file, SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run.
        with open("token.pickle", "wb") as token:
            pickle.dump(creds, token)
    return build(api_service_name, api_version, credentials=creds)
@api_view(['GET', 'POST'])
def postVideoYT(request):
    youtube = youtubeAuthenticate()
    print('yt', youtube)
    try:
        initialize_upload(youtube, request.data)
    except HttpError as e:
        print("An HTTP error %d occurred:\n%s" % (e.resp.status, e.content))
    return Response("Hello")
def initialize_upload(youtube, options):
    print('options', options)
    print("title", options['title'])
    # tags = None
    # if options.keywords:
    #     tags = options.keywords.split(",")
    body = dict(
        snippet=dict(
            title=options['title'],
            description=options['description'],
            tags=options['keywords'],
            categoryId=options['categoryId']
        ),
        status=dict(
            privacyStatus=options['privacyStatus']
        )
    )
    # Call the API's videos.insert method to create and upload the video.
    insert_request = youtube.videos().insert(
        part=",".join(body.keys()),
        body=body,
        media_body=MediaFileUpload(options['file'], chunksize=-1, resumable=True)
    )
    path = pathlib.Path(options['file'])
    ext = path.suffix
    getSize = os.path.getsize(options['file'])
    resumable_upload(insert_request, ext, getSize)
# This method implements an exponential backoff strategy to resume a
# failed upload.
def resumable_upload(insert_request, ext, getSize):
    response = None
    error = None
    retry = 0
    while response is None:
        try:
            print("Uploading file...")
            status, response = insert_request.next_chunk()
            if response is not None:
                respData = response
                if 'id' in response:
                    print("Video id '%s' was successfully uploaded." % response['id'])
                else:
                    exit("The upload failed with an unexpected response: %s" % response)
        except HttpError as e:
            if e.resp.status in RETRIABLE_STATUS_CODES:
                error = "A retriable HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
            else:
                raise
        except RETRIABLE_EXCEPTIONS as e:
            error = "A retriable error occurred: %s" % e

        if error is not None:
            print(error)
            retry += 1
            if retry > MAX_RETRIES:
                exit("No longer attempting to retry.")
            max_sleep = 2 ** retry
            sleep_seconds = random.random() * max_sleep
            print("Sleeping %f seconds and then retrying..." % sleep_seconds)
            time.sleep(sleep_seconds)
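What I have in mind is roughly the sketch below. The YoutubeCredential model and its token_json text field are hypothetical (something I'd still have to define); the idea is to serialize the google-auth credentials to JSON in the DB instead of pickling them to a file:

import json

from google.oauth2.credentials import Credentials
from myapp.models import YoutubeCredential  # hypothetical model: account_id + token_json text field

def load_creds(account_id):
    # Rebuild Credentials from JSON previously stored in the DB, if any.
    row = YoutubeCredential.objects.filter(account_id=account_id).first()
    if row:
        return Credentials.from_authorized_user_info(json.loads(row.token_json))
    return None

def save_creds(account_id, creds):
    # creds.to_json() carries the same data the pickle file held.
    YoutubeCredential.objects.update_or_create(
        account_id=account_id,
        defaults={"token_json": creds.to_json()},
    )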
I am trying to integrate a QnA Maker knowledge base with Azure Bot Service.
I am unable to find the knowledge base ID on the QnA Maker portal.
How do I find the kbid in the QnA Maker portal?
The Knowledge Base ID can be located in Settings under "Deployment details" in your knowledge base. It is the GUID that is nestled between "knowledgebases" and "generateAnswer" in the POST request shown there.
Hope this helps!
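For reference, the POST request shown in the deployment details looks roughly like this (hosts and keys redacted; the GUID between "knowledgebases" and "generateAnswer" is your knowledge base ID):

POST /knowledgebases/<knowledge-base-id>/generateAnswer
Host: https://<your-resource-name>.azurewebsites.net/qnamaker
Authorization: EndpointKey <endpoint-key>
Content-Type: application/json
{"question":"<your question>"}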
Hey, you can also use Python to get this; take a look at the following code. That is, if you want to write a program to dynamically fetch the KB IDs.
import http.client, os, urllib.parse, json, time, sys

# Represents the various elements used to create the HTTP request path
# for QnA Maker operations.
# Replace these with your resource name and a valid subscription key.
host = '<your-resource-name>.cognitiveservices.azure.com'
subscription_key = '<QnA-Key>'
get_kb_method = '/qnamaker/v4.0/knowledgebases/'

try:
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-Type': 'application/json'
    }
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", get_kb_method, None, headers)
    response = conn.getresponse()
    data = response.read().decode("UTF-8")
    result = None
    if len(data) > 0:
        result = json.loads(data)
        # print(json.dumps(result, sort_keys=True, indent=2))
    # Grab the ID of the first knowledge base in the list.
    KB_id = result["knowledgebases"][0]["id"]
    print(response.status)
    print(KB_id)
except:
    print("Unexpected error:", sys.exc_info()[0])
    print("Unexpected error:", sys.exc_info()[1])