video.insert failing silently with positive response and video id

I have been testing my Python video upload script for two days.
Yesterday everything was fine: uploads succeeded until I hit the quota limit.
Today I continued testing: insert/upload succeeds and the response contains a video id, but the video with that id never appears on the channel. It is not visible in YouTube Studio either.
I tried two different videos with the same result, and I received an id for each of them.
Now the quota is exhausted again.
Examples of ids that were uploaded but are not visible: pSqyId96gTk, -kw-yn-qAxI
If I upload the same video through the YouTube web frontend, everything is fine.
Any idea how to analyse this problem?
Here is part of the response dict:

{'kind': 'youtube#video',
 'etag': 'MgZ0r9Yu43ERF415Jw1lPRJgmDc',
 'id': 'ZAuHewNcxL8',
 'snippet': {
     'publishedAt': '2021-12-22T12:40:54Z',
     'channelId': 'UCNSTNQqqGxwBqoOdekZPj_A',
     ...
 },
 'status': {
     'uploadStatus': 'uploaded',
     'privacyStatus': 'private',
     'license': 'youtube',
     'embeddable': True,
     'publicStatsViewable': True}
}
Setting privacyStatus to "public" has the same effect.
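One way to analyse it is to ask the API what happened to those ids. Below is a minimal sketch using google-api-python-client; it assumes `youtube` is the same authorized service object your upload script uses:

# Minimal sketch: query the upload/processing status of the "missing" videos.
# Assumes `youtube` is the authorized service object from the upload script.
response = youtube.videos().list(
    part="status,processingDetails",
    id="pSqyId96gTk,-kw-yn-qAxI",  # the ids returned by videos.insert
).execute()

for item in response.get("items", []):
    status = item.get("status", {})
    details = item.get("processingDetails", {})
    print(item["id"],
          status.get("uploadStatus"),      # "uploaded", "processed", "rejected", ...
          status.get("failureReason"),     # set when uploadStatus is "failed"
          status.get("rejectionReason"),   # set when uploadStatus is "rejected"
          details.get("processingStatus"))

If the ids come back with uploadStatus "rejected", or do not come back at all, the ids were issued but the content was discarded, which narrows down where to look.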

Related

People API - QUOTA_EXCEEDED / FBS quota limit exceeded

The Google People API quickstart page correctly explains how to authenticate and list 10 example contacts, and that part works perfectly:
https://developers.google.com/people/quickstart/python
I can authenticate and list the 10 contacts without problems, but I am getting an error when trying to create new contacts.
The API returns the following error:
HttpError: <HttpError 429 when requesting https://people.googleapis.com/v1/people:createContact?alt=json returned "Resource has been exhausted (e.g. check quota).". Details: "[{'#type': 'type.googleapis.com/google.rpc.QuotaFailure', 'violations': [{'subject': 'QUOTA_EXCEEDED', 'description': 'FBS quota limit exceeded.'}]}]">
When I click on https://people.googleapis.com/v1/people:createContact?alt=json, I get the following JSON on the page:
{
  "error": {
    "code": 403,
    "message": "The request is missing a valid API key.",
    "status": "PERMISSION_DENIED"
  }
}
The scopes are set correctly; I was even creating contacts with them a few months ago.
Out of nowhere everything stopped working, and I keep running into QUOTA_EXCEEDED and FBS quota limit exceeded.
I redid the entire authentication process and tried listing contacts again without problems; everything works perfectly EXCEPT the creation of contacts.
Some observations:
I run this from a Jupyter notebook, and I am also logged in to the email account where I want to create the contacts.
I have tried running it in an IDE, with the same problem.
I have created 26,888 contacts this way.
This project does not appear in the Google console, because I think I set the entire project up through the documentation page. I do not believe the quotas are exhausted, since I can see the values correctly: I create on average 1 contact every 3 seconds and at most 200 contacts per day.
I would really appreciate help in finding out why I cannot create more contacts; I have a lot of work pending because of this. Thanks.
My code to create contacts:
import os
import pickle
import time

from google.auth.transport.requests import Request
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

# Scope required to create contacts (define as in your own setup).
SCOPES = ['https://www.googleapis.com/auth/contacts']

def main():
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)
    service = build('people', 'v1', credentials=creds)

    # ---------------- creating contacts ----------------------
    # df_banco_linhas and df_csv_linhas are loaded elsewhere in my notebook;
    # each row holds (name, phone number).
    print('trying')
    for i in df_banco_linhas[:2]:
        if i[1] not in df_csv_linhas:
            time.sleep(3)
            service.people().createContact(body={
                "names": [
                    {
                        "givenName": i[0]
                    }
                ],
                "phoneNumbers": [
                    {
                        'value': i[1]
                    }
                ]
            }).execute()
            print('create: ' + i[0])
            time.sleep(3)
        else:
            print('NO')

if __name__ == '__main__':
    main()
Since the problem only happened when creating contacts, I investigated the limit on the number of contacts and found the 25,000-contact limit in the documentation.
I was forced to create another email account to solve the problem and double my capacity to 50,000 contacts (synchronizing two accounts).
The error message suggests the problem is a request quota, when in fact it is the limit on contacts per account.
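If you want to check how close an account is to that cap before creating more contacts, connections.list reports a totalItems counter. A rough sketch, assuming `service` is the authorized People client built in the code above:

# Rough sketch: read the account's current contact count before creating more.
# Assumes `service` is the authorized People API client from the question.
response = service.people().connections().list(
    resourceName="people/me",
    personFields="names",
    pageSize=1,
).execute()

total = response.get("totalItems", 0)
print("%d contacts stored; the documented cap is 25,000 per account" % total)
if total >= 25000:
    print("At the limit: createContact will keep failing with QUOTA_EXCEEDED")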
I was getting this same quota limit exceeded error ("FBS quota limit exceeded.") for a different reason: I was supplying values that were too long for the Organization.jobDescription field.
Perhaps this specific quota error is triggered when non-rate constraints are violated, such as the total number of contacts per account or the maximum length of a field.
This may not be intended, since that kind of violation does not fit the 429 status code, and that limit is not listed in the Quotas section of the API/Service Details page for the People API in the console.
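If long field values are indeed the trigger, a defensive truncation before the create call works around it. A hypothetical guard: the 100-character cap, `org_name`, and `job_description` below are my own placeholders, not documented People API limits or fields from the question:

# Hypothetical guard: trim string fields before createContact so an oversized
# value (e.g. a long jobDescription) cannot trip the quota error.
MAX_FIELD_LEN = 100  # assumed safe length, not a documented People API limit

def trimmed(value, limit=MAX_FIELD_LEN):
    return value[:limit] if isinstance(value, str) else value

body = {
    "organizations": [{
        "name": trimmed(org_name),                    # your own data
        "jobDescription": trimmed(job_description),   # the field that failed
    }]
}
service.people().createContact(body=body).execute()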

Unable to upload image in Parse dashboard, logs say: Could not store file, quota exceeded

I am working on an Android app that uses Parse as a backend [Parse-Heroku-mLab (sandbox plan)]. The app lists different services in the city, with complete information about each service owner, including user images and icons.
Issue: when I try to upload images, the upload fails (until yesterday it was working fine), while the text fields upload and appear in the dashboard.
The Parse logs say:
2017-05-28T04:30:17.793Z - Could not store file.
2017-05-28T04:30:17.790Z - quota exceeded
Screenshots of the MongoDB stats are attached.
Could the issue be with mLab? The sandbox plan gives storage up to 512 MB. I tried freeing space, but no luck.
Try using a CDN for storage, like the GCSAdapter or the S3Adapter. It is plain simple to set up, and old images will continue to work as normal.
var GCSAdapter = require('parse-server-gcs-adapter');

var gcsAdapter = new GCSAdapter('project',
    'keyFilePath',
    'bucket', {
        bucketPrefix: '',
        directAccess: false
    });

var api = new ParseServer({
    appId: 'my_app',
    masterKey: 'master_key',
    filesAdapter: gcsAdapter
});
mLab calculates your quota from the fileSize value reported by MongoDB. A single image from a modern phone can be 3 to 12 MB.
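You can verify this from the database side. A small pymongo sketch; the connection URI is a placeholder for your mLab URI, and fileSize is the figure reported by the MMAPv1 storage engine that mLab sandboxes use:

# Small sketch: read the MongoDB fileSize that mLab counts against the
# 512 MB sandbox quota. Replace the URI with your own mLab connection string.
from pymongo import MongoClient

client = MongoClient("mongodb://user:password@host:port/dbname")
stats = client.get_default_database().command("dbstats")

used_mb = stats["fileSize"] / (1024 * 1024)
print("fileSize: %.1f MB of the 512 MB sandbox quota" % used_mb)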

Upload more than 50 Videos using YouTube API

I have a new channel with no videos uploaded yet. When I tried uploading some 500 videos I had, using the YouTube Data API, the upload process stopped after about 50 videos. I do not understand how my quota usage could have reached 300,000 (the default per-100-seconds limit), since the quota cost of uploading one video is just 1,600 units. I have to upload around 500-600 videos every day, as the nature of my business demands it. Please help.
[RequestError] Server response: {
  "error": {
    "errors": [
      {
        "domain": "youtube.video",
        "reason": "uploadLimitExceeded",
        "message": "The user has exceeded the number of videos they may upload."
      }
    ],
    "code": 400,
    "message": "The user has exceeded the number of videos they may upload."
  }
}
This is a user-based quota, not a project-based quota. It has nothing to do with what you are seeing in the Google Developer console.
The quota used to be:
400 video uploads, 1,500 write operations, and 50,000 read operations that each retrieve two resource parts.
Google has apparently changed how the quota works: a user can upload 50 videos, then only one video every 15 minutes until the quota resets. The quota resets at midnight, US West Coast time.
I have an email out to the team asking for feedback on this.
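In practice, a bulk uploader has to treat uploadLimitExceeded as a signal to back off rather than as a project quota problem. A hedged Python sketch of that handling; `upload_video` is a placeholder for your own videos.insert wrapper, and the 15-minute wait is taken from the observation above:

import time
from googleapiclient.errors import HttpError

# Sketch: back off when the per-user upload cap is hit, instead of burning
# project quota on retries. upload_video(path) wraps your videos.insert call.
def upload_with_backoff(paths, wait_seconds=15 * 60):
    for path in paths:
        while True:
            try:
                upload_video(path)  # your own videos.insert wrapper
                break
            except HttpError as err:
                if err.resp.status == 400 and b"uploadLimitExceeded" in err.content:
                    print("Upload cap hit; sleeping %ds before retrying" % wait_seconds)
                    time.sleep(wait_seconds)
                else:
                    raise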

How to get all the videos of a YouTube channel with the Yt gem?

I want to use the Yt gem to get all the videos of a channel. I configured the gem with my YouTube Data API key.
Unfortunately, it returns a maximum of roughly 1,000 videos, even for channels that have more than 1,000 videos. Yt::Channel#video_count returns the correct number of videos.
channel = Yt::Channel.new id: "UCGwuxdEeCf0TIA2RbPOj-8g"
channel.video_count # => 1845
channel.videos.map(&:id).size # => 949
The YouTube API cannot be set to return more than 50 items per request, so I assume Yt automatically performs several requests, walking through each page of results, in order to return more than 50.
For some reason, though, it does not go through all the result pages. I see no way in Yt to control how it pages through the results. In particular, I could not find a way to force it to fetch a single page of results, read the returned nextPageToken, and perform a new request with that value.
Any idea?
Looking into the gem's /spec folder, you can find a test for your case:

describe 'when the channel has more than 500 videos' do
  let(:id) { 'UC0v-tlzsn0QZwJnkiaUSJVQ' }

  specify 'the estimated and actual number of videos can be retrieved' do
    # NOTE: in principle, the following counters should match, but in
    # reality +video_count+ and +size+ are only approximations.
    expect(channel.video_count).to be > 500
    expect(channel.videos.size).to be > 500
  end
end
I did some tests, and what I noticed is this: video_count is the number displayed on YouTube next to the channel's name. This value is not accurate; I am not really sure what it represents.
channel.videos.size is not accurate either, because the videos collection can contain some empty(?) records.
channel.videos.map(&:id).size should return the correct value. By correct I mean it should equal the number of videos listed at:
https://www.youtube.com/channel/:channel_id/videos
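For comparison, this is roughly the paging the gem performs internally. A raw-API Python sketch that walks a channel's uploads playlist page by page via nextPageToken (assuming `youtube` is a google-api-python-client service built with a Data API key):

# Sketch of the paging the gem does under the hood: resolve the channel's
# "uploads" playlist, then follow nextPageToken until it runs out.
channel = youtube.channels().list(
    part="contentDetails", id="UCGwuxdEeCf0TIA2RbPOj-8g"
).execute()
uploads_id = channel["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]

video_ids, page_token = [], None
while True:
    page = youtube.playlistItems().list(
        part="contentDetails", playlistId=uploads_id,
        maxResults=50, pageToken=page_token,
    ).execute()
    video_ids += [item["contentDetails"]["videoId"] for item in page["items"]]
    page_token = page.get("nextPageToken")
    if page_token is None:
        break

print(len(video_ids), "videos collected")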

PlaylistItems: list does not return videoId when using part:id without snippet

I am trying to manage the "cost" of API requests, so I want to generate a delta of the videos that were added to a playlist since the last request.
I would like to make the "0-cost" request of fetching just the videoIds before getting additional details about each video in the playlist:
GET https://www.googleapis.com/youtube/v3/playlistItems?part=id&playlistId=PLlTLHnxSVuIyeEZPBIQF_krewJkY2JSwi&key={YOUR_API_KEY}
The response looks like this:

"items": [
  {
    "kind": "youtube#playlistItem",
    "etag": "\"5g01s4-wS2b4VpScndqCYc5Y-8k/2wturocJM7aMkvG4Zrmv45tbyWY\"",
    "id": "UExsVExIbnhTVnVJeWVFWlBCSVFGX2tyZXdKa1kySlN3aS4xMjU2MjFGMDJBNEUzQzcw"
  },
The playlistItem id cannot be used with videos.list to get additional info about the video; instead, part=snippet, which has a cost associated with it, has to be added to the playlistItems request. Is this a bug or intentional? Also, is there a way to map the playlistItem id to a videoId/resourceId?
Firstly, all calls have a cost, no matter what; how much depends on your request.
And yes, this is by design. They want to limit the number of calls to the system as much as possible. This makes for better streamlining of requests, as well as reducing strain on the site.
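That said, you do not need part=snippet just to recover the video ids: requesting part=contentDetails on playlistItems.list returns contentDetails.videoId, which videos.list accepts directly. A short Python sketch (assuming `youtube` is a google-api-python-client service built with your API key):

# Sketch: map playlist items to video ids without requesting part=snippet.
# contentDetails.videoId is the id that videos.list understands.
items = youtube.playlistItems().list(
    part="contentDetails",
    playlistId="PLlTLHnxSVuIyeEZPBIQF_krewJkY2JSwi",
    maxResults=50,
).execute()

video_ids = [item["contentDetails"]["videoId"] for item in items["items"]]
videos = youtube.videos().list(part="snippet", id=",".join(video_ids)).execute()
for video in videos["items"]:
    print(video["id"], video["snippet"]["title"])

contentDetails still carries its own (small) part cost, but it avoids fetching the full snippet for every playlist item.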
