We are working on an application that uses Parse as its backend, and we have been experiencing timeout problems since last week.
The issue we are facing is "operation was slow and timed out" with error code 124. We are getting this error a few hundred times a day. It appears both for simple requests with a limit of 1 result and for more complex requests.
We are using the Parse JavaScript SDK.
Is anyone else experiencing this issue, or does anyone have any advice?
Thanks.
There is a limit of 1,000 objects per query.
Keep in mind there is a 3-second timeout in place for any query.
beforeSave/afterSave methods and Cloud Functions should finish within 15 seconds.
Calling a Cloud Function counts as one API request.
Using the Parse JavaScript SDK from within a Cloud Function uses API requests as usual.
As suggested before, you should check:
https://www.parse.com/apps/NAMEOFYOURAPP#performance (replace NAMEOFYOURAPP with the correct app ID)
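With limits like these, occasional 124 timeouts are hard to eliminate entirely, so a client-side retry can smooth them over. A minimal sketch (in Python for illustration; the wrapper and the `code` attribute on the raised error are assumptions — adapt them to the error object your SDK actually throws):

```python
import time

PARSE_TIMEOUT_CODE = 124  # Parse's "operation was slow and timed out"

def with_retries(run_query, max_tries=3, base_delay=0.5):
    """Retry a query a few times when it fails with Parse error 124.

    `run_query` is any zero-argument callable that performs the request
    and either returns a result or raises an exception carrying a `code`
    attribute (a hypothetical error shape; adapt it to your SDK).
    """
    for attempt in range(max_tries):
        try:
            return run_query()
        except Exception as err:
            last_try = attempt == max_tries - 1
            if getattr(err, "code", None) != PARSE_TIMEOUT_CODE or last_try:
                raise  # not a timeout, or out of retries: propagate
            time.sleep(base_delay * (2 ** attempt))  # brief backoff before retrying
```

Note that each retry still counts against your API request quota, so keep `max_tries` small.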
No matter what we try, all the YouTube API requests we make are failing.
Since we first thought this was a propagation issue, we waited 10 minutes, then 30 minutes, then 2 hours, and now over 24 hours, to no avail.
We have found this thread, which covers a similar issue with an iOS app, but does not correspond to our use case.
Here is a run-down of what the problem is:
Activating the "YouTube Data API v3" for our account shows as successful, and the API shows as enabled.
A POST to https://www.googleapis.com/upload/youtube/v3/videos (videos.insert) consistently fails with the following error, despite the fact that we have waited hours for the API enablement to propagate:
Access Not Configured. YouTube Data API has not been used in project XXXXXXXXXXXX before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/youtube.googleapis.com/overview?project=928939889952 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Although the error does not directly point to this, our "Queries per day" quota for the YouTube Data API shows as 0. We cannot understand why it is zero, and unfortunately, all attempts to edit it to something higher have failed; disabling and then re-enabling the API has not solved the problem either. In a completely separate project/account, the quota shows as 10,000 when enabling the YouTube Data API, and video insert API calls do work under that project.
This is a significant roadblock for us, as it prevents us from deploying our application. Any help would be appreciated.
No access configured
actually means that you don't have permission to access the API. It basically means you have enabled the API but don't have the quota to use it. It's different from the "you have run out of quota" error message.
After a strange change sometime last year, the default quota for the YouTube API is now 0. You need to request a quota extension, and it can take anywhere from a week to several months to get permission to use the API.
It took me three months. No, I don't have any idea how they expect anyone to develop a new application without any quota, or to know ahead of time that they need to apply for quota in order to start developing. It's quite frustrating.
We have a web application which was working fine until yesterday. But since yesterday afternoon, all the API keys in one of our projects in the Google API console started giving the OVER_QUERY_LIMIT error.
We cross-checked, and the quotas for that project and API are not exhausted. Can anybody help me understand what may have caused this?
Even after a day, the API keys are still giving the same error.
To give more information: we are using the Geocoding API and the Distance Matrix API in our application.
If you exceed the usage limits you will get an OVER_QUERY_LIMIT status code as a response. This means that the web service will stop providing normal responses and switch to returning only status code OVER_QUERY_LIMIT until more usage is allowed again. This can happen:
Within a few seconds, if the error was received because your application sent too many requests per second.
Within the next 24 hours, if the error was received because your application sent too many requests per day. The daily quotas are reset at midnight, Pacific Time.
This screencast provides a step-by-step explanation of proper request throttling and error handling, which is applicable to all web services.
Upon receiving a response with status code OVER_QUERY_LIMIT, your application should determine which usage limit has been exceeded. This can be done by pausing for 2 seconds and resending the same request. If the status code is still OVER_QUERY_LIMIT, your application is sending too many requests per day. Otherwise, your application is sending too many requests per second.
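The probe described above can be sketched as follows (in Python for illustration; `resend_request` and the returned status strings are placeholders for however your client actually issues the call and reads the response):

```python
import time

def classify_over_query_limit(resend_request, pause=2.0):
    """Tell a per-second limit from a per-day limit after OVER_QUERY_LIMIT.

    `resend_request` is a zero-argument callable that resends the request
    that just failed and returns the service's status string (a
    placeholder for your client's request logic).
    """
    time.sleep(pause)  # pause for 2 seconds, then resend the same request
    if resend_request() == "OVER_QUERY_LIMIT":
        return "daily"       # still blocked: too many requests per day
    return "per-second"      # recovered: too many requests per second
```

If the result is "per-second", throttling your request rate is enough; if it is "daily", no amount of retrying will help until the quota resets at midnight, Pacific Time.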
Note: It is also possible to get the OVER_QUERY_LIMIT error:
From the Google Maps Elevation API when more than 512 points per request are provided.
From the Google Maps Distance Matrix API when more than 625 elements per request are provided.
Applications should ensure these limits are not reached before sending requests.
Documentation: usage limits
I have a service that takes a queue of courses created in my SIS, and I am trying to create them automatically via the Google Classroom API. I was able to create around 1,000 courses, and now I am getting the error below:
Google.Apis.Requests.RequestError
The service is currently unavailable. [503]
Errors [
Message[The service is currently unavailable.] Location[ - ] Reason[backendError] Domain[global]
]
It does not seem to matter what I do, the error still occurs.
This is a regular occurrence with Google APIs. It's the method Google's servers use to say "you're going too fast, slow down." To handle this, well-behaved API clients should implement exponential backoff.
So, for example, your script can create courses as fast as it can as long as it's getting HTTP 2xx success responses from Google. As soon as it sees a 503 backend error, it should pause all calls for 1 second and then retry the failed operation. Very often the operation will succeed on the second try, but if it doesn't, your script should pause 2 seconds, then 4, then 8, and so on, until success. I recommend maxing out at 10 tries and then failing with an error.
If your script does not back off and just continues to retry API calls with no pause, you are likely to see an increase in errors like this, and your script may eventually be blacklisted.
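The retry loop above can be sketched like this (in Python for illustration; `BackendUnavailable` and `do_call` are placeholders for your client library's 503 error and your actual course-creation call):

```python
import time

class BackendUnavailable(Exception):
    """Stand-in for the client library's 503 'service unavailable' error."""

def call_with_backoff(do_call, max_tries=10, first_delay=1.0):
    """Retry `do_call` with exponential backoff on 503 backend errors."""
    delay = first_delay
    for attempt in range(1, max_tries + 1):
        try:
            return do_call()
        except BackendUnavailable:
            if attempt == max_tries:
                raise  # give up after max_tries, as recommended above
            time.sleep(delay)
            delay *= 2  # 1s, 2s, 4s, 8s, ...
```

Many clients also add a small random jitter to each delay so that parallel workers don't all retry at the same instant.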
I reran some test code that inserts several hundred items into my register. The code worked a few months ago. Now I'm receiving a 409: "conflict/Another update may be in progress, please try again."
The error can happen when issuing a batch request of item DELETEs or a batch request of item POSTs. It does NOT happen when each delete or post is issued as an individual request.
My process runs synchronously on a single thread, so I never have more than one request in flight to Square at any time.
I'm guessing this is a bug introduced as part of some code change to check for concurrent updates, but which has not been properly tested (again, just a guess).
Thanks for pointing out this undocumented error type. It is possible to receive a 409 error from items-related endpoints when you submit a large number of simultaneous requests (as with the Submit Batch endpoint). To reduce the likelihood of this error, you can reduce the number of requests you include in each individual batch (say from 30 to 15), or send each request individually.
Even after reducing the size of a batch, it is still possible that a 409 error might occur. Your application should be prepared to encounter this error and retry any affected requests.
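Both mitigations can be combined in one loop (a sketch in Python for illustration; `submit_batch` and `ConflictError` are placeholders for your actual call to the Submit Batch endpoint and its 409 error):

```python
import time

class ConflictError(Exception):
    """Stand-in for Square's 409 'Another update may be in progress'."""

def chunked(items, size):
    """Split a batch into smaller batches (e.g. one batch of 30 into two of 15)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def submit_all(items, submit_batch, batch_size=15, max_tries=5, pause=0.5):
    """Send items in small batches, retrying any batch that hits a 409.

    `submit_batch` is a placeholder for your call to the Submit Batch
    endpoint; assume it raises ConflictError on a 409 response.
    """
    for batch in chunked(items, batch_size):
        for attempt in range(max_tries):
            try:
                submit_batch(batch)
                break  # this batch went through; move to the next one
            except ConflictError:
                if attempt == max_tries - 1:
                    raise
                time.sleep(pause)  # brief pause before retrying the same batch
```

Retrying only the failed batch (rather than the whole upload) keeps you from re-submitting items that already succeeded.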
If the issue persists, please add a comment to this question and we will investigate further.
I've got my Web API interface working fine, but after a fairly short period (around 5 minutes) it gives a 500 Internal Server Error on the first call, and then works fine again until it times out. I'm not getting any additional information in the error message, and there's nothing going to the logs. I don't have the issue on my development machine, just on the live server. Any ideas what might be causing this? How can I get additional error information on a live server?
The simple answer to this was to set customErrors to Off in the Web.config file. That produced a very useful error message that made it easy to track down the problem.
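For reference, the setting goes under system.web in Web.config. Switch it back to On or RemoteOnly once you are done debugging, since detailed error pages can leak implementation details on a live server:

```xml
<configuration>
  <system.web>
    <!-- Show detailed error pages to remote clients while debugging -->
    <customErrors mode="Off" />
  </system.web>
</configuration>
```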