Seeing a high rate of SocketTimeoutException when calling the Calendar API - google-api

Our app uses the Calendar API extensively, and since 10 PM PDT on 6/19/2019 we have been seeing a high rate of SocketTimeoutException from the Calendar API Java client. It's not so bad that our app is entirely broken, but it's bad enough that it's hard to make any sequence of event updates without a failure.
I believe the default timeout is 20 seconds (which I thought was already pretty long); we raised it to 30 seconds, but that did not help. Should the timeout be longer than 30 seconds for event insert/update/delete calls?
Is it possible that we're being rate limited somehow? (Though I believe that would come back as a 403 with a relevant error message, not a SocketTimeoutException.) Or is Google Calendar experiencing some other issues after the outage?
Thanks!

If you're inserting thousands of events simultaneously, it's conceivable that you are exhausting some resource (sockets, bandwidth, etc.).
You may need to optimize your code by reducing the number of simultaneous API calls per user per second.
You can also increase the read timeouts; see Timeouts and Errors.
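The question is about the Java client, but as a minimal sketch of both suggestions in Python (the language used elsewhere on this page) with google-api-python-client, it could look roughly like this; the 60-second timeout and the backoff schedule are illustrative assumptions, and build_calendar_service / execute_with_backoff are hypothetical helpers:

import socket
import time

import httplib2
from googleapiclient.discovery import build

def build_calendar_service(credentials, timeout_seconds=60):
    # Raise the socket-level connect/read timeout above the default.
    http = credentials.authorize(httplib2.Http(timeout=timeout_seconds))
    return build('calendar', 'v3', http=http)

def execute_with_backoff(request, max_tries=5):
    # Retry timed-out calls with exponential backoff: 1 s, 2 s, 4 s, ...
    for attempt in range(max_tries):
        try:
            return request.execute()
        except socket.timeout:
            if attempt == max_tries - 1:
                raise
            time.sleep(2 ** attempt)

A call such as service.events().insert(calendarId='primary', body=event) would then be wrapped in execute_with_backoff rather than executed directly.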

Related

YouTube API requests failing due to "Access Not Configured" (also: "queries per day" quota is locked to 0)

No matter what we try, all YouTube API requests we make are failing.
As we first thought this was a propagation issue, we waited 10 minutes, then 30 minutes, then 2 hours, and now over 24 hours, to no avail.
We have found this thread, which covers a similar issue with an iOS app, but does not correspond to our use case.
Here is a run-down of what the problem is:
Activating the "YouTube Data API v3" for our account shows as successful, and the API shows as enabled.
A POST to https://www.googleapis.com/upload/youtube/v3/videos (videos insert) consistently fails with the following error, despite the fact that we have waited hours for the API enablement to propagate:
Access Not Configured. YouTube Data API has not been used in project XXXXXXXXXXXX before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/youtube.googleapis.com/overview?project=928939889952 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Although the error does not directly point to this, our "Queries per day" quota for the YouTube Data API is showing as "0". We are not able to understand why this is showing as zero, and unfortunately, all attempts to edit it to something higher have failed, and disabling and then re-enabling the API has not solved the problem. In a completely separate project/account, this shows as "10,000" when enabling the YouTube Data API, and indeed video insert API calls work under that project.
This is a significant roadblock for us, as it prevents us from deploying our application: any help would be appreciated.
"Access Not Configured" actually means that you don't have permission to access the API. It basically means you have enabled the API but don't have the quota to use it. It's different from the "you have run out of quota" error message.
After a strange change sometime last year, the default quota for the YouTube API is now 0. You need to request a quota extension, which can take anywhere from a week to several months to be granted.
It took me three months. No, I don't have any idea how they expect anyone to develop a new application without any quota, or to know ahead of time that they need to apply for quota before they can start developing. It's quite frustrating.

Google API Key giving Query over limit error

We have a web application that was working fine until yesterday. But since yesterday afternoon, all the keys in one of our projects in the Google API Console have started returning an OVER_QUERY_LIMIT error.
We cross-checked that the quotas for that project and API are not yet exhausted. Can anybody help me understand what may have caused this?
Even after a day of use, the API keys are still giving the same error.
Just to give more information, we are using the Geocoding API and Distance Matrix API in our application.
If you exceed the usage limits you will get an OVER_QUERY_LIMIT status code as a response. This means that the web service will stop providing normal responses and switch to returning only status code OVER_QUERY_LIMIT until more usage is allowed again. This can happen:
Within a few seconds, if the error was received because your application sent too many requests per second.
Within the next 24 hours, if the error was received because your application sent too many requests per day. The daily quotas are reset at midnight, Pacific Time.
This screencast provides a step-by-step explanation of proper request throttling and error handling, which is applicable to all web services.
Upon receiving a response with status code OVER_QUERY_LIMIT, your application should determine which usage limit has been exceeded. This can be done by pausing for 2 seconds and resending the same request. If the status code is still OVER_QUERY_LIMIT, your application is sending too many requests per day. Otherwise, your application is sending too many requests per second.
Note: It is also possible to get the OVER_QUERY_LIMIT error:
From the Google Maps Elevation API when more than 512 points per request are provided.
From the Google Maps Distance Matrix API when more than 625 elements per request are provided.
Applications should ensure these limits are not reached before sending requests.
Documentation usage limits
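As a minimal sketch of the check described above, using the standard Geocoding API web-service endpoint (the helper itself is hypothetical and the error handling is illustrative):

import time

import requests

GEOCODE_URL = 'https://maps.googleapis.com/maps/api/geocode/json'

def geocode_with_limit_check(address, api_key):
    params = {'address': address, 'key': api_key}
    result = requests.get(GEOCODE_URL, params=params).json()
    if result['status'] == 'OVER_QUERY_LIMIT':
        # Pause for 2 seconds and resend the same request.
        time.sleep(2)
        result = requests.get(GEOCODE_URL, params=params).json()
        if result['status'] == 'OVER_QUERY_LIMIT':
            # Still over the limit: the daily quota is exhausted.
            raise RuntimeError('Daily limit reached; quotas reset at midnight PT')
        # Otherwise the first failure was the per-second limit.
    return result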

Load testing WCF services gives huge (>200 sec) responses

I have a service being load tested by a third party. A few minutes after starting, we start to see requests hanging for a very long period of time and the caller ultimately times out (after 60 seconds).
They are testing with 15 users with each user using two devices at once, so a total of 30 connections.
The service is a simple façade to a more complex operation, calling an external system. Benchmarking our communications to the external system suggests that everything is responding in the time we would expect (sub-200 ms).
The IIS logs reveal a number of very long-running requests (>200 sec) that ultimately do return a 200 and have the Win32 error code ERROR_NETNAME_DELETED (error 64). I have checked the service log and can match each response to its request (based on the SOAP message ID), and I can see that we do eventually respond with the correct information (although the client has long since given up).
Any ideas as to what could be causing this behavior? We're hosting in IIS using wsHttpBinding and we're using WS-Security with x509 certificates (message & transport encryption).
We don't have benchmark logging inside of our service but the code is a very simple mapping of the WCF request to the server request, making the request, and mapping the response to the WCF response. We do this manually and there is no parsing involved (straight assignments).
After a detailed investigation, including getting Microsoft support involved, we found we were hitting the serviceThrottling defaults, specifically maxConcurrentSessions. We determined this from perfmon (there is a counter for it). We were initially unsure why we saw this, as the service behaved correctly when called with a .NET client.
It turned out that the Java consumer of this application, built on CXF, was not respecting the WSDL (specifically the part about WS-SecureConversation) and was not closing its sessions when it closed its connection.
Our solution was to raise maxConcurrentSessions to a high number and lower the inactivityTimeout (to one minute) to force abandoned sessions to be reclaimed. In addition, we set establishSecurityContext to false so the WS-Security negotiation would not consume an additional session.
The solution is inelegant, as the service logs are littered with errors about forced session closures, but it fixed the issue we were seeing. Unfortunately we had a requirement for WS-Security, so our solution needed to keep it.
I hope this helps someone as this was an interesting and time consuming problem to pin down.

wcf operation times out without error

I have a .NET 3.5 BasicHttpBinding no security WCF service hosted on IIS 6.0.
I have service throttling bumped up as per MS recommendations.
My service operation is getting called a few hundred times concurrently, and at some point the client gets a timeout exception (at 59:00, which is what both the server and client timeouts are set to).
If I raise the timeout it just hits the new limit.
It seems like the application just "freezes" somewhere and we have not been able to figure out how this happens.
WCF tracing on the server side doesn't come up with anything.
Any ideas regarding what could be the issue?
Thanks
I assume your web service is not using the new async/await, especially with respect to the database calls. In that case, it's because you are blocking your limited threads.
In more detail: IIS/ASP.NET only creates a limited number of threads to handle requests. The first, say, 8 requests spin up threads and start working. At some point they will hit the database (I am assuming a traditional n-tier app), and those threads sleep. The next, say, 992 requests hit IIS and are held in a queue.
At some point the database calls return, the threads process the results and send data to the clients, and another 8 requests are dequeued, hit the database, and so on.
However, each batch of 8 requests takes a finite time to complete. With over 900 requests ahead of them, the last 100 or so will wait at the very least 100 * latency * number of round trips before they can even start. If your latency * number of round trips is high, your last request will take a long time before it even gets dequeued, hence the timeout.
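To put illustrative numbers on that (these are assumptions, not measurements from the question): with 8 worker threads and 500 ms of blocked database time per request, the pool completes roughly 16 requests per second, so the 1000th queued request waits about a minute before a thread even picks it up; scale the latency or the queue depth up and the wait sails past any configured timeout.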
Two remedies exist. The first is to create more threads, which will use up all your memory until IIS crashes. The second is to use .NET 4.5 and async/await.
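The question is about .NET, but the effect of awaiting I/O instead of blocking a thread can be sketched language-agnostically. In this minimal Python asyncio illustration (the 500 ms sleep stands in for a database call), 1,000 concurrent requests all finish in roughly half a second, because no worker is held while waiting:

import asyncio

async def handle_request(i, db_latency=0.5):
    # While this request awaits the (simulated) database call,
    # the event loop is free to run other requests.
    await asyncio.sleep(db_latency)
    return i

async def main():
    results = await asyncio.gather(*(handle_request(i) for i in range(1000)))
    print(len(results), 'requests completed')

asyncio.run(main())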
See here for more information

Slow Facebook API via Python on Google App Engine (GAE)

I am fetching data from my news stream to filter it. Facebook sometimes takes more than 5 seconds to respond, and I hit the urlfetch timeout on Google App Engine.
Is there any way to work around this timeout, or to improve the speed with which Facebook replies to my request? This is the part where I get my exceptions:
import json
import urllib

from google.appengine.api import urlfetch

params[u'access_token'] = self.access_token
result = json.loads(
    urlfetch.fetch(
        # POST to the Graph API news feed endpoint.
        url=u'https://graph.facebook.com/me/home?limit=1000',
        payload=urllib.urlencode(params),
        method=urlfetch.POST,
        headers={u'Content-Type': u'application/x-www-form-urlencoded'},
    ).content)
There's nothing you can do to speed it up; how fast it responds is up to Facebook. You can pass the deadline argument to urlfetch.fetch() to set the maximum deadline for the request (in seconds). If you're making a lot of calls, you probably want to look into using the asynchronous API to run them in parallel.
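A minimal sketch of both suggestions, reusing the request from the question (the 30-second deadline is an arbitrary illustrative value):

# Synchronous fetch with a longer deadline:
result = json.loads(
    urlfetch.fetch(
        url=u'https://graph.facebook.com/me/home?limit=1000',
        payload=urllib.urlencode(params),
        method=urlfetch.POST,
        headers={u'Content-Type': u'application/x-www-form-urlencoded'},
        deadline=30,
    ).content)

# Or asynchronously, so other work can run while the fetch is in flight:
rpc = urlfetch.create_rpc(deadline=30)
urlfetch.make_fetch_call(
    rpc,
    url=u'https://graph.facebook.com/me/home?limit=1000',
    payload=urllib.urlencode(params),
    method=urlfetch.POST,
    headers={u'Content-Type': u'application/x-www-form-urlencoded'})
# ... do other work here ...
result = json.loads(rpc.get_result().content)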
I had a similar problem on a different project. The mechanize library works quite well on GAE and lets you specify timeouts. Just copy the folder into your GAE project and you're good to go.
Use it sparingly though, as long waits really drive up costs.
