Google OAuth API 503 - google-api

I have a service that uses the credential refresh (refresh token) API for Gmail, and I have recently noticed a surge in HTTP 503 errors from the following endpoint: https://accounts.google.com/o/oauth2/token
This happens for stretches of time, and twice it coincided with a Gmail outage according to Google App Status. I have also checked from the admin console that no quota limits for the Gmail API were hit.
Please advise on how to proceed.
Editing the question to provide further details from comments:
There are separate limits on the authentication APIs (like the token endpoint).
-- Where do I find the limits on the authentication APIs in the Google developer console? I could only find limits for application APIs like Gmail/Google Calendar.
Questions:
How often are you calling this API/token endpoint?
-- once every ~50-60 mins for a user
Is this for the same user/token? (For the same user, you should reuse the access token until its expiry time, which is 1 hour.)
-- No, this is for different users. For the same user, the same access token is used until its expiry.
If your server is making a lot of requests for different tokens/users, are they coming from the same IP?
-- They are not coming from the same IP, but from a few servers (~5) that make these requests.
What is the max qps you may be hitting?
-- 300 qps on average (aggregated across all our servers); the max would be 450 qps.
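Since a 503 from the token endpoint during a Gmail outage is transient, the usual mitigation is to retry the refresh with exponential backoff and jitter (and to keep reusing cached access tokens until they expire, as noted above). A minimal sketch using the requests library against the standard refresh_token grant; the retry count and delays are illustrative, not prescribed values:

```python
import random
import time

import requests

TOKEN_URL = "https://accounts.google.com/o/oauth2/token"

def refresh_access_token(client_id, client_secret, refresh_token, max_retries=5):
    """Exchange a refresh token for a new access token, retrying on transient 5xx errors."""
    payload = {
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    }
    for attempt in range(max_retries):
        resp = requests.post(TOKEN_URL, data=payload, timeout=10)
        if resp.status_code == 200:
            return resp.json()["access_token"]
        if resp.status_code in (500, 502, 503, 504):
            # Exponential backoff with jitter: ~1s, 2s, 4s, ... plus up to 1s of noise.
            time.sleep((2 ** attempt) + random.random())
            continue
        resp.raise_for_status()  # 4xx errors (bad grant, bad client) are not retryable
    raise RuntimeError("Token endpoint still unavailable after retries")
```

Spreading the retries with jitter also helps when ~5 servers refresh many users' tokens at once, so the aggregate qps does not spike against the endpoint.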

Related

How to avoid security assessment for Gmail API integration

I am fetching the Gmail inbox into a web application using the Gmail API, connecting via its REST API. Everything is working and ready to go live, but the app was rejected by Google, which is asking for a security assessment that costs around $75K.
We are only reading messages through the API, and users are already permitted to perform the same activity themselves.
My question is: how can we avoid the security assessment, given that we use restricted and sensitive scopes (Gmail API & Pub/Sub)? Without these scopes we can't fetch the messages.
How can we avoid the security assessment? Is there any other way to achieve this requirement?
Looking forward to the community's help; this is a major blocker for us to go live.
Thanks in advance.
If you need to use a sensitive or restricted scope and you are not exempt from verification, there is no workaround to the security assessment.
Info from the docs:
Sensitive scopes
Apps that request sensitive scopes must verify that they follow Google's API Services User Data Policy, but will not have to undergo an independent, third-party security assessment. This sensitive scopes verification process typically takes 3-5 business days to complete.
Option: don't request a sensitive scope.
https://mail.google.com/ (includes any usage of IMAP, SMTP, and POP3 protocols)
https://www.googleapis.com/auth/gmail.readonly
https://www.googleapis.com/auth/gmail.metadata
https://www.googleapis.com/auth/gmail.modify
https://www.googleapis.com/auth/gmail.insert
https://www.googleapis.com/auth/gmail.compose
https://www.googleapis.com/auth/gmail.settings.basic
https://www.googleapis.com/auth/gmail.settings.sharing
Restricted scopes
Apps that request restricted scopes must also verify that they follow Google’s API Services User Data Policy, but they must also meet the Additional Requirements for Specific Scopes. One of these additional requirements is an independent, third-party security assessment. For this reason, this restricted scopes verification process can potentially take several weeks to complete.
Option: don't request a restricted scope.
https://mail.google.com/ (includes any usage of IMAP, SMTP, and POP3 protocols)
https://www.googleapis.com/auth/gmail.readonly
https://www.googleapis.com/auth/gmail.metadata
https://www.googleapis.com/auth/gmail.modify
https://www.googleapis.com/auth/gmail.insert
https://www.googleapis.com/auth/gmail.compose
https://www.googleapis.com/auth/gmail.settings.basic
https://www.googleapis.com/auth/gmail.settings.sharing
Exceptions to verification requirements
Check whether your app could be exempt from the verification requirements.

Microservice: persist blocked users in DB and verify in other microservices

Description of our project
We are following a microservices architecture with a database per service. We are trying to introduce a blacklist function: if a user is blacklisted from the system, they can't use any of our microservices. We have multiple entry/exit points into our microservices, such as a gateway service (used by the frontend team), websocket message receivers, and multiple Spring schedulers that process user data.
Current solution
We persist the blacklisted users in a database and expose them through an endpoint; we can call this the access service. The support team adds blacklisted users to the database by calling the access service's blacklist-create endpoint. So whenever the gateway receives a request from the frontend, it calls the access service to check whether the current user is present in the blacklist DB, and if the user is blacklisted we block further access. The same goes for messages received by schedulers or websocket notifications: for each call we check whether the user is blacklisted.
Problem statement
We have 2 websocket notification receivers and multiple schedulers that run every 5 minutes, which in turn need to access the same blacklist access service. Because of this we are making too many calls to the access service, and it is becoming a bottleneck.
How do we avoid this case?
There are several approaches to the blocklisting problem.
First, you could have one service with a blocklist, and for every incoming request to every service you would make an extra call to the blocklist service. Clearly, this is a huge availability and scalability risk.
The second option is push based: the blocklist service notifies all other services about blocklisted users, so every service can make a local decision on whether to process a request.
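As a rough sketch of what the push-based option could look like inside each consuming service: an in-memory blocklist updated by events published by the access service. The event shape and method names below are assumptions for illustration, not part of the original design:

```python
import threading

class LocalBlocklist:
    """In-memory blocklist kept in sync by push updates from the access service."""

    def __init__(self):
        self._blocked = set()
        self._lock = threading.Lock()

    def apply_update(self, event):
        """Handle a pushed event such as {"user_id": "42", "action": "block"}."""
        with self._lock:
            if event["action"] == "block":
                self._blocked.add(event["user_id"])
            elif event["action"] == "unblock":
                self._blocked.discard(event["user_id"])

    def is_blocked(self, user_id):
        # Local, in-memory check: no network call per request.
        with self._lock:
            return user_id in self._blocked
```

The gateway, websocket receivers, and schedulers would each hold one of these and call is_blocked() locally, so the access service is only contacted when the blocklist actually changes.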
The third option is to bake expiration into user sessions. Every session has three elements: an expiration, an access token, and a refresh token. Until expiry, every service will accept requests carrying a valid access token. If an access token has expired, the client has to get a new one by contacting a token service. That service reads the refresh token and checks whether the user is still active; if so, a new access token is issued.
The third option is the one most widely used. Most (all?) cloud providers have short-lived credentials for this specific goal: to make sure access can be revoked after some time.
Short-lived credentials vs. a dedicated service is a well-known trade-off; you can read more about a very similar problem here: https://en.wikipedia.org/wiki/Certificate_revocation_list
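For the third option, here is a minimal sketch of issuing and validating short-lived access tokens, using JWTs via the PyJWT library purely for illustration; the claim names, TTL, and shared signing key are assumptions:

```python
import time

import jwt  # PyJWT

SIGNING_KEY = "shared-signing-key"  # illustrative; an asymmetric key pair is common in practice

def issue_access_token(user_id, ttl_seconds=300):
    """Token service: issue a short-lived access token for an active (non-blocklisted) user."""
    now = int(time.time())
    claims = {"sub": user_id, "iat": now, "exp": now + ttl_seconds}
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def accept_request(token):
    """Any microservice: accept the request only if the token is valid and not yet expired."""
    try:
        jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
        return True
    except jwt.ExpiredSignatureError:
        # Client must go back to the token service, which re-checks the blocklist
        # before issuing a new access token.
        return False
    except jwt.InvalidTokenError:
        return False
```

With a 5-minute TTL, blocking a user takes effect within at most 5 minutes without any per-request call to the access service.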

Google Contacts API: Service Unavailable

Hello Google folks monitoring the API questions.
Today our requests to the Contacts API are being blocked with 503 Service Unavailable and an HTML message suggesting we complete a CAPTCHA:
Our systems have detected unusual traffic from your computer network. This page checks to see if it's really you sending the requests, and not a robot.
The block will expire shortly after those requests stop. In the meantime, solving the above CAPTCHA will let you continue to use our services.
We checked our API console and we're nowhere close to reaching our daily API request quotas. Why is this happening?
And how is a service supposed to complete a CAPTCHA? It's communicating via the API from our servers, but those messages are intended for a human using a browser.
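Since the block "will expire shortly after those requests stop", one pragmatic server-side workaround is to detect the HTML block page and pause traffic for a cooldown period instead of continuing to retry. A rough sketch, assuming the requests library; the cooldown length and the "unusual traffic" substring check are assumptions based on the message quoted above:

```python
import time

import requests

def fetch_with_cooldown(url, headers, cooldown_seconds=300, max_attempts=3):
    """Fetch a Contacts API URL, backing off when Google returns the HTML CAPTCHA block page."""
    for _ in range(max_attempts):
        resp = requests.get(url, headers=headers, timeout=10)
        blocked = resp.status_code == 503 and "unusual traffic" in resp.text
        if blocked:
            # The block clears after requests stop, so pause instead of hammering the API.
            time.sleep(cooldown_seconds)
            continue
        return resp
    raise RuntimeError("Still blocked after cooldown retries")
```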

API quota for Gmail REST API and Gmail IMAP using same console app

Gmail IMAP has also moved to OAuth2 authentication, so we are using the same console app for the Gmail REST API and Gmail IMAP.
My question: is the allotted quota shared between IMAP and the REST API, or does each have its own quota? If each has its own, can anyone share the quota for Gmail IMAP?
IMAP and the Gmail REST API each have their own quota. The Gmail API is subject to a daily usage limit that applies to all requests made from your application: 1,000,000,000 quota units per day, with a rate limit of 250 quota units per user per second.
For an IMAP mail client, the maximum number of recipients allowed per email is 500.
To view usage limits for your project or to request a quota increase, open the API Library in the Developers Console.
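Given the 250-quota-units-per-user-per-second rate limit mentioned above, a simple client-side throttle can help a shared console app avoid per-user rate-limit errors. A rough sketch; the per-call unit cost passed to acquire() is illustrative and should be taken from the Gmail API usage-limits documentation:

```python
import time
from collections import defaultdict, deque

PER_USER_LIMIT = 250  # quota units per user per second, per the limits above

class QuotaThrottle:
    """Client-side throttle keeping per-user quota-unit spend under the per-second limit."""

    def __init__(self, limit=PER_USER_LIMIT):
        self.limit = limit
        self.spend = defaultdict(deque)  # user -> deque of (timestamp, units)

    def acquire(self, user, units):
        """Block until spending `units` for `user` fits inside the 1-second window."""
        while True:
            now = time.time()
            window = self.spend[user]
            # Drop spend records older than one second.
            while window and now - window[0][0] >= 1.0:
                window.popleft()
            if sum(u for _, u in window) + units <= self.limit:
                window.append((now, units))
                return
            time.sleep(0.05)  # wait for the window to roll over

# Example: reserve an assumed 5 units before a messages.get call for this user.
# throttle = QuotaThrottle()
# throttle.acquire("user@example.com", 5)
```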

Authenticating a client-side web service request in a cached environment

We're building a set of external web services to be consumed client-side (using jQuery/AJAX) by visitors to our site. The web services need to be publicly available, but we'd like to limit access to site visitors.
Importantly, the site in question sits behind a CDN and we cache page content for 24 hours; AJAX requests would preferably be cached as well, but I'm conscious that doing so will limit our authentication options. Our visitors access the site and services anonymously.
What are some standard "patterns" for authenticating client requests? I'm not dealing with confidential data per se, but I do want to deter other users/sites from hijacking these services for liability (think data distribution) and performance reasons.
I'm thinking of a shared secret that's refreshed daily and used site-wide by all clients; any web service request would include the secret. Pretty basic, but are there other, better ways for the service to detect the caller's origin in a manner that can't be spoofed?
If the threat to your web service is someone automating the client calls, you can implement rate limiting on the server side. As you rightly mentioned, the client can be required to provide a key with each request. Alternatively, if only humans are going to interact with the web service, you can also implement a Human Interaction Proof such as a CAPTCHA. One thing to make sure of is that the "key" used by the client is issued in a controlled manner; I once came across a system that gave away unlimited keys, which made the automation control ineffective because an attacker could request as many keys as they liked and make unlimited calls. If you are limiting by IP address, make sure you throttle requests on the network part of the address (A.B.C.X), since the host part (X) can change (when users are behind proxies). If your clients are anonymous, the best/closest "identifier" is indeed the address.
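A small sketch combining the two ideas above: a shared key that rotates daily (so it can live in pages cached for 24 hours by the CDN) plus server-side rate limiting keyed on the network part of the address. The key derivation, limits, and /24 grouping are illustrative values, not recommendations:

```python
import hashlib
import hmac
import time
from collections import defaultdict

SERVER_SECRET = b"rotate-me"  # illustrative; keep this server-side only

def daily_key(day=None):
    """Shared key embedded in cached pages; changes once per day, so 24h caching still works."""
    day = day or time.strftime("%Y-%m-%d")
    return hmac.new(SERVER_SECRET, day.encode(), hashlib.sha256).hexdigest()

def key_is_valid(candidate):
    # Accept today's and yesterday's key to tolerate cached pages around the day boundary.
    today = time.strftime("%Y-%m-%d")
    yesterday = time.strftime("%Y-%m-%d", time.localtime(time.time() - 86400))
    return candidate in (daily_key(today), daily_key(yesterday))

# Rate limiting on the network part (A.B.C) rather than the full address A.B.C.X.
_counts = defaultdict(int)

def allow_request(client_ip, limit_per_minute=600):
    network = ".".join(client_ip.split(".")[:3])
    bucket = (network, int(time.time() // 60))  # per-network, per-minute counter
    _counts[bucket] += 1
    return _counts[bucket] <= limit_per_minute
```

Neither measure is spoof-proof on its own (the key is visible to anyone loading the page), but together they raise the effort needed to hijack the services while staying compatible with the 24-hour caching.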
