Question
If InstalledAppFlow requires a client secret json file to perform OAuth2 authorization, how are actual real-life applications using Google APIs distributed?
Should the client secret json file be considered part of the application and included as a constant?
Context
Currently I am learning how to use OAuth2 to authorize access to Google APIs with the Python module google_auth_oauthlib.
I found that the OAuth2 authorization process itself requires a client secret file for the InstalledAppFlow authorization method, but I have never seen an application that asks for authorization also ask for a client secret.
After countless searches, all I could find about it was this, from the Google Identity docs:
The process results in a client ID and, in some cases, a client secret, which you embed in the source code of your application. (In this context, the client secret is obviously not treated as a secret.)
And from the Google Cloud docs:
Save the credentials file to client_secrets.json. This file must be distributed with your app.
Is this saying that I should embed (include) the client secret as a constant in the code itself?
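For example, my understanding of "embedding" is something like the sketch below, assuming google_auth_oauthlib; all the config values and the scope are placeholders:

```python
# Sketch: the downloaded client_secret.json contents inlined as a constant.
# All values below are placeholders.
from google_auth_oauthlib.flow import InstalledAppFlow

CLIENT_CONFIG = {
    "installed": {
        "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
        "client_secret": "YOUR_CLIENT_SECRET",  # per the docs, not actually treated as a secret
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "redirect_uris": ["http://localhost"],
    }
}

flow = InstalledAppFlow.from_client_config(
    CLIENT_CONFIG, scopes=["https://www.googleapis.com/auth/drive.readonly"]
)
credentials = flow.run_local_server(port=0)  # opens a browser for user consent
```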
The issue with the client ID and client secret is that they need to be kept secure. Google's TOS requires that developers keep their client ID and secret secure:
Asking developers to make reasonable efforts to keep their private keys private and not embed them in open source projects.
This can cause issues with, for example, open source applications. Can I really not ship open source with Client ID?
I have had a few conversations with the OAuth2 team at Google over the years. Installed applications (those that are compiled, anyway) can compile the client ID and client secret in internally; however, that would not stop anyone from decompiling the application and retrieving them.
I was told that they are aware of that issue and that there is really no way around it.
I have seen another option where the client ID and client secret are stored on a server and the installed application requests them from a web API. Since you are sending them across HTTPS, this should be considered reasonably secure, even more so if you double-encrypt them.
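As a rough sketch of that approach (the endpoint URL is hypothetical, and you would layer your own authentication and encryption on top):

```python
# Hypothetical sketch: fetch the client ID/secret from your own server over
# HTTPS at startup instead of shipping them in a cleartext settings file.
import requests

resp = requests.get("https://example.com/api/oauth-client-config", timeout=10)
resp.raise_for_status()
config = resp.json()  # e.g. {"client_id": "...", "client_secret": "..."}

client_id = config["client_id"]
client_secret = config["client_secret"]
```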
The fact of the matter is that there really is no way around it. The main thing is that you should not release an application with, for example, a settings file where the client ID and client secret appear in clear text; that would IMO be too great a risk. You would need to compile them into your application, or at the very least encrypt them somehow.
You won't stop someone who really wants to get it from getting it, but you will stop most people.
Why installed apps are the issue.
There are several types of applications: mobile, web, and installed.
With mobile and web there are ways of configuring the client so that you can ensure they only work from your server. With web you have a redirect URI; with mobile there is the actual mobile app ID.
With installed applications this is not possible, because they mostly run on localhost. There is no way for you to know where the app is running, so they are left open. So if anyone got hold of your client ID and client secret, they could use it for their own app. Users would have no way of knowing it wasn't your official app, and neither would Google.
Since you have a Python script, why not consider instructing your users to create their own client ID and client secret? Then they will be independent.
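In that setup, each user downloads their own client_secret.json from their own Google Cloud project and your script simply loads whatever file they point it at. A sketch, again assuming google_auth_oauthlib:

```python
from google_auth_oauthlib.flow import InstalledAppFlow

# Each user supplies the client_secret.json created in their own Google Cloud
# project, so nothing sensitive ships with the script itself.
flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",  # path provided by the user
    scopes=["https://www.googleapis.com/auth/drive.readonly"],  # placeholder scope
)
credentials = flow.run_local_server(port=0)
```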
Related
I am trying to make a desktop application with Twitch API and Google API.
Since this application requires Twitch user permission, a user needs to authorize my application through Twitch's OAuth, and I think there's no way to omit this process.
Now, I want to add some functionality from Google APIs, for instance TTS.
My application will be installed and run on the user's local machine, so it cannot store an API key or credential information safely.
I think I have three options:
Add Google OAuth: This is the safest way, I think, but I don't think I can convince users to authorize another Google account even though they have already authorized their Twitch account.
Make a kind of proxy server which verifies requests for the Google API using the Twitch authentication information and relays requests to / responses from the Google API. This seems feasible, but it requires paying extra to run a server that forwards data from the Google API. I already have to pay for the TTS service; another payment for a proxy server that sends binary data frequently would be a financial burden for me.
Make a server to acquire an API key for the Google API. This also requires an additional server, but it does not involve much traffic because the application will access the Google API directly once the API key is acquired. However, I am concerned that the API key could easily be stolen using a monitoring tool such as Wireshark.
Which method should I use here, and how can I improve it?
Or, is there better way for this case?
I'm trying to figure out how to migrate a system that is currently using ACS to Azure AD. I've read the migration docs provided by Azure and have looked through the Azure AD docs and the sample code but I'm still a bit lost as to what the best approach for my situation would be.
I've got a web API that has about 100 separate external systems that connect to it on a regular basis. We add a new connection approximately once a week. These external systems are not users; they are applications that are integrated with my application via my web API.
Currently each external system has an ACS service identity / password which they use to obtain a token which we then use to authenticate. Obviously this system is going away as of November 7.
All of the Azure AD documentation I've read so far indicates that, when I migrate, I should set up each of my existing clients as an "application registration" in Azure AD. The upshot of this is that each client, instead of connecting to me using a username and password, will have to connect using an application ID (which is always a GUID), an encrypted password, and a "resource" which seems to be the same as an audience URL from what I can see. This in itself is cumbersome but not that bad.
Then, implementing the authorization piece in my web API is deceptively simple. It looks like, fundamentally, all I need to do is include the properly configured [Authorize] attribute in my ApiController. But the trick is in getting it to be properly configured.
From what I can see in all the examples out there, I need to hard-code the unique Audience URL for every single client that might possibly connect to my API into my startup code somewhere, and that really does not seem reasonable to me so I can only assume that I must be missing something. Do I really need to recompile my code and do a new deployment every time a new external system wants to connect to my API?
Can anyone out there provide a bit of guidance?
Thanks.
You have misunderstood how the audience URI works.
It is not your client's URI, it is your API's URI.
When the clients request a token using Client Credentials flow (client id + secret), they all must use your API's App ID URI as the resource.
That will then be the audience in the token.
Your API only needs to check the token contains its App ID URI as the audience.
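A hedged sketch of what the clients' token request looks like (Azure AD v1 endpoint, since the "resource" parameter is in use; the tenant, IDs, and URIs are placeholders):

```python
# Sketch of the Client Credentials flow against Azure AD's v1 endpoint, which
# takes a "resource" parameter. All identifiers here are placeholders.
import requests

TENANT = "contoso.onmicrosoft.com"
token_url = f"https://login.microsoftonline.com/{TENANT}/oauth2/token"

resp = requests.post(token_url, data={
    "grant_type": "client_credentials",
    "client_id": "11111111-2222-3333-4444-555555555555",
    "client_secret": "the-client-secret",
    "resource": "https://contoso.com/my-api",  # your API's App ID URI
})
resp.raise_for_status()
access_token = resp.json()["access_token"]  # its "aud" claim will be the App ID URI
```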
Though I also want to mention: if you want to do this one step better, you should define at least one application permission in your API's manifest. You can check my article on adding permissions.
Then your API should also check that the access token contains something like:
"roles": [
"your-permission-value"
]
It makes the security a bit better, since otherwise any client app with an ID + secret can get an access token for any API in that Azure AD tenant.
But with application permissions, you can require that a permission must be explicitly assigned for a client to be able to call your API.
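To illustrate the roles check (using the PyJWT package here purely for illustration; in an ASP.NET web API you would typically express the same thing as an authorization policy):

```python
# Illustration: after validating the token's signature and audience, require
# the "roles" claim to contain the app permission defined in the manifest.
import jwt  # PyJWT

claims = jwt.decode(
    access_token,
    signing_key,  # assumption: key material obtained from Azure AD's JWKS endpoint
    algorithms=["RS256"],
    audience="https://contoso.com/my-api",  # your API's App ID URI
)

if "your-permission-value" not in claims.get("roles", []):
    raise PermissionError("Caller lacks the required application permission")
```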
It would make the migration a tad more cumbersome of course, since you'd have to require this app permission + grant it to all of the clients.
All of that can be automated with PowerShell though.
I am setting up an API for a mobile app (and, down the line, a website). I want to use OAuth 2.0 for authentication of the mobile client. To optimize my server setup, I wanted to set up an OAuth server (Lumen) separate from the API server (Laravel). My DB also lives on its own separate server.
My question is: if using separate servers and a package like lucadegasperi/oauth2-server-laravel, do I need to have the package running on both servers?
I am assuming this would be the case, because the OAuth server will handle all of the authentication: issuing the access token and refreshing it. But then the API server will need to check the access token on protected endpoints.
Am I correct with the above assumptions? I have read so many different people recommending that the OAuth server be separate from the API server, but I can't find any tutorials about how the multi-server dynamic works.
BONUS: I run my DB migrations from my API server, so I assume I would need the OAuth package's migrations to be run from the API server as well. Correct?
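To illustrate the multi-server dynamic generically (a Python sketch, since the exact PHP wiring depends on the package; the introspection endpoint and credentials are hypothetical): the OAuth server issues tokens, and the API server validates each incoming token by asking the OAuth server about it.

```python
# Generic sketch of the split: the API server validates each bearer token by
# calling the OAuth server (RFC 7662 token introspection). The URL is hypothetical.
import requests

def is_token_valid(access_token: str) -> bool:
    resp = requests.post(
        "https://oauth.example.com/introspect",
        data={"token": access_token},
        auth=("api-server-client-id", "api-server-secret"),  # the API server's own credentials
    )
    resp.raise_for_status()
    return resp.json().get("active", False)
```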
I have developed a web app that does its own user authentication and session management. I keep some data in Elasticsearch and now want to access it with Kibana.
Elasticsearch offers a RESTful web API without any authentication, and Kibana is a purely browser-side JavaScript application that accesses Elasticsearch through direct AJAX calls. That is, there is no "Kibana server", just static HTML and JavaScript.
My question is: How do I best implement common user sign on between the existing web app and Elasticsearch?
I am interested in specific Elasticsearch/Kibana solutions, but also in generic designs for single sign on to web apps and the external web APIs they use.
It seems the recommended way to secure Elasticsearch/Kibana is to have an Apache or Nginx reverse proxy in front that does SSL termination and user authentication (Basic auth). However, this doesn't play too well with the HTML form user authentication in my existing web app. Ideally I would like the user to sign on using the web app, and then be allowed direct access to the Elasticsearch API as well.
Solutions I've thought of so far:
Proxy everything in the web app: Have all calls go to the web app (server) which does the authentication, and have the web app issue the same request to the Elasticsearch web API and forward the response back to the browser.
Have the web app (server) store session info that Apache or Nginx somehow can look up and use to authorize access to the reverse proxy.
Ditch web app sign on and use basic auth for everything.
Note that this is a single installation, so I don't really need any federated SSO solutions.
My feeling is that proxying everything in the web app (#1) is a common generic solution, but it seems a bit heavyweight to have everything pass through the possibly slow web app, considering that Kibana uses the Elasticsearch API directly.
I haven't found an out-of-the-box solution designed for the proxy authentication setup (#2). My idea is to have the web app store session info in memcache or the like, and to use some facility in the web server (Apache or Nginx) to look up the session based on a cookie and allow proxy access if authenticated (see the sketch below).
The issue seems similar to serving static files directly using the web server (Apache or Nginx) while authenticating using a slow web app. Recommendations I've found for that are however very specific to that issue, like X-Sendfile.
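One concrete shape for idea #2 is Nginx's auth_request module: Nginx forwards each Elasticsearch request to a tiny auth endpoint that checks the web app's session cookie, and only proxies the request through on an HTTP 200. A sketch of such an endpoint (Flask, pymemcache, and the cookie name are assumptions):

```python
# Sketch of an auth endpoint for Nginx's auth_request module: Nginx calls this
# for every proxied Elasticsearch request and lets it through only on HTTP 200.
# Flask and pymemcache are assumptions; adapt to however your web app stores sessions.
from flask import Flask, request
from pymemcache.client.base import Client

app = Flask(__name__)
sessions = Client(("127.0.0.1", 11211))  # memcache shared with the web app

@app.route("/auth")
def auth():
    session_id = request.cookies.get("session_id", "")
    if session_id and sessions.get(session_id):  # session written by the web app on login
        return "", 200
    return "", 401
```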
You could use a session token. This is quite a generic solution. Let me explain. When the user logs in, you generate a random string, store it, and pass it back to the user. Each time the user tries to interact with your API, you ask for the session token you gave them. If it matches, you provide the service they are asking for; otherwise, you just ignore the call. You should make session tokens expire after a certain interval of time and issue a new one each time the user logs back in.
Hope this helps you.
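A minimal sketch of that scheme (in-memory storage here purely for illustration; a real deployment would persist tokens in something like memcache or a DB):

```python
# Minimal session-token scheme: issue a random token at login, expire it after
# a fixed interval, and check it on every API call.
import secrets
import time

TOKEN_TTL = 3600  # seconds until a token expires
_tokens = {}      # token -> (username, expiry timestamp); in-memory for illustration

def log_in(username: str) -> str:
    token = secrets.token_urlsafe(32)  # the random string handed back to the user
    _tokens[token] = (username, time.time() + TOKEN_TTL)
    return token

def check_token(token: str) -> bool:
    entry = _tokens.get(token)
    if entry is None or entry[1] < time.time():  # unknown or expired
        _tokens.pop(token, None)
        return False
    return True
```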
We have a Windows app hosting a WebBrowser control that hits our REST APIs. We would like to restrict access to the APIs so that requests can only come from within the Windows app itself (for example, the APIs cannot be accessed in a browser, etc.).
How can we accomplish that? What is the most secure way, without having to expose any kind of credential (for example, if we use HTTP Basic auth, the username and password can be seen by reverse engineering the app itself)?
Thanks a bunch!
EDIT: We plan to distribute the application freely so we have no control over where the connection will be made from.
Restrict the REST interface to only accept connections from 127.0.0.1 (home), and then connect from your REST-consuming application only with http://localhost or http://127.0.0.1 in the URLs (if you use the external IP or DNS name of your machine, it'll be treated as a remote connection and denied access).
You can do this with web server settings, or within the code of your REST APIs.
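For the in-code variant, a Flask-style sketch (Flask is an assumption here; the same effect can be had with web server config such as Nginx's allow/deny directives):

```python
# Sketch: reject any request that does not originate from the local machine.
from flask import Flask, request, abort

app = Flask(__name__)

@app.before_request
def localhost_only():
    if request.remote_addr not in ("127.0.0.1", "::1"):
        abort(403)  # remote callers are denied

@app.route("/api/data")
def data():
    return {"ok": True}
```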
I had a similar situation during a project where we distributed an iPhone app that also connected to a REST api that my team developed.
For security we used somewhat of a three-legged scenario. The app was required to authenticate using the user's credentials against a standalone service responsible only for authenticating and generating access tokens. Once the app received a valid access token, subsequent requests to the API required sending this token in the Authorization header.
You could do something similar. If you come up with a credential scheme to authenticate your app as valid API consumers you could use basic auth over HTTPS to obtain tokens, and then only by using those tokens could a consumer gain access to the rest of the API.
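From the consumer's side, that flow looks roughly like this (all URLs and field names are placeholders):

```python
# Sketch of the two-step flow: exchange credentials for a token over HTTPS,
# then send the token in the Authorization header on every API call.
import requests

# Step 1: obtain a token from the standalone auth service via basic auth.
auth = requests.post(
    "https://auth.example.com/token",
    auth=("app-consumer-id", "app-consumer-secret"),
)
auth.raise_for_status()
token = auth.json()["access_token"]

# Step 2: call the API itself with the bearer token.
api = requests.get(
    "https://api.example.com/resource",
    headers={"Authorization": f"Bearer {token}"},
)
```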