We are developing a resource server based on Spring Boot + Spring Cloud, using OAuth2 + JWT tokens for security and Cloud Foundry UAA as the authorization server.
For the verifier key, we used the security.oauth2.resource.jwt.key-uri property so the resource server can dynamically pick up the public key from UAA at startup. This was all working fine. When we tried to enable SSL on this resource server, we started to get a "Cannot convert access token to JSON" error. When I debugged the code, it looked like it wasn't picking up the correct key to verify the signature. After spending some time on it, I figured out that it works if key-value is used instead of key-uri, but in that case the key has to be configured statically. It seems like a bug, but I am not 100% sure.
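For reference, here is roughly what the two configurations look like, as a sketch in application.yml form (the UAA URL is a placeholder):

    # Dynamic: fetch the verifier key from UAA at startup (breaks for us under SSL)
    security:
      oauth2:
        resource:
          jwt:
            key-uri: https://uaa.example.com/token_key

    # Static workaround that worked: paste the public key in directly
    # security:
    #   oauth2:
    #     resource:
    #       jwt:
    #         key-value: |
    #           -----BEGIN PUBLIC KEY-----
    #           ...
    #           -----END PUBLIC KEY-----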
Is there a way to get SSL working while still using key-uri for JWT? Or would you recommend a different and better approach?
I have a function hosted in my GCP project with authentication turned on. This function will be triggered by a JFrog container registry webhook, based on events.
The issue I face here is authenticating/authorizing the HTTP request. I tried using an "Authorization: Bearer <token>" header, which works well, but that token expires after 60 minutes.
Q: Is there a permanent way (with no expiration) to authorize/authenticate Cloud Function HTTP requests? JFrog webhooks cannot programmatically create tokens, since a webhook is a simple HTTP POST trigger that can only accept additional headers.
I am finding it hard to get a solution from the GCP documentation. I do have a service account created with the roles/cloudfunctions.invoker role.
Reference on JFrog Artifactory webhooks: https://www.jfrog.com/confluence/display/JFROG/Webhooks
That's the reason I wrote that article. It's based on ESPv2 and Cloud Run, but API Gateway is the managed version of that technical stack; the principle and the OpenAPI spec are the same.
The solution downgrades the security level from a short-lived token (1 hour) to a long-lived token (no expiration), but you can use API Gateway to perform the API key check and forward the query.
A much simpler pattern is to remove the authentication check on the Cloud Function (making it public) and to perform the API key check (in fact, a random-string comparison) in the function itself, as sketched below.
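A minimal sketch of that pattern, assuming a Python HTTP function; the X-Api-Key header name and the WEBHOOK_API_KEY environment variable are made up for illustration:

    import hmac
    import os

    # A long random string you generate once and configure both in the
    # function's environment and in the JFrog webhook's custom headers.
    EXPECTED_KEY = os.environ["WEBHOOK_API_KEY"]

    def handle_webhook(request):
        provided = request.headers.get("X-Api-Key", "")
        # Constant-time comparison avoids leaking the key via timing.
        if not hmac.compare_digest(provided, EXPECTED_KEY):
            return ("Unauthorized", 401)
        # ... process the JFrog event payload here ...
        return ("OK", 200)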
In both cases the API is publicly accessible (API Gateway or Cloud Functions), and in case of a DDoS attack nothing will protect your service (or your wallet). Set a sensible Cloud Functions max instances value to prevent any bad surprises.
Question
If InstalledAppFlow requires a client secret JSON file to perform OAuth2 authorization, how are actual real-life applications using Google APIs distributed?
Should the client secret JSON file be considered part of the application and included as a constant?
Context
Currently I am learning how to use OAuth2 to authorize access to Google APIs with the Python module google_auth_oauthlib.
I found that the OAuth2 authorization process itself requires a client secrets file for the InstalledAppFlow authorization method, but I have never seen an application that asks for authorization also ask the user for a client secret.
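For context, this is roughly how I am using it (a minimal sketch; the scope is just an example):

    from google_auth_oauthlib.flow import InstalledAppFlow

    SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]  # example scope

    # This is the step that needs the client secret file:
    flow = InstalledAppFlow.from_client_secrets_file("client_secrets.json", SCOPES)
    creds = flow.run_local_server(port=0)  # opens a browser for user consent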
After countless searches, all I could find about it was this, from the Google Identity docs:
The process results in a client ID and, in some cases, a client secret, which you embed in the source code of your application. (In this context, the client secret is obviously not treated as a secret.)
And this, from the Google Cloud docs:
Save the credentials file to client_secrets.json. This file must be distributed with your app.
Is this explaining that I should embed (include) the client secret as a constant in the code itself, something like the sketch below?
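A hypothetical sketch of embedding the client config as a constant; from_client_config is the google_auth_oauthlib method for this, and all values are placeholders:

    from google_auth_oauthlib.flow import InstalledAppFlow

    # Placeholder values; per the quoted docs, the "secret" is not really
    # treated as a secret for installed apps.
    CLIENT_CONFIG = {
        "installed": {
            "client_id": "1234567890-example.apps.googleusercontent.com",
            "client_secret": "not-actually-a-secret",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "redirect_uris": ["http://localhost"],
        }
    }

    flow = InstalledAppFlow.from_client_config(
        CLIENT_CONFIG, scopes=["https://www.googleapis.com/auth/drive.readonly"]
    )
    creds = flow.run_local_server(port=0)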
The issue with the client ID and client secret is that they need to be kept secure. Google's Terms of Service require that developers keep their client ID and secret secure:
Asking developers to make reasonable efforts to keep their private keys private and not embed them in open source projects.
This can cause issues with, for example, open source applications. See Can I really not ship open source with Client ID?
I have had a few conversations with the OAuth2 team at Google over the years. Installed applications (those that are compiled, anyway) can have the client ID and client secret compiled in internally; however, that would not stop anyone from decompiling the application and retrieving them.
I was told that they are aware of that issue and that there is really no way around it.
I have seen another option where the client ID and client secret are stored on a server and the installed application requests them from a web API. Since you are sending them across HTTPS, this should be considered reasonably secure, even more so if you additionally encrypt them.
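A hypothetical sketch of that fetch-from-your-own-server approach; the URL and response shape are made up:

    import requests

    # Fetch the OAuth client config from your own backend over HTTPS.
    resp = requests.get(
        "https://your-backend.example.com/oauth-client-config", timeout=10
    )
    resp.raise_for_status()
    cfg = resp.json()  # e.g. {"client_id": "...", "client_secret": "..."}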
The fact of the matter is there is really no way around it. The main thing is that you should not release an application with, for example, a settings file where the client ID and client secret appear in clear text; that would, in my opinion, be too great a risk. You would need to compile them into your application, or at the very least encrypt them somehow.
You won't stop someone who really wants to get it from getting it, but you will stop most people.
Why installed apps are the issue.
There are several types of applications: mobile, web, and installed.
With mobile and web apps there are ways of configuring the client so that you can ensure requests only come from your server or app: with web you have a redirect URI, and with mobile there is the actual mobile app ID.
With installed applications this is not possible, because they mostly run on localhost. There is no way for you to know where the app is running, so they are left open. If anyone got hold of your client ID and client secret, they could use them for their own app; users would have no way of knowing it wasn't your official app, and neither would Google.
Since you have a Python script, why not consider instructing your users to create their own client ID and client secret? Then they will be independent.
I'm trying to figure out how to migrate a system that is currently using ACS to Azure AD. I've read the migration docs provided by Azure and have looked through the Azure AD docs and the sample code but I'm still a bit lost as to what the best approach for my situation would be.
I've got a web API that has about 100 separate external systems that connect to it on a regular basis, and we add a new connection approximately once a week. These external systems are not users; they are applications that are integrated with my application via my web API.
Currently, each external system has an ACS service identity/password which it uses to obtain a token, which we then use to authenticate it. Obviously this setup is going away as of November 7.
All of the Azure AD documentation I've read so far indicates that, when I migrate, I should set up each of my existing clients as an "application registration" in Azure AD. The upshot of this is that each client, instead of connecting to me using a username and password, will have to connect using an application ID (which is always a GUID), an encrypted password, and a "resource" which seems to be the same as an audience URL from what I can see. This in itself is cumbersome but not that bad.
Then, implementing the authorization piece in my web API is deceptively simple. It looks like, fundamentally, all I need to do is include the properly configured [Authorize] attribute in my ApiController. But the trick is in getting it to be properly configured.
From what I can see in all the examples out there, I need to hard-code the unique Audience URL for every single client that might possibly connect to my API into my startup code somewhere, and that really does not seem reasonable to me so I can only assume that I must be missing something. Do I really need to recompile my code and do a new deployment every time a new external system wants to connect to my API?
Can anyone out there provide a bit of guidance?
Thanks.
You have misunderstood how the audience URI works.
It is not your client's URI, it is your API's URI.
When the clients request a token using Client Credentials flow (client id + secret), they all must use your API's App ID URI as the resource.
That will then be the audience in the token.
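For illustration, a sketch of the token request each client would make against the Azure AD v1 token endpoint; the tenant, IDs, and App ID URI are placeholders:

    import requests

    TENANT = "your-tenant-id-or-domain"
    TOKEN_URL = f"https://login.microsoftonline.com/{TENANT}/oauth2/token"

    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": "11111111-2222-3333-4444-555555555555",
        "client_secret": "the-client-secret",
        # Your API's App ID URI; this becomes the "aud" claim in the token.
        "resource": "https://yourcompany.com/your-api",
    })
    resp.raise_for_status()
    access_token = resp.json()["access_token"]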
Your API only needs to check the token contains its App ID URI as the audience.
Though I also want to mention that if you want to take this a step further, you should define at least one application permission in your API's manifest. You can check my article on adding permissions.
Then your API should also check that the access token contains something like:
"roles": [
"your-permission-value"
]
This improves security a bit: without it, any client app with an ID + secret can get an access token for any API in that Azure AD tenant.
But with application permissions, you can require that a permission must be explicitly assigned for a client to be able to call your API.
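To make the two checks concrete, here is a rough sketch of what the validation amounts to. In ASP.NET the [Authorize]/JWT bearer middleware does this for you, so this Python/PyJWT version is purely illustrative, and the tenant, App ID URI, and permission value are placeholders:

    import jwt  # PyJWT

    TENANT = "your-tenant-id"
    JWKS_URL = f"https://login.microsoftonline.com/{TENANT}/discovery/keys"
    APP_ID_URI = "https://yourcompany.com/your-api"

    def validate(token: str) -> dict:
        # Fetch the signing key matching the token's "kid" header.
        signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
        # Audience check: the token must be issued for *your API's* App ID URI.
        claims = jwt.decode(token, signing_key.key, algorithms=["RS256"],
                            audience=APP_ID_URI)
        # Application-permission check via the "roles" claim.
        if "your-permission-value" not in claims.get("roles", []):
            raise PermissionError("required app permission not assigned")
        return claims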
It would make the migration a tad more cumbersome of course, since you'd have to require this app permission + grant it to all of the clients.
All of that can be automated with PowerShell though.
I am running a Kubernetes cluster hosted on GKE and would like to write an application (written in Go) that speaks to the Kubernetes API. My understanding is that I can either provide a client certificate, bearer token, or HTTP Basic Authentication in order to authenticate with the apiserver. I have already found the right spot to inject any of these into the Golang client library.
Unfortunately, the examples I ran across tend to reference the existing credentials stored in my personal kubeconfig file. This seems inadvisable from a security perspective and makes me believe that I should create a new client certificate / token / username-password pair in order to support easy revocation/removal of compromised credentials. However, I could not find anything in the documentation describing how to go about this when running on managed Kubernetes in GKE. (There's this guide on creating new certificates, which explains that the apiserver eventually needs to be restarted with updated parameters, something that to my understanding cannot be done in GKE.)
Are my security concerns for reusing my personal Kubernetes credentials in one (or potentially multiple) applications unjustified? If not, what's the right approach to generate a new set of credentials?
Thanks.
If your application is running inside the cluster, you can use Kubernetes Service Accounts to authenticate to the API server.
If this is outside of the cluster, things aren't as easy, and I suppose your concerns are justified. Right now, GKE does not allow additional custom identities beyond the one generated for your personal kubeconfig file.
Instead of using your credentials, you could grab a service account's token (inside a pod, read it from /var/run/secrets/kubernetes.io/serviceaccount/token) and use that instead. It's a gross hack and not a great general solution, but it might be slightly preferable to using your own personal credentials.
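A minimal sketch of that hack; you mentioned Go, but for brevity this is in Python, and the in-cluster API server address and CA path are the standard in-pod defaults:

    import requests

    # Inside a pod: read the mounted service account token ...
    with open("/var/run/secrets/kubernetes.io/serviceaccount/token") as f:
        token = f.read().strip()

    # ... and use it as a bearer token against the API server.
    resp = requests.get(
        "https://kubernetes.default.svc/api/v1/namespaces/default/pods",
        headers={"Authorization": f"Bearer {token}"},
        # The pod also mounts the cluster CA next to the token.
        verify="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
    )
    print(resp.status_code)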
I am using Backendless.com as a BAAS for my application. I have some custom logic running on their servers which need to make an HTTP request to the Google Places API.
I'm trying to generate an API key for the Backendless.com server to run this request, but I'm not sure which API key I need to generate. The Google developer console gives me four options: Server key, Browser key, Android key, and iOS key.
The Server key seems to be the one I want... but I need to provide it with some IP addresses, and I don't know where or how to find those! The console states that they are optional, but it seems insecure not to add the server IP addresses. What are the risks? Where can I find the Backendless.com app server IPs?
Server key is what you want. Restricting access by IP is a good additional security step to take; it is not, however, required. The restrictions basically ensure that if someone manages to steal your API key, they can't use it from IPs that are not whitelisted. You will have to ask Backendless.com whether they have a finite list of IPs they can guarantee your requests will come from.