How to create new client certificates / tokens for programmatic access to the Kubernetes API hosted on GKE? - go

I am running a Kubernetes cluster hosted on GKE and would like to write an application (in Go) that talks to the Kubernetes API. My understanding is that I can provide a client certificate, a bearer token, or HTTP Basic Authentication credentials in order to authenticate with the apiserver. I have already found the right spot to inject any of these into the Go client library.
Unfortunately, the examples I ran across tend to reference existing credentials stored in my personal kubeconfig file. This seems inadvisable from a security perspective and makes me believe that I should create a new client certificate / token / username-password pair, so that a compromised credential can be revoked or removed without affecting my own account. However, I could not find anything in the documentation describing how to go about this when running managed Kubernetes on GKE. (There is this guide on creating new certificates, which explains that the apiserver eventually needs to be restarted with updated parameters, something that, to my understanding, cannot be done on GKE.)
Are my security concerns about reusing my personal Kubernetes credentials in one (or potentially multiple) applications unjustified? If not, what is the right approach to generate a new set of credentials?
Thanks.

If your application is running inside the cluster, you can use Kubernetes Service Accounts to authenticate to the API server.
If this is outside of the cluster, things aren't as easy, and I suppose your concerns are justified. Right now, GKE does not allow additional custom identities beyond the one generated for your personal kubeconfig file.
Instead of using your own credentials, you could grab a service account's token (inside a pod, read it from /var/run/secrets/kubernetes.io/serviceaccount/token) and use that. It's a gross hack and not a great general solution, but it might be slightly preferable to using your personal credentials.
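For illustration, a minimal Go sketch of wiring such a token into the official client-go library might look like the following. The apiserver endpoint is a placeholder, and the token/CA paths assume you either run inside a pod or have copied those files out of one; when running in-cluster you could simply call rest.InClusterConfig() instead.

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Service account token, as mounted into every pod by default.
        token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
        if err != nil {
            panic(err)
        }

        cfg := &rest.Config{
            Host:        "https://<gke-apiserver-endpoint>", // placeholder: your cluster endpoint
            BearerToken: string(token),
            TLSClientConfig: rest.TLSClientConfig{
                // Cluster CA certificate, available from the same mount.
                CAFile: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
            },
        }

        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Smoke test: list the pods this service account is allowed to see.
        pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d pods\n", len(pods.Items))
    }

Revoking access for one application then means deleting (or rotating) that service account's token rather than touching your personal credentials.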

Related

How to restrict access to a small user community (IAM users) in GCP / Cloud DNS / HTTPS application

I have a request to restrict the access (access control) to a small user community in GCP.
Let me explain the question.
This is the current set up:
A valid GCP Organization: MyOrganization.com (under which the GCP project is deployed / provisioned)
Cloud DNS (To configure domain names, A & TXT records, zones and subdomains to build the URL for the application).
OAuth client set-up (tokens, authorized redirect URIs, etc.).
HTTPS load balancer (GKE, a managed k8s service, with an Ingress), plus an SSL certificate and keys issued by a trusted CA.
The application was built using python + Django framework.
I have already deployed the application (GCP resources) and it is working smoothly.
The thing is that, since we are working in GCP, all IAM users who have a valid userID#MyOrganization.com can access the application (https://URL-for-my-Appl.com).
Now I have a new request, which consists of restricting access (access control) to the application to only a small user community within that GCP organization.
For example, I need to ensure that only specific IAM users can access the application (https://URL-for-my-Appl.com), such as:
user1#MyOrganization.com
user2#MyOrganization.com
user3#MyOrganization.com
user4#MyOrganization.com
How could I do that, taking into account the info I shared above?
thanks!
You can use Cloud IAP (Identity Aware Proxy) in order to do that.
Identity-Aware Proxy (IAP) lets you manage access to applications running in App Engine standard environment, App Engine flexible environment, Compute Engine, and GKE. IAP establishes a central authorization layer for applications accessed by HTTPS, so you can adopt an application-level access control model instead of using network-level firewalls. When you turn on IAP, you must also use signed headers or the App Engine standard environment Users API to secure your app.
Note: you can configure it on your load balancer.
It's not clear from your question whether your application uses Google auth (but considering that you talk about org-restricted login, I think it does). If that's the case, you should be able to enable IAP virtually without touching anything in your application, provided you are using the Users API.
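The asker's app is Django, but as a rough, language-neutral illustration of the signed-headers note above, here is a sketch in Go (the language of the question at the top of this page). It assumes the google.golang.org/api/idtoken package; the audience string is a placeholder that, for a backend service behind an HTTPS load balancer, is built from your project number and backend service ID.

    package main

    import (
        "context"
        "fmt"
        "net/http"

        "google.golang.org/api/idtoken"
    )

    // Placeholder audience: /projects/PROJECT_NUMBER/global/backendServices/SERVICE_ID
    const iapAudience = "/projects/123456789/global/backendServices/987654321"

    func handler(w http.ResponseWriter, r *http.Request) {
        // IAP adds a signed JWT to every request it forwards; verifying it
        // ensures the request actually came through IAP and not around it.
        jwt := r.Header.Get("x-goog-iap-jwt-assertion")
        payload, err := idtoken.Validate(context.Background(), jwt, iapAudience)
        if err != nil {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        fmt.Fprintf(w, "hello %v\n", payload.Claims["email"])
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }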
The best and easiest solution is to deploy IAP (Identity-Aware Proxy) on your HTTPS load balancer.
Then grant access only to the users that you want (or create a G Suite user group and grant it access; a group is often easier to manage).

Authenticating External System Connections to Web API with Azure AD

I'm trying to figure out how to migrate a system that is currently using ACS to Azure AD. I've read the migration docs provided by Azure and have looked through the Azure AD docs and the sample code but I'm still a bit lost as to what the best approach for my situation would be.
I've got a web API that has about 100 separate external systems connecting to it on a regular basis, and we add a new connection approximately once a week. These external systems are not users; they are applications that are integrated with my application via my web API.
Currently each external system has an ACS service identity / password which they use to obtain a token which we then use to authenticate. Obviously this system is going away as of November 7.
All of the Azure AD documentation I've read so far indicates that, when I migrate, I should set up each of my existing clients as an "application registration" in Azure AD. The upshot of this is that each client, instead of connecting to me using a username and password, will have to connect using an application ID (which is always a GUID), an encrypted password, and a "resource" which seems to be the same as an audience URL from what I can see. This in itself is cumbersome but not that bad.
Then, implementing the authorization piece in my web API is deceptively simple. It looks like, fundamentally, all I need to do is include the properly configured [Authorize] attribute in my ApiController. But the trick is in getting it to be properly configured.
From what I can see in all the examples out there, I need to hard-code the unique Audience URL for every single client that might possibly connect to my API into my startup code somewhere, and that really does not seem reasonable to me so I can only assume that I must be missing something. Do I really need to recompile my code and do a new deployment every time a new external system wants to connect to my API?
Can anyone out there provide a bit of guidance?
Thanks.
You have misunderstood how the audience URI works.
It is not your client's URI, it is your API's URI.
When the clients request a token using Client Credentials flow (client id + secret), they all must use your API's App ID URI as the resource.
That will then be the audience in the token.
Your API only needs to check the token contains its App ID URI as the audience.
Though I also want to mention that if you want to take this a step further, you should define at least one application permission in your API's manifest. You can check my article on adding permissions.
Then your API should also check that the access token contains something like:
"roles": [
"your-permission-value"
]
It makes the security a bit better, because otherwise any client app with an id + secret can get an access token for any API in that Azure AD tenant.
With application permissions, you can require that a permission be explicitly assigned to a client before it can call your API.
It would make the migration a tad more cumbersome, of course, since you'd have to define this app permission and grant it to all of the clients.
All of that can be automated with PowerShell though.
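In ASP.NET the JWT bearer middleware behind [Authorize] performs the signature and audience validation for you. Purely as a language-neutral sketch of the two claim checks described above (written in Go to match the rest of this page, and assuming the token's signature has already been validated), with a hypothetical App ID URI and permission value:

    package main

    import "fmt"

    // checkAzureADClaims sketches the two checks described above: the audience
    // must be our API's App ID URI, and the token must carry the application
    // permission (role) defined in the API's manifest.
    func checkAzureADClaims(claims map[string]interface{}) error {
        const expectedAudience = "api://my-web-api"  // hypothetical App ID URI
        const requiredRole = "ExternalSystem.Access" // hypothetical app permission value

        if aud, _ := claims["aud"].(string); aud != expectedAudience {
            return fmt.Errorf("unexpected audience %q", aud)
        }

        roles, _ := claims["roles"].([]interface{})
        for _, r := range roles {
            if r == requiredRole {
                return nil
            }
        }
        return fmt.Errorf("required app permission %q not present", requiredRole)
    }

    func main() {
        claims := map[string]interface{}{
            "aud":   "api://my-web-api",
            "roles": []interface{}{"ExternalSystem.Access"},
        }
        fmt.Println(checkAzureADClaims(claims)) // prints <nil> when both checks pass
    }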

Auto sign on with Windows Authentication

I have been having a lot of problems, running into misinformation and confusion, while attempting to find out whether this is plausible and viable.
The requirement is for a remote client, accessing our website to be auto signed in with their Active Directory User account.
We have the option to set up a WCF service (or something similar) on their remote server for authentication purposes, which, from my limited understanding, is how this problem would be tackled.
So, my question after a little background is this.
CAN this be done, and HOW can it be done?
Instead of hosting a WCF service on their domain, I would look into installing ADFS on their domain.
You can change your website to accept security tokens from ADFS using the WS-Federation protocol. You can use classes from the System.IdentityModel namespace for that. An example of how to implement this in ASP.NET can be found here.
An alternative would be to use Azure Active Directory as your identity provider and have your client sync accounts to their AAD directory (or federate between AAD and ADFS). An example can be found here.

Centralized Authentication Server OpenAM vs FreeRadius

The basic requirement is to centralize the authentication and authorization of multiple SaaS applications to ease development (each SaaS application using minimal code to authenticate against a single source) and when necessary provide SSO. The authentication mechanism must handle the following options available to the user:
Use Third Party Authentication -- Google
Use our centralized authentication
Use the corporate provided authentication (ADFS)
In my research, I have found many, many ways this can be done and have found OpenAM to be the most complete solution, but then I came across FreeRadius which could also be used.
My Questions are:
There seems to be a plug-in for each tool so that one can be used with the other (OpenAM can authenticate against a RADIUS server), but is there any use case where FreeRadius would be preferred as the SOLE authentication server over OpenAM?
Does OpenAM require that a web agent be installed for the server? If all I am doing is serving a RESTful interface (developed in Node.js), is it possible to authenticate users without installing a web agent (there is no web agent for Node.js)?
Can I pass user credentials from Browser -> Server (node.js) -> OpenAM, thereby not presenting the user with the OpenAM login screen? The OpenAM token would then be passed from OpenAM -> Server -> Browser (setting the cookie's origin as the SaaS application's).
That is, each SaaS application server would serve as a "proxy" for user management (authenticating, authorizing, and managing [create|update|delete] users).
Thank you
I'm early to the Open Identity Stack game, but I am deploying an OpenAM (plus OpenIDM and OpenDJ) based solution to handle exactly the scenarios you mention.
direct answers:
As far as handing sole authentication over to FreeRadius, I don't see why you would want to, but anything is possible. Given your mention of multiple identity sources (Google, ADFS, and your own centralized authentication), I would think hooking up OpenAM to provide the RADIUS authentication (i.e. OpenAM's RADIUS hook, not FreeRadius) would make more sense.
No, a web agent doesn't have to be installed, but it may make sense. There are some node.js pieces to help (https://github.com/alesium/node-openam). You just need to talk from your server to the OpenAM side over REST and that should be good (see the sketch below).
You can do that, or you can just skin the OpenAM login screen to look like your own. I'd suggest the latter, as you're then relying on OpenAM for the login screen's security; if you're doing a pure proxy, you take that burden on yourself. Your call as a design decision, obviously.
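For the proxy approach, the OpenAM side is essentially a couple of REST calls. A rough sketch in Go (the deployment URL and credentials are placeholders, and the exact endpoint path and realm depend on your OpenAM version) of exchanging a username/password for an SSO token via the JSON authenticate endpoint:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // Placeholder OpenAM deployment URL; credentials would be forwarded
        // from the browser by the SaaS application server.
        url := "https://openam.example.com/openam/json/authenticate"

        req, err := http.NewRequest("POST", url, bytes.NewBufferString("{}"))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "application/json")
        req.Header.Set("X-OpenAM-Username", "demo")
        req.Header.Set("X-OpenAM-Password", "changeit")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var out struct {
            TokenID string `json:"tokenId"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            panic(err)
        }
        // This is the SSO token you would hand back to the browser as a cookie.
        fmt.Println("SSO token:", out.TokenID)
    }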
good luck!
You're comparing a RADIUS server with a web SSO solution... I'm not sure that makes sense.
It seems FreeRadius does not have that many 'auth backends' (like OAuth to leverage Google Auth).
I am looking into the solution for a similar requirement myself, but I am looking to integrate 2FA as well. I have seen so many different solutions, but haven't pinned down the best one yet. Here is what I have come up with so far:
RCDev OpenID seems to be pretty comprehensive, and it is free for cases with less than 40 users.
Green Rocket's GreenRADIUS is expensive, but they have plugins for every scenario and it can work.
Red Hat's Keycloak could be used in combination with TACACS+ or FreeRADIUS to accomplish this

AWS and Shibboleth/SAML

I have been looking into whether it is possible to use Shibboleth/SAML with Amazon Web Services.
I'm finding very little information on this. As far as I can tell, it is possible to install Shibboleth/SAML on an EC2 server as a Service Provider.
What I am not so sure on is whether it is possible to tie all of AWS to Shibboleth - and how this would work.
My knowledge of all three is fuzzy at best; I've been doing a great deal of reading, but I'm not really familiar with this technology at all.
If I understand you correctly, what you are trying to do is use identity federation to grant a user temporary security credentials to perform AWS api calls. You would like your users to authenticate to your own identity provider (Shibboleth in this case), and be granted access to AWS services based on that authentication.
A good example of this that you can use as a framework is in this AWS sample code.
In a nutshell:
You need a proxy that the users connect to, passing in their authentication credentials. You would then verify them by authenticating to Shibboleth, AD, LDAP or whatever.
You need a Token Vending Machine that your proxy would then call to obtain temporary AWS credentials using GetFederationTokenRequest (see the sketch after this list).
Your client would then use the temporary credentials given to it to make the AWS API calls.
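The linked sample is Java-based (hence GetFederationTokenRequest); with the AWS SDK for Go the TVM step looks roughly like the sketch below. The federated user name and the inline policy are made-up examples; in practice the policy would be derived from what the authenticated user is allowed to do.

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/sts"
    )

    func main() {
        // The TVM itself runs with long-lived credentials; the federated user
        // only ever sees the temporary ones returned below.
        sess := session.Must(session.NewSession())
        svc := sts.New(sess)

        // Example inline policy: read-only access to a single S3 bucket.
        policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow",` +
            `"Action":"s3:GetObject","Resource":"arn:aws:s3:::example-bucket/*"}]}`

        out, err := svc.GetFederationToken(&sts.GetFederationTokenInput{
            Name:            aws.String("alice"), // the user your proxy just authenticated
            DurationSeconds: aws.Int64(3600),
            Policy:          aws.String(policy),
        })
        if err != nil {
            panic(err)
        }

        creds := out.Credentials
        fmt.Println("temporary access key:", *creds.AccessKeyId)
        fmt.Println("expires at:", creds.Expiration)
    }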
The concepts of federated identity include terms like STS, SP, and IdP, if you are looking for a starting place to research the topic more.
