We are using SCDF on PCF with file-based authentication. It works fine on a single instance; however, when we scale to 2 or more instances, login fails with "Not Logged in" and there is no error message on the server.
Does SCDF store user info in the session? I'm not sure why login isn't working when scaled up.
SCDF - 1.5.1.RELEASE
(Apparently it was working in 1.3.0.RELEASE)
File-based authentication is not a recommended approach for cloud platforms like PCF. It relies on a per-instance, in-memory HTTP session, so once requests are load-balanced across two or more instances, a request that lands on an instance other than the one you logged in to has no session and appears logged out.
In PCF in particular, you'd want to take advantage of the single-sign-on solution provided by the platform. With OAuth and SSO backed by UAA, you get a consistent security experience regardless of the number of instances. Please refer to the write-up on the authentication options available for SCDF on PCF.
With this, you can also centrally renew an expired OAuth token or even revoke tokens as needed.
Also, as an FYI, when using the SCDF Tile, all of this is automatically configured for you. You'd create an instance of the SCDF service from the marketplace, and the space developer gains access to the Dashboard, REST APIs, and Shell - all of it works on an SSO model by default.
I have a request to restrict the access (access control) to a small user community in GCP.
Let me explain the question.
This is the current set up:
A valid GCP Organization: MyOrganization.com (under which the GCP project is deployed / provisioned)
Cloud DNS (to configure domain names, A & TXT records, zones and subdomains to build the URL for the application).
OAuth client set-up (tokens, authorized redirect URIs, etc.).
HTTPS load balancer (GKE, a managed Kubernetes service, with an Ingress), with an SSL certificate and keys issued by a trusted CA.
The application was built using Python and the Django framework.
I have already deployed the application (GCP resources) and it is working smoothly.
The thing is that, since we are working in GCP, all IAM users who have a valid userID@MyOrganization.com can access the application (https://URL-for-my-Appl.com).
Now I have a new request, which consists of restricting access (access control) to the application to only a small user community within that GCP organization.
For example, I need to ensure that only specific IAM users can access the application (https://URL-for-my-Appl.com), such as:
user1@MyOrganization.com
user2@MyOrganization.com
user3@MyOrganization.com
user4@MyOrganization.com
How could I do that, taking into account the information above?
Thanks!
You can use Cloud IAP (Identity-Aware Proxy) to do that.
Identity-Aware Proxy (IAP) lets you manage access to applications running in App Engine standard environment, App Engine flexible environment, Compute Engine, and GKE. IAP establishes a central authorization layer for applications accessed by HTTPS, so you can adopt an application-level access control model instead of using network-level firewalls. When you turn on IAP, you must also use signed headers or the App Engine standard environment Users API to secure your app.
Note: you can configure it on your load balancer.
It's not clear from your question whether your application uses Google auth (but since you mention org-restricted login, I think it does). If that's the case, you should be able to enable IAP virtually without touching anything in your application if you are using the Users API.
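If you also want a check inside Django itself (not strictly required, since IAP already blocks unauthorized users at the load balancer), IAP sends a signed JWT in the x-goog-iap-jwt-assertion header that you can verify with the google-auth library. A minimal sketch; the audience value is a placeholder for your own project number and backend service ID:

```python
# Minimal sketch: verify the signed header Cloud IAP adds to each request
# (defence in depth behind the load balancer). Requires google-auth; the
# audience string below is a placeholder for your project number and
# backend service id.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

IAP_CERTS_URL = "https://www.gstatic.com/iap/verify/public_key"
EXPECTED_AUDIENCE = "/projects/123456789/global/backendServices/987654321"  # placeholder


def iap_user_email(request):
    """Return the verified end-user e-mail from the IAP JWT, or None."""
    iap_jwt = request.META.get("HTTP_X_GOOG_IAP_JWT_ASSERTION")
    if not iap_jwt:
        return None
    try:
        claims = id_token.verify_token(
            iap_jwt,
            google_requests.Request(),
            audience=EXPECTED_AUDIENCE,
            certs_url=IAP_CERTS_URL,
        )
    except ValueError:
        return None  # signature, expiry or audience check failed
    return claims.get("email")
```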
The best and easiest solution is to deploy IAP (Identity-Aware Proxy) on your HTTPS load balancer.
Then grant the IAP-secured Web App User role (roles/iap.httpsResourceAccessor) only to the users you want (or create a G Suite user group and grant it the role; that's often easier to manage).
We are trying to put an app on the marketplace which needs multiple client_ids
(The app is running on appengine standard with python 2.7)
a client_id for the service_account with domain wide authority
a client_id for the web application
a client_id from an apps-script library
All client_ids use different scopes. I have combined all scopes and entered them on the marketplace SDK configuration.
When I deploy the app on a test domain, only the service account seems to be authorized.
When the user then accesses the web application, he is presented with a grant screen, which we want to avoid.
The documentation at https://developers.google.com/apps-marketplace/preparing?hl=fr seems to imply that multiple client_ids are possible.
How should I configure the Marketplace app so that multiple client_ids are authorized?
Is there something special I should do on the credentials configuration page of the API Manager?
Check how you implement authorization using OAuth 2.0. Service accounts allow a Google Apps domain administrator to grant service accounts domain-wide authority to access user data on behalf of users in the domain. You can also read the Server to Server Applications documentation.
Note: You can only use AppAssertionCredentials credential objects in applications that are running on Google App Engine or Google Compute Engine. If you need to run your application in other environments—for example, to test your application locally—you must detect this situation and use a different credential mechanism (see Other). You can use the application default credentials to simplify this process.
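As a rough illustration of domain-wide delegation outside of AppAssertionCredentials, here is a sketch using the google-auth and google-api-python-client libraries; the key file, scope, impersonated user, and the choice of the Drive API are all placeholders rather than anything from your setup:

```python
# Sketch of domain-wide delegation: the service account acts on behalf of a
# domain user, so no per-user consent screen is shown. Key path, scope and
# the impersonated user below are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]  # example scope

credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json", scopes=SCOPES
)
# Impersonate a user in the domain (requires domain-wide authority granted
# by the Google Apps admin for exactly these scopes).
delegated = credentials.with_subject("user@test-domain.com")

drive = build("drive", "v3", credentials=delegated)
files = drive.files().list(pageSize=10).execute()
print(files.get("files", []))
```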
Hope this helps.
It turned out all three client_ids were being authorized after all.
During the days that I was testing this, it took very long for the authorization to take effect.
At this time, all scopes and client_ids are authorized within a few minutes.
Can anyone tell me if it is possible to combine SSO from SPNEGO and Spring Security with OAuth?
This is my problem:
The client I now represent has chosen SPNEGO as their SSO solution.
This requires us to use a full-blown app server (Liberty) in all scenarios.
At the same time, the knowledge and skills around SPNEGO in the development team are very limited.
Due to issues with creating the keytab files, SPNEGO is only available in the formal test environment and not in our local test environment.
This makes it very difficult and time-consuming to test and develop, due to the long deployment time to the formal test environment.
Now over to my question:
If possible, I would like to be able to "log in" to a service in the formal test environment (an OAuth2 authentication server?) using SPNEGO SSO and get a token back that I can use in further requests towards my services located locally and/or in any other test environment.
Is this even possible? I have not seen any examples where the authentication server uses another SSO provider to actually authenticate the user.
A different possibility might be to do some sort of redirect from the login service in the test environment, but I fear the SPNEGO token created will only be valid on a server in the same domain.
I'm sorry if this question is confusing or unclear.
My knowledge of this domain (security) is limited and I struggle to get a grasp of how I can test my code locally with security enabled.
Links to any resources on the net that address some of these issues will be greatly appreciated.
I have a cluster secured by Kerberos, and have a REST API that needs to interact with the cluster on behalf of the user. I have used Spring Security with SPNEGO to authenticate the user, but when I try to use the Hadoop SDK, it fails for various reasons based on what I try.
When I try to use the SDK directly after the user logs in, it gives me "SIMPLE authentication is not enabled".
I have noticed the session's Authentication object is a UsernamePasswordAuthenticationToken, which does not make sense, since I'm authenticating against the Kerberos realm with the credentials from the user.
I am trying to use this project out of the box with my own service account and keytab: https://github.com/spring-projects/spring-security-kerberos/tree/master/spring-security-kerberos-samples/sec-server-spnego-form-auth
For what it's worth, you can leverage Apache Knox (http://knox.apache.org) to consume the Hadoop REST APIs in a secured cluster. Knox will take care of the SPNEGO negotiation with the various components for you. You could use the HTTP header based pre-auth SSO provider to propagate the identity of your enduser to Knox.
Details: http://knox.apache.org/books/knox-0-8-0/user-guide.html#Preauthenticated+SSO+Provider
If you are using that provider, however, you will need to ensure that only trusted clients can call your service.
Alternatively, you can authenticate to Knox against LDAP with username/password with the default Shiro provider.
One of the great benefits of using Knox this way is that your service never needs to know anything about whether the cluster is kerberized. Knox abstracts that from you.
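As an illustration, calling WebHDFS through a Knox gateway from Python looks roughly like the sketch below. The gateway host, topology name ("default"), path, and credentials are placeholders, and the identity header name for the pre-auth provider is whatever your topology is configured to accept (SM_USER is shown here as an assumed default):

```python
# Sketch: calling WebHDFS through a Knox gateway with the requests library.
# Host, topology name ("default"), user and password are placeholders.
import requests

KNOX = "https://knox.example.com:8443/gateway/default"

# Option 1: default Shiro provider -- authenticate to Knox against LDAP
# with username/password over HTTP Basic.
resp = requests.get(
    KNOX + "/webhdfs/v1/tmp?op=LISTSTATUS",
    auth=("enduser", "password"),
    verify=False,  # use a proper CA bundle outside of a test environment
)
print(resp.json())

# Option 2: header-based pre-auth SSO provider -- a trusted service asserts
# the end-user identity in a request header (header name is configurable;
# SM_USER is only an assumed default here).
resp = requests.get(
    KNOX + "/webhdfs/v1/tmp?op=LISTSTATUS",
    headers={"SM_USER": "enduser"},
    verify=False,
)
print(resp.json())
```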
First of all, Spring Sec Kerberos Extension is a terrible piece of code. I have evaluated it once and abstained from using it. You need the credential of the client authenticating to your cluster. You have basically two options here:
If you are on Tomcat, you can try the JEE pre-auth wrapper from Spring Security along with my Tomcat SPNEGO AD Authenticator from trunk. It will receive the delegated credential from the client, which will enable you to perform your task, assuming that your server account is trusted for delegation.
If the above is not an option, resort to S4U2Proxy/S4U2Self with Java 8, obtain a Kerberos ticket on behalf of the user principal, and then perform your REST API call.
As soon as you have the GSSCredential, the flow is the same.
Disclaimer: I have no idea about Hadoop but the GSS-API process is always the same.
My identity and access management tool of choice is OpenAM, utilising their container-based policy agents. However, this approach is not possible with the Heroku Celadon Cedar stack -- at least it doesn't look possible to me (www.heroku.com).
What is the recommended way to enforce authentication and authorization for Cedar-deployed apps?
Thanks
/W
I'm not sure about the OpenAM access management tool. However, if your application requires authentication or authorization, then I would recommend contacting third-party services like TeleSign for their identity and access management services.
You can store your users in your own database, or use a hosted identity service like Stormpath (disclaimer: it's awesome).
If you end up using something like Stormpath, you'll basically work with a REST API to create, manage, and authenticate users.
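To give a feel for that pattern, here is a rough, hypothetical sketch in Python with requests; the base URL, endpoints, and field names are illustrative only, not Stormpath's exact API, so check the provider's documentation for the real resource paths:

```python
# Hypothetical sketch of the "hosted identity service over REST" pattern:
# create an account, then authenticate a login attempt. Endpoints and field
# names are illustrative only -- consult the provider's documentation.
import base64
import requests

API_BASE = "https://api.identity-provider.example.com/v1"
API_AUTH = ("API_KEY_ID", "API_KEY_SECRET")  # placeholder API credentials
APP_ID = "my-application-id"                 # placeholder application id

# Create a user account.
requests.post(
    API_BASE + "/applications/%s/accounts" % APP_ID,
    auth=API_AUTH,
    json={"email": "user@example.com", "password": "S3cretPassw0rd"},
)

# Authenticate a login attempt (some providers take base64("email:password")).
value = base64.b64encode(b"user@example.com:S3cretPassw0rd").decode()
resp = requests.post(
    API_BASE + "/applications/%s/loginAttempts" % APP_ID,
    auth=API_AUTH,
    json={"type": "basic", "value": value},
)
print(resp.status_code, resp.json())
```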