I have been trying to push the audit payload to an Elasticsearch data source from my NiFi pipeline. I see that PutElasticsearchRecord can use an ElasticSearchClientServiceImpl configured with a username and password, but my requirement is principal- and keytab-based auth. Has someone done something similar, or is implementing your own ElasticSearchClient with Kerberos auth the best way?
Related
We are working on a project where we authenticate users with Azure Active Directory. Upon successful authentication, the user's browser receives an ID token and an access token, and we then use that access token to query other Microsoft products (SharePoint, OneDrive, etc.).
We are planning to use Elasticsearch for our search needs. We have already set up SAML/OpenID realms on our ECE deployment portal and cluster, so if any user tries to access the ECE deployment portal or Kibana, they are prompted to authenticate against Microsoft Azure AD and, upon successful authentication, get redirected to ECE or Kibana.
We are using C# and the NEST DLL (Elasticsearch.Net) to build queries and search the Elasticsearch endpoint. We are not sure how exactly we should use the access token received on the UI side with Elasticsearch to query our indices. We know we can use native user credentials or API keys to access Elasticsearch, but we want to use the same Azure AD authentication flow (SAML/OpenID) to access Elasticsearch.
Is it possible to use the Azure AD access token received on the UI side to access and query the Elasticsearch cluster, or is there another way to re-authenticate users when they access the cluster?
Is there a way to authenticate users against the Elasticsearch endpoint and generate an access token that can be used to query Elasticsearch further?
In short, we want to re-authenticate users with Elasticsearch while querying the data.
// Current setup: basic authentication with a native user.
// This is what we would like to replace with the Azure AD (SAML/OpenID) flow.
var settings = new ConnectionSettings(new Uri(mEsQuerySource.Url));
settings.BasicAuthentication("user", "plain text password");
mClient = new ElasticClient(settings);
Thank you, Tim, for sharing the solution on the Elastic discussion portal. I am posting the same answer here to help other community members.
In current versions of Elasticsearch (as I write this, 7.14 is the latest version) there is no way to use an Azure AD access token to directly access Elasticsearch.
That is, you cannot have your application authenticate directly to AAD and then use the tokens you receive from AAD as a credential to authenticate to Elasticsearch.
There is no authentication provider in Elasticsearch that works with arbitrary tokens from an external issuer.
You can however do the same thing that ECE and Kibana do and perform SAML or OpenID Connect authentication via Elasticsearch, in order to generate Elasticsearch access & refresh tokens (which are separate from the Azure AD tokens).
There is documentation on how to perform SAML or OIDC authentication to Elasticsearch via a custom application.
The high-level overview would be as follows (I assume SAML here, but OIDC would be similar; a rough code sketch follows the steps below):
When a user accesses your application they would authenticate against Azure AD as normal.
Then, you would use the Elasticsearch APIs to perform an additional authentication against an Elasticsearch SAML realm, with Elasticsearch as the service provider and Azure AD as the identity provider.
Since the user is already authenticated within Azure AD, that second authentication process should be transparent to the user - AAD will simply issue a new SAML assertion with Elasticsearch as the recipient.
Those Elasticsearch APIs will accept the SAML assertion and return a pair of tokens (access + refresh) that can be used to authenticate to Elasticsearch.
Your application will retain the access + refresh tokens for the user's session.
The access token will be used to authenticate when accessing Elasticsearch APIs.
The refresh token will be used to generate a new access token when the old one expires (or is about to expire).
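To make the token exchange concrete, here is a minimal sketch in Python using the documented SAML APIs (the same REST calls can be made from NEST's low-level client). The Elasticsearch URL, realm name saml1, index name, and the way your app captures the SAMLResponse from Azure AD are assumptions; depending on your cluster configuration these security API calls may also need to be made with your application's own service credentials, and TLS verification settings are omitted.

import requests

ES_URL = "https://es.example.com:9200"  # assumption: your Elasticsearch endpoint
REALM = "saml1"                          # assumption: the name of your SAML realm

# 1. Ask Elasticsearch to prepare a SAML authentication request for the realm.
prep = requests.post(ES_URL + "/_security/saml/prepare", json={"realm": REALM}).json()
# prep["redirect"] is the URL your app sends the (already signed-in) user to;
# Azure AD then posts a SAMLResponse back to your app's ACS endpoint.

# 2. Exchange the SAMLResponse for Elasticsearch access + refresh tokens.
saml_response = "..."  # base64-encoded SAMLResponse captured by your ACS endpoint
tokens = requests.post(ES_URL + "/_security/saml/authenticate",
                       json={"content": saml_response, "ids": [prep["id"]]}).json()
access_token = tokens["access_token"]
refresh_token = tokens["refresh_token"]

# 3. Use the access token as a Bearer credential when querying.
resp = requests.get(ES_URL + "/my-index/_search",
                    headers={"Authorization": "Bearer " + access_token})

# 4. When the access token expires, trade the refresh token for a new pair.
new_tokens = requests.post(ES_URL + "/_security/oauth2/token",
                           json={"grant_type": "refresh_token",
                                 "refresh_token": refresh_token}).json()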
If your users are in an identity store that Elasticsearch can query (e.g. something that supports LDAP search), then another option is to use the Elasticsearch run-as capability.
In this case your application would authenticate to Elasticsearch using a single system credential (probably a user in the native realm). That user would have permission to run-as all other users and this can be used to perform searches on behalf of your end users without needing them to authenticate directly to Elasticsearch.
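A minimal sketch of the run-as approach, assuming a native-realm service account whose role grants run_as for your end users (all names here are hypothetical):

import requests

ES_URL = "https://es.example.com:9200"         # assumption: your Elasticsearch endpoint
SERVICE_USER = ("search-app", "app-password")   # assumption: native-realm service account

# The application authenticates as its own service account and asks Elasticsearch
# to execute the request as the end user via the es-security-runas-user header.
resp = requests.get(ES_URL + "/my-index/_search",
                    auth=SERVICE_USER,
                    headers={"es-security-runas-user": "jane.doe@example.com"})
print(resp.json())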
The final option would be to implement a custom realm, if you have engineers who are comfortable writing Java.
Reference: "Use Azure Active Directory with NEST/Elasticsearch.net" on Discuss the Elastic Stack.
According to the documentation, one prerequisite for using NiFi CLI against a secured NiFi instance is to grant the node's identity (e.g. CN=localhost, OU=NIFI) permission to proxy user requests.
https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#prerequisites-for-running-in-a-secure-environment
I understand how to configure it through the NiFi web user interface. However, is it possible to do the same through scripting?
The reason is that I am working on a NiFi installation script, and I would like to install NiFi and configure users/policies in one go if it is possible.
Thank you!
If you are trying to use NiFi CLI to set up NiFi itself, then your only real option is for NiFi CLI to perform operations as the initial admin identity.
It then depends on how NiFi is configured to perform authentication, i.e. where your initial admin identity comes from: is it a DN from a client cert, a user in LDAP, a Kerberos principal, etc.?
If it is a client cert, then you can just configure NiFi CLI to use that cert and it should work.
If it is an LDAP user, then you need to have NiFi CLI use one of NiFi's server certs to proxy the LDAP user.
Both of these scenarios are shown in the docs:
https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#security-configuration
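As a rough sketch of what that can look like in an install script (here in Python, since the later questions use it): write a CLI properties file pointing at the server keystore/truststore and, for the LDAP case, a proxiedEntity, then invoke cli.sh non-interactively. All paths, passwords, and identities below are assumptions; the property names are the ones listed in the toolkit guide.

import subprocess
import textwrap

# CLI properties: authenticate with the node's certificate and proxy the
# initial admin (LDAP) identity. Adjust or drop proxiedEntity for the
# client-cert-as-initial-admin case.
props = textwrap.dedent("""\
    baseUrl=https://nifi.example.com:9443
    keystore=/opt/nifi/conf/keystore.p12
    keystoreType=PKCS12
    keystorePasswd=keystorePassword
    keyPasswd=keyPassword
    truststore=/opt/nifi/conf/truststore.p12
    truststoreType=PKCS12
    truststorePasswd=truststorePassword
    proxiedEntity=initial-admin
    """)

with open("/tmp/nifi-cli.properties", "w") as f:
    f.write(props)

# Any CLI command can now run non-interactively, e.g. listing users; the same
# pattern applies to the user/policy commands used during initial setup.
subprocess.run(["/opt/nifi-toolkit/bin/cli.sh", "nifi", "list-users",
                "-p", "/tmp/nifi-cli.properties"],
               check=True)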
I am new to OpenID Connect and the security domain. I have configured NiFi to use OpenID Connect for authentication using the online documentation, and to automate a few NiFi-related tasks we are using nipyapi.
I have already written Python code that does automated flow deployment for a basic NiFi installation (unsecured and without user authentication).
Now I have to move the code to a secured NiFi installation. How do I authenticate with OpenID Connect using nipyapi or the REST API?
As per the discussion with Bryan, I am planning to use a client certificate for authentication, but it started giving an authorization error. I have created another question with the details:
Nifi - Client Certificate Authorization Error
OpenID Connect generally requires that you follow a flow of redirects, typically in the browser. NiFi redirects you to the login page of the OIDC provider and, upon completion, the OIDC provider redirects you back to NiFi. I'm not exactly sure how, or if you even can, perform this login process from scripts. An easy alternative would be to just generate a client certificate to represent an automation user for any NiPyApi scripts. Client certificate authentication is always enabled by default for NiFi.
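If you go the client-certificate route, a minimal nipyapi sketch looks roughly like this (host, file paths, and passwords are assumptions; the certificate's DN still needs the appropriate policies granted in NiFi):

import nipyapi

# Point nipyapi at the secured NiFi REST API (assumed host/port).
nipyapi.config.nifi_config.host = "https://nifi.example.com:9443/nifi-api"

# Present the automation user's client certificate for mutual TLS.
nipyapi.security.set_service_ssl_context(
    service="nifi",
    ca_file="/opt/certs/ca.pem",
    client_cert_file="/opt/certs/automation-user.crt",
    client_key_file="/opt/certs/automation-user.key",
    client_key_password="keyPassword",
)

# Simple smoke test: fetch the root process group id.
print(nipyapi.canvas.get_root_pg_id())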
I am using the latest Confluent images (5.1.0) and have externalized Oracle DB passwords for Connect configurations in Vault. I am able to successfully register the custom config provider for Vault with the following configuration:
"connection.password": "${vault:vault_path:vault_db_password_key}"
When I do a GET connector request, the connection.password in the response is the resolved password; it is shown in plain text rather than hidden. But in the logs, I see it as:
connection.password = [hidden]
Please let me know if this issue is handled as part of KIP-297 or if I am missing something?
Not sure if this answers your question, but there is no such config provider for vault: out of the box; by default there is only file:, so unless you've written your own (or found one elsewhere), that won't work.
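For reference, this is roughly how the built-in file: provider is wired up and referenced; a vault: scheme would need a custom ConfigProvider implementation registered the same way. Host names, paths, and the connector class below are just illustrative assumptions.

# The Connect worker must declare the provider in its worker properties, e.g.:
#   config.providers=file
#   config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
# Connector configs can then reference keys from a local properties file.
import requests

connector_config = {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",  # example connector
    "connection.url": "jdbc:oracle:thin:@db.example.com:1521/ORCL",      # assumed URL
    # Resolved on the worker from /opt/secrets/db.properties, key db.password:
    "connection.password": "${file:/opt/secrets/db.properties:db.password}",
}

requests.put("http://connect.example.com:8083/connectors/oracle-source/config",  # assumed REST endpoint
             json=connector_config)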
I have a cluster secured by Kerberos, and have a REST API that needs to interact with the cluster on behalf of the user. I have used Spring Security with SPNEGO to authenticate the user, but when I try to use the Hadoop SDK, it fails for various reasons based on what I try.
When I try to use the SDK directly after the user logs in, it gives me "SIMPLE authentication is not enabled".
I have noticed the session's Authentication object is a UsernamePasswordAuthenticationToken, which does not make sense, since I'm authenticating against the Kerberos realm with the credentials from the user.
I am trying to use this project out of the box with my own service account and keytab: https://github.com/spring-projects/spring-security-kerberos/tree/master/spring-security-kerberos-samples/sec-server-spnego-form-auth
For what it's worth, you can leverage Apache Knox (http://knox.apache.org) to consume the Hadoop REST APIs in a secured cluster. Knox will take care of the SPNEGO negotiation with the various components for you. You could use the HTTP-header-based pre-auth SSO provider to propagate the identity of your end user to Knox.
Details: http://knox.apache.org/books/knox-0-8-0/user-guide.html#Preauthenticated+SSO+Provider
You will need to ensure that only trusted clients can call your service if you are using that provider, however.
Alternatively, you can authenticate to Knox against LDAP with username/password with the default Shiro provider.
One of the great benefits of using Knox this way is that your service never needs to know anything about whether the cluster is kerberized. Knox abstracts that from you.
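For example, going through Knox with the default Shiro (LDAP username/password) provider might look like this from your service; the gateway URL, topology name, credentials, and CA path are assumptions:

import requests

KNOX_URL = "https://knox.example.com:8443/gateway/default"  # assumed gateway host + topology

# Knox does the SPNEGO negotiation with the kerberized cluster; the service only
# authenticates to Knox itself (here via LDAP username/password).
resp = requests.get(KNOX_URL + "/webhdfs/v1/user/jane?op=LISTSTATUS",
                    auth=("jane", "jane-password"),
                    verify="/opt/certs/knox-ca.pem")
print(resp.json())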
First of all, Spring Sec Kerberos Extension is a terrible piece of code. I have evaluated it once and abstained from using it. You need the credential of the client authenticating to your cluster. You have basically two options here:
If you are on Tomcat, you can try the JEE pre-auth wrapper from Spring Security along with my Tomcat SPNEGO AD Authenticator from trunk. It will receive the delegated credential from the client, which will enable you to perform your task, assuming that your server account is trusted for delegation.
If the above is not an option, resort to S4U2Proxy/S4U2Self with Java 8, obtain a Kerberos ticket on behalf of the user principal, and then perform your REST API call.
As soon as you have the GSSCredential the flow is the same.
Disclaimer: I have no idea about Hadoop but the GSS-API process is always the same.