In the legacy ACL system (pre-1.4), I was able to create ACL tokens using the API endpoint /v1/acl/update, passing an existing ID as a parameter in the payload, e.g.:
"ID": "##uuid",
This would create a token with that UUID in Consul.
In the new system, I cannot create a token and pass in an already-chosen ID for that token, either via the consul acl CLI or the ACL API. Any suggestions?
The only pre-assigned token I'm aware of that works is the bootstrap master token, which can be configured in acl.json at startup; Consul will use it to bootstrap the cluster and create the management token:
"tokens": {
"master": "##uuid",
}
Note that the purpose here is the ability to recover from an outage. If I have 100 tokens in Consul and lose the cluster, how do I rebuild with the same tokens (which would be saved off somewhere)?
This was already raised in https://github.com/hashicorp/consul/issues/4977, with the targeted feature included in the 1.4.4 release (date TBD).
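If that lands, restoring should just be a matter of replaying the saved token definitions against the token-create endpoint. A rough sketch in C#, assuming PUT /v1/acl/token accepts a caller-supplied SecretID as proposed in that issue (all token, policy, and address values below are placeholders):

// Sketch only: assumes a Consul version where PUT /v1/acl/token accepts a
// caller-supplied SecretID (per hashicorp/consul#4977). Values are placeholders.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TokenRestore
{
    static async Task Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("http://127.0.0.1:8500") };
        // Authenticate the restore itself with the bootstrap master token
        http.DefaultRequestHeaders.Add("X-Consul-Token", "<master-token-uuid>");

        // Re-create one of the saved-off tokens with its original secret
        var payload = @"{
            ""Description"": ""restored service token"",
            ""SecretID"": ""<saved-token-uuid>"",
            ""Policies"": [ { ""Name"": ""<policy-name>"" } ]
        }";

        var response = await http.PutAsync("/v1/acl/token",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}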
I'm running two Keycloak Docker instances and configured clustering as specified here: https://hub.docker.com/r/jboss/keycloak/
I can see logs related to clustering and two records in the JGROUPSPING table. It also works when I authenticate (openid-connect) through Host1, get an access token/refresh token, and then retrieve a new access_token using the refresh token via Host2, so I believe the clustering setup is working.
But I'm getting a 401 error when I make an API call to Host2 using either the access token I received from Host1 or the access_token I got from Host1's refresh token. It only works when I use an access_token received from the same host.
My understanding is that these access_tokens are not tied to a cookie, so this should work seamlessly. But it fails.
I had a problem with the verification of the access token signature.
Access tokens are signed by Keycloak using a keystore. If you don't have a certificate and key mounted in the Docker container, this keystore will be different between the nodes in your cluster, and a token generated by one node will not be valid for another node.
So you have to follow the "Setting up TLS(SSL)" part of the documentation for the Docker image.
We are working on a project where we authenticate users with Azure Active Directory. Upon successful authentication, the user's browser receives an ID token and an access token, and we then use that access token to query other Microsoft products (SharePoint, OneDrive, etc.).
We are planning to use Elasticsearch for our search needs. We have already set up SAML/OpenID realms on our ECE deployment portal and cluster. So if any user tries to access the ECE deployment portal or Kibana, they are prompted to authenticate against Microsoft Azure AD, and upon successful authentication, they get redirected to ECE or Kibana.
We are using C# and the NEST DLL (Elasticsearch.Net) to build queries and search the Elasticsearch endpoint. We are not sure how exactly we should use the access token received on the UI side to query our indices. We know we can use native user credentials or API keys to access Elasticsearch, but we want to use the same Azure AD authentication flow (SAML/OpenID) for Elasticsearch as well.
Is it possible to use the Azure AD access token received on the UI side to access and query the Elasticsearch cluster, or is there another way to re-authenticate users when they access the Elasticsearch cluster?
Is there a way to authenticate users against the Elasticsearch endpoint and generate an access token that can be used for further queries?
In short, we want to re-authenticate users with Elasticsearch while querying the data.
// Current approach: authenticate to Elasticsearch with basic (username/password) credentials
var settings = new ConnectionSettings(new Uri(mEsQuerySource.Url));
settings.BasicAuthentication("user", "plain text password");
mClient = new ElasticClient(settings);
Thank you Tim for sharing the solution on the Elastic forum. I am posting the same answer here to help other community members.
In current versions of Elasticsearch (as I write this, 7.14 is the latest version) there is no way to use an Azure AD access token to directly access Elasticsearch.
That is, you cannot have your application authenticate directly to AAD and then use the tokens you receive from AAD as a credential to authenticate to Elasticsearch.
There is no authentication provider in Elasticsearch that works with arbitrary tokens from an external issuer.
You can however do the same thing that ECE and Kibana do and perform SAML or OpenID Connect authentication via Elasticsearch, in order to generate Elasticsearch access & refresh tokens (which are separate from the Azure AD tokens).
There is documentation on how to perform SAML or OIDC authentication to Elasticsearch via a custom application.
The high-level overview would be (I assume SAML here, but OIDC would be similar):
When a user accesses your application they would authenticate against Azure AD as normal.
Then, you would use the Elasticsearch APIs to perform an additional authentication against an Elasticsearch SAML realm, with Elasticsearch as the service provider and Azure AD as the identity provider.
Since the user is already authenticated within Azure AD, that second authentication process should be transparent to the user - AAD will simply issue a new SAML assertion with Elasticsearch as the recipient.
Those Elasticsearch APIs will accept the SAML assertion and return a pair of tokens (access + refresh) that can be used to authenticate to Elasticsearch.
Your application will retain the access + refresh tokens for the user's session.
The access token will be used to authenticate when accessing Elasticsearch APIs.
The refresh token will be used to generate a new access token when the old one expires (or is about to expire).
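On the C#/NEST side, using the Elasticsearch access token obtained from that flow is just a matter of sending it as a bearer credential instead of basic auth. A minimal sketch, assuming the application has already completed the SAML exchange and holds the token (URL, class, and variable names are placeholders):

// Minimal sketch: build a NEST client that authenticates with an
// Elasticsearch access token obtained from the SAML/OIDC APIs.
using System;
using System.Collections.Specialized;
using Nest;

class EsTokenClientFactory
{
    public static ElasticClient Create(string esUrl, string esAccessToken)
    {
        var settings = new ConnectionSettings(new Uri(esUrl))
            // Send the Elasticsearch access token as a bearer credential
            .GlobalHeaders(new NameValueCollection
            {
                { "Authorization", "Bearer " + esAccessToken }
            });

        return new ElasticClient(settings);
    }
}

When the access token expires, exchange the refresh token for a new one (POST /_security/oauth2/token with grant_type=refresh_token) and rebuild the client, or at least its headers, with the new access token.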
If your users are in an identity store that Elasticsearch can query (e.g. something that supports LDAP search), then another option is to use the Elasticsearch run-as capability.
In this case your application would authenticate to Elasticsearch using a single system credential (probably a user in the native realm). That user would have permission to run-as all other users and this can be used to perform searches on behalf of your end users without needing them to authenticate directly to Elasticsearch.
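A minimal sketch of that run-as pattern with NEST, assuming the system user has been granted the run_as privilege for your end users (URL, credentials, and names are placeholders; the header Elasticsearch checks is es-security-runas-user):

// Minimal sketch: one system credential, requests executed on behalf of the
// end user via the es-security-runas-user header.
using System;
using System.Collections.Specialized;
using Nest;

class RunAsClientFactory
{
    public static ElasticClient CreateFor(string endUserName)
    {
        var settings = new ConnectionSettings(new Uri("https://es.example.com:9200"))
            // Single system credential (e.g. a native-realm user with run_as)
            .BasicAuthentication("app-system-user", "<system-user-password>")
            // Execute every request from this client as the given end user
            .GlobalHeaders(new NameValueCollection
            {
                { "es-security-runas-user", endUserName }
            });

        return new ElasticClient(settings);
    }
}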
The final option would be to implement a custom realm, if you have engineers who are comfortable writing Java.
Reference: "Use azure active directory with NEST/Elasticsearch.net" on the Discuss the Elastic Stack forum (Elasticsearch category).
According to the documentation, one prerequisite for using the NiFi CLI against a secured NiFi instance is to grant the node's identity (e.g. CN=localhost, OU=NIFI) permission to proxy user requests.
https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#prerequisites-for-running-in-a-secure-environment
I understand how to configure it through the NiFi web user interface. However, is it possible to do the same through scripting?
The reason is that I am working on a NiFi installation script, and I would like to install NiFi and configure users/policies in one go, if possible.
Thank you!
If you are trying to use the NiFi CLI to set up NiFi itself, then your only real option is for the NiFi CLI to perform operations as the initial admin identity.
It then depends on how NiFi is configured to perform authentication, i.e. where your initial admin identity comes from. Is it a DN from a client cert, a user in LDAP, a Kerberos principal, etc.?
If it is a client cert, then you can just configure the NiFi CLI to use that cert and it should work.
If it is an LDAP user, then you need to have the NiFi CLI use one of NiFi's server certs to proxy the LDAP user.
Both of these scenarios are shown in the docs:
https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#security-configuration
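For scripting, the CLI can also read its security settings from a properties file passed with -p, so an installation script only needs to generate that file and then invoke cli.sh. A rough sketch of the second scenario (server cert proxying an LDAP user); property names follow the toolkit guide linked above, and all paths, passwords, and identities are placeholders:

# cli.properties - sketch only, all values are placeholders
baseUrl=https://nifi-host:9443
keystore=/path/to/nifi-server-keystore.jks
keystoreType=JKS
keystorePasswd=<keystore-password>
keyPasswd=<key-password>
truststore=/path/to/truststore.jks
truststoreType=JKS
truststorePasswd=<truststore-password>
# the LDAP user the CLI's requests will be proxied as
proxiedEntity=<initial-admin-ldap-user>

Something like ./bin/cli.sh nifi current-user -p /path/to/cli.properties is a convenient first call to verify that the proxying works before moving on to whatever user/policy commands your CLI version provides.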
I'm trying to build a web-based SaaS solution in ASP.NET Core 2.0 using a microservices architecture and token-based authentication, with the services hosted on Docker. Each client has its own users, products, and other details, kept in separate databases with a shared schema. Each microservice has its own database (schema-per-service).
I hit a roadblock: how do I locate the logged-in user's database credentials (connection string), so that the database connection can be passed dynamically to the respective microservice to fetch data from that client's database?
I suppose that you have some sort of microservice to handle client authentication into their SaaS account and generate a token to consume the SaaS microservices (like a "private key"), correct?
It's a perfect case for a microservices architecture:
Create a microservice that owns the resources describing each client's environment configuration
This microservice receives requests carrying the client's private key
It then asks the authentication service to validate the passed private key
It gets the authentication service's response along with some sort of unique client key
It responds with the environment configuration corresponding to that unique client key (or 404 if the auth token doesn't match any client)
Now, having this microservice (I'll call it the "environment microservice"), any other microservice of your SaaS just needs to ask the environment microservice for the client's configuration (database connection string, storage system, etc.). From this point, you can implement a caching policy at each service to map private keys to a set of configurations (and persistent database connections, if your model permits). Just ensure that this cache is revalidated against the environment microservice at some interval so that tokens and configurations stay current.
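A minimal sketch of what the environment microservice's lookup endpoint could look like in ASP.NET Core (routes, interfaces, header names, and models here are all hypothetical placeholders, not a prescribed design):

// Minimal sketch of the "environment microservice" lookup endpoint.
// IAuthServiceClient and IClientConfigStore are hypothetical abstractions.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public interface IAuthServiceClient
{
    // Returns the client's unique key, or null if the private key is invalid
    Task<string> ValidateAsync(string privateKey);
}

public interface IClientConfigStore
{
    // Returns the environment configuration (connection string, storage, etc.)
    Task<object> GetConfigurationAsync(string clientId);
}

[Route("api/environment")]
public class EnvironmentController : Controller
{
    private readonly IAuthServiceClient _auth;      // calls the authentication microservice
    private readonly IClientConfigStore _configs;   // holds per-client environment configuration

    public EnvironmentController(IAuthServiceClient auth, IClientConfigStore configs)
    {
        _auth = auth;
        _configs = configs;
    }

    [HttpGet]
    public async Task<IActionResult> Get([FromHeader(Name = "X-Client-Key")] string privateKey)
    {
        // 1. Ask the authentication service to validate the client's private key
        var clientId = await _auth.ValidateAsync(privateKey);
        if (clientId == null)
            return NotFound();                      // 404: the key doesn't match any client

        // 2. Return that client's environment configuration
        var config = await _configs.GetConfigurationAsync(clientId);
        return Ok(config);
    }
}

Each of the other microservices would call this endpoint (and cache the result) to resolve the tenant's connection string before opening its own database connection.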
I have dev and prod Cognito pools, and dev/prod Lambda functions that push to dev/prod DynamoDB tables.
Is there a simple way for the app to know when to use the prod credentials (pool ID, etc.) and when to use the dev credentials?
And the same for calling the appropriate dev/prod API Gateway APIs, which check the appropriate pools for authentication and post to the appropriate DynamoDB tables? For now I just manually change the tokens, and in API Gateway I manually switch which Cognito pool the API authenticates against and which tables it posts to, which isn't very practical.
If you expose your Lambda with API Gateway, then just deploy it to two stages: a prod stage which calls the prod Lambda (which accesses prod DynamoDB), and a dev stage which calls the dev Lambda. In your application, you would just need to change the stage name, and you can do that by reading it from Info.plist.
Regarding how to get tokens for prod or dev automatically, it depends on how you obtain those tokens. For example, you could create a /login resource in API Gateway which takes username + password as parameters and returns tokens. Again, deploy it to two stages which use different Cognito pools in their backend calls. Now you can use the same variable/property in your application to pick the stage name when getting tokens too.
So, by just changing the value of one property, you can switch between prod and dev in your app.