Hashicorp-vault userpass authentication - amazon-ec2

I am trying to use HashiCorp Vault to store secrets (usernames and passwords) for service accounts. I am following this link https://www.vaultproject.io/docs/auth/userpass.html to create the username and password.
My question: as per this example, I am specifying the password "foo" when I curl this from any EC2 instance. We want to automate this, and the code will come from Git:
curl \
--request POST \
--data '{"password": "foo"}' \
http://10.10.218.10:8200/v1/auth/userpass/login/mitchellh
Our policy is that we must NOT store any password in Git... How do I run this curl and get authenticated to Vault without specifying the password for the user? Is this possible?

Why don't you want to use the AWS auth method?
Also, if you are sure you want to use password authentication, I think you can do something like this:
1. Generate the user/password in Vault, store the user passwords in Vault, and set a policy that lets a specific EC2 instance read a specific user's password (EC2 auth method);
2. On the EC2 instance, run consul-template, which authenticates to Vault with an EC2 instance role;
3. Have consul-template render the curl command with the specific username and password (see the sketch after this list);
4. Use that command.
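For step 3, a minimal consul-template sketch might look like the following, assuming the service account's password is stored in a KV v2 secrets engine at secret/data/service-accounts/mitchellh and that consul-template already holds a Vault token obtained via the EC2/IAM auth method (all paths and names are illustrative):
# login.sh.ctmpl -- rendered on the instance by consul-template, never committed to Git
curl \
--request POST \
--data '{"password": "{{ with secret "secret/data/service-accounts/mitchellh" }}{{ .Data.data.password }}{{ end }}"}' \
http://10.10.218.10:8200/v1/auth/userpass/login/mitchellh
This way the password only ever lives in Vault and in the file rendered on the instance, never in Git.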

Related

ansible login using PAM SSO

I need some help with Ansible login. The company I work for uses PAM, where we type in a username and info and get an auth prompt from Microsoft Authenticator to log in.
In a nutshell, from the CLI, the command to log in to a device would be
ssh -p 4422 mydomain.com\username@aus.mydomain\usernameadmin@1.1.1.1@authserver.mydomain.com
I need this translated into Ansible config for the login part, and hopefully we can also use the inventory so we don't have to specify the IP address.
Thanks in advance.
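No answer was posted, but as a rough sketch only: if the gateway follows the common pattern where everything before the last '@' is the SSH user and the final host is the PAM/auth server, the login portion could be carried in an Ansible inventory like this (all values are placeholders, and the Microsoft Authenticator prompt still has to be answered interactively at connect time):
# inventory.ini -- placeholder values; the PAM gateway is the SSH endpoint
[pam_devices]
device1 ansible_host=authserver.mydomain.com ansible_port=4422 ansible_user='mydomain.com\username@aus.mydomain\usernameadmin@1.1.1.1'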

Google cloud tasks cannot authorize to cloud run with error PERMISSION_DENIED(7): HTTP status code 403

The question is very similar to Google Cloud Tasks cannot authenticate to Cloud Run, but I am not using my own domain; I am using the Cloud Run domain itself.
I have followed both of the tutorials below (they are quite similar):
https://cloud.google.com/run/docs/triggering/using-tasks
https://cloud.google.com/tasks/docs/creating-http-target-tasks
I am not sure if there is more to the tutorial that I am missing, but below is the complete scenario.
I have some Django REST based APIs running on Google Cloud Run, and they are public. Furthermore, I can use them without any issue (though they need authentication).
I have created a Google Cloud Tasks queue, and I am sending tasks to it using the following code:
import json

from django.conf import settings
from google.cloud import tasks_v2


class QueueTaskMixin:
    def send_task(self, payload=None):
        url = 'https://my-public-cloud-run-app.a.run.app/api/conversation_types/'
        client = tasks_v2.CloudTasksClient(credentials=credentials)
        # `credentials` above belongs to a service account which has all sorts of access:
        # Cloud SQL Admin, Cloud SQL Client, Cloud SQL Editor,
        # Cloud Tasks Admin, Cloud Tasks Task Runner,
        # Service Account Token Creator, Service Account User,
        # Cloud Run Admin, Cloud Run Invoker, Cloud Run Service Agent,
        # Storage Admin
        parent = client.queue_path(
            project=settings.GS_PROJECT_ID,
            location=settings.GS_Q_REGION,
            queue=settings.GS_Q_NAME)
        task = {
            'http_request': {
                'headers': {
                    'Content-type': 'application/json',
                },
                'http_method': tasks_v2.HttpMethod.POST,
                'url': url,
                'oidc_token': {
                    # GS_Q_IAM_EMAIL is another service account that has
                    # Cloud Tasks Enqueuer, Service Account User, Cloud Run Invoker
                    'service_account_email': settings.GS_Q_IAM_EMAIL,
                },
            },
        }
        if payload:
            if isinstance(payload, dict):
                payload = json.dumps(payload)
            converted_payload = payload.encode()
            task['http_request']['body'] = converted_payload
        response = client.create_task(
            request={'parent': parent, 'task': task}
        )
        print('Created task {}'.format(response.name))
Now I am getting a PERMISSION_DENIED(7): HTTP status code 403 error.
My API logs show the same:
"POST /api/conversation_types/ HTTP/1.1" 403 58 "-" "Google-Cloud-Tasks"
Forbidden: /api/conversation_types/
Now, what I am not sure about is whether this 403 error is thrown by:
the two Google services failing to authorize to each other, or
my API, because my API requires authentication/authorization: a user needs to log in with their username and password, gets a JWT token, and can then call this API.
Referring to the documentation, I am not sure where I have to provide my APIs username/password/JWT token. The documentation says:
To authenticate between Cloud Tasks and an HTTP Target handler, Cloud Tasks creates a header token. This token is based on the credentials in the Cloud Tasks Enqueuer service account, identified by its email address.
Do I need to add this service account email address to my API as a user? Do I use OIDC or OAuth?
Any comments or answers much appreciated.
Update 1 - After removing auth from my API, Cloud Tasks is able to call the API successfully. So now, how do I authenticate Cloud Tasks so that it can call my (authenticated) API?
Update 2 - Tried using OAuthToken and got error
400 Invalid url for HttpRequest.oauth_token. The hostname in the url must end with ".googleapis.com"
Looks like I will have to go with an OIDC token only.
Update 3 - Google Docs says:
OIDC tokens are signed JSON Web Tokens (JWT) and are used primarily to assert identity and not to provide any implicit authorization against a resource, unlike OAuth tokens, which do provide access.
Does that mean OIDC tokens cannot be used for authorization? Because I am getting an authorization error here. Can't use OIDC, can't use OAuth, so what should I use?
Update 4
As per comments from @johnhanley, I have updated my API to accept a Bearer token for authentication. But even though my API receives the token correctly, it is not able to authorize it and gives an invalid-token error (I verified this using curl; in both cases, whether the token format is incorrect or the token itself is incorrect, the API simply returns 403 Forbidden).
Can anyone tell me how to give a password (I can generate one in my API for the service account's email) to the service account, so that using that password and the email as a username, OIDC can generate a token and use it to authenticate? Or am I going in the wrong direction?
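If you keep the Cloud Run service public and do the check inside the API yourself (the answers below take the other route and keep Cloud Run authentication on), a rough sketch of verifying the OIDC token that Cloud Tasks attaches, using the google-auth library, could look like this; the audience URL and function name are illustrative:
from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

# Must match the task URL (or configured audience) used when the task was created
EXPECTED_AUDIENCE = 'https://my-public-cloud-run-app.a.run.app/api/conversation_types/'

def verify_cloud_tasks_token(authorization_header):
    # The header arrives as "Bearer <signed OIDC JWT>"
    token = authorization_header.split(' ', 1)[1]
    claims = id_token.verify_oauth2_token(token, google_requests.Request(), EXPECTED_AUDIENCE)
    # claims['email'] should be the GS_Q_IAM_EMAIL service account
    return claims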
Check for the presence of the Cloud Tasks service agent (service-[project-number]@gcp-sa-cloudtasks.iam.gserviceaccount.com) on your IAM Admin Console page and grant it the roles/iam.serviceAccountTokenCreator role.
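For example, with gcloud (the project ID and project number below are placeholders):
gcloud projects add-iam-policy-binding my-gcp-project-123 \
  --member="serviceAccount:service-123456789012@gcp-sa-cloudtasks.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"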
Don't turn off authentication; it does work. Here are some steps to follow, in the form of bash shell commands with comments explaining them.
Set some variables
PROJECT_ID='my-gcp-project-123'
TASK_ENQUER_SVC_ACCOUNT='tasks-creator-svc'
TASKS_SRV_ACCOUNT='cloud-tasks-svc'
CLOUD_RUN_SRV_ACCOUNT='cloud-run-svc'
GPC_REGION='australia-southeast1'
Create a cloud task queue
gcloud tasks queues create my-task-queue
Create a service account for sending the tasks - you need cloudtasks.enqueuer and serviceAccountUser
gcloud iam service-accounts create $TASK_ENQUER_SVC_ACCOUNT \
--display-name "Service account for sending cloud tasks"
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$TASK_ENQUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/logging.logWriter"
# cloudtasks.enqueuer role is needed to create tasks
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$TASK_ENQUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/cloudtasks.enqueuer"
# service account user role is needed for permission iam.serviceAccounts.actAs
# so a task can be generated as service account $TASKS_SRV_ACCOUNT
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$TASK_ENQUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/iam.serviceAccountUser"
Create the service account for running the Cloud Run service
gcloud iam service-accounts create $CLOUD_RUN_SRV_ACCOUNT \
--display-name "Service account for running cloud run service"
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$CLOUD_RUN_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/logging.logWriter"
Create the service account that cloud-tasks will use - no special permissions needed
gcloud iam service-accounts create $TASKS_SRV_ACCOUNT \
--display-name "Service account for the cloud-tasks"
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$TASKS_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/logging.logWriter"
Deploy your cloud run service with ingress=all and no-allow-unauthenticated
Cloud tasks does not use serverless VPC connectors, so ingress needs to be all
gcloud run deploy my-cloud-run-service \
--image=us.gcr.io/${PROJECT_ID}/my-docker-image \
--concurrency=1 \
--cpu=2 \
--memory=250Mi \
--max-instances=10 \
--platform=managed \
--region=$GPC_REGION \
--no-allow-unauthenticated \
--port=8080 \
--service-account=$CLOUD_RUN_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--ingress=all \
--project=${PROJECT_ID}
You must now give permission for TASKS_SRV_ACCOUNT to call the Cloud Run target service
gcloud run services add-iam-policy-binding my-cloud-run-service \
--member=serviceAccount:$TASKS_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role=roles/run.invoker \
--region=$GPC_REGION
You can then send your tasks with authentication tokens as per the instructions at this step:
https://cloud.google.com/tasks/docs/creating-http-target-tasks#token
The service_account_email parameter is the TASKS_SRV_ACCOUNT created above
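In the Python code from the question, that corresponds to an oidc_token block along these lines (the project ID is a placeholder):
'oidc_token': {
    # cloud-tasks-svc is the TASKS_SRV_ACCOUNT created above; it holds roles/run.invoker on the service
    'service_account_email': 'cloud-tasks-svc@my-gcp-project-123.iam.gserviceaccount.com',
},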

What is the default Username and Password for ElasticSearch 7.2.0 (when x-pack enabled)?

I made this change in config/elasticsearch.yml:
xpack.security.enabled: true
Now, after starting Elasticsearch (./bin/elasticsearch), when I run:
curl localhost:9200
I get:
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
Then I tried these two:
curl localhost:9200 -u elastic:elastic
curl localhost:9200 -u elastic:changeme
getting:
{"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [elastic]",
"header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}], "type":"security_exception", "reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}%
What is the default username/password for Elasticsearch 7.2.0?
You need to use the elasticsearch-setup-passwords utility to generate/set the password for the built-in elastic user.
To set up the passwords you can use either one of the following commands:
bin/elasticsearch-setup-passwords interactive
bin/elasticsearch-setup-passwords auto
The interactive mode prompts you for new passwords for the built-in users, whereas auto generates them for you.
The elastic user is the superuser for the Elasticsearch cluster.
Read more on configuring security here.
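For instance, a quick sketch (the generated passwords will differ on your machine):
bin/elasticsearch-setup-passwords auto
# prints generated passwords for the built-in users: elastic, kibana, logstash_system, beats_system, ...
curl -u elastic:<generated-password> localhost:9200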

How to change password of AWS Cognito User?

I'm developing a web application which uses the AWS services backend side.
I'm using AWS Cognito to manage the users, but I have a problem: when I create a new user (with a temporary password), I am required to change this password manually to make it permanent.
The only way I have found to change the password is using the AWS CLI, as explained here:
https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/change-password.html
I have to type the old password, the new password, and the Access Token into the shell. The problem is: where do I find this "Access Token"? I don't know what to type in the shell! The AWS Cognito console doesn't help.
To change a user's password:
With this AWS CLI version:
$ aws --version
aws-cli/1.17.9 Python/3.6.10 Linux/5.3.0-26-generic botocore/1.14.9
you can do it this way:
aws cognito-idp admin-set-user-password --user-pool-id "eu-west-11111" --username "aaaaaa-aaaa-aaaa-aaaa" --password "a new password" --permanent
For more information:
aws cognito-idp admin-set-user-password help
The aws cognito-idp change-password can only be used with a user who is able to sign in, because you need the Access token from aws cognito-idp admin-initiate-auth.
But since the user has a temporary password, it will face the NEW_PASSWORD_REQUIRED challenge when trying to sign in.
Here's how I did it:
$ aws cognito-idp admin-create-user --user-pool-id USERPOOLID --username me@example.com --desired-delivery-mediums EMAIL --user-attributes Name=email,Value=me@example.com
$ aws cognito-idp initiate-auth --client-id CLIENTID --auth-flow USER_PASSWORD_AUTH --auth-parameters USERNAME=me@example.com,PASSWORD="tempPassword"
Now you get a NEW_PASSWORD_REQUIRED challenge and a very long session token.
Use that one to respond to the challenge:
$ aws cognito-idp admin-respond-to-auth-challenge --user-pool-id USERPOOLID --client-id CLIENTID --challenge-responses "NEW_PASSWORD=LaLaLaLa1234!!!!,USERNAME=me@example.com" --challenge-name NEW_PASSWORD_REQUIRED --session "YourLongSessionToken"
Update:
Since the original answer, a new option, aws cognito-idp admin-set-user-password has been introduced.
The right API is:
https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_AdminSetUserPassword.html
The syntax is:
{
  "Password": "string",
  "Permanent": true,
  "Username": "string",
  "UserPoolId": "string"
}
You can specify that the password is permanent, and the user will then be in the CONFIRMED status.
It's correct that this API doesn't require the old password, because requiring the admin to know it wouldn't be safe: the admin doesn't need to know user passwords. That is why the API has been named "AdminSetUserPassword" and not "AdminChangeUserPassword".
The access token is retrieved by logging the user in. You can get this token by running the AWS CLI command aws cognito-idp admin-initiate-auth for the user (found here).
This will require you to have root credentials for the Cognito pool, which I assume you have. The command will return the access token, which you can use for one hour (Cognito tokens expire after one hour regardless of settings; look here).
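Putting that together, a sketch of the flow with the original change-password command could look like this (the pool ID, client ID, and passwords are placeholders, and the app client must allow the ADMIN_USER_PASSWORD_AUTH flow):
aws cognito-idp admin-initiate-auth \
  --user-pool-id USERPOOLID \
  --client-id CLIENTID \
  --auth-flow ADMIN_USER_PASSWORD_AUTH \
  --auth-parameters USERNAME=me@example.com,PASSWORD="currentPassword"
# The response contains AuthenticationResult.AccessToken; pass it to change-password:
aws cognito-idp change-password \
  --previous-password "currentPassword" \
  --proposed-password "NewPassword1234!" \
  --access-token "ACCESS_TOKEN_FROM_ABOVE"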

Hdfs to s3 Distcp - Access Keys

To copy a file from HDFS to an S3 bucket, I used the command
hadoop distcp -Dfs.s3a.access.key=ACCESS_KEY_HERE\
-Dfs.s3a.secret.key=SECRET_KEY_HERE /path/in/hdfs s3a:/BUCKET NAME
But the access key and secret key are visible here, which is not secure.
Is there any method to provide the credentials from a file?
I don't want to edit the config file, which is one of the methods I came across.
I also faced the same situation, and got temporary credentials from the instance metadata service. (In case you're using an IAM user's credentials, please note that the temporary credentials mentioned here belong to an IAM role, which is attached to the EC2 server, not to a human; refer to http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
I found that specifying only the credentials in the hadoop distcp command will not work.
You also have to specify the config fs.s3a.aws.credentials.provider. (Refer to http://hortonworks.github.io/hdp-aws/s3-security/index.html#using-temporary-session-credentials)
The final command will look like the one below:
hadoop distcp -Dfs.s3a.aws.credentials.provider="org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider" -Dfs.s3a.access.key="{AccessKeyId}" -Dfs.s3a.secret.key="{SecretAccessKey}" -Dfs.s3a.session.token="{SessionToken}" s3a://bucket/prefix/file /path/on/hdfs
Recent (2.8+) versions let you hide your credentials in a JCEKS file; there's some documentation on the Hadoop S3 page for that. That way: no need to put any secrets on the command line at all; you just share the file across the cluster and then, in the distcp command, set hadoop.security.credential.provider.path to the path, like jceks://hdfs@nn1.example.com:9001/user/backup/s3.jceks
Fan: if you are running in EC2, the IAM role credentials should be picked up automatically from the default chain of credential providers: after looking at the config options and env vars, it tries a GET of the EC2 HTTP endpoint which serves up the session credentials. If that's not happening, make sure that com.amazonaws.auth.InstanceProfileCredentialsProvider is on the list of credential providers. It's a bit slower than the others (and can get throttled), so it's best to put it near the end.
Amazon allows you to generate temporary credentials that you can retrieve from http://169.254.169.254/latest/meta-data/iam/security-credentials/
You can read them from there. As the documentation puts it:
An application on the instance retrieves the security credentials provided by the role from the instance metadata item iam/security-credentials/role-name. The application is granted the permissions for the actions and resources that you've defined for the role through the security credentials associated with the role. These security credentials are temporary and we rotate them automatically. We make new credentials available at least five minutes prior to the expiration of the old credentials.
The following command retrieves the security credentials for an IAM role named s3access.
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
The following is example output.
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AKIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2012-04-27T22:39:16Z"
}
For applications, AWS CLI, and Tools for Windows PowerShell commands that run on the instance, you do not have to explicitly get the temporary security credentials — the AWS SDKs, AWS CLI, and Tools for Windows PowerShell automatically get the credentials from the EC2 instance metadata service and use them. To make a call outside of the instance using temporary security credentials (for example, to test IAM policies), you must provide the access key, secret key, and the session token. For more information, see Using Temporary Security Credentials to Request Access to AWS Resources in the IAM User Guide.
If you do not want to use the access and secret keys (or show them in your scripts), and your EC2 instance has access to S3, then you can use the instance credentials:
hadoop distcp \
-Dfs.s3a.aws.credentials.provider="com.amazonaws.auth.InstanceProfileCredentialsProvider" \
/hdfs_folder/myfolder \
s3a://bucket/myfolder
Not sure if it is because of a version difference, but to use "secrets from credential providers", the -Dfs.* style flags would not work for me; I had to use the -D flag (with a space before the property) as shown in the Hadoop 3.1.3 "Using_secrets_from_credential_providers" docs.
First I saved my AWS S3 credentials in a Java Cryptography Extension KeyStore (JCEKS) file.
hadoop credential create fs.s3a.access.key \
-provider jceks://hdfs/user/$USER/s3.jceks \
-value <my_AWS_ACCESS_KEY>
hadoop credential create fs.s3a.secret.key \
-provider jceks://hdfs/user/$USER/s3.jceks \
-value <my_AWS_SECRET_KEY>
Then the following distcp command format worked for me.
hadoop distcp \
-D hadoop.security.credential.provider.path=jceks://hdfs/user/$USER/s3.jceks \
/hdfs_folder/myfolder \
s3a://bucket/myfolder
