I'm new to Open Distro for Elasticsearch and trying to run it on a Kubernetes cluster. After deploying the cluster, I need to change the password for the admin user.
I went through this post - default-password-reset
I learned that, to change the password, I need to do the following steps:
exec into one of the master nodes
generate a hash for the new password using /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh script
update /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml with the new hash
run /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh with parameters
Questions:
Is there any way to set those (via env vars or elasticsearch.yml) while bootstrapping the cluster?
I had to recreate the internal_users.yml file with the updated password hashes and mount it at /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml in the database pods.
So when the Elasticsearch nodes bootstrapped, they came up with the updated passwords for the default users (i.e. admin).
I used a bcrypt Go package to generate the password hashes.
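For illustration, here is a minimal sketch of that flow; the image tag, namespace, and ConfigMap name are examples rather than anything from the original setup. Generate the hash with the bundled tool, put it into your local copy of internal_users.yml, then expose the file to the pods, e.g. via a ConfigMap mounted at the securityconfig path:
docker run --rm --entrypoint /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh \
  amazon/opendistro-for-elasticsearch:1.13.2 -p 'MyNewAdminPassword'
kubectl create configmap es-internal-users \
  --from-file=internal_users.yml=./internal_users.yml -n elasticsearch
The StatefulSet (or Deployment) can then mount that ConfigMap item over .../securityconfig/internal_users.yml, e.g. with a subPath volume mount.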
docker exec -ti ELASTIC_MASTER bash
/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
# enter the new password when prompted
yum install nano
# replace the existing hash with the newly generated one
nano /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
# run this command for the changes to take effect
sh /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert config/root-ca.pem -cert config/admin.pem -key config/admin-key.pem
You can also run the commands below to obtain the username and password values from your Kubernetes cluster:
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.username | base64decode}}'
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.password | base64decode}}'
Note: '-n wazuh' specifies the namespace; use whichever applies to you.
Ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
If a system (e.g., a kubernetes node) is using containerd, how do I configure it to pull container images from a registry mirror instead of docker.io?
The answer seems to depend on version, but for 1.6+:
First, ensure that /etc/containerd/config.toml sets:
plugins."io.containerd.grpc.v1.cri".registry.config_path = "/etc/containerd/certs.d"
Second, create /etc/containerd/certs.d/docker.io/hosts.toml (and intervening directories as necessary) with content:
server = "https://registry-1.docker.io" # default after trying hosts
host."https://my-local-mirror".capabilities = ["pull", "resolve"]
(You may need to restart containerd after modifying the first file: systemctl restart containerd. Updates to the second path should be picked up without a restart.)
Note that the earlier version 1.4 (e.g., in amazon-eks-ami until a couple of months ago) used a quite different method to configure the mirror.
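For reference, the older 1.4-style approach set the mirror directly in config.toml, roughly like this (a sketch from memory, not the exact amazon-eks-ami contents):
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://my-local-mirror", "https://registry-1.docker.io"]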
If these 1.6+ changes are being automated, such as in a launch template user data script, the commands could be as follows. Note the escaping of quotation marks, and which side of the pipe needs extra privileges.
sudo mkdir -p /etc/containerd/certs.d/docker.io
echo 'plugins."io.containerd.grpc.v1.cri".registry.config_path = "/etc/containerd/certs.d"' | sudo tee -a /etc/containerd/config.toml
printf 'server = "https://registry-1.docker.io"\nhost."http://my-local-mirror".capabilities = ["pull", "resolve"]\n' | sudo tee /etc/containerd/certs.d/docker.io/hosts.toml
sudo systemctl restart containerd
For more recent installations, you may not need to modify config.toml (i.e. if the default is already set appropriately). Also, you may not need sudo, depending on where these commands are run from (such as in a launch template for AWS EC2).
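One way to sanity-check the result (assuming crictl is installed on the node) is to confirm the config path took effect, then pull an image through CRI while watching the mirror's access logs:
sudo containerd config dump | grep config_path
sudo crictl pull docker.io/library/busybox:latest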
For background: I'm attempting to automate steps to provision and create a multitude of Logstash processes within Ansible, but want to ensure the steps and configuration work manually before automating the process.
I have installed Logstash as per Elastic's documentation (it's an RPM installation), and have it correctly shipping logs to my ES instance without issue. Elasticsearch and Logstash are both v7.12.0.
Following the keystore docs, I've created a /etc/sysconfig/logstash file and set its permissions to 0600. I've added the LOGSTASH_KEYSTORE_PASS key to the file to use as the environment variable sourced by the keystore command on creation and reading of the keystore itself.
Upon running the sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create command, the process spits back the following error:
WARNING: The keystore password is not set.
Please set the environment variable `LOGSTASH_KEYSTORE_PASS`.
Failure to do so will result in reduced security.
Continue without password protection on the keystore? [y/N]
This should not be the case, as the keystore process should be sourcing my password env var from the aforementioned file. Has anyone experienced a similar issue, and if so, how did you solve it?
This is expected: the file /etc/sysconfig/logstash is only read when you start Logstash as a service, not when you run it from the command line.
To create the keystore you will need to export the variable with the password first, as explained in the documentation.
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
After that, when you start logstash as a service it will read the variable from the /etc/sysconfig/logstash file.
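For completeness, a minimal sketch of setting that file up for the service (the password is a placeholder; keep the file root-owned with mode 0600 as described in the question):
sudo install -m 0600 /dev/null /etc/sysconfig/logstash
echo 'LOGSTASH_KEYSTORE_PASS=mypassword' | sudo tee /etc/sysconfig/logstash > /dev/null
sudo systemctl restart logstash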
1 - First, set the password for the KEYSTORE itself.
It goes under config/startup.options.
E.g. LOGSTASH_KEYSTORE_PASS=mypassword (without export)
2 - Then you should use the Keystore password to create your keystore file.
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
..logstash/bin/logstash-keystore --path.settings ../logstash create
Note: logstash-keystore (the command) and logstash.keystore (the file) are different things; the command above creates the one with the dot. It lives in the config directory, the same place as your startup.options.
The set +o history / set -o history lines keep your password from being recorded: if somebody later runs "history" to list the commands used previously, they would otherwise see your password.
3 - Then you can add your first password to the keystore file. You need to provide your keystore password beforehand.
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
./bin/logstash-keystore add YOUR_KEY
It will then ask for your VALUE. If you do not provide your keystore password, you get an error: Found a file at....but it's not a valid Logstash keystore
4 - Once you have given your password, you can list the contents of your keystore file, or remove a key by replacing "list" with "remove".
./bin/logstash-keystore list
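Once a key is stored, pipeline configuration can reference it with the ${KEY} syntax instead of a literal secret. A minimal sketch using the YOUR_KEY placeholder from above (the host and user values are made up):
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "logstash_writer"
    password => "${YOUR_KEY}"
  }
}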
I need to create a MariaDB Docker container and need to set the root password, but the password is passed as an argument on the command line, which is dangerous because it ends up stored in .bash_history.
I tried using secrets with printf pass | docker secret create mysql-root -, but it has the same problem: the password is saved in .bash_history. The Docker secret is not very secret.
I tried an interactive command:
while read -e line; do printf $line | docker secret create mysql-root -; break; done;
But it is very ugly xD. What is a better way to create a Docker secret without saving it in the bash history, and without removing all of the bash history?
The simplest way I have found is to use the following:
docker secret create private_thing -
Then enter the secret on the command line, followed by Ctrl-D twice.
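A variant that also keeps the secret off the screen is to read it into a shell variable with read -s first, for example:
read -rsp 'Secret: ' secret_value && printf '%s' "$secret_value" | docker secret create private_thing - && unset secret_value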
You could try
printf "$line" | sudo docker secret create mysql-root -
and then
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mariadb:tag
The information concerning using secrets with MariaDB can be found on the MariaDB page of DockerHub.
"Docker Secrets
As an alternative to passing sensitive information via environment variables, _FILE may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mariadb:tag
Currently, this is only supported for MYSQL_ROOT_PASSWORD, MYSQL_ROOT_HOST, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD"
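One caveat worth noting: secrets created with docker secret create are only delivered under /run/secrets to Swarm services, so the _FILE pattern above is normally paired with docker service create rather than a plain docker run. A minimal sketch:
docker service create --name some-mariadb \
  --secret mysql-root \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root \
  mariadb:tag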
You can use the openssl rand option to generate a random string and pass it to the docker secret command, i.e.
openssl rand -base64 10| docker secret create my_sec -
The openssl rand option above generates a 10-byte, base64-encoded random string.
For example: upon launching my EC2 instance, I would like to automatically run
docker login
so I can pull a private image from Docker Hub and run it. To log in to Docker Hub I need to input a username and password, and this is what I would like to automate but haven't been able to figure out how.
I do know that you can pass in a script to be run on launch via User Data. The issue is that my script expects input and I would like to automate entering that input.
Thanks in advance!
If just entering a password for docker login is your problem then I would suggest searching for a manual for docker login. 30 secs on Google gave me this link:
https://docs.docker.com/engine/reference/commandline/login/
It suggests something of the form
docker login --username foo --password-stdin < ~/my_password.txt
Which will read the password from a file my_password.txt in the current user's home directory.
Seems like the easiest solution for you here is to modify your script to accept command line parameters, and pass those in with the UserData string.
Keep in mind that this will require you to change your launch configs every time your password changes.
The better solution here is to run your containers in ECS and let AWS handle the authentication for you (as far as pulling the correct containers from a repo).
Your UserData then turns into something along these lines:
#!/bin/bash
mkdir -p /etc/ecs
rm -f /etc/ecs/ecs.config # cleans up any old files on this instance
echo ECS_LOGFILE=/log/ecs-agent.log >> /etc/ecs/ecs.config
echo ECS_LOGLEVEL=info >> /etc/ecs/ecs.config
echo ECS_DATADIR=/data >> /etc/ecs/ecs.config
echo ECS_CONTAINER_STOP_TIMEOUT=5m >> /etc/ecs/ecs.config
echo ECS_CLUSTER=<your-cluster-goes-here> >> /etc/ecs/ecs.config
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/var/log/ecs/:/log --volume=/var/lib/ecs/data:/data --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro --publish=127.0.0.1:51678:51678 --env-file=/etc/ecs/ecs.config amazon/amazon-ecs-agent:latest
You may or may not need all the volumes specified above.
This setup lets the AWS ecs-agent handle your container orchestration for you.
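If the private image lives on Docker Hub rather than ECR, the ECS agent can also be handed registry credentials through the same ecs.config file; the lines below are a sketch with placeholder values (see the ECS agent documentation for ECS_ENGINE_AUTH_TYPE / ECS_ENGINE_AUTH_DATA):
echo 'ECS_ENGINE_AUTH_TYPE=docker' >> /etc/ecs/ecs.config
echo 'ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"email@example.com"}}' >> /etc/ecs/ecs.config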
Below is what I could suggest at this moment -
Create an S3 bucket, e.g. mybucket.
Put a text file (doc_pass.txt) with your password into that S3 bucket.
Create an IAM policy that has GET access to just that particular S3 bucket and attach it to the EC2 instance role (a sketch of such a policy follows below).
Put the script below in your user data -
aws s3 cp s3://mybucket/doc_pass.txt doc_pass.txt
cat doc_pass.txt | docker login --username=YOUR_USERNAME --password-stdin
This way you just need to keep your S3 bucket secure, and no secrets get displayed in the user data.
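For step 3, a minimal sketch of a policy scoped to just that one object (the bucket and file names are the placeholders above, and the policy name is made up; attach the resulting policy to the instance role):
cat > docker-pass-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/doc_pass.txt"
    }
  ]
}
EOF
aws iam create-policy --policy-name docker-pass-read --policy-document file://docker-pass-policy.json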
I have an EC2 ASG on AWS and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket, then having it downloaded and run upon instantiation. But it all feels a little rickety, even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script while at rest in the S3 bucket with S3 Server-Side Encryption (the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, resulting in my AWS creds being baked into the instance as ENV vars
Uploaded and encrypted my Instance-Init.sh script to S3 resulting in a private endpoint like so : https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs Python (required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use v2 self-hosted/private, s3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling that you describe when first treading this path, but after almost a year now of doing it -- and compared to the alternative of storing this sensitive configuration data in a repo or baked into the image -- I'm confident it's one of the better ways of handling this data. Now, that being said, we are currently looking at using Hashicorp's new Vault software for deploying configuration secrets to replace this "shared" encrypted secret shell script container (say that five times fast). We are thinking that Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words, we haven't run across many problems with a very similar situation we've been using for about a year, but we are now looking at using an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
items=$(credstash -t $CREDENTIAL_STORE getall -f json | jq 'to_entries | .[]' -r)
keys=$(echo $items | jq .key -r)
for key in $keys
do
export $key=$(echo $items | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
done
fi
# Hand off to the container's main command
exec "$@"
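As a usage sketch (the image name, credstash table, and command are made up): with this script set as the image's ENTRYPOINT, the secrets are exported into the environment before the container's main process starts; the instance or task role still needs IAM access to the credstash DynamoDB table and KMS key.
docker run --rm \
  -e CREDENTIAL_STORE=my-app-credentials \
  -e AWS_DEFAULT_REGION=us-east-1 \
  myorg/myapp:latest python app.py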