How to create a secret docker secret? - bash

I need to create a MariaDB docker container, but I need to set the root password, and the password is passed as an argument on the command line, which is dangerous because it ends up stored in .bash_history.
I tried using secrets with printf pass | docker secret create mysql-root -, but it has the same problem: the password is saved into .bash_history. The docker secret is not very secret.
I tried an interactive command:
while read -e line; do printf $line | docker secret create mysql-root -; break; done;
But it is very ugly xD. Is there a better way to create a docker secret without saving it into bash history and without removing all bash history?

The simplest way I have found is to use the following:
docker secret create private_thing -
Then enter the secret on the command line, followed by Ctrl-D twice.
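If you prefer being prompted without the secret ever appearing on screen or in your history, a small sketch (my own, not part of the original answer) using read -s:
# read -s prompts without echoing; the typed value never lands in .bash_history
read -r -s -p "Secret: " SECRET
printf '%s' "$SECRET" | docker secret create mysql-root -
unset SECRET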

You could try
printf $line | sudo docker secret create MYSQL_ROOT_PASSWORD -
and then
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mariadb:tag
The information concerning using secrets with MariaDB can be found on the MariaDB page of DockerHub.
"Docker Secrets
As an alternative to passing sensitive information via environment variables, _FILE may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mariadb:tag
Currently, this is only supported for MYSQL_ROOT_PASSWORD, MYSQL_ROOT_HOST, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD"
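Note that secrets created with docker secret create are only mounted into swarm services, not plain docker run containers. A sketch of wiring the secret above into a service (the service name and image tag are my own placeholders, not from the answer):
docker service create --name mariadb \
  --secret source=MYSQL_ROOT_PASSWORD,target=mysql-root \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root \
  mariadb:latest
The target=mysql-root part makes the secret appear in the container at /run/secrets/mysql-root, matching the path used above.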

You can use the openssl rand command to generate a random string and pipe it to the docker secret command, i.e.
openssl rand -base64 10 | docker secret create my_sec -
openssl rand -base64 10 generates 10 random bytes and base64-encodes them.
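If you also need to know the generated value later, a small sketch (my own, not from the answer) keeps it in a shell variable; the generated value itself never appears in .bash_history:
PASS="$(openssl rand -base64 16)"
printf '%s' "$PASS" | docker secret create mysql-root -
# record $PASS somewhere safe (e.g. a password manager), then clear it
unset PASS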

Related

Open Distro for Elasticsearch: reset default admin password

I'm new to Open Distro for Elasticsearch and am trying to run it on a Kubernetes cluster. After deploying the cluster, I need to change the password for the admin user.
I went through this post - default-password-reset
I learned that, to change the password, I need to do the following steps:
exec in one of the master nodes
generate a hash for the new password using /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh script
update /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml with the new hash
run /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh with parameters
Questions:
Is there any way to set those (via env or elasticsearch.yml) while bootstrapping the cluster?
I had to recreate the internal_users.yml file with the updated password hashes and mount it at /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml in the database pods.
That way, when the Elasticsearch nodes bootstrapped, they came up with the updated passwords for the default users (i.e. admin).
I used the bcrypt Go package to generate the password hashes.
docker exec -ti ELASTIC_MASTER bash
/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
# enter the new password when prompted
yum install nano
# replace the existing hash with the newly generated one
nano /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
# run this command for the change to take effect
sh /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert config/root-ca.pem -cert config/admin.pem -key config/admin-key.pem
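As an aside (my own suggestion, not something the original answer used): if you prefer generating the bcrypt hash outside the container, the htpasswd tool from apache2-utils can produce one:
# prompts for the new password and prints only the bcrypt hash (cost factor 12)
htpasswd -nBC 12 "" | tr -d ':\n'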
You can also execute the commands below to obtain the username and password values from your Kubernetes cluster:
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.username | base64decode}}'
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.password | base64decode}}'
Note: '-n wazuh' indicates the namespace; use whatever applies to you
Ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
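For completeness, a hedged sketch of how such a secret could be created in the first place (the elastic-cred name and wazuh namespace come from the commands above; the literal values are placeholders):
kubectl create secret generic elastic-cred -n wazuh \
  --from-literal=username=admin \
  --from-literal=password='CHANGE_ME'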

Docker login to gcp using json credentials

I want to log in to Docker on Google Cloud from the command line in Windows using credentials in JSON format.
Firstly, I generated the service account keys in Google Cloud IAM & Admin. Afterwards, I tried to log in as advised, using the following commands:
set /p PASS=<keyfile.json
docker login -u _json_key -p "%PASS%" https://[HOSTNAME]
The JSON that is generated by Google, though, has newline characters, and the above set command couldn't read the whole file.
Then, I edited the file to be a single line. But still, the set command is not reading the whole file. Any advice on how to read a JSON file using the set command and pass it to the docker login command below?
The solution to this is to run the following command:
docker login -u _json_key --password-stdin https://gcr.io < keyfile.json
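On a Linux or macOS shell, the equivalent piped form is:
cat keyfile.json | docker login -u _json_key --password-stdin https://gcr.io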

How to automate Quay.io login in a shell script?

I've got a shell script that I use to configure my Ubuntu instance upon instantiation. One of the things I need to do is login to my Quay.io account so I can pull docker images from my private registry. Kinda like so:
Instance-Config.sh
#!/bin/bash
docker login quay.io -u 'myUserName' -p 'myPassword' -e 'me@mydomain.com'
docker run quay.io/myUserName/myContainerName
The above script works just fine when logging in to Docker Hub, but when I try to use it to log in to Quay.io, it prompts for the various arguments (-u, -p, -e) when it should fill them in automatically from the arguments provided in the command.
How do I go about automating login for Quay.io?
I should note that I've already tried logging in, copying the contents of the ~/.dockercfg file, and then echoing the resulting string into a new .dockercfg file in the Instance-Init.sh script. But there must be a machine id or something in the auth token that's produced and placed in the .dockercfg file, so the resulting login from one machine cannot be used on a new instance (which is probably a good thing).
Doi. You need to put the host argument at the end, as illustrated in their docs:
#!/bin/bash
docker login -u 'myUserName' -p 'myPassword' -e 'me@mydomain.com' quay.io
docker run quay.io/myUserName/myContainerName
Hopefully that'll help someone else save some time.
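On newer Docker clients the -e/--email flag has been removed, so a variant using --password-stdin (the QUAY_PASSWORD variable here is just a placeholder) keeps the password out of both the prompt and ps output:
printf '%s' "$QUAY_PASSWORD" | docker login -u 'myUserName' --password-stdin quay.io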

Is it secure to store EC2 User-Data shell scripts in a private S3 bucket?

I have an EC2 ASG on AWS and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket, having it downloaded and run upon instantiation. But it all feels a little rickety even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script itself while at rest in the S3 bucket using S3 Server-Side Encryption (because the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, resulting in my AWS creds being baked into the instance as ENV vars
Uploaded and encrypted my Instance-Init.sh script to S3, resulting in a private endpoint like so: https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
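A rough sketch of what that upload might look like with S3 server-side encryption (the bucket name comes from this question; the SSE mode and region are assumptions):
aws s3 cp Instance-Init.sh s3://super-secret-bucket/Instance-Init.sh --sse AES256 --region us-east-1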
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs Python (required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use v2 self-hosted/private, s3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling that you describe when first treading this path, but after almost a year now of doing it -- and compared to the alternative of storing this sensitive configuration data in a repo or baked into the image -- I'm confident it's one of the better ways of handling this data. Now, that being said, we are currently looking at using Hashicorp's new Vault software for deploying configuration secrets to replace this "shared" encrypted secret shell script container (say that five times fast). We are thinking that Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words, we haven't run across many problems with a very similar situation we've been using for about a year, but we are now looking at using an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
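For illustration only (my own sketch, not something from this answer): with a modern Vault CLI and a KV secrets engine, fetching a single value into an environment variable could look like this, where the path and field names are placeholders:
export MONGO_PASS="$(vault kv get -field=password secret/myapp/mongo)"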
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
items=$(credstash -t $CREDENTIAL_STORE getall -f json | jq 'to_entries | .[]' -r)
keys=$(echo $items | jq .key -r)
for key in $keys
do
export $key=$(echo $items | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
done
fi
exec "$@"
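For reference, a sketch of how a value could be stored so that the entrypoint above picks it up (the table name is a placeholder; the value is base64-encoded to match the assumption noted in the script):
credstash -t my-credential-store put MONGO_PASS "$(printf '%s' 'Top-Secret-Dont-Tell-Anyone' | base64)"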

Specify private key in SSH as string

I can connect to a server via SSH using the -i option to specify the private key:
ssh -i ~/.ssh/id_dsa user@hostname
I am creating a script that takes the id_dsa text from the database but I am not sure how I can give that string to SSH. I would need something like:
ssh --option $STRING user@hostname
Where $STRING contains the value of id_dsa. I need to know the --option if there is one.
Try the following:
echo $KEY | ssh -i /dev/stdin username@host command
The key doesn't appear in ps output, but because stdin is redirected this is only useful for single commands or tunnels.
There is no such switch - as it would leak sensitive information. If there were, anyone could get your private key by doing a simple ps command.
EDIT: (because of the added details in the comment)
You really should store the key in a temporary file. Make sure you set the permissions correctly before writing to the file, if you do not use a command like mktemp to create the temporary file.
Make sure you run the broker (or agent in the case of OpenSSH) process and load the key using <whatever command you use to fetch it from the database> | ssh-add -
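A sketch of that approach, where fetch-key-from-db stands in for whatever command actually retrieves the key (it is not a real command):
eval "$(ssh-agent -s)"            # start an agent for this shell
fetch-key-from-db | ssh-add -     # load the key straight from the database, never touching disk
ssh user@hostname                 # the agent now supplies the key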
Passing a crypto key as a string is not advisable, but for the sake of the question: I came across the same situation, where I needed to pass the key as a string in a script. I could have used a key stored in a file too, but the nature of the script is to be very flexible, so containing everything in itself was a requirement. So I assigned the key to a variable, passed it, and echoed it as follows:
#!/bin/bash
KEY="${ YOUR SSH KEY HERE INSIDE }"
echo "${KEY}" | ssh -q -i /dev/stdin username#IP 'hostnamectl'
exit 0
Notes:
-q suppresses all warnings
By the way, the catch in the above script is that since we are using echo, it will print the ssh key, which again is not recommended. To hide that, you can pipe through a grep for something that will not be printed, while stdin will still carry the value from the echo. So the final command can be modified as follows:
#!/bin/bash
KEY="${ YOUR SSH KEY HERE INSIDE }"
echo "${KEY}" | grep -qw "less" | ssh -q -i /dev/stdin username#IP 'hostnamectl'
exit 0
This worked for me.
I was looking at the same problem. Adding the private key content to the ssh command via stdin did not work for me. I found out that it's possible to add the private key file contents to ssh-agent using the command ssh-add. This will let you ssh into the remote host without explicitly specifying the identity file. My particular use case was that I didn't want to store the SSH key in cleartext on my machine and was dynamically getting it from a secrets vault. This answer is mostly a collection of other answers on StackOverflow.
ssh-agent is a program to hold private keys used for public key
authentication. Through use of environment variables the agent can
be located and automatically used for authentication when logging
in to other machines using ssh
Source
This is what I have done.
First start the ssh-agent.
You can start it from your terminal by simply executing ssh-agent.
OPTIONAL: If you'd like to make sure ssh-agent is running on every login, you can add something like the following to your shell config.
This is what I have added to my ~/.bashrc file.
# set SSH_AUTH_SOCK env var to a fixed value
export SSH_AUTH_SOCK=~/.ssh/ssh-agent.sock
# test whether $SSH_AUTH_SOCK is valid
ssh-add -l 2>/dev/null >/dev/null
# if not valid, then start ssh-agent using $SSH_AUTH_SOCK
[ $? -ge 2 ] && ssh-agent -a "$SSH_AUTH_SOCK" >/dev/null
Source
(This particular snippet also makes sure new ssh-agent processes are not getting created when there's one already running.)
Now you have the ssh-agent running.
Since we're interested in loading the SSH key as a string, I'll assume a scenario where the private key contents have already been loaded into a variable, $SSH_PRIVATE_KEY.
I can now add the key contents to the ssh-agent by executing the following command.
ssh-add - <<< "${SSH_PRIVATE_KEY}"
This can just be added to the bashrc file as well.
You can confirm that your key has been added by listing all keys with ssh-add -l. Aaand you're done now.
Try connecting to the remote host and you don't need a private key file.
ssh username@hostname
This does come with extra security risks. These are some I could think of:
Adding the private key to the ssh-agent will let any process on the machine access the key to authenticate remote hosts without explicitly providing any information.
Since the goal is to load the private key as a string, it will either be stored in a variable or have its contents embedded directly in the command. This might make the key available in command history, shell variables, and other places.
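One way to limit that exposure (not covered in the original answer) is to load the key with a lifetime and remove it as soon as it is no longer needed; -t and -D are standard ssh-add flags:
ssh-add -t 3600 - <<< "${SSH_PRIVATE_KEY}"   # the key expires from the agent after one hour
ssh-add -D                                   # or remove all loaded identities immediately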
