Using CLI arguments to pass secrets is generally frowned upon: it exposes the secrets to other processes (ps aux) and potentially stores them in the shell history.
Is there a way to create Kubernetes secrets using kubectl that is not exposing the secret as described? I.e. a way to do this interactively?
kubectl create secret generic mysecret --from-literal key=token
You can create it from a file instead, e.g.
kubectl create secret generic mysecret --from-file=key=name-of-file.txt
This prevents the secret text from appearing on the command line, but it still tells anyone looking through your history where to find the secret text.
Also, if you put a space at the start of the line, it does not get added to the shell history (in bash this requires HISTCONTROL to be set to ignorespace or ignoreboth):
kubectl create secret generic mysecret --from-literal....
vs
 kubectl create secret generic mysecret --from-literal....
(with a space at the start)
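If you want the secret to never show up on the command line or in the history at all, one option (a sketch, assuming a bash-like shell; the variable name SECRET_VALUE is just an example) is to read the value interactively and feed it to kubectl on stdin:
# read the value without echoing it; nothing lands in the history or in `ps` output
read -s -p "Secret value: " SECRET_VALUE && echo
# kubectl reads the key's value from its own stdin via /dev/stdin
printf '%s' "$SECRET_VALUE" | kubectl create secret generic mysecret --from-file=key=/dev/stdin
unset SECRET_VALUE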
My helm install was giving some errors and I used this command to try to fix it. Now my cluster is gone
kubectl config view --raw >~/.kube/config
The error is:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
You can't reverse it. You have basically overwritten the existing file, and the only way to get it back is to restore it from a backup.
See this answer for more details on why it happened: https://stackoverflow.com/a/6696881/3066081
It boils down to the way bash handles shell redirections: > truncates the file before kubectl has a chance to read it, so kubectl sees an empty kubeconfig and outputs the following YAML:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
localhost:8080 is the default address kubectl tries to connect to when the kubeconfig is empty.
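To see the truncation behaviour in isolation, here is a tiny illustration (a hypothetical demo.txt, safe to try in a scratch directory):
echo "original content" > demo.txt
cat demo.txt > demo.txt   # the shell truncates demo.txt before cat ever reads it
cat demo.txt              # prints nothing - the original content is gone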
As for the proper way of doing this in the future, you should:
Back up your current ~/.kube/config to ~/.kube/config.bak, just in case:
cp ~/.kube/config{,.bak}
Create a new, temporary file containing the output of your command, e.g.
kubectl config view --raw > raw_config
Replace the kubeconfig with the new one:
mv raw_config ~/.kube/config
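The same idea as a one-liner, as a sketch (assuming mktemp is available):
tmp=$(mktemp) && kubectl config view --raw > "$tmp" && mv "$tmp" ~/.kube/config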
I have the ability to encrypt variables using another mechanism (the Azure Pipelines secret variable feature), so I would like to save the ansible-vault password there (in the Azure pipeline) and pass it to the playbook execution as an extra var.
May I know if this can be done?
An example of what I'm expecting:
ansible-playbook --extra-vars "vault-password=${pipelinevariable}"
The vault password cannot be passed as an extra var. There are several ways to provide it, which are all covered in the documentation:
Providing vault password section in the general vault documentation.
Using vault in playbooks
Very basically, your options are:
providing it interactively by passing the --ask-vault-pass option
reading it from a file (static or executable) by either:
providing the --vault-password-file /path/to/vault option on the command line
setting the ANSIBLE_VAULT_PASSWORD_FILE environment variable (e.g. export ANSIBLE_VAULT_PASSWORD_FILE=/path/to/vault).
There is much more to learn in the above docs, especially how to use several vault passwords with IDs and how to use a client script to retrieve the password from a key store...
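For example, one way to bridge this with a CI secret (a sketch; the script name vault-pass.sh and the ANSIBLE_VAULT_PASS variable are made up here) is an executable password file that simply prints the password it reads from the environment:
#!/bin/bash
# vault-pass.sh (hypothetical): Ansible runs this executable file and uses its stdout as the vault password.
# The password itself comes from an environment variable set by the CI system.
echo "${ANSIBLE_VAULT_PASS:?ANSIBLE_VAULT_PASS is not set}"
It can then be used with something like ansible-playbook site.yml --vault-password-file ./vault-pass.sh (the file must be executable, e.g. chmod +x vault-pass.sh).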
Although this doesn't use extra vars, I believe it fulfills what you were trying to do:
Optional/one-time only: ask for the password and set it as an environment variable:
read -s ansible_vault_pass && export ansible_vault_pass
Now use that variable in your ansible command:
ansible-playbook your-playbook.yml --vault-password-file <(cat <<<"$ansible_vault_pass")
Credit for, and an explanation of, the <(cat <<<"") technique can be found in this other Stack Overflow answer: Forcing cURL to get a password from the environment.
May I know if this can be done?
I'm not familiar with Ansible Vault, but you have at least two directions based on the documents shared by Zeitounator.
1. Use a CMD task first to create a vault-password-file with plain-text content. (I'm not sure if the vault-password-file can be created this way; it might not work.)
(echo $(SecretVariableName)>xxx.txt)
Then you may use the newly created xxx.txt file as the input to ansible-playbook --vault-password-file /path/to/my/xxx.txt xxx.yml (see the sketch after this list).
2. Create a corresponding vault-password-file before running the pipeline and add it to version control (the same source repo as your current pipeline).
Then you can use ansible-playbook --vault-password-file easily whenever the vault-password-file is available. You can also store the password file in a private GitHub repo, fetch the repo via git clone https://{userName}:{userPassword}@github.com/xxx/{RepoName}.git, and copy the needed password file to the directory where you run the ansible-playbook commands via a Copy Files task. This direction should work regardless of whether direction 1 is supported.
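A rough sketch of direction 1, assuming a Bash task and that the pipeline secret is mapped to an environment variable (called VAULT_PASS here):
# write the pipeline secret to a temporary password file
echo "$VAULT_PASS" > vault_pass.txt
ansible-playbook --vault-password-file vault_pass.txt site.yml
# remove the plain-text password file once the playbook has run
rm -f vault_pass.txt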
I'm new to Open Distro for Elasticsearch and trying to run it on a Kubernetes cluster. After deploying the cluster, I need to change the password for the admin user.
I went through this post - default-password-reset
I learned that, to change the password, I need to do the following steps:
exec into one of the master nodes
generate a hash for the new password using /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh script
update /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml with the new hash
run /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh with parameters
Questions:
Is there any way to set those (via env or elasticsearch.yml) while bootstrapping the cluster?
I had to recreate the internal_users.yml file with the updated password hashes and mount it at /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml in the database pods.
So, when the Elasticsearch nodes bootstrapped, they came up with the updated passwords for the default users (i.e. admin).
I used the bcrypt Go package to generate the password hashes.
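For example (a sketch; the secret name internal-users is made up), the regenerated file can be shipped as a Kubernetes secret and mounted over the plugin's securityconfig path:
# create a secret from the regenerated file
kubectl create secret generic internal-users --from-file=internal_users.yml=./internal_users.yml
# then mount that secret in the Elasticsearch pod spec at
#   /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
# (e.g. as a subPath volume mount), so the nodes bootstrap with the updated hashes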
docker exec -ti ELASTIC_MASTER bash
/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh
# enter the new password when prompted and copy the generated hash
yum install nano
# replace the existing hash in internal_users.yml with the newly generated one
nano /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
# run securityadmin.sh for the change to take effect
sh /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert config/root-ca.pem -cert config/admin.pem -key config/admin-key.pem
You can also execute the commands below to obtain the username and password values from your Kubernetes cluster:
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.username | base64decode}}'
kubectl get secret -n wazuh elastic-cred -o go-template='{{.data.password | base64decode}}'
Note: '-n wazuh' indicates the namespace; use whatever applies to you.
Ref: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html
I have an EC2 ASG on AWS and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket and having it downloaded and run upon instantiation. But it all feels a little rickety, even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script itself while at rest in the S3 bucket using S3 Server Side Encryption (because the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, resulting in my AWS creds being baked into the instance as ENV vars
Uploaded and encrypted my Instance-Init.sh script to S3, resulting in a private endpoint like so: https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs Python (required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use v2 self-hosted/private, s3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling that you describe when first treading this path, but after almost a year now of doing it -- and compared to the alternative of storing this sensitive configuration data in a repo or baked into the image -- I'm confident it's one of the better ways of handling this data.
Now, that being said, we are currently looking at using Hashicorp's new Vault software for deploying configuration secrets to replace this "shared" encrypted secret shell script container (say that five times fast). We are thinking that Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words, we haven't run across many problems with a very similar situation we've been using for about a year, but we are now looking at using an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
  items=$(credstash -t "$CREDENTIAL_STORE" getall -f json | jq -r 'to_entries | .[]')
  keys=$(echo "$items" | jq -r .key)
  for key in $keys
  do
    # decode each value and export it as an environment variable for the application
    export "$key"="$(echo "$items" | jq -r 'select(.key=="'"$key"'") | .value' | base64 --decode)"
  done
fi
# hand over to the container's command
exec "$@"
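For example, a container using this entrypoint could be started like this (a sketch; the image name, table name and command are placeholders):
docker run --rm \
  -e CREDENTIAL_STORE=my-app-credstash-table \
  -e AWS_DEFAULT_REGION=us-east-1 \
  myorg/myapp:latest python app.py
# AWS credentials for credstash come from the instance's IAM role (or can be
# passed as environment variables in non-AWS/dev environments)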
I've started an EC2 instance and installed the ec2-api-tools. Environment variables (JAVA_HOME, EC2_PRIVATE_KEY, EC2_CERT) are set up.
Running ec2-describe-instances doesn't return anything. According to the EC2 command line reference, information on all currently running (and terminated) instances should be returned. What's going wrong?
In general, ec2-describe-images -o self -o amazon works, so the EC2 tools are set up correctly. Explicitly adding the -K and -C parameters to ec2-describe-instances doesn't change the situation.
A little more detail:
You don't need to set the EC2_URL environment variable directly. You can use the friendlier command-line option:
--region eu-west-1
(substituting the name of the region you want to address).
This way you don't need to look up the region's URL endpoint.
Here are the EC2 Command Line API Tools general options where this is explained.
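For example (assuming your instances are in eu-west-1):
ec2-describe-instances --region eu-west-1
# or, equivalently, point the tools at the regional endpoint
export EC2_URL=https://ec2.eu-west-1.amazonaws.com
ec2-describe-instances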
If all your instances are in eu-west-1, configure your AWS CLI to use this region by default.
Just type: aws configure
You'll be prompted to enter your credentials, and then you can set the region.
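A non-interactive alternative (a sketch using the modern AWS CLI; the --query filter is just an example):
aws configure set region eu-west-1
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId'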