I am having an issue with Ansible attempting to decode the incorrect vault when passing a command in.
The setup is as follows:
My ansible.cfg
[defaults]
transport=smart
hostfile=./hosts
host_key_checking=False
timeout=5
[ssh_connection]
ssh_args = ""
My hostfile:
[BigIP-Devices]
BigIP-1
BigIP-2
[Cisco-Devices]
Cisco-1
Cisco-2
TEST1
My group_vars:
BigIP-Devices < This is a vault encrypted with bigip-vaultkey
Cisco-Devices < This is a vault encrypted with cisco-vaultkey
bigip-vaultkey
cisco-vaultkey
Both the vaults are like the following with different details for each:
---
ansible_ssh_user: user
ansible_ssh_pass: password
I am trying to use the following command:
ansible -c ssh Cisco-Devices --vault-password-file ./group_vars/cisco-vaultkey --limit TEST1 -m raw -a "show version"
Even though it's calling Cisco-Devices in the command, I get the following error:
ERROR! Decryption failed on /home/users/ansible/device-access/group_vars/BigIP-Devices
However, if I move the BigIP files out of group_vars, it works correctly.
Anyone have any ideas?
Many thanks in advance for your help!!!
It seems that this is expected behaviour.
All host patterns and limits are applied after the full inventory is parsed.
In your case Ansible discovers the BigIP-Devices and Cisco-Devices groups and tries to load the corresponding group variables.
If you never execute your playbook on BigIP-Devices and Cisco-Devices at the same time, you probably want to separate them into different inventories like:
./inventories/
./inventories/bigip/hosts
./inventories/bigip/group_vars/all/BigIP-Devices
./inventories/cisco/hosts
./inventories/cisco/group_vars/all/Cisco-Devices
and select the required inventory with -i inventories/bigip (or -i inventories/cisco).
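For example, the original ad-hoc command would then become something like this (a sketch; where you keep cisco-vaultkey in the new layout is up to you, here it is assumed to sit next to the inventories directory):
ansible Cisco-Devices -c ssh -i inventories/cisco --vault-password-file ./cisco-vaultkey --limit TEST1 -m raw -a "show version"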
P.S. Or use the same vault password for all files.
Related
I have an Ansible playbook that I run from the command line below, and it works fine.
ansible-playbook -e 'host_key_checking=False' -e 'num_serial=10' test.yml -u golden
It works on the hosts specified in the /etc/ansible/hosts file. But is there any way to pass hostnames directly on the command line, or to generate a new file with one hostname per line, so that my playbook works on those hostnames instead of on the default /etc/ansible/hosts file?
Below is my ansible file:
# This will copy files
---
- hosts: servers
  serial: "{{ num_serial }}"
  tasks:
    - name: copy files to server
      shell: "(ssh -o StrictHostKeyChecking=no abc.host.com 'ls -1 /var/lib/workspace/data/*' | parallel -j20 'scp -o StrictHostKeyChecking=no abc.host.com:{} /data/holder/files/procs/')"
    - name: sleep for 3 sec
      pause: seconds=3
Now I want to generate a new file that lists all the servers line by line and have my playbook work on that file instead. Is this possible?
I am running Ansible version 2.6.3.
The question has probably been answered already, but I will answer again to add a few more points.
The command line help is always a good first stop for information about arguments:
ansible-playbook --help | grep inventory
-i INVENTORY, --inventory=INVENTORY, --inventory-file=INVENTORY
specify inventory host path or comma separated host
list. --inventory-file is deprecated
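For example, hostnames can be passed directly as a comma-separated list (placeholders shown; note that hosts passed this way only land in the implicit all group, which is why a file with a [servers] group is suggested in the EDIT further down):
# single host: keep the trailing comma
ansible-playbook -i host1.example.com, -e 'num_serial=10' test.yml -u golden
# multiple hosts
ansible-playbook -i host1.example.com,host2.example.com -e 'num_serial=10' test.yml -u golden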
Ansible supports file-based inventories in two formats:
yml
ini (specifying the ini extension is not mandatory)
The inventory documentation provides more info on the formats and should be consulted before choosing one to implement.
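For illustration, a minimal inventory in each format might look like this (hostnames and the servers group are placeholders matching the question's playbook):
# hosts (ini format)
[servers]
host1.example.com
host2.example.com

# hosts.yml (yaml format)
all:
  children:
    servers:
      hosts:
        host1.example.com:
        host2.example.com: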
Adding @HermanTheGermanHesse's answer here so that all the possible points are covered.
If none of the above is used, Ansible will finally fall back to ansible.cfg for the hosts and variable definitions:
[defaults]
inventory = path/to/hosts
From here:
The ansible.cfg file will be chosen in this order:
ANSIBLE_CONFIG environment variable
./ansible.cfg
~/.ansible.cfg
/etc/ansible/ansible.cfg
You can use the -i flag to specify the inventory to use. For example:
ansible-playbook -i hosts play.yml
A way to specify the inventory file to use is to set inventory in the ansible.cfg-file as such:
[defaults]
inventory = path/to/hosts
From here:
The ansible.cfg file will be chosen in this order:
ANSIBLE_CONFIG environment variable
./ansible.cfg
~/.ansible.cfg
/etc/ansible/ansible.cfg
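For example, the environment variable variant can be set per invocation (the path is a placeholder):
ANSIBLE_CONFIG=/path/to/ansible.cfg ansible-playbook test.yml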
EDIT
From your comment:
[WARNING]: Could not match supplied host pattern, ignoring: servers PLAY [servers]
It seems that Ansible doesn't recognize hosts passed with the -i flag as belonging to a group. Since you mentioned in chat that you generate a list with the passed hosts, I'd suggest creating a file where the list of passed hosts is placed under a group called [servers] and passing its path with the -i flag.
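For example, a generated file (the name and hostnames are placeholders) could look like this:
# generated_hosts
[servers]
host1.example.com
host2.example.com
and then be passed straight to the playbook:
ansible-playbook -i generated_hosts -e 'num_serial=10' test.yml -u golden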
Context
In my company, we have a shared repository containing our Ansible scripts for our servers. I would like to introduce vault variables to handle service passwords in the near future. For now, we use encrypted password prompts at the beginning of our playbooks. This solution is annoying (it asks for 3 passwords on some playbooks).
Need
As most of our ansible users are no experts, I would like them to be able to run playbooks smoothly. It means ansible-playbook commands should be short and work without any mandatory parameters (e.g. no --ask-sudo-pass and whatnot). It also means I prefer prompting for vault password at the beginning of a playbook only when it’s needed.
Moreover, I do not want to use a vault password file, because it is not easily shareable and I don't like the idea of having a cleartext password file on all our Ansible users' computers.
Problem
Adding --ask-vault-pass for every concerned playbook is not an option. Our Ansible users will not understand why this parameter is sometimes needed and sometimes not. On the contrary, asking for the vault password on every playbook is a burden, because we have a lot of playbooks and the sudo password is already asked for each time too.
I tried to achieve the following solution, using prompts and various options. Nothing seems to work. The documentation does not explain how to do this:
The best solution (according to me)
Let’s look at this main.yml file:
- import_tasks: foo.yml
  tags: always
- import_tasks: bar.yml
  tags: bar
# Only this task uses a vault encrypted variable
- import_tasks: baz.yml
  tags: [baz, vault]
Now, there’s a playbook.yml, importing this main.yml file.
In a perfect world, I would like this to happen:
example1:
ansible-playbook -i prod playbook.yml
Vault password:
example2:
ansible-playbook -i prod playbook.yml --tags baz
Vault password:
example3:
ansible-playbook -i prod playbook.yml --tags foo
# it runs without asking for vault password, because no tasks needing vault
# password will be run
Question
How can I configure Ansible to ask for the vault password only when it is needed (meaning: every time a vault-encrypted variable is encountered)? Is it even possible? If not, what workaround would be viable given my situation?
Thanks.
I found a workaround.
The idea is to always load all passwords (including sudo) from an ansible vault file, needing only the vault password for each playbook. It means all machines should have the same deployer user password. This is even simpler than before, because there’s a single master password (the vault password) to control them all.
This is how it’s done:
ansible.cfg:
[privilege_escalation]
become_ask_pass = False
become = True
[defaults]
ask_vault_pass = True
vars/vault/env:
_vault:
  sudo: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    BLAHBLAHBLAH
  another_pass: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    ANOTHERENCRYPTEDPASSWORD
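Such !vault blocks can be generated with ansible-vault encrypt_string (a sketch; the plaintext and variable name are placeholders, and the command prompts for the vault password when no password file is given):
ansible-vault encrypt_string 'the-sudo-password' --name 'sudo'
The output can be pasted directly into the vars file above.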
And for each playbook:
# Playbook stuff
# [...]
vars:
  ansible_become_pass: "{{ _vault.sudo }}"
vars_files:
  # env var is set dynamically in the inventory file, so I can have a different password per env.
  # If the file is not found, it loads an empty null file
  - [ "vars/vault/{{ env }}", "vars/null" ]
# [...] Loading other var files
Now, a simple ansible-playbook -i env/prod myPlayBook.yml will only ask for the vault password, no prompt for sudo or anything else. It’s consistent and easy to share (only the encrypted password file and the vault password must be shared).
I have a deployment project that I share with other teams. I have encrypted my secrets with vault.
I would like to encrypt the production file with one password and the staging file with another password, to avoid other teams having access to production secrets.
Is it possible to do that ?
I have done something like this. My secrets:
cat group_vars/all/vault_production.yml (encrypted with password A)
production_password: 'test1'
cat group_vars/all/vault_staging.yml (encrypted with password B)
staging_password: 'test2'
My environments:
cat hosts-production
[all:vars]
env_type=production
cat hosts-staging
[all:vars]
env_type=staging
My script:
- copy:
    content: |
      env PASS={{ hostvars[inventory_hostname][env_type + '_password'] }}
...
And I launch the playbook like this.
# for production
ansible-playbook -i hosts-production test.yml --vault-password-file .password_a
# for staging
ansible-playbook -i hosts-staging test.yml --vault-password-file .password_b
But that doesn't work because there are 2 different passwords (ERROR! Decryption failed).
Do you know how to do that?
Thanks.
BR,
Eric
Multiple vault passwords are supported since Ansible 2.4:
ansible-playbook --vault-id dev@dev-password --vault-id prod@prompt site.yml
If multiple vault passwords are provided, by default Ansible will attempt to decrypt vault content by trying each vault secret in the order they were provided on the command line.
In the above case, the ‘dev’ password will be tried first, then the ‘prod’ password for cases where Ansible doesn’t know which vault id is used to encrypt something.
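Applied to the layout in the question, that could look something like this (a sketch; the vault-id labels are arbitrary, and .password_a / .password_b are the password files from the question):
# production run
ansible-playbook -i hosts-production test.yml --vault-id production@.password_a --vault-id staging@.password_b
# a staging run can pass the same two --vault-id options; Ansible tries each secret until one decrypts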
Sorry, only one vault password was allowed per run at the time of writing (before the --vault-id support in Ansible 2.4 described above). The best way to work around this, in the case where you really only need one or the other, is to dynamically load a vaulted file based on a var; e.g.:
- hosts: localhost
  vars_files:
    - secretstuff-{{ env_type }}.yml
  tasks:
    ...
or
- hosts: localhost
  tasks:
    - include_vars: secretstuff-{{ env_type }}.yml
    ...
depending on whether you need the vars to survive for one play or the entire run (the latter will bring them in as facts instead of play vars).
Consider the case where I want to check something quickly: something that doesn't really need connecting to a host (to check how Ansible itself works, like the inclusion of handlers or something), or where localhost will do. I'd probably have given up on this, but the man page says:
-i PATH, --inventory=PATH
The PATH to the inventory, which defaults to /etc/ansible/hosts. Alternatively, you can use a comma-separated
list of hosts or a single host with a trailing comma host,.
And when I run ansible-playbook without inventory, it says:
[WARNING]: provided hosts list is empty, only localhost is available
Is there an easy way to run a playbook against no host at all, or perhaps just localhost?
Prerequisites: you need to have an SSH server running on the host (ssh localhost should let you in).
Then if you want to use password authentication (do note the trailing comma):
$ ansible-playbook playbook.yml -i localhost, -k
In this case you also need sshpass.
In case of public key authentication:
$ ansible-playbook playbook.yml -i localhost,
And the test playbook, to get you started:
- hosts: all
  tasks:
    - debug: msg=test
You need to have a comma in the localhost, option argument, because otherwise it would be treated as a path to an inventory. The inventory plugin responsible for parsing the value can be found here.
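If you'd rather not go through SSH at all, the connection type can also be forced to local on the command line (the test playbook above works unchanged):
$ ansible-playbook playbook.yml -i localhost, -c local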
You can define a default inventory containing only localhost.
It is explained here:
https://docs.ansible.com/ansible/latest/reference_appendices/config.html#the-configuration-file
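A minimal sketch of such a setup (the file paths are placeholders):
# ansible.cfg
[defaults]
inventory = ./hosts

# ./hosts
localhost ansible_connection=local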
And in your playbook use this:
- hosts: all
  connection: local
  tasks:
    - debug: msg=test
It will use a local connection, so no SSH is required (thus it doesn't expose your machine). It might be quicker, unless you need to troubleshoot an SSH issue.
Also, for a quicker feedback loop, you can add gather_facts: no since you already know your target.
I ran into a configuration problem when coding an Ansible playbook for SSH private key files. In static Ansible inventories, I can define combinations of host servers, IP addresses, and related SSH private keys - but I have no idea how to define those with dynamic inventories.
For example:
---
- hosts: tag_Name_server1
  gather_facts: no
  roles:
    - role1
- hosts: tag_Name_server2
  gather_facts: no
  roles:
    - roles2
I use the below command to call that playbook:
ansible-playbook test.yml -i ec2.py --private-key ~/.ssh/SSHKEY.pem
My questions are:
How can I define ~/.ssh/SSHKEY.pem in Ansible files rather than on the command line?
Is there a parameter in playbooks (like gather_facts) to define which private keys should be used for which hosts?
If there is no way to define private keys in files, what should be called on the command line when different keys are used for different hosts in the same inventory?
TL;DR: Specify key file in group variable file, since 'tag_Name_server1' is a group.
Note: I'm assuming you're using the EC2 external inventory script. If you're using some other dynamic inventory approach, you might need to tweak this solution.
This is an issue I've been struggling with, on and off, for months, and I've finally found a solution, thanks to Brian Coca's suggestion here. The trick is to use Ansible's group variable mechanisms to automatically pass along the correct SSH key file for the machine you're working with.
The EC2 inventory script automatically sets up various groups that you can use to refer to hosts. You're using this in your playbook: in the first play, you're telling Ansible to apply 'role1' to the entire 'tag_Name_server1' group. We want to direct Ansible to use a specific SSH key for any host in the 'tag_Name_server1' group, which is where group variable files come in.
Assuming that your playbook is located in the 'my-playbooks' directory, create files for each group under the 'group_vars' directory:
my-playbooks
|-- test.yml
+-- group_vars
|-- tag_Name_server1.yml
+-- tag_Name_server2.yml
Now, any time you refer to these groups in a playbook, Ansible will check the appropriate files, and load any variables you've defined there.
Within each group var file, we can specify the key file to use for connecting to hosts in the group:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: /path/to/ssh/key/server1.pem
Now, when you run your playbook, it should automatically pick up the right keys!
Using environment vars for portability
I often run playbooks on many different servers (local, remote build server, etc.), so I like to parameterize things. Rather than using a fixed path, I have an environment variable called SSH_KEYDIR that points to the directory where the SSH keys are stored.
In this case, my group vars files look like this, instead:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/server1.pem"
Further Improvements
There's probably a bunch of neat ways this could be improved. For one thing, you still need to manually specify which key to use for each group. Since the EC2 inventory script includes details about the keypair used for each server, there's probably a way to get the key name directly from the script itself. In that case, you could supply the directory the keys are located in (as above), and have it choose the correct keys based on the inventory data.
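A hedged sketch of that idea, assuming the EC2 inventory script exposes ec2_key_name as a host variable (the same variable the patch-based answer below relies on) and that key files are named after the keypair:
# group_vars/tag_Name_server1.yml (sketch)
---
ansible_ssh_private_key_file: "{{ lookup('env', 'SSH_KEYDIR') }}/{{ ec2_key_name }}.pem"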
The best solution I could find for this problem is to specify the private key file in ansible.cfg (I usually keep it in the same folder as the playbook):
[defaults]
inventory=ec2.py
vault_password_file = ~/.vault_pass.txt
host_key_checking = False
private_key_file = /Users/eric/.ssh/secret_key_rsa
Though, it still sets the private key globally for all hosts in the playbook.
Note: you have to specify the full path to the key file; ~user/.ssh/some_key_rsa is silently ignored.
You can simply define the key to use directly when running the command:
# -vvvv          Super verbose output incl. SSH details
# -i             The server to target (keep the trailing comma!)
# --private-key  Define the key to use
# -e             Setting ansible_python_interpreter is needed if `python` is not available
# --check        Dry run
ansible-playbook -vvvv \
  -i "000.000.0.000," \
  --private-key=~/.ssh/id_rsa_ansible \
  -e 'ansible_python_interpreter=/usr/bin/python3' \
  --check \
  deploy.yml
Copy/paste:
ansible-playbook -vvvv --private-key=/Users/you/.ssh/your_key deploy.yml
I'm using the following configuration:
# site.yml:
- name: Example play
  hosts: all
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_private_key_file: "/home/ansible/.ssh/id_rsa"
I had a similar issue and solved it with a patch to ec2.py plus some configuration parameters added to ec2.ini. The patch takes the value of ec2_key_name, prefixes it with ssh_key_path, appends ssh_key_suffix, and writes out ansible_ssh_private_key_file as this value.
The following variables have to be added to ec2.ini in a new 'ssh' section (this is optional if the defaults match your environment):
[ssh]
# Set the path and suffix for the ssh keys
ssh_key_path = ~/.ssh
ssh_key_suffix = .pem
Here is the patch for ec2.py:
204a205,206
> 'ssh_key_path': '~/.ssh',
> 'ssh_key_suffix': '.pem',
422a425,428
> # SSH key setup
> self.ssh_key_path = os.path.expanduser(config.get('ssh', 'ssh_key_path'))
> self.ssh_key_suffix = config.get('ssh', 'ssh_key_suffix')
>
1490a1497
> instance_vars["ansible_ssh_private_key_file"] = os.path.join(self.ssh_key_path, instance_vars["ec2_key_name"] + self.ssh_key_suffix)